Uncompressing a multi-part 7zip file in Debian


7zip is a popular compression program for Windows. It lets you compress files effectively, split them into several archives, and protect them with a password. This all works fine as long as you are a Windows user. If you now want to extract such a multi-part, password-protected file on Linux, you'll find out that this isn't a standard use case. Uncompressing these files involves some work: the developer does not provide 7zip for Linux, and gzip or zip won't work with 7zip-compressed files. But an unofficial version is available, and it is possible to extract 7zip files in Debian/Linux.

You have some options for installing 7zip on Debian, like apt or compiling from source. The version you get with apt is quite old: 9.2. In case the version of 7zip used to compress the file on Windows is newer than the one available for Debian, uncompressing may not work: the archive may use an algorithm that the older version does not know. In that case, 7zr will exit with the error Unsupported Method.

Compilation from source

This option gives you the latest available version of 7zip for Linux. It is especially useful when you try to unzip a file and get the message Unsupported Method. To solve this, install a newer version of p7zip by downloading the source and compiling it yourself.

Get the latest version of p7zip from SourceForge, unpack it, and run make. After the compilation is done, you'll have the executable 7za in the bin folder. This version should be able to handle files compressed by 7zip for Windows. Make sure to read the README.

Copy the correct makefile: p7zip ships several makefiles, one per target platform / architecture. For Linux, the default one should work. To start compilation, a simple make is sufficient.
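A minimal sketch of these steps; the version number is only an example, check the SourceForge download page for the current release:

wget https://downloads.sourceforge.net/project/p7zip/p7zip/16.02/p7zip_16.02_src_all.tar.bz2
tar xjf p7zip_16.02_src_all.tar.bz2
cd p7zip_16.02
make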


This gives you the binary ./bin/7za

Unzip a multi-part, password-protected file.

7za x h1.7z


Installation via apt

Install the 7zip package for Debian. This installs version 9.2.

sudo apt-get install p7zip

Let's say we have one file that was compressed into h1.7z using 7zip and split into 650 MB volumes. 7zip produces 2 archives:

  • h1.7z.001
  • h1.7z.002
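For reference, such an archive set can be produced with the 7-Zip command line on Windows. A sketch, not the exact command used here: -v650m splits the archive into 650 MB volumes, -p prompts for a password, and <file> stands for the data to compress.

7z a -v650m -p h1.7z <file>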

To list the archive:

7zr l h1.7z.001 -tsplit

We can see that the split archives contain one file named h1.7z. That is the archive created by 7zip under Windows.

To unzip the file, use

7zr x h1.7z.001 -tsplit
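Putting it together: the first command joins the split volumes into h1.7z, and a second extraction unpacks the actual archive, prompting for the password if one was set:

7zr x h1.7z.001 -tsplit
7za x h1.7z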

Adjust image size of Docker qcow2 file


Short version

Increase image size by 100GB:

qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +100G

Resize partition:

qemu-system-x86_64 -drive file=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2  -m 512 -cdrom ~/Downloads/gparted-live-0.30.0-1-amd64.iso -boot d -device usb-mouse -usb

Get an empty Docker.qcow2 image from my GitHub page and make your Docker use it:


How to adjust the Docker image size for using large containers like SAP NetWeaver ABAP

Docker uses an image file to store Docker containers. The file is named Docker.qcow2 and is located (on Mac) at:

~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
By default, the file can grow to a size of 64 GB.

When you first start Docker, the size of this image is around 1.4 GB. As you add containers, images, etc., it will grow to up to 64 GB.

The 64GB default size can be seen when using qemu-img info:

qemu-img info ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
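The output looks roughly like this (the disk size depends on what you already stored; the virtual size line shows the 64 GB limit):

image: Docker.qcow2
file format: qcow2
virtual size: 64G (68719476736 bytes)
disk size: 1.4G
cluster_size: 65536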

When this limit is reached, Docker should automatically increase the size of the image, but this doesn't always work. As a result, when the image is at 64 GB, you can get an error message stating that the device is full:

no space left on device

At least with my Dockerfile for SAP NetWeaver ABAP Developer Edition, Docker is not increasing the image file dynamically. Because of this I had to split the automatic installation process into two parts: base image setup and installation. I guess the SAP installation is filling up space faster than Docker can react.

The Docker.qcow2 file is a VM disk. Therefore, it can be manipulated like any other virtual disk: you can increase the disk size and access files within the VM disk by mounting the image in a VM. An easy way to change the disk space Docker has available for storing images and containers is to increase the disk size. This can be done using Qemu and GParted.


Locate qcow2 on your Computer

Click on Open in Finder. Finder opens at the specified location.

Shut down Docker.

Make a backup of the Docker.qcow2 file.

Install QEMU

To install qemu, use brew on Mac.

brew install qemu

Now Qemu should be installed.

Download GParted

Download the x64 GParted ISO image from their web site:


Resize Docker.qcow2

Resizing the Docker.qcow2 file to a new size consists of two steps.

  1. Make the disk larger
  2. Adjust the partition

Increase disk size

First, let's make the disk larger. SAP occupies a lot of space, so make sure you add enough GB to the image. An additional 100 GB should do it.

qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +100G

Output is a simple status message.

Image resized.

Adjust partition table

To resize the partition, start Qemu, use the GParted ISO image as boot medium and attach the Docker.qcow2 disk.

qemu-system-x86_64 -drive file=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2  -m 512 -cdrom ~/Downloads/gparted-live-0.30.0-1-amd64.iso -boot d -device usb-mouse -usb

I got some error messages, but Qemu started.

Starting the virtual machine will take some time. Be patient. Next you’ll have to configure the GParted ISO image.

The default values should be fine. This gives you a keyboard, mouse, English and X. After that, GParted starts and you should see the Docker.qcow2 disk in the GParted app.

Select the partition and click on Resize / Move. In the New size (MiB) field, enter the new size of the disk you need. The disk space is allocated dynamically and won't immediately occupy space on your physical disk, so don't be shy: assign all free space to the partition.

Click on Resize/Move and then on the Apply button.

Last chance to stop. But as you need the new free space for Docker, click again on Apply.

The partition will be resized. In case something goes wrong, please restore the backup of the Docker.qcow2 file you made previously.

After the operation finishes, you can see that the partition now offers 164 GB.

Shut down the VM. As the changed Docker.qcow2 file is the original one used by Docker, you only have to restart Docker to benefit from the new image size. Now you can use Docker to run SAP NetWeaver ABAP with just one command. As the Docker.qcow2 file is mostly empty, even when the image size is reported as 4 GB, compressed (zipped) it is just a few MB.

With the new Docker disk file you can even start SAP NetWeaver ABAP without getting the “no space left on device” message.

Image creation works. The space occupied by the SAP NetWeaver ABAP image alone is already 65 GB.

Start a container

docker run -P -h vhcalnplci --name nwabap751 -it nwabap:latest /bin/bash

In Kitematic



Change to user npladm

su - npladm

Problem with starting SAP

When you log in to your container and run startsap, the program will fail, reporting that no instance profiles were found.


Take a look at the available profiles.

ls -1 /sapmnt/NPL/profile/
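Judging from the rename commands below, the listing will show profiles still carrying the generated hostname, something like:

NPL_ASCS01_4f6e4ee4de40
NPL_D00_4f6e4ee4de40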

During the installation, the installation script created the profile files using the container's generated hostname 4f6e4ee4de40 as a dummy name. When starting the container, we specified a different hostname: vhcalnplci. Of course, these do not match, which makes startsap fail.

Let’s adjust the instance profile configuration.

  1. Rename the profile files
  2. Substitute references to the old hostname with the correct one, vhcalnplci, as shown below

mv NPL_ASCS01_4f6e4ee4de40 NPL_ASCS01_vhcalnplci
mv NPL_D00_4f6e4ee4de40 NPL_D00_vhcalnplci
sed -i 's/4f6e4ee4de40/vhcalnplci/g' *

Now run startsap again and it should work. If not, stop and start the container and try again.


xcrun: error: invalid active developer path


A Mac is a nice computer for developing, but macOS and Apple can make your developer life a challenge. After updating Xcode (after all, why have a Mac if you do not develop iOS apps?) it may happen that git stops working.

Running git gives you:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

Usual situation: it worked yesterday, today it is broken, and you did nothing. Besides updating Xcode. The problem occurs easily: when you update Xcode, you normally also have to update the command line tools.

In case the Apple App Store isn’t giving you the option to update the command line tools, run the command

xcode-select --install


xcode-select: note: install requested for command line developer tools

This should either install the command line tools and give you back a working git tool, or let you install the tools manually via the App Store.

After this, git should be working again. Happy coding.


Enable Wake on LAN on Windows 10


To be able to wake up your computer via Wake-on-LAN (WOL), you need to enable this feature both in the BIOS and in the Windows 10 LAN adapter settings.

Configuration: BIOS

Configuration depends on the BIOS of your computer. In my case, Wake on LAN is in the Power On section and disabled by default. To use this feature, just enable it.

Do not forget to save the change.

Configuration: Windows 10

After activating WOL in the BIOS, you need to configure Windows 10 to allow the device to wake the computer. My test computer is a Lenovo Q180 running Windows 10 in German. More information on how to activate WOL for this device can be found here.

Go to the network and adapter properties. Select the LAN adapter and open its properties. In the properties screen, click Configure.

Go to the Energy settings. Check all check boxes.

Now go to the next tab, Advanced. Ensure that Wake on Magic Packet is set to Enabled.

Windows Firewall configuration

To know whether a computer is running, you can use ping. If the computer responds to a ping, it's up and running. To allow ping requests through the standard Windows firewall, ensure that the rule for the file and print service is activated for your network.

There are two network types: private and public. Activating ping for the private network should be sufficient. If you are unsure whether your LAN counts as private or public, you can activate ping requests for both, at least as long as your network is still secured by a router with a firewall.
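To actually wake the machine, send a so-called magic packet to the MAC address of its LAN adapter. A sketch from a Linux machine in the same network, using the wakeonlan tool (the MAC address is a placeholder; use the one of your Windows computer):

sudo apt-get install wakeonlan
wakeonlan 00:11:22:33:44:55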


Block access from country by IP in Apache


In this blog I will show how you can block access to your Apache-hosted internet services, forbidding access from a whole country. Access is blocked based on the IP address of the client. In case of a VPN, where the user connects to a VPN server in another country, the user will still be able to access the site.

The internet is a great tool to ensure freedom of speech. Anyone can raise his/her voice: stay informed about what is happening in the world, let others know about something, share knowledge. You can do so on a social site or by hosting your own site. The ease of access to information, the ability to search it instantly, and the huge amount of information discoverable by a large part of the world's population: this is one of the truly great contributions to really making the world a better place. Some countries don't like this and apply censorship, access restrictions, or worse. And basically, if you decide to block a country from accessing your site, it's one step in the wrong direction.

Why would you block a whole country? Isn't it a great thing about the internet that it's accessible from anywhere in the world, using just a browser? It's not that simple. A few reasons to block a country can be:

  • Legal requirements. Your site is not in compliance with the country's laws. For instance, maybe you are logging too much personal information?
  • The functionality is not meant for that country. You have a commercial service, and are not offering a payment option or a localized version.
  • You are popular in a country and flooded with a lot of requests, but these are just operational overhead for you, as your site is not targeted at these users.
  • If you think hard enough, you can come up with a good reason.

After finding yourself in the situation of having to block a specific country, the question is: how? You can use a blocker in your web platform (e.g. a WordPress plugin), or let Apache do it. Using a .htaccess file for this is not optimal for performance reasons; better is to use a module. A quick Google search reveals that a good option is the GeoLite DB from MaxMind, and they also offer an Apache module. The module works with Apache 2.4, including the HTTPD server available on Amazon AMI images.

Some references to the projects used to set up the country blocking:


Steps for using GeoLite2 DB for blocking countries in Apache

  1. Download GeoLite2 DB
  2. Install dependencies
  3. Install Apache module
  4. Configuration
  5. Activation

1. Download GeoLite2 database

The GeoLite2 DB is available under a free and a commercial license. The free version should be good enough for a private blog. You can get it from the MaxMind site.

Select GeoLite2 Country and the binary format. Download the file using wget.

wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz

Unzip the file.

tar zxvf GeoLite2-Country.tar.gz

The actual DB file is close to 3 MB in size.

Copy it to a directory where the Apache user can find it. A good default location is /usr/local/share, in a new directory named GeoIP.

sudo mkdir /usr/local/share/GeoIP
sudo cp /home/ec2-user/geolite2db/GeoLite2-Country_20170704/GeoLite2-Country.mmdb /usr/local/share/GeoIP/

2. Install dependencies

Install libmaxminddb

For the Apache module to work, the C library libmaxminddb must be installed. This can be done using yum.

sudo yum install libmaxminddb.x86_64 libmaxminddb-devel.x86_64

HTTPD devel files

Another dependency is the HTTPD development files. These can also easily be installed using yum.

sudo yum install httpd24-devel.x86_64

3. Install Apache module

The Apache module is available as source code on GitHub. For installation, download the latest release from GitHub; in my case, the latest release was version 1.1.0. Download the tar file.

Download the release to Linux using wget and unpack it.

wget https://github.com/maxmind/mod_maxminddb/releases/download/1.1.0/mod_maxminddb-1.1.0.tar.gz
tar zxvf mod_maxminddb-1.1.0.tar.gz

Now you can configure, compile and install the module. To do so, run

./configure
sudo make install

This should compile the module and put the files into the right HTTPD directory. If an error occurs during configuration, compilation or installation, look at the error message and good luck.

The directive to load the new module is automatically added to the file /etc/httpd/conf/httpd.conf.

To test that the module can be loaded, restart HTTPD.

sudo service httpd restart

The service needs to start without error. This indicates that the module was successfully loaded. To validate this, check if the new module is actually loaded by HTTPD. To do so, list all loaded modules.

sudo httpd -M

Search for the maxmind module:

maxminddb_module (shared)

The new module is correctly loaded by HTTPD. Now we can configure Apache to make use of the module.

4. Configuration

Edit the HTTPD config file and add the directives to block a specific country. The GitHub site of MaxMind contains an example that serves as a very good starting point.

MaxMindDBEnable On
MaxMindDBFile DB /usr/local/share/GeoIP/GeoLite2-Country.mmdb
MaxMindDBEnv MM_COUNTRY_CODE DB/country/iso_code
SetEnvIf MM_COUNTRY_CODE ^(RU|DE|FR) BlockCountry
Deny from env=BlockCountry

Using the above example, let's adjust it to block Brazil. Don't worry, I won't actually block Brazil; this is just a test, as my IP currently is a Brazilian one, making it easier for me to test the setup. To block Brazil, check if MM_COUNTRY_CODE starts with BR: SetEnvIf MM_COUNTRY_CODE ^(BR) BlockCountry

MaxMindDBEnable On
MaxMindDBFile DB /usr/local/share/GeoIP/GeoLite2-Country.mmdb
MaxMindDBEnv MM_COUNTRY_CODE DB/country/iso_code
SetEnvIf MM_COUNTRY_CODE ^(BR) BlockCountry
Deny from env=BlockCountry

Add the above configuration snippet inside a Location or Directory directive. This is because of the Deny command, which cannot be added directly under a virtual host.

<VirtualHost _default_:443>
  <Location />
    MaxMindDBEnable On
    MaxMindDBFile DB /usr/local/share/GeoIP/GeoLite2-Country.mmdb
    MaxMindDBEnv MM_COUNTRY_CODE DB/country/iso_code
    SetEnvIf MM_COUNTRY_CODE ^(BR) BlockCountry
    Order deny,allow
    Allow from all
    Deny from env=BlockCountry
  </Location>
</VirtualHost>

5. Activation

To activate the configuration and to block Brazil, a restart of HTTPD is needed.

sudo service httpd restart

After HTTPD has restarted successfully, the new configuration is active. To see if it is working, a basic test is to simply access the site from an IP address that is blocked.


My IP is from Brazil, accessing my site now should give me an access denied message.
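Besides using a browser, you can check the HTTP status code with curl (the hostname is a placeholder for your own site). A blocked client receives Apache's 403 Forbidden response:

curl -I https://www.example.com/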

It works!


Convert SVG to PDF


Some time ago I registered for a virtual training. The training material was made available in a web app: no PDF or other downloadable version of the material. Not a big issue as long as I had internet access and only wanted to read the material. Problems started when I tried to access the material without a stable internet connection, and depending on where I am, being online is not an option. A plus point for having training material in PDF format: I can print it, annotate pages, mark words, learn offline, and it is easier on the eyes. While a cloud solution is good for the vendor, it's not always a very good option for the consumer.

Taking a closer look, I found out that the material was loaded by the web app as SVG files. All pages were available as SVGs: page1.svg to pageX.svg. So I could save the SVG files to my computer. Not feasible for all the content, but for selected chapters or pages this can make sense. Now I wanted to have these SVG files as PDF. It's possible to transform SVG to PDF, especially on Linux. The most used tools are

  • convert
  • Inkscape
  • rsvg-convert

The challenge was to convert the files to PDF in acceptable quality. All three tools transform SVG to PDF, but the quality differs, and most importantly, so does the way images are included. The only tool that rendered the embedded images correctly in the PDF was rsvg-convert. A sample page that shows the image problem perfectly is this one. The SVG as shown in the browser:

The font is clear and easy to read, the image is correctly embedded and shows all of its content. Now I wanted to have this page as a PDF, with the exact same information included and in good quality. For this, I tried all three tools and compared the results. All tools were installed and tested on Raspbian.


convert

convert is part of ImageMagick. To get the tool under Debian, you must install imagemagick.


apt-get install imagemagick


convert page1.svg page1.pdf


After the conversion from SVG to PDF, the result was far from useful. The font is not sharp and of low quality, and the image shows only half of its content. It may be possible to increase the font quality, but the incorrectly rendered image is a no-go.


Inkscape

Inkscape is an image editor and comes with a GUI for end users, but there is also a command line option that can be used to transform an SVG to PDF.


sudo apt-get install inkscape


inkscape page1.svg --export-area-page --without-gui --export-pdf=page1.pdf


The quality of the font is better: easy to read, sharp and clear. The image, however, is still not complete. This also disqualifies Inkscape as an acceptable solution.


rsvg-convert

This tool is part of librsvg. It comes from Gnome and has some “heavy” dependencies. The tool can convert SVG files to other formats like PNG or PDF.


sudo apt-get install librsvg2-bin


rsvg-convert -f pdf -o page1.pdf page1.svg


The quality of the font is good, easy to read and clear. The image is shown correctly. Definitely the best solution.
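Since the pages follow the page1.svg to pageX.svg naming pattern mentioned above, a small shell loop converts a whole chapter in one go (a sketch; adjust the glob to your file names):

for f in page*.svg; do rsvg-convert -f pdf -o "${f%.svg}.pdf" "$f"; done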


A lot of tools are available to convert SVG files to PDF; I did not even mention the libraries available for Java or JavaScript to do the job. The difference between the tools is how more complex SVG files are converted to PDF and whether all information is included. Especially images can be a challenge. Based on my tests, I can recommend rsvg-convert: it is fast and gives a very good result.


Parallel download of files using curl


In a previous blog, I showed how to download files using wget. The interesting part of that blog was passing the authentication cookies to the server and using the file name given by the Content-Disposition directive when saving the file. The example in that blog downloaded a single file. What if you want to download several files from a server? Maybe hundreds or even thousands of files? Neither wget nor curl can read the locations from a file and download them in parallel. You can start the downloads as a sequence, letting wget/curl download the files one by one, as shown in my other blog: just use a FOR loop until you reach the end.


To download a large number of files in parallel, you'll have to start the download command several times in parallel. To achieve this, several bash programs must be combined.

Create the list of files to download. This is the same as shown in my previous blog.

for i in {1..100}; do printf "https://server.fqdn/path/to/files/%07d/E\n" $i >> urls.txt; done

Start the parallel download of the files: 10 instances of curl running in the background. This is an enhanced version of the curl download command from my previous blog; xargs is used to run several instances of curl.

nohup cat urls.txt | xargs -P 10 -n 1 curl -O -J -H "$(cat headers.txt)" >nohup.out 2>&1 &


The first command is creating a list of files to download and stores them in the file urls.txt.

The second command is more complex. First, cat prints the content of urls.txt to standard out. Then xargs reads from standard in and uses each line as input for a curl command. For authentication and other headers, the content of the file headers.txt is used. The resulting curl call for the first line is:

curl -O -J -H "$(cat headers.txt)" https://server.fqdn/path/to/files/0000001/E

The parameter -P 10 tells xargs to run the command 10 times in parallel: it takes the first 10 lines of input and starts a new curl process for each, and as soon as one finishes, the next line is processed. Therefore, 10 curl processes are running in parallel. To run more downloads in parallel, pass a higher value to -P, like 20 or 40.

To run the download in background, nohup is used. All output is redirected to nohup.out: >nohup.out 2>&1


To keep the download running while being logged on via SSH, the tool screen should be used. After logging on via ssh, call screen, run the above command, and hit CTRL+A, then D, to detach from the screen session.

ssh user@server.fqdn
screen
nohup cat urls.txt | xargs -P 10 -n 1 curl -O -J -H "$(cat headers.txt)" >nohup.out 2>&1 &

Download files with leading zero in name using wget


In my previous blog I showed how wget can be used to download a file from a server, using HTTP headers for authentication and the Content-Disposition directive sent by the server to determine the correct file name. With the information from that blog, it's possible to download a single file from a server. But what if you must download several files? Maybe hundreds or thousands of files? Files whose names are created using a mask, adding leading zeros?

Add leading zeros

What you need is a list of files to download. I'll follow the example from my previous post: my files follow a specific pattern, a number. All files are numbered from 1 to n. To make it more special / complicated, it's not simply 1 to n; a mask is applied: 7 digits in total, with leading zeros. 123 becomes 0000123, and 5301 becomes 0005301. In recent versions of bash, you can use a FOR loop to iterate through the numbers and printf to format the output and add the leading zeros. To get the numbers correctly formatted, the command is:

for i in 140000 {140001..140005}; 
  do echo `printf "%0*d" 7 $i`; 
done

This prints (echo) the numbers 140000 to 140005 with leading zeros.

Start download

Embedding the wget command in the printf directive allows us to download the files. The execution flow is: the FOR loop together with printf creates the correctly masked number, and wget downloads the file. After the file is downloaded, the next iteration of the FOR loop starts and the next file is downloaded. Assuming I have PDF documents named 0140000.pdf to 0140005.pdf on the server http://localhost:9080, the FOR loop with wget is:

for i in 140000 {140001..140005}; 
  do `printf "wget -nc --content-disposition http://localhost:9080/%0*d.pdf\n" 7 $i`; 
done



The above example uses wget. Of course, you can do the same using curl.
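A sketch of the curl equivalent; -O -J saves the file under the name sent in the Content-Disposition header, as described in the parallel download post:

for i in 140000 {140001..140005}; do
  curl -O -J "http://localhost:9080/$(printf '%07d' $i).pdf"
done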


Custom 503 error page for Apache


A 5xx error code is returned by a web server when something goes wrong: the server was not able to process the request. For a reverse proxy, a common 5xx error message is 503, meaning that the backend server is not reachable.

In the technical architecture of my blog site, the WordPress site with my blogs is hosted on a Raspberry Pi in my living room, while external access goes through a reverse proxy hosted on Amazon EC2. If the reverse proxy on EC2 cannot reach my Raspberry Pi, a 503 error message is given.

The root cause can be that the Raspberry Pi is turned off, that there is no internet connection available for some reason (power outage, provider problem), or something else. In that case, the EC2 reverse proxy will throw an error and show the Apache standard 503 error page. That page is the same for all Apache installations worldwide. Giving your users a more personalized message can be a nice touch: for instance, a statement that you are aware of the issue and that it won't take long to get solved, or a better explanation of what happened.

For this to work, you need to have

  1. A custom 503.html file and
  2. an Apache configuration that uses this web page.

Create custom 503 file

This is up to you. The internet and Google are your friends.

Apache configuration

Apache has the ErrorDocument directive: for an HTTP error code, you specify an HTML file to be shown. Make the 503 HTML file you created in the above section available on the web server.


Important: the document root of Apache is /var/www/html, and the browser will request the error page via the URL /error/503.html, so the file must be placed accordingly.
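A minimal sketch, assuming your custom page is a file 503.html in the current directory:

sudo mkdir -p /var/www/html/error
sudo cp 503.html /var/www/html/error/503.html

Then reference it in the Apache configuration.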

sudo vim /etc/httpd/conf/httpd.conf


ErrorDocument 503 /error/503.html

You are done in the case of a normal web server setup, but the configuration shown so far won't work for a reverse proxy. A reverse proxy forwards all requests to the backend server, including the request for the 503 document. To not forward /error/503.html to the backend, put /error/ on an exception list. With this, requests to /error/ won't be forwarded by Apache and will instead be served by the local web server. To exclude /error/ from the ProxyPass rule, add:

ProxyPass /error/ !

This exclusion must come before the other ProxyPass directives. A somewhat more complete example of an Apache configuration:

<VirtualHost _default_:443>
  DocumentRoot "/var/www/html"
  ProxyPass /error/ !
  ErrorDocument 503 /error/503.html
  SSLProxyEngine On
  ProxyPass / https://backend/
  ProxyPassReverse / https://backend/
  SSLEngine on
</VirtualHost>

Restart Apache

sudo service httpd restart

The next time the backend server is not reachable, the reverse proxy will serve the custom 503 error page to the users.


Download files with wget


A tool for downloading web resources is wget. It comes with a feature to mirror web sites, but you can also use it to download specific files, like PDFs. This is very easy and straightforward:

wget <url>
Example: wget http://localhost/doc.pdf

This instructs wget to download the file doc.pdf from localhost and save it as doc.pdf. It is not as easy when the web server is

  • requesting authentication or
  • the URL of the PDF file ends in the same file name


Authentication

The documentation of wget states that you can provide the username and password for BASIC authentication. But what about a web site that asks for SAML 2.0? You can pass HTTP headers to wget via the parameter --header. This feature makes it easy: log on to the server via a browser and then copy the headers. These headers contain the session information of your user and can be used by wget to connect as an authenticated user.

How to get the HTTP headers

  1. Log on to the web site
  2. Open developer tools
  3. Select a web resource
  4. Copy the HTTP headers. For cURL, it's just selecting Copy all as cURL. This gives the complete cURL command. For just the headers, select Copy Request Headers.


User-Agent: Mozilla/5.0 Chrome/56
Accept-Encoding: gzip, deflate, sdch, br

Each line is one --header parameter for wget. It is not feasible to add all these headers to each wget request individually. For maintenance and better readability, these values should be read from a file. Problem: wget does not allow reading the header parameter from a file; there is no option like --header <file_with_headers>. What there is, is the .wgetrc file. This is the configuration file wget reads when called, and in this file it is possible to define HTTP header values. For each HTTP header, create a new “header = <value>” entry in the file.
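Taking the two headers from above plus a placeholder for the copied session cookie, the .wgetrc entries could look like this:

header = User-Agent: Mozilla/5.0 Chrome/56
header = Accept-Encoding: gzip, deflate, sdch, br
header = Cookie: <session cookies copied from the browser>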

With this configured in the file, wget will always send these HTTP headers with each request. If the session cookies copied from the browser are valid, the requests are authenticated and wget is able to download the file.

File name

Sometimes the file you want to download has a generic URL: each file ends in the same file name on the server. For instance, http://localhost/category/doc.pdf, or /uid/E.pdf. In such cases, wget will download the file and save it as doc.pdf or E.pdf. This is not a problem when you download just one file, but when you download more files, like 20, wget numbers the files: E.pdf.1, E.pdf.2, E.pdf.3, …

This makes it hard to work with the files. A solution is to check whether the web server supports Content-Disposition. If so, the server sends the real file name of the document in the HTTP response; it can be seen as the filename parameter of the Content-Disposition header.

With Content-Disposition, wget can save the file downloaded from /<UID>/E.pdf as <UID>.pdf instead of E.pdf. As the UID is unique, the file can easily be identified after download.

wget --content-disposition http://localhost/<uid>/E.pdf

Given the above example, the downloaded file is saved locally as 2399104_E_20170304.pdf.
