xcrun: error: invalid active developer path


A Mac is a nice computer for developing, but macOS and Apple can make your developer life a challenge. After updating Xcode – after all, why have a Mac if you do not develop iOS apps – it may happen that git stops working.

Running git gives you:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

The usual situation: it worked yesterday, today it is broken, and you did nothing. Besides updating Xcode. The problem occurs easily: when you update Xcode, the command line tools normally need to be updated as well.

In case the Apple App Store isn’t giving you the option to update the command line tools, run the command

xcode-select --install

Output

xcode-select: note: install requested for command line developer tools

This should either install the command line tools and give you back a working git tool, or let you install the tools manually via the App Store.
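To check which developer directory is currently active, let xcode-select print the path; it should point either to /Library/Developer/CommandLineTools or to your Xcode.app installation:

xcode-select -p

If the path points to a location that no longer exists, sudo xcode-select --reset switches back to the default command line tools path.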

After this, git should be working again. Happy coding.


Enable Wake on LAN on Windows 10


To be able to wake up your computer via Wake on LAN (WOL), you need to enable this feature both in the BIOS and in the Windows 10 LAN adapter settings.

Configuration: BIOS

The configuration depends on the BIOS of your computer. In my case, Wake on LAN is located in the power-on section and is disabled by default. To use the feature, just enable it.

Do not forget to save the change.

Configuration: Windows 10

After activating WOL in the BIOS, you need to configure Windows 10 to allow the device to wake the computer. My test computer is a Lenovo Q180 running a German Windows 10. More information on how to activate WOL for this device can be found here.

Go to the network adapter properties. Select the LAN adapter, open its properties, and in the properties screen select Configure.

Go to the power management settings. Check all check boxes.

Now go to the next tab: Advanced. Ensure that Wake on Magic Packet is set to Enabled.

Windows Firewall configuration

To know whether a computer is running, you can use ping: if the computer responds to a ping, it's up and running. To allow ping requests through the standard Windows firewall, ensure that the rule for the file and printer sharing service is activated for your network.

There are two network types: private and public. Activating ping for the private network should be sufficient. If you are unsure whether your LAN counts as private or public, you can activate ping requests for both, at least as long as your network is still secured by a router with a firewall.
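To test the whole setup from another machine on the LAN, for example a Linux box, you can send a magic packet and then check for a ping response. A small sketch; it assumes the wakeonlan tool is installed (package wakeonlan on Debian/Ubuntu), and the MAC and IP addresses are placeholders for your target computer:

# send the magic packet to the MAC address of the target's LAN adapter
wakeonlan AA:BB:CC:DD:EE:FF
# give the computer a moment to boot, then check if it answers
ping -c 4 192.168.1.10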


Block access from country by IP in Apache


In this blog I will show how you can block access to your Apache hosted internet services for a whole country. Access is blocked based on the IP address of the client. In case of a VPN, where the user connects to a VPN server in another country, the user will still be able to access the site.

The internet is a great tool to ensure freedom of speech. Anyone can raise his or her voice: stay informed about what is happening in the world, let others know about something, share knowledge. You can do so on a social site or by hosting your own site. The ease of access to information, the ability to search it instantly, and the huge amount of information discoverable by a large part of the world's population make this one of the truly great contributions to making the world a better place. Some countries don't like this and apply censorship, access restrictions, or worse. And basically, if you decide to block a whole country from accessing your site, it's one step in the wrong direction.

Why would you block a whole country? Isn't it a great thing about the internet that it's accessible from anywhere in the world, using just a browser? It's not that simple. A few reasons to block a country can be:

  • Legal requirements. Your site is not in compliance with the country's law. For instance, maybe you are logging too much personal information?
  • The functionality is not meant for that country. You have a commercial service and are not offering a payment option or a localized version.
  • You are popular in a country and flooded with requests, but these are just operational overhead for you, as your site is not targeted at these users.
  • If you think hard enough, you can come up with a good reason.

After finding yourself in the situation of having to block a specific country, the question is: how? You can use a blocker in your web platform (a WordPress plugin, for example), or let Apache do it. Using a .htaccess file for this is not optimal for performance reasons; better is to use a module. A quick Google search reveals that a good option is the GeoLite DB from MaxMind, and they also offer an Apache 2.4 module. The module works with Apache 2.4 and the HTTPD server available on Amazon AMI images.


Steps

Steps for using the GeoLite2 DB for blocking countries in Apache:

  1. Download GeoLite2 DB
  2. Install dependencies
  3. Install Apache module
  4. Configuration
  5. Activation

1. Download GeoLite2 database

The GeoLite2 DB is available under a free and a commercial license. The free version should be good enough for a private blog. You can get it from the MaxMind site.

Select GeoLite2 Country in the binary format. Download the file using wget.

wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz

Extract the archive.

tar zxvf GeoLite2-Country.tar.gz

The actual DB file is close to 3 MB in size.

Copy it to a directory where the apache user can find it. A good default location is /usr/local/share, in a new directory named GeoIP.

sudo mkdir /usr/local/share/GeoIP
sudo cp /home/ec2-user/geolite2db/GeoLite2-Country_20170704/GeoLite2-Country.mmdb /usr/local/share/GeoIP/

2. Install dependencies

Install libmaxminddb

For the Apache module to work, the C library libmaxminddb must be installed. This can be done using yum.

sudo yum install libmaxminddb.x86_64 libmaxminddb-devel.x86_64

HTTPD devel files

Another dependency is the HTTPD development files. These can also be easily installed using yum.

sudo yum install httpd24-devel.x86_64

3. Install Apache module

The Apache module is available as source code from GitHub. For the installation, download the latest release from GitHub; in my case, the latest release was version 1.1.0.

Download the release to Linux using wget and unpack it.

wget https://github.com/maxmind/mod_maxminddb/releases/download/1.1.0/mod_maxminddb-1.1.0.tar.gz
tar zxvf mod_maxminddb-1.1.0.tar.gz

Now you can compile and install the module. To do so, run

./configure
sudo make install

This should compile the module and put the files into the right directory of HTTPD. If an error occurs during configuration, compilation or installation, look at the error message, and good luck.

The directive to load the new module is automatically added to the file /etc/httpd/conf/httpd.conf.
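If the directive is missing, you can add it manually. A sketch of the line, assuming make install placed the module in Apache's default module directory:

LoadModule maxminddb_module modules/mod_maxminddb.so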

To test that the module can be loaded, restart HTTPD.

sudo service httpd restart

The service needs to start without errors; this indicates that the module was loaded successfully. To validate this, check whether the new module is actually loaded by HTTPD by listing all loaded modules.

sudo httpd -M

Search for the maxmind module:

maxminddb_module (shared)

The new module is correctly loaded by HTTPD. Now we can configure Apache to make use of the module.

4. Configuration

Edit the HTTPD config file and add the directives to block a specific country. The GitHub site of MaxMind contains an example that serves as a very good starting point.

MaxMindDBEnable On
MaxMindDBFile DB /usr/local/share/GeoIP/GeoLite2-Country.mmdb
MaxMindDBEnv MM_COUNTRY_CODE DB/country/iso_code
SetEnvIf MM_COUNTRY_CODE ^(RU|DE|FR) BlockCountry
Deny from env=BlockCountry

Using the above example, let's adjust it to block Brazil. Don't worry, I won't actually block Brazil; this is just a test, as my IP currently is from Brazil, which makes it easier for me to test the setup. To block Brazil, check whether MM_COUNTRY_CODE starts with BR: SetEnvIf MM_COUNTRY_CODE ^(BR) BlockCountry

MaxMindDBEnable On
MaxMindDBFile DB /usr/local/share/GeoIP/GeoLite2-Country.mmdb
MaxMindDBEnv MM_COUNTRY_CODE DB/country/iso_code
SetEnvIf MM_COUNTRY_CODE ^(BR) BlockCountry
Deny from env=BlockCountry

Add the above configuration snippet inside a Location or Directory directive. This is necessary because of the Deny directive, which cannot be placed directly under a virtual host.

<VirtualHost _default_:443>
  <Location />
    MaxMindDBEnable On
    MaxMindDBFile DB /usr/local/share/GeoIP/GeoLite2-Country.mmdb
    MaxMindDBEnv MM_COUNTRY_CODE DB/country/iso_code
    SetEnvIf MM_COUNTRY_CODE ^(BR) BlockCountry
    Order deny,allow
    Allow from all
    Deny from env=BlockCountry
  </Location>
</VirtualHost>

5. Activation

To activate the configuration and to block Brazil, a restart of HTTPD is needed.

sudo service httpd restart

After HTTPD has restarted successfully, the new configuration is active. To see if it is working, a basic test is to simply access the site from an IP address that is blocked.

Test

My IP is from Brazil, so accessing my site now should give me an access denied message.
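A quick way to verify this from the command line is to request the site with curl and look at the status code; www.example.com is a placeholder for your own site here:

curl -I https://www.example.com/

For a blocked client, the first line of the response should be HTTP/1.1 403 Forbidden.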

It works!


Convert SVG to PDF


Some time ago I registered for a virtual training. The training material was made available in a web app; there was no PDF or other downloadable version of the material. Not a big issue, as long as I had internet access and only wanted to read the material. The problems started when I tried to access the material without a stable internet connection. Depending on where I am, being online is not an option. A plus point for having training material in PDF format: I can print it, annotate pages, mark words, read it offline, and it is easier on the eyes. While a cloud solution is good for the vendor, it's not always a very good option for the consumer.

Taking a closer look, I found out that the material was loaded by the web app as SVG files. All pages were available as SVG: page1.svg to pageX.svg. So I could save the SVG files to my computer. Not something feasible for all the content, but for selected chapters or pages this can make sense. Now I wanted to have these SVG files as PDF. It's possible to transform SVG to PDF, especially on Linux. The most commonly used tools available are

  • convert
  • Inkscape
  • rsvg-convert

The challenge was to convert the files to PDF in acceptable quality. All tools transform SVG to PDF, but the quality differs, and most importantly: how the images are included. The only tool that rendered the embedded images correctly in the PDF was rsvg-convert. A sample page that shows the image problem perfectly is this one. The SVG as shown in the browser:

The font is clear and easy to read, the image is correctly embedded and shows all its content. Now I wanted to have this page as PDF, with the exact same information included and in good quality. For this, I tried all three tools and compared the results. All tools were installed and tested on Raspbian.

Convert

convert is part of ImageMagick. To get the tool under Debian, you must install the imagemagick package.

Installation

apt-get install imagemagick

Run

convert page1.svg page1.pdf

Result

After the conversion from SVG to PDF, the result was far from useful. The font is blurry and of low quality, and the image shows only half of its content. I think it's possible to increase the quality of the font, but the incorrectly rendered image is a no-go.

Inkscape

Inkscape is an image editor and comes with a GUI for end users, but there is also a command line option that can be used to transform an SVG to PDF.

Installation

sudo apt-get install inkscape

Run

inkscape page1.svg --export-area-page --without-gui --export-pdf=page1.pdf

Result

The quality of the font is better: easy to read, sharp and clear. The image, however, is still not complete. This disqualifies Inkscape as an acceptable solution, too.

rsvg-convert

This tool is part of librsvg. It comes from Gnome and has some “heavy” dependencies. The tool can convert SVG files to other formats like PNG or PDF.

Installation

sudo apt-get install librsvg2-bin

Run

rsvg-convert -f pdf -o page1.pdf page1.svg

Result

The quality of the font is good: easy to read and clear. The image is shown correctly. Definitely the best solution.

Conclusion

A lot of tools are available to convert SVG files to PDF; I did not even mention the libraries available for Java or JavaScript that can do the job. The difference between the tools is how more complex SVG files are converted to PDF and whether all information is included. Especially images can be a challenge. Based on my tests, I can recommend rsvg-convert. It is fast and gives a very good result.
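If you need to convert a whole set of pages, rsvg-convert can be wrapped in a small shell loop, and the resulting PDFs merged into a single document. A sketch, assuming files page1.svg to page100.svg and the pdfunite tool from the poppler-utils package:

# convert every page into its own PDF
for i in {1..100}; do rsvg-convert -f pdf -o page$i.pdf page$i.svg; done
# merge the single PDFs, in page order, into one document
pdfunite $(for i in {1..100}; do echo page$i.pdf; done) material.pdf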


Parallel download of files using curl


In a previous blog, I showed how to download files using wget. The interesting part of that blog was passing the authentication cookies to the server and using the file name given by the Content-Disposition directive when saving the file. The example in that blog downloaded a single file. What if you want to download several files from a server? Maybe hundreds or even thousands of files? wget is not able to read the locations from a file and download them in parallel, and neither is curl. You can start the downloads as a sequence, letting wget/curl download the files one by one, as shown in my other blog: just use a FOR loop until you reach the end.

Commands

For downloading a large number of files in parallel, you'll have to start the download command several times in parallel. To achieve this, several bash programs must be combined.

Create the list of files to download. This is the same as shown in my previous blog.

for i in {1..100}; do printf "https://server.fqdn/path/to/files/%0*d/E\n" 7 $i >> urls.txt; done

Start the parallel download of the files, running 10 instances of curl in the background. This is an enhanced version of the curl download command from my previous blog; xargs is used to run several instances of curl.

nohup cat urls.txt | xargs -P 10 -n 1 curl -O -J -H "$(cat headers.txt)" >nohup.out 2>&1 &

Explanation

The first command creates the list of files to download and stores it in the file urls.txt.

The second command is more complex. First, cat prints the content of urls.txt to standard out. Then xargs reads from standard in and uses each line as input for a curl command. For authentication and other headers, the content of the file headers.txt is used. The resulting curl call for the first line is:

curl -O -J -H "$(cat headers.txt)" https://server.fqdn/path/to/files/0000001/E

The parameter -P 10 tells xargs to run the command 10 times in parallel. It takes the first 10 lines of input and starts a new curl process for each of them; therefore, 10 curl processes are running in parallel. To run more downloads in parallel, pass a higher value for -P, like 20 or 40.

To run the download in the background, nohup is used. All output is redirected to nohup.out: >nohup.out 2>&1

SSH

To keep the download running while being logged on via SSH, the tool screen should be used. After logging on via ssh, start screen, run the above command, and press CTRL+A followed by D to detach from the screen session.

ssh user@server.fqdn
screen
nohup cat urls.txt | xargs -P 10 -n 1 curl -O -J -H "$(cat headers.txt)" >nohup.out 2>&1 &
CTRL+A+D
exit

Download files with leading zero in name using wget


In my previous blog I showed how wget can be used to download a file from a server, using HTTP headers for authentication and the Content-Disposition directive sent by the server to determine the correct file name. With the information from that blog it's possible to download a single file. But what if you must download several files? Maybe hundreds or thousands of files? Files whose names are created using a mask, adding leading zeros?

Add leading zeros

What you need is a list of the files to download. I'll follow the example from my previous post: the file names follow a specific pattern, a number. All files are numbered from 1 to n. To make it more special (read: complicated), it's not just 1 to n; a mask is applied: 7 digits in total, with leading zeros. 123 becomes 0000123, and 5301 becomes 0005301. In recent versions of Bash, you can use a FOR loop to iterate over the numbers and printf to format the output and add the leading zeros. To get the numbers correctly formatted, the command is:

for i in {140000..140005}; 
  do printf "%07d\n" $i; 
done

This prints the numbers 140000 to 140005 with leading zeros.

Start download

Embedding the wget command in the printf directive allows downloading the files. The execution flow is: the FOR loop together with printf creates the correctly masked number, and wget downloads the corresponding file. After the file is downloaded, the next iteration of the FOR loop starts and the next file is downloaded. Assuming that I have PDF documents named 0140000.pdf to 0140005.pdf on the server http://localhost:9080, the FOR loop with wget is:

for i in {140000..140005}; 
  do wget -nc --content-disposition "$(printf "http://localhost:9080/%07d.pdf" $i)"; 
done

Result

wget downloads the six documents and saves them under their zero-padded names, 0140000.pdf to 0140005.pdf.

Alternative

The above example uses wget. Of course, you can do the same using curl.
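A sketch of the curl variant; -O saves the file and -J honors the Content-Disposition header:

for i in {140000..140005}; 
  do curl -O -J "$(printf "http://localhost:9080/%07d.pdf" $i)"; 
done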


Custom 503 error page for Apache


A 5xx error code is returned by a web server when something goes wrong: the server was not able to process the request. For a reverse proxy, a common 5xx error message is 503, meaning that the backend server is not reachable.

In the technical architecture of my blog site, the WordPress site with my blogs is hosted on a Raspberry Pi in my living room, while external access goes through a reverse proxy hosted on Amazon EC2. If the reverse proxy on EC2 cannot reach my Raspberry Pi, a 503 error message is returned.

The root cause can be that the Raspberry Pi is turned off, that the internet connection is down for some reason (power outage, provider problem), or something else. When this happens, the EC2 reverse proxy throws an error and shows the Apache standard 503 error page. That page is the same for all Apache installations worldwide. Giving your users a more personalized message can be a nice touch: for instance, a statement that you are aware of the issue, that it won't take long to get solved, or a better explanation of what happened.

For this to work, you need to

  1. Create a custom 503.html file and
  2. Configure Apache to use this web page.

Create custom 503 file

This is up to you. The internet and Google are your friends.
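As a starting point, a minimal page can be created directly on the server; the wording of the message is of course just a placeholder:

sudo mkdir -p /var/www/html/error
sudo tee /var/www/html/error/503.html > /dev/null <<'EOF'
<!DOCTYPE html>
<html>
<head><title>Service temporarily unavailable</title></head>
<body>
<h1>We will be right back</h1>
<p>The site is currently not reachable. We are aware of the issue and it should not take long to get solved.</p>
</body>
</html>
EOF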

Apache configuration

Apache has the ErrorDocument directive: for an HTTP error code, you specify an HTML file to be shown. Make the 503 HTML file created in the section above available on the web server.

/var/www/html/error/503.html

Important: the document root of Apache is /var/www/html, so the browser will request the file via the URL /error/503.html. Reference it in the Apache configuration.

sudo vim /etc/httpd/conf/httpd.conf

Insert

ErrorDocument 503 /error/503.html

In the case of a normal web server setup, you are done. The configuration shown so far won't work for a reverse proxy, though: a reverse proxy forwards all requests to the backend server, including the request for the 503 document. To not forward /error/503.html to the backend, put /error/ on an exception list. With this, requests to /error/ won't be forwarded by Apache and will instead be served by the local web server. To exclude /error/ from the ProxyPass rule, add:

ProxyPass /error/ !

This exclusion must come before the other ProxyPass directives. A somewhat more complete example of an Apache configuration:

<VirtualHost _default_:443>
  DocumentRoot "/var/www/html"
  ProxyPass /error/ ! 
  ErrorDocument 503 /error/503.html
  SSLProxyEngine On
  ProxyPass / https://backend/
  ProxyPassReverse / https://backend/
  SSLEngine on
</VirtualHost>

Restart Apache

sudo service httpd restart

The next time the backend server is not reachable, the reverse proxy will serve the custom 503 error page to the users.


Download files with wget


A tool for downloading web resources is wget. It comes with a feature to mirror web sites, but you can also use it to download specific files, like PDFs. This is very easy and straightforward:

wget <url>
Example: wget http://localhost/doc.pdf

This instructs wget to download the file doc.pdf from localhost and save it as doc.pdf. It is not as easy when the web server is

  • requesting authentication, or
  • serving every file under a URL that ends in the same file name.

Authentication

The documentation of wget states that you can provide a username and password for BASIC authentication. But what about a web site that asks for SAML 2.0? You can pass HTTP headers to wget via the parameter --header. This feature makes it easy: log on to the server via a browser and then copy the headers. These headers contain the session information of your user and can be used by wget to connect as an authenticated user.

How to get the HTTP headers

  1. Log on to the web site
  2. Open developer tools
  3. Select a web resource
  4. Copy the HTTP headers. For cURL, it's just selecting Copy all as cURL, which gives the complete cURL command. For just the headers, select Copy Request Headers.

Example:

User-Agent: Mozilla/5.0 Chrome/56
Accept-Encoding: gzip, deflate, sdch, br
Cookie: JSESSIONID=DBE1FED5C040B2DF7;

Each line is one --header parameter for wget. It is not feasible to add all these headers to each wget request individually; for maintenance and better readability, these values should be read from a file. Problem: wget does not allow reading the header parameter from a file. There is no option like --header <file_with_headers>. What there is, is the .wgetrc file. This is the configuration file wget reads when called, and in this file it's possible to define HTTP header values. For each HTTP header, create a new “header = <value>” entry in the file.
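Using the example headers from above, the entries in ~/.wgetrc would look like this:

header = User-Agent: Mozilla/5.0 Chrome/56
header = Accept-Encoding: gzip, deflate, sdch, br
header = Cookie: JSESSIONID=DBE1FED5C040B2DF7;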

With this configured, wget will always send these HTTP headers with each request. If the session cookies copied from the browser are valid, the requests are authenticated and wget is able to download the file.

File name

Sometimes the file you want to download has a generic URL: each file ends in the same file name on the server. For instance, http://localhost/category/doc.pdf, or /uid/E.pdf. In such cases, wget will download the file and save it as doc.pdf or E.pdf. This is not a problem when you download just one file, but when you download more files, like 20, wget numbers the files: E.pdf.1, E.pdf.2, E.pdf.3, …

This makes it hard to work with the files. A solution is to check whether the web server supports Content-Disposition. If so, the server sends the real file name of the document in the HTTP response; it can be seen as the filename value of the Content-Disposition header.

With content-disposition, wget can save the file downloaded from /<UID>/E.pdf as <UID>.pdf instead of E.pdf. As the UID is unique, the file can easily be identified after the download.

wget --content-disposition http://localhost/<uid>/E.pdf

Given the above example, the downloaded file is saved locally as 2399104_E_20170304.pdf.


A start job is running for dev-disk-by


Recently I restarted one of my Linux servers, and the computer did not start up as expected. No externally accessible service was running, like Apache or SSH. This made the computer inaccessible remotely and left me in the dark. After a while, the server responded to ping, but nothing more.

After I connected the server to a display and keyboard, I could see the error message: “a start job is running for dev-disk-by […]”. After that, Linux only gave me the option to log on in rescue mode or to restart the system. A restart didn't help. I checked the internet and found out that the message can be caused by an fstab entry.

Looking at the content of my /etc/fstab file, I saw an old entry I had once created for a test and never maintained (aka deleted). The system was trying to mount a partition that was no longer available, and the boot stopped there.

UUID=a6674495-4249-9696-0d9c83 /mnt/disk btrfs defaults,noatime,auto 0 0

I commented out this line in fstab and restarted the system. This time the system booted correctly and all services came up again.
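An alternative to deleting such an entry is the nofail mount option: with it, systemd continues booting even when the device is missing. A sketch based on the entry above:

UUID=a6674495-4249-9696-0d9c83 /mnt/disk btrfs defaults,noatime,nofail 0 0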


OpenSSL CA to sign CSR with SHA-256 – Sign CSR issued with SHA-256


The overall process is:

  1. Create CA
    1. Private CA key
      1. Create private key
      2. Check private key
    2. Public CA certificate
      1. Create public certificate
      2. Check public certificate
  2. Sign CSR
    1. SHA-1
      1. Create CSR using SHA-1
      2. Check CSR
      3. Sign CSR enforcing SHA-256
      4. Check signed certificate
    2. SHA-256
      1. Create CSR using SHA-256
      2. Check CSR
      3. Sign CSR
      4. Check signed certificate

Sign CSR request – SHA-256

When a CSR is created, a signature algorithm can be specified. Currently, this should be SHA-256. Installing a TLS certificate that uses SHA-256 ensures that browsers like Chrome, Firefox, etc. won't show a security warning to the user. Signing the CSR using the CA is straightforward.

Create CSR using SHA-256

openssl req -out sha256.csr -new -newkey rsa:2048 -nodes -keyout sha256.key -sha256

The command creates two files: sha256.key containing the private key and sha256.csr containing the certificate request.

Check CSR

openssl req -verify -in sha256.csr -text -noout

The signature algorithm of the CSR is SHA-256

Sign CSR

Signing the CSR using the CA:

openssl x509 -req -days 360 -in sha256.csr -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -out sha256.crt

This signs the CSR. Note that the signature algorithm of the issued certificate is chosen during signing, not taken from the CSR: without an explicit digest parameter, openssl uses its default (see below).

Check signed certificate

openssl x509 -text -noout -in sha256.crt

The certificate is signed using SHA-256.

Possible problem: the certificate may be signed using SHA-1.

Why was the certificate signed with SHA-1? Without the -sha256 parameter, openssl uses its default digest. Depending on the version of openssl you are using, the default may be SHA-1. This is the case with the default openssl binary available on macOS.

openssl version -a

This version is old. Better to install a newer one using brew.

After updating, the default algorithm is SHA-256 and no longer SHA-1. In case you cannot update the default openssl binary, install a newer version to a different location and use that one.
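Independent of the installed version, you can also enforce the digest directly when signing, by passing the -sha256 parameter to the x509 command:

openssl x509 -req -days 360 -sha256 -in sha256.csr -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -out sha256.crt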
