Assign a static IP to DHCP client

After setting up a DHCP server on a Raspberry Pi running Linux, I get working leases for my clients. However, these are not static. It can happen that my smartphone gets a new IP address the next time it connects: 192.168.0.161 instead of 192.168.0.160. For some clients I want to make sure they always get the same IP. This can be achieved with ISC DHCP Server by registering a static lease for a specific MAC address.

Example

I’ll use my soundbar for the rest of this blog as an example. The MAC of the network card is bc:30:d9:2a:c9:50. I want to always assign the IP 192.168.0.152 to the soundbar.

Find out client data

To find out the client data like MAC address and current lease, check the DHCP server log. Alternatively, take a look at the back of the device or its settings to find the MAC. To see the DHCP server log with the assigned leases:

sudo systemctl status isc-dhcp-server.service

The last line shows that the DHCP server assigned an IP to a client and also shows the MAC address.

DHCPACK on 192.168.0.152 to bc:30:d9:2a:c9:50
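If the service runs under systemd (as on Raspbian), the log can also be followed live with journalctl:

sudo journalctl -u isc-dhcp-server.service -f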

Let’s make sure the MAC bc:30:d9:2a:c9:50 always gets the IP 192.168.0.152.

Configuration

sudo vim /etc/dhcp/dhcpd.conf

This is the DHCP server configuration file. I already configured it for the subnet 192.168.0.x, where the server assigns leases for the IP addresses in the range 192.168.0.150 to 192.168.0.240.

Inside the subnet configuration, I have to add a host entry for the soundbar with the IP 192.168.0.152.

host soundbar {
  hardware ethernet bc:30:d9:2a:c9:50;
  fixed-address 192.168.0.152;
}

The complete dhcpd.conf file will look like this:

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.150 192.168.0.240;
  option routers 192.168.0.1;
  option domain-name "itsfullofstars.de";
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  host soundbar {
    hardware ethernet bc:30:d9:2a:c9:50;
    fixed-address 192.168.0.152;
  }
}
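Before activating the new configuration, it can be checked for syntax errors with dhcpd's test mode:

sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf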

Activate configuration

To activate the new configuration, either make dhcpd reload the configuration file or restart the service.

sudo systemctl restart isc-dhcp-server.service

Check the status of the service.

sudo systemctl status isc-dhcp-server.service

Result

The assigned leases can be found in the dhcpd.leases file. All assigned leases are listed here, including MAC address, IP address, and start and end time of the lease. If all works out as planned, the soundbar will be in there with its static IP.

sudo more /var/lib/dhcp/dhcpd.leases
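A lease entry in that file looks roughly like this (schematic; timestamps shortened):

lease 192.168.0.152 {
  starts 2 2019/01/01 10:00:00;
  ends 2 2019/01/01 10:10:00;
  hardware ethernet bc:30:d9:2a:c9:50;
}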

 

DHCP Server on Linux with Raspberry Pi

My internet provider is Unitymedia. Their default router comes with a DHCP server. Honestly, it’s one of the worst products I ever had to work with. My private network is 192.168.0.x. The DHCP server of the Unitymedia box from time to time hands out leases for 192.168.192.x. Changing my private network to 192.168.192.x does not help, as the DHCP server then picks yet another address range. The advice from the Unitymedia help desk was to reboot the box, which, of course, does not solve the problem. Because of this error, some of my devices end up in a different network: Chromecast won’t work, smartphones lose their internet connection, etc.

I do have a Raspberry Pi (RPi) in 24/7 use. My idea is to run my own DHCP server on the RPi. This not only solves the DHCP problem, but also gives me more control over the DHCP configuration.

Preparation

sudo apt-get update
sudo apt-get install isc-dhcp-server

This installs the ISC DHCP server. As you can see in the output, starting the DHCP server failed.

sudo systemctl status isc-dhcp-server.service

The error is simply caused by the DHCP server not being configured yet. Let’s change that.

Configuration

Several parameters must be activated and configured.

sudo vim /etc/dhcp/dhcpd.conf

Lease time

default-lease-time 600;
max-lease-time 7200;
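Both values are in seconds: by default, a client gets a lease for 10 minutes (600), and it may request a lease of at most 2 hours (7200).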

Activate DHCP server

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

Subnet

This configures which IP addresses are going to be distributed. My private network is 192.168.0.x with the router on 192.168.0.1. As DNS you can use whatever you want; as an example, I am using Google’s DNS servers.

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.150 192.168.0.240;
  option routers 192.168.0.1;
  option domain-name "itsfullofstars.de";
  option domain-name-servers 8.8.8.8, 8.8.4.4;
}

This will give DHCP clients an IP address between .150 and .240, with router .1 and Google DNS, and sets the domain name to my own.
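Depending on the distribution and package version, you may also have to tell the service which interface to listen on. On Raspbian, this is done in /etc/default/isc-dhcp-server; the variable is INTERFACES or INTERFACESv4, depending on the version (a wired Pi on eth0 is assumed here):

INTERFACESv4="eth0"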

Deactivate old DHCP server

To stop the DHCP server of the Unitymedia box from issuing wrong IP addresses, I am going to deactivate the service via its web interface.

Start DHCP server

After installing and configuring the new DHCP server on the RPi and deactivating the one from the router box, it’s time to start the new DHCP server.
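Start it via systemd:

sudo systemctl start isc-dhcp-server.service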

Result

To see if an IP address was assigned, use this command:

sudo systemctl status isc-dhcp-server.service

Android

Putting my Android device into flight mode and back makes it reconnect to Wifi and obtain a new IP address via DHCP. In the DHCP server log, I can see the DHCPDISCOVER from the Android device and that it got the IP address 192.168.0.150 assigned.

Mac

As my Mac always got the wrong IP assigned, I had switched it to manual configuration. Change the mode back to DHCP, apply, and deactivate / activate Wifi.

Soundbar

And my soundbar that got a strange IP address assigned by the Unitymedia router box? Works too!

Chromecast streaming shows the soundbar is now in the same network.

Apt-get unable to connect to IPv6 address

Recently I had the problem that running apt-get update stalled while trying to connect to an IPv6 address. For instance, on a Raspberry Pi, the update process stalls while trying to connect to archive.raspberrypi.org. All other connections worked fine. Looking at the console output, the difference was that apt was trying to connect to an IPv6 address.

The problem was caused by:

100% [Connecting to archive.raspberrypi.org (2a00:1098:0:80:1000:13:0:8)]

A quick internet search showed that you can force apt to use only IPv4 and skip IPv6. As the download worked over IPv4, this seems like a reasonable workaround.

Solution

You can pass a parameter to apt-get to disable IPv6, or write it to an apt configuration file to make the change persistent.

Configuration file

Create a new configuration file. A separate file makes it easy to keep the change during updates and to remember that you configured it.

sudo vim /etc/apt/apt.conf.d/99disable-ipv6

Insert the following line and save the file:

Acquire::ForceIPv4 "true";

Then run the update again:

apt-get update

Parameter

To disable IPv6 just once when calling apt, use the parameter Acquire::ForceIPv4=true.

sudo apt-get -o Acquire::ForceIPv4=true update

Result

The IPv6 address of archive.raspberrypi.org is now ignored, the package data is loaded via IPv4, and apt-get update works again.

 

Partitioning and formatting a large disk

I got a new 10 TB disk. Before I add it to a RAID, I want to play around with it, aka: test the drive. For that, I need to format the drive and mount it. And before that, I need to create a partition.

FDISK

In the good old days, you used fdisk to partition an HDD. Some years ago, parted replaced fdisk as the tool of choice, as fdisk had issues with large disks. Nevertheless, fdisk still works.

Make sure to create a GPT partition table first (g) instead of directly creating a new partition (n). Creating a new partition using “n” on the default disklabel gives you only a 2 TB partition.

Creating a new disklabel of type GPT using “g” first gives you the full 10 TB. To create the GPT disklabel and the new partition, use: g, then n.
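A minimal interactive fdisk session could look like this (assuming the new disk shows up as /dev/sde, as in the parted example below):

sudo fdisk /dev/sde
g    (create a new empty GPT partition table)
n    (add a new partition, accepting the defaults for the full size)
w    (write the changes to disk and exit)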

PARTED

An alternative to fdisk is parted. Parted is newer than fdisk, and there are GUIs available that make it easier for end users. Parted also accepts all parameters on the command line, so the whole partitioning and sizing can be done in one command.

parted -s -a optimal /dev/sde mkpart primary ext4 0% 100%
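Note that mkpart expects an existing partition table. If the disk is still blank, create the GPT disklabel first:

parted -s /dev/sde mklabel gpt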

Create Filesystem

Finally, after the HDD is partitioned, it’s time to format the partition with EXT4. Of course, you can use a different filesystem.

mkfs.ext4 /dev/sde1
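The mount point must exist before the partition can be mounted:

mkdir -p /mnt/sde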

mount /dev/sde1 /mnt/sde/

Response for preflight does not have HTTP ok status

Issuing an AJAX request is more complex than you might think.

Issue

When you issue an AJAX request to a server in another domain (CORS), you may get the following error message:

Response for preflight does not have HTTP ok status.

Problem

The server is configured to allow CORS. The Apache configuration includes

Header set Access-Control-Allow-Origin "*"

The response of the service contains the correct header value. With this set, you can access the service via CORS.

Analysis

Now, why does it not work?

You have to be aware that this only works for simple CORS requests. For more complex requests that set custom headers, etc., the service may not work. This is due to the preflight mechanism of the browser, which checks whether the service accepts the request. Before issuing the actual AJAX request (e.g. GET or POST), an OPTIONS request is sent to check what the service accepts. You can see this in the network tab of Chrome.

The request includes two headers:

  • Access-Control-Request-Headers
  • Access-Control-Request-Method

Before issuing the actual GET request, the browser checks whether the service is correctly configured for CORS. This is done by checking whether the service accepts the methods and headers the actual request is going to use. Therefore, it is not enough to allow the service to be accessed from a different origin; the additional prerequisites must be fulfilled as well.
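Schematically, the preflight for a GET request carrying an Authorization header looks like this (host and path invented for illustration):

OPTIONS /service HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: Authorization

Only if the response allows the requested method and headers will the browser send the actual GET request.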

Solution

To configure Apache to send the missing headers, include the following Header set directives in your HTTP service configuration.

Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Allow-Headers "append,delete,entries,foreach,get,has,keys,set,values,Authorization"

Still not working!

After setting these values, you may still not be able to call the service: the browser keeps reporting an error, as the response code of the preflight is not 2xx. Returning HTTP 200 can be enforced in the Apache configuration using a rewrite rule.

RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [R=200,L]
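The condition matches only OPTIONS requests; the R=200 flag makes Apache answer them directly with HTTP status 200 instead of passing them on.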

With this configuration, the service now works with CORS: the preflight OPTIONS request passes, and the subsequent GET request passes as well.

 

Automount share

The example used in this blog is a CIFS share from a Samba server running on a Raspberry Pi mounted on demand by a client running Debian.

Goal

The goal is to have a share on a client that is mounted dynamically. The share should only be mounted when an app needs to access it.

In my case, I have a server with a data storage share attached. The storage is made available to clients in the network. To avoid having the share mounted by the clients all the time, it should be mounted only when real demand exists, for instance when an app needs to read data from the share. If the client does not need the share, it should not be mounted.

Process

To understand the scenario better, take a look at the picture below. The process can be separated into 4 steps.

  • Step 1: the client is configured, but the share is inactive.
  • Step 2: An app is accessing the share. This can be a command as simple as ls /mnt/share
  • Step 3: The client is connecting to the server and mounting the share to the local mount point /mnt/share. The data is now available to the app.
  • Step 4: The app is not using the data from the share any longer. After a timeout, the client disconnects the share.

The example uses a Raspberry Pi with Raspbian as the server and a Debian based system (Proxmox) as the client. CIFS is used as the share type. On the server, Samba is running and configured to give a named user access to the data storage.

Installation

Server

Install Samba and configure access for a named user. This is not covered in this blog.

Client

Autofs is the package and tool taking care of mounting shares automatically. Install it using apt-get.

apt-get update
apt-get install autofs

Configuration of autofs

Autofs needs to be configured. To make this easier, the package comes with templates. I am going to use the autofs master template as my starting point. Take a look at the master template, as it contains an explanation of what is needed.

cd /etc
more auto.master

To add an automounted share, a new line must be added to the file. The format is: mount-point config-file options.

Before adding the line, you first must understand how the template and autofs work and what you want to achieve. The first parameter is the local mount point. The directory given here is the parent; the actual shares are mounted as sub-folders of that directory. For instance, if you choose /mnt/server and the remote share is data, the final mount point will be /mnt/server/data. I am going to mount the remote share below /mnt/server.

The second parameter is the configuration file for that mount point. The third parameter specifies how autofs treats the mount point. To unmount the share after 1 minute of inactivity, use the option --timeout=60. The --ghost option will create the subfolder even when the server is not reachable.

Edit master template

Add a new configuration line for mounting the server share:

/mnt/server /etc/auto.server --timeout=60 --ghost

Mount configuration

The actual mount configuration for the share is specified in the file /etc/auto.server. Create the file and edit it.

touch /etc/auto.server
vim /etc/auto.server

Insert the mount options. The placeholders are:

  • [username] – name of the user used to connect to the Samba share
  • [password] – password of that user
  • [remote_server] – IP / server name of the Samba server
  • data – name of the share configured in Samba. In case you have a share named Music, Fotos, or Work, substitute data with the correct share name.

data -fstype=cifs,username=[username],password=[password] ://[remote_server]/data

Save the file.

Change permission

chmod 0644 /etc/auto.server

Start autofs as service

Stop autofs and start it in debug mode to see if the configuration works.
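For example (run automount in the foreground with verbose output; exit with Ctrl+C):

sudo systemctl stop autofs.service
sudo automount -f -v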

If the configuration works, exit the debug session and start the service.

systemctl start autofs.service

Test

To test, go through the 4 steps described in the picture at the top of this blog.

Step 1

The client is ready. Check the mounted volumes: no CIFS volume is mounted yet.

Using mount, you can see that the mount point is available.

/etc/auto.server on /mnt/server type autofs (rw,relatime,…

In the parent mount point for the CIFS share, autofs created the folder data.

Step 2 & 3

Run an app that accesses the share:

ls /mnt/server/data

Accessing the content of /mnt/server/data now automatically mounts the CIFS share.

df -h

mount

//192.168.0.164/data on /mnt/server/data type cifs (rw,relatime

Step 4

Make sure that no app is using the share and wait until the specified timeout occurs. Check with mount and df to see that the share is unmounted.

Additional information

Start / stop autofs service

Start service

systemctl start autofs.service

Stop service

systemctl stop autofs.service

Links

https://wiki.archlinux.org/index.php/autofs

 

OpenVPN Assign static IP to client

After configuring the overall OpenVPN client and server infrastructure, my clients can connect to the VPN. The client can access server resources and vice versa. While the server normally always gets the same IP assigned, the client IP address is assigned dynamically from a pool of IP addresses. Meaning: there is no guarantee that the client always gets the same IP address. Normally, this is not a problem, as the client connects to consume server resources, such as a web site or a git repository. In my case, the architecture is that the OpenVPN server acts as a proxy to internal services; the web site, git repository, etc. are running on the client. Therefore, the server must be able to reach the client under a fixed address.

To make this work, each time a client connects, the same IP must be assigned to it. OpenVPN allows assigning a static IP to a client.

Configuration

  1. In /etc/openvpn, create the folder ccd. Ccd stands for client config directory, meaning: it contains the configuration for a client.
  2. Edit the file server.conf and add (or uncomment) the line “client-config-dir ccd”. The relevant section of the sample configuration:

# EXAMPLE: Suppose the client
# having the certificate common name "Thelonious"
# also has a small subnet behind his connecting
# machine, such as 192.168.40.128/255.255.255.248.
# First, uncomment out these lines:
client-config-dir ccd

  3. Create a configuration file for each client and put it into the directory ccd. As file name, use the client name as it appears in the CN field of the client certificate. The file contains a single line:

ifconfig-push IP MASK

Example:

ifconfig-push 10.8.0.2 255.255.255.255

CLI steps

sudo mkdir /etc/openvpn/ccd
sudo touch /etc/openvpn/ccd/client1
sudo vim /etc/openvpn/server.conf

Uncomment the line containing the client config parameter:

client-config-dir ccd

sudo vim /etc/openvpn/ccd/client1

Insert:

ifconfig-push 10.8.0.2 255.255.255.255

Restart the OpenVPN service on the server:

sudo /etc/init.d/openvpn restart

Before the change, the client got an automatically assigned IP: 10.8.0.6. After restarting the OpenVPN server, the client gets 10.8.0.2, as the server log confirms.

Additional information can be found in the OpenVPN documentation.

client-config-dir

“This file can specify a fixed IP address for a given client using --ifconfig-push, as well as fixed subnets owned by the client using --iroute.” https://openvpn.net/index.php/open-source/documentation/manuals/65-openvpn-20x-manpage.html

ifconfig-push

“Push virtual IP endpoints for client tunnel, overriding the --ifconfig-pool dynamic allocation.” https://openvpn.net/index.php/open-source/documentation/manuals/65-openvpn-20x-manpage.html

ERR_CONTENT_DECODING_FAILED

Configuring a reverse proxy is not an easy task. It involves some trial and error and dealing with unexpected errors. One of those errors is ERR_CONTENT_DECODING_FAILED: the web site won’t load, and your browser will show this error message.

Error ERR_CONTENT_DECODING_FAILED may show up in your browser when a resource is configured on your reverse proxy and the backend communication is working. That is: the backend returns data, but not in a form the browser expects.

Example: the browser expects a GZIP response, but receives plain text. Hence the hint from the browser that content decoding failed: the content is received, but the browser is not able to decode / understand it. The same applies the other way around: if a plain text response is expected, but the backend returns zipped content, the browser cannot read it.

To solve this error, unset the Accept-Encoding request header in your reverse proxy configuration.

Apache

Documentation

RequestHeader unset Accept-Encoding

Example

<Location /test>
    RequestHeader unset Accept-Encoding
    ProxyPass https://0.0.0.0:443
    ProxyPassReverse https://0.0.0.0:443/
    Order allow,deny
    Allow from all
</Location>

NGINX

Documentation

proxy_set_header Accept-Encoding "";
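Embedded in a proxy location, the directive could look like this sketch (backend URL invented):

location /test {
    proxy_set_header Accept-Encoding "";
    proxy_pass https://backend.example.com;
}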

How to use find to sort files across folders

Short version

You have files named File0.txt to File100.txt spread across different folders and want to move the first 30 files into a separate directory (the commands are for Mac users; Linux users can use find and mv directly):

For sorting FileNN.txt (character + number)

gfind -type f -printf "%f %p\n" | sort -n -k 1.5 | sed 's/.* //' | head -30 | xargs gmv -t ./A/

For sorting NN.txt (numeric filename)

gfind -type f -printf "%f %p\n" | sort -n | sed 's/.* //' | head -30 | xargs gmv -t ./A/

Preparation

For the commands below to work, you’ll need GNU find. If you are using a Mac, install the GNU versions of find and mv via Homebrew.

brew install findutils coreutils

Create a test folder structure. There will be 3 folders and several files in them.

mkdir 1
mkdir 2
mkdir 3

Create 101 files named FileNN.txt with sample content and place each of them randomly in one of the three directories.

for i in {0..100}
  do
    Num=$((1 + RANDOM % 3))
    echo hello > "$Num/File${i}.txt"
done

After running the above script, the folder structure looks like this (output of ls -R):

Also create the target directory A:

mkdir A

Commands

After the initial setup is done, we have several files in 3 directories. If you use find to get a list of all files, you’ll see that the output is not sorted.

gfind ./ -type f

A Unix command to sort output is sort. Simply applying sort in this scenario won’t help, as the files are sorted by folder name:

gfind ./ -type f | sort -n

The output is now sorted by folder name and then by file name, but not by file name alone. Copying the first 50 elements won’t result in File1 – File50, as the files are distributed across the directories.

This hints at a solution: sort only on the filename, while still having the complete path in the output for piping the path to the move command. Find offers exactly this possibility: printing specific fields. To control the output, the parameter -printf is available; %f prints the filename, while %p includes the folder.

gfind -type f -printf "%f\n"

The output of the command only prints the filename.

To output the file with its path, use %p. In both cases, \n is used to put each file on a new line.

gfind -type f -printf "%p\n"

Both output parameters can be combined. %f %p\n will first print the filename, then space, then the path.

gfind -type f -printf "%f %p\n"

Applying sort on this output will sort on the file name only.

gfind -type f -printf "%f %p\n" | sort -n

Close, but not exactly how it should be. In case your filenames consist only of numbers, this would already work. In this example, however, the filenames contain characters, so the sorting is not correct: it starts with File0.txt, then File1.txt, but then comes File10.txt instead of File2.txt. To sort by the number, add an additional parameter to sort: -k 1.5. As the filename starts with a fixed prefix (File), this instructs sort to start at the fifth character of the first field, skipping the prefix and comparing only the numbers.

Note: you can apply the same sort parameter without find, using just ls. As long as your paths have the same length, this works. For folders named 1..9 it’s fine, but when a folder name has two or more characters (like 10, 213, or test), the parameter needs to be adjusted.

List all files with directory name using ls:

ls -d1 */*

Sort by number in filename:

ls -d1 */* | sort -n -k 1.7

gfind -type f -printf "%f %p\n" | sort -n -k 1.5

With the last command, the output is correctly sorted by filename. Now, how to use this output to move the files to the target directory? Just piping the output to mv won’t work: the leading filename part is not needed, only the path part. Both parts are separated by a blank, and using sed, it’s possible to remove everything up to the blank from the output.

gfind -type f -printf "%f %p\n" | sort -n -k 1.5 | sed 's/.* //'
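For example, the line "File10.txt ./2/File10.txt" becomes "./2/File10.txt": the greedy pattern .* matches everything up to the last blank, and sed deletes it.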

The last step is to use mv to move the files to the target directory. To not move all files, only the first 30 are taken using head. GNU mv is needed, as the default macOS BSD mv does not support the -t parameter. To pass the files to gmv, xargs is used.

gfind -type f -printf "%f %p\n" | sort -n -k 1.5 | sed 's/.* //' | head -30 | xargs gmv -t ./A/

Result

The first 30 files are now in folder A.

gls -1v ./A

 

Download resources from SAP Cloud for your CI job

When running a CI job, you may need some SAP tools, for instance the MTA builder or the Neo tools. Many CI servers include integrations for common build tools, or plugins are provided by the community or the vendor. Jenkins offers plugins for Maven, Ant or Node that let you easily integrate these into a CI job. If you have a CI job for SAP, it is your task to make the necessary tools available, as there are not many SAP plugins for Jenkins.

Some tools you may need can be found on SAP’s tools site, for instance the MTA builder: a simple JAR file that is available for download and needed when you are working with MTA apps.

Before you can download the JAR file, you need to agree to the EULA.

This means that you cannot simply download the JAR via the CLI:

wget https://tools.hana.ondemand.com/additional/mta_archive_builder-1.1.0.jar

Solution

Running the above wget command will not download the tool, but a web site. Some may recognize that this is very close to how Oracle protected its Java downloads. And the “solution” here is the same: send the right cookie via wget.

wget --header "Cookie: eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt" https://tools.hana.ondemand.com/additional/mta_archive_builder-1.1.0.jar

This works for downloading other tools from the download page, like the Neo SDK, too:

wget --header "Cookie: eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt" https://tools.hana.ondemand.com/sdk/neo-javaee6-wp-sdk-2.137.0.1.zip

Let’s hope SAP provides some Jenkins plugins that take care of downloading these tools automatically.