Increase RAID sync rate

Scenario

  • The HDDs are in an external USB case.
  • RAID5 with 3 HDDs (10 TB each)
  • Software RAID5 with mdadm and Debian Linux

Adding a new disk

When you add a new HDD to an existing RAID, a sync is started. In my case, I added a 10 TB disk to a RAID5 array. The sync started, and the estimated time was in the range of days: /proc/mdstat listed finish=5384min.

This number goes up and down a little bit, but the overall result is that the sync will need days. Checking the status again after a while, it still showed days: finish=3437min.

The main problem here is the rate at which mdadm can sync the data. The rate stays between 30000K/sec and 43000K/sec. That's not much given the size of the RAID. There are several tips available on the internet; what helped me was setting the stripe_cache_size.
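A quick back-of-the-envelope calculation shows why the estimate lands in the day range. This is a sketch, assuming the sync has to walk one full 10 TB member disk at roughly 40000K/sec as reported in /proc/mdstat:

```shell
# One member disk of the RAID5 has to be walked completely during the sync.
disk_bytes=$(( 10 * 1000 * 1000 * 1000 * 1000 ))  # 10 TB member disk
rate_bytes=$(( 40000 * 1024 ))                    # ~40000K/sec from /proc/mdstat
echo "$(( disk_bytes / rate_bytes / 3600 )) hours"
```

That prints 67 hours, almost three days, which matches the finish= values shown above.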

STRIPE_CACHE_SIZE

You set the size of stripe_cache_size for each RAID device (mdX). In case your RAID is md0:

echo 32768 > /sys/block/md0/md/stripe_cache_size
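Note that the cache is not free: according to the kernel's md documentation, it consumes system_page_size × nr_disks × stripe_cache_size bytes of memory. For the 3-disk RAID5 here, assuming the usual 4096-byte pages:

```shell
# Memory cost of stripe_cache_size=32768 on a 3-disk array with 4 KiB pages:
echo "$(( 32768 * 4096 * 3 / 1024 / 1024 )) MiB"
```

That is 384 MiB, so only go this high if the machine has RAM to spare.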

Result

The speed increased to 100000K/sec. That’s close to 3x faster than before. Time went down drastically.


Assign a static IP to DHCP client

After setting up a DHCP server on a Raspberry Pi running Linux, I get working leases for my clients. However, these are not static: it can happen that my smartphone gets a new IP address the next time it connects, 192.168.0.161 instead of 192.168.0.160. For some clients I want to make sure they always get the same IP. This can be achieved with ISC DHCP Server by registering a static lease for a specific MAC.

Example

I’ll use my soundbar for the rest of this blog as an example. The MAC of the network card is bc:30:d9:2a:c9:50. I want to always assign the IP 192.168.0.152 to the soundbar.

Find out client data

To find out client data like the MAC and the current lease, check the DHCP server log. Alternatively, take a look at the back of the device or its settings to find the MAC. To see the DHCP server log with the assigned leases:

sudo systemctl status isc-dhcp-server.service

The last line shows that the DHCP server assigned an IP address to a client, and it also shows the MAC address.

DHCPACK on 192.168.0.152 to bc:30:d9:2a:c9:50

Let’s make sure the MAC bc:30:d9:2a:c9:50 always gets the IP 192.168.0.152.

Configuration

sudo vim /etc/dhcp/dhcpd.conf

This is the DHCP server configuration file. I already configured it for a subnet 192.168.0.x where the server is assigning leases for the IP addresses in the range of 192.168.0.150 to 192.168.0.240.

Inside the subnet configuration, I have to add a configuration for the soundbar for IP 192.168.0.152.

host soundbar {
  hardware ethernet bc:30:d9:2a:c9:50;
  fixed-address 192.168.0.152;
}

The complete dhcpd.conf file will look like this:

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.150 192.168.0.240;
  option routers 192.168.0.1;
  option domain-name "itsfullofstars.de";
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  host soundbar {
    hardware ethernet bc:30:d9:2a:c9:50;
    fixed-address 192.168.0.152;
  }
}

Activate configuration

To activate the new configuration, either make dhcpd load the new configuration from the file, or restart the service.

sudo systemctl restart isc-dhcp-server.service

Check the status of the service.

sudo systemctl status isc-dhcp-server.service

Result

The assigned leases can be found in the dhcpd.leases file. All assigned leases are listed here, including the MAC address, IP address, and the start and end time of the lease. If all works out as planned, the soundbar will be in there with the static IP.

sudo more /var/lib/dhcp/dhcpd.leases

 


DHCP Server on Linux with Raspberry Pi

My internet provider is Unitymedia. Their default router comes with a DHCP server. Honestly, it's one of the worst products I have ever had to work with. My private network is 192.168.0.x, yet from time to time the DHCP server of the Unitymedia box hands out leases for 192.168.192.x. Changing my private network to 192.168.192.x does not work either, as the DHCP server then picks yet another address range. The advice from the Unitymedia help desk was to reboot the box, which, of course, does not solve the problem. Because of this error, some of my devices end up in a different network: Chromecast won't work, smartphones lose their internet connection, etc.

I do have a Raspberry Pi (RP) in 24/7 use. My idea is to run my own DHCP server on the RP. This not only solves the DHCP problem, but also gives me more control over the DHCP configuration.

Preparation

sudo apt-get update
sudo apt-get install isc-dhcp-server

This installs the ISC DHCP server. As you can see in the output, starting the DHCP server failed.

sudo systemctl status isc-dhcp-server.service

The error is simply caused by the DHCP server not being configured yet. Let's change that.

Configuration

Several parameters must be activated and configured.

sudo vim /etc/dhcp/dhcpd.conf

Lease time

Both values are given in seconds: default-lease-time is used when a client does not ask for a specific lease duration, and max-lease-time caps what a client may request.

default-lease-time 600;
max-lease-time 7200;

Activate DHCP server

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

Subnet

This configures which IP addresses are going to be distributed. My private network is 192.168.0.x with the router on 192.168.0.1. As DNS you can use whatever you want; as an example, I am using Google's DNS servers.

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.150 192.168.0.240;
  option routers 192.168.0.1;
  option domain-name "itsfullofstars.de";
  option domain-name-servers 8.8.8.8, 8.8.4.4;
}

This will give DHCP clients an IP address between .150 and .240, with .1 as the router, Google DNS, and my own domain name.
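A quick sanity check on the size of that pool, counting both ends of the range:

```shell
# Dynamic addresses available between 192.168.0.150 and 192.168.0.240:
echo "$(( 240 - 150 + 1 )) addresses in the pool"
```

91 dynamic addresses are plenty for a home network, and everything below .150 stays free for static assignments.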

Deactivate old DHCP server

To stop the DHCP server of the Unitymedia box from issuing wrong IP addresses, I am going to deactivate the service via its web interface.

Start DHCP server

After installing and configuring the new DHCP server on RP and deactivating the one from the router box, it’s time to start the new DHCP server.

Result

To see if an IP address is assigned, use this command:

sudo systemctl status isc-dhcp-server.service

Android

Putting my Android device into flight mode and back makes it connect to the Wi-Fi again and obtain a new IP address via DHCP. In the DHCP status log, I can see the DHCPDISCOVER from the Android device and that it got the IP address 192.168.0.150 assigned.

Mac

As my Mac always got the wrong IP assigned, I had switched it to manual configuration. Change the mode back to DHCP, apply, and deactivate and activate Wi-Fi again.

Soundbar

And my soundbar that got a strange IP address assigned by the Unitymedia router box? Works too!

Chromecast streaming shows that the soundbar is now in the same network.


Apt-get unable to connect to IPv6 address

Recently I had the problem that running apt-get update stalled while trying to connect to an IPv6 address. For instance, on a Raspberry Pi, the update process stalls while trying to connect to archive.raspberrypi.org. All other connections worked fine. Looking at the console output, a difference was that apt was trying to connect to an IPv6 address.

The problem was caused by:

100% [Connecting to archive.raspberrypi.org (2a00:1098:0:80:1000:13:0:8)]

A quick internet search showed that you can force apt to use IPv4 only. As the download worked over IPv4, this seems like a reasonable workaround.

Solution

You can pass a parameter to apt-get to disable IPv6, or write it to an apt configuration file to make the change persistent.

Configuration file

Create a new configuration file. This makes it easy to keep the change during updates and to remember that you configured it.

sudo vim /etc/apt/apt.conf.d/99disable-ipv6

Insert the line Acquire::ForceIPv4 "true"; and save the file. Then run:

sudo apt-get update

Parameter

To disable IPv6 just once while calling apt, the parameter is Acquire::ForceIPv4=true.

sudo apt-get -o Acquire::ForceIPv4=true update

Result

The IPv6 address of archive.raspberrypi.org is now ignored, the package data is loaded via IPv4, and apt-get update works again.

 


Partitioning and formatting a large disk

I got a new 10 TB disk. Before I add it to a RAID, I want to play around with it, aka: test the drive. Therefore, I need to format the drive to mount it. And before that, I need to create a partition.

FDISK

In the good old days, you used fdisk to partition an HDD. In recent years, fdisk has largely been replaced by parted, as fdisk had issues with large disks. Nevertheless, it still works.

Make sure to create a GPT partition table (g) first, instead of relying on the default MBR disklabel. Creating a new partition with "n" on an MBR disklabel gives you at most a 2 TB partition.

Creating a new disklabel of type GPT using "g" first lets the partition use the full 10 TB. So to create the GPT disklabel and the new partition, use: g, then n.
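The 2 TB ceiling is a property of the MBR format: partition sizes are stored as 32-bit sector counts, and with the classic 512-byte sector size that caps a partition at 2 TiB, which is why the 10 TB disk needs GPT:

```shell
# Largest partition addressable by an MBR entry: 2^32 sectors of 512 bytes.
max_bytes=$(( 4294967296 * 512 ))   # 2^32 * 512
echo "$(( max_bytes / 1024 / 1024 / 1024 / 1024 )) TiB"
```

GPT uses 64-bit sector addresses, so the limit disappears for any disk you can buy today.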

PARTED

An alternative to fdisk is parted. Parted is newer than fdisk, and there are GUIs available that make it easier for end users. Parted also accepts all parameters on the command line, so partitioning and sizing can be done in one command.

parted -s -a optimal /dev/sde mkpart primary ext4 0% 100%

Create Filesystem

Finally, after the HDD is partitioned, it's time to format the partition with ext4. Of course, you can use a different filesystem.

mkfs.ext4 /dev/sde1

Create a mount point and mount the new filesystem:

mkdir -p /mnt/sde
mount /dev/sde1 /mnt/sde/


Ideas for 2019

2019 is already a few days old, time to collect ideas and create a plan for 2019 and 2020. Since my return to Germany, I have had the idea of repeating the SAP community success from Brazil in Karlsruhe and the surrounding region.

Looking back

In Brazil, I contributed significantly to turning the SAP Inside Tracks from an event for a few nerds into a success. Unfortunately, the process did not run entirely smoothly, as you have to break up entrenched thinking patterns, and not everyone likes change. Benefiting from it, sure; but actively helping to shape it? What counts, though, is the result: from one SIT in Sao Paulo with about 20 people to a whole series of SITs in different cities with around 100 participants per event. Completely new topics were opened up, new people joined, new friendships were made.

In Rio, I decided against the SIT concept and organized meetups instead. The advantage is that these are not all-day events trying to cover as many topic areas as possible. While SIT already contains the "swear word" SAP, a meetup is an established term, inside and outside of IT. That really helps when you have to explain what you are planning and what people can expect.

What was the difference? Several times a year, an event with about 3 hours of talks spread over 2 to 4 hours and a clear focus on one topic. Especially the possibility to focus on one topic was very well received. There is no interruption in the agenda, no ABAP talk in the morning and then nothing until the evening. The group is also more homogeneous, which is both good and bad, but: among peers it is easier to connect and to discuss problems. Continuing the meetups is also easier. Even today, my SAP Rio de Janeiro meetups still take place successfully, without me being fully involved anymore. There have been events on ABAP, Fiori, Cloud, S/4, etc. The first meetup took place in 2015; to date, well over 600 people have participated, and well over 1,000 people have registered. For SAP topics, it is the largest event in the state of Rio, if not in all states except Sao Paulo (where SAP runs its own SAP Forum), or maybe even in all of South America.

The idea

And for 2019 and the North Baden / Karlsruhe region? There is definitely no need for an SAP Inside Track. Moreover, I no longer consider the concept of one big event with talks from all over the country plus neighboring countries up to date. The topics in the SAP world are too complex and too fast-moving for that. SAP ships new features to the SAP Cloud every 2 weeks; reporting on them 1 to 2 years later is simply too late.

The plan

Establish a series of talks in meetup format. Motto: think global, act local.

  • Frequency: 2 to 4 times a year
  • Duration: 2 to 4 hours
  • Day: during the week, on a Tuesday, Wednesday, or Thursday
  • 3 to 4 speakers
  • About 30 minutes per talk
  • Structure: keynote, talks, wrap-up
  • Focus on 1 topic (Cloud, C4C, Analytics, Mobile, Fiori, Personas, Design Thinking, etc.)
  • Open to everyone: SAP or non-SAP, student, beginner, experienced professional, CTO
  • Free of charge
  • As much hands-on as possible, as much theory as necessary

What is needed

  • Motivated people, whether they simply attend to learn or give a talk. Presenting in front of 20 people is more motivating than in front of 20 empty chairs.
  • Rooms. The event has to take place somewhere.
  • More people. Something like this stands and falls with the people.

I have made a start and am trying to find interested people in the Karlsruhe area. The Stammtisch SAP Karlsruhe exists, just drop by. You can also contact me on Twitter (@tobiashofmann), or via LinkedIn, Xing, e-mail, etc.

The next Stammtisch SAP Karlsruhe takes place on January 30. More info here.


State of the art documentation from SAP

SAP is investing heavily in marketing Fiori for iOS and its SDK. If you are even slightly interested in Fiori and UX at SAP, you have surely heard a lot about the SDK. 2 ½ years after the announcement, the Fiori Design Guidelines include an iOS section, there are SAP Developers tutorials, a special iPad app for learning its usage is available, and even Apple has set up a Fiori page. The current version of the SDK is 3.0, and now there is even an Android version available (with far fewer marketing activities).

If you want to write an app with the SDK, make sure you have an iPad. The online SDK documentation is available too, but offers less benefit than the Fiori Mentor app. In case you are wondering why the SDK documentation is not good enough: I suggest you take a look at it. For instance, the documentation for the map component.

As you can see, you see … not much.

No images, therefore: good luck finding out what the UI control should look like. A look at the page source code reveals that the images are only visible to SAP employees with access to SAP's intranet.

The server github.wdf.sap.corp is not accessible from the internet. In case you are wondering how to find the SDK documentation: the SAP Cloud Platform SDK for iOS Assistant contains a link to the API in its help. And of course: Google. So yes, it is made available to the interested developer. No S-, I- or D-user required.

In case you are one of the few who develop apps using the Fiori for iOS SDK, ask your manager for an iPad. The publicly available SDK documentation is already not easily consumable (use the Fiori Mentor app instead), does not include complete sample code, and comes with missing images.

SAP is now pushing the intelligent enterprise. Let's hope it will be intelligent enough to test whether the publicly available documentation is complete.

 


Remove last n characters of filenames in macOS

With macOS and Finder, you can easily substitute characters in filenames using the rename functionality. Just select 2 or more files, right-click, and enter the characters you want to substitute, like _ with a space.

Removing the last N characters from a filename, turning Text-2018221112.mp4 into Text.mp4, is more complicated. The rename dialog does not understand regular expressions. What you can use instead is the shell and rename.

Install rename

brew install rename

Go to the directory with the files and run

rename -n 's/.{11}\.mp4$/.mp4/' *

rename uses the well-known sed-style substitution syntax s/pattern/replacement/.

  • -n runs the replacement in simulation mode. It prints the result without renaming the files yet. Perfect for testing.
  • .{11} matches the 11 characters to remove
  • \.mp4$ anchors the pattern at the file extension; since the pattern also consumes the suffix, the replacement .mp4 inserts it again

If the output matches your goal, run rename without -n and the files will be renamed.

rename 's/.{11}\.mp4$/.mp4/' *
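To preview what such an expression does without rename touching any files, you can also run the substitution over a sample name with sed (the $ anchor is added here so only the end of the name matches, an assumption worth keeping in your own pattern too):

```shell
# Apply the same substitution to a sample filename:
echo "Text-2018221112.mp4" | sed -E 's/.{11}\.mp4$/.mp4/'
```

This prints Text.mp4, confirming the pattern strips the 11-character timestamp and keeps the extension.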

Response for preflight does not have HTTP ok status

Issuing an AJAX request is more complex than you might think.

Issue

When you issue an AJAX request to a server in another domain (CORS), you may get the following error message:

Response for preflight does not have HTTP ok status.

Problem

The server is configured to allow CORS. The Apache configuration includes

Header set Access-Control-Allow-Origin "*"

The response header of the service contains the correct header value. With this set you can access the service via CORS.

Cause

Now, why does it not work?

You have to be aware that this only works for simple CORS requests. More complex requests, for example those that set custom headers, may still fail. This is due to the browser's preflight mechanism, which checks whether the service accepts the request. Before issuing the actual AJAX request (e.g. GET or POST), the browser sends an OPTIONS request to check what the service accepts. You can see this in the network tab of Chrome.

The request includes two headers:

  • Access-Control-Request-Headers
  • Access-Control-Request-Method

Before issuing the actual GET request, the browser checks whether the service is correctly configured for CORS. It does this by verifying that the service accepts the methods and headers the actual request is going to use. Therefore, it is not enough to allow access from a different origin; these additional prerequisites must be fulfilled as well.
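The preflight exchange looks roughly like this (a sketch; the host and path are made up, and the exact header set depends on your request):

```
OPTIONS /service HTTP/1.1
Origin: https://app.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: Authorization

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT
Access-Control-Allow-Headers: Authorization
```

Only when the Allow-* response headers cover the requested method and headers does the browser go ahead with the real request.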

Solution

To make Apache send these headers, add the missing Header set directives to your HTTP service configuration.

Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Allow-Headers "append,delete,entries,foreach,get,has,keys,set,values,Authorization"

Still not working!

After setting these values, you may still not be able to call the service, as the browser keeps reporting an error: the response code of the preflight request is not 2xx. Returning a 200 HTTP code for OPTIONS requests can be enforced in the Apache config using a rewrite rule.

RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [R=200,L]

With this configuration, the service will now work with CORS. The preflight OPTIONS request passes, and the following GET request passes as well.


Automount share

The example used in this blog is a CIFS share from a Samba server running on a Raspberry Pi mounted on demand by a client running Debian.

Goal

The goal is to have a share on a client that is dynamically mounted. The share should only be mounted when an app needs to access it.

In my case, I have a server with a data storage share attached. The storage is made available to clients in the network. To avoid having the clients keep the share mounted all the time, it should only be mounted when real demand exists, for instance when an app needs to read data from the share. If a client does not need the share, it should not be mounted.

Process

To understand the scenario better, take a look at the picture below. The process can be separated into 4 steps.

  • Step 1: The client is configured, but the share is inactive.
  • Step 2: An app accesses the share. This can be an app as simple as ls /mnt/server/data
  • Step 3: The client connects to the server and mounts the share to the local mount point /mnt/server/data. The data is now available to the app.
  • Step 4: The app no longer uses the data from the share. After a timeout, the client disconnects the share.

The example uses a Raspberry Pi with Raspbian as the server and a Debian-based system (Proxmox) as the client. CIFS is used as the share type. On the server, Samba is running and configured to give a named user access to the data storage.

Installation

Server

Install Samba and configure access for a named user. This is not part of this blog.

Client

autofs is the package and tool taking care of mounting shares automatically. Install it using apt-get.

apt-get update
apt-get install autofs

Configuration of autofs

autofs needs to be configured. To make this easier, the package comes with templates. I am going to use the autofs master template as my starting point. Take a look at it, as it contains an explanation of what is needed.

more /etc/auto.master

To add an automounted share, a new line must be added to the file. The format is: mount-point file options.

Before adding the line, you first must understand how the template and autofs work and what you want to achieve. The first parameter is the local mount point. The directory given here is the parent; the actual shares are mounted as sub-folders of that directory. For instance, if you choose /mnt/server and the remote share is data, the final mount point will be /mnt/server/data. I am going to mount the remote share below /mnt/server.

The second parameter is the configuration file for that mount point. The third parameter specifies how autofs treats the mount point. To unmount the share after 1 minute of inactivity, use the option --timeout=60. The --ghost option creates the subfolders even when the server is not reachable.

Edit master template

Add a new configuration line for mounting the server share:

/mnt/server /etc/auto.server --timeout=60 --ghost

Mount configuration

The actual mount configuration for the share is specified in the file /etc/auto.server. Create the file and edit it.

touch /etc/auto.server
vim /etc/auto.server

Insert mount options.

  • [username] – name of user used to connect to Samba share
  • [password] – password of the user
  • [remote_server] – IP / server name of the Samba server
  • /data – name of the share configured at Samba. In case you have a share configured named Music, or Fotos, or Work, substitute data with the correct share name.

data -fstype=cifs,username=[username],password=[password] ://[remote_server]/data

Save file
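A side note: since auto.server now contains the password in plain text, mount.cifs alternatively accepts a credentials= option pointing to a separate, tightly permissioned file (the path /etc/creds-data below is just an example):

```
data -fstype=cifs,credentials=/etc/creds-data ://[remote_server]/data
```

The credentials file then holds the same values, one per line:

```
username=[username]
password=[password]
```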

Change permission

chmod 0644 /etc/auto.server

Start autofs as service

Stop the autofs service and start it in debug mode to see if the configuration works.

If it works, exit debug mode and start the service.

systemctl start autofs.service

Test

To test, go through the 4 steps described in the picture at the top of this blog.

Step 1

Client is ready. Check the mounted volumes. You will see that no CIFS volume is mounted yet.

Using mount, you can see that the autofs mount point is available.

/etc/auto.server on /mnt/server type autofs (rw,relatime,…

In the parent mount point for the CIFS share, autofs created the folder data.

Step 2 & 3

Run an app that accesses the share.

ls /mnt/server/data

Accessing the content of /mnt/server/data now automatically mounts the CIFS share.

df -h

mount

//192.168.0.164/data on /mnt/server/data type cifs (rw,relatime

Step 4

Ensure that no app is using the share and wait until the specified timeout occurs. Check with mount and df to see that the share is unmounted.

Additional information

Start / stop autofs service

Start service

systemctl start autofs.service

Stop service

systemctl stop autofs.service

Links

https://wiki.archlinux.org/index.php/autofs

 
