GitLab behind a reverse proxy

I am using GitLab for private projects. GitLab is run using the Docker image provided by GitLab. I can access my instance from outside via a reverse proxy (Apache).

The setup is simple:

  • GitLab Docker container is running on NUC and listens on port 7080 for HTTP connections
  • NUC is connected via OpenVPN to the server on AWS
  • Apache as a reverse proxy listening on port 443 for HTTPS
  • Apache terminates SSL: incoming requests are HTTPS, but forwarded as HTTP to GitLab
  • Apache forwards incoming requests to GitLab on Docker

A standard setup of GitLab in Docker with Apache as reverse proxy gives access to GitLab without problems. Start the GitLab container, configure Apache, done. You can access GitLab from the internet, create repositories, clone, push, etc.

While the setup will work out of the box, you need to carry out additional configuration to really make GitLab work with SSL termination. What is not working correctly:

  • The external URL is not configured, so the URL in the repository clone dialog does not use HTTPS.
  • You cannot upload attachments in the wiki.
  • You cannot add pictures to the wiki via copy & paste from the clipboard.
  • Uploading files / images may work in the issues dialog, but not in the wiki, as the wiki uses a different upload service.

Attaching an image from clipboard fails.

Problem

My external URL is https://gitlab.itsfullofstars.de. You configure GitLab by setting parameters in the file gitlab.rb and then reconfiguring GitLab, so the obvious first step is to set this value as the external URL in gitlab.rb.

## GitLab URL
##! URL on which GitLab will be reachable.
external_url 'https://gitlab.itsfullofstars.de'

Run reconfigure to enable the configuration.

gitlab-ctl reconfigure

Accessing gitlab.itsfullofstars.de:

This will set all parameters in all involved components of GitLab based on the values set in gitlab.rb. You can see the new value by looking at the automatically generated configuration file for the internal web server.

## GitLab settings
gitlab:
## Web server settings (note: host is the FQDN, do not include http://)
  host: gitlab.itsfullofstars.de
  port: 443
  https: true

The problem is: GitLab thinks it is running standalone, with direct access to the internet. There is no specific parameter to tell GitLab that requests come in through a reverse proxy with SSL termination. Setting only the external URL in gitlab.rb results in an erroneous configuration:

  • SSL for internal GitLab web server (nginx) is enabled
  • Nginx is not listening on port 80, only on 443
  • My Apache reverse proxy is configured to connect to nginx on port 80. Hence the Service Unavailable error.

Port 80 is no longer working. Accessing GitLab directly via 192.168.x.x:7443 on the HTTPS port (Docker maps 7443 to 443):

Access works, but GitLab tried to obtain a new TLS certificate during the reconfiguration process and failed, hence the self-signed certificate.
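This behavior can be confirmed from the proxy host. A quick sketch, assuming the container host is reachable as nuc with the Docker port mappings 7080 to 80 and 7443 to 443:

```shell
# Port 80 inside the container (mapped to 7080) no longer answers:
curl -sS -o /dev/null http://nuc:7080/ || echo "HTTP port is dead"

# Port 443 (mapped to 7443) answers, but with the self-signed certificate;
# -k is needed to skip certificate verification:
curl -skI https://nuc:7443/ | head -n 1
```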

Attaching an image won’t work

Because of the external_url value, GitLab redirects to gitlab.itsfullofstars.de. As the reverse proxy is not able to connect to nginx, the result is a 503 error.

Configuring only the external GitLab URL leaves two bad options:

  • An incorrect HTTPS configuration due to the wrong (self-signed) certificate
  • Adjusting the Apache reverse proxy to no longer do SSL termination

I do not want to take care of managing GitLab's internal TLS certificate. I want to access it via HTTP only and use Apache for SSL termination.

Solution

The solution is to configure the external URL and let the internal nginx listen on port 80 without HTTPS.

Gitlab.rb

Configure a value for external_url.

vim config/gitlab.rb

external_url 'https://gitlab.itsfullofstars.de'
nginx['listen_port'] = 80
nginx['listen_https'] = false

Afterwards, run reconfigure again:

gitlab-ctl reconfigure
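Note that with the Docker setup these commands run inside the container. A sketch, assuming the container is named gitlab (an assumption) and the configuration directory is mounted on the host:

```shell
# Edit the mounted configuration on the host ...
vim config/gitlab.rb

# ... then run reconfigure inside the container:
docker exec -it gitlab gitlab-ctl reconfigure
```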

GitLab HTTP server

Check the configuration for the internal GitLab web server. The server should be gitlab.itsfullofstars.de, the port 80, and the protocol HTTP.

more data/gitlab-rails/etc/gitlab.yml
## GitLab settings

gitlab:
## Web server settings (note: host is the FQDN, do not include http://)
  host: gitlab.itsfullofstars.de
  port: 80
  https: false

Optional: Restart

Running reconfigure restarts the services, but if you want to be sure, restart GitLab.

gitlab-ctl restart

Apache configuration

My Apache configuration. Maybe not all parameters are needed, but it works.

<VirtualHost *:443>
  ServerName gitlab.itsfullofstars.de
  ProxyPreserveHost On
  ProxyRequests Off
  SSLProxyEngine on
  SSLEngine on
  SSLHonorCipherOrder on
  <Location />
    RequestHeader unset Accept-Encoding
    RequestHeader set Host "gitlab.itsfullofstars.de"
    RequestHeader add X-Forwarded-Ssl on
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass http://nuc:7080/
    ProxyPassReverse http://nuc:7080/
    Order allow,deny
    Allow from all
  </Location>
</VirtualHost>
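Once Apache is reloaded, the whole proxy chain can be verified from any client; a quick check using curl:

```shell
# The HTTPS request should now return GitLab's response
# (200, or a 302 redirect to the sign-in page), not 503 Service Unavailable:
curl -sI https://gitlab.itsfullofstars.de | head -n 1
```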

Result

After executing the above steps, your configuration should work as follows: an external request goes to gitlab.itsfullofstars.de, Apache does the SSL termination, and nginx accepts the forwarded request without blocking it or trying to redirect to HTTPS.

Attaching an image to GitLab Wiki by pasting it from the clipboard


Links

Some resources I found while solving the issue for myself.

https://gitlab.com/gitlab-org/gitlab-ce/issues/27583

https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl

https://gitlab.com/gitlab-org/gitlab-ce/issues/52243


Elastic APM forbidden request: endpoint is disabled

I am currently going through a UI5 app of mine that I want to enhance so I can use APM for performance monitoring. While I have done this several times before, I always run into the same problem:

  • I install a new version of ELK and APM on my laptop.
  • I add the necessary NPM files for the backend and it works.
  • I add RUM to the UI5 app and it won’t work.

As I have been through the same scenario before, I know why it is not working. In this blog I'll share the steps needed to let an app with RUM send data to the APM server.

Error message

The response I get from APM is that the endpoint is disabled.

{
"error": "forbidden request: endpoint is disabled"
}

Browser

APM server

2019-01-11T01:01:22.507+0200 ERROR [request] beater/common_handler.go:299 error handling request {"request_id": "7be8514c-e929-4ec1-af1a-0dc037743302", "method": "GET", "URL": "/intake/v2/rum/events", "content_length": 0, "remote_address": "127.0.0.1", "user-agent": "Mozilla/5.0", "response_code": 403, "error": {"error":"forbidden request: endpoint is disabled"}}

Solution

APM configuration in Kibana contains all the information necessary: you have to enable RUM support in APM.

Go to the APM server directory and edit the configuration file apm-server.yml.

vim apm-server.yml

Enable RUM

To enable it, set enabled: true. The documentation contains some examples that you can adapt to your needs. The default settings are OK, so you only have to activate RUM.

rum:
  # To enable real user monitoring (RUM) support set this to true.
  enabled: true

Be careful to remove the # before both rum and enabled. Removing it only before enabled does not make enabled a child of rum. It's a YAML configuration file: hierarchy matters.

Save and start APM server.
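Whether the change took effect can be checked with a request against the RUM intake endpoint from the error log above; a sketch, assuming the APM server listens on localhost:8200:

```shell
# While RUM is disabled, this returns the 403 "endpoint is disabled" error.
# After enabling RUM and restarting, the 403 should be gone (the endpoint may
# instead complain about the request itself, which is fine for this check).
curl -s http://localhost:8200/intake/v2/rum/events
```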

Frontend app (UI5) configuration

Add the RUM JavaScript file and set the serviceName parameter.

<script src="elastic-apm-rum.umd.min.js" crossorigin></script>
<script>
  elasticApm.init({
    serviceName: 'ui',
    serverUrl: 'http://localhost:8080/proxy/http/localhost:8200',
  })
</script>

Result

Calling the UI5 app, RUM is loaded and the AJAX calls to the events endpoint of the APM server now pass. RUM is working.

Browser

Kibana

The request is shown as Unknown, but that's a different problem.


How to download your iOS distribution certificate

To be able to sign your app and let an external build tool like Microsoft AppCenter upload it to iTunes Connect, you need to provide two files:

  • Certificate: iOS Distribution
  • Provisioning Profile: App Store

Microsoft provides technical documentation on how to get the code signing certificates and how to upload them to your build pipeline. I’ll try to add more explanation and screenshots to make it easier to get both files. This blog is for the iOS distribution certificate.

Distribution certificate

I need the correct distribution certificate for the provided provisioning profile. The provisioning profile contains a list of "linked" distribution certificates. If yours is not in the list, you cannot use your certificate to sign the app.

Get certificate

Log on to the Apple Developer Center and select Certificates, IDs & Profiles from the left menu.

I have several (3) certificates available.

Which one is it? When I created the provisioning profile, I added a distribution certificate. I only have one, so this is the certificate I need.

To be able to use the distribution certificate in an external tool like Microsoft AppCenter, I have to convert the certificate into a p12 file. To convert, import the certificate into the macOS Keychain, then export the certificate together with its private key and save it as p12.

1. Download the certificate

2. Check file

The downloaded cer file is named ios_distribution.cer. To see if this certificate is for distribution, just read the content using more. It must contain the line iPhone Distribution.

more ios_distribution.cer
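If you prefer a structured view over more, openssl can decode the certificate; a sketch, assuming the file is DER-encoded as downloaded from Apple:

```shell
# Print subject and validity; the subject CN should contain "iPhone Distribution"
openssl x509 -inform der -in ios_distribution.cer -noout -subject -dates
```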

3. Import into keychain

Open the certificate in the macOS Keychain.

I also have the private key for that certificate.

4. Export

Select the certificate and the private key and export both. Save the file and provide a strong passphrase.

Now I have my personal distribution certificate (Zertifikate.p12) and provisioning profile (.mobileprovision).
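Before uploading, you can verify that the p12 file really contains both the certificate and the private key; a sketch using openssl (the passphrase is the one chosen during export):

```shell
# Lists the bags in the file without printing the key material itself;
# expect one certificate bag and one key bag
openssl pkcs12 -info -in Zertifikate.p12 -noout
```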


How to get your iOS App provisioning profile

To be able to sign your app and let an external build tool like Microsoft AppCenter upload it to iTunes Connect, you need to provide two files:

  • Certificate: iOS Distribution
  • Provisioning Profile: App Store

Microsoft provides technical documentation on how to get the code signing certificates and how to upload them to your build pipeline. I’ll try to add more explanation and screenshots to make it easier to get both files. This blog is for the provisioning profile.

Provisioning Profile

There are two ways to get the provisioning profile:

  • XCode automatically generates one
  • You create it manually

If a single developer does everything from coding to uploading to the App Store on one Mac, it's a good idea to let Xcode handle the provisioning profile. For more complicated use cases like an external build pipeline, it is better to create the profile manually or to let the pipeline tool handle everything for you (fastlane). Let's take a look at each alternative.

Automatic

If you let Xcode handle the provisioning profile automatically, it can be found on your Mac. Go to the folder:

~/Library/MobileDevice/Provisioning Profiles/

I have three provisioning profiles there. To find out which one to use, I deactivate and reactivate automatic code signing in Xcode.

Uncheck and check the option again. Xcode recreates the provisioning profile, and the correct one is the newly created file.
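To double-check which file is which, the profiles can be decoded on the command line; a sketch using the macOS security tool (the profile file name is an example):

```shell
cd ~/Library/MobileDevice/Provisioning\ Profiles/

# Newest file first - after re-enabling automatic signing, the top entry
# should be the freshly created profile
ls -lt

# Decode a profile to see its Name, AppIDName and expiration date
security cms -D -i example.mobileprovision | head -n 40
```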

Manual

To create the provisioning profile manually, log in to Apple Developer Center. From the initial page to final profile it’s just 8 steps. You create a provisioning profile for an app and associate a distribution certificate to it. Only certificates assigned to the profile can be used to sign the app. Therefore, you need to create a new provisioning profile in case you add or change a distribution certificate.

  1. Go to Certificates, IDs & Profiles.
  2. Go to section Provisioning Profiles
  3. Create a new profile.
  4. Select type

Select App Store, as the profile will be used to publish the app to the Apple App Store / Connect.

  5. Select the app.

Select the App ID you want this provisioning profile for. This is the bundle ID used in Xcode (namespace). The profile will only be valid for apps using that App ID.

  6. Select the developer certificates

The certificates added here can be used together with the profile. If your distribution certificate is not listed, you cannot sign and publish the app using the profile.

  7. Name the profile

Give a unique name to the provisioning profile.

  8. Download

The profile is now generated and can be downloaded.

The new provisioning profile is listed in the Apple Developer Center.


Create App-Specific password

Log on to your Apple ID account. On the main screen, you can find a section for Security (Sicherheit in German).

Click on Create Password (Passwort erstellen) to create an app-specific password. Give the password a unique name; you may consider using the name of the app that is going to use it.

Give a unique password. This is the password the app will use for authentication.

That’s it, now you have a password that an app can use to log in to your account.


Lossless audio with Odroid C2 and Libreelec

For several years I have been running Kodi on a Raspberry Pi. It started with OpenELEC, followed by LibreELEC, on a Raspberry Pi 1, 2 and finally 3. Every time I upgraded the Raspberry Pi that runs my home server, I took the replaced Raspberry to run Kodi with LibreELEC. To be able to watch MPEG-2 from DVDs, I bought the license from the Raspberry Pi Foundation. Over the years I switched from DVD to Blu-ray, and with that the quality of the picture and sound changed.

The sound formats you get on Blu-ray made me switch and replace the Raspberry Pi with an Odroid C2. Depending on the Blu-ray movie, you get DTS, TrueHD and Atmos. To be able to listen to DTS or Atmos, you need an audio receiver (AVR) supporting the format. Kodi can pass the audio channels through to your receiver; decoding the bit stream is then a task of the AVR. If the received audio signal is valid, the AVR will show the correct audio format (DTS, Atmos), otherwise PCM. PCM means: it did not work, information is missing and the AVR is not able to understand the received audio format.

The sound is transported together with the video signal through HDMI. The Raspberry Pi supports HDMI rev 1.3, which is just not enough for transporting high-quality audio with several channels. Because of this limitation, not all channels are transmitted, and the audio receiver shows PCM instead of DTS or Atmos. The Odroid C2 offers HDMI rev 2.0, meaning you get 4K at 60 Hz and enough bandwidth for high-quality audio. It comes with more RAM and faster LAN too, so streaming and user experience are better.

The main plus point is that its HDMI can pass through high-quality audio. Kodi playing a track with 7.1.2 Atmos? Information on whether it will actually work is not easy to find; some posts say yes, others no. It seems that it wasn't working a few years back, but today it is. For testing, Kodi provides a library with sample files, from which you can download official Atmos content. Another site with many samples is The Digital Theater.

I’ll use the conductor sample: TrueHD 7.1 Atmos.

Configuration

The screenshots are in German, but you should be able to find the corresponding settings in Kodi.

  • Go to Settings > System > Audio
  • Set audio output to HDMI and 7.1 channels
  • Allow pass-through
  • Activate the codecs your AVR supports. Mine supports AC3, E-AC3 (Atmos), DTS, TrueHD.

Test

Start the conductor sample for testing the Atmos sound.

Soundbar shows correctly that Dolby Atmos sound is received.

Playing a DTS sample, soundbar shows correctly that DTS sound is received.


Odroid C2 running LibreELEC and Kodi

Recently I switched from LibreELEC on a Raspberry Pi to LibreELEC on an Odroid C2. In this blog I'll share more information on the overall setup, configuration and installation.

Setup

The setup is centered around the soundbar. My soundbar is my AVR, supporting DTS and Atmos. The soundbar is connected to the TV via HDMI with ARC. This allows me to watch e.g. Amazon Prime as well as movies from Kodi, with the sound served from the soundbar. Thanks to CEC, my soundbar is automatically powered on and off together with the TV, and all devices are controlled by one remote control.

Layout

The overall layout consists of three components: TV, soundbar, Odroid.

  • Kodi sends audio to the DTS / Atmos soundbar via HDMI and pass through.
  • The soundbar plays the sound and forwards the video to the TV.
  • TV and soundbar are connected using HDMI ARC.
  • TV remote control for soundbar volume and Kodi navigation.

Installation

The Odroid C2 comes with an SD card slot, and I reused the SD card from the Raspberry Pi. The Odroid C2 can also work with an eMMC card, but the SD card reader in my laptop wasn't able to recognize that card without errors.

Installation of LibreELEC was done using the official installer and the image for Odroid available on the project page.

In real life, the connection to the soundbar is simple: power and HDMI cable, nothing more.

For internet connection, a LAN cable is needed. On the right, the soundbar that acts as audio receiver with DTS and Atmos support.

After connecting power, Libreelec starts and after a few minutes, Kodi is ready.

Internet via LAN is working, 2GB memory is available and Kodi 9.0.1 is running.


How to add a new disk to RAID5

I have a RAID5 consisting of three 10TB HDDs. This RAID5 has a total capacity of 20 TB.

I bought a new 10 TB HDD that I want to use to extend the RAID5: 4 HDDs with a total capacity of 30 TB. The file system on md0 is ext4. Currently, the RAID5 disks are sdc1, sdf1 and sde1. The additional disk is sdd1.
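The capacity numbers follow from how RAID5 works: one disk's worth of space is used for parity, so the usable capacity is (number of disks - 1) × disk size. As shell arithmetic:

```shell
disks=4      # disks in the array after the extension
size_tb=10   # capacity per disk in TB

# RAID5 usable capacity: one disk is "lost" to parity
echo $(( (disks - 1) * size_tb ))   # prints 30
```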

cat /proc/mdstat

The RAID5 is formatted with ext4 and available as md0.

mount

Steps

  1. Prepare the new disk
  2. Add the disk to the RAID
  3. Grow the RAID
  4. Extend the ext4 file system

Prepare new disk

First start with the preparation of the new disk. The disk is /dev/sdd and needs to have a partition. I use parted for this. First, create a label of type gpt.

parted -s -a optimal /dev/sdd mklabel gpt

Next, create the partition using parted. This time, I am using the interactive interface.

parted /dev/sdd
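Inside the parted prompt, the partition can be created and flagged for RAID use; a sketch of the dialog (the exact prompts may differ between parted versions):

```shell
parted /dev/sdd
# (parted) mkpart primary 0% 100%     <- one partition spanning the whole disk
# (parted) set 1 raid on              <- mark partition 1 as a Linux RAID member
# (parted) print                      <- verify the result
# (parted) quit
```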

Add disk to RAID

The RAID is a Linux software RAID, therefore mdadm is used to control it. To add a new disk, the --add option is used, and the RAID device and the new disk are passed as parameters.

mdadm --add /dev/md0 /dev/sdd1

The result of the operation can be seen in mdstat.

cat /proc/mdstat

The new disk is added as a spare device. The (S) behind sdd1 means spare device. In case a device fails, the spare device takes over automatically and a RAID rebuild is triggered. This gives me less trouble in case a device fails, as I won't have to do anything, but it won't give me more space: the RAID5 is still at 20 TB.

Grow RAID

To make the RAID5 aware of the new disk and have it used for data storage, the RAID must be told about the new HDD using the grow command.

mdadm --grow --raid-devices=4 /dev/md0

The command informs the RAID that there are now 4 HDDs to be used instead of 3. This triggers a RAID rebuild, as the data and parity must be redistributed across the HDDs.

This process will take some time. To learn how to increase the speed of the sync, see my other blog about this topic.
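The progress of the rebuild can be watched while it runs, for example:

```shell
# Refresh the sync progress every 60 seconds
watch -n 60 cat /proc/mdstat

# More detail, including the reshape status and the list of member disks
mdadm --detail /dev/md0
```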

The RAID5 now consists of 4 HDDs, all working [UUUU]. The size of the RAID is still 20 TB: md0 has a capacity of 30 TB, but the ext4 file system is still configured to use only 20 TB.

Resize ext4 filesystem

To be able to use the 30TB available on the RAID5, you need to resize the file system. First, run an integrity check.

e2fsck -f /dev/md0

After the e2fsck ended without errors, the file system can be extended. This is done by using the tool resize2fs.

resize2fs /dev/md0

After resize2fs completes (this can take a while), the available size is now 30 TB:

mount /dev/md0 /mnt/md0/
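As a final check that the new space is really usable, assuming the array is mounted at /mnt/md0:

```shell
# The file system size should now show roughly 30 TB
df -h /mnt/md0
```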


Monitor disk speed in Linux

Running a server allows you to do a lot of stuff remotely. Copying files is one of those tasks you can do from anywhere in the world while logged on via SSH. For this task it is good to know the read/write speed to get an idea whether it's working as expected. When sitting in front of your computer, you can see if an HDD is working; in Windows you see a MB/s indication, but in Linux? Not all copy commands show you the transfer rate by default, and some disk-intensive tasks won't show it at all (RAID sync).

To monitor disk activities in Linux, several tools are available. One is iostat.

Installation

To install iostat on Debian, you must install the package sysstat.

apt-get install sysstat

Execute

To run iostat, just enter iostat in the shell.

iostat

The output lists the captured read / write speed of the available devices. To get a continuous output of the disk activities, run iostat -y 1. This updates the output every second until you end the program.

iostat -y 1

Several options are available to control the output. To get the disk read / write in MB instead of kB, add the -m flag.

iostat -y 1 -m
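For per-device utilization on top of the raw throughput, the extended statistics are useful:

```shell
# -x adds extended statistics such as %util (how busy each device is),
# combined with the per-second, MB-based output from above
iostat -x -y 1 -m
```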

Using iostat you can see the throughput of the disks, even when you are running "hidden" tasks like a RAID sync or a copy process in another session (screen).


Increase RAID sync rate

Scenario

  • The HDDs are in an external USB case.
  • RAID5 with 3 HDD (10TB)
  • Software RAID5 with mdadm and Debian Linux

Adding a new disk

When you add a new HDD to an existing RAID, a sync is started. In my case I added a 10 TB disk to a RAID5. The sync started, and the estimated time was in the range of days, listed as finish=5384min.

This number goes up and down a little, but the overall result is that the sync will need days. Checking the status again after a while, it still showed days: finish=3437min.

The main problem here is the rate at which mdadm can sync the data. The value is between 30000K and 43000K. That's not much given the size of the RAID. There are several tips available on the internet; what helped me was setting the stripe_cache_size.

STRIPE_CACHE_SIZE

You set the size of stripe_cache_size for each RAID device (mdX). In case your RAID is md0:

echo 32768 > /sys/block/md0/md/stripe_cache_size
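To see what the change does, the value can be read back before and after; a sketch, with the memory cost as an assumption based on the array layout above:

```shell
# Check the value currently in use (typically 256 by default)
cat /sys/block/md0/md/stripe_cache_size

# Note the memory cost: value x 4 KB page x number of devices,
# e.g. 32768 x 4 KB x 4 disks = 512 MB of RAM used for the cache
```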

Result

The speed increased to 100000K/sec. That's close to 3x faster than before, and the estimated time went down drastically.
