Elastic APM forbidden request: endpoint is disabled

I am currently going through a UI5 app of mine that I want to enhance so I can use APM for performance monitoring. While I have done this several times before, I always run into the same problem:

  • I install a new version of ELK and APM on my laptop.
  • I add the necessary NPM files for the backend and it works.
  • I add RUM to the UI5 app and it won’t work.

As I have run through the same scenario before, I know why it is not working. In this blog I'll share the steps needed to let an app with RUM send data to the APM server.

Error message

The response I get from APM is that the endpoint is disabled.

{
"error": "forbidden request: endpoint is disabled"
}

Browser

APM server

2019-01-11T01:01:22.507+0200 ERROR [request] beater/common_handler.go:299 error handling request {"request_id": "7be8514c-e929-4ec1-af1a-0dc037743302", "method": "GET", "URL": "/intake/v2/rum/events", "content_length": 0, "remote_address": "127.0.0.1", "user-agent": "Mozilla/5.0", "response_code": 403, "error": {"error":"forbidden request: endpoint is disabled"}}

Solution

The APM configuration page in Kibana contains all the necessary information: you have to enable RUM support in the APM server.

Go to the APM server directory and edit the configuration file apm-server.yml.

vim apm-server.yml

Enable RUM

To enable it, set enabled: true. The documentation contains some examples that you can use or adapt to your needs. The default settings are OK, so you only have to activate RUM.

rum:
  # To enable real user monitoring (RUM) support set this to true.
  enabled: true

Be careful to remove the # before both rum and enabled. Only with both lines uncommented does enabled end up as a child of rum. It's a YAML configuration file, so the hierarchy matters.

Save the file and restart the APM server.
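A quick way to verify the change is to call the RUM intake endpoint directly (assuming the APM server listens on the default port 8200). The forbidden request error shown above should no longer appear.

curl -i http://localhost:8200/intake/v2/rum/events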

Frontend app (UI5) configuration

Add the RUM JavaScript file and set the serviceName parameter.

<script src="elastic-apm-rum.umd.min.js" crossorigin></script>
<script>
  elasticApm.init({
    serviceName: 'ui',
    serverUrl: 'http://localhost:8080/proxy/http/localhost:8200',
  })
</script>

Result

When calling the UI5 app, RUM is loaded and the AJAX calls to the events endpoint of the APM server now go through. RUM is working.

Browser

Kibana

The request is shown as Unknown, but that's a different problem.


How to download your iOS distribution certificate

To be able to sign your app and let an external build tool like Microsoft AppCenter upload it to iTunes Connect, you need to provide two files:

  • Certificate: iOS Distribution
  • Provisioning Profile: App Store

Microsoft provides technical documentation on how to get the code signing certificates and how to upload them to your build pipeline. I’ll try to add more explanation and screenshots to make it easier to get both files. This blog is for the iOS distribution certificate.

Distribution certificate

I need the distribution certificate that matches the provided provisioning profile. The provisioning profile contains a list of “linked” distribution certificates. If yours is not in the list, you cannot use your certificate to sign the app.

Get certificate

Log on to the Apple Developer Center and select Certificates, IDs & Profiles from the left menu.

I have several (3) certificates available.

Which one is it? When I created the provisioning profile, I added a distribution certificate. I only have one, so this is the certificate I need.

To be able to use the distribution certificate in an external tool like Microsoft AppCenter, I have to convert the certificate into a p12 file. To do this, you import the certificate into the macOS Keychain, export the certificate together with its private key, and save the result as a p12 file.

1. Download the certificate

2. Check file

The downloaded cer file is named ios_distribution.cer. To see whether this certificate is for distribution, just read its content using more. It must contain the line iPhone Distribution.

more ios_distribution.cer
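If the output of more is hard to read, the certificate details can also be checked with OpenSSL (the .cer file downloaded from Apple is DER encoded); the subject should mention iPhone Distribution.

openssl x509 -inform der -in ios_distribution.cer -noout -subject -dates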

3. Import into keychain

Open the certificate in the macOS Keychain (Keychain Access).

I also have the private key for that certificate.

4. Export

Select the certificate and the private key and export both. Save the file and provide a strong passphrase.

Now I have my personal distribution certificate (Zertifikate.p12) and provisioning profile (.mobileprovision).
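As a quick sanity check before uploading, OpenSSL can list the content of the exported p12 file (it will ask for the passphrase chosen during export):

openssl pkcs12 -info -in Zertifikate.p12 -noout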


How to get your iOS App provisioning profile

To be able to sign your app and let an external build tool like Microsoft AppCenter upload it to iTunes Connect, you need to provide two files:

  • Certificate: iOS Distribution
  • Provisioning Profile: App Store

Microsoft provides technical documentation on how to get the code signing certificates and how to upload them to your build pipeline. I’ll try to add more explanation and screenshots to make it easier to get both files. This blog is for the provisioning profile.

Provisioning Profile

There are two ways to get the provisioning profile:

  • XCode automatically generates one
  • You create it manually

If a single developer does everything from coding to uploading to the App Store on their Mac, it's a good idea to let Xcode handle the provisioning profile. For more complicated use cases like an external build pipeline, it is better to create the profile manually or to let the pipeline tool (e.g. fastlane) handle everything for you. Let's take a look at each alternative.

Automatic

If you let Xcode handle the provisioning profile automatically, it can be found on your Mac in the folder:

~/Library/MobileDevice/Provisioning Profiles/

I have three provisioning profiles there. To know which one to use, I deactivate and activate the automatic code signing in Xcode.

Uncheck the option and check it again. Xcode recreates the provisioning profile, and the correct one is the newly created file.
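If you want to double-check from the command line, listing the folder sorted by modification time shows the newest profile first:

ls -lt ~/Library/MobileDevice/Provisioning\ Profiles/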

Manual

To create the provisioning profile manually, log in to the Apple Developer Center. From the initial page to the final profile it takes just 8 steps. You create a provisioning profile for an app and associate a distribution certificate with it. Only certificates assigned to the profile can be used to sign the app. Therefore, you need to create a new provisioning profile whenever you add or change a distribution certificate.

  1. Go to Certificates, IDs & Profiles.
  2. Go to section Provisioning Profiles
  3. Create a new profile.
  4. Select type

Select App Store, as the profile will be used to publish the app to the Apple App Store / Connect.

  5. Select the app.

Select the App ID you want this provisioning profile for. This is the bundle ID used in Xcode (the namespace). The profile will only be valid for apps using that App ID.

  6. Select the developer certificates

The certificates added here can be used together with the profile. If your distribution certificate is not listed, you cannot sign and publish the app using the profile.

  7. Name profile

Give a unique name to the provisioning profile.

  8. Download

The profile is now generated and can be downloaded.

The new provisioning profile is listed in the Apple Developer Center.
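To verify that the downloaded profile contains the expected App ID and distribution certificate, it can be decoded on a Mac with the security tool (the file name below is just an example):

security cms -D -i AppStore_Profile.mobileprovision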


Create App-Specific password

Log on to your Apple ID account. On the main screen, you can find a section for Security (Sicherheit in German).

Click on Create Password (Passwort erstellen) to create an app-specific password. Give a unique name for the password. You may consider using the name of the app that is going to use the password.

Give a unique password. This is the password the app will use for authentication.

That’s it, now you have a password that an app can use to log in to your account.


Lossless audio with Odroid C2 and Libreelec

For several years I have been running Kodi on a Raspberry Pi. It started with OpenELEC, followed by LibreELEC, using a Raspberry Pi 1, 2 and finally 3. Every time I upgraded the Raspberry Pi that runs my home server, I took the replaced Raspberry Pi to run Kodi with LibreELEC. To be able to watch MPEG-2 from DVDs, I bought the license from the Raspberry Pi Foundation. Over the years I switched from DVD to Blu-ray, and with that the quality of picture and sound changed.

The sound formats you get on Blu-ray made me switch and replace the Raspberry Pi with an Odroid C2. Depending on the Blu-ray movie, you get DTS, TrueHD and Atmos. To be able to listen to DTS or Atmos, you need an audio receiver that supports the format. Kodi can pass the audio channels through to your receiver (AVR). Decoding the bitstream is then the task of the AVR. If the received audio signal is valid, the AVR will show the correct audio format (DTS, Atmos), otherwise PCM. PCM means: it did not work, information is missing and the AVR is not able to understand the received audio format.

The sound is transported together with the video signal over HDMI. The Raspberry Pi supports HDMI rev 1.3, which is just not enough for transporting high-quality audio with several channels. Because of this limitation, not all channels are transmitted, and the audio received is PCM, no longer DTS or Atmos. The Odroid C2 offers HDMI rev 2.0, meaning you get 4K at 60 Hz and enough bandwidth for high-quality audio. It also comes with more RAM and faster LAN, so streaming and the user experience are better.

The main plus point is that its HDMI can pass through high-quality audio. Kodi playing a track with 7.1.2 Atmos? Information on whether it will actually work is not easy to find. Some posts say yes, others no. It seems that it wasn't working a few years back, but today it is. For testing, Kodi provides a library with sample files. From there you can download official Atmos content. Another site with many samples is The Digital Theater.

I’ll use the conductor sample: TrueHD 7.1 Atmos.

Configuration

The screenshots are in German, but you should be able to find the corresponding settings in Kodi.

  • Go to Settings > System > Audio
  • Audio output over HDMI and 7.1 channels.
  • Allow pass through
  • Activate the codecs your AVR supports. Mine supports AC3, E-AC3 (Atmos), DTS, TrueHD.

Test

Start the conductor sample for testing the Atmos sound.

Soundbar shows correctly that Dolby Atmos sound is received.

Playing a DTS sample, soundbar shows correctly that DTS sound is received.


Small Wishlist for SAPPHIRE

SAP's premier event SAPPHIRE is happening next week. Of course SAP will talk about how great they are, how the latest acquisitions add value, the new additions to the excellent portfolio, that customers are doing great thanks to SAP, and so on. It's an event driven by marketing and sales, what else is there to expect?

Personally, I’d like to see some announcements that won’t happen, and won’t be announced at any other SAP event. Nevertheless, here is my personal list of things that I believe could add value to SAP’s overall ecosystem.

Trackable announcements

As with every event, there will be a lot of success stories, product launches and shoulder patting; everybody on stage is either a friend, a longtime friend, or, for companies, it's a very special relationship. And everything is about how great the product is. Why not make it easy to track the success? Of course S/4 is big. What about the other announcements? If something is announced, provide a way to see how well the product is performing. Make transparent what happens to the product after SAPPHIRE is over. Bring the customer back on stage. Let them talk about the last 2, 3 or 5 years. For instance, what happened to the Leonardo solution for analyzing the health of palm trees? Is the intelligent vending machine used in the market? How is the cloud service for tax calculation performing?

Bring back apps

SAP is really good at delivering solutions for core business processes. For the additional problems, SAP tries to offer customers tools that make it possible to create the missing solutions and apps. What a customer gets is a toolset to develop with, not a solution. If you want a mobile app for “standard” functionality today, you have to develop it. For large companies this is not a problem, for smaller ones it can be, and for everyone it means that they are responsible for developing the apps and for supporting them.

Once we had mobile workflow from Sybase / SAP, a packaged app for mobile. Afaria was a leader in MDM. Both were sold together, and the client got a complete mobile solution for one price. Today you can automatically create an offline-enabled app using, for instance, Mobile Cards or the new Mobile Development Kit. Using these, you develop following standard guidelines. Every partner or freelancer can offer the very same app to a customer.

The available toolset is good enough to create apps out of Fiori apps, Mobile Cards or the Mobile Development Kit. Many customers want the apps you can develop with these, but do not like the idea of doing the development themselves and having to deal with all the licensing and support issues. Make it easy to offer the developed app as a product with all licensing included. SAP could provide standard documentation on how to create an app for a given process using the SAP toolset, and let partners either create apps following these guidelines 100% or add additional features. In both cases, the customer can buy what is needed, with full support from partner and SAP, and without having to license all components involved. If the customer wants to change the partner, the app is built following the standard recommendation, so another partner can offer the exact same app.

Redefine SAP portfolio

Trusting partners to deliver the kind of apps that used to be delivered by SAP means rethinking SAP Consulting's positioning and its portfolio. Once SAP Consulting was a powerful organization with competent people helping customers. On the technical side, the good consultants have largely left. From a functional point of view, the situation is much better. It would be nice to see SAP Consulting focusing solely on where it can still offer value to customers: functional consulting. Let SAP's own consultants advise customers on how to improve their business processes, how to do accounting using S/4, retail or supplier management. Let everything that even slightly touches the technical area be handled completely by either a partner or a freelance consultant. Same for support. It's time to stop letting SAP support people act as consultants.

Simplify Cloud

When SAP executives look at it, it must be a dream for them: customers buying HEC, cloud numbers going up, only good feedback. What enters the system and bubbles up is filtered. The lower you go in a HEC project, talking to the people actually working with HEC on a daily basis, the more the situation changes: systems not correctly configured, unavailability, bad support, and so on.

Personally I have had so far: a HANA system that wasn't updated for years, a Gateway system with a language pack not correctly installed, systems unavailable because support restarted them in PRD without informing anyone, missing components like the Web Dispatcher. My personal favorite: 24/7 support from Monday to Friday during Indian business hours.

It's time to announce whether SAP is closing HEC or restarting its offering. Same for SCP. Neo, Cloud Foundry, it's nice to have a choice, but please close one. Announce whether Neo is going to survive or not, and give a final date. Also, separate SCP services by user groups: developers & business. The business side is interested in solutions, not developer services. Give each group its own view of SCP and its services. A possible end state could be to close the developer part of SCP and bring the tools to multi-cloud, aka offer them on AWS, Azure, etc.

Fiori

SAP pushed Design Thinking and provides helpful tools and guidance on how to enable it. Many customers started to use DT thanks to SAP. The initiative slowed down. Development on BUILD and its web site seems to be running at minimum effort. I would like to see SAP investing more into BUILD. Make all UI5 controls available, integrate Bootstrap controls, include controls for iOS, Android and Material Design. Make it a design tool not only for SAP content, but for everything. Make it the design and mockup tool at companies. To speed up adoption of UI5 outside the SAP context, make all UI controls part of OpenUI5. Include better support for non-OData backends.

None of this will be announced at SAPPHIRE. I just wrote it down to be able to look back in a few years and see if some of the ideas were good. There are additional ideas that are more suitable for TechEd, like: make the S/4 architecture ready for Kubernetes. Having a work process running as a pod in K8s makes it easier to scale an SAP system. Allow arbitrary databases as backend for SAP CAPM. Add SAP technologies to Swagger (OpenAPI). Deliver software via Git. A long-time topic of mine; I think I mentioned it around 2011 on SCN. For instance, instead of installing Fiori apps the traditional way, let me select the app in Git, import only this app and also run the included tests. Drastically reduce the RAM footprint of HANA. And many more. Maybe I will write a similar blog for the next SAP TechEd.


Odroid C2 running Libreelec and Kodi

Recently I switched from Libreelec on a Raspberry Pi to Libreelec on an Odroid C2. In this blog I'll share more information on the overall setup, configuration and installation.

Setup

The setup is centered around the soundbar. My soundbar is my AVR, supporting DTS and Atmos. The soundbar is connected to the TV via HDMI with ARC. This allows me to watch e.g. Amazon Prime as well as movies from Kodi, with the sound served by the soundbar. As everything supports CEC, the soundbar gets powered on and off automatically together with the TV, and all devices are controlled by one remote control.

Layout

The overall layout consists of three components: TV, soundbar, Odroid.

  • Kodi sends audio to the DTS / Atmos soundbar via HDMI with passthrough.
  • The soundbar plays the sound and forwards the video to the TV.
  • TV and soundbar are connected using HDMI ARC.
  • TV remote control for soundbar volume and Kodi navigation.

Installation

The Odroid C2 comes with an SD card slot, and I reused the SD card from the Raspberry Pi. The Odroid C2 can also work with an eMMC module, but the SD card reader in my laptop wasn't able to recognize that card without errors.

Installation of Libreelec was done using the official installer and image for the Odroid available on the project page.
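If you prefer the command line over the graphical installer, the image can also be written to the SD card with dd. This is only a sketch: the image file name is an example, /dev/sdX is a placeholder, and dd overwrites the target device without asking, so verify it first with lsblk.

# placeholder image name and device, double-check with lsblk before writing
gunzip LibreELEC-Odroid_C2.arm-9.0.1.img.gz
sudo dd if=LibreELEC-Odroid_C2.arm-9.0.1.img of=/dev/sdX bs=4M status=progress conv=fsync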

In real life, the connection to the soundbar is simple: power and HDMI cable, nothing more.

For the internet connection, a LAN cable is needed. On the right is the soundbar that acts as the audio receiver with DTS and Atmos support.

After connecting power, Libreelec starts and after a few minutes, Kodi is ready.

Internet via LAN is working, 2GB memory is available and Kodi 9.0.1 is running.


How to add a new disk to RAID5

I have a RAID5 consisting of three 10TB HDDs. This RAID5 has a total capacity of 20 TB.

I bought a new 10 TB HDD that I want to use to extend the RAID5: 4 HDDs with a total capacity of 30 TB. The file system on md0 is ext4. Currently, the RAID5 disks are sdc1, sdf1 and sde1. The additional disk is sdd1.

cat /proc/mdstat

The RAID5 is formatted with ext4 and available as md0.

mount

Steps

  1. Prepare new disk
  2. Add disk to RAID
  3. Grow RAID
  4. Extend the ext4 file system.

Prepare new disk

Start with the preparation of the new disk. The disk is /dev/sdd and needs a partition; I use parted for this. First, create a label of type gpt.

parted -s -a optimal /dev/sdd mklabel gpt

Next, create the partition using parted. This time, I am using the interactive interface.

parted /dev/sdd
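Inside the parted prompt, the partition for the RAID can be created like this (a sketch: one partition spanning the whole disk, marked with the raid flag):

(parted) mkpart primary 0% 100%
(parted) set 1 raid on
(parted) print
(parted) quit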

Add disk to RAID

The RAID is a software RAID on Linux, therefore mdadm is used to control it. To add a new disk, the option --add is used, and the RAID device and the new disk are passed as parameters.

mdadm --add /dev/md0 /dev/sdd1

The result of the operation can be seen in mdstat.

cat /proc/mdstat

The new disk is added as a spare device. The (S) behind sdd1 means spare device. If a device fails, the spare device takes over automatically and a RAID rebuild is triggered. This gives me less trouble in case a device fails, as I won't have to do anything, but it won't give me more space. The RAID5 is still at 20 TB.

Grow RAID

To make the RAID5 aware of the new disk and use it for data storage, the RAID must be told to use the new HDD via the grow command.

mdadm --grow --raid-devices=4 /dev/md0

The command informs the RAID that there are now 4 HDDs to be used instead of 3. It will trigger a RAID rebuild, as the data must be redistributed across the HDDs.

This process will take some time. To learn how to increase the speed of the sync, see my other blog about this topic.
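To keep an eye on the rebuild progress, mdstat can be refreshed every few seconds, for example with watch:

watch -n 5 cat /proc/mdstat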

The RAID5 now consists of 4 HDDs, all working [UUUU]. The size of the RAID is still 20 TB: md0 now has a capacity of 30 TB, but the ext4 filesystem is still configured to use only 20 TB.

Resize ext4 filesystem

To be able to use the 30 TB available on the RAID5, you need to resize the file system. First, unmount the filesystem and run an integrity check.
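Assuming the RAID is mounted at /mnt/md0, as in the mount command shown at the end of this post, unmount it first:

umount /mnt/md0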

e2fsck -f /dev/md0

After e2fsck has finished without errors, the file system can be extended. This is done using the tool resize2fs.

resize2fs /dev/md0

After resize2fs completes (which can take a while), mount the filesystem again; the available size is now 30 TB:

mount /dev/md0 /mnt/md0/
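The new size can be checked, for example, with df using the mount point from above:

df -h /mnt/md0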



Monitor disk speed in Linux

Running a server allows you to do a lot of things remotely. Copying files is one of those tasks you can do from anywhere in the world while logged on via SSH. For this task it is good to know the read/write speed to get an idea whether it's working as expected. When sitting in front of your computer, you can see if an HDD is working; in Windows you see a MB/s indication, and in Linux? Not all copy commands show you the transfer rate by default, and some disk-intensive tasks (like a RAID sync) won't show it at all.

To monitor disk activities in Linux, several tools are available. One is iostat.

Installation

To install iostat on Debian, you must install the package sysstat.

apt-get install sysstat

Execute

To run iostat, just enter iostat in the shell.

iostat

The output lists the captured read / write speed of the available devices. To get a continuous output of the disk activities, run iostat -y 1. This updates the output every second until you end the program.

iostat -y 1

Several options are available to control the output. To get the disk read / write in MB instead of kB, add the -m flag.

iostat -y 1 -m
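iostat also accepts a device list and the -x flag for extended statistics such as device utilization (the device names below are just examples, adjust them to your system):

iostat -x -d -m sda sdb 1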

Using iostat you can see the throughput of the disks, even when you are running “hidden” tasks like a RAID sync or a copy process in another session (screen).


Increase RAID sync rate

Scenario

  • The HDDs are in an external USB case.
  • RAID5 with 3 HDD (10TB)
  • Software RAID5 with mdadm and Debian Linux

Adding a new disk

When you add a new HDD to an existing RAID, a sync is started. In my case I added a 10 TB disk to a RAID5. The sync started, and the estimated time was in the range of days. The estimated time is listed as finish=5384 min.

This number goes up and down a little, but the overall result is that the sync will need days. After checking the status again after a while, it still showed days: finish=3437min.

The main problem here is the rate at which mdadm can sync the data. The value is between 30000K and 43000K. That's not much given the size of the RAID. There are several tips available on the internet. What helped me was setting the stripe_cache_size.

STRIPE_CACHE_SIZE

You set stripe_cache_size for each RAID device (mdX). In case your RAID is md0:

echo 32768 > /sys/block/md0/md/stripe_cache_size
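To check that the new value is active and to see the effect on the sync rate, read the value back and look at the speed reported in mdstat (md0 as in the example above):

cat /sys/block/md0/md/stripe_cache_size
cat /proc/mdstat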

Result

The speed increased to 100000K/sec. That's close to 3x faster than before. The remaining time went down drastically.
