Increase EC2 (root) file system size

Some years ago I created a new instance in EC2 with the minimal configuration needed. The disk size of the root device and partition was set to 8 GB. Today I am reaching the limit of that disk and need more space. Having the server in the cloud allows me to “simply” increase the size without having to buy a new HDD.

To increase the size of an EBS volume, you need to execute three tasks:

  1. Take snapshot
  2. Resize volume
  3. Resize file system

The commands to resize the partition and file system are (gp2 volume, ext4 file system, t2 instance):

sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1

Take snapshot

Before starting, create a snapshot of the volume. See my blog on how to do this.

Resize volume

AWS documentation

You can use the EC2 console or the CLI to extend a volume. I’ll use the EC2 console. The volume used as root device for my EC2 instance is an Elastic Block Store (EBS) volume of type gp2. This step is very easy: you inform AWS that you need more storage, and more storage is assigned. You won’t be able to make use of that new storage until the file system is resized.
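
For reference, the same change can be made with the AWS CLI; a minimal sketch, assuming a hypothetical volume ID and the 20 GB target size used below:

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0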

Go to EBS > Volumes

A list of volumes is shown. Find the correct one using the volume ID. The root volume of my instance is 8 GB in size and of type gp2.

To modify the volume, select it and then click on Actions > Modify Volume.

The current configuration of the volume is shown. Last chance to verify you are changing the right volume.

I’ll only modify the size of the volume: from 8 GB to 20 GB.

Confirm the change. Click on Yes.

If AWS was able to assign more storage to your volume, a confirmation message is shown.

The size of the volume is now shown as 20 GB in the volume table.

Resize file system

AWS documentation

Assigning more storage to the volume is one step. To make use of the new disk space, the partition and file system must be resized. To see the available partitions:

sudo file -s /dev/xvd*
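
On my instance the output looked similar to the following (illustrative; UUID shortened, your flags may differ). The relevant information is that xvda1 contains an ext4 file system:

/dev/xvda:  DOS/MBR boot sector
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=... (extents) (large files) (huge files)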

Resize partition

The size of the volume is adjusted, but the partition on the disk must be resized to make use of that space. To see the size of the disk and partition:

lsblk
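
The output will look similar to this (illustrative):

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  20G  0 disk
└─xvda1 202:1    0   8G  0 part /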

The available space is 20G in total, with the partition xvda1 taking only 8G. Increase the size of the partition:

sudo growpart /dev/xvda 1

To check if the partition was resized, run lsblk again. The partition xvda1 should now be 20G large.

lsblk
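
Illustrative output after growing the partition:

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  20G  0 disk
└─xvda1 202:1    0  20G  0 part /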

Resize file system

Resizing the EBS volume and the partition does not resize the file system. The file system still thinks it only has 8 GB available.

df -h
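
The numbers below are illustrative, but the point is that the file system size is still reported as roughly 8 GB:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  7.2G  700M  92% /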

To change this, the file system must be resized. My root file system is EXT4 (see the file -s output above), therefore I can use resize2fs to adjust it.

sudo resize2fs /dev/xvda1

After resize2fs finishes, the file system can use the full 20G of the EBS volume.

df -h
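
Illustrative output after the resize:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  7.2G   12G  38% /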

Create an AWS snapshot from a volume

I am going to do some work on my AWS EC2 instance that hosts my web site https://www.itsfullofstars.de. More precisely: I did the work already and it worked out well, that’s why you can read this blog. Before starting the work, I wanted to have a backup of my data. The data is saved on an EBS volume that is also the root / boot volume of my EC2 instance.

AWS has nice documentation on how to create and manage snapshots. As always with this kind of generic documentation, it contains a lot of information, maybe too much, as all possible cases are covered. To have a simpler reference, I’ll show in this blog how I created a snapshot.

Scenario

  • EC2: Instance with root volume on EBS. OS: Linux
  • Data: Size: 8 GB, type: gp2, SSD
  • Task: Create a snapshot of the root device

Note that it seems you can create a snapshot of a root volume while the instance is running, but AWS states that you should stop the instance first:

“To create a snapshot for an Amazon EBS volume that serves as a root device, you should stop the instance before taking the snapshot.”

Steps

  1. Stop instance
  2. Create snapshot
  3. Start instance

Yes, 3 steps is all it takes to take a snapshot of an EBS volume used as root volume in an EC2 Linux instance.

Stop instance

Go to your EC2 instance and stop it. You can also log on to your instance and issue a stop command there. I am using the AWS console, as there I can do everything without having to switch to another tool.
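
If you prefer the command line, a sketch with the AWS CLI, assuming a hypothetical instance ID; the wait command blocks until the instance is stopped:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0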

Select Stop, not Terminate, and confirm your action. Oh, and do not forget: afterwards your server is not online and its services are not accessible. Plan for some downtime, communicate it, etc.

The instance state switches to stopping, meaning that the server is shutting down. This can take a few seconds.

After the instance is stopped, the state is stopped. Now you can start creating a snapshot of your root volume, as it is no longer being accessed.

Take snapshot

To create a snapshot, follow the steps outlined in the AWS documentation. Go to the Create Snapshot section in the AWS console. In case you have not created any snapshots yet, the list will be empty.

Let’s create a snapshot. To start, click on Create Snapshot. This will open a wizard. I wanted to create a snapshot of a volume, so I selected Volume as type and selected the volume from the dropdown list. It’s a good idea to provide a description.

To start the creation process, click on Create Snapshot.

The snapshot will be created immediately. Be aware: this means that the snapshot request was created, not the actual snapshot. Taking the snapshot / copy of the volume will take some time.

You can see the status of the snapshot creation in the Status column of the snapshot list. It will be in state pending until all data has been transferred from the root volume to the snapshot.

Taking the snapshot can take a few minutes, depending on the size of your EBS volume. Mine was 8 GB and it took around 5–7 minutes to create the snapshot. This was an initial snapshot, not a delta. Only when the status changes to completed has the process ended successfully.
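
For reference, the CLI equivalent; a sketch with hypothetical volume and snapshot IDs. The wait command returns once the snapshot leaves the pending state:

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before resize"
aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0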

Start instance

After the snapshot is taken, you can start the EC2 instance again.

During startup, the status of your EC2 instance will be pending. After it completes, the state is running, and if everything worked without errors, your server and its services are back online.
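
The CLI version, again with a hypothetical instance ID:

aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0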

How to publish an iOS App from Microsoft AppCenter to Apple App Store Connect

In this blog I will detail how you can use Microsoft’s AppCenter to build an iOS app and publish it directly to iTunes Connect. This allows you to decouple the build, test and distribution process from the developers. The developer only has to push the app to the repository (I am using Azure DevOps) and AppCenter takes care of the rest.

The steps to do so are:

  1. Create app project
  2. Configure build
  3. Add signing certificates
  4. Configure distribution to iTunes Connect

Create App Project

Open AppCenter and create a new project.

You can add AppCenter features to your app, but it’s optional. I already have a running app that I just want to build and distribute. The next step is to configure the build.

Build

Select the repository where the source code is hosted. I use Azure DevOps (free tier). Unfortunately, GitLab is not listed, and in the free tier I am using it is not possible to add self-hosted Git repositories.

AppCenter will connect to Azure DevOps via SSO and list the available projects.

This adds the repository to the build configuration. You’ll see the branches and last commit message.

To configure the build, click on the configuration option for the branch. The option only appears when you hover over the branch with your mouse.

AppCenter will scan the project and find the available Xcode settings.

You can configure the Xcode version to be used for the build. This is very useful when you are using external libraries that do not work with newer Xcode versions. For instance, the Fiori libraries included in my project were not released for Xcode 10.2.1 and the newer Swift version that comes with it, so the build exited with an error. Until SAP released an updated version of Fiori for iOS, I had to use Xcode 10.2.

AppCenter offers options to automatically increase the build number or run your XCTests.

Sign build

To be able to send the app to iTunes, you must sign the build using your certificate and provisioning profile. I wrote two blogs on how to get these:

When you have these available, you can start configuring the app signing. You upload the files and provide needed credentials for your private key.

Distribute

The next step is to define where you want to distribute the app. You can send it to the official App Store, to App Store Connect Users for your TestFlight beta testers, or to an internal Company Portal.

I am going to distribute the app to App Store Connect for TestFlight. Select App Store Connect. If you do not yet have an account linked to Apple, you can do this here.

AppCenter connects to App Store Connect and retrieves a list of apps. I only have one app available, making the selection easy. It also means that you have to create the app in App Store Connect first; AppCenter is not able to create the app definition for you.

Select the app and click on Assign.

In case 2FA is enabled for your Apple ID, you will have to provide an app-specific password. I wrote a blog on how to create an app-specific password.

After entering the app-specific password, you get back to the previous screen. Click again on Assign.

Now AppCenter is configured to connect to App Store Connect. Back at the Distribute builds section, you can select App Store Connect Users.

Result

You can now click on Save or start your first build right away.

Run build and distribute to App Store Connect

After the project is created and the build configured, you can start a build. AppCenter will find an available build agent, clone the repository, build, test, sign and distribute the app.

AppCenter

Waiting for a free build agent

Build starting

Distribute

After the build is done, the app is sent to App Store Connect and processed there. Apple will check if the build is OK. This will take some time; the status of the build is Processing.

App Store Connect

When processing is done, you get an email from Apple.

The status of the app in AppCenter and App Store Connect changes and you can distribute the app to your beta testers via TestFlight.

GitLab behind a reverse proxy

I am using GitLab for private projects. GitLab is run using the Docker image provided by GitLab. I can access my instance from outside via a reverse proxy (Apache).

The setup is simple:

  • GitLab Docker container is running on a NUC and listens on port 7080 for HTTP connections
  • NUC is connected via OpenVPN to the server on AWS
  • Apache as a reverse proxy listening on port 443 for HTTPS
  • Apache terminates SSL: incoming requests are HTTPS, but forwarded as HTTP to GitLab
  • Apache forwards incoming requests to GitLab on Docker

Standard setup of GitLab in Docker with Apache as reverse proxy will give access to GitLab without problems. Start GitLab container, configure Apache, done. You can access GitLab from the internet, create repositories, clone, push, etc.
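
For reference, a minimal sketch of how such a container can be started. The port mapping matches my setup (7080 for HTTP, 7443 for HTTPS); container name, image tag and volume paths are assumptions:

docker run -d --name gitlab \
  -p 7080:80 -p 7443:443 \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/logs:/var/log/gitlab \
  -v /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest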

While the setup works out of the box, you need to carry out additional configuration to really make GitLab work with SSL termination. What is not working correctly:

  • The external URL is not configured, so the URL in the repository clone dialog is not using HTTPS.
  • You cannot upload attachments in the Wiki
  • You cannot add pictures in the Wiki via copy & paste from the clipboard
  • Uploading files / images may work in the issues dialog, but not in the wiki, as the wiki uses a different upload service.

Attaching an image from clipboard fails.

Problem

My external URL is https://gitlab.itsfullofstars.de, so the obvious step is to set this value as external URL in gitlab.rb. You configure GitLab by setting the parameters in the file gitlab.rb and then reconfiguring GitLab.

## GitLab URL
##! URL on which GitLab will be reachable.
external_url 'https://gitlab.itsfullofstars.de'

Run reconfigure to enable the configuration.

gitlab-ctl reconfigure

Accessing gitlab.itsfullofstars.de afterwards results in a Service Unavailable error:

Reconfiguring sets all parameters in all involved components of GitLab based on the values in gitlab.rb. You can see the new value by looking at the automatically generated configuration file for the internal web server.

## GitLab settings
gitlab:
## Web server settings (note: host is the FQDN, do not include http://)
  host: gitlab.itsfullofstars.de
  port: 443
  https: true

The problem is: GitLab thinks it is running standalone, with direct access to the internet. There is no specific parameter to indicate that the requests are coming from a reverse proxy with SSL termination. Setting only the external URL in gitlab.rb will result in an erroneous configuration:

  • SSL for internal GitLab web server (nginx) is enabled
  • Nginx is not listening on port 80, only on 443
  • My Apache reverse proxy is configured to connect to nginx on port 80. Hence the Service Unavailable error.

Port 80 is no longer working. Accessing GitLab directly via 192.168.x.x:7443 on the HTTPS port (Docker maps 7443 to 443):

Access will work. GitLab tries to get a new TLS certificate during the reconfiguration process, but fails; hence the self-signed certificate.

Attaching an image won’t work

Because of the external_url value, GitLab will redirect to gitlab.itsfullofstars.de. As the reverse proxy is not able to connect, it’s a 503 error.

Configuring the external GitLab URL this way results in:

  • An incorrect HTTPS configuration due to a wrong certificate
  • A required adjustment of the Apache reverse proxy: no more SSL termination

I do not want to take care of managing GitLab’s internal TLS certificate. I want to access it via HTTP only and use Apache for SSL termination.

Solution

The solution is to configure the external URL and let the internal nginx run on port 80 without HTTPS.

gitlab.rb

Configure a value for external_url, let nginx listen on port 80 without HTTPS, and reconfigure:

vim config/gitlab.rb
external_url 'https://gitlab.itsfullofstars.de'
nginx['listen_port'] = 80
nginx['listen_https'] = false
gitlab-ctl reconfigure

GitLab HTTP server

Check the configuration of the internal GitLab web server. The host should be gitlab.itsfullofstars.de, the port 80, and https false.

more data/gitlab-rails/etc/gitlab.yml
## GitLab settings

gitlab:
## Web server settings (note: host is the FQDN, do not include http://)
  host: gitlab.itsfullofstars.de
  port: 80
  https: false

Optional: Restart

Running reconfigure restarts the services, but if you want to be sure, restart GitLab:

gitlab-ctl restart

Apache configuration

My Apache configuration. Maybe not all parameters are needed, but it works.

<VirtualHost *:443>
  ServerName gitlab.itsfullofstars.de
  ProxyPreserveHost On
  ProxyRequests Off
  SSLProxyEngine on
  SSLEngine on
  SSLHonorCipherOrder on
  <Location />
    RequestHeader unset Accept-Encoding
    RequestHeader set Host "gitlab.itsfullofstars.de"
    RequestHeader add X-Forwarded-Ssl on
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass http://nuc:7080/
    ProxyPassReverse http://nuc:7080/
    Order allow,deny
    Allow from all
  </Location>
</VirtualHost>

Result

After executing the above steps, your configuration should behave as follows:

An external request now targets the server gitlab.itsfullofstars.de. Apache does SSL termination, and nginx accepts the forwarded HTTP request without blocking it or trying to redirect to HTTPS.

Attaching an image to the GitLab wiki by pasting it from the clipboard now works.


Links

Some resources I found while solving the issue for myself.

https://gitlab.com/gitlab-org/gitlab-ce/issues/27583

https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl

https://gitlab.com/gitlab-org/gitlab-ce/issues/52243

Elastic APM forbidden request: endpoint is disabled

I am currently going through a UI5 app of mine that I want to enhance so I can use APM for performance monitoring. While I have done this several times before, I always run into the same problem:

  • I install a new version of ELK and APM on my laptop.
  • I add the necessary NPM files for the backend and it works.
  • I add RUM to the UI5 app and it won’t work.

As I have been through the same scenario before, I know why it is not working. In this blog I’ll share the steps needed to let an app with RUM send data to the APM server.

Error message

The response I get from APM is that the endpoint is disabled.

{
"error": "forbidden request: endpoint is disabled"
}

Browser

APM server

2019-01-11T01:01:22.507+0200 ERROR [request] beater/common_handler.go:299 error handling request {"request_id": "7be8514c-e929-4ec1-af1a-0dc037743302", "method": "GET", "URL": "/intake/v2/rum/events", "content_length": 0, "remote_address": "127.0.0.1", "user-agent": "Mozilla/5.0", "response_code": 403, "error": {"error":"forbidden request: endpoint is disabled"}}

Solution

APM configuration in Kibana contains all the information necessary: you have to enable RUM support in APM.

Go to the APM server directory and edit the configuration file apm-server.yml:

vim apm-server.yml

Enable RUM

To enable RUM, set enabled: true. The documentation contains some examples that you can use or adapt for your needs. The default settings are OK, so you only have to activate RUM.

rum:
  # To enable real user monitoring (RUM) support set this to true.
  enabled: true

Be careful to remove the # before both rum and enabled. Removing it only before enabled does not make enabled a child of rum. It’s a YAML configuration file: hierarchy matters.

Save and start APM server.
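
A quick way to verify the change (my own check, assuming the APM server listens on localhost:8200 as in the log above): request the RUM intake endpoint directly. While RUM is disabled, it answers with the 403 error shown above; after enabling and restarting, that error is gone:

curl -i http://localhost:8200/intake/v2/rum/events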

Frontend app (UI5) configuration

Add the RUM JavaScript file and set the serviceName parameter.

<script src="elastic-apm-rum.umd.min.js" crossorigin></script>
<script>
  elasticApm.init({
    serviceName: 'ui',
    serverUrl: 'http://localhost:8080/proxy/http/localhost:8200',
  })
</script>

Result

Calling the UI5 app, RUM is loaded and the AJAX calls to the events endpoint of the APM server now pass. RUM is working.

Browser

Kibana

The request is shown as Unknown, but that’s a different problem.

How to download your iOS distribution certificate

To be able to sign your app and let an external build tool like Microsoft AppCenter upload it to iTunes Connect, you need to provide two files:

  • Certificate: iOS Distribution
  • Provisioning Profile: App Store

Microsoft provides technical documentation on how to get the code signing certificates and how to upload them to your build pipeline. I’ll try to add more explanation and screenshots to make it easier to get both files. This blog is for the iOS distribution certificate.

Distribution certificate

I need the distribution certificate that matches the provisioning profile. The provisioning profile contains a list of “linked” distribution certificates. If yours is not in the list, you cannot use your certificate to sign the app.

Get certificate

Log on to the Apple Developer Center. Select Certificates, IDs & Profiles from the left menu.

I have several (3) certificates available.

Which one is it? When I created the provisioning profile, I added a distribution certificate. I only have one, so this is the certificate I need.

To be able to use the distribution certificate in an external tool like Microsoft AppCenter, I have to convert the certificate into a p12 file. To convert it, you import the certificate into the Mac’s Keychain, export the certificate together with its private key, and save it as p12.

1. Download the certificate

2. Check file

The downloaded cer file is named ios_distribution.cer. To see if this certificate is for distribution, just read the content using more. It must contain the line iPhone Distribution.

more ios_distribution.cer
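
Alternatively (my addition, not part of the Microsoft docs), openssl can decode the DER-encoded certificate and print the subject, which should contain iPhone Distribution:

openssl x509 -inform der -in ios_distribution.cer -noout -subject -dates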

3. Import into keychain

Open the certificate in the Mac Keychain.

I also have the private key for that certificate.

4. Export

Select the certificate and private key and export both. Save the file and provide a strong passphrase.

Now I have my personal distribution certificate (Zertifikate.p12) and provisioning profile (.mobileprovision).
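
To double-check that the exported p12 file really contains both the certificate and the private key, a quick openssl check helps (it prompts for the passphrase; file name as exported above):

openssl pkcs12 -info -in Zertifikate.p12 -noout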

How to get your iOS App provisioning profile

To be able to sign your app and let an external build tool like Microsoft AppCenter upload it to iTunes Connect, you need to provide two files:

  • Certificate: iOS Distribution
  • Provisioning Profile: App Store

Microsoft provides technical documentation on how to get the code signing certificates and how to upload them to your build pipeline. I’ll try to add more explanation and screenshots to make it easier to get both files. This blog is for the provisioning profile.

Provisioning Profile

There are two ways to get the provisioning profile:

  • XCode automatically generates one
  • You create it manually

In case a single developer does everything from coding to uploading to the App Store on the Mac, it’s a good idea to let Xcode handle the provisioning profile. For more complicated use cases like an external build pipeline, creating the profile manually is better, or you let the pipeline tool do everything for you (fastlane). Let’s take a look at each alternative.

Automatic

In case you let Xcode handle the provisioning profile automatically, it can be found on your Mac. Go to the folder:

~/Library/MobileDevice/Provisioning Profiles/

I have three provisioning profiles there. To know which one to use, I deactivate and activate the automatic code signing in Xcode.

Uncheck and check the option again. Xcode recreates the provisioning profile, and the correct one is the newly created file.
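
To identify the newest profile and look inside it, the following sketch can help; the file name is a placeholder. security cms decodes the profile into a readable plist that shows the profile name and App ID:

ls -lt ~/Library/MobileDevice/Provisioning\ Profiles/
security cms -D -i ~/Library/MobileDevice/Provisioning\ Profiles/<profile>.mobileprovision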

Manual

To create the provisioning profile manually, log in to the Apple Developer Center. From the initial page to the final profile it’s just 8 steps. You create a provisioning profile for an app and associate a distribution certificate with it. Only certificates assigned to the profile can be used to sign the app. Therefore, you need to create a new provisioning profile in case you add or change a distribution certificate.

  1. Go to Certificates, IDs & Profiles.
  2. Go to section Provisioning Profiles
  3. Create a new profile.
  4. Select type

Select App Store, as the profile will be used to publish the app to the Apple App Store / Connect.

  5. Select the app.

Select the App ID you want this provisioning profile for. This is the bundle ID used in Xcode (namespace). The profile will only be valid for apps using that App ID.

  6. Select the developer certificates

The certificates added here can be used together with the profile. If your distribution certificate is not listed, you cannot sign and publish the app using the profile.

  7. Name profile

Give a unique name to the provisioning profile.

  8. Download

The profile is now generated and can be downloaded.

The new provisioning profile is listed in the Apple Developer Center.

Create an App-Specific Password

Log on to your Apple ID account. On the main screen, you can find a section for Security (Sicherheit in German).

Click on Create Password (Passwort erstellen) to create an app-specific password. Give a unique name for the password. You may consider using the name of the app that is going to use the password.

You then get a unique password. This is the password the app will use for authentication.

That’s it, now you have a password that an app can use to log in to your account.

Lossless audio with Odroid C2 and LibreELEC

For several years I have been running Kodi on a Raspberry Pi. It started with OpenELEC, followed by LibreELEC, using a Raspberry Pi 1, 2 and finally 3. Every time I upgraded the Raspberry Pi that runs my home server, I took the replaced Raspberry Pi to run Kodi with LibreELEC. To be able to watch MPEG-2 from DVDs, I bought the license from the Raspberry Pi Foundation. Over the years I switched from DVD to Blu-ray, and with that the quality of the picture and sound changed.

The sound formats you get on Blu-ray made me replace the Raspberry Pi with an Odroid C2. Depending on the Blu-ray movie, you get DTS, TrueHD and Atmos. To be able to listen to DTS or Atmos, you need an audio receiver (AVR) supporting the format. Kodi can pass the audio channels through to your receiver; decoding the bit stream is then a task of the AVR. In case the audio signal received is valid, the AVR will show the correct audio format (DTS, Atmos), otherwise PCM. PCM means it did not work: information is missing and the AVR is not able to understand the received audio format.

The sound is transported together with the video signal through HDMI. The Raspberry Pi supports HDMI rev 1.3, which is just not enough for transporting high-quality audio with several channels. Because of this limitation, not all channels are transmitted, and the audio received is PCM, not DTS or Atmos. The Odroid C2 offers HDMI rev 2.0, meaning you get 4K at 60 Hz and enough bandwidth to support high-quality audio. It comes with more RAM and faster LAN too, so the streaming and user experience is better.

The main plus point is that its HDMI can pass through high-quality audio. But will Kodi play a track with 7.1.2 Atmos? Information on whether it actually works is not easy to find. Some posts say yes, others no. It seems that it wasn’t working a few years back, but today it is. For testing, Kodi provides a library with sample files. From there you can download official Atmos content. Another site with many samples is The Digital Theater.

I’ll use the conductor sample: TrueHD 7.1 Atmos.

Configuration

The screenshots are in German, but you should be able to find the corresponding settings in Kodi.

  • Go to Settings > System > Audio
  • Audio output over HDMI and 7.1 channels.
  • Allow pass through
  • Activate codecs your AVR supports. Mine supports AC3, E-AC3 (Atmos), DTS, True-HD.

Test

Start the conductor sample for testing the Atmos sound.

Soundbar shows correctly that Dolby Atmos sound is received.

Playing a DTS sample, soundbar shows correctly that DTS sound is received.

Small Wishlist for SAPPHIRE

SAP’s premier event SAPPHIRE is happening next week. Of course SAP will talk about how great they are, how the latest acquisitions add value, the new additions to the excellent portfolio, that customers are doing great thanks to SAP, and so on. It’s an event driven by marketing and sales, what else to expect?

Personally, I’d like to see some announcements that won’t happen, and won’t be announced at any other SAP event either. Nevertheless, here is my personal list of things that I believe could add value to SAP’s overall ecosystem.

Trackable announcements

As with every event, there will be a lot of success stories, product launches and shoulder patting, and everybody on stage is either a friend, a longtime friend, or, for companies, in a very special relationship. And all about how great the product is. Why not make it easy to track the success? Of course S/4 is big. What about the other announcements? If it’s announced, provide a way to see how well the product is performing. Make transparent what happens to the product after SAPPHIRE is over. Bring the customer back on stage. Let them talk about the last 2, 3 or 5 years. For instance, what happened to the Leonardo solution for analyzing the health of palm trees? Is the intelligent vending machine used in the market? How is the cloud service for tax calculation performing?

Bring back apps

SAP is really good at delivering solutions for core business processes. For the additional problems, SAP offers customers tools that make it possible to create the missing solutions and apps. What a customer gets is a toolset to develop with, not a solution. If you want a mobile app for “standard” functionality today, you have to develop it. For large companies this is not a problem; for smaller ones it can be. For everyone it means that they are responsible for developing the apps. And for supporting them.

Once we had Mobile Workflow from Sybase / SAP, a packaged app for mobile. Afaria was a leader in MDM. Both were sold together, and the client got a complete mobile solution for one price. Today you can automatically create an offline-enabled app using, for instance, Mobile Cards or the new Mobile Development Kit. Using these, you develop following standard guidelines. Every partner or freelancer can offer the very same app to a customer.

The available toolset is good enough to create apps out of Fiori apps, Mobile Cards or the Mobile Development Kit. Many customers want the apps you can develop with these, but do not like the idea of doing the development themselves and having to deal with all the licensing and support issues. Make it easy to offer the developed app as a product with all licensing included. SAP could provide standard documentation on how to create an app for a given process using the SAP toolset, and let partners either create apps following these guidelines 100% or add additional features. In both cases, the customer can buy what is needed, with full support from partner and SAP, and without having to license all components involved. In case the customer wants to change the partner, the app is built following the standard recommendation, so another partner can offer the exact same app.

Redefine SAP portfolio

Trusting partners to deliver the kind of apps that were once delivered by SAP means rethinking SAP Consulting’s positioning and portfolio. Once, SAP Consulting was a powerful organization with competent people helping customers. On the technical side, the good consultants have largely left. From a functional point of view, the situation is much better. It would be nice to see SAP Consulting focusing solely on where it can still offer value to customers: functional consulting. Let SAP’s own consultants advise customers on how to improve their business processes. How to do accounting using S/4, retail or supplier management. Let everything that even slightly touches the technical area be handled completely by either a partner or a freelance consultant. Same for support. It’s time to stop letting SAP support people act as consultants.

Simplify Cloud

When SAP executives look at it, it must be a dream for them: customers buying HEC, cloud numbers going up, only good feedback. What enters the system and bubbles up is filtered. The lower you go in a HEC project, talking to the people actually working with HEC on a daily basis, the more the situation changes: systems not correctly configured, unavailability, bad support, and so on.

Personally, I have seen so far: a HANA system that wasn’t updated for years, a Gateway system with a language pack not correctly installed, systems unavailable because support restarted them in PRD without informing anyone, and missing components like the Web Dispatcher. My personal favorite: 24/7 support from Monday to Friday during Indian business hours.

It’s time to announce whether SAP is closing HEC or restarting the offering. Same for SCP. Neo, Cloud Foundry: it’s nice to have a choice, but please close one. Announce if Neo is going to survive or not, and give a final date. Also, separate SCP services by user group: developers & business. The business side is interested in solutions, not developer services. Give each group its own view of SCP and its services. A possible end of this could be to close the developer part of SCP and bring the tools to multi-cloud, aka offer them on AWS, Azure, etc.

Fiori

SAP pushed Design Thinking and provides helpful tools and guidance on how to enable it. Many customers started to use DT thanks to SAP. Then the initiative slowed down. Development on BUILD and its web site seems to be running at minimum effort. I would like to see SAP investing more into BUILD. Make all UI5 controls available, integrate Bootstrap controls, include controls for iOS, Android and Material Design. Make it a design tool not only for SAP content, but for everything. Make it the design and mockup tool at companies. To speed up adoption of UI5 outside the SAP context, make all UI controls part of OpenUI5. Include better support for non-OData backends.

None of this will be announced at SAPPHIRE. I just wrote this down to be able to look back in a few years and see if some ideas were good. There are additional ideas that are more suitable for TechEd, like: make the S/4 architecture ready for Kubernetes. Having a work process running as a pod in K8s makes it easier to scale an SAP system. Allow arbitrary databases as backend for SAP CAPM. Add SAP technologies to Swagger (OpenAPI). Deliver software via Git. A long-time topic of mine; I think I mentioned this around 2011 on SCN. For instance, instead of installing Fiori apps the traditional way, let me select the app in Git, import only this app and also run the included tests. Drastically reduce the RAM footprint of HANA. And many more. Maybe I will write a similar blog for the next SAP TechEd.
