Make sure to select a language that supports the team, not just you.
App development is not just coding: that’s why the presentation is about creating apps. It’s a team effort.
Demo apps are mostly for myself and to make my life easier.
Cognitive Leave Request was developed by BridgingIT in partnership with Microsoft. More information about the project: Tobias und Martin entwickeln. (Video in German)
Testing is important. Several tools from and for UI5 are available, and other tools can be used as well. Just use them if you can.
Fruit Checker App is not a productive app. It is a showcase with the intention to make people think about the possibilities: what you can do today, what value the combination of services can bring, etc.
OData v4 is not feature complete yet. SAP is investing and constantly adding new features to the OData v4 model.
The NetWeaver EOL date of 31.12.2025 applies to the on-premise version, as listed in the SAP PAM.
S/4HANA runs on NetWeaver ABAP; therefore, ABAP will remain the base technology for SAP.
CAP and RAP help you to keep the core clean. To make this possible for all SAP customers, the options are independent of the technology skills the developers have: Java, JavaScript, or ABAP.
Fiori Elements or “pure” Fiori app development: this is not an either-or situation; both are valid and can complement each other. What is important is to have the backend services ready for Fiori, as SAP has done since the beginning for their official Fiori apps.
The possibilities CAP may offer depend solely on SAP. It’s their product, and its features and roadmap are controlled 100% by SAP.
OData services can now also be designed and documented in Swagger (OpenAPI).
Recently I saw that my Matomo reports were not showing the correct data. It seemed like the daily cron job wasn’t running or was failing. To see what was causing this issue, I ran the archiving tool manually.
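Matomo ships a console tool that triggers the same archiving as the cron job. A minimal sketch of a manual run, assuming the installation path from the error output below and www-data as the web server user (both are assumptions, adjust them to your setup; the --url value is a placeholder):
sudo -u www-data php /var/www/piwik/console core:archive --url=https://example.com/
The run ended with the error below.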
Error: Got invalid response from API request:
?module=API&method=API.get&idSite=1&period=year&date=
last7&format=php&trigger=archivephp.
Response was 'PHP Fatal error: Allowed memory size of
805306368 bytes exhausted (tried to allocate 131072 bytes)
in /var/www/matomo/core/DataAccess/ArchiveWriter.php on
line 142 PHP Fatal error: Allowed memory size of
805306368 bytes exhausted (tried to allocate 32768 bytes) in
/var/www/piwik/core/Http.php on line 248 '
The archive script is reaching its memory limit of 805306368 bytes. Using more is not allowed, hence the error. 805306368 bytes == 786432 KByte == 768 MByte. Somewhere a configuration is limiting the memory usage of PHP to 768 MB.
Solution
There are many, many PHP configuration files available on my system. Matomo is using its own configuration, located at:
/var/www/piwik/config/global.ini.php
The file contains a parameter for setting a memory limit for the archiving task.
minimum_memory_limit_when_archiving = 768
768 is exactly the value reported in the error. Increasing this value to 1024 (1GB) should solve the problem.
sudo vim /var/www/piwik/config/global.ini.php
minimum_memory_limit_when_archiving = 1024
OVA is a virtual appliance format, ready to run on a hypervisor. With an OVA file, you can import the image into VirtualBox, VMware, etc.: all needed information is loaded from the file and you can start the VM. This works as long as your hypervisor is capable of reading an OVA file. Proxmox does not understand OVA, so you cannot use the image out of the box, and reading the provided VM definition is not possible. But as an OVA file contains the VM disk, you can add that disk to a VM.
First, create a new virtual machine definition in Proxmox. You are going to import only the disk image from the OVA file, not the virtual machine definition. Therefore, you must first create a VM, which creates the necessary information in Proxmox, and then add the disk to this VM.
The overall steps to add an OVA image to Proxmox are:
Create VM
Delete associated disk
Import OVA
Assign OVA to VM
Create a new VM definition
In Proxmox, add a new VM. Note the VM ID. You need this later when importing the OVA disk.
Go through the wizard to create a normal new VM.
It seems that you have to add a disk in the wizard. The disk will be deleted later, so the configuration entered here is not important.
I’ll use a CPU with 2 cores.
I am using the VM for SAP HXE, therefore I am going to use a bit more RAM: 24 GB in total.
After going through the wizard, the VM definition is ready and you can let Proxmox create the VM.
The new VM will appear in the list of available VMs in your server. Note the ID: 101 and the available storage locations.
Delete associated disk
Open the VM configuration and go to Hardware. The disk you added in the wizard is listed. This disk must be removed.
Remove the disk
Detach from VM
Select the disk and click on Detach. The disk state will change to unused.
Remove disk from VM
After the disk is detached, remove it from the VM. This will delete the disk file.
Import OVA
The next step is to import the OVA disk and assign it to the VM. As Proxmox uses LVM for managing its storage, a provided tool must be used to import the disk into LVM and assign it to the VM. Copy the OVA file to the Proxmox server and unpack it. An OVA file is a tar archive, so you can simply extract it to see its content. It contains the VM definition (ovf) and the VM disk (vmdk).
tar -xzvf hxexsa.ova
To import the image, you need to specify the VM and the storage location where the disk is imported to. This information is available in Proxmox; you can see a list when looking at the server in the left menu. I am going to use local-lvm and the VM HXE with ID 101.
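The actual import is done on the Proxmox shell with qm importdisk. A sketch assuming VM ID 101, the storage local-lvm, and that the extracted disk file is named hxexsa-disk1.vmdk (the file name is an assumption, use the actual name from the extracted OVA):
qm importdisk 101 hxexsa-disk1.vmdk local-lvm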
This starts the import process: basically, the vmdk file is copied to the storage local-lvm. After the import finishes, the disk is listed in Proxmox.
Assign OVA to VM
The disk is now available in Proxmox and shows up at the VM as an unused disk, but it is not yet usable. The disk must be assigned to the VM. To do so, open the VM definition and go to Hardware.
Click on Edit.
Here you can specify how the disk is accessed by the VM. SCSI should work; if you get errors, try IDE, etc. As a result, the disk is added to the VM and can be used.
Note: SAP HANA Express Edition
To get the disk shipped with SAP HXE working, I had to use SATA, not SCSI.
Add the disk as SATA.
Make sure the boot order is set to SATA.
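If you prefer the shell over the web UI, both settings can also be applied with qm set. A sketch assuming VM ID 101 and that the imported disk is available as local-lvm:vm-101-disk-0 (the volume name is an assumption, check the Hardware view; the boot order syntax below requires a recent Proxmox version, older ones use --bootdisk sata0):
qm set 101 --sata0 local-lvm:vm-101-disk-0
qm set 101 --boot order=sata0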
Starting the server should now work and you should see the configuration dialog.
Some years ago I created a new instance in EC2 with the minimal configuration needed. The disk size of the root device and partition was set to 8 GB. Today I am reaching the limit of the disk size and need more space. Having the server in the cloud allows me to “simply” increase the size without having to buy a new HDD.
To increase the size of an EBS volume, you need to execute three tasks:
Take snapshot
Resize volume
Resize file system
The commands to resize the partition and file system (volume type gp2, file system ext4, instance type t2) are:
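sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
Both commands are explained step by step in the following sections; the device names are the ones of my instance.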
You can use the EC2 console or the CLI to extend a volume. I’ll use the EC2 console. The volume used as the root device for my EC2 instance is based on Elastic Block Store (EBS) and of type gp2. This step is very easy: you inform AWS that you need more storage and you get more storage assigned. You won’t be able to make use of that new storage as long as the file system isn’t resized.
Go to EBS > Volumes
A list of volumes is shown. Find the correct one using the volume ID. The root volume of my instance has 8GB size and type gp2.
To modify the volume, select the volume and then click on Actions > Modify Volume
The current configuration of the volume is shown. Last chance to verify you are changing the right volume.
I’ll only modify the size of the volume: from 8 GB to 20 GB.
Confirm the change. Click on Yes.
In case AWS was able to assign more storage to your volume, a confirmation message is shown.
The size of the volume is now shown as 20 GB in the volume table.
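If you prefer the CLI mentioned above over the console, the same change is a one-liner. A sketch with a hypothetical volume ID (replace vol-0123456789abcdef0 with your own):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20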
Assigning more storage to the volume is one step. To make use of the new disk space, the partition and file system must be resized. To see the available partitions:
sudo file -s /dev/xvd*
Resize partition
The size of the volume is adjusted. The partition on the disk must be resized to make use of that space. To see the size of the disk and partition:
lsblk
The available space is 20G in total, with the partition xvda1 taking only 8G.
Increase the size of the partition:
sudo growpart /dev/xvda 1
To check if the partition was resized, run lsblk again. The partition xvda1 should now be 20G large.
lsblk
Resize file system
Resizing the EBS volume and partition does not resize the file system. The file system still thinks it only has 8 GB available.
df -h
To change the size, the file system must be resized. My root file system is using ext4 (see output above), therefore I can use resize2fs to adjust it.
sudo resize2fs /dev/xvda1
After resize2fs finishes, the file system can now use the new 20G of the EBS volume.
I am going to do some work on my AWS EC2 instance that hosts my web site https://www.itsfullofstars.de. More precisely: I did the work already and it worked out well, that’s why you can read this blog. Before starting the work, I wanted to have a backup of my data. The data is saved on an EBS volume, which is also the root / boot volume of my EC2 instance.
AWS has nice documentation on how to create and manage snapshots. As always with this kind of generic documentation, it contains a lot of information, maybe too much, as all possible cases are covered. To have a simpler reference, I’ll show in this blog how I created a snapshot.
Scenario
EC2: Instance with root volume on EBS. OS: Linux
Data: Size: 8 GB, type: gp2, SSD
Task: Create a snapshot of the root device
Note that it seems you can create a snapshot of a root volume while the instance is running. However, AWS states that you should stop the instance first:
“To create a snapshot for an Amazon EBS volume that serves as a root device, you should stop the instance before taking the snapshot.”
Steps
Stop instance
Create snapshot
Start instance
Yes, 3 steps is all it takes to take a snapshot of an EBS volume used as the root volume in an EC2 Linux instance.
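For reference, the same three steps can also be scripted with the AWS CLI. A sketch with hypothetical instance and volume IDs (replace i-0123456789abcdef0 and vol-0123456789abcdef0 with your own):
# stop the instance and wait until it is down
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
# snapshot the root volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before maintenance"
# bring the server back online
aws ec2 start-instances --instance-ids i-0123456789abcdef0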
Stop instance
Go to your EC2 instance and stop it. You can also log on to your instance and issue a stop command there. I am using the AWS console, as here I can do everything without having to switch to another tool.
Select Stop, not Terminate, and confirm your action. Oh, yes, do not forget: afterwards your server is not online and its services not accessible. Plan for some downtime, communicate it, etc.
Instance state switches to stopping, meaning that the server is going to shut down. This can take a few seconds.
After the instance is stopped, the state is stopped. Now you can start creating a snapshot of your root volume, as it is not accessed anymore.
Take snapshot
To create a snapshot, follow the steps outlined in the AWS documentation. Go to the snapshot section in the AWS console. In case you do not have any snapshots created yet, the list will be empty.
Let’s create a snapshot. To start, click on Create Snapshot. This will open a wizard. I wanted to create a snapshot of a volume, so I selected Volume as the type and picked the volume from the dropdown list. It’s a good idea to provide a description.
To start the creation process, click on Create Snapshot.
The snapshot will be created immediately. Be aware: this means that the snapshot request was created, not the actual snapshot. Taking the snapshot / copy of the volume will take some time.
You can see the status of the snapshot creation in the column Status of the snapshot. It will be in state pending until all data is transferred from the root volume to the snapshot.
Taking the snapshot can take a few minutes, depending on the size of your EBS volume. Mine was 8 GB and it took about 5 to 7 minutes to create the snapshot. This was an initial snapshot, no delta. Only when the status changes to completed has the process ended successfully.
Start instance
After the snapshot is taken, you can start the EC2 instance again.
During startup, the status of your EC2 instance will be pending. Once completed, it is running, and if everything worked without errors, your server and its services are back online.
In this blog I will detail how you can use Microsoft’s AppCenter to build an iOS app and publish it directly to iTunes Connect. This allows you to decouple the building, testing and distribution process from the developers. The developer only has to push the app to the repository (I am using Azure DevOps) and AppCenter takes care of the rest.
You can add AppCenter features to your app, but it’s optional. I already have a running app that I just want to build and distribute. The next step is to configure the build.
Build
Select the repository where the source code is hosted. I use Azure DevOps (free tier). Unfortunately, GitLab is not listed, and in the free tier I am using it is not possible to add self-hosted Git repositories.
AppCenter will connect to Azure DevOps via SSO and list the available projects.
This adds the repository to the build configuration. You’ll see the branches and last commit message.
To configure the build, click on the configuration option for the branch. The option will only appear when you hover with your mouse over the branch.
AppCenter will scan the project and find the available XCode settings.
You can configure the XCode version to be used for the build. This is very useful when you are using external libraries that do not work with newer XCode versions. For instance, the Fiori libraries included in my project were not released for XCode 10.2.1 and the newer Swift version that comes with it; therefore, the build exited with an error. Until SAP released an updated version of Fiori for iOS, I had to use XCode 10.2.
AppCenter offers options to automatically increase the build number, or run your XCTests.
Sign build
To be able to send the app to iTunes Connect, you must sign the build using your certificate and provisioning profile. I wrote two blogs on how to get these.
When you have these available, you can start configuring the app signing. You upload the files and provide the needed credentials for your private key.
Distribute
The next step is to define where you want to distribute the app to. You can send it to the official App Store, to App Store Connect users for your TestFlight beta testers, or to an internal company portal.
I am going to distribute the app to App Store Connect for TestFlight. Select App Store Connect. If you do not yet have an account linked to Apple, you can add one here.
AppCenter connects to App Store Connect and retrieves a list of apps. I only have one app available, which makes the selection easy. It also means that you have to create the app in App Store Connect first; AppCenter is not able to create the app definition for you.
After entering the app-specific password, you get back to the previous screen. Click again on Assign.
Now AppCenter is configured to connect to App Store Connect. Back at the Distribute builds section, you can select App Store Connect Users.
Result
You can now click on Save or directly start your first build.
Run build and distribute to App Store Connect
After the project is created and the build configured, you can start a build. AppCenter will find an available build agent, clone the repository, build, test, sign and distribute the app.
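As a side note, you do not have to click through the portal to trigger a build: the AppCenter CLI can queue one as well. A sketch with hypothetical owner and app names (replace MyOrg/MyApp and the branch with your own):
appcenter build queue --app MyOrg/MyApp --branch master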
AppCenter
Waiting for a free build agent
Build starting
Distribute
After the build is done, the app is sent to App Store Connect and processed there. Apple will check if the build is OK. This will take some time. The status of the build is Processing.
App Store Connect
When processing is done, you get an email from Apple.
The status of the app in AppCenter and App Store Connect changes and you can distribute the app to your beta testers via TestFlight.
I am using GitLab for private projects. GitLab is run using the Docker image provided by GitLab. I can access my instance from outside via a reverse proxy (Apache).
The setup is simple:
GitLab Docker container is running on NUC and listens on port 7080 for HTTP connections
NUC is connected via OpenVPN to the server on AWS
Apache as a reverse proxy listening on port 443 for HTTPS
Apache terminates SSL: incoming requests are HTTPS, but forwarded as HTTP to GitLab
Apache forwards incoming requests to GitLab on Docker
Standard setup of GitLab in Docker with Apache as reverse proxy will give access to GitLab without problems. Start GitLab container, configure Apache, done. You can access GitLab from the internet, create repositories, clone, push, etc.
While the setup works out of the box, you need to carry out additional configuration to really make GitLab work with SSL termination. What is not working correctly:
The external URL is not configured, so the URL in the repository clone dialog is not using HTTPS.
You cannot upload attachments in the Wiki
You cannot add pictures in the Wiki via copy & paste from the clipboard
Uploading files / images may work in the issues dialog, but not in the wiki, as the wiki is using a different upload service.
Attaching an image from clipboard fails.
Problem
My external URL is https://gitlab.itsfullofstars.de, so this value is set as the external URL in gitlab.rb. You configure GitLab by setting the parameters in the file gitlab.rb and then reconfiguring GitLab.
## GitLab URL
##! URL on which GitLab will be reachable.
external_url 'https://gitlab.itsfullofstars.de'
Run reconfigure to enable the configuration.
gitlab-ctl reconfigure
Accessing gitlab.itsfullofstars.de now fails with a Service Unavailable error.
Reconfigure sets all parameters in all involved components of GitLab based on the values in gitlab.rb. You can see the new values by looking at the automatically generated configuration file for the internal web server.
## GitLab settings
gitlab:
  ## Web server settings (note: host is the FQDN, do not include http://)
  host: gitlab.itsfullofstars.de
  port: 443
  https: true
The problem is: GitLab thinks it is running standalone, with direct access to the internet. There is no specific parameter to indicate that the requests are coming from a reverse proxy with SSL termination. Setting the values in gitlab.rb alone results in an erroneous configuration:
SSL for internal GitLab web server (nginx) is enabled
Nginx is not listening on port 80, only on 443
My Apache reverse proxy is configured to connect to nginx on port 80. Hence the Service Unavailable error.
Port 80 is not working any longer. Accessing GitLab directly via 192.168.x.x:7443 on the HTTPS port (Docker maps 7443 to 443) will work, but GitLab tried to get a new TLS certificate during the reconfiguration process and failed; therefore you get the self-signed certificate.
Attaching an image won’t work
Because of the external_url value, GitLab will redirect to gitlab.itsfullofstars.de. As the reverse proxy is not able to connect, it’s a 503 error.
Configuring the external GitLab URL this way results in:
An incorrect HTTPS configuration due to a wrong certificate
An adjustment of the Apache reverse proxy: no more SSL termination
I do not want to take care of managing GitLab’s internal TLS certificate. I want to access it via HTTP only and use Apache for SSL termination.
Solution
The solution is to configure the external URL and let the internal nginx run on port 80 without HTTPS.
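In gitlab.rb this means keeping the HTTPS external URL while forcing the bundled nginx to plain HTTP on port 80. The relevant parameters (nginx['listen_port'] and nginx['listen_https'] are the omnibus settings for running behind a proxy with SSL termination):
external_url 'https://gitlab.itsfullofstars.de'
nginx['listen_port'] = 80
nginx['listen_https'] = false
Run gitlab-ctl reconfigure again afterwards to apply the change.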
Check the configuration of the internal GitLab web server. The host should be gitlab.itsfullofstars.de, the port 80, and the protocol HTTP.
more data/gitlab-rails/etc/gitlab.yml
## GitLab settings
gitlab:
  ## Web server settings (note: host is the FQDN, do not include http://)
  host: gitlab.itsfullofstars.de
  port: 80
  https: false
Optional: Restart
Running reconfigure restarts the services, but if you want to be sure, restart GitLab.
gitlab-ctl restart
Apache configuration
My Apache configuration. Maybe not all parameters are needed, but it works.
<VirtualHost *:443>
    ServerName gitlab.itsfullofstars.de
    ProxyPreserveHost On
    ProxyRequests Off
    SSLProxyEngine on
    SSLEngine on
    SSLHonorCipherOrder on
    <Location />
        RequestHeader unset Accept-Encoding
        RequestHeader set Host "gitlab.itsfullofstars.de"
        RequestHeader add X-Forwarded-Ssl on
        RequestHeader set X-Forwarded-Proto "https"
        ProxyPass http://nuc:7080/
        ProxyPassReverse http://nuc:7080/
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>
Result
After executing the above steps, your configuration should be:
An external request goes to the server gitlab.itsfullofstars.de. Apache does the SSL termination and forwards the request via HTTP, and nginx accepts the request without either blocking it or trying to redirect to HTTPS.
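A quick way to verify the chain from outside is to look at the response headers. A sketch (any 2xx or 3xx status coming back means Apache reaches nginx and GitLab answers):
curl -sI https://gitlab.itsfullofstars.de | head -n 5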
Attaching an image to the GitLab wiki by pasting it from the clipboard now works.