This blog shows how to use Keycloak for OAuth 2.0 and OpenID Connect. Keycloak is an identity and access management solution. Among its list of supported authentication mechanisms are SAML 2.0 and OpenID Connect. It is open source and can be installed via Docker. I wrote how to install Keycloak via Docker in a separate blog. The content of this blog was created as a side effect of configuring NetWeaver ABAP with Keycloak for SAML 2.0 and OAuth 2.0.
Here I will detail the steps to create an OAuth client in Keycloak, assign an OAuth 2.0 scope to it, and get the OpenID Connect tokens for the client. For better readability, the steps are available as independent blogs / articles.
After creating an OAuth 2.0 scope and client and assigning the scope to the client, we can test the configuration. To do this, we log on to Keycloak as the OAuth 2.0 client. Keycloak then validates the client and returns the access token together with the scope(s) assigned to the client.
The parameter grant_type informs Keycloak which authentication flow we want. With client_credentials, we send the client secret, which together with the client id authenticates the client. Make sure to protect the client secret! This is also why HTTPS is a minimum requirement.
Keycloak returns a JSON response containing the access token (a JWT) and the refresh token as well as the scope. The assigned scope ZDEMO_CDS_SALESORDERITEM_CDS_0001 is included, allowing the client to access resources assigned to that scope.
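As an illustration, and assuming Keycloak is reachable at https://keycloak.example.com (a placeholder host) with the realm master and the client oidclient from this setup, the token request with the client_credentials grant could look like this (older Keycloak versions serve the endpoints under the /auth path prefix):

```shell
# Placeholder host and realm - replace with your Keycloak URL and realm name.
KEYCLOAK_HOST="https://keycloak.example.com"
REALM="master"
# Older Keycloak versions include the /auth prefix in the path.
TOKEN_ENDPOINT="$KEYCLOAK_HOST/auth/realms/$REALM/protocol/openid-connect/token"

# Authenticate as the OAuth 2.0 client with its id and secret.
curl -s -X POST "$TOKEN_ENDPOINT" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=oidclient" \
  -d "client_secret=<client secret from the Credentials tab>" \
  || echo "Token request failed - check host, realm, client id and secret"
```

The JSON response contains the fields access_token, refresh_token, and scope with the scope(s) assigned to the client.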
After performing the previous steps in Keycloak, an OAuth 2.0 scope and client are available. To have the scope included after the OAuth 2.0 client authenticates against Keycloak, you need to assign the scope to the client.
Log on to Keycloak, go to Clients and select oidclient. This is the client created earlier.
Go to tab “Client Scopes”
Assign the previously created scope to the client.
The scope is assigned to the client. Now the client can authenticate and Keycloak will issue the OIDC tokens and include the given scope.
It was time to update the PHP version on my WordPress server. WordPress gave me warnings; the site health plugin gave me a warning. Plugins gave me warnings. PHP, IT news sites, the internet, warnings everywhere.
I knew that my PHP version was very old, but still supported, at least until the beginning of 2019. When I configured the server for the first time several years ago, the installed PHP version was already not the latest; it was what yum install php gave me. Updating software is crucial, so I decided to finally touch my running system.
WordPress provides a site explaining how to update your PHP version. The update process in the documentation boils down to: write an email to your hoster. In other words: not working in my case. For those who want to know how to update PHP on an Amazon Linux AMI EC2 instance, here are the steps and my lessons learned.
First, do a backup. Update WordPress and the plugins. Check that the plugins are compatible with PHP 7.2
Backup: See my blog on how to create a snapshot of an EC2 instance.
Update WordPress and plugins: Easy: just do as always and keep it up-to-date.
Check plugins for compatibility: A plugin is available to check the installed plugins and files for compatibility with PHP 7.x. Install and activate it and run a test.
The PHP Compatibility plugin is started from the WP Admin site. Hint: in my case the plugin delivered its results, but it also crashed the server. After running it and saving the results, uninstall it.
The output is an evaluation of the plugins and their compatibility status.
Next step is to update PHP. Use the package manager for this. I’ll split the installation process in two parts: PHP and the additional packages.
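As a sketch of those two parts: on the original Amazon Linux AMI, PHP 7.2 is packaged as separate php72 packages. The exact package names depend on your AMI version, and the module list below is an assumption based on what WordPress typically needs:

```shell
# Install PHP 7.2 alongside the old version (package names may differ per AMI).
sudo yum install -y php72 php72-cli
# Additional modules commonly needed by WordPress - adjust to your plugin set.
sudo yum install -y php72-mysqlnd php72-gd php72-mbstring php72-xml
```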
After installing PHP 7.2 it must be activated. The old PHP version is still the default one, meaning that calling php does not call PHP 7.2. To change the paths, run alternatives. It shows the available alternatives and asks which one you want to use. I am going to use PHP 7.2, so the input here was 2.
alternatives --config php
Now PHP 7.2 is installed and activated. After restarting Apache WordPress will run on a newer PHP version.
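To verify and activate the change, something like the following should do (httpd is the Apache service name on Amazon Linux):

```shell
# Confirm that the php command now points to PHP 7.2
php -v
# Restart Apache so WordPress runs on the new PHP version
sudo service httpd restart
```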
OAuth uses scopes to restrict access to resources. “Scope is a mechanism in OAuth 2.0 to limit an application’s access to a user’s account. An application can request one or more scopes, this information is then presented to the user in the consent screen, and the access token issued to the application will be limited to the scopes granted.” [link]
A service is assigned to a scope; without access to that scope, you cannot access the resource. You can create scopes independently of the resource, that is: first create a scope, then assign it to the service you want to access. In practice, you would normally create the service first and then assign a scope to it.
Knowing the scope, log in to Keycloak and create a client scope. Later this scope will be assigned to a client. When the client then authenticates against Keycloak, the scope is assigned to it and the client can access the service.
Click on create
In the following form, enter the data for the OAuth scope:
Name: the scope for the service. Here I used ZDEMO_CDS_SALESORDERITEM_CDS_0001, a scope for a CDS service. Don't worry, it's just an example; Gateway does not work with OpenID Connect.
Description: SAP Gateway OData service
Display on Consent Screen: off
The OAuth scope is created. It can now be assigned to a client.
When you change the scope of the service, you need to update the scope information here too.
In this article I will show how to add an OAuth 2.0 client in Keycloak.
Log in to Keycloak and select a realm. In a new (empty) installation of Keycloak, the realm Master is selected by default. The realm name is important, as it is part of the URL used later for OAuth authentication.
To create a new OAuth 2.0 client, click on create.
Insert your information for the client. Make sure that openid-connect is selected as the client protocol.
Click on save and the client configuration screen is shown. Here you can add and alter additional information.
Access Type: confidential. This will require the OAuth 2.0 client to send a client secret to authenticate itself.
Service Accounts Enabled: On
Valid Redirect URIs: set to a valid one, like /
All other parameters can be left at their defaults.
Switch to tab Credentials
Here you can see the OAuth 2.0 client secret. As the access type was set to confidential in the Settings tab, the client must send its client id and secret to Keycloak to authenticate itself. The client id is the name of the client (oidclient), and here you can see the secret: 7bc40…
You can now add the OAuth 2.0 scopes to the client.
The NetWeaver end-of-life date of 31.12.2025 applies to the on-premise version, as listed in the SAP PAM.
S/4HANA is running on NetWeaver ABAP, therefore, ABAP will stay the base technology for SAP.
Fiori Elements or “pure” Fiori app development: this is not an either-or situation; both are valid and can complement each other. What matters is to have the backend services made ready for Fiori, as SAP has done since the beginning for their official Fiori apps.
The Fruit Checker app is not a productive app. It is a showcase with the intention to make people think about the possibilities: what you can do today, what value the combination of services can bring, etc.
The possibilities CAP may offer depend solely on SAP. It is their product, and its features and roadmap are controlled 100% by SAP.
OVA is a virtual appliance format, ready to run on a hypervisor. With an OVA file you can import the image into VirtualBox, VMware, etc.; all needed information is loaded from the file and you can start the VM. This works as long as your hypervisor is capable of reading an OVA file. Proxmox does not understand OVA, so you cannot use the image out of the box, and reading the provided VM definition is not possible. However, as an OVA file contains the VM disk, you can add that disk to a VM.
First, create a new virtual machine definition in Proxmox. You are going to import the disk image from the ova file, not the virtual machine definition. Therefore, you must first create a VM, this creates the necessary information in Proxmox, and then you are adding a disk to this VM.
The overall steps to add an OVA image to Proxmox are:
Create a new VM definition
Delete the associated disk
Assign the OVA disk to the VM
Create a new VM definition
In Proxmox, add a new VM. Note the VM ID. You need this later when importing the OVA disk.
Go through the wizard to create a normal new VM.
The wizard requires you to add a disk. The disk will be deleted later, so the configuration entered here is not important.
I’ll use a CPU with 2 cores.
I am using the VM for SAP HXE, therefore I am going to use a little bit more RAM: 24 GB RAM in total.
After going through the wizard, the VM definition is ready and you can let Proxmox create the VM.
The new VM will appear in the list of available VMs on your server. Note the ID (101) and the available storage locations.
Delete associated disk
Open the VM configuration and go to Hardware. The disk you added in the wizard is listed. This disk must be removed.
Remove the disk
Detach from VM
Select the disk and click on Detach. The disk state will change to unused.
Remove disk from VM
After the disk is detached, remove it from the VM. This will delete the disk file.
The next step is to import the OVA disk and assign it to the VM. As Proxmox uses LVM for managing its storage, a provided tool must be used to import the disk into LVM and assign it to the VM. First, copy the OVA file to the Proxmox server and unpack it. An OVA file is a tar archive; you can simply extract it to see its content. It contains the VM definition (ovf) and the VM disk (vmdk).
tar -xvf hxexsa.ova
To import the image, you need to specify the VM and location where the disk is imported to. This information is available in Proxmox. You can see a list when looking at the server at the left menu. I am going to use local-lvm and VM HXE with id 101.
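The provided tool is the Proxmox CLI command qm importdisk. The disk file name below is a placeholder for whatever vmdk the extracted OVA contains; the VM id 101 and the storage local-lvm are from this setup:

```shell
# Import the extracted vmdk into the local-lvm storage and attach it
# as an unused disk of VM 101 (the vmdk file name may differ in your OVA).
qm importdisk 101 hxexsa-disk1.vmdk local-lvm
```

After the import, the disk appears as an unused disk in the VM's Hardware tab; attach it and adjust the boot order before starting the VM.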
Some years ago I created a new instance in EC2 with the minimal configuration needed. The disk size of the root device and partition was set to 8 GB. Today I am reaching the limit of the disk size and need more space. Having the server in the cloud allows me to “simply” increase the size without having to buy a new HDD.
To increase the size of an EBS volume, you need to execute three tasks:
Extend the volume in AWS
Resize the partition
Resize the file system
The commands to resize partition and file system are (gp2, ext4, t2):
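A minimal sketch, assuming the root volume is /dev/xvda with the ext4 root file system on partition 1 (check with lsblk, as device names can differ per instance type):

```shell
# Grow partition 1 of /dev/xvda to use the enlarged EBS volume
# (growpart is part of the cloud-utils-growpart package)
sudo growpart /dev/xvda 1
# Grow the ext4 file system to fill the enlarged partition
sudo resize2fs /dev/xvda1
```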
You can use the EC2 console or the CLI to extend a volume; I'll use the EC2 console. The volume used as the root device for my EC2 instance is based on Elastic Block Store (EBS) and of type gp2. This step is very easy, as you inform AWS that you need more storage and get more storage assigned. However, you won't be able to make use of the new storage until the file system is resized.
Go to EBS > Volumes
A list of volumes is shown. Find the correct one using the volume ID. The root volume of my instance has 8GB size and type gp2.
To modify the volume, select the volume and then click on Actions > Modify Volume
The current configuration of the volume is shown. Last chance to verify you are changing the right volume.
I’ll only modify the size of the volume. From 8GB to 20 GB.
Confirm the change. Click on Yes.
In case AWS was able to assign more storage to your volume, a confirmation message is shown.
The size of the volume is now shown as 20 GB in the volume table.
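To confirm that the instance actually sees the new space after the partition and file system were resized, you can check on the server:

```shell
# Show block devices and partition sizes
lsblk
# Show file system usage; the root file system should now report about 20 GB
df -h /
```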