Response for preflight does not have HTTP ok status

Issuing an AJAX request is more complex than you might think.

Issue

When you issue an AJAX request to a server in another domain (CORS), you may get the following error message:

Response for preflight does not have HTTP ok status.

Problem

The server is configured to allow CORS. The Apache configuration includes

Header set Access-Control-Allow-Origin "*"

The response header of the service contains the correct value. With this set, you can access the service via CORS.

Cause

Now, why does it not work?

You have to be aware that this works for simple CORS requests. For more complex requests that set custom headers, etc., the request may fail. This is due to the preflight mechanism of the browser, which checks if the service accepts the request. Before issuing the actual AJAX request (e.g. GET or POST), an OPTIONS request is triggered to check what the service accepts. You can see this in the network tab of Chrome.

The preflight request includes two headers:

  • Access-Control-Request-Headers
  • Access-Control-Request-Method

Before issuing the actual GET request, the browser checks if the service is correctly configured for CORS. This is done by checking if the service accepts the methods and headers going to be used by the actual request. Therefore, it is not enough to allow the service to be accessed from a different origin; the additional prerequisites must be fulfilled as well.
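For illustration, a preflight exchange could look like this (host names and the requested header are examples, not taken from a real trace):

OPTIONS /api/resource HTTP/1.1
Origin: https://app.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: Authorization

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT
Access-Control-Allow-Headers: Authorization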

Solution

To configure Apache to set the headers, add the missing headers. Include the following Header set directives in your HTTP service configuration.

Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Allow-Headers "append,delete,entries,foreach,get,has,keys,set,values,Authorization"

Still not working!

After setting the values, you may still not be able to call the service: the browser keeps reporting an error, because the response code of the preflight request is not 2xx. Returning a 200 HTTP code can be enforced in the Apache config using a rewrite rule.

RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [R=200,L]
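To verify the preflight response without a browser, you can simulate the OPTIONS request with curl (URL and origin are placeholders):

curl -i -X OPTIONS https://service.example.com/api/resource \
  -H "Origin: https://app.example.com" \
  -H "Access-Control-Request-Method: GET" \
  -H "Access-Control-Request-Headers: Authorization"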

With this configuration, the service now works with CORS: the initial OPTIONS request passes, and the following GET request also passes.


Automount share

The example used in this blog is a CIFS share from a Samba server running on a Raspberry Pi mounted on demand by a client running Debian.

Goal

The goal is to have a share on a client that is dynamically mounted. The share should only be mounted when an app needs to access it.

In my case, I have a server with a data storage share attached. The storage is made available to clients in the network. To avoid the share being mounted by the clients all the time, it should be mounted only when real demand exists, for instance, when an app needs to read data from the share. If the share is not needed by the client, it should not be mounted.

Process

To understand the scenario better, the process can be separated into 4 steps.

  • Step 1: the client is configured, but the share is inactive.
  • Step 2: An app accesses the share. This can be an app as simple as ls /mnt/server/data
  • Step 3: The client connects to the server and mounts the share to the local mount point /mnt/server/data. The data is now available to the app.
  • Step 4: The app is not using the data from the share any longer. After a timeout, the client disconnects the share.

The example uses a Raspberry Pi with Raspbian as the server and a Debian based system (Proxmox) as the client. CIFS is used as the share type. On the server, Samba is running and configured to give a named user access to the data storage.

Installation

Server

Install Samba and configure access for a named user. This is not part of this blog.

Client

Autofs is the package and tool taking care of mounting shares automatically. Install it using apt-get.

apt-get update
apt-get install autofs

Configuration of autofs

Autofs needs to be configured. To make this easier, the package comes with templates. I am going to use the autofs master template as my starting point. Take a look at the master template, as it contains an explanation of what is needed.

cd /etc
more auto.master

To add an auto mount share, a new line must be added to the file. The format is: mount-point file options.

Before adding the line, you first must understand how the template and autofs work and what you want to achieve. The first parameter is the local mount point. The directory specified here is the parent; the actual shares are mounted as sub-folders of that directory. For instance, if you choose /mnt/server and the remote share is data, the final mount point will be /mnt/server/data. I am going to mount the remote share to /mnt/server.

The second parameter is the configuration file for that mount point. The third parameter specifies how autofs treats the mount point. To unmount the share after 1 minute of inactivity, use option --timeout=60. The --ghost option will create the subfolder even when the server is not reachable.

Edit master template

Add a new configuration line for mounting the server share:

/mnt/server /etc/auto.server --timeout=60 --ghost

Mount configuration

The actual mount configuration for the share is specified in the file /etc/auto.server. Create file /etc/auto.server and edit it.

touch /etc/auto.server
vim /etc/auto.server

Insert mount options.

  • [username] – name of user used to connect to Samba share
  • [password] – password of the user
  • [remote_server] – IP / server name of the Samba server
  • /data – name of the share configured in Samba. In case you have a share named Music, Fotos, or Work, substitute data with the correct share name.

data -fstype=cifs,username=[username],password=[password] ://[remote_server]/data
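A filled-in line could look like this (user name and password are placeholders; the server IP matches the example server used later in this blog):

data -fstype=cifs,username=pi,password=secret ://192.168.0.164/data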

Save file

Change permission

chmod 0644 /etc/auto.server

Start autofs as service

Stop autofs and start it in debug mode to see if the configuration works.
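One way to do this (automount -f keeps the daemon in the foreground, -v prints verbose output):

systemctl stop autofs.service
automount -f -v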

If it worked, exit and start the service.

systemctl start autofs.service

Test

To test, go through the 4 steps described above.

Step 1

Client is ready. Check the mounted volumes. You will see that no CIFS volume is available.

Using mount, you can see that the mount point is available.

/etc/auto.server on /mnt/server type autofs (rw,relatime,…

In the parent mount point for the CIFS share, autofs created the folder data.

Step 2 & 3

Run an app that accesses the share:

ls /mnt/server/data

Accessing the content of /mnt/server/data now automatically mounts the CIFS share.

df -h

The mount command shows the new CIFS mount:

//192.168.0.164/data on /mnt/server/data type cifs (rw,relatime,…

Step 4

Ensure that no app is using the share and wait until the specified timeout occurs. Check with mount and df to see that the share is unmounted.
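For example (the grep patterns are just one way to filter the output):

mount | grep cifs
df -h | grep /mnt/server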

Additional information

Start / stop autofs service

Start service

systemctl start autofs.service

Stop service

systemctl stop autofs.service

Links

https://wiki.archlinux.org/index.php/autofs

Activate Clickjacking-Framing-Protection service

SAP NetWeaver comes with its own solution to prevent clickjacking for its most relevant UI frameworks. For more information about this protection, see the corresponding SAP Notes.

By default, clickjacking protection is disabled. To activate it, you need to insert a value into table HTTP_WHITELIST.

Insert values into table HTTP_WHITELIST

Transaction: SE16

Check if the clickjacking protection service is enabled or disabled. It is disabled if no record with ENTRY_TYPE=30 is in the table, or if the table is empty.

Table name: HTTP_WHITELIST

Execute

Result

By default, no values are in the table and the service is not enabled. For the data that needs to be inserted into table HTTP_WHITELIST, see SAP Note 2142551. Creating an entry with ENTRY_TYPE value 30 activates the whitelist.

Transaction: SE16

Press F5 or click the new entry icon.

Insert data. See links below for additional information on possible values.

Click save to persist the entry in the table.

Afterwards, the table will contain one record. As the record has value 30 for column ENTRY_TYPE, the clickjacking protection service is enabled.

Activate ICF whitelist service

Adding a record activates the service, but to make apps work, additional configuration steps must be taken. For instance, accessing a WDA app (e.g. SAML2) will now result in an HTTP 500 internal server error. This is caused by having the clickjacking protection activated, but not the whitelist service.

To solve the HTTP 500 error, you need to activate the ICF whitelist service.

Transaction SICF_INST
Technical name: UICS_BASIC

Execute. This will activate the ICF node:

/sap/public/bc/uics/whitelist

Result

After enabling the service and the ICF node, the above WDA app will open in the browser.

https://vhcalnplci:44300/sap/bc/webdynpro/sap/saml2

Additional information on setting whitelist entries.

OpenVPN: Assign static IP to client

After configuring the overall OpenVPN client and server infrastructure, my clients can connect to the VPN. The client can access server resources and vice versa. While the server normally always gets the same IP assigned, the client IP address is assigned dynamically from a pool of IP addresses. Meaning: there is no guarantee that the client always gets the same IP address. Normally, this is not a problem, as the client connects to consume server resources, such as a web site or git repository. In my case, the architecture is that the OpenVPN server acts as a proxy to internal services. The web site, git repository, etc. are running on the client. Therefore, the server must be able to connect to the client using a fixed address.

To make this work, each time a client connects, the same IP must be assigned to it. OpenVPN allows assigning a static IP to a client.

Configuration

  1. In /etc/openvpn, create folder ccd. Ccd stands for client config directory, meaning: it contains the configuration for a client.
  2. Edit file server.conf and add the line client-config-dir ccd:
# EXAMPLE: Suppose the client
# having the certificate common name "Thelonious"
# also has a small subnet behind his connecting
# machine, such as 192.168.40.128/255.255.255.248.
# First, uncomment out these lines:
client-config-dir ccd

  3. Create a configuration file for each client and put it into directory ccd. As file name, use the same name for the client as used in the CN field of the client certificate.

ifconfig-push IP MASK

Example:

ifconfig-push 10.8.0.2 255.255.255.255

CLI steps

sudo mkdir /etc/openvpn/ccd
sudo touch /etc/openvpn/ccd/client1
sudo vim /etc/openvpn/server.conf

Uncomment the line containing the client config parameter:

client-config-dir ccd

Edit the client configuration file:

sudo vim /etc/openvpn/ccd/client1

Insert:

ifconfig-push 10.8.0.2 255.255.255.255

Restart the OpenVPN service on the server:

sudo /etc/init.d/openvpn restart

Before the change, the client got an IP assigned automatically: 10.8.0.6. After the restart of the OpenVPN server, the IP is now 10.8.0.2.
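To check the assigned address on the client, you can inspect the tunnel interface (tun0 is the usual OpenVPN interface name and an assumption here):

ip addr show tun0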


Additional information can be found in the OpenVPN documentation.

client-config-dir

“This file can specify a fixed IP address for a given client using --ifconfig-push, as well as fixed subnets owned by the client using --iroute.” https://openvpn.net/index.php/open-source/documentation/manuals/65-openvpn-20x-manpage.html

ifconfig-push

“Push virtual IP endpoints for client tunnel, overriding the --ifconfig-pool dynamic allocation.” https://openvpn.net/index.php/open-source/documentation/manuals/65-openvpn-20x-manpage.html


ERR_CONTENT_DECODING_FAILED

Configuring a reverse proxy is not an easy task. It involves some trial and error and dealing with unexpected errors. One of those errors is ERR_CONTENT_DECODING_FAILED. The web site won’t load, and your browser will show this error message.

Error ERR_CONTENT_DECODING_FAILED may show up in your browser when a resource is configured on your reverse proxy and the backend communication is working. That is: the backend is returning data, but not in a form the browser expects.

Example: the browser expects a GZIP response, but receives plain text. Hence the hint from your browser that content decoding failed: the content is received, but the browser is not able to decode / understand the data. Likewise, if a plain text response is expected but the backend returns a zipped response, the browser cannot read the content.
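One way to see what the backend actually returns is to compare the Content-Encoding response header with and without compression requested (the URL is a placeholder):

curl -sI -H "Accept-Encoding: gzip" https://proxy.example.com/test | grep -i content-encoding
curl -sI https://proxy.example.com/test | grep -i content-encoding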

To solve this error, reset the Accept-Encoding request header in your reverse proxy configuration.

Apache

Documentation

RequestHeader unset Accept-Encoding

Example

<Location /test>
    RequestHeader unset Accept-Encoding
    ProxyPass https://0.0.0.0:443
    ProxyPassReverse https://0.0.0.0:443/
    Order allow,deny
    Allow from all
</Location>

NGINX

Documentation

proxy_set_header Accept-Encoding "";

How to access UI5 model data

Old habits die hard and are not easily replaced by best practices and general recommendations. In the early days, when UI5 started to gain traction, people discovered it, tried it out, wrote apps and somehow made them work. Everybody was learning, and things we can do today were not possible or known back then. Changes to the documentation, API and recommendations are simply the result of lessons learned.

As UI5 apps are built around a model, a lot of questions in UI5 development are about the model: how to access its data and how to change, update or delete it. A good thing about UI5 is that it follows semantic versioning: code written for UI5 1.1x or 1.2x will still work with the latest versions like 1.5x. That doesn’t mean that you, as a developer, should simply copy & paste example code found somewhere.

Example

After loading the model, be it JSON or OData, you may have to access a specific property in your code. What you can find on the Internet is code like:

OData

var oData = this.getView().getModel().oData;
var firstname = oData[entity].firstname;
var name = oData[entity].name;

JSON

var oData = this.getView().getModel("device").oData;
var osname = this.getView().getModel("device").oData.os.name;
var osname = this.getView().getModel("device").getData().os.name;

The first line gives you the model data object, the 2nd and 3rd line the property os.name of the model. The 3rd line is partly correct: the data is accessed through a method, but the property name is still accessed directly. You can work with the data object directly, but you shouldn’t. What you want is not to work with the raw data object that defines your model, but with its properties.

Use access methods

Data binding is important when developing a UI5 app. You update properties, and you want the UI elements to be automatically updated too. When accessing properties directly, you have to check yourself that the change is correctly propagated. Using the access methods, UI5 takes care of this. The methods to access the data of your model are getData, getProperty and getObject. Applying this to the above code:

JSON

var oData = this.getView().getModel("device").getData();
var osname = this.getView().getModel("device").getProperty("/os/name");

OData

var entity = this.getView().getModel().getObject("/"+key);
var property = this.getView().getModel().getProperty("/"+key+"/Depth");

These examples only use methods to access the data. To alter a property, use the setProperty method.

OData

this.getView().getModel().setProperty("/"+key+"/Depth", 11.111)

JSON

this.getView().getModel("device").setProperty("/os/name", "windows")

The above examples may not be perfect, but they show that UI5 offers methods to access model data. Using these methods, your code will continue to work in future releases of UI5. Direct access to the model data is done at your own risk. In case you work on a UI5 app, use the access methods. If you work on an older app that accesses the model directly, try to refactor it. The change from oModel.oData to oModel.getData() is as simple as a find and replace.
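A sketch of such a replacement with sed (the webapp folder is an assumption; on macOS, sed -i requires a backup suffix argument, and matches should be reviewed before committing):

grep -rl '\.oData\.' webapp/ | xargs sed -i 's/\.oData\./.getData()./g'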



How to use find to sort files across folders

Short version

You have files named File0.txt to File100.txt spread over different folders and want to move the first 30 files into a separate directory (commands are for Mac users; Linux users can use find and mv directly):

For sorting FileNN.txt (character + number)

gfind -type f -printf "%f %p\n" | sort -n -k 1.5 | sed 's/.* //' | head -30 | xargs gmv -t ./A/

For sorting NN.txt (numeric filename)

gfind -type f -printf "%f %p\n" | sort -n | sed 's/.* //' | head -30 | xargs gmv -t ./A/

Preparation

For the commands below to work, you’ll need GNU find. If you are using a Mac, install the GNU versions of find and mv via Homebrew.

brew install findutils coreutils

Create a test folder structure. There will be 3 folders and several files in them.

mkdir 1
mkdir 2
mkdir 3

Create 101 files named File000.txt to File100.txt with sample content and place them randomly in one of the three directories.

for i in {000..100}
  do
    Num=$((1 + RANDOM % 3))
    echo hello > "$Num/File${i}.txt"
done

After running the above script, check with ls -R how the files are distributed over the folders.

Also create the target directory A:

mkdir A

Commands

After the initial setup is done, we have several files in 3 directories. If you use find to get a list of all files, you’ll see that the output is not sorted.

gfind ./ -type f

A Unix command to sort files is sort. Applying sort in this scenario won’t help, as the files are sorted by the folder name:

gfind ./ -type f | sort -n

The output is now sorted by folder name and then by file name, but not by file name alone. Copying the first 50 elements won’t result in File1 – File50, as the files are spread across the directories.

A solution to the problem emerges: sort only on the filename, while still having the complete path in the output for piping the path to the move command. Find includes exactly this possibility: printing specific fields. To control the output, the parameter -printf is available; %f prints the filename, while %p includes the folder.

gfind -type f -printf "%f\n"

The output of the command only prints the filename.

To output the file with path, use %p. In both cases \n is used to have each file in a new line.

gfind -type f -printf "%p\n"

Both output parameters can be combined. %f %p\n will first print the filename, then space, then the path.

gfind -type f -printf "%f %p\n"

Applying sort on this output will sort on the file name only.

gfind -type f -printf "%f %p\n" | sort -n

Close, but not exactly how it should be. In case your filenames consist only of numbers, this already works. In the example, however, the filenames contain characters, so the sorting is not working correctly: it starts with File0.txt, then File1.txt, but then comes File10.txt instead of File2.txt. To sort by the number, add an additional parameter to sort: -k 1.5. As the filename starts with a fixed string (File), the parameter instructs sort to skip those characters and sort only on the number.

Note: you may apply the same sort parameter without find, just ls. As long as the path prefix has the same length, it will work. For folders named 1..9 it’s OK, but when folder names have two or more characters (like 10, 213 or test), the parameter needs to be adjusted.

List all files with directory name using ls:

ls -d1 */*

Sort by number in filename:

ls -d1 */* | sort -n -k 1.7

gfind -type f -printf "%f %p\n" | sort -n -k 1.5

With the last command, the output is correctly sorted based on the filename. Now, how to use this output to move the files to the target directory? Just piping the output to mv won’t work. The first part with the filename is not needed, only the second part. Both parts are separated by blank, and using sed, it’s possible to eliminate the part before the blank from the output.

gfind -type f -printf "%f %p\n" | sort -n -k 1.5 | sed 's/.* //'

The last step is to use mv to move the files to the target directory. To not move all files, let’s take only the first 30. GNU mv is needed, as the default macOS BSD mv does not include the -t parameter. To pass the files line by line, xargs is used together with gmv.

gfind -type f -printf "%f %p\n" | sort -n -k 1.5 | sed 's/.* //' | head -30 | xargs gmv -t ./A/

Result

Now there are the first 30 files in folder A.

gls -1v ./A

SAP Web IDE: Invalid backend response received by SCC

Connectivity between SAP Cloud Platform and an on premise SAP NetWeaver system is normally achieved via SAP Cloud Connector. A nice feature built on this is the remote connection of SAP Web IDE to an on premise ABAP system. The feature allows you to easily load apps from the ABAP system and change or extend them from anywhere.

For this feature to work, some ICF services must be active on the ABAP system and remote access enabled on SCC. If not, Web IDE cannot “talk” to NW ABAP. Some possible errors and solutions regarding the setup are shown in this blog.

Scenario

A NetWeaver ABAP system with Fiori apps is available and the SAP Cloud Connector is configured to expose the system to SAP Cloud. I am using the SAP NetWeaver ABAP 7.51 Developer Edition for the scenario.

In the destinations section of SCP, the SCC is shown as connected, and the destination NPL is configured and working. A connection test gives back a successful message: SCP <–> SCC <–> NW works.

Problem

A developer tries to extend a Fiori app. In Web IDE, the project wizard for an extension project is used.

After selecting the on premise system destination, an error message is displayed. The actual error message can differ: sometimes you see an informative error message, sometimes just some red text, or maybe nothing at all.

Error messages

In all cases, you can check the log of SCC to see detailed information on the error.

The error message is:

Access denied to /sap/bc/adt/discovery for virtual host npl:443

Solution

The ICF service /sap/bc/adt/discovery is not accessible. This can be because the user does not have the right permissions, or the service is not active in the NW system, or SCC is not exposing the service.

Alternative A: SCC not exposing service

Adding a service in SCC will only expose the exact path, not its sub-paths. Either add all paths exactly in the resource list, or change the access policy to accept sub-paths too.

Root cause: The access policy of the resource is set to path only, excluding sub-paths.

Solution: Change the access policy to accept sub-paths as well; this will allow Web IDE to access the resource.

Alternative B: ICF service not active

In the NW ABAP system, go to transaction SICF and check node /sap/bc/adt. This node must be activated. By default, it is deactivated and must be activated by Basis.

Root cause: Service deactivated

Solution: Activate node adt. Right-click it and select Activate Service.

Alternative C: Missing authorization

Check with SU53 and SAP Help what is missing and assign the right permissions to your user.

Result

After applying the correct solution, the developer can use the extension project wizard in SAP Web IDE to load the available applications.

Download resources from SAP Cloud for your CI job

When running a CI job, you may need to use some SAP tools, for instance the MTA builder or Neo tools. Many CI servers include integrations with build tools, or plugins are provided by the community or vendor. Jenkins offers plugins for Maven, Ant or Node that let you easily integrate these into a CI job. If you have a CI job for SAP, it is your task to make the necessary tools available, as there are not many SAP plugins available for Jenkins.

Some tools you may need can be found on SAP’s tools site, for instance the MTA builder: a simple JAR file that is available for download and needed in case you are working with MTA apps.

Before you can download the JAR file, you need to agree to the EULA.

This means that you cannot simply download the JAR using the CLI:

wget https://tools.hana.ondemand.com/additional/mta_archive_builder-1.1.0.jar

Solution

Running the above wget command will not download the tool, but a web site. Some may recognize that this is very close to how Oracle protected its Java downloads, and the “solution” here is the same: send the right cookie via wget.

wget --header "Cookie: eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt" https://tools.hana.ondemand.com/additional/mta_archive_builder-1.1.0.jar

This works for downloading other tools from the download page, like the Neo SDK, too:

wget --header "Cookie: eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt" https://tools.hana.ondemand.com/sdk/neo-javaee6-wp-sdk-2.137.0.1.zip
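Until then, a CI job can fetch the tools itself, for example as a small shell step that caches the JAR so repeated builds skip the download (a sketch based on the wget call above):

#!/bin/sh
# Reuse a cached copy of the MTA builder if present, otherwise download it.
URL="https://tools.hana.ondemand.com/additional/mta_archive_builder-1.1.0.jar"
COOKIE="Cookie: eula_3_1_agreed=tools.hana.ondemand.com/developer-license-3_1.txt"
[ -f mta_archive_builder.jar ] || wget --header "$COOKIE" -O mta_archive_builder.jar "$URL"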

Let’s hope SAP provides some Jenkins plugins that take care of downloading these automatically.


Clone a SCP git repository from command line

I have a git repository on SCP that I want to clone using git on my laptop. I thought this would be easy to do: the source code of my project is available in the git repo at SCP, so cloning the repo via git clone from its URL should work.

git clone https://git.hanatrial.ondemand.com/p539123trial/cisample

The clone fails with “service not enabled”. Looking at SAP’s documentation, this should not have happened. Here the SAP Cloud Platform documentation for the git service differs from reality.

SAP Help

I did a), and b) did not apply, as I wasn’t asked for my SCN user ID or password. SAP’s git troubleshooting guide contains a section about the error message. Good to know that there is a possible solution, but I had already done what the proposed solution suggests:

Ensure that you have the correct repository URL. Copy it from the Source Location section of the repository’s details page in the SAP Cloud Platform cockpit.

As it is possible to access the repository in SAP Web IDE, it should also be possible to access it from outside SCP. I know that the git repository is protected. Maybe the request from the git CLI is blocked by SCP? After all, I was not asked to authenticate. Maybe I can force SCP to ask me for my password? Changing the URL to include my SCN user ID did just that: I was asked to provide my password.

git clone https://p539123@git.hanatrial.ondemand.com/p539123trial/cisample

SCP is now asking for my password and, magic happening, the git service is now accessible and the repo can be cloned. It would be nice if the git service asked me to authenticate instead of failing directly.
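To avoid typing the password on every fetch or push, git’s credential helper can cache it for a while (a standard git feature, not specific to SCP):

git config --global credential.helper cache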
