Fish with OData

Rui Nogueira published a blog series on SCN a while back on how to implement an IoT scenario using a Raspberry Pi and HCP. I think the example shows very well what the main use case of IoT is. When the blog was published, there was no SAP HCP IoT service available; if you want to implement the same example in a more up-to-date way, you should use HCP IoT. Nevertheless, Rui's example is easy to implement and shows how the different parts play together: client, server, user.

When I first came across Rui's blog I noticed that he uses REST and goes through some effort to persist the data. I thought it would be nice to adapt this to make use of OData. It took me quite a while to publish this blog 🙂 In the end, I did not adjust his code; it merely served as an orientation. I wrote my own IoT server and client app. The result is a simple, clean and easy to read JEE app that uses JPA and Olingo for exposing the JPA entities, plus a Java client that does not need to run on an IoT device. My user dashboard is very simple, implemented in D3.js, and only shows one sensor's measurement data.

The client is a Java app that reads current weather data from openweathermap.org. To make this work, you'll need an API key (free). In case you do not want this, I added a jMeter test that creates random temperature data (as seen in the picture above). The jMeter test file is located at fish-with-odata\iotserver\test\jmeter\LoadData.jmx. The test is pre-configured to use localhost and port 7080. The test runs for about five minutes, as the 100 measurements are not created at once, but with a fixed time interval of 3 seconds.

The app

The source code can be found on GitHub: https://github.com/tobiashofmann/fish-with-odata

You will find two folders:

  • iotclient, containing the client app
  • iotserver, containing the server and user dashboard

Both are Maven projects. It should not be a problem to transform them into Eclipse projects via mvn eclipse:eclipse, but while I developed both in Eclipse, I did not test transforming the Maven projects into Eclipse projects. Sensor and Measurement are implemented using JPA. The relationship between the two is that one sensor can have many measurements assigned, but a measurement can only be assigned to one sensor. In the Sensor class, this is done via @OneToMany.

Sensor class

@Entity(name = "Sensor")
public
 class Sensor implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE)
     @Column(name = "ID")
     private long id;
     private String device;
     private String type;
     private String description;
     @OneToMany(mappedBy = "sensor", cascade = CascadeType.ALL)
     private List<Measurement> employees = new ArrayList<Measurement>();

Measurement class

@Entity(name = "Measurement")
public
 class Measurement implements Serializable {
     @Id
     @GeneratedValue(strategy = GenerationType.TABLE)
     @Column(name = "ID")
     private Long id;
     private String unit;
     @Temporal(TemporalType.TIMESTAMP)
     @Column(insertable = true, updatable = false)
     private Date createdAt;
     @Temporal(TemporalType.TIMESTAMP)
     @Column(insertable = false, updatable = true)
     private Date updatedAt;
     private Double value;
     @ManyToOne
     @JoinColumn(name = "SID", referencedColumnName = "ID")
     private Sensor sensor;

I am lazy, so I let JPA decide when a measurement is created or updated. This may not be acceptable in many scenarios, especially when you depend on the exact time the data was captured by the device rather than the time it was persisted in the DB. I implemented it that way so I would not have to capture the date in my client app and to keep the payload small.
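A common way to let JPA maintain such fields is via lifecycle callbacks inside the entity; a minimal sketch (an assumption — the actual project may solve this differently, e.g. with EclipseLink-specific features):

@PrePersist
void onCreate() {
    // set once, when the entity is first persisted
    this.createdAt = new Date();
}

@PreUpdate
void onUpdate() {
    // refreshed on every subsequent update
    this.updatedAt = new Date();
}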

Run server

To run the server:

mvn clean pre-integration-test

This will download the HCP SDK, install the server, run it on port 7080 and deploy the WAR file. After a short while, the IoT server is ready.
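Once it is up, the OData service document and metadata should be reachable under the server root, e.g. http://localhost:7080/iotserver/<servlet-path>.svc/$metadata (illustrative; the exact path depends on the servlet mapping in web.xml).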

A benefit of OData can be seen when comparing how Rui retrieves the latest measurement added for a sensor: he adds the latest measurement as an object to the sensor.

private Measurement lastMeasurement;

With OData, the latest added measurement for a sensor can be retrieved by simply adding some parameters to the URL:
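For example, to fetch only the most recent measurement of sensor 1, sorted by creation date (an illustrative URL; adjust host and service path to your deployment):

http://localhost:7080/iotserver/<servlet-path>.svc/Sensors(1L)/Measurements?$orderby=createdAt desc&$top=1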

The $top parameter controls how many data points are returned. Beware that with OData there is a page size defined that limits the maximum number of entries returned. This page size is configurable in the class de.tobias.service.ODataSampleJPAServiceFactory:

private static final int PAGE_SIZE = 50;
Assign PAGE_SIZE any value you consider useful.
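In the Olingo JPA processor, the constant is typically handed to the OData JPA context when the service factory initializes it; a sketch of the relevant line (assuming the standard Olingo 2 API):

oDataJPAContext.setPageSize(PAGE_SIZE);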

Run client

To run the client, you first must add your API key. This is done in the class de.itsfullofstars.iot.WeatherData. Add your API key to APPID.

private static final String APPID = "YOUR API KEY";

To run the client, create the jar:

mvn package
java -jar target\fishodataclient-1.0.0.jar

As an alternative, a jMeter test is included in the server: fish-with-odata\iotserver\test\jmeter\LoadData.jmx

The final chart can be seen by accessing: http://localhost:7080/iotserver/. Depending on what data source you use, the chart will look like a flat line or like a heart attack.

Real data (Rio de Janeiro)

Fake data

Olingo – Requested entity could not be found

Lately I was playing around with HCP and Olingo and wanted to expose a JPA model as OData. I created some data using EJB and then tried to read this data via OData. Accessing the collection gave me a list of created entities:

To access one entity, just use its ID as the key and call it in the browser: http://localhost:8080/service.svc/Events(8L). What I got as a response was an error message: Requested entity could not be found.

<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
    <code/>
    <message xml:lang="en-US">Requested entity could not be found.</message>
</error>

Thing is: the entity was there. I knew it (I have DB access), I just could not access it. It turned out that the version of org.eclipse.persistence.jpa I was using does not like it when the @Id key is of type long (8L). Using version >= 2.5.2 solved the issue for me. Changing my pom.xml:

<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>org.eclipse.persistence.jpa</artifactId>
    <version>2.5.2</version>
</dependency>

Now I can access the entity using the ID as key in the URL:

Note

You'll have to declare org.eclipse.persistence.jpa in your pom.xml when you are using a non-JTA data source (RESOURCE_LOCAL) and when you create your entity manager like this:

ds = (javax.sql.DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
emf = Persistence.createEntityManagerFactory("JPATest", properties);

If you just use

emf = Persistence.createEntityManagerFactory("JPATest");

You won't have to declare it in pom.xml. Either way, if you do not declare it, or use the wrong version, you'll end up getting the same error: Requested entity could not be found. The version of the library delivered with HCP SDK 2.75.8.4 is org.eclipse.persistence.jpa_2.4.1.v20121003-ad44345, and judging by the search results, this version also does not work reliably when the @Id is a long. The best solution is to declare the dependency in pom.xml and use at least version 2.5.2.

Enable TLS in SMP3

SSL is out, TLS is the new kid in town (although already pretty old), and to keep security high on your SMP3 server, a question remains: how do you enable TLS on SMP3? Easy: it is already configured!

By default, SMP3 comes with TLS enabled. The trick is to configure it the way you want it to be. For one, there are the ciphers (not part of this blog), and then there is the protocol. The protocol defines whether a browser can use TLS v1, v1.1 or v1.2. The configuration is done on the server side, in the default-server.xml file located at:

/<SMP3 installation directory>/Server/config_master/org.eclipse.gemini.web.tomcat/default-server.xml

As SMP3 uses Tomcat as its web server, the usual Tomcat configuration parameters apply. To have an HTTPS connection on port 8081, the XML looks like this:

<Connector SSLEnabled="true" ciphers="TLS_RSA_WITH_AES_128_CBC_SHA" clientAuth="false"
    keyAlias="smp3" maxThreads="200" port="8081"
    protocol="com.sap.mobile.platform.coyote.http11.SapHttp11Protocol" scheme="https"
    secure="true" smpConnectorName="oneWaySSL" sslEnabledProtocols="TLSv1" sslProtocol="TLS"/>

Parameters

  • port: defines the port Tomcat will listen on. Here it is 8081.
  • sslEnabledProtocols: "The comma separated list of SSL protocols to support for HTTPS connections. If specified, only the protocols that are listed and supported by the SSL implementation will be enabled." [1]
  • sslProtocol: "The SSL protocol(s) to use (a single value may enable multiple protocols – see the JVM documentation for details). If not specified, the default is TLS." [1]

Connecting to the port results in a TLSv1 connection:

The parameters that define which protocol can be used are sslEnabledProtocols and sslProtocol. Now, which one does what? I found [2] and [3] explaining this:

  1. sslProtocol="TLS" will enable SSLv3 and TLSv1
  2. sslProtocol="TLSv1.2" will enable SSLv3, TLSv1, TLSv1.1 and TLSv1.2
  3. sslProtocol="TLSv1.1" will enable SSLv3, TLSv1, and TLSv1.1
  4. sslProtocol="TLSv1" will enable SSLv3 and TLSv1

In the above example, sslProtocol is TLS, therefore TLSv1 and SSLv3 are available. To limit the connection to TLSv1, sslEnabledProtocols must be set to TLSv1. To have a connection that allows TLSv1, TLSv1.1 and TLSv1.2 (and let the browser decide which one to use), set sslEnabledProtocols to TLSv1,TLSv1.1,TLSv1.2.

Example

<Connector SSLEnabled="true"
    ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA"
    clientAuth="false" keyAlias="tobias" maxThreads="200" port="8081"
    protocol="com.sap.mobile.platform.coyote.http11.SapHttp11Protocol" scheme="https"
    secure="true" smpConnectorName="oneWaySSL"
    sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" sslProtocol="TLS"/>

If I now connect on port 8081, my browser should use the highest protocol available.
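To check which protocol a Java client actually negotiates, a small test program can be used (a sketch; host, port and the requirement that the JVM trusts the server certificate are assumptions):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;

public class TlsCheck {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        try (SSLSocket socket = (SSLSocket) ctx.getSocketFactory().createSocket("localhost", 8081)) {
            socket.startHandshake();
            // prints e.g. TLSv1.2 when that is the highest protocol both sides support
            System.out.println("Negotiated protocol: " + socket.getSession().getProtocol());
        }
    }
}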

[1] https://tomcat.apache.org/tomcat-7.0-doc/config/http.html

[2] http://mail-archives.apache.org/mod_mbox/tomcat-users/201303.mbox/%3C13A085B2E018374C813676301AED0EE412D87457C3@BLR0EXC00.us.sonicwall.com%3E

[3] http://wiki.apache.org/tomcat/Security/POODLE

SMP 3 – Configuring Strong Encryption for JVM Security

SMP 3 is a Java application running inside Virgo. So that you do not have to worry about Java versions and installation, the installer even installs SAP JVM together with the server. So you have an SMP 3 installation and a Java installation at hand. This means that you automatically get Java security features … and some legacy problems that come from the dark ages of the Internet. One is that you have to enable strong encryption for SMP3's Java. This is needed at least when you are going to use SAML2 with ADFS as authentication provider. SAML 2 allows the IdP to encrypt the SAML response to make sure only the SP can decrypt it. The algorithms used for this require strong encryption, which is not available to Java by default. It needs to be activated manually.

Procedure

The procedure for how to do this can be found at SAP Help. To enable strong encryption, a policy file must be downloaded from Oracle and placed into a Java folder.

  1. Download policy file.

    URL: http://help.sap.com/disclaimer?site=http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

  2. Click on accept to enable the download link.

  3. Click on the link: UnlimitedJCEPolicyJDK7.zip. This downloads a ZIP file containing 2 JAR files. These 2 files must be copied to the SMP 3 Java JVM.

  4. Stop SMP 3 server.
  5. Copy the 2 JAR files to:

    Folder: <SMP3 installation dir>/sapjvm_7/jre/lib/security

  6. The installation path is outlined in the Readme that is part of the downloaded policy file:

    3) Install the unlimited strength policy JAR files.

    In case you later decide to revert to the original "strong" but limited policy versions, first make a copy of the original JCE policy files (US_export_policy.jar and local_policy.jar). Then replace the strong policy files with the unlimited strength versions extracted in the previous step.

    The standard place for JCE jurisdiction policy JAR files is:

    <java-home>/lib/security [Unix]
    <java-home>\lib\security [Windows]

  7. Restart SMP 3

    Command: go.bat


Result

After installing the policy file, the JVM has strong encryption enabled.

Test

If you want to test whether it worked, there is a code snippet available on SO.

Just run it as a Java program.

  • Compile: /sap/MobilePlatform3/sapjvm_7/bin/javac TestUCE.java
  • Run: /sap/MobilePlatform3/sapjvm_7/bin/java TestUCE
  • Result:
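In case the SO snippet is not at hand, a minimal equivalent check looks like this (a sketch, not the original snippet; it reuses the class name TestUCE from the commands above):

import javax.crypto.Cipher;

public class TestUCE {
    public static void main(String[] args) throws Exception {
        // prints 128 with the default policy, 2147483647 (Integer.MAX_VALUE) with the unlimited policy
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}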


Subsonic on Raspberry Pi

About Subsonic

“Subsonic is an open source, web-based media server. It is written in Java, so it can run on any operating system with Java support. Subsonic supports streaming to multiple clients simultaneously, and supports any streamable media.” (Source: Wikipedia)

My first contact with Subsonic was several years ago. If memory serves me right, it was around 2008 when I was looking for media software that could be accessed remotely. At that time, Subsonic and the internet in Rio de Janeiro didn't serve me well enough to continue my endeavor with Subsonic. Only in 2015 did I come back to it, thanks to the Raspberry Pi. This combination gave me a new look at media access. So far the experience is good enough to make me want to share it with others. If you want to stream your private music collection without spending money on a cloud-based server / service, this blog may be for you.

Pre-Requisites

Install Java 8

Subsonic wants Java 8, and Java 8 is available for the Raspberry Pi. You can download it from the Oracle Java website; the version you need is the one compatible with the Raspberry Pi processor: jdk-8-oracle-arm-vfp-hflt. Or you install it using apt-get.

Command: sudo apt-get install oracle-java8-jdk

This downloads the required packages.

Afterwards, Java 8 is configured.

To test if Java 8 is available and correctly installed, just call Java.

Command: java -version

The output shows that Java 8 is installed. Congratulations!

Set JAVA_HOME

Java is installed, but for applications to know where to find it, an environment variable is used: JAVA_HOME. This variable points to the installation directory of Java. To avoid having to configure this for each user, the configuration can be made global for all users. The above command installed Java 8 at this location: /usr/lib/jvm/jdk-8-oracle-arm-vfp-hflt

Command: sudo vim /etc/environment

Insert JAVA_HOME=/usr/lib/jvm/jdk-8-oracle-arm-vfp-hflt
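After logging out and back in, verify that the variable is set:

Command: echo $JAVA_HOME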

Installation

Download Subsonic

Subsonic can be downloaded from the Project homepage: http://www.subsonic.org/pages/download.jsp

Click the link to go to the download page, copy the actual download link from there, and use wget on the Raspberry Pi to download it.

Command: wget http://downloads.sourceforge.net/project/subsonic/subsonic/5.2.1/subsonic-5.2.1.deb

In case the file wasn’t saved as subsonic-5.2.1.deb, rename it. You do not have to, but it makes things easier.

Install Subsonic

The file downloaded above is a deb file. These files are meant to be used by the Debian package manager and contain the actual files to be installed plus dependency information.

Command: sudo dpkg -i subsonic-5.2.1.deb

This installs and already starts Subsonic. To see the output log:

Command: sudo tail /var/subsonic/subsonic_sh.log

Not exactly what we want, as now Subsonic is already running, but not configured. To stop Subsonic:

Command: sudo /etc/init.d/subsonic stop

Subsonic stores its data in default folders; on Debian the default is /var/subsonic. Because Subsonic was already started, this folder has been created and filled with content, using the default Subsonic user: root (yep, BAD, very BAD!).

Configuration

Subsonic will run in the background as a service, started at boot. For this to work, a subsonic user needs to be configured.

Create user

Command: sudo adduser subsonic

Add the user to the audio group, in case you want Subsonic to output audio.

Command: sudo adduser subsonic audio

How do you make Subsonic use that user and run under that user ID and not as root? The user information is stored in the default Subsonic configuration file: /etc/default/subsonic.

Command: more /etc/default/subsonic

The last line must be changed to: SUBSONIC_USER=subsonic

Permissions

Make user subsonic owner of /var/subsonic

Command: sudo chown subsonic:subsonic /var/subsonic -Rv

Reverse Proxy

Subsonic can now be accessed, but I want to be able to access it through my standard web site (this one), without having to do much port forwarding or set up virtual hosts. The easiest solution is to use Apache as a reverse proxy.

Change URL

As Subsonic will run behind a reverse proxy, the standard URL will be different: the URL used will be /subsonic. Therefore, Subsonic's configuration must be made aware of that. To find out the correct parameter, take a look at which parameters Subsonic supports.

Command: subsonic --help

The parameter is context-path. This parameter must be added to the config file.
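With the Debian package, the parameter goes into /etc/default/subsonic by extending the arguments line; a sketch (the other arguments already present on your system may differ):

SUBSONIC_ARGS="--max-memory=150 --context-path=/subsonic"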

Configure Apache Reverse Proxy

Add the following RP rules to the config file of the virtual server:

In my case, it is default-ssl
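A typical pair of directives (assuming Subsonic listens on its default port 4040 and uses the context path /subsonic configured above) looks like:

ProxyPass /subsonic http://localhost:4040/subsonic
ProxyPassReverse /subsonic http://localhost:4040/subsonic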

For reverse proxy to work, the module must be enabled.

Command: a2enmod proxy_http

Restart Apache

Command: sudo apache2ctl restart

That’s it from the Apache as reverse proxy part. Subsonic is already configured to use the new URL and Apache is ready.

Start subsonic

To be able to use Subsonic from the internet, just start it and check that everything is working correctly. Start subsonic:

Command: sudo /etc/init.d/subsonic start

Check pid:

Command: ls -alh /run/subsonic.pid

  • Created as user subsonic

Check process:

Command: ps -ef | grep subsonic

Use Subsonic

Log on to Subsonic.

Advanced features

Transcoding

It may be useful to transcode some music files on the fly, for instance when FLAC files would consume too much bandwidth, or when the user accesses Subsonic over a low-bandwidth network like 4G in Brazil. Subsonic allows for automatic transcoding of files. This feature can be activated per user, and the sampling limit can also be specified. It is therefore possible to define a user for mobile client usage and specify a max bitrate of 128 kbps for that user. The max bandwidth is defined in the user section of the configuration settings.

User settings

Transcoding settings

The programs ffmpeg and lame are installed automatically when Subsonic is installed via the Debian package manager.
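For reference, a transcoding step in Subsonic is a command template in which %s stands for the input file and %b for the target bitrate; a FLAC-to-MP3 step typically looks roughly like this (an assumption based on Subsonic defaults — verify under Settings > Transcoding):

ffmpeg -i %s -ab %bk -v 0 -f mp3 -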

NWDS update site setup

NWDS 7.3/7.4 uses the update site concept of Eclipse. This makes it easier to update NWDS, as an updated component only needs to be updated at the central update site; there is no need to distribute a whole NWDS installation package to the developers. The NWDS update site even includes a zip archive of the latest NWDS. That means the developer does not have to download an NWDS version from SAP Service Marketplace or nwds.sap.com.

  • Official documentation at SAP Help: link
  • Information on SCN: link

There is no separate NWDS 7.4 for NetWeaver Java 7.4; you use the 7.31 version when developing applications for NW 7.4 (SAP Note). To set up an update site, first download the SCA.

This SCA contains the archives, but not the tool needed to create the update site. You can download the tool from here: link. This tool is available for Windows.

The tool helps you extract the content of the SCA and configure the update site URL. Afterwards, create an alias in the NW Java HTTP provider and copy the files to the directory specified by the alias.

Example

Set the alias to updatesite_731SP13. This alias points to the directory /home/cesadm/updatesite/731SP13

On the server, the folder content looks like this:

The total size of the update site here is 2.5 GB. To access the update site via HTTP, enter the complete path to index.html:

http://host.fqdn:port/updatesite_731SP13/index.html

In NWDS, the update site is configured under the available software sites.

That's it. Now NWDS can be updated from the update site.

SAP WebDispatcher and Logstash – installation and configuration

This document explains how to install and configure an environment for analyzing SAP Web Dispatcher (WD) logs with Logstash, Elasticsearch and Kibana under Linux. Kibana 3 needs a running web server. The example shown here uses nginx, but won't detail how to set up nginx.

Components referred to in this document:

SAP WebDispatcher

“The SAP Web dispatcher lies between the Internet and your SAP system. It is the entry point for HTTP(s) requests into your system, which consists of one or more SAP NetWeaver application servers.” http://help.sap.com/saphelp_nw73ehp1/helpdata/en/48/8fe37933114e6fe10000000a421937/frameset.htm

Logstash

“logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs.” http://logstash.net/

Elasticsearch

“Elasticsearch is a powerful open source search and analytics engine that makes data easy to explore.” http://www.elasticsearch.org/

Kibana

“Kibana is an open source, browser based analytics and search dashboard for ElasticSearch.” http://www.elasticsearch.org/overview/kibana/

Nginx

“nginx (pronounced engine-x) is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.” http://wiki.nginx.org/Main


Install Elasticsearch

Installation in 3 steps: http://www.elasticsearch.org/overview/elkdownloads/

  1. Command: wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.tar.gz

  2. Extract archive

    Command: tar -zxvf elasticsearch-1.4.2.tar.gz

  3. Start Elasticsearch

    Command

    cd elasticsearch-1.4.2

    cd bin

    ./elasticsearch

Install Logstash

Installation in 3 steps: http://www.elasticsearch.org/overview/elkdownloads/

  1. Command:

    wget https://download.elasticsearch.org/logstash/logstash/logstash-contrib-1.4.2.tar.gz

  2. Extract

    Command: tar -zxvf logstash-contrib-1.4.2.tar.gz

  3. Run logstash. Before logstash can be run, it must be configured. Configuration is done in a config file.

Logstash configuration

The configuration of logstash depends on the log configuration of WD. Logstash comes out of the box with everything it takes to read Apache logs. In case WD is configured to write logs in Apache format, no additional configuration is needed. WD also offers the option to write additional information to the log.

Log formats: http://help.sap.com/saphelp_nw73ehp1/helpdata/en/48/442541e0804bb8e10000000a42189b/content.htm?frameset=/en/48/8fe37933114e6fe10000000a421937/frameset.htm&current_toc=/en/ed/2429371ec14c23a7508affa1280d07/plain.htm&node_id=46&show_children=false

  • CLF: This is how Apache logs. It contains most of the information needed.
  • CLFMOD: Same format as CLF, but without form fields and parameters, for security reasons.
  • SAP: Writes basic information and no client IP, but contains the processing time on the SAP Application Server. This is a field you will really need.
  • SMD: For SolMan Diagnostics; same as SAP, but contains the correlation ID.

As mentioned before, for CLF logstash comes with everything already configured. A log format that makes sense is SMD because of the response time. In that case, logstash must be configured to correctly parse the WD log. Logstash uses regular expressions to extract information. To make logstash understand the SMD log format, the correct regular expression must be made available. Grok uses the pattern file to extract the information from the log: http://logstash.net/docs/1.4.2/filters/grok. The standard pattern file can be found here: https://github.com/elasticsearch/logstash/tree/v1.4.2/patterns

For instance, to extract the value of the correlation ID when the log format is set to SMD, the regular expression is:

CORRELATIONID [a-zA-Z]\[\-\]

For WD with the SMD log format, the complete regular expression is:

TEST2 \|

WEBDISPATCHER \[%{HTTPDATE:timestamp}\] %{USER:ident} "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) \[%{NUMBER:duration}\] %{CORRELATIONID:correlationid} %{TEST2:num1}


When the IP is added to the WD log with SMD, the regular expression is:

TEST2 \|

CORRELATIONID [a-zA-Z]\[\-\]

WEBDISPATCHERTPP %{IP:ip} \[%{HTTPDATE:timestamp}\] %{USER:ident} "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)\[%{NUMBER:duration}\] %{CORRELATIONID:correlationid} %{TEST2:num1}


You can find an example pattern file here: https://github.com/tobiashofmann/wd_logstash. The standard grok pattern file defines regular expressions for user id, IPv4/6, data, etc.

The actual configuration file consists of three sections: input, filter and output. The input part defines the logs to read, the filter part defines the filter to be applied to the input and the output part specifies where to write the result to. Let’s take a look at each of the sections:

Input

input {
    file {
        type => "wd"
        path => ["/usr/sap/webdispatcher/access*"]
        start_position => "beginning"
        codec => plain {
            charset => "ISO-8859-1"
        }
    }
}

All files starting with access in the directory /usr/sap/webdispatcher are read by logstash. The codec parameter ensures URLs with special characters are read correctly. A type named wd is added to every line read.

Filter

filter {
    if [type] == "wd" {
        grok {
            patterns_dir => "./patterns"
            match => { "message" => "%{WEBDISPATCHER}" }
        }
        date {
            match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
        }
        mutate {
            convert => [ "bytes", "integer" ]
            convert => [ "duration", "integer" ]
        }
    }
}

The filter is applied to all lines with type wd (see input). Grok applies the regular expressions, and the patterns_dir parameter tells it where to find the customized patterns for WD. The date value is taken from the timestamp field; if this is not set, logstash uses the time at which the line was read, while what you want is the logged access time of the HTTP request. To facilitate later analysis, the values bytes and duration are converted to integers.

Output

output {
    elasticsearch {
        host => "localhost"
        index => "wd"
        index_type => "logs"
        protocol => "http"
    }
}

As output, a local Elasticsearch server is defined. The logs are written to the index wd with index type logs. This stores the log lines in Elasticsearch and makes them accessible for further processing.
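To check that documents are actually arriving, Elasticsearch can be queried directly (assuming the default HTTP port 9200):

Command: curl http://localhost:9200/wd/_count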

A sample configuration file can be found here: https://github.com/tobiashofmann/wd_logstash


Run logstash

To run logstash and let it read the WD logs, use the following command:

./logstash -f logstash.conf

This will start logstash. It takes a few seconds for the JVM to come up and read the first log file. Afterwards the log files are parsed and sent over to Elasticsearch.

Kibana

Installation in 3 steps: http://www.elasticsearch.org/overview/elkdownloads/

  1. Go to the HTML directory configured for nginx, like /var/www/html

    Command: cd /var/www/html

  2. Command: wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz

  3. Extract archive

    Command: tar -zxvf kibana-3.1.2.tar.gz

  4. Configure nginx

    Add a location to the nginx configuration file to make the Kibana application available under /kibana:

    location /kibana {
        alias /var/www/html/<dir of kibana>;
    }

  5. Access Kibana on web browser: http://webserver:port/kibana

Test if certificate is correctly installed in keystore

SMP 3 connects to an authentication provider using SSL. For SMP 3 to be able to connect successfully to the SSL-enabled backend service, the server certificate of that backend must be installed in SMP 3. This means installing the certificate into the keystore used by SMP 3.

  • The keystore is located at the folder: /<location of SMP 3>/Server/configuration
  • The keystore file is named smp_keystore.jks

To get the server certificate, just export it in PEM format using a browser like IE, FF or Chrome. To do so, you'll have to open the URL and then export the certificate.

Retrieve the backend server certificate

  1. Open the HTTPS URL in FF.
  2. Click on More Information…
  3. Select View Certificate.
  4. Go to tab Details.
  5. Select Export.
  6. Select X.509 Certificate (PEM) as output and Save.

Add certificate to keystore

In the configuration folder of SMP, the Java keytool will be used to add this certificate to the list of known and accepted certificates.

/<location of SMP 3>/sapjvm_7/bin/keytool -import -v -alias nw74 -keystore smp_keystore.jks -file nw74.tobias.de

The keytool shows the information of the certificate and asks if you really want to import it. Answer yes and the certificate is added to the trusted list.

Test the keystore

A Java class to test an SSL connection is available here: https://confluence.atlassian.com/display/JIRAKB/Unable+to+Connect+to+SSL+Services+due+to+PKIX+Path+Building+Failed+sun.security.provider.certpath.SunCertPathBuilderException

To download the class:

wget "https://confluence.atlassian.com/download/attachments/225122392/SSLPoke.class?version=1&modificationDate=1288204937304&api=v2"

This downloads the class file and stores it as SSLPoke.class?version=1

To make it easier to work with, rename the file to SSLPoke.class. The above URL also explains how to use the class and how to specify the keystore to use for verifying the connection.

java -Djavax.net.ssl.trustStore=/my/custom/truststore SSLPoke localhost 443

For testing the keystore against the public available SAP ES1 system:

/<location of SMP 3>/sapjvm_7/bin/java -Djavax.net.ssl.trustStore=/sap/MobilePlatform3/Server/configuration/smp_keystore.jks SSLPoke sapes1.sapdevcenter.com 443

If it works, the output is a simple: Successfully connected

In case the server certificate is not part of the keystore, the tool prints an error message:


Loading data into Sonar

The following blog was meant to be part of my Sonar with SAP Java series originally published at SCN, but somehow it was never published. So, here we go: some more information on Sonar and SAP Java, including FindBugs for code quality checks.

Sonar needs to be fed with data. From inside Sonar you cannot define a path with files to be analyzed; the task of sending files is up to the developer (or a piece of software — yes, in this case the developer can be replaced by a script). The process for loading the data to be analyzed is described in great detail on the Sonar homepage. I'm using ant for this. I created a stub build.xml file for my needs that does the compilation of SAP Portal PAR and EAR archives.

<property name="sonar.jdbc.url"
    value="jdbc:mysql://localhost:3306/sonar?useUnicode=true&amp;characterEncoding=utf8"/>
<property name="sonar.jdbc.driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="sonar.jdbc.username" value="sonar"/>
<property name="sonar.jdbc.password" value="sonar"/>
<property name="sonar.host.url" value="http://localhost/sonar"/>

<target name="sonar">
    <property name="sonar.binaries" value="path to jar files"/>
    <sonar:sonar workDir="." key="com.tobias.km.par:KmListFiles" version="1.0"
            xmlns:sonar="antlib:org.sonar.ant">
        <sources>
            <path location="dist"/>
            <path location="src.core"/>
        </sources>
        <property key="sonar.projectName" value="KM List Files PAR"/>
        <property key="sonar.dynamicAnalysis" value="false"/>
    </sonar:sonar>
</target>

Complete ant file

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<project basedir="." default="par" name="UserList">

    <property environment="env"/>
    <property name="JRE_LIB" value="../../../../../../Java14/jre/lib/rt.jar"/>
    <property name="ECLIPSE_HOME" value="../../../eclipse"/>
    <property name="debuglevel" value="source,lines,vars"/>
    <property name="target" value="1.4"/>
    <property name="source" value="1.4"/>

    <property name="name.par" value="com.tobias.km.KmListFiles.par"/>

    <property name="sonar.jdbc.url"
        value="jdbc:mysql://localhost:3306/sonar?useUnicode=true&amp;characterEncoding=utf8"/>
    <property name="sonar.jdbc.driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="sonar.jdbc.username" value="sonar"/>
    <property name="sonar.jdbc.password" value="sonar"/>
    <property name="sonar.host.url" value="http://localhost:2080/sonar"/>

    <path id="project.classpath">
        <pathelement location="classes.api"/>
        <pathelement location="${JRE_LIB}"/>
        <pathelement location="C:/Dev/jar/servlet.jar"/>
        <pathelement location="C:/Dev/jar/bc.uwl.service.api_api.jar"/>
        <pathelement location="C:/Dev/jar/bc.rf.framework_api.jar"/>
        <pathelement location="C:/Dev/jar/bc.util.public_api.jar"/>
        <pathelement location="C:/Dev/jar/com.sap.portal.ivs.connectorservice_api.jar"/>
        <pathelement location="C:/Dev/jar/com.sap.portal.usermanagementapi.jar"/>
        <pathelement location="C:/Dev/jar/com.sap.security.api.ep5.jar"/>
        <pathelement location="C:/Dev/jar/ConnectorHelper.jar"/>
        <pathelement location="C:/Dev/jar/ExtendedConnector.jar"/>
        <pathelement location="C:/Dev/jar/GenericConnector.jar"/>
        <pathelement location="C:/Dev/jar/lafapi.jar"/>
        <pathelement location="C:/Dev/jar/portal_services_api_lib.jar"/>
        <pathelement location="C:/Dev/jar/prtapi.jar"/>
        <pathelement location="C:/Dev/jar/prtconnection.jar"/>
        <pathelement location="C:/Dev/jar/prtcoreservice.jar"/>
        <pathelement location="C:/Dev/jar/prtdeploymentapi.jar"/>
        <pathelement location="C:/Dev/jar/prtjsp_api.jar"/>
        <pathelement location="C:/Dev/jar/prttest.jar"/>
        <pathelement location="C:/Dev/IDE702/eclipse/plugins/com.sap.security_2.0.0/lib/com.sap.security.api.jar"/>
    </path>

    <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml"/>

    <target name="sonar">
        <!--<property name="sonar.libraries" value="C:\Dev\jar\SAP Portal JARs\*.jar" />-->
        <!--<property key="sonar.libraries" value="project.classpath" />-->
        <property name="sonar.tests" value=""/>
        <property name="sonar.binaries" value="build/classes"/>

        <!-- findbugs -->
        <path id="sap.jars">
            <fileset dir="C:\Dev\jar\SAP Portal JARs" includes="*.jar"/>
        </path>
        <pathconvert property="sonar.libraries" refid="sap.jars" pathsep=","/>

        <sonar:sonar workDir="." key="com.tobias.km.par:KmListFiles" version="1.0"
                xmlns:sonar="antlib:org.sonar.ant">
            <sources>
                <path location="dist"/>
                <path location="src.core"/>
            </sources>
            <property key="sonar.projectName" value="KM List Files PAR"/>
            <property key="sonar.dynamicAnalysis" value="false"/>
        </sonar:sonar>
    </target>

    <path id="compile.classpath">
        <pathelement location="${JRE_LIB}"/>
        <pathelement location="C:/Dev/IDE702/eclipse/plugins/com.sap.security_2.0.0/lib/com.sap.security.api.jar"/>
        <fileset dir="C:\Dev\jar">
            <include name="*.jar"/>
        </fileset>
    </path>

    <target name="init">
        <mkdir dir="build/classes"/>
        <mkdir dir="dist_temp"/>
    </target>

    <target name="compile" depends="init">
        <javac destdir="build/classes" debug="true" srcdir="src.core">
            <classpath refid="compile.classpath"/>
        </javac>
    </target>

    <target name="par" depends="compile">
        <war destfile="dist_temp/${name.par}" webxml="dist/PORTAL-INF/portalapp.xml">
            <fileset dir="dist"/>
            <lib dir="dist/PORTAL-INF/private/lib"/>
            <classes dir="build/classes"/>
        </war>
        <copy includeemptydirs="false" todir="./">
            <fileset dir="dist_temp"/>
        </copy>
    </target>

</project>

Load all necessary JARs, i.e. make them available to FindBugs:

<property name="sonar.binaries" value="build/classes"/>

<path id="sap.jars">
    <fileset dir="C:\Dev\jar\SAP Portal JARs" includes="*.jar"/>
</path>
<pathconvert property="sonar.libraries" refid="sap.jars" pathsep=","/>

Activate FindBugs in Sonar:

  • Sonar way with Findbugs

Example code that triggers a blocker in FindBugs:

Object o = null;

public String foo(Object o) {
    if (Integer.class.isInstance(o)) {
        return (String) o;
    }
    return "";
}

String a = foo(o);

The code won't work, as o is null, but it will compile. Instead of deploying that code and then being surprised by an exception at runtime, let Sonar and FindBugs do this work.

A nice thing about FindBugs is that it tracks the values of variables and objects.

Object o = null;
if (o instanceof IPortalComponentRequest) {}

FindBugs identifies that we are checking a value that is known to be null.

FindBugs even analyzes the code further and gives more information for the same line of code:

And without FindBugs? What does Sonar find from just looking at the source code?

No more blockers, and rules compliance went up from 35% to 63%. Sonar is able to identify serious problems from source code analysis alone, but to get the most out of the analysis, FindBugs should also be activated.

The critical error shows only the empty if statement:

So no value tracking and no identification of possible NullPointerExceptions. And these are the ones that are really annoying: they can be predicted, but finding them can be a tedious task.

Expose a BAPI using JSON and REST

Note: first published on SCN on 22.5.2012


REST.

JSON.

These are the technologies you need when writing modern web applications. And that is what makes it so hard to use SAP together with modern web applications: currently you only get REST from SAP with Gateway, but not JSON. OData comes from Microsoft, and support is not as widespread as one might expect. What does OData look like? You can try it out by downloading the Gateway trial from SCN or by using the online demo system. To experiment with Gateway, the usual flight example from SAP can be used. To get the details of a specific flight: sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT/FlightCollection(carrid='AA',connid='0017',fldate='20110601')

Data returned from Gateway looks like this:

Instead of being able to use the data exposed by Gateway with widely used JavaScript frameworks like jQuery, you need to find an OData plugin. Of course you can still use SAP UI5, but what if you are forced to use jQuery, Sencha, Dojo or something else?

That's a problem with SAP and ABAP in general: you have to wait until SAP or a developer implements the functionality, and because of the limited resources available in the ABAP ecosystem this can take a while, cost a serious amount of money, or never happen. That's where we can be happy that SAP also embraces the use of Java. As Java was made for the internet, there is a huge Java community out there that most probably has already developed a tool that fits your needs.

For exposing data as JSON in a RESTful way, the tool available is Jersey.

After Jersey transformed the data, the response looks like:

{"EXTENSION_IN":{"item":[]},"EXTENSION_OUT":{"item":[]},"RETURN":{"item":[{"TYPE":"S","ID":"BC_IBF","NUMBER":"000","MESSAGE":"Method was executed successfully","LOG_NO":"","LOG_MSG_NO":"000000","MESSAGE_V1":"","MESSAGE_V2":"","MESSAGE_V3":"","MESSAGE_V4":"","PARAMETER":"","ROW":0,"FIELD":"","SYSTEM":"NPLCLNT001"}]},"ADDITIONAL_INFO":{"FLIGHTTIME":361,"DISTANCE":2572.0000,"UNIT":"MI","UNIT_ISO":"SMI","PLANETYPE":"747-400","FLIGHTTYPE":""},"AVAILIBILITY":{"ECONOMAX":385,"ECONOFREE":20,"BUSINMAX":31,"BUSINFREE":1,"FIRSTMAX":21,"FIRSTFREE":3},"FLIGHT_DATA":{"AIRLINEID":"AA","AIRLINE":"American Airlines","CONNECTID":"0017","FLIGHTDATE":1306897200000,"AIRPORTFR":"JFK","CITYFROM":"NEW YORK","AIRPORTTO":"SFO","CITYTO":"SAN FRANCISCO","DEPTIME":50400000,"ARRTIME":61260000,"ARRDATE":1306897200000,"PRICE":422.9400,"CURR":"USD","CURR_ISO":"USD"}}

Jersey needs Java 6 and runs in a servlet container. As NetWeaver CE >= 7.2 fulfills these requirements, Jersey can be used to transform POJOs into JSON and expose them via REST. NW CE comes with a framework for creating composite applications (CAF) that can consume BAPIs. CAF uses standard J2EE technology like JCA and beans. Such a bean can be used by Jersey to automatically extract the data, transform it into JSON and make it accessible via REST.

Consuming a BAPI using CAF can be done with no coding involved at all, as CAF comes with some nice wizards. Just map the input and output parameters and the code will be generated, including the bean that can be used to interact with the BAPI. In the Jersey web application, the URL and protocol get defined:

@GET
@Path("getFlight/carrid/{carrid}/connid/{connid}/fldate/{fldate}")
@Produces("application/json")

The CAF bean gets called using the parameters retrieved from the URL:

out = e.bapiFLIGHTGETDETAIL(null, null, null, carrid, connid, flightDate);
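Put together, the resource class looks roughly like this (a sketch; the bean type FlightService, its injection and the wrapper class FlightGetDetailOutput are assumptions based on what the CAF wizard generates):

import java.text.SimpleDateFormat;
import java.util.Date;
import javax.ejb.EJB;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/")
public class FlightResource {

    @EJB // the CAF-generated session bean wrapping BAPI_FLIGHT_GETDETAIL (name assumed)
    private FlightService e;

    @GET
    @Path("getFlight/carrid/{carrid}/connid/{connid}/fldate/{fldate}")
    @Produces("application/json")
    public FlightGetDetailOutput getFlight(@PathParam("carrid") String carrid,
                                           @PathParam("connid") String connid,
                                           @PathParam("fldate") String fldate) throws Exception {
        // parse the flight date from the URL and call the BAPI via the CAF bean
        Date flightDate = new SimpleDateFormat("yyyyMMdd").parse(fldate);
        // Jersey serializes the returned POJO to JSON automatically
        return e.bapiFLIGHTGETDETAIL(null, null, null, carrid, connid, flightDate);
    }
}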

That’s it. Jersey will do the rest:

How the URL gets constructed is up to you; the parameters can be part of the URL path as above or a query string. You can also define whether it is GET, POST, PUT, etc. If you want to do some coding, you can adjust the output, or combine the results of several BAPIs into one Java object that Jersey will expose as JSON.

Now that the BAPI can be accessed in a RESTful way and the data retrieved is in JSON format, it's easy to use the data in a JavaScript framework like jQuery with DataTables:

The actual coding for making the CAF bean accessible to Jersey and exposing the data is less than 10 lines of code. Getting from CAF to Jersey to a running HTML application does not even take 30 minutes.

The framework I used for interacting with the ABAP system is CAF, a framework that has been available for NetWeaver Java for a while (since 7.0): well documented, enterprise-ready and supported. If you want or need to expose your BAPI via REST and JSON or XML and you have a NetWeaver CE >= 7.2 (like Portal 7.3) available, you can start today. As your NW CE system is a separate one, upgrades won't affect your backend system, and as Jersey is also a separate application, changes to Jersey don't affect your other CE applications. Of course, you don't need NW CE for using Jersey and interacting with a BAPI. Every platform that Jersey supports and where you can use JCo from SAP to call a BAPI can be used, like Tomcat. It just means that some extra configuration and coding will be necessary. This also implies that your backend ABAP system can be used as long as a JCo connection to it is possible.