Activate Clickjacking-Framing-Protection service

SAP NetWeaver comes with its own solution to prevent clickjacking for its most relevant UI frameworks. For more information about this protection, see the corresponding SAP Notes.

By default, clickjacking protection is disabled. To activate it, you need to insert a value into table HTTP_WHITELIST.

Insert values into table HTTP_WHITELIST

Transaction: SE16

Check if the clickjacking protection service is enabled or disabled. It is disabled if no record with ENTRY_TYPE = 30 exists in the table, or if the table is empty.




By default, no values are in the table and the service is not enabled. For the data that needs to be inserted into table HTTP_WHITELIST, see SAP Note 2142551. Creating an entry with ENTRY_TYPE value 30 activates the whitelist.

Transaction: SE16

Press F5 or click the new entry icon.

Insert data. See links below for additional information on possible values.

Click save to persist the entry in the table.

Afterwards, the table will contain one record. As the record has value 30 for column ENTRY_TYPE, the clickjacking protection service is enabled.
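As a sketch, such an activation entry could look like the following. The column layout is my reading of SAP Note 2142551 and the host/port values are purely illustrative, so verify both against the note before using them:

```
ENTRY_TYPE  SORT_KEY  PROTOCOL  HOST  PORT  URL
30          0001      *         *     *     *
```

With ENTRY_TYPE 30 present, the clickjacking protection whitelist is considered active; the remaining columns restrict which framing hosts are allowed.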

Activate ICF whitelist service

Adding the record activates the service, but to make apps work, additional configuration steps must be taken. For instance, accessing a WDA app (e.g. SAML2) now results in an HTTP 500 internal server error. This is caused by having clickjacking protection activated, but not the whitelist service.

To solve the HTTP 500 error, you need to activate the ICF whitelist service.

Transaction SICF_INST
Technical name: UICS_BASIC

Execute. This will activate the ICF node.



After enabling the service and the ICF node, the above WDA app will open in the browser.


Additional information on setting whitelist entries.



Reset password for SAP Web Dispatcher user

It happened. You no longer remember the password created by SAP Web Dispatcher (WD) during the bootstrap operation. This is not necessarily bad (who can remember a password like aR$#¨%_09fms!” anyway?), and normally your browser saves it for you (hm, maybe not so good) or your password safe does (better). But if the password is gone, you cannot log on to the WD admin interface anymore. No worries: if you have access to the computer where WD is running, you can either

  1. Get the icmauth.txt file and try to hack the password or
  2. Create a new password for your user.

I prefer option 2.

The documentation at SAP Help gives you some options, like recreating the configuration (bootstrap), after which you get a new password for the icmadm user.

  • Creating Administration Users SAP Help

The online documentation for this section only mentions icmon, but for Web Dispatcher you have to use wdispmon. The authors explain this on the parent page of the topic and justify it by saying it makes things easier. I am not sure for whom, but definitely not for the person reading the guide, as you have to read the parent page to find out why icmon is not available for WD. Note: the page is about WD and still the documentation uses the ICM commands for NetWeaver ABAP #yay.

Content of the icmauth.txt file looks like:

# Authentication file for ICM and SAP Web Dispatcher authentication


Each line consists of three fields:

  • Field 1: icmadm — the user
  • Field 2: {SHA384}z3… — the hashed password of the user
  • Field 3: admin — the group of the user
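To illustrate the three-field layout, here is a small shell sketch that splits such a line (the hash value is shortened and made up):

```shell
# A sample icmauth.txt line: user, password hash, group (hash is made up)
line='icmadm {SHA384}z3abc admin'

user=${line%% *}      # first field: the user
rest=${line#* }
hash=${rest%% *}      # second field: the hashed password
group=${rest##* }     # third field: the group

echo "user=$user hash=$hash group=$group"
# prints: user=icmadm hash={SHA384}z3abc group=admin
```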

To change the password of the user icmadm you have to use the wdispmon command with the -a flag. Also provide the path to the WD profile file.

Command: wdispmon -a pf=sapwebdisp.pfl

Enter c to change the password of an existing user.

Enter the new password. For now, the new password is not yet available to WD, as it is not saved to icmauth.txt. To persist the new password you have to save it. To do so, select s from the menu.

Do not worry, a copy of the old file will be created (in case your co-worker still has the old password). With this done, you can exit the program. Select q from the menu.

(Not sure if you have to restart WD, but I did.) Now you can log on to WD using the new password. Access your WD admin page and log on with icmadm and the new password.



Security validation of SCAs by SUM

Updating a system with SUM is as easy as walking into Mordor. Upgrading an SPS requires a stack XML, and even when the SCAs you put into a folder are signed, you still have to ensure that SUM can verify the validity of the files. To do so, SUM must have access to the certificate revocation list (CRL). This list tells SUM whether the certificate used to sign an SCA file is valid or not. In short: whether the file can be trusted and therefore be installed or not.

To be able to do so, the CRL file must be downloaded and placed in the same directory as the SCA files. If this is not done before running SUM, you’ll get this screen:

SUM can continue without verifying the files, but that is a security risk you would be taking. Therefore it is better to do what the message text tells you: download the file and place it into the folder. Download the CRL from here:

Copy the file to the directory that contains the SCA files.

Select repeat and continue.

Now SUM can verify the files and will know when a certificate was revoked and tell you that it is not secure to install that file.


NWDS update site setup

NWDS 7.3/7.4 uses the update site concept of Eclipse. This makes it easier to update NWDS, as an updated component only needs to be updated at the central update site. There is no need to distribute a whole NWDS installation package to the developers. The NWDS update site even includes a zip archive of the latest NWDS, which means that developers do not have to download an NWDS version from SAP Service Marketplace.

  • Official documentation at SAP Help: link
  • Information on SCN: link

There is no separate NWDS 7.4 for NetWeaver Java 7.4; you use the 7.31 version when developing applications for NW 7.4 (SAP Note). To set up an update site, first download the SCA.

This SCA contains the archives, but not the tool needed to create the update site. You can download the tool from here: link. The tool is available for Windows.

The tool helps you extract the content of the SCA and configure the update site URL. Afterwards, create an alias in the NW Java HTTP provider and copy the files to the directory specified by the alias.


Set the alias to updatesite_731SP13. This alias points to the directory /home/cesadm/updatesite/731SP13

On the server, the folder content looks like this:

The total size of the update site here is 2.5 GB. To access the update site via HTTP, enter the complete path to index.html:


In NWDS, the update site is configured under the available software sites.

That's it. Now NWDS can be updated from the update site.


Bind ICM to port 443

To run SAP Portal on the standard web ports 80 and 443 you should use Web Dispatcher. In that case, WD runs on the privileged ports and SAP Portal / NetWeaver Java / ICM continue to run on their usual 5nnXX ports. Changing the ports directly on ICM of NetWeaver is something I cannot recommend, and you should not do it.

Configuration of ICM

To run NetWeaver on low ports, follow the procedure outlined in SAP Note 421359. icmbnd is the executable that will bind to port 443. This file does not exist yet. To create it, follow the steps outlined in the SAP Note as user root:

  • cd /usr/sap/<SID>/J00/exe
  • cp icmbnd
  • chown root:sapsys icmbnd
  • chmod 4750 icmbnd
  • ls -al icmbnd

The setuid bit is now set. With this, the executable can “act” as root and listen on port 443. The instance profile must now be changed to include the new ICM parameters to bind to port 443 for HTTPS and to use the external program icmbnd for doing so.
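To see what the setuid bit looks like in a permission string, this sketch reproduces it on a scratch file; on the real system you would simply run ls -al on icmbnd in the instance exe directory:

```shell
# Demonstrate the setuid bit on a scratch file (not the real icmbnd).
touch demobnd
chmod 4750 demobnd
perms=$(ls -l demobnd | cut -c1-10)
echo "$perms"    # prints -rwsr-x--- : the 's' marks the setuid bit
rm -f demobnd
```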

Currently the port configuration may look like this:

After the change

Note: The parameter exe/icmbnd should not be needed as long as the binary resides in its normal place. I added it here to show how the parameter looks when configured.
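As a sketch, the profile entries could look like this (the port index, instance number and path are illustrative; EXTBIND=1 tells ICM to use the external bind program, see SAP Note 421359):

```
icm/server_port_1 = PROT=HTTPS, PORT=443, EXTBIND=1
exe/icmbnd = /usr/sap/<SID>/J00/exe/icmbnd
```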

Restart SAP system: stopsap; startsap.


NetWeaver is now listening on port 443.


The default configuration is that NetWeaver first asks the client to provide a certificate, and if none is given, proceeds with the normal authentication defined in the logon profile.

This can be disabled by setting the parameter VCLIENT=0 in the instance profile:
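As a sketch (the port index is again illustrative), VCLIENT is appended as an option to the HTTPS port parameter:

```
icm/server_port_1 = PROT=HTTPS, PORT=443, EXTBIND=1, VCLIENT=0
```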




Analyze Web Dispatcher logs with Kibana

SAP Web Dispatcher (WD) is the entry point for your users accessing your web-enabled applications. These can be any HTML service or app you have running on NetWeaver, or other systems like HANA XS. For over a decade, WD has offered reverse proxy functionality for SAP systems, and while until recently its main usage area was SAP Portal and Web Dynpro applications, with the rise of Fiori WD is more exposed. Naturally, more and more companies will use it. Of course, WD can be integrated into SolMan and therefore be managed and monitored.

While this is nice, analytical requirements for a web application can be quite complex. A standard approach is to use a web analytics application that helps you find out how your site is used (sessions, entry/exit points, campaigns). While this gives you transparency about the site experience of your end users, it is not really useful when it comes to a more administratively driven approach: what kind of content is passed through WD, and the impact of configuration parameters: CSS, JavaScript, response times, data throughput. Besides, your users must be OK with the tracking code, and modern browsers allow users to deactivate tracking cookies and related technology (do not track).

WD is the single point of entry to web applications; it contains valuable information about their usage. This information can heavily influence the understanding of the app. Think about finding the bottlenecks of the app, the most accessed resources, usage patterns, and so on. The log of the Web Dispatcher contains all this kind of information. You only have to gather it, store it and analyze it.

Basically, WD is a reverse proxy, and in the non-SAP context, Apache is one of the most used reverse proxies. Analyzing HTTP traffic is a common task for web site administrators, so it is no big surprise to find a huge list of Apache log analysis tools available. The Swiss army knife among them is logstash. Now, logstash does not really analyze web server logs; it rather parses them and can send them to another tool for storing and analyzing the data. Like elasticsearch.

To learn how to configure your own system with WD and logstash, please read the how-to document I posted here.

This is the default use case of logstash: Parse logs, extract the information and send it to elasticsearch for storing and retrieval. After the information is stored in elasticsearch, it can be used by Kibana for retrieving information like statistics and analytical data. Think about access statistics or trends.

The advantage of the combination of logstash, elasticsearch and Kibana over a web analytics app is that you do not have to install a tracking/analyzing part in your web application. You can also analyze parts of your web page that are normally invisible, like resources. Depending on your WD configuration, you gain insights into how WD works, like how long it takes to retrieve files from the SAP system.

Information retrieval

After connecting Kibana to elasticsearch it is easy to surf the data and to create your own dashboard. Drilling down is no problem and while logstash is running in the background adding new data, the dashboard can reflect this instantly. A few sample reports may include:

Total number of files served by WD

Total number of MB transferred

Hits to resources

You can correlate this data to find out interesting stuff like:

  • Number of requests: a cached resource is served locally by the browser; this can drastically decrease the load on WD and the backend.
  • Requests for a specific file / site
  • Average response time for CSS or JS files: does it make sense to use WD as a web cache? Think about it: the data may indicate that WD waits to retrieve a file from ICM; multiply that by the number of requests it takes for a user to access a resource and you have an idea of the time wasted.
  • Data sent by serving static files: is your cache configuration correct?
  • What is the largest file requested?
  • Usage: your application is accessible only internally; do the access statistics reflect this?
  • Hitting a lot of 304, 404 or 500? What is causing this?
  • Monitor ICM admin resources to find out possible attack vectors.

SAP WebDispatcher and Logstash – installation and configuration

This document explains how to install and configure an environment for analyzing SAP Web Dispatcher (WD) logs with logstash, elasticsearch and Kibana under Linux. Kibana 3 needs a running web server. The example shown here uses nginx, but won’t detail how to set up nginx.

Components referred to in this document:

SAP WebDispatcher

“The SAP Web dispatcher lies between the Internet and your SAP system. It is the entry point for HTTP(s) requests into your system, which consists of one or more SAP NetWeaver application servers.”


Logstash

“logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs.”


Elasticsearch

“Elasticsearch is a powerful open source search and analytics engine that makes data easy to explore.”


Kibana

“Kibana is an open source, browser based analytics and search dashboard for ElasticSearch.”


nginx

“nginx (pronounced engine-x) is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.”


Install Elasticsearch

Installation in 3 steps:

  1. Command: wget

  2. Extract archive

    Command: tar -zxvf elasticsearch-1.4.2.tar.gz

  3. Start Elasticsearch


    cd elasticsearch-1.4.2

    cd bin

    Command: ./elasticsearch

Install Logstash

Installation in 3 steps:

  1. Command:


  2. Extract

    Command: tar -zxvf logstash-contrib-1.4.2.tar.gz

  3. Run logstash. Before logstash can be run, it must be configured. Configuration is done in a config file.

Logstash configuration

The configuration of logstash depends on the log configuration of WD. Logstash comes out of the box with everything it takes to read Apache logs. In case WD is configured to write logs in Apache format, no additional configuration is needed. WD also offers the option to write additional information to the log.


  • CLF. This is how Apache logs. It contains most of the information needed.
  • CLFMOD. Same format as CLF, but without form fields and parameters, for security reasons.
  • SAP: writes basic information and no client IP, but contains the processing time on the SAP Application Server. This is a field you will really need.
  • SMD: for SolMan Diagnostics; same as SAP, but contains the correlation ID.

As mentioned before, logstash comes preconfigured for CLF. A log format that makes sense is SMD because of the response time. In that case, logstash must be configured to parse the WD log correctly. Logstash uses regular expressions to extract information. To make logstash understand the SMD log format, the correct regular expression must be made available. Grok uses a pattern file to extract the information from the log. The standard pattern file can be found here:

For instance, to extract the value of the correlation ID when the log format is set to SMD, the regular expression is:


For WD with SMD log the complete regular expression is

TEST2 \|

WEBDISPATCHER \[%{HTTPDATE:timestamp}\] %{USER:ident} "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) \[%{NUMBER:duration}\] %{CORRELATIONID:correlationid} %{TEST2:num1}


When the IP is added to the WD log with SMD, the regular expression is

TEST2 \|


WEBDISPATCHERTPP %{IP:ip} \[%{HTTPDATE:timestamp}\] %{USER:ident} "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) \[%{NUMBER:duration}\] %{CORRELATIONID:correlationid} %{TEST2:num1}


You can find an example pattern file here: The standard grok pattern file defines regular expressions for user IDs, IPv4/6 addresses, dates, etc.
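Before wiring a new pattern into logstash, it can help to sanity-check the underlying regular expression against a sample line. The following sketch uses a simplified expression and a made-up log line, not the full grok pattern:

```shell
# Simplified SMD-style check: timestamp, ident, request, response code,
# bytes and duration. Both the line and the regex are illustrative only.
line='[25/Dec/2014:10:15:30 +0100] - "GET /irj/portal HTTP/1.1" 200 1234 [15]'
re='^\[[0-9]{2}/[A-Z][a-z]{2}/[0-9]{4}:[0-9:]{8} \+[0-9]{4}\] [^ ]+ "[A-Z]+ [^ ]+ HTTP/[0-9.]+" [0-9]{3} ([0-9]+|-) \[[0-9]+\]$'
if echo "$line" | grep -Eq "$re"; then
  result=match
else
  result=nomatch
fi
echo "$result"
```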

The actual configuration file consists of three sections: input, filter and output. The input part defines the logs to read, the filter part defines the filters to be applied to the input, and the output part specifies where to write the result. Let’s take a look at each of the sections:


input {
  file {
    type => "wd"
    path => ["/usr/sap/webdispatcher/access*"]
    start_position => "beginning"
    codec => plain {
      charset => "ISO-8859-1"
    }
  }
}
All files starting with access in the directory /usr/sap/webdispatcher are read by logstash. The codec parameter ensures URLs with special characters are read correctly. A type named wd is added to all lines read.


filter {
  if [type] == "wd" {
    grok {
      patterns_dir => "./patterns"
      match => { "message" => "%{WEBDISPATCHER}" }
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate {
      convert => [ "bytes", "integer" ]
      convert => [ "duration", "integer" ]
    }
  }
}

The filter is applied to all lines with type wd (see input). Grok applies the regular expressions, and the patterns_dir parameter tells it where to find the customized patterns for WD. The date value is taken from the timestamp field. If this is not set, logstash uses the timestamp of when the line was read; what you want is the logged access time of the HTTP request. To facilitate later analysis, the values bytes and duration are converted to integers.


output {
  elasticsearch {
    host => localhost
    index => "wd"
    index_type => "logs"
    protocol => "http"
  }
}

As output, a local elasticsearch server is defined. The logs are written to the index wd with index type logs. This stores the log lines in elasticsearch and makes them accessible for further processing.
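As a sketch, a single parsed log line ends up in elasticsearch roughly as a document like this. The field names come from the grok pattern above; the values are made up, and logstash adds further metadata fields such as @timestamp:

```
{
  "type": "wd",
  "verb": "GET",
  "request": "/irj/portal",
  "response": "200",
  "bytes": 1234,
  "duration": 15,
  "correlationid": "005056..."
}
```

Note that bytes and duration are stored as integers thanks to the mutate filter, which makes aggregations in Kibana possible.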

A sample configuration file can be found here


Run logstash

To run logstash and let it read the WD logs, use the following command:

./logstash -f logstash.conf

This will start logstash. It takes a few seconds for the JVM to come up and read the first log file. Afterwards the log files are parsed and sent over to elasticsearch.


Install Kibana

Installation in 5 steps:

  1. Go to the HTML directory configured for nginx, like /var/www/html

    Command: cd /var/www/html

  2. Command: wget

  3. Extract archive

    Command: tar -zxvf kibana-3.1.2.tar.gz

  4. Configure nginx

    Add a location in nginx configuration file to make the kibana application available under /kibana

    location /kibana {

        alias /var/www/html/<dir of kibana>;

    }
  5. Access Kibana on web browser: http://webserver:port/kibana


Debug a portal application

Note: 1st published on SCN on 25.5.2012

Debugging an application comes down to seeing what is going on during the execution of the application. One way of debugging is to log specific messages to a file. To get a detailed analysis of the program flow, you set breakpoints. These breakpoints are set in your source code and are instructions for the Java VM to stop execution when it hits a breakpoint. Once execution has stopped, you can see the current values of variables.

The instructions here are for SAP Portal 7.x, but the overall process should be the same for the 7.3 portal.

Before you can start debugging a portal application (PAR), you have to enable the debug option on your portal server. This means you will enable a specific debug port to which your NWDS will connect. To actually debug the portal application you’ll use NWDS. This implies that you will rarely activate and execute a debug session in your productive portal environment. If you have to debug a PAR that is already in production, something is wrong with how you release software.

The debug port is activated using the configtool. You set the debug port to an arbitrary value, just make sure that the port is free.

In the portal application, set breakpoints where you want the Java VM to stop execution so you can take a closer look at the VM environment.

This will instruct the VM to stop every time the array a gets a value assigned. The next step is to deploy the PAR. Make sure that you select the option “Include the source code of the portal application”. If not, debugging won’t work.

To actually start the debug session you create a remote java application configuration:

Specify the portal project, the server and the debug port (the one you set in configtool). After clicking debug, NWDS connects to the Java VM and already presents you with some nice information about the VM:

When you now execute the PAR, the Java VM will stop the application where you have set the breakpoint.

You can start exploring the current state of your application. The variables are shown with their current values:

To gain a deeper understanding of the environment of the PAR, you can also look at the request object and find out variables and their values:

Resume execution and the values are updated:


Testing SAP Portal applications with Selenium

Note: 1st published at SCN on 2.5.2012

Before an application can go into production, it needs to be tested. For Java, several widely adopted test frameworks are available, like jUnit or TestNG. As these are used to test Java applications, what about web applications? How do you test a UI that is actually constructed from HTML, CSS, JSP and JavaScript? This is not easy, as you have to simulate user input and validate the generated response, across different browsers. The example in this blog is about testing a simple HTMLB checkbox.

A tool that can help you create test cases for web applications is Selenium. Selenium offers a Firefox plugin called Selenium IDE that records what the user clicks and translates these actions into HTML code. To test an application with Selenium, the following steps are needed:

  1. Record the user action
  2. Select a test framework and save the recorded actions in Java code
  3. Import the generated code into your Java project

For the 1st step, Selenium offers a Firefox plugin called Selenium IDE. This means that while the recorded user actions can be replayed by several browsers (IE, Opera, FF, mobile, etc.), with the IDE only the actions of a Firefox user can be recorded.

In the generated HTML code of Selenium, these values are shown as HTML:

    <td>Simple checkbox checked</td>

It is easy to see a possible problem here: how will Selenium find the actual HTML UI element to test? In the above example the absolute path is used. When the portlet is not run as a standalone application but inside the portal, the path will change. So make sure that the HTML code of the application is surrounded by a unique HTML element that can serve as a starting point for the path. As the HTML isn’t really useful on its own, you’ll have to export the actions to a framework of your choice (I created my own export template called SAPHtmlbTestNG).

For those who already know Selenium: the IDE does not come with an export to TestNG and WebDriver, so you are bound to Selenium RC. After exporting the test case, you get Java code that can actually be used to run the tests:

class test extends Tests {

    @Parameters({"seleniumServer", "seleniumPort", "seleniumBrowser", "testBaseUrl"})
    public test(String seleniumServer, int seleniumPort, String seleniumBrowser, String testBaseUrl) {
        super(seleniumServer, seleniumPort, seleniumBrowser, testBaseUrl);
    }

    @BeforeClass
    public void setUp() throws Exception {
        // start the Selenium session
    }

    @Test
    public void testTest() throws Exception {
        selenium.open("/irj/servlet/prt/portal/prtroot/com.tobias.test.testSelenium.testCheckbox");
        // assertions on the returned HTML go here
    }

    @AfterClass
    public void tearDown() throws Exception {
        // stop the Selenium session
    }
}

What does that actually mean? In the Selenium IDE I added a test that checks if an HTML element (e.g. a UI element like a button, checkbox or any other HTML element) is in the HTML code retrieved from the application:


For this Selenium identified the actual location of the element in question in the HTML source code:


Transformed into Java code, this reads as:


As for Selenium and testing: the actual Java code executing the tests is Selenium code. TestNG only serves as the framework running the Selenium Java code. That’s why there are testing annotations like @BeforeClass. TestNG controls the flow of the tests (starting Selenium, running the tests, stopping Selenium).

Running the Selenium test with TestNG

To run the tests a few things have to be considered:

  1. A Selenium server needs to be up and running
  2. TestNG can be controlled via an XML file

Starting the Selenium server can be done via an (ant/maven) script or manually. More interesting is the XML file that controls TestNG. There, the actual tests to be performed are defined together with their parameters:


<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="...">
  <parameter name="seleniumServer" value="..."/>
  <parameter name="seleniumPort" value="..."/>
  <parameter name="seleniumBrowser" value="..."/>
  <parameter name="testBaseUrl" value="..."/>
  <listeners>
    <listener class-name="..."/>
  </listeners>
  <test name="...">
    <classes>
      <class name="..."/>
    </classes>
  </test>
</suite>
The parameters are read via the @Parameters annotation and are used to define the Selenium server. I added a listener that allows me to take a screenshot when a test fails, and in the <test> section the actual tests are listed. There is only one test listed; to add more tests, just add them via <class> definitions.

Run the test

After the tests are prepared and the tests and parameters are defined in the XML, the tests can be run: either by invoking them directly in Eclipse or by running a script. TestNG will create HTML reports of the test results.

Adding new test cases isn’t really hard when using the Selenium IDE. Where the IDE doesn’t help creating test cases, the developer can create them directly in Java. A benefit of Selenium is that you’ll end up having functional tests, as a real browser makes the calls to the application and validates the result.

Of the 21 tests in my test suite, 2 failed.

Because of my screenshot class added to my test suite, TestNG made a screenshot when the test failed.

Selenium with a test framework like TestNG allows for functional testing of SAP web applications. This is not bound to SAP Portal or WDJ apps; everything that has an HTML-based UI can be tested. Keep in mind that while UI testing is important, you should not focus solely on it. Unit and service tests should make up the largest part of your test efforts.


Using JPA in SAP Java development

Note: first published on SCN on 29.2.2012

Java offers many technologies and mechanisms that help developers realize their ideas. One is to save data in a database without dealing with object-relational mapping manually. Of the several technologies available to save a Java object into a DB, NetWeaver AS Java comes with SAP’s implementation of JPA. In the following I’m only looking at NetWeaver >= 7.1, as the code will be using annotations, available since Java 5.

The first step to save a Java object is to create the database table. The data stored will be the name and price of a product, as well as a unique id. For this, the Data Dictionary perspective in NWDS is used.

A table should have a primary key and columns defining the values that will be stored in the Java object (this can also be done the other way round: first create the Java object and then the DB table based on the object).

The second step is to create the Java object. The values that are going to be stored are mapped to the DB columns. This can be done automatically or by using the @Column annotation. The primary key column is special, as its value is normally determined automatically by the application or database without any interference by the user. For this, the annotation @Id is used. Here, the additional constraint of a table-generated ID is used.




@Entity
@Table
@NamedQuery(name = "findAllTest", query = "SELECT p FROM TestJPA p")
public class TestJPA implements Serializable {

    @Id
    @TableGenerator(name = "TABLE_GEN_TEST", table = "TMP_TEST_SEQ", pkColumnName = "GEN_SEQ",
        valueColumnName = "GEN_COUNT", pkColumnValue = "TEST")
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "TABLE_GEN_TEST")
    private long id;

    private String name;
    private double price;
}

  • @Table defines the table where the object values will be stored.
  • @NamedQuery defines a select string to get all the objects stored in the DB.
  • The annotations before the variable id define id as the primary key (@Id), how to generate the id value (@GeneratedValue), and the table that provides the generated value (@TableGenerator).

To automatically create the PK value for the ID column, SAP offers 3 alternatives: TABLE, SEQUENCE and IDENTITY (AUTO defaults to the TABLE strategy).

For table ID generation, SAP Help states: “The TABLE ID generation strategy relies on the existence of a database table that manages ID values.”

  • The database table needs to exist!

In case the table isn’t specified, the default table TMP_SEQUENCE will be used. You have to ensure that either the default table or your custom table exists in the DB. Besides that, the table needs to contain specific elements (see the SAP Help link for more details).

The other 2 alternatives come with certain restrictions:

  • SEQUENCE needs a database sequence object that has to be created manually in the DB
  • IDENTITY won’t work with Open SQL, nor with MS SQL Server and IBM DB2.

TABLE sequence is the only way to create the ID using the data dictionary, and it works for every database. The table generator option for the ID value of your JPA tables requires a specific format for the sequence table:

  • the primary key column needs to be of type varchar (Java: String)
  • the value column needs to be a number type

Creating a sequence table like this in NWDS for CE 7.1 is possible:

The table generator definition in the Java class works like this:

  • name: defines the name of this table generator. Used by the @GeneratedValue part.
  • table: defines the table where the sequence values are stored.
  • valueColumnName: the name of the column where to look up the value. If the value found there is 56721, it is incremented by 1, so the id will be 56722.
  • pkColumnName: the name of the primary key column of the table defined by table (TMP_TEST_SEQ). Its value can be the name of the Java class or something else.

The JPA Details view shows all the information needed:

The object can be used as any other Java object. To store the data into the DB an entity manager is used. In the SAP scenario, this means that a data source and alias as well as a persistence unit need to be defined. The data source alias gets defined in the EAR project:

And in the EJB project:

The entity manager uses the persistence unit name to find the data source. The data source is defined in the NWA of the AS Java and contains the information on how to connect to the DB. The persistence unit is defined in the EJB project. Java objects that are persisted are normally handled by beans, so it makes sense to wrap the class in an EJB.

TestBean is the bean, while TestLocal is the local interface for lookups. The actual business logic is implemented in the bean, while adding the method to the local interface makes it public.

Hint: Using beans allows you to expose your business logic as a JSON object (with Apache Jersey). As beans can also be exposed remotely the Jersey server can run on another server.



@Stateless
public class TestBean implements TestLocal {

    @PersistenceContext(unitName = "PERM_UNIT", type = PersistenceContextType.TRANSACTION)
    private EntityManager em;

    public void createTest(TestJPA test) {
        em.persist(test);
    }

    public List<TestJPA> getAllTest() {
        List<TestJPA> test = em.createNamedQuery("findAllTest").getResultList();
        return test;
    }
}



Local Interface:


@Local
public interface TestLocal {

    public void createTest(TestJPA test);

    public List<TestJPA> getAllTest();
}


Now the bean can be used in your Java code, in your SAP Portal application (PAR/WAR) and of course in WDJ too. Use the context and a JNDI lookup to find the bean and start using it:

Context ctx = new InitialContext();

TestLocal testLocal = (TestLocal) ctx.lookup("<JNDI name of TestLocal>");

List<TestJPA> tests = testLocal.getAllTest();

Let the world know