XPages: High Performance Applications

Over the last few months I have been working on a high-performance XPages application used by a large number of end users. To improve data throughput, I decided to use a reverse proxy for load balancing, caching of resources, SSL connections, etc.

For static resources I use some "special" domains: this allows the browser to make more parallel HTTP requests to the same NSF. If, for example, your database contains images that are reachable from outside via http://www.example.com/mydb.nsf/image.gif, the URL can be changed to http://static.example.com/mydb.nsf/image.gif (here you can find a list of best practices).
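Such a static domain can be as simple as a second virtual host pointing at the same Domino server. The following is only an illustrative sketch (the host name static.example.com is an assumption, and stripping Set-Cookie assumes the static responses really carry no session state):

```apache
# Hypothetical sketch: serve the same NSF under a second,
# cookieless host name so browsers open more parallel connections
<VirtualHost *:80>
 ServerName static.example.com
 ProxyRequests Off
 ProxyPass / http://localhost:80/
 ProxyPassReverse / http://localhost:80/
 # strip cookies from static responses so they stay cacheable
 Header unset Set-Cookie
</VirtualHost>
```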

I solved the problem of multiple execution during the JSF lifecycle by checking whether the request already has current data (details can be found here – German only), but one problem remained: every time an XPage needs data to display, a query is sent to the Domino server. This is fine if your application requires real-time data, but not for a normal application – it kills the user experience.
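The core of that fix is simple per-request memoization: compute the value once, and return the cached copy on every further evaluation during the same lifecycle. A minimal sketch in plain Java (the class and method names are illustrative, not from the original solution):

```java
// Per-request memoization sketch (illustrative names):
// the expensive lookup runs only once, no matter how often
// the getter is evaluated during the JSF lifecycle.
public class RequestDataHolder {

    private String cachedData; // null until first access

    public String getData() {
        if (cachedData == null) {
            cachedData = expensiveLookup();
        }
        return cachedData;
    }

    private String expensiveLookup() {
        // placeholder for the real backend query
        return "result";
    }
}
```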

That is why I looked for a way to easily implement memory-cached database queries for Domino databases. The main idea is that the database query is no longer sent directly to the backend. Instead, the request goes to a JSON wrapper (an agent residing in the same NSF), and the request to this wrapper is made via a proxy. This allows full control over the requested data.

The browser sends the HTTP request to the XPage and receives the response. The XPage queries the MemCache for data; if the data is not in the cache, the proxy queries the data storage (the Domino database) and caches the result. The XPage only has to parse the JSON data, which boosted performance in my test environment by about 250-300%.
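This flow is the classic cache-aside pattern. Stripped of the proxy and HTTP details, it can be sketched in a few lines of Java (the class names and the TTL handling are assumptions for illustration, not the proxy's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Cache-aside sketch: look in the cache first, query the
// backend only on a miss, and remember the result with a TTL.
public class QueryCache {

    private static class Entry {
        final String value;
        final long expires;
        Entry(String value, long ttlMillis) {
            this.value = value;
            this.expires = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<String, Entry> cache =
        new ConcurrentHashMap<String, Entry>();
    private final long ttlMillis;

    public QueryCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public String get(String key, Supplier<String> backendQuery) {
        Entry e = cache.get(key);
        if (e == null || e.expires < System.currentTimeMillis()) {
            // cache miss or expired entry: hit the backend once
            e = new Entry(backendQuery.get(), ttlMillis);
            cache.put(key, e);
        }
        return e.value;
    }
}
```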

By "stealing" the session cookie, the database query to the backend database is done in the context of the user, so the security of the Domino databases is not affected.

In my solution, I am using Apache 2.2 as a reverse proxy. The following additional modules are enabled for this solution:

  • cache_module
  • deflate_module
  • expires_module
  • headers_module
  • mem_cache_module
  • proxy_module
  • proxy_http_module
  • rewrite_module
  • setenvif_module

The virtual host configuration looks like this:

<VirtualHost *:8080>
 ServerName localhost

 # Enable reverse proxy
 ProxyRequests Off
 ProxyPreserveHost On
 <Proxy *>
  AddDefaultCharset off
  Order allow,deny
  Allow from all
 </Proxy>

 # Proxy config for the Domino server
 ProxyPass / http://localhost:80/
 ProxyPassReverse / http://localhost:80/

 # Prevent max-age calculation from Last-Modified
 # and If-Modified-Since requests;
 # reduces the number of requests that hit the server
 <LocationMatch "/.*$">
  Header unset Last-Modified
  Header unset ETag
 </LocationMatch>

 # MemCache config
 CacheEnable mem /
 CacheEnable mem http://

 # Cache size: 80 MB
 MCacheSize 81920
 MCacheMaxObjectCount 8192

 # Minimum object size: 1 byte
 MCacheMinObjectSize 1

 # Maximum object size: 1 MB
 MCacheMaxObjectSize 1000000

 # Cache for 60 seconds by default
 CacheDefaultExpire 60

 # FORCE caching for all documents (without Cache-Control: no-cache)
 CacheIgnoreNoLastMod On

 # Force caching for all requests,
 # ignore the client-side Cache-Control header
 CacheIgnoreCacheControl On
 # don't add the Set-Cookie header to the cache
 CacheIgnoreHeaders Set-Cookie

 # Add Expires headers for the domino/json content type;
 # reduces the number of requests that hit the server
 ExpiresActive On
 ExpiresByType domino/json A600
</VirtualHost>


As you can see, the proxy runs on port 8080, and I have added a special content type "domino/json". This makes it easier to identify the relevant data: the ExpiresByType directive with A600 keeps responses of this type cacheable for 600 seconds after access.

This is the XPage the user accesses:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">

    <xp:pager layout="Previous Group Next" partialRefresh="true"
        id="pager1" for="repeat1" />

    <xp:inputText id="inputSearch" value="#{sessionScope.searchFor}" />

    <xp:button value="Label" id="button1">
        <xp:eventHandler event="onclick" submit="true"
            refreshMode="complete" />
    </xp:button>

    <xp:repeat id="repeat1" rows="30" var="rowData">
        <xp:this.value><![CDATA[#{javascript:
            importPackage( ch.hasselba.xpages.util );

            // "steal" the current user's session cookie
            var sessionId = "";
            try {
                sessionId = cookie.get("DomAuthSessId").getValue();
            } catch(e) {}

            // build the URL of the JSON wrapper agent
            var url = "http://localhost:8080/Data.nsf/DoSearch?OpenAgent";
            url += "&sessionId=" + sessionId;

            // append the search term, if one was entered
            if( sessionScope.get("searchFor") !== null ){
                if( sessionScope.get("searchFor") !== "" ){
                    url += "&search=";
                    url += java.net.URLEncoder.encode(sessionScope.get("searchFor"), "UTF-8");
                }
            }

            // fetch the JSON via the proxy and parse it
            var data = ch.hasselba.xpages.util.URLReader.read( url, sessionId );
            var parsed = null;
            try {
                parsed = fromJson(data).data;
            } catch(e) {}
            return parsed;
        }]]></xp:this.value>
        <xp:text escape="true" id="computedField1" value="#{javascript:rowData.LastName}" />
        <xp:text escape="true" id="computedField2" value="#{javascript:rowData.FirstName}" />
        <xp:br />
    </xp:repeat>
</xp:view>

The Java class used is really simple. I know there are better ways to do an HTTP request, but this is a proof of concept.

package ch.hasselba.xpages.util;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

/**
 * URLReader
 * performs a HTTP request
 * @author Sven Hasselbach
 * @category URL
 * @category Proxy
 * @version 0.2
 */
public class URLReader {

    // Constants
    private final static String PROPERTY_COOKIE_NAME = "Cookie";
    private final static String PROPERTY_DOMAUTHSESSID_VALUE = "DomAuthSessId=";

    /**
     * reads data from a given URL
     * @param pURL URL to load data from
     * @param pSessionId session data for doing a request in the current user context
     * @return String containing the result of the HTTP request
     * @author Sven Hasselbach
     * @category URL
     * @category Proxy
     * @version 0.2
     */
    public static String read( final String pURL, final String pSessionId ){
        String data = null;
        try {
            // init the URL connection
            URL url = new URL( pURL );
            URLConnection uc = url.openConnection();

            // "steal" the original user session cookie
            if( !("".equals(pSessionId)) )
                uc.setRequestProperty( PROPERTY_COOKIE_NAME,
                    PROPERTY_DOMAUTHSESSID_VALUE + pSessionId );

            // do the HTTP request
            BufferedReader in = new BufferedReader(
                new InputStreamReader( uc.getInputStream() ));

            // process the data returned
            StringBuffer strBuf = new StringBuffer();
            String tmpStr = "";
            while( (tmpStr = in.readLine()) != null ){
                strBuf.append( tmpStr );
            }
            in.close();
            data = strBuf.toString();

        } catch( Exception e ){
            e.printStackTrace();
        }
        return data;
    }
}
And here comes the JSON handler, a simple LotusScript agent:

Sub Initialize
    Dim session As New NotesSession
    Dim db As NotesDatabase
    Dim dc As NotesDocumentCollection
    Dim doc As NotesDocument
    Dim isFirst As Boolean
    Dim contextDoc As NotesDocument
    Dim hlp
    Set contextDoc = session.Documentcontext
    Set db = session.Currentdatabase

    ' get the search string from the URL or use the default search
    hlp = Split( contextDoc.QUERY_STRING_DECODED(0), "search=" )
    If UBound( hlp ) = 0 Then
        Set dc = db.Ftsearch("[FirstNAME] CONTAINS AARON", 0)
    Else
        Set dc = db.Ftsearch(hlp(1), 0)
    End If

    ' create the JSON output
    isFirst = True
    Set doc = dc.Getfirstdocument()

    ' special content type domino/json
    Print |Content-type: domino/json|
    Print |{"data":[|
    While Not doc Is Nothing
        If Not isFirst Then Print ","
        Print |{"LastName":"| & doc.LastName(0) & _
        |","FirstName":"| & doc.FirstName(0) & |"}|
        isFirst = False
        Set doc = dc.Getnextdocument( doc )
    Wend

    Print |]}|

End Sub
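One caveat: the agent builds the JSON by plain string concatenation, so a quote, backslash, or line break inside a name field would produce invalid JSON. A minimal escape helper, sketched here in Java (the class name is made up; the same logic is easy to port to LotusScript):

```java
// Minimal JSON string escaper for the fields printed by the agent.
// Handles the characters that break naive string concatenation.
public class JsonEscape {

    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```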

In the coming days I will provide a sample database and add more details. A database with 300k test data sets will be added, too.


8 comments on XPages: High Performance Applications

  1. This is cool, but I’m confused. Why involve a reverse proxy and JSON at all here? Why not just create an App scope managed bean that implements DataObject, and use that bean as the value in your repeat? No need to serialize to strings; just keep the cached version in standard Java constructs.

    If you think it’s faster to access a Lotusscript-constructed bit of JSON through an ?OpenAgent call, you should try accessing a HashMap that’s already in classloader memory. You should be able to get another 5-10 times improvement that way.

    • Because of the amount of data to be cached in the server memory. If you store all user-specific query data in the application scope, or rather a session scope, this would make Domino serialize to/from disk till the end of time. And of course the flexibility: you don't need the LS agent, it is for demonstration/testing purposes only. You can, for example, cache every view opened with ?readviewentries&OutputFormat=JSON without effort.

  2. XPages doesn’t serialize session or application scope data, so why would this be an I/O burden? If you’re going to retain a JSON string in memory as a String, the same data as a HashMap will take up very little more memory, but provide much faster access. If you provide a JSON object to the repeat control, it is ultimately going to be interpreted by a parser at runtime, whereas a bean will just be cast to whatever Collection API is appropriate. It’s much faster.

    Obviously it’s your app. Do what you want. But if your objective is runtime performance, you should be able to improve it with a more direct strategy. If not, then I’d be eager to know why not.

    • Oops, you're right. My fault! My argument about the I/O was not correct – it was a little bit early in the morning… Thanks for your comment.

      But to give you a better answer:
      This solution is only a proof of concept, but the more I think about the advantages and disadvantages, the more I come to say: yes, this is a good one!

      The idea behind this solution is that I can prefetch a lot of data before the user asks for it. This can be realized, for example, with a simple wget script running on the server and a list of prefetch URLs, or something similar. The database can run on another server (or servers). This makes the user experience better, because the application runs a lot smoother.

      I have a lot of data for the end users, and storing it all in the sessionScope would blow up the memory usage of the Domino server. I have had a lot of trouble in other projects because of the memory management in Domino/XPages. I have been using Apache for several years now in different high-performance situations without any problems – something I cannot say about Domino.

      Your idea of using a TreeMap/HashMap instead of plain JSON is really good! I have to test how this can be realized with caching of the serialized objects and loading them into the runtime.

      The current data could still reside in the sessionScope for better performance of the current request (for pagers etc.). I will give this a try.
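For reference, the bean-based variant the commenters suggest could look roughly like this. This is a hypothetical sketch (the class and method names are made up): an application-scoped bean that keeps already-parsed rows in a concurrent map, so the repeat control gets Java collections instead of re-parsing JSON strings:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the commenters' suggestion: keep parsed query results
// in an application-scoped bean instead of re-parsing JSON strings.
public class SearchCacheBean {

    // query string -> parsed rows (each row is a field-name/value map)
    private final Map<String, List<Map<String, String>>> results =
        new ConcurrentHashMap<String, List<Map<String, String>>>();

    public List<Map<String, String>> get(String query) {
        return results.get(query); // null on a cache miss
    }

    public void put(String query, List<Map<String, String>> rows) {
        results.put(query, rows);
    }
}
```

Registered as an application-scoped managed bean, the repeat's value binding could then read from this cache directly.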

  3. I look forward to hearing the results! 🙂

  4. Interesting topic!
    Since we prefer to use newer Dojo versions than 1.6.x or other toolkits like Ext.js, we have switched more or less to an architecture where we render the web UI ourselves and use stateless REST services via xhrGet to pull data from the server.
    Might also be an option to reduce memory footprint on the server and tune I/O between browser and server.

  5. There is the Apache JCS if you want to do the caching in Java. I used that before (in a WebSphere app) and it worked well.

  6. Cameron Gregor says:

    Thanks Sven, it was helpful to see how you were stealing the cookie!
