During the last months I worked on a high-performance XPages application used by a lot of end users. To get better data throughput, I decided to use a reverse proxy for load balancing, caching of resources, SSL connections etc.
For static resources I am using some "special" domains: this means that the browser is allowed to make more HTTP requests at once to the same NSF. If, for example, you have some images in your database which are reachable from outside via http://www.example.com/mydb.nsf/image.gif, this can be changed to http://static.example.com/mydb.nsf/image.gif (here you can find a list of best practices).
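Such a "static" domain can be configured on the proxy itself. Here is a minimal sketch, assuming the hostname static.example.com points at the same Apache instance (all hostnames, ports and expiry values are examples, not part of the actual setup):

```apache
# hypothetical virtual host for the "static" domain
<VirtualHost *:80>
    ServerName static.example.com

    # forward everything to the Domino server
    ProxyPass / http://localhost:80/
    ProxyPassReverse / http://localhost:80/

    # let browsers cache images for a day
    ExpiresActive On
    ExpiresByType image/gif A86400
</VirtualHost>
```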
I solved the problem of multiple execution during the JSF lifecycle by checking if the request already has current data (details can be found here – German only), but there was still a problem: every time an XPage wants to display some data, a query is sent to the Domino server. This is nice if your application requires data in real time, but not for a normal application – it kills the user experience.
This is why I looked for a way to easily implement memory-cached database queries for Domino databases. The main idea is that the database query is no longer sent directly to the backend. Instead, the request is made against a JSON wrapper, and the request to this wrapper (an agent residing in the same NSF) is done via a proxy. This gives full control over the requested data.

The browser sends the HTTP request to the XPage and receives the response. The XPage queries the MemCache for data; if the data is not in the cache, the proxy queries the data storage (the Domino database) and caches the result. The XPage only has to parse the JSON data, and this boosted the performance in my test environment by about 250-300%.
By "stealing" the session cookie, the database query to the backend database is done in the context of the user; the security of the Domino databases is not affected.
In my solution, I am using Apache 2.2 as the reverse proxy. The following modules are additionally enabled for this solution:
- cache_module
- deflate_module
- expires_module
- headers_module
- mem_cache_module
- proxy_module
- proxy_http_module
- rewrite_module
- setenvif_module
The virtual host configuration looks like this:
<VirtualHost *:8080>
    ServerName localhost

    # Enable reverse proxy
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        AddDefaultCharset off
        Order allow,deny
        Allow from all
    </Proxy>

    # Proxy config for Domino server
    ProxyPass / http://localhost:80/
    ProxyPassReverse / http://localhost:80/

    # prevent max-age calculation from Last-Modified
    # prevent If-Modified-Since requests
    # reduces the number of requests that hit the server
    <LocationMatch "/.*$">
        Header unset Last-Modified
        Header unset ETag
        Header unset HTTP_CACHE_CONTROL
    </LocationMatch>

    # MemCache config
    CacheEnable mem /
    CacheEnable mem http://

    # cache size: 80 MB
    MCacheSize 81920
    MCacheMaxObjectCount 8192

    # min. object size: 1 byte
    MCacheMinObjectSize 1

    # max. object size: 1 MB
    MCacheMaxObjectSize 1000000

    # cache for 60 seconds by default
    CacheDefaultExpire 60

    # FORCE caching for all documents (without Cache-Control: no-cache)
    CacheIgnoreNoLastMod On

    # force caching for all requests
    # ignore client-side Cache-Control header
    CacheIgnoreCacheControl On

    # don't add Set-Cookie header to cache
    CacheIgnoreHeaders Set-Cookie

    # add Expires headers for the domino/json responses
    # reduces the number of requests that hit the server
    ExpiresActive On
    ExpiresByType domino/json A600
</VirtualHost>
As you can see, the proxy runs on port 8080, and I have added a special content type "domino/json". This makes it easier to identify the relevant data.
This is the XPage the user accesses:
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core"
    xmlns:xc="http://www.ibm.com/xsp/custom">

    <xp:pager layout="Previous Group Next" partialRefresh="true"
        id="pager1" for="repeat1">
    </xp:pager>

    <xp:inputText id="inputSearch" value="#{sessionScope.searchFor}" />
    <xp:button value="Label" id="button1">
        <xp:eventHandler event="onclick" submit="true"
            refreshMode="complete">
        </xp:eventHandler>
    </xp:button>

    <xp:repeat id="repeat1" rows="30" var="rowData">
        <xp:this.value><![CDATA[#{javascript:
            importPackage( ch.hasselba.xpages.util );

            // grab the current session cookie (may be missing for anonymous users)
            var sessionId = null;
            try{
                sessionId = cookie.get("DomAuthSessId").getValue();
            }catch(e){}

            // build the request URL for the proxy
            var url = "http://localhost:8080/Data.nsf/DoSearch?OpenAgent";
            url += "&sessionId=" + sessionId;
            if( sessionScope.get("searchFor") !== null &&
                sessionScope.get("searchFor") !== "" ){
                url += "&search=";
                url += java.net.URLEncoder.encode(sessionScope.get("searchFor"), "UTF-8");
            }

            // fetch the data and parse the JSON
            var data = ch.hasselba.xpages.util.URLReader.read( url, sessionId );
            var parsed = null;
            try{
                parsed = fromJson(data).data;
            }catch(e){}
            parsed
        }]]>
        </xp:this.value>
        <xp:text escape="true" id="computedField1"
            value="#{javascript:rowData.LastName}">
        </xp:text>
        <xp:text escape="true" id="computedField2"
            value="#{javascript:rowData.FirstName}">
        </xp:text>
        <xp:br />
    </xp:repeat>
</xp:view>
The Java class used is really simple. I know there are better ways to do an HTTP request, but this is a proof of concept.
package ch.hasselba.xpages.util;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

/**
 * URLReader
 * performs a HTTP request
 *
 * @author Sven Hasselbach
 * @category URL
 * @category Proxy
 * @version 0.2
 */
public class URLReader {

    // Constants
    private final static String PROPERTY_COOKIE_NAME = "Cookie";
    private final static String PROPERTY_DOMAUTHSESSID_VALUE = "DomAuthSessId=";

    /**
     * reads data from a given URL
     *
     * @param pURL URL to load data from
     * @param pSessionId session data for doing a request in the current user context
     * @return String containing the result of the HTTP request
     * @author Sven Hasselbach
     * @category URL
     * @category Proxy
     * @version 0.2
     */
    public static String read( final String pURL, final String pSessionId ){
        String data = null;
        BufferedReader in = null;
        try{
            // init the URL connection
            URL url = new URL( pURL );
            URLConnection uc = url.openConnection();

            // "steal" the original user session cookie
            if( pSessionId != null && !"".equals(pSessionId) ){
                uc.setRequestProperty( PROPERTY_COOKIE_NAME,
                    PROPERTY_DOMAUTHSESSID_VALUE + pSessionId );
            }

            // do the HTTP request
            in = new BufferedReader(
                new InputStreamReader( uc.getInputStream() ));

            // process the data returned
            StringBuilder strBuf = new StringBuilder();
            String tmpStr;
            while( (tmpStr = in.readLine()) != null ){
                strBuf.append( tmpStr );
            }
            data = strBuf.toString();
        }catch(Exception e){
            e.printStackTrace();
        }finally{
            // always release the stream
            if( in != null ){
                try{ in.close(); }catch(Exception ignore){}
            }
        }
        return data;
    }
}
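To see the read loop in action without a running Domino server, the same URLConnection pattern can be pointed at a file:// URL. A minimal, self-contained sketch (the JSON payload and all names are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class URLReaderDemo {
    public static void main(String[] args) throws Exception {
        // create a local file standing in for the agent's JSON response
        File tmp = File.createTempFile("demo", ".json");
        tmp.deleteOnExit();
        FileWriter fw = new FileWriter(tmp);
        fw.write("{\"data\":[{\"LastName\":\"Doe\",\"FirstName\":\"John\"}]}");
        fw.close();

        // open the URL connection, same pattern as URLReader.read()
        URL url = tmp.toURI().toURL();
        URLConnection uc = url.openConnection();

        // read the whole response into a string
        BufferedReader in = new BufferedReader(
                new InputStreamReader(uc.getInputStream()));
        StringBuilder buf = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            buf.append(line);
        }
        in.close();

        System.out.println(buf.toString());
    }
}
```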
And here comes the JSON handler, a simple Lotus Script agent:
Sub Initialize
    Dim session As New NotesSession
    Dim db As NotesDatabase
    Dim dc As NotesDocumentCollection
    Dim doc As NotesDocument
    Dim isFirst As Boolean
    Dim contextDoc As NotesDocument
    Dim hlp

    Set contextDoc = session.Documentcontext
    Set db = session.Currentdatabase

    ' get the search string from the URL or use the default search
    hlp = Split( contextDoc.QUERY_STRING_DECODED(0), "search=" )
    If UBound( hlp ) = 0 Then
        Set dc = db.Ftsearch("[FirstNAME] CONTAINS AARON", 0)
    Else
        Set dc = db.Ftsearch(hlp(1), 0)
    End If

    ' create the JSON output
    isFirst = True
    Set doc = dc.Getfirstdocument()

    ' special content type domino/json
    Print |Content-type: domino/json|
    Print
    Print |{"data":[|
    While Not doc Is Nothing
        If Not isFirst Then Print ","
        Print |{"LastName":"| & doc.LastName(0) & _
            |","FirstName":"| & doc.FirstName(0) & |"}|
        isFirst = False
        Set doc = dc.Getnextdocument( doc )
    Wend
    Print |]}|
End Sub
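One caveat with the agent above: the field values are concatenated into the JSON string without escaping, so a name containing a quote, backslash or line break would produce invalid JSON. A minimal escaper in Java could look like this (class and method names are my own, not part of the solution above):

```java
public class JsonEscape {
    // escape the characters that would break naively concatenated JSON strings
    static String escapeJson(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:
                    if (c < 0x20) {
                        // remaining control characters as \u00XX
                        sb.append(String.format("\\u%04x", (int) c));
                    } else {
                        sb.append(c);
                    }
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeJson("Smith \"Smitty\""));
    }
}
```

The same escaping could of course be done directly in the LotusScript agent with a small helper function before printing the field values.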
In the coming days I will provide a sample database and add more details. A database with 300k test documents will be added too.