Saturday, October 24, 2015

WSO2 ESB: Polling a database for changes

WSO2 ESB has a DBLookup mediator to read data from a database. In some cases, however, you may need the ESB to keep polling a database and proceed with an operation only when something in it has changed. Although this is not supported out of the box, we can easily write a simple class mediator to implement this requirement.

One problem that arises when doing this is that there's no way to keep a value in memory across multiple messages, since the properties used in ESB artifacts are local to the message context. To overcome this, we can consider two options:
  1. Writing a class mediator that keeps the last read value in an instance variable and performs any comparisons required to identify changes in the database.

  2. Simply storing the last read value in another table/field and comparing it with the current field (this requires multiple DBLookup/DBReport calls).

In this blog post, we'll explore how we can implement the first option.

So the basic flow will be:
  • A scheduled task keeps calling a sequence
  • In the called sequence, a DBLookup mediator fetches a field from a database table (the table includes a column that indicates a change, such as a timestamp)
  • The change indicator field is extracted and added to a property
  • The class mediator is then called; it compares the property's current value with the previously stored value and sets the result (changed or not) in another property
  • With a filter on the resulting property, we can identify a change and proceed with the necessary operations

A sample configuration is as follows.

The scheduled task that keeps calling a sequence:
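A sketch of such a task, using the MessageInjector task implementation that ships with the ESB (the task name, sequence name, and 10-second interval are placeholders):

```xml
<task name="DBPollTask"
      class="org.apache.synapse.startup.tasks.MessageInjector"
      xmlns="http://ws.apache.org/ns/synapse">
  <!-- fire every 10 seconds, forever -->
  <trigger interval="10"/>
  <!-- inject a dummy message into the polling sequence -->
  <property name="injectTo" value="sequence"/>
  <property name="sequenceName" value="dbPollSequence"/>
  <property name="message">
    <poll/>
  </property>
</task>
```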

The sequence that does the DB lookup and identifies if there's a change:
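Along these lines (the database URL, credentials, table/column names, and mediator class are all placeholders — substitute your own):

```xml
<sequence name="dbPollSequence" xmlns="http://ws.apache.org/ns/synapse">
  <!-- Fetch the change indicator (here, the latest timestamp) into a property -->
  <dblookup>
    <connection>
      <pool>
        <driver>com.mysql.jdbc.Driver</driver>
        <url>jdbc:mysql://localhost:3306/mydb</url>
        <user>esb</user>
        <password>esb</password>
      </pool>
    </connection>
    <statement>
      <sql>SELECT MAX(last_updated) AS last_updated FROM orders</sql>
      <result name="lastUpdated" column="last_updated"/>
    </statement>
  </dblookup>
  <!-- The class mediator compares the lastUpdated property with the value it
       remembered from the previous poll and sets a 'changed' property -->
  <class name="org.example.mediators.ChangeDetectMediator"/>
  <filter source="get-property('changed')" regex="true">
    <then>
      <!-- a change was detected: proceed with the necessary operations -->
      <log level="custom">
        <property name="msg" value="Database change detected"/>
      </log>
    </then>
    <else>
      <drop/>
    </else>
  </filter>
</sequence>
```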

Sample code for the class mediator used in the above sequence can be found here.
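The heart of such a mediator is just a compare-and-store against an instance variable. A minimal sketch of that logic in plain Java (in the real mediator this would sit inside a class extending AbstractMediator, with mediate() reading the property from the message context and writing the result back; the class and method names here are illustrative):

```java
// Since the ESB keeps a single instance of a class mediator, an instance
// variable survives across messages. Access is synchronized because the
// mediator may be invoked from multiple threads.
class ChangeDetector {
    private String lastValue;  // value seen on the previous poll

    /**
     * Returns true if the current value differs from the previously seen one.
     * The very first observation is treated as "no change" (there is nothing
     * to compare against yet), which avoids a spurious trigger on startup.
     */
    public synchronized boolean hasChanged(String currentValue) {
        boolean changed = lastValue != null && !lastValue.equals(currentValue);
        lastValue = currentValue;
        return changed;
    }
}
```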

Wednesday, October 21, 2015

WSO2 ESB: Transferring data from files to a database

In some integration scenarios you may want to use ESB to poll a directory for files and store the content to a database.

The following steps can be used to read CSV files from a directory, extract their data, and store it in a database.

We use a VFS proxy to poll the directory and look for new files. The parameters transport.vfs.FileURI and transport.vfs.FileNamePattern indicate where to look and what types of files to pick up (any regular expression can be used for the name pattern).
Once a file is read, we need to convert it to a format supported by the DB Report mediator of the ESB. In this case we convert the CSV data to XML using the Smooks mediator.

Once the conversion is done, we can use XPath expressions inside the DB Report mediator to pick values out of the converted XML and insert them into the database.

In addition, we can specify actions such as where the file should be moved (or whether it should be deleted) after its content is stored in the DB (transport.vfs.ActionAfterProcess), and what to do if a failure occurs (transport.vfs.ActionAfterFailure).

The complete proxy configuration is given below. Please note that you have to enable the VFS transport in axis2.xml in order to get this working.
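A sketch along those lines (all paths, DB credentials, table and field names are placeholders; the csv-set/csv-record element names match the defaults of the Smooks CSV reader, and the iterate mediator runs the DB Report once per record):

```xml
<proxy name="CsvFileProxy" transports="vfs" xmlns="http://ws.apache.org/ns/synapse">
  <parameter name="transport.vfs.FileURI">file:///home/user/csv-in</parameter>
  <parameter name="transport.vfs.FileNamePattern">.*\.csv</parameter>
  <parameter name="transport.vfs.ContentType">text/plain</parameter>
  <parameter name="transport.PollInterval">15</parameter>
  <!-- move processed files away; park failed ones separately -->
  <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
  <parameter name="transport.vfs.MoveAfterProcess">file:///home/user/csv-done</parameter>
  <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
  <parameter name="transport.vfs.MoveAfterFailure">file:///home/user/csv-failed</parameter>
  <target>
    <inSequence>
      <!-- Convert the CSV payload to XML using the Smooks config local entry -->
      <smooks config-key="smooks-csv-config">
        <input type="text"/>
        <output type="xml"/>
      </smooks>
      <!-- One DB insert per record -->
      <iterate expression="//csv-set/csv-record">
        <target>
          <sequence>
            <dbreport>
              <connection>
                <pool>
                  <driver>com.mysql.jdbc.Driver</driver>
                  <url>jdbc:mysql://localhost:3306/mydb</url>
                  <user>esb</user>
                  <password>esb</password>
                </pool>
              </connection>
              <statement>
                <sql>INSERT INTO people (name, age) VALUES (?, ?)</sql>
                <parameter expression="//csv-record/name/text()" type="VARCHAR"/>
                <parameter expression="//csv-record/age/text()" type="INTEGER"/>
              </statement>
            </dbreport>
          </sequence>
        </target>
      </iterate>
    </inSequence>
  </target>
</proxy>
```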

In addition to the above proxy, you have to place the smooks-config XML file at the location specified in the proxy configuration. The Smooks configuration for this sample is as follows:
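Something like the following, assuming a two-field CSV with a header row (the field names are placeholders; the CSV reader emits each row as a csv-record element inside a csv-set root):

```xml
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:csv="http://www.milyn.org/xsd/smooks/csv-1.2.xsd">
  <!-- Parse the stream as CSV: two columns, comma-separated, skip the header -->
  <csv:reader fields="name,age" separator="," skipLines="1"/>
</smooks-resource-list>
```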

You also need to add a local entry to the ESB configuration pointing to the Smooks config file. (The above proxy configuration refers to this local entry key for its Smooks configuration.)
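For example (the key must match the config-key used in the proxy's Smooks mediator; the file path is a placeholder):

```xml
<localEntry key="smooks-csv-config"
            src="file:repository/resources/smooks-csv-config.xml"/>
```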

Thursday, September 17, 2015

Switching from HTTP/HTTPS to VFS in WSO2 ESB

In some cases, there can be a requirement to transfer content arriving over the HTTP transport to an FTP/SFTP location synchronously, and then acknowledge the HTTP client with the status of the transfer. This can easily be accomplished with a proxy similar to the following in WSO2 ESB 4.8.x.
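A sketch of such a proxy (the FTP URI, credentials, retry count, and timeout values are placeholders):

```xml
<proxy name="HttpToFtpProxy" transports="http https"
       xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <!-- no response is expected back from the VFS endpoint -->
      <property name="OUT_ONLY" value="true"/>
      <!-- call mediator: invoke the VFS endpoint within this message flow -->
      <call>
        <endpoint>
          <!-- retry/reconnect parameters appended to the VFS URI -->
          <address uri="vfs:ftp://user:pass@ftp.example.com/in?transport.vfs.MaxRetryCount=3&amp;transport.vfs.ReconnectTimeout=30"/>
        </endpoint>
      </call>
      <!-- build a status payload and return it to the HTTP client -->
      <payloadFactory media-type="xml">
        <format>
          <status xmlns="">transfer completed</status>
        </format>
        <args/>
      </payloadFactory>
      <respond/>
    </inSequence>
  </target>
</proxy>
```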

Here we use the OUT_ONLY property when calling the VFS endpoint, since we are not going to receive a response from it, and we use the call mediator to invoke it synchronously. We also append a retry count and a reconnection timeout to the VFS URI so the transfer is retried if the endpoint is temporarily unavailable. Finally, we build a payload with the status and send it back to the client using the respond mediator.

Saturday, January 31, 2015

Disabling http access logs in WSO2 CEP and other carbon products

When you use the HTTP input adaptor of CEP 3.1.0 with frequent HTTP requests, after some time you may notice that the HTTP access log files have grown large. (This happens because the servlet transport is used, so it does not apply to the ESB and other products based on the Synapse engine.)

You can disable the http access logs as follows:
Go to repository/conf/tomcat/catalina-server.xml. Near the bottom of the file, you'll find an entry like the following:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs" prefix="http_access_" suffix=".log" pattern="combined" />

Comment out or remove this entry to disable the access log, and restart the server.

Sunday, March 31, 2013

Writing a Relational Database broker for the WSO2 Complex Event Processor

WSO2 Complex Event Processor (CEP) has a pluggable architecture where users can write and plug in 'brokers' to send events from an event source to the CEP engine, and also to receive output events from the CEP engine and send them to an event sink. Currently there are quite a few broker types available with the CEP, supporting JMS, WS-Events, Email, etc. If you want to persist events in a relational database, here's how you can achieve it by writing an RDBMS broker.

There are two classes you have to extend in order to implement a custom broker: BrokerType and BrokerTypeFactory.


These classes are available in a package that can be added as a Maven dependency as follows:


Depending on the RDBMS vendor you are going to support, you'll have to add a dependency to the relevant JDBC driver as well. Here for this example we will be using the MySQL driver.


The BrokerType implementation class contains the methods that will receive events from the CEP engine. It also provides a way to define broker-specific configuration parameters (such as the database host IP and username) that will be available in the web console at runtime.

Of the methods in BrokerType, you'll have to implement the following two.

1. getBrokerTypeDto() - this should return the BrokerTypeDto object that contains the required configuration parameters for the Broker.

When implementing getBrokerTypeDto(), you should populate the BrokerTypeDto with all the parameters that are going to be adjusted at runtime. Here's an example of how you can add the hostname field so that the user is asked to fill it in when configuring a new broker.

    Property hostName = new Property("Hostname");
    hostName.setDisplayName("Host name");

2. publish(String topic, Object message, BrokerConfiguration brokerConfiguration) - this is where the output events from the CEP engine will be received and written to the database.

The parameter 'topic' used here maps to a database table name, and the user provides it when configuring a new CEP bucket. The 'message' parameter contains the output; if you choose the output mapping to be 'Map mapping' (again selected when configuring a new CEP bucket), you will receive a java.util.Map with all the output key/value pairs. The brokerConfiguration contains all the configuration parameters you requested in the getBrokerTypeDto() method, including the hostname, username, etc. The first thing to do here is to check whether the given table exists and, if it doesn't, create a new one using the topic name. You can infer its column names and data types by examining the Map received. Once that is done, you can simply create a statement and feed the values into the table.
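A sketch of how publish() might derive the DDL and DML from the event Map (the class name, method names, and type mapping are illustrative; a real broker would also cache the table-existence check and execute the insert via a PreparedStatement over JDBC):

```java
import java.util.Map;
import java.util.StringJoiner;

class RdbmsEventSql {

    /** Map a Java value from the event Map to a reasonable SQL column type. */
    static String columnType(Object value) {
        if (value instanceof Integer || value instanceof Long) return "BIGINT";
        if (value instanceof Float || value instanceof Double) return "DOUBLE";
        if (value instanceof Boolean) return "BOOLEAN";
        return "VARCHAR(255)";  // fall back to text for anything else
    }

    /** CREATE TABLE statement inferred from the event's keys and value types. */
    static String createTableSql(String topic, Map<String, Object> event) {
        StringJoiner cols = new StringJoiner(", ");
        for (Map.Entry<String, Object> e : event.entrySet()) {
            cols.add(e.getKey() + " " + columnType(e.getValue()));
        }
        return "CREATE TABLE IF NOT EXISTS " + topic + " (" + cols + ")";
    }

    /** Parameterised INSERT matching the event's keys, for a PreparedStatement. */
    static String insertSql(String topic, Map<String, Object> event) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner marks = new StringJoiner(", ");
        for (String key : event.keySet()) {
            cols.add(key);
            marks.add("?");
        }
        return "INSERT INTO " + topic + " (" + cols + ") VALUES (" + marks + ")";
    }
}
```

Note that a LinkedHashMap (or any ordered Map) should be used when binding values so the INSERT's parameter order matches the generated column list.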

BrokerTypeFactory, as its name implies, is responsible for creating/providing BrokerType objects. All you have to do here is to implement getBrokerType() method to return an instance of the BrokerType implementation class.

Once you complete writing the code, package it as a jar and put the jar file into the repository/components/lib folder. Also remember to add the JDBC driver to the same directory. Then add an entry to repository/conf/broker.xml as follows (create the broker.xml file if it does not exist):

    <brokerTypes xmlns="">
        <brokerType class=""/>
    </brokerTypes>

Now start the server and select 'Configure' in the web console. When you select Broker -> Add, you will see the newly deployed broker listed in the 'Broker Type' dropdown. You can configure the broker instance from there.

An example implementation can be found here.

Thursday, February 9, 2012

International shipping for Amazon, eBay and other U.S.-only shipping sites

Some popular online merchants like Amazon and eBay often have products whose delivery is restricted to the U.S. only.

If you live outside U.S. and want to buy any such item, here's how:

- Register yourself at a mail/package forwarding service. There are several popular services to choose from.

- They will ask for your current address, and you may have to pay a small registration fee. Once the registration is complete, you will receive a U.S. address from them. This is actually their address with your name assigned to it, so it can be used as your own residential address to receive anything from mail/couriers.

- Now you can buy anything from Amazon/eBay and others without bothering whether it has worldwide shipping or not. Just use your newly received US-address as the shipping address.

- Whenever the forwarding service receives any mail/package in your name, they will forward it to your real residential address. You'll have to pay them a handling fee on top of the postage/courier fees.

I tried this once recently and it worked fine; the items arrived within a week, as the forwarding service sent them via a courier. Note, however, that this can sometimes be costly and may not be worth it for cheaper items.

Sunday, October 30, 2011