Liferay Cluster in the Cloud

Liferay Portal is one of the most popular Java CMSs in the world thanks to its impressive ease of use. Since we published the tutorial on deploying Liferay to the cloud, we have seen an extremely positive reaction from its community. We have also received multiple requests from Liferay fans asking about clustering, replication, and failover capabilities in the cloud. Hopefully, this post will provide the answer. Today I’ll focus on how to create a highly available cluster for Liferay that can easily cope with heavy traffic, improve performance, and provide full failover.

I want to underline that Liferay suits Jelastic well from a scaling perspective: this CMS works perfectly in horizontal as well as vertical clusters, and even in a mix of both. You can see a more detailed description of the Liferay cluster architecture in Jelastic in the scheme below:

LifeRay

As the diagram above shows, we’ll install Liferay on two app servers (Tomcat in our case) and place a load balancer (NGINX) in front of them to handle all requests. The NGINX balancer distributes requests across the servers according to their availability and load. We’ll also define two replicated data sources (MySQL).

So, if your server cannot handle the high traffic of your Liferay site, this tutorial is for you. Don’t worry about administration difficulties: with the Jelastic Platform only a few simple configurations are needed.

Let’s start now!

Deploy application

Create your environment and deploy Liferay Portal to the cloud using this tutorial.

Configure database

In a Liferay cluster there are two modes that can be used for your database: a master-slave configuration and database sharding. Let’s examine both of them.

Master-slave database configuration

In this case you use two different data sources for reading and writing, splitting the database infrastructure into two sets. We’ll use the MySQL master-slave replication mechanism, which offers multiple benefits for response time, system administration, and failover. This approach has one small minus, though: both of your databases are doing the same work, which means additional resource consumption.

1. To enable master-slave replication, create two identical environments (liferayread and liferaywrite) with MySQL and apply a few simple configurations, as described in our previous article “Database Master-Slave Replication in the Cloud”.
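For reference, the core of that setup looks roughly like the statements below. This is a minimal sketch with placeholder hostnames, passwords, and log coordinates; the full walkthrough (including the my.cnf changes for log-bin and server-id) is in the linked article.

```sql
-- On the master (liferaywrite): create a replication account
-- (assumes log-bin and a unique server-id are already set in my.cnf)
CREATE USER 'repl'@'%' IDENTIFIED BY '{replication_password}';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave (liferayread): point it at the master and start replicating
-- (take the log file and position from SHOW MASTER STATUS on the master)
CHANGE MASTER TO
  MASTER_HOST='mysql-liferaywrite.{hoster_domain}',
  MASTER_USER='repl',
  MASTER_PASSWORD='{replication_password}',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
```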

2. Once both databases are successfully configured, create a new user and a database named liferay in your master (writer) database.

liferay cluster database user
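If you prefer the MySQL console to the admin panel, the equivalent statements look roughly like this (the user name, host mask, and password are placeholders to adjust for your environment):

```sql
-- On the master (writer) database: create the schema and the user Liferay will use
CREATE DATABASE liferay CHARACTER SET utf8;
CREATE USER 'liferay'@'%' IDENTIFIED BY '{your_database_password}';
GRANT ALL PRIVILEGES ON liferay.* TO 'liferay'@'%';
FLUSH PRIVILEGES;
```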

3. Go back to the Jelastic dashboard and click on the Config button for Tomcat.

4. Navigate to the home folder and click the New file button to create a new portal-ext.properties configuration file. Add the following line:

resource.repositories.root=${user.home}/liferay

5. In the same file, enable a read-writer database by configuring two different data sources for Liferay to use:

jdbc.default.driverClassName=com.mysql.jdbc.ReplicationDriver
jdbc.default.url=jdbc:mysql:replication://mysql-{your_write_database_environment_name}.{hoster's_domain}:3306,mysql-{your_read_database_environment_name}.{hoster's_domain}:3306/liferay?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username={your_database_user_name}
jdbc.default.password={your_database_password}

6. Save the changes and restart your server.

Configure database sharding

Liferay also supports database sharding, which allows you to split your database up by the different types of data it contains. Processing is then distributed evenly, and the amount of data each database has to handle is decreased. Keep in mind, however, that this approach doesn’t protect against system failures.

1. Create two (or more) identical environments with MySQL databases, just as we did for the master-slave configuration.

liferay cluster ext properties

2. Once both (or all) of the databases are successfully configured, create a new user and a database for each shard. Let’s call the databases liferayone and liferaytwo.

liferay sharding
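Again, in console terms this amounts to roughly the following sketch, with one schema per shard and placeholder credentials; repeat the pattern for every additional shard:

```sql
-- One schema per shard, plus a user with access to both
CREATE DATABASE liferayone CHARACTER SET utf8;
CREATE DATABASE liferaytwo CHARACTER SET utf8;
CREATE USER 'liferay'@'%' IDENTIFIED BY '{your_database_password}';
GRANT ALL PRIVILEGES ON liferayone.* TO 'liferay'@'%';
GRANT ALL PRIVILEGES ON liferaytwo.* TO 'liferay'@'%';
FLUSH PRIVILEGES;
```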

3. Navigate to the home folder and click the New file button to create a new portal-ext.properties configuration file. Add the following line:

shard.selector=com.liferay.portal.dao.shard.RoundRobinShardSelector

This will allow you to use the round robin shard selector, which is the default sharding algorithm in Liferay.

4. In the same file, set up your different database shards:

jdbc.one.driverClassName=com.mysql.jdbc.Driver
jdbc.one.url=jdbc:mysql://mysql-{your_first_database_environment_name}.{hoster's_domain}/liferayone?useUnicode=true&characterEncoding=UTF-8
jdbc.one.username={your_database_user_name}
jdbc.one.password={your_database_password}
jdbc.two.driverClassName=com.mysql.jdbc.Driver
jdbc.two.url=jdbc:mysql://mysql-{your_second_database_environment_name}.{hoster's_domain}/liferaytwo?useUnicode=true&characterEncoding=UTF-8
jdbc.two.username={your_database_user_name}
jdbc.two.password={your_database_password}
shard.available.names=one,two

liferay-cluster-database-sharding

5. Save all the changes and restart Tomcat.

Start Liferay

Now you can open your application in a browser and go through the steps of the Liferay CMS installation.

liferay cluster open in browser

Configure cluster

The final step is cluster configuration. With Jelastic you can create a highly available cluster in just a few clicks. Here’s how:

1. Navigate to your Liferay cluster environment and click on Change environment topology.

liferay-cluster-change-topology

2. Switch on High Availability and specify the cloudlet limit for the NGINX load balancer.

liferay cluster configuration

Note: We switched on HA only after the CMS installation, in order to generate portal-setup-wizard.properties (home/liferay) and clone it to the other instance, avoiding a doubled installation.

liferay cluster setup wizard

If one node of your Liferay cluster goes down, the other one takes over. At the same time, Liferay uses Hibernate on both nodes for interaction between the application and the databases. Using the approach described above, you can easily extend your Liferay cluster by adding more server instances to your environment; however, the configuration can be further optimized for your own needs. If you have, or have had, such an experience, please let me know by adding a comment below.

3 Responses to “Liferay Cluster in the Cloud”

  1. sotix

    Nice article. I think Liferay is caching stuff from the database, so if for example you add a page on one node of the Liferay cluster, it will take a while (or even need a restart?) until such a change becomes visible on all nodes. How did you manage this?
    Does setting the HA feature mean you activate session replication?

  2. Marina Sprava

    Hi! Thanks for the kind words :)

    >I think Liferay is caching stuff from the database, so if for example you add a page on one node of the Liferay cluster, it will take a while (or even need a restart?) until such a change becomes visible on all nodes. How did you manage this?

    Yes, sure, this takes some time, but it’s insignificant.

    >Does setting the HA feature mean you activate session replication?

    You are absolutely right. Session replication keeps copying session data between server instances thus providing high reliability, scalability, and perfect failover capabilities. You can find more info about this feature here http://jelastic.com/docs/session-replication

    Best regards,
    Marina

  3. Rafik

    Hello,

    I think you have configured a load balancer on Liferay and not a Cluster.
    What about clustering cache, document and media library, lucene indexes and quartz jobs ?
    Best Regards

    Rafik

