Automated Traffic Distribution for Blue-Green Deployment, Zero Downtime Updates and Failover Protection


The absolute majority of production environments must be accessible to customers at all times, and the most common threat to that availability is the process of project redeployment. Usually it is solved with the help of additional software, but integrating such tools is rather complicated and may require extra human resources, as well as valuable time spent on configuration. Multiple environment copies can also be used as insurance for high availability. In that situation, you face the problem of properly distributing traffic between the project copies, including aspects like the request routing method, server load ratios, etc. Solving all of these issues can become a challenge even for experienced developers.

So in this article we’d like to talk about solutions that implement advanced traffic routing, automate this process as much as possible, and bring a number of benefits, such as:

  • Apply implicit blue-green deployment (i.e. zero-downtime updates) by redirecting a portion or all of the traffic to a new version of an application
  • Perform ongoing A/B testing by routing part of the traffic to a newer application version to compare performance and UX metrics
  • Achieve advanced failover protection and high availability by sharing the load between two fully functional application copies in different cloud regions

Let’s go through the main traffic routing methods, see how to configure them manually using an NGINX server as an example, and then how to automate this process with an add-on that can be installed in one click.

Round Robin

Round Robin is the simplest and most frequently used routing method. It distributes requests one by one among the servers according to preconfigured priorities and traffic ratios.

Note that this method should be selected only when your endpoints serve identical content, since the data requested by users will be loaded from both backends.
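In NGINX terms, the ratio is expressed through the weight parameter on each upstream server; for instance, a 3:1 split sends roughly 75% of requests to the first backend. A minimal sketch with illustrative host names (the full configuration appears in the manual section below):

```nginx
upstream cluster {
    # 3 of every 4 requests go to v1, 1 to v2 (~75% / 25%)
    server app-v1.example.com weight=3;
    server app-v2.example.com weight=1;
}
```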

Sticky Sessions

The Sticky Sessions method assigns each end user to a specific backend, which receives their requests for as long as the corresponding session is alive. On the first visit, a customer is routed based on the servers’ weights; the assigned backend is then remembered, ensuring that all subsequent requests from this user go to the same environment.

Commonly, this is implemented by remembering the client’s IP address, which is not optimal: many customers may sit behind a single proxy, resulting in unfair balancing. A solution based on session cookies can be used instead to make routing persistent, so that each browser becomes a unique “user” and balancing is more even.

In this way, the Sticky Sessions distribution of new users is similar to the round robin method and is performed according to the preset priorities. For example, a 50% to 50% ratio makes both application versions receive an equal number of unique users, which is useful. But irrespective of the servers’ weights, an “old” user’s requests will always be redirected to the host they are assigned to until their session expires or the cookie is removed.

Also, if you set a 100% ratio for one server, the second one won’t be removed from the configuration completely, so it will still be able to process the already existing sessions.
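For reference, the IP-based persistence mentioned above is available in stock NGINX through the ip_hash directive, with no extra modules required (a sketch with illustrative host names; the cookie-based sticky directive used later in this guide comes from a third-party module):

```nginx
upstream cluster {
    # Pin each client to a backend by source IP. As noted above,
    # this is less fair than cookie-based stickiness when many
    # users share a single proxy address.
    ip_hash;
    server app-v1.example.com;
    server app-v2.example.com;
}
```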

Failover

Failover traffic routing redirects customers to a backup standby server in case of any failure on the production server. In other words, you have a primary server and a backup one: all requests are initially forwarded to the first endpoint, while the second one is used only if the primary service goes down. Requests will then be automatically redirected to the working server, so your users probably won’t even notice any interruption in the application’s work.
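If your NGINX build lacks the third-party check module used in the configurations below, stock NGINX can approximate failover with passive health checks on the primary plus a backup server (a sketch; host names and values are illustrative):

```nginx
upstream cluster {
    # After 3 failed requests within 30 s, the primary is considered
    # unavailable for 30 s and traffic falls through to the backup.
    server app-primary.example.com max_fails=3 fail_timeout=30s;
    server app-backup.example.com  backup;
}
```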

Manual Configuration of Traffic Routing Methods

Let’s see how to configure these three types of traffic routing on an NGINX 1.8 server.

1. First of all, make sure that your NGINX build includes the modules required by the configurations below. Note that the check and sticky directives used in this guide are not part of stock NGINX: they come from third-party modules (an upstream health-check module and a sticky-session module, respectively) that must be compiled in.

2. After that, access the server under your NGINX user:

su nginx

3. Edit nginx.conf for:

  • Round Robin traffic routing
          http {
               upstream cluster {
                 server {host1} weight={host1_weight};
                 server {host2} weight={host2_weight};
                 check interval=30000 rise=2 fall=5 timeout=10000 default_down=false type=http;
              }

              server {
                listen 80;

                location / {
                   proxy_pass http://cluster;
                }
              }
            }

 

  • Sticky Sessions
          http {
               upstream cluster {
                 server {host1} weight={host1_weight};
                 server {host2} weight={host2_weight};
                 check interval=30000 rise=2 fall=5 timeout=10000 default_down=false type=http;
                 sticky name={cookie} path=/;
               }

               server {
                 listen 80;

                 location / {
                   proxy_pass http://cluster;
                 }
               }
             }

 

  • Failover Routing
            http {
               upstream cluster {
                 server {host1};
                 server {host2} backup;
                 check interval=30000 rise=2 fall=5 timeout=10000 default_down=false type=http;
               }

               server {
                 listen 80;

                 location / {
                    proxy_pass http://cluster;
                 }
               }
             }

 

where

  • {host1} – the domain or IP address of the first endpoint
  • {host1_weight} – the weight of the first server, default: 1
  • {host2} – the domain or IP address of the second endpoint
  • {host2_weight} – the weight of the second server, default: 1
  • {cookie} – the name of the cookie used to track the persistent upstream server, default: route
  • path – the path in which the cookie will be valid, default: /
  • interval – delay between two consecutive check requests in milliseconds
  • rise – amount of successful checkups, after which the server is marked as up and working
  • fall – amount of checkup failures, after which the server will be marked as unavailable
  • timeout – reply timeout in milliseconds before the check request is considered as failed
  • default_down (true/false) – sets the initial state of both endpoints (down or up, respectively); set it to false here
  • type – protocol type to be used for health check
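As a worked example of these parameters: with interval=30000 and fall=5, an unresponsive backend is only marked down after roughly 5 × 30 s = 2.5 minutes of failed probes. For faster failover you might tighten the values along these lines (illustrative numbers, not a recommendation for every workload):

```nginx
# Probe every 3 s; mark a backend down after 2 failures (~6 s)
# and back up after 2 successes, with a 1 s reply timeout.
check interval=3000 rise=2 fall=2 timeout=1000 default_down=false type=http;
```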

4. Reload the server to apply the changes:

nginx -s reload

Each time you need to tune the settings, you’ll have to edit nginx.conf and reload the server again.
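Since every tuning cycle means editing nginx.conf and reloading, it pays to validate the file before the reload so a typo never reaches the running master process. A minimal sketch of such a cycle, shifting traffic weights for a blue-green cutover (paths and host names are illustrative, and sed stands in for your editor):

```shell
#!/bin/sh
# Work on a throwaway copy of the upstream block for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
upstream cluster {
  server app-v1.example.com weight=9;
  server app-v2.example.com weight=1;
}
EOF

# Shift most traffic to the new version by swapping the weights.
sed -e 's/app-v1.example.com weight=9/app-v1.example.com weight=1/' \
    -e 's/app-v2.example.com weight=1/app-v2.example.com weight=9/' \
    "$conf" > "$conf.new" && mv "$conf.new" "$conf"

# In a real setup, test the full config and only then reload:
#   nginx -t && nginx -s reload
grep 'weight' "$conf"
```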

Automated Traffic Distribution

To automate these settings, you can use the Traffic Distributor add-on, available for one-click installation through the Jelastic Marketplace. It provides smart traffic routing using the method and distribution ratio specified by the developer or system administrator within a user-friendly wizard.

With the help of Traffic Distributor, it’s simple to perform so-called “invisible” updates that cause no downtime for your application. This capability is in high demand given today’s rapid development pace and fast-growing competition, as you need to constantly update your project in order to stay relevant, win new users and, generally, not fall behind your competitors.

Try out for free at one of the Jelastic hosting partners.

