
Monday, May 12, 2014

Apache load balancer

An Apache add-on module that acts as a software load balancer and ensures that traffic is split across back-end servers or workers, reducing latency and giving users a better experience.

mod_proxy_balancer distributes requests to multiple worker processes running on back-end servers so that multiple resources can handle incoming traffic and processing. It ensures efficient utilization of the back-end workers and prevents any single worker from becoming overloaded.

When you configure mod_proxy_balancer, you can choose among three load-balancing algorithms: Request Counting, Weighted Traffic Counting, and Pending Request Counting, which we'll discuss in detail in a moment. The best algorithm to use depends on the individual use case; if you are not sure which to try first, go with Pending Request Counting.

The module also supports session stickiness, meaning you can optionally ensure that all the requests from a particular IP address or in a particular session go to the same back-end server. The easiest way to achieve stickiness is to use cookies, either inserted by the Apache web server or by the back-end servers (a sketch of the Apache-inserted-cookie approach follows the general configuration below).

A general configuration for load balancing defined in /etc/httpd/httpd.conf would look like this:

<Proxy balancer://A_name_signifying_your_app>
BalancerMember http://ip_address:port/ loadfactor=appropriate_load_factor # Balancer member 1
BalancerMember http://ip_address:port/ loadfactor=appropriate_load_factor # Balancer member 2
ProxySet lbmethod=the_Load_Balancing_algorithm
</Proxy>
You can specify anything for a name, but it's good to choose one that's significant. BalancerMember specifies a back-end worker's IP address and port number. A worker can be a back-end HTTP server or anything that can serve HTTP traffic. You can omit the port number if you use the web server's default port of 80. You can define as many BalancerMembers as you want; the optimal number depends on the capabilities of each server and the incoming traffic load. The loadfactor variable specifies the load that a back-end worker can take. Depending upon the algorithm, this can represent a number of requests or a number of bytes. lbmethod specifies the algorithm to be used for load balancing.
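For the Apache-inserted-cookie flavor of stickiness mentioned earlier, a minimal sketch looks like the following. It assumes mod_headers is loaded; the balancer name, IP addresses, and the /myapp path are placeholders, and the cookie is simply named ROUTEID:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://stickyapp>
BalancerMember http://192.168.10.11:80 route=1
BalancerMember http://192.168.10.10:80 route=2
ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /myapp balancer://stickyapp
ProxyPassReverse /myapp balancer://stickyapp

Each worker gets a route value; Apache sets the ROUTEID cookie to that value on the first response, and subsequent requests carrying the cookie are pinned to the same worker.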

 

Let's look at how to configure each of the three options.

Request Counting
With this algorithm, incoming requests are distributed among back-end workers in such a way that each back end gets a proportional number of requests defined in the configuration by the loadfactor variable. For example, consider this Apache config snippet:
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=byrequests
</Proxy>
In this example, one request out of every four will be sent to 192.168.10.11, while three will be sent to 192.168.10.10. This might be an appropriate configuration for a site with two servers, one of which is more powerful than the other.

 

Weighted Traffic Counting Algorithm
The Weighted Traffic Counting algorithm is similar to the Request Counting algorithm, with a minor difference: Weighted Traffic Counting considers the number of bytes transferred instead of the number of requests. In the configuration example below, the number of bytes processed by 192.168.10.10 will be three times that of 192.168.10.11.
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=bytraffic
</Proxy>
Pending Request Counting Algorithm
The Pending Request Counting algorithm is the latest and most sophisticated algorithm provided by Apache for load balancing. It is available from Apache 2.2.10 onward.

 

In this algorithm, the scheduler keeps track of the number of requests assigned to each back-end worker at any given time. Each new incoming request is sent to the back end that has the least number of pending requests – in other words, to the back-end worker that is currently the least loaded. This helps keep the request queues even among the back-end workers, and each request generally goes to the worker that can process it the fastest.

 

If two workers are equally lightly loaded, the scheduler uses the Request Counting algorithm to break the tie.
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ # Balancer member 1
BalancerMember http://192.168.10.10/ # Balancer member 2
ProxySet lbmethod=bybusyness
</Proxy>
Enable the Balancer Manager
Sometimes you may need to change your load-balancing configuration, but that may not be easy to do without affecting the running server. For such situations, the Balancer Manager module provides a web interface for changing the status of back-end workers on the fly. You can use Balancer Manager to put a worker into offline mode or change its loadfactor. You must have mod_status installed in order to use Balancer Manager. A sample config, which should be defined in /etc/httpd/httpd.conf, might look like:

 

<Location /balancer-manager>

SetHandler balancer-manager

Order Deny,Allow
Deny from all
Allow from .test.com
</Location>
Once you add directives like those above to httpd.conf and restart Apache, you can open the Balancer Manager by pointing a browser at http://test.com/balancer-manager. Here is a complete sample virtual host that ties the balancer, the Balancer Manager, and the proxy rules together:

 

<VirtualHost *:80>
ProxyRequests off

ServerName domain.com

<Proxy balancer://mycluster>
# WebHead1
BalancerMember http://10.176.42.144:80
# WebHead2
BalancerMember http://10.176.42.148:80

# Security: technically we aren't blocking
# anyone, but this is the place to make those
# changes.
Order Deny,Allow
Deny from none
Allow from all

# Load Balancer Settings
# We will be configuring a simple Round
# Robin style load balancer. This means
# that all webheads take an equal share
# of the load.
ProxySet lbmethod=byrequests

</Proxy>

# balancer-manager
# This tool is built into the mod_proxy_balancer
# module and will allow you to do some simple
# modifications to the balanced group via a gui
# web interface.
<Location /balancer-manager>
SetHandler balancer-manager

# I recommend locking this one down to
# your office
Order deny,allow
Allow from all
</Location>

# Point of Balance
# This setting allows you to explicitly name
# the location in the site that we want to be
# balanced; in this example we will balance "/",
# or everything in the site.
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/

</VirtualHost>

 

Sticky sessions with Tomcat back-end workers

Enable proxy_module, proxy_balancer_module, and proxy_http_module in the Apache web server's httpd.conf:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so
Add a proxy pass along with the balancer name for the application context root. In this example, the proxy path is /examples and the balancer name is mycluster. It is very important to include stickysession: without it, requests from the same session can be distributed across multiple Tomcat servers, and you will see session-expiry issues in the application.

<IfModule proxy_module>
ProxyRequests Off
ProxyPass /examples balancer://mycluster stickysession=JSESSIONID
ProxyPassReverse /examples balancer://mycluster
<Proxy balancer://mycluster>
BalancerMember http://localhost:8080/examples route=server1
BalancerMember http://localhost:8090/examples route=server2
</Proxy>
</IfModule>
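For the routes to take effect, the route values above are assumed to match each Tomcat instance's jvmRoute, which is what appends the suffix to the session ID. A minimal server.xml sketch for the first instance (only the jvmRoute attribute is the point here; the rest is Tomcat's stock Engine element):

<Engine name="Catalina" defaultHost="localhost" jvmRoute="server1">

The second Tomcat instance would use jvmRoute="server2" to match its BalancerMember route.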
As you can see in the above configuration, I have added a route to each BalancerMember so that the route value appended to the session ID can be matched back to the correct worker. Now, let's configure Apache to print the JSESSIONID in the access logs.

Add the following to the LogFormat directive:
%{JSESSIONID}C
Example:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{JSESSIONID}C\"" combined
Restart the Apache web server.

Extend a root volume in AWS

To extend a root volume in AWS, we first need to take a snapshot of the correct volume and then create a new volume of the needed size from that snapshot. Start by checking the size of the current partition; we can see that the current size is 10 GB. Initial state:

First find the correct instance ID

Instance-ID-00

Check the Volumes section and find out which volumes are attached to the correct instance. Note the volume ID and its device path, as we need to attach the new volume at the same device path. Instance-ID-01

Select the correct volume and create a snapshot of it: right-click the volume and select the option to create a snapshot.

Instance-ID-02

Enter the snapshot name and description to create a snapshot.

Instance-ID-03

Snapshot Created.

Instance-ID-04

Now select the Snapshots option in the menu, choose the snapshot of the volume that needs to be extended, and right-click it to create the new volume.

Instance-ID-05  

Select the needed size, type of storage, etc.

  Instance-ID-06  

Once the volume is created, you can see it in the Volumes panel in the 'available' state.

  Instance-ID-07  

Stop the instance to which the volume is attached.

  Instance-ID-08

Go to the Volumes panel and detach the initial, smaller volume by right-clicking the old volume.

Instance-ID-09  

Once the volume is detached, both volumes will be in the 'available' state. Now right-click the new volume and attach it to the instance.

  Instance-ID-10

Enter the instance to which the volume should be attached, along with the device path we saved earlier.

Instance-ID-11  

Make sure you enter the device path exactly as we noted it down before.

Instance-ID-12


Once attached, make sure that the instance ID, volume ID, and device path are correct.

Instance-ID-13  

Once it's all set, start the instance and check the size. In some cases we need to run resize2fs so that the filesystem grows to fill the extended volume; a sketch follows below.

instance    
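For reference, a minimal command-line sketch on Linux, assuming an ext3/ext4 root filesystem on /dev/xvda1 (the device name on your instance may differ):

lsblk                      # confirm the block device now reports the new size
sudo resize2fs /dev/xvda1  # grow the ext filesystem to fill the extended volume
df -h /                    # verify the new size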

In Windows, you can resize it from the Disk Management tool.
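For those who prefer the command line, the same sequence can be sketched with the AWS CLI; all of the IDs, the size, and the availability zone below are placeholders:

aws ec2 create-snapshot --volume-id vol-1234abcd --description "root volume backup"
aws ec2 create-volume --snapshot-id snap-1234abcd --size 20 --availability-zone us-east-1a
aws ec2 stop-instances --instance-ids i-1234abcd
aws ec2 detach-volume --volume-id vol-1234abcd
aws ec2 attach-volume --volume-id vol-5678efgh --instance-id i-1234abcd --device /dev/sda1
aws ec2 start-instances --instance-ids i-1234abcd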

Friday, May 9, 2014

AWS IAM - Identity and Access Management

An AWS account has full permission to perform all actions on the resources in the account. However, AWS Identity and Access Management (IAM) users don't have any permissions by default.

IAM helps us to securely control access to Amazon Web Services and your account resources. With IAM, you can create multiple IAM users under the umbrella of your AWS account.

Every user you create in the IAM system starts with no permissions. In other words, by default, users can do nothing. Permission is a general term we use to mean the ability to perform an action against a resource; unless you explicitly grant a user permissions, that user cannot perform any such actions. You grant permissions to a user with a policy. A policy is a document that formally states one or more permissions.

IAM Users

An IAM user is an entity that you create in AWS that provides a way to interact with AWS. A primary use for IAM users is to give people you work with identities that they can use to sign in to the AWS Management Console and to make requests to AWS services.

AWS IAM Groups

A group is a collection of IAM users. Groups let you specify permissions for a collection of users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and should have administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old group and add him or her to the new group.

IAM_Group

 

Creating a Group with the needed privileges

Entering the Group Name

IAM_Group-00

Selecting Permissions

AWS provides a set of predefined permission policy templates that we can use. The templates provided by AWS cover all the services in AWS.

IAM_Group-00-Policy-section-00

We can also generate custom policies with the help of the Policy Generator.

IAM_Group-00-Policy-Generator-00

First, select the service for which we need to create policies.

IAM_Group-00-Policy-Generator-01

 

Select the permissions we need to add to the policy.

IAM_Group-00-Policy-Generator-03

The Amazon Resource Name (ARN): this identifies the resource the policy applies to, giving details about the service, region, account, resource, etc.

 

ARN format

==========

arn:aws:service:region:account:resource

arn:aws:service:region:account:resourcetype/resource

arn:aws:service:region:account:resourcetype:resource
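A few illustrative examples with placeholder account numbers and identifiers (note that some services, such as IAM and S3, leave the region and/or account fields empty):

arn:aws:iam::123456789012:user/David

arn:aws:ec2:us-east-1:123456789012:instance/i-0a1b2c3d

arn:aws:s3:::my_bucket/exampleobject.png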

 

More details can be found at

http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html

IAM_Group-00-Policy-Generator-04

Once the ARN is added, we can click Add Statement to see the rules that have been added.

IAM_Group-00-Policy-Generator-05

 

Now we will be able to see the policy document, which we can reuse to create custom policies if needed; a minimal example is sketched after the screenshot below.

IAM_Group-00-Policy-Generator-06
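The generated document is standard IAM policy JSON. A minimal illustrative example (the action and resource below are placeholders, not the values from the screenshot above):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}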

Creating the Group

IAM_Group-00-Policy-Generator-07

 

Creating the User

IAM_User-00

Keep the Access Key ID and Secret Access Key safe, because this is the last time you will see the secret key in AWS; AWS will not show it to you again. You can, however, create as many keys as you need.

IAM_User-01

 

Adding the User to Group

Right click on the needed user to get more options.

IAM_User-01-togroup-00

Select the required Group

IAM_User-01-togroup-01

Once the group is added, we need to give the user a password.

IAM_User-01-password-setting-00

Assign the needed Password

IAM_User-01-password-setting-01

 

The Group and password are set for the User.

IAM_User-01-password-setting-02
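The same group-and-user setup can also be sketched with the AWS CLI; the group name, user name, and password below are placeholders:

aws iam create-group --group-name Admins
aws iam create-user --user-name alice
aws iam add-user-to-group --user-name alice --group-name Admins
aws iam create-access-key --user-name alice
aws iam create-login-profile --user-name alice --password 'Placeholder-Passw0rd'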

 

 

 

Once the user is set up, we can set the IAM sign-in URL alias.

IAM-URL

Give the needed Alias

IAM-URL-Alias

 

The URL is set.

IAM-URL-Alias-01

 

Now you can use the URL to access the IAM login portal.