Comparison Analysis: Amazon ELB vs HAProxy EC2

In this article I have analysed Amazon Elastic Load Balancer (ELB) and HAProxy (a popular load balancer in AWS infrastructure) across the following production scenarios and fitment aspects:

Algorithms: In terms of algorithms, ELB provides Round Robin and Session Sticky algorithms based on EC2 instance health status. HAProxy provides a variety of algorithms like Round Robin, Static-RR, Least Connection, Source, URI, url_param etc. For most production cases Round Robin and Session Sticky are more than enough, but if you require algorithms like Least Connection you might have to lean towards HAProxy for now. AWS may add such algorithms to its load balancer in the future.
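For illustration, a minimal HAProxy backend sketch using the Least Connection algorithm (the backend name, server names and addresses are hypothetical):

backend app_pool
    balance leastconn              # pick the server with the fewest active connections
    server app1 10.0.1.11:80 check
    server app2 10.0.1.12:80 check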

Spiky or Flash Traffic: Amazon ELB is designed to handle a very large number of concurrent requests per second with a “gradually increasing” load pattern. It is not designed to handle a sudden heavy spike of load or flash traffic. For example, imagine an e-commerce website whose traffic increases gradually to thousands of concurrent requests/sec over several hours. In contrast, use cases like a mass online exam, a GILT-style flash sale or a 3-hour sales/launch campaign can see a spike of 20K+ concurrent requests/sec within a few minutes, and Amazon ELB will struggle to handle this load volatility. If this sudden spike pattern is not a frequent occurrence we can pre-warm the ELB; otherwise we need to look at alternative load balancers like HAProxy in AWS infrastructure. If you expect a sudden surge of traffic you can provision X number of HAProxy EC2 instances in the running state ahead of time.

Gradually Increasing Traffic: Both Amazon ELB and HAProxy can handle gradually increasing traffic. But when your needs become elastic and traffic grows over the day, you either need to automate adding new HAProxy EC2 instances or add them manually when a threshold is breached. Likewise, when the load decreases you may need to manually remove HAProxy EC2 instances from the load balancing tier. Avoiding this manual effort means engineering it yourself with automation scripts and programs. Amazon has intelligently automated this elasticity problem in its ELB tier; we just need to configure and use it, that's all.

Protocols: Currently Amazon ELB supports only the following protocols: HTTP, HTTPS (secure HTTP), SSL (secure TCP) and TCP. ELB supports load balancing on the following TCP ports: 25, 80, 443, and 1024-65535. If RTMP or HTTP streaming is needed, we need to use the Amazon CloudFront CDN in the architecture. HAProxy supports both TCP and HTTP protocols. When an HAProxy EC2 instance works in pure TCP mode, a full-duplex connection is established between clients and servers and no layer 7 examination is performed; this is the default mode and can be used for SSL, SSH, SMTP etc. The current 1.4 version of HAProxy does not support HTTPS natively, so you may need to place Stunnel, Stud or Nginx in front of HAProxy to do the SSL termination. HAProxy 1.5-dev12 comes with SSL support and should become production ready soon.

Timeouts: Amazon ELB currently times out persistent socket connections at 60 seconds if they are kept idle. This is a problem for use cases where the backend EC2 instance generates large files (PDFs, reports etc) and keeps the connection idle during the entire generation process before sending the response. To work around this with Amazon ELB you'll have to send something on the socket every 40 seconds or so to keep the connection active. In HAProxy you can configure very large socket timeout values to avoid this problem.
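For example, a minimal sketch of an HAProxy defaults section with generous timeouts (the values shown are illustrative, not recommendations):

defaults
    mode http
    timeout connect 5s
    timeout client  30m    # tolerate long idle client connections
    timeout server  30m    # tolerate slow backend report/PDF generation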

Whitelisting IPs: Some enterprises might want to whitelist a 3rd party load balancer's IP range in their firewalls. If the 3rd party service is hosted behind Amazon ELB this becomes a problem: currently Amazon ELB does not provide fixed or permanent IP addresses for the load balancing instances launched in its tier. This is a bottleneck for enterprises which are obliged to whitelist load balancer IPs in external firewalls/gateways. For such use cases we can currently use HAProxy EC2 instances attached to Elastic IPs as load balancers in AWS infrastructure and whitelist the Elastic IPs.

Amazon VPC / Non-VPC: VPC stands for Virtual Private Cloud. Both Amazon ELB and HAProxy EC2 can work inside VPC and non-VPC environments of AWS.

Internal Load Balancing: Both Amazon ELB and HAProxy can be used for internal load balancing inside a VPC. You might provide a service that is consumed internally by other applications and needs load balancing; both ELB and HAProxy fit here. If internal load balancing is required in Amazon non-VPC environments, ELB is currently not capable of it and HAProxy can be deployed instead.

URI/URL based Load Balancing: Amazon ELB cannot load balance based on URL patterns like other reverse proxies can. For example, Amazon ELB cannot direct and load balance between the request URLs www.xyz.com/URL1 and www.xyz.com/URL2. Currently for such use cases you can use HAProxy on EC2, as sketched below.
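A minimal, hedged sketch of such URL-based routing in HAProxy (frontend/backend names and addresses are hypothetical):

frontend www
    bind *:80
    acl is_url1 path_beg /URL1
    acl is_url2 path_beg /URL2
    use_backend url1_pool if is_url1
    use_backend url2_pool if is_url2

backend url1_pool
    server app1 10.0.1.11:80 check

backend url2_pool
    server app2 10.0.1.12:80 check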

Sticky problem: This point comes as a surprise to many users of Amazon ELB. Amazon ELB behaves a little strangely when incoming traffic originates from a single or specific IP range: it does not do round robin efficiently and sticks requests to only some EC2 instances. Since I do not know the ELB internals, I assume ELB might be using a “Source”-style algorithm as the default under such conditions (I will have to check this with the AWS team). No such cases were observed with HAProxy EC2 in AWS unless the balance algorithm is “Source”. In HAProxy you can combine “Source” and “Round Robin” efficiently: if the HTTP request does not carry a cookie it uses the source algorithm, but once the HTTP request has a cookie HAProxy automatically shifts to Round Robin or Weighted.
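A hedged sketch of layering cookie persistence on top of a balance algorithm in HAProxy (requests without the cookie are dispatched by the balance algorithm, requests carrying it stick to the tagged server; names and addresses are hypothetical):

backend web_pool
    balance source                            # or 'balance roundrobin' / weighted servers
    cookie SERVERID insert indirect nocache   # persistence cookie set by HAProxy
    server web1 10.0.1.11:80 check cookie web1
    server web2 10.0.1.12:80 check cookie web2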

Logging: Amazon ELB currently does not provide access to its log files for analysis. We can only monitor some essential metrics through CloudWatch for ELB. Because we do not have access to the ELB logs, we cannot debug load balancing problems, analyse traffic and access patterns, or categorise bots versus visitors. This is also a bottleneck for organisations with strong audit/compliance requirements that must be met at all layers of their infrastructure. If very strict or specific logging requirements exist, you might need to use HAProxy on EC2, provided it otherwise suffices the need.

Monitoring: Amazon ELB can be monitored using Amazon CloudWatch. Refer to this URL for the ELB metrics that can currently be monitored: http://harish11g.blogspot.in/2012/02/cloudwatch-elastic-load-balancing.html. CloudWatch + ELB is detailed enough for most use cases and provides a consolidated result for the entire ELB tier in the console/API. On the other hand, HAProxy provides a user interface and stats for monitoring its instances, but if you have a farm of 20+ HAProxy EC2 instances it becomes complex to manage this monitoring efficiently. You can use tools like Server Density to monitor such HAProxy farms, but for deployments inside Amazon VPC this has a heavy dependency on NAT instance availability.

SSL Termination and Compliance requirements:
SSL termination can be done at two levels using Amazon ELB in your application architecture. They are:
SSL termination at the Amazon ELB tier, which means the connection is encrypted between the client (browser etc) and Amazon ELB, but the connection between ELB and the web/app EC2 instances is in the clear. This configuration may not be acceptable in strictly secure environments and will not pass compliance requirements.
SSL termination at the backend with end-to-end encryption, which means the connection is encrypted between the client and Amazon ELB, and the connection between ELB and the web/app EC2 backend is also encrypted. This is the recommended ELB configuration for meeting compliance requirements at the LB level.
HAProxy 1.4 does not support SSL termination directly; it has to be done in a Stunnel, Stud or Nginx layer in front of HAProxy. HAProxy 1.5-dev12 comes with SSL support and should become production ready soon; I have not yet analysed/tested the backend encryption support in this version.

Scalability and Elasticity: The most important architectural requirements of web-scale systems are scalability and elasticity. Amazon ELB is designed for this and handles these requirements with ease. Elastic Load Balancer does not cap the number of connections that it can attempt to establish with the load balanced Amazon EC2 instances, and it is designed to handle a very large number of concurrent requests per second. ELB is inherently scalable and can elastically increase/decrease its capacity depending upon the traffic. According to a benchmark done by RightScale, Amazon ELB was easily able to scale out and handle 20K+ concurrent requests/sec. Refer URL: http://blog.rightscale.com/2010/04/01/benchmarking-load-balancers-in-the-cloud/
Note: The load test was stopped at 20K req/sec by RightScale because ELB kept expanding its capacity. Considerable DevOps engineering is needed to automate the same functionality with HAProxy.

High Availability: Amazon ELB is inherently fault tolerant and a highly available service. Since it is a managed service, unhealthy load balancer instances are automatically replaced in the ELB tier. In the case of HAProxy, you need to do this work yourself and build HA on your own. Refer to http://harish11g.blogspot.in/2012/10/high-availability-haproxy-amazon-ec2.html to understand more about high availability at the load balancing layer using HAProxy.

Integration with Other Services: Amazon ELB can be configured to work seamlessly with Amazon Auto Scaling, Amazon CloudWatch and Route 53 DNS services. The new web EC2 instances launched by Amazon Auto Scaling are added to the Amazon ELB for load balancing automatically, and whenever load drops, existing EC2 instances can be removed from the ELB by Amazon Auto Scaling. Amazon Auto Scaling and CloudWatch cannot be integrated seamlessly with HAProxy EC2 for this functionality, but HAProxy can be integrated with Route 53 easily for DNS RR/weighted algorithms.

Cost: If you run an ELB in the US-East Amazon EC2 region for a month (744 hrs) processing close to 1 TB of data, it will cost around ~26 USD (ELB usage + data charge). If instead you use HAProxy (2 x m1.large EC2 instances for HAProxy, S3-backed AMI, Linux, no EBS attached) as base capacity and add up to 4 or more m1.large EC2 instances depending upon traffic, it will cost a minimum of ~387 USD for EC2 compute plus data charges to start with. It is clear and evident that larger deployments can save a lot of cost and benefit immensely from Amazon ELB compared to HAProxy on EC2.
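The arithmetic behind those two figures, as a rough Python sketch (the hourly and per-GB prices are assumptions back-calculated from the 2012-era totals above, not current AWS rates):

# Rough monthly cost comparison implied by the figures above.
HOURS_PER_MONTH = 744
DATA_GB = 1024                   # ~1 TB processed through the load balancer

elb_hourly = 0.025               # USD per ELB-hour (assumed 2012 US-East rate)
elb_per_gb = 0.008               # USD per GB processed (assumed)
elb_cost = HOURS_PER_MONTH * elb_hourly + DATA_GB * elb_per_gb

m1_large_hourly = 0.26           # USD per on-demand m1.large hour (assumed)
haproxy_base_instances = 2
haproxy_cost = haproxy_base_instances * HOURS_PER_MONTH * m1_large_hourly

print(f"ELB     : ~{elb_cost:.0f} USD/month")                          # ~27 USD (article rounds to ~26)
print(f"HAProxy : ~{haproxy_cost:.0f} USD/month before data charges")  # ~387 USD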

Use Amazon S3 Object Expiration for Cost Savings

Amazon S3 is one of the earliest and most popular services in AWS for storing files and documents. Customers typically store a variety of files in Amazon S3, including logs, documents, images, videos, dumps etc. We all understand that different files have different lifetimes and use cases in any production application. Some documents are frequently accessed for a limited period of time; after that you might not need real-time access to them, and they become candidates for deletion or archival.
For example:
Log files have a limited lifetime; they can be parsed into a data store or archived every few months
Database and data store dumps also have a retention period and hence a limited lifetime
Files related to campaigns are mostly not needed once the sales promotion is over
Customer documents depend on the customer usage life cycle and have to be retained as long as the customer is active in the application
Digital media archives, financial and healthcare records must be retained for regulatory compliance

Usually IT teams have to build some sort of mechanism or automated program in-house to track these document ages and initiate a deletion process (individual or bulk) from time to time. In my customer consulting experience, I have often observed that such a mechanism is not adequately in place for the following reasons:
Not all the IT teams are efficient in their development and operations
No mechanism/automation in place to manage the retention period efficiently
IT staff not fully equipped with AWS cloud knowledge
IT teams are usually occupied with the solutions/products catering to their business and hence do not have time to keep track of the rapid AWS feature roll-out pace

Imagine your application stores ~5 TB of documents every month. In a year this aggregates to ~60 TB of documents in Amazon S3 Standard storage. In the US-East region, ~60 TB of storage aggregated over the year will cost ~30,000 USD. Out of this, imagine ~20 TB of the documents aggregated over the year have a limited lifetime and could be deleted or archived every month. This equates to ~1,650 USD of cost leakage a year, which can be avoided if a proper mechanism or automation is put in place by the respective teams.
Note: Current charges for Amazon S3 Standard storage in US-East are 0.095 USD per GB for the first 1 TB and 0.080 USD per GB for the next 49 TB.
But is there a simpler way for IT teams to cut this leakage and save costs in Amazon S3? Yes: use the Amazon S3 Object Expiration feature.

What is Amazon S3 Object expiration?
Amazon S3 introduced a feature called Object Expiration (in late 2011) to ease the automation mechanism described above. It is very helpful for customers who want their data in S3 only for a limited period of time, after which the files are no longer needed and should be deleted automatically by Amazon S3. Earlier, as a customer you were responsible for deleting those files manually once they stopped being useful; now you do not have to worry about it, just use Amazon S3 Object Expiration.
The ~1,650 USD leakage in the scenario above can be saved by implementing the Amazon S3 Object Expiration feature in your system. Since it requires no automation effort, no compute hours for an automation program to run, and no manual labor, it offers invisible savings in addition to the direct savings.

Overall Savings = ~1650 USD (scenario) + Cost of compute hrs (for deletion program) + Automation engineering effort (or) Manual deletion effort

How does it work?
Amazon S3 Object Expiration allows you to define rules to schedule the removal of your objects after a pre-defined time period. The rules are specified in the Lifecycle Configuration policy of an Amazon S3 bucket and can be updated either through the AWS Management Console or the S3 APIs.
Once a rule is set, Amazon S3 calculates the Object Expiration time by adding the expiration lifetime to the object creation time and rounding the result up to midnight GMT of the following day. For example, if a file was created on 11/12/2012 at 11:00 UTC and the expiration period was specified as 3 days, Amazon S3 would calculate the expiration date-time of the file as 15/12/2012 00:00 UTC. Once objects are past their expiration date, they are queued for deletion. You can use Object Expiration rules on objects stored in both Standard and Reduced Redundancy storage of Amazon S3.
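As an illustration, here is a minimal sketch of setting such a rule programmatically with the boto3 SDK (a newer SDK than the tooling current when this was written); the bucket name, the 'logs/' prefix and the 90-day lifetime are hypothetical:

import boto3

s3 = boto3.client("s3")

# Expire objects under the 'logs/' prefix 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-bucket",                    # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},    # rounded up to the next midnight UTC by S3
            }
        ]
    },
)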

Add Spot Instances with Amazon EMR

Most of us know that Amazon Spot EC2 instances are usually a good choice for time-flexible and interruption-tolerant tasks. These instances are traded at a fluctuating spot market price, and you can fix your bid price using the AWS APIs or the AWS Console. Once spare Spot EC2 capacity is available at your bid price, AWS allots it to your account. Spot instances are usually available far cheaper than On-Demand EC2 instances. For example, the On-Demand m1.xlarge price is 0.48 USD per hour, while on the spot market you can sometimes find it at 0.052 USD per hour. That is roughly 9 times cheaper than the on-demand price; even if you only manage to bid competitively and hold spot EC2 at around 0.24 USD most of the time, you are saving 50% of the on-demand price straight away. Big data use cases usually need lots of EC2 nodes for processing, so adopting such techniques can make a vast difference to your infrastructure cost and operations in the long term. I am sharing my experience on this subject as tips and techniques you can adopt to save costs while using EMR clusters in Amazon for big data problems.
Note: While dealing with spot you can be sure that you will never pay more than your maximum bid price per hour.

Tip 1: Make the right choice (Spot vs On-Demand) for the cluster components
Data-critical workloads: For workloads which cannot afford to lose data, you can run the Master + Core nodes on Amazon On-Demand EC2 and your Task nodes on Spot EC2. This is the most common pattern when combining Spot and On-Demand in an Amazon EMR cluster. Since the Task nodes operate at spot prices, depending on your bidding strategy you can save ~50% of the cost of running them on On-Demand EC2. You can save further (if you are lucky) by reserving your Core and Master nodes, but you will be tied to an AZ. In my view this is not a good or common technique, because some AZs can be very noisy with high spot prices.
Cost-driven workloads: When solving big data problems, you sometimes face scenarios where cost matters far more than time. Example: you are processing archives of old logs as low-priority jobs, where the cost of processing is very important and there is usually abundant time left. In such cases you can run all of Master + Core + Task on Spot EC2 to squeeze out further savings beyond the data-critical approach. Since all the nodes operate at spot prices, depending on your bidding strategy you can save ~60% or more compared to running all nodes on On-Demand EC2. The table published by AWS gives an indication of the Amazon EMR + Spot combinations that are widely used; a minimal launch sketch follows below.
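A minimal sketch of the data-critical pattern (Master + Core on On-Demand, Task nodes on Spot) using the boto3 SDK, which is newer than the tooling of the time; the cluster name, release label, instance types, counts and bid price are all illustrative:

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Master + Core on On-Demand capacity, Task nodes bid for on the spot market.
response = emr.run_job_flow(
    Name="spot-task-node-cluster",
    ReleaseLabel="emr-5.36.0",                 # illustrative, modern release label
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
            {"InstanceRole": "TASK", "Market": "SPOT", "BidPrice": "0.10",
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",         # default EMR roles assumed to exist
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])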

Tip 2: There is a free lunch sometimes
Spot instances can be interrupted by AWS when the spot price reaches your bid price. Interruption means that AWS can pull back the Spot EC2 instances assigned to your account when the market price matches or exceeds your bid. If your Spot Task nodes are interrupted you will not be charged for the partial hour of usage, i.e. if you started the instance at 10:05 am and your instances are interrupted by spot price fluctuations at 10:45 am, you are not charged for that partial hour. If your processing exercise is totally time-insensitive, you can keep your bid price close to the spot price, so that your instances are easily interruptible by AWS, and exploit this partial-hour behaviour. Theoretically you can get much of the processing done by your task nodes for free* by exploiting this strategy.

Tip 3: Use the AZ wisely when it comes to spot
Different AZs inside an Amazon EC2 region have different spot prices for the same instance type. Observe this pattern for a while, build some intelligence around the price data collected, and rebuild your cluster in the AZ with the lowest price; a sketch of collecting this data follows below. Since the Master + Core + Task nodes need to run in the same AZ for better latency, it is advisable to architect your EMR clusters in such a way that they can be switched (i.e. recreated) to a different AZ according to spot prices. If you can build this flexibility into your architecture, you can save costs by leveraging inter-AZ price fluctuations. Refer to the images below for spot price variations in two AZs inside the same region over the same time period. Make your choice wisely from time to time.
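A hedged sketch of collecting that price data with the boto3 SDK (the instance type is illustrative; older families such as m1 may return no data today):

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pull the last 24 hours of spot prices and keep the latest quote per Availability Zone.
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.xlarge"],               # illustrative instance type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
)

latest_by_az = {}
for record in history["SpotPriceHistory"]:
    az = record["AvailabilityZone"]
    ts = record["Timestamp"]
    if az not in latest_by_az or ts > latest_by_az[az][0]:
        latest_by_az[az] = (ts, float(record["SpotPrice"]))

cheapest_az, (_, price) = min(latest_by_az.items(), key=lambda item: item[1][1])
print(f"Cheapest AZ right now: {cheapest_az} at {price:.4f} USD/hr")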

Tip 4: Keep your Job logic small and store intermediate outputs in S3
Break your complex processing logic down into small jobs, and design the jobs and tasks in your EMR cluster so that they run for a very short period of time (for example, a few minutes). Store all intermediate job outputs in Amazon S3. This approach is helpful in the EMR world and gives you the following benefits:

When your Core + Task nodes are interrupted frequently, you can still continue from the intermediate checkpoints, with the data read back from S3.
You have the flexibility to recreate the EMR cluster in a different AZ depending on spot price fluctuations.
You can decide the number of nodes needed for your EMR cluster (even every hour) depending on the data volume, density and velocity.

All of the above three points, when implemented, contribute to elasticity in your architecture and thereby help you save costs in the Amazon cloud. The recommendation is not suitable for all jobs; it has to be carefully mapped to the right use cases by the architects.

The AdWantageS

Every customer has a reason to move into the cloud. Be it cost, scalability or on-demand provisioning, there are plenty of reasons why one moves into the cloud. The recent whitepaper “The Total Cost of (Non) Ownership of Web Applications in the Cloud” by Jinesh Varia, Technical Evangelist at Amazon Web Services, provides a good comparison between hosting infrastructure in-house and in the cloud (AWS). There are plenty of pricing models currently available with AWS which can provide cost benefits ranging from 30% to 80% compared to hosting the servers in-house.

On-Demand Instances – this is where everyone starts. You simply add your credit card to your AWS account and start spinning up Instances. You provision them on demand and pay for how long you run them, and of course you have the option of stopping and starting them whenever needed. You are charged for every hour an Instance runs. For example, for a Large Instance (2 CPUs, 7.5 GB memory, Linux) you will pay $0.32/hr (US-East).

Reserved Instances – let's say you migrate/host your web application to AWS and run multiple web servers and DB servers there. After a couple of months, you may notice that some of your servers run 24 hours a day: you may spin up additional web servers during peak load, but you will always run at least two of them, plus a DB server. For such cases, where you know you will always be running the Instances, AWS provides an option for reducing your cost – Reserved Instances. This is purely a pricing model: you purchase the required Instance type (say Large/X-Large) in a region for an upfront amount, and you are then charged substantially less for the on-demand usage of that Instance. This gives a potential saving of around 30% over a period of one year compared to On-Demand Instances. The following illustrates the cost savings of purchasing an m1.large Reserved Instance against using it on demand through a year; a rough sketch of the arithmetic follows the comparison.

Cost comparison between On-Demand and Reserved Instance for m1.large Linux Instance in US-East
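A rough sketch of that break-even arithmetic in Python (the Reserved Instance upfront fee and discounted hourly rate below are illustrative placeholders, not actual AWS rates; only the $0.32/hr on-demand figure comes from the text):

HOURS_PER_YEAR = 8760

on_demand_hourly = 0.32          # USD/hr, m1.large Linux, US-East (from the text)
ri_upfront = 300.0               # illustrative 1-year upfront fee
ri_hourly = 0.18                 # illustrative discounted hourly rate

on_demand_year = on_demand_hourly * HOURS_PER_YEAR
reserved_year = ri_upfront + ri_hourly * HOURS_PER_YEAR

savings = 1 - reserved_year / on_demand_year
print(f"On-Demand: {on_demand_year:,.0f} USD/yr")
print(f"Reserved : {reserved_year:,.0f} USD/yr ({savings:.0%} cheaper)")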

Be careful that,

Reserved Instances are purchased against an Instance type. If you purchase an m1.large Reserved Instance, then at the end of the month when your bill is generated, only m1.large usage will be billed at the reduced hourly charge. So if, in a given month, you figure out that m1.large is not sufficient and move up to m1.xlarge, you will not be billed at the reduced hourly charge; in such a case you may end up paying more to AWS on a yearly basis. So examine your usage pattern, fix your Instance type and then purchase a Reserved Instance.
Reserved Instances are per region – if you buy one in the US-East region and later decide to move your Instances to US-West, the cost reduction will not apply to Instances running in the US-West region.
Of course, you have the benefits of,

Reduced cost – a minimum of 30-40% cost savings which increases if you purchase a 3-year term
Guaranteed Capacity – AWS will always provide you the number of Reserved Instances you have purchased (you will not get an error saying “Insufficient Capacity”)
Spot Instances – in this model, you bid against the market price of an Instance, and if yours is the highest bid you get the Instance at the current Spot price (never more than your bid). The Spot Market price is available through an API which can be queried regularly. You can write code that checks the Spot Market price and keeps placing bids specifying the maximum price you are willing to pay against the current Spot Market price; if your bid exceeds it, an Instance is provisioned for you (a sketch of this appears after the list below). The spot price is usually substantially lower than on-demand pricing. For example, at the time of writing this article, the spot price for an m1.large Linux Instance in US-East was $0.026/hr as against $0.32/hr on-demand pricing. This provides about a 90% cost reduction on an hourly basis. But the catch is,

The Spot Market price will keep changing as other users place their bids and as AWS's excess capacity varies
If your maximum bid price falls below the Spot Market price, then AWS may terminate your Instance. Abruptly
Hence you may lose your data or your code may terminate unfinished.
Jobs that you anticipate to complete in one hour may take a few more hours to complete
Hence Spot Instances are not suitable for all kinds of workloads. Certain workloads like log processing and encoding can exploit Spot Instances, but doing so requires writing careful algorithms and a deeper understanding. Here are some of the use cases for using Spot Instances.
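As a hedged sketch of the "query the price, then bid" step described above, using the boto3 SDK (the instance type, AMI id and the 20% bidding margin are all hypothetical):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Check the most recent spot price for the instance type, then bid slightly above it.
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],               # illustrative instance type
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=1,
)
current_price = float(history["SpotPriceHistory"][0]["SpotPrice"])
bid = round(current_price * 1.2, 4)           # illustrative bidding strategy

response = ec2.request_spot_instances(
    SpotPrice=str(bid),
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-12345678",            # hypothetical AMI id
        "InstanceType": "m5.large",
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])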

Now, with that basic understanding, let's examine the whitepaper a little more carefully, and not just from a cost point of view. Web applications can be classified into three types based on the nature of their traffic – Steady Traffic, Periodic Burst and Always Unpredictable. Here is a summary of the benefits of hosting each of them on AWS.

AWS benefits for different types of web applications

Steady Traffic

The website has steady traffic. You are running a couple of servers on-premise and consider moving them to AWS, or you are hosting a new web application on AWS. Here's the cost comparison from the whitepaper:

Source: AWS Whitepaper quoted above

You will most likely start by spinning up On-Demand Instances: a couple of them for web servers and a couple for your database (for HA)
Over the long run (3 years), if you only use On-Demand Instances, you may end up paying more than hosting it on-premise. Do NOT just run On-Demand Instances if your usage is steady
If you have been on AWS for a couple of months and are OK with the performance of your setup, you should definitely consider purchasing Reserved Instances. You will end up with a minimum of 40% savings against on-premise infrastructure and about 30% against running On-Demand Instances
You will still benefit from spinning up infrastructure on demand. Unlike on-premise, where you need to plan and purchase ahead, here you have the option of provisioning on demand, just in time
And in case you grow and your traffic increases, you have the option to add more capacity to your fleet and remove it later. You can change server sizes on demand and pay only for what you use. This goes a long way in terms of business continuity, user experience and more sales
You can always mix and match Reserved and On-Demand Instances and reduce your cost whenever required. Reserved Instances can be purchased anytime
Periodic Burst

In this scenario, you have some constant traffic to your website but periodically there are spikes in the traffic. For example, every quarter there can be a sales promotion, or during Thanksgiving or Christmas you will have more traffic to the website. During other months, traffic to the website will be low. Here's the cost comparison for such a scenario:

Source: AWS Whitepaper quoted above

You will spin up On-Demand Instances to start with: a couple of them for web servers and a couple for the database
During the burst period, you will need additional capacity to meet the burst in traffic. You need to spin up additional Instances for your web tier and application tier to meet the demand
Here is where you will enjoy the benefits of on-demand provisioning, something that is not possible in on-premise hosting. On-premise, you would purchase the required excess capacity well ahead and keep running it even when traffic is low. With on-demand provisioning, you only provision it during the burst period; once the promotion is over, you can terminate that extra capacity
For the capacity that you will always run as the baseline, you can purchase Reserved Instances and reduce the cost by up to 75%
Even if you do not purchase Reserved Instances, you can run On-Demand Instances and still save around 40% against on-premise infrastructure, because for the periodic burst requirement you provision capacity only during the burst period and turn it off later. This is not possible in an on-premise setup, where you would anyway have purchased this capacity ahead of time
Always Unpredictable

In this case, you have an application whose traffic you cannot predict at all. For example, a social application that is in an experimental stage and that you expect to go viral. If it goes viral and gains popularity, you will need to expand the infrastructure quickly; if it doesn't, you do not want to risk heavy cap-ex. Here's the cost comparison for such a scenario:

Source: AWS Whitepaper quoted above

You will spin up On-Demand Instances and scale them according to the traffic
You will use automation tools such as Auto Scaling to scale the infrastructure on demand and keep your infrastructure aligned with the traffic
Over a 3-year period, there will be some initial steady growth of the application. As the application goes viral you will need to add capacity, and beyond its lifetime of, say, 18 months to 2 years, traffic may start to fall
Through monitoring tools such as CloudWatch you can constantly tweak the infrastructure and arrive at a baseline. You will figure out that during the initial growth and "viral" period you need a certain baseline of servers; you can purchase Reserved Instances for them and mix them with On-Demand Instances when you scale out. You will enjoy a cost saving of around 70% against an on-premise setup
It is not advisable to plan for and run at full capacity, or to purchase Reserved Instances for the full capacity. If the application doesn't do as well as anticipated, you may end up paying AWS more than necessary
As you can see, whether you have a steady-state application or are trying out a new idea, AWS proves advantageous from different perspectives for different requirements. Cost, on-demand provisioning, scalability, flexibility and automation tools are things that even a startup can think of and get on board with quickly. One question you need to ask yourself is "Why am I moving into AWS?". Ask this question during the early stages and spend considerable time on the design and architecture of the infrastructure setup. Otherwise, you may end up asking yourself "Why did I come into AWS?".


UIGestureRecognizer for iOS

Introduction:

UIGestureRecognizer is an abstract base class for concrete gesture-recognizer classes.

If you need to detect gestures in your app, such as taps, pinches, pans, or rotations, it’s extremely easy with the built-in UIGestureRecognizer classes.

In the old days before UIGestureRecognizer, if you wanted to detect a gesture such as a swipe, you had to handle every touch in a UIView yourself – in methods such as touchesBegan, touchesMoved, and touchesEnded.

The concrete subclasses of UIGestureRecognizer are the following:

UITapGestureRecognizer
UIPinchGestureRecognizer
UIRotationGestureRecognizer
UISwipeGestureRecognizer
UIPanGestureRecognizer
UILongPressGestureRecognizer

A gesture recognizer has one or more target-action pairs associated with it. If there are multiple target-action pairs, they are discrete, and not cumulative. Recognition of a gesture results in the dispatch of an action message to a target for each of those pairs. The action methods invoked must conform to one of the following signatures:

- (void)handleGesture;

- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer;

Solution:

Step 1: Create a new view controller class with a UIView to attach the recognizer to

Step 2: In viewDidLoad, create one of the concrete subclasses of UIGestureRecognizer

e.g. UITapGestureRecognizer

- (void)viewDidLoad {
    [super viewDidLoad]; // always call super first

    UITapGestureRecognizer *recognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    recognizer.delegate = self; // requires the view controller to adopt UIGestureRecognizerDelegate
    [self.view addGestureRecognizer:recognizer];
}

Step 3: Implement the gesture handler

- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    NSLog(@"your implementation here");
}

Conclusion:

The UIGestureRecognizer subclasses provide a default implementation for detecting common gestures such as taps, pinches, rotations, swipes, pans, and long presses. Using them not only reduces the amount of code you have to write, it is also easier.