Cost Optimization Tips for Azure Cloud - Part III

In continuation of my previous blogs, I am going to jot down a few more tips on how to optimize cost while moving to the Azure public cloud.

1. UPGRADE INSTANCES TO THE LATEST GENERATION-

With Microsoft introducing the next generation of Azure deployment via Azure Resource Manager (ARM), you can get significant performance improvements just by upgrading VMs to the latest versions (from Azure V1 to Azure V2). In all cases the price is either the same or nearly the same.
For example, upgrading a DV1-series VM to the DV2-series gives you 35-40% faster processing at the same price point.

2. TERMINATE ZOMBIE ASSETS –

It is not enough to shut down VMs from within the instance to avoid being billed, because Azure continues to reserve the compute resources for the VM, including a reserved public IP. Unless you need VMs to be up and running all the time, shut down and deallocate them to save on cost. This can be achieved from the Azure Management portal or Windows PowerShell.
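For illustration, here is a minimal Python sketch of the deallocation step, assuming the azure-identity and azure-mgmt-compute SDKs and hypothetical resource names:

```python
# Minimal sketch: deallocate a VM so its compute reservation (and billing) stops.
# Subscription, resource group and VM names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_deallocate releases the compute reservation; a plain power-off
# (begin_power_off) leaves the resources reserved and still billed.
client.virtual_machines.begin_deallocate("my-rg", "my-vm").wait()
```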

3. DELETING A VM-

If you delete a VM, its VHDs are not deleted, so you can safely delete the VM without losing data. However, you will still be charged for storage. To delete the VHD, delete the file from Blob storage.
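As a sketch, assuming the orphaned VHD lives in a vhds container and you use the azure-storage-blob Python SDK (connection string and names are placeholders):

```python
# Sketch: delete the orphaned VHD file from Blob storage so storage charges stop.
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<storage-connection-string>",
    container_name="vhds",
    blob_name="myvm-osdisk.vhd",
)
blob.delete_blob()
```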

  •  When an end-user’s PC makes a DNS query, it doesn’t contact the Traffic Manager Name servers directly. Instead, these queries are sent via “recursive” DNS servers run by enterprises and ISPs. These servers cache the DNS responses, so that other users’ queries can be processed more quickly. Since these cached responses don’t reach the Traffic Manager Name servers, they don’t incur a charge.

The caching duration is determined by the "TTL" parameter in the original DNS response. This parameter is configurable in Traffic Manager; the default is 300 seconds, and the minimum is 30 seconds.

By using a larger TTL, you can increase the amount of caching done by recursive DNS servers and thereby reduce your DNS query charges. However, increased caching will also impact how quickly changes in endpoint status are picked up by end users, i.e. your end-user failover times in the event of an endpoint failure will become longer. For this reason, we don't recommend using very large TTL values.

Likewise, a shorter TTL gives more rapid failover times, but since caching is reduced, the query counts against the Traffic Manager name servers will be higher.

By allowing you to configure the TTL value, Traffic Manager enables you to make the best choice of TTL based on your application’s business needs.
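To make the trade-off concrete, here is a crude back-of-envelope estimator in Python. The model (each recursive resolver re-queries once per TTL window) and the resolver count are assumptions, so treat the output as illustrative only:

```python
# Crude model: each recursive resolver refreshes the record once per TTL window,
# so monthly billable queries ~ resolvers * seconds_per_month / ttl.
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_queries(ttl_seconds, resolvers):
    return resolvers * SECONDS_PER_MONTH / ttl_seconds

for ttl in (30, 300, 3600):  # minimum, default, and a deliberately large TTL
    print(f"TTL {ttl:>4}s -> ~{monthly_queries(ttl, resolvers=50_000):,.0f} queries/month")
```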

  • If you provide write access to a blob, a user may choose to upload a 200 GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2 TB in egress costs for you. Again, provide limited permissions to help mitigate the potential of malicious users, and use short-lived Shared Access Signatures (SAS) to reduce this threat (but be mindful of clock skew on the end time); see the sketch below.
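A sketch of issuing such a short-lived, read-only SAS with the azure-storage-blob Python SDK (account, key and blob names are placeholders):

```python
# Sketch: a short-lived, read-only SAS limits the damage of a leaked URL.
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

now = datetime.utcnow()
sas = generate_blob_sas(
    account_name="myaccount",
    container_name="uploads",
    blob_name="report.pdf",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),  # read only, no write/delete
    start=now - timedelta(minutes=5),          # back-date slightly for clock skew
    expiry=now + timedelta(minutes=15),        # short lifetime caps egress abuse
)
url = f"https://myaccount.blob.core.windows.net/uploads/report.pdf?{sas}"
```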
  • Azure App Service charges apply even to apps in a stopped state. Delete apps that are not in use, or move them to the Free tier, to avoid charges.
  • In Azure Search, the Stop button only stops traffic to your service instance. The service is still running and will continue to be charged the hourly rate.
  • Use Blob storage to store images, videos and text files instead of storing them in SQL Database. The cost of Blob storage is much lower than SQL Database: a 100 GB SQL Database costs $175 per month, while the same amount of Blob storage costs only $7 per month. To reduce cost and increase performance, put the large items in Blob storage and store the blob record key in SQL Database, as sketched below.
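A minimal sketch of that pattern in Python, with sqlite3 standing in for SQL Database (connection string and names are placeholders):

```python
# Sketch: the large file goes to Blob storage; only its key goes to the database.
import sqlite3
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="media", blob_name="video-42.mp4"
)
with open("video-42.mp4", "rb") as f:
    blob.upload_blob(f, overwrite=True)

db = sqlite3.connect("catalog.db")  # stand-in for SQL Database
db.execute("CREATE TABLE IF NOT EXISTS media (id INTEGER PRIMARY KEY, blob_key TEXT)")
db.execute("INSERT INTO media (blob_key) VALUES (?)", ("media/video-42.mp4",))
db.commit()
```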
  • Cycle out old records and tables in your database. This saves money, and knowing what you can and cannot delete is important if you hit your database's max size and need to quickly delete records to make space for new data.
  • If you intend to use a substantial amount of Azure resources for your application, you can choose a volume purchase plan. These plans allow you to save 20 to 30% of your data-center cost for larger applications.
  • Use a strategy for removing old backups such that you maintain history while reducing storage needs (a sketch follows below). If you maintain backups for the last hour, day, week, month and year, you have good backup coverage while not incurring more than 25% of your database costs for backup. If you have a 1 GB database, your cost would be $9.99 per month for the database and only $0.10 per month for the backup space.
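A minimal sketch of such a retention policy in Python, assuming backups are identified by their timestamps:

```python
# Sketch: keep the newest backup inside each retention window; everything else
# becomes a deletion candidate, which caps backup storage growth.
from datetime import timedelta

WINDOWS = (timedelta(hours=1), timedelta(days=1), timedelta(weeks=1),
           timedelta(days=30), timedelta(days=365))

def backups_to_keep(backup_times, now):
    keep = set()
    for window in WINDOWS:
        in_window = [t for t in backup_times if now - t <= window]
        if in_window:
            keep.add(max(in_window))  # newest backup within this window
    return keep

# delete_candidates = set(backup_times) - backups_to_keep(backup_times, now)
```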
  • An advantage of Azure DocumentDB stored procedures is that they enable applications to perform complex batches and sequences of operations directly inside the database engine, closer to the data, so the network traffic latency cost of batching and sequencing operations can be completely avoided. Another advantage of stored procedures is that they are implicitly pre-compiled to byte code upon registration, avoiding script compilation costs at each invocation.
  • The default cloud service size is 'small'. You can change it to 'extra small' in your cloud service under properties > settings. This will reduce your costs from $90 to $30 a month at the time of writing. The difference between 'extra small' and 'small' is that the virtual machine memory is 780 MB instead of 1780 MB.
  • Windows Azure Diagnostics may inflate your storage transaction bill if you do not control it properly.

We need to define which kinds of logs (IIS logs, crash dumps, FREB logs, arbitrary log files, performance counters, event logs, etc.) are collected and sent to Windows Azure Storage, either on a schedule or on demand.

However, if you do not carefully define what you really need from the diagnostics info, you might end up paying an unexpected bill.

Assuming the following figures:

  • You have an application that requires high processing power, running on 100 instances
  • You apply 5 performance counter logs (Processor\% Processor Time, Memory\Available Bytes, PhysicalDisk\% Disk Time, Network Interface\Bytes Total/sec, Processor\Interrupts/sec)
  • You perform a scheduled transfer every 5 seconds
  • Each instance runs 24 hours per day, 30 days per month

How much does this cost in storage transactions per month?

5 counters X 12 transfers per min X 60 min X 24 hours X 30 days X 100 instances = 259,200,000 transactions

$0.01 per 10,000 transactions X 259,200,000 transactions = $259.20 per month

To bring it down, ask yourself: do you really need to monitor all 5 performance counters every 5 seconds? What if you reduce them to 3 counters and monitor every 20 seconds?

3 counters X 3 transfers per min X 60 min X 24 hours X 30 days X 100 instances = 38,880,000 transactions

$0.01 per 10,000 transactions X 38,880,000 transactions = $38.88 per month

You can see how much you save with these numbers. Windows Azure Diagnostics is genuinely needed, but using it improperly may leave you paying unnecessary money.
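The arithmetic above is easy to sanity-check with a few lines of Python:

```python
# Reproduce the diagnostics storage-transaction math from the two scenarios.
def monthly_transactions(counters, transfers_per_minute, instances, days=30):
    return counters * transfers_per_minute * 60 * 24 * days * instances

def monthly_cost(transactions, price_per_10k=0.01):
    return transactions / 10_000 * price_per_10k

heavy = monthly_transactions(5, 12, 100)  # 5 counters, every 5s = 12/min
light = monthly_transactions(3, 3, 100)   # 3 counters, every 20s = 3/min
print(heavy, f"${monthly_cost(heavy):,.2f}")  # 259200000 $259.20
print(light, f"${monthly_cost(light):,.2f}")  # 38880000 $38.88
```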

  • An application organizes blobs into a different container for each user and allows users to check the size of each container. For that, a function is created that loops through all the files inside a container and returns the total size. This functionality is exposed on a UI screen, and an admin typically calls it a few times a day.

Assuming the following figures for illustration:

  • I have 1,000 users.
  • Each container holds 10,000 files on average.
  • The admin calls this function 5 times a day on average.

How much does this cost in storage transactions per month?

Remember: a single Get Blob request is considered 1 transaction!

1,000 users X 10,000 files X 5 queries X 30 days = 1,500,000,000 transactions

$0.01 per 10,000 transactions X 1,500,000,000 transactions = $1,500 per month

Well, that's not cheap at all, so let's bring it down.

Do not expose this functionality as a real-time query to the admin. Consider automatically running this function once a day and saving the result somewhere, then let the admin view the daily result (day by day). Limiting the admin to viewing one daily snapshot, the monthly cost looks like this:

1,000 users X 10,000 files X 1 query X 30 days = 300,000,000 transactions

$0.01 per 10,000 transactions X 300,000,000 transactions = $300 per month
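A sketch of the once-a-day aggregation with the azure-storage-blob Python SDK (container naming and the JSON results store are assumptions):

```python
# Sketch: compute each user's container size once a day and cache the result,
# so the admin UI reads the snapshot instead of re-listing every blob live.
import json
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

def container_size_bytes(container_name):
    container = service.get_container_client(container_name)
    return sum(blob.size for blob in container.list_blobs())

def nightly_job(user_containers):
    # Run once a day from a scheduled worker; the admin UI reads this file.
    sizes = {name: container_size_bytes(name) for name in user_containers}
    with open("container_sizes.json", "w") as f:
        json.dump(sizes, f)
```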

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here

Cost Optimization Tips for Azure Cloud - Part II

Cloud computing comes with myriad benefits through its various as-a-service models, and hence most businesses consider it wise to move their IT infrastructure to the cloud. However, many IT admins worry that hidden costs will drive up their department's total cost of ownership.

We believe that it is more about estimating your requirements correctly and managing resources in the right way.

Microsoft Azure Pricing

Microsoft Azure allows you to quickly deploy infrastructure and services to meet all of your business needs. You can run Windows and Linux based applications in 22 Azure data-center regions, delivered with enterprise-grade SLAs. Azure services come with:

  • No upfront costs
  • No termination fees
  • Pay only for what you use
  • Per-minute billing

You can calculate your expected monthly bill using Pricing Calculator and track your actual account usage and bill at any time using the billing portal.

1. Azure allows you to set a monthly spending limit on your account. So, if you forget to turn off your VMs, your Azure account will get disabled before you run over your predefined monthly spending limit. You can also set email billing alerts if your spend goes above a preconfigured amount.

2. It is not enough to shut down VMs from within the instance to avoid being billed, because Azure continues to reserve the compute resources for the VM, including a reserved public IP. Unless you need VMs to be up and running all the time, shut down and deallocate them to save on cost. This can be achieved from the Azure Management portal or Windows PowerShell.

3. Delete unused VPN gateways and application gateways, as they are charged whether they run inside a virtual network or connect to other virtual networks in Azure. Your account is charged based on the time the gateway is provisioned and available.

4. To avoid reserved IP address charges, keep at least one VM running with the reserved IP; the first five reserved public IPs in use are included. If you shut down all the VMs in a service, Microsoft is likely to reassign that IP to some other customer's cloud service, which can hamper your business.

5. Minimize the number of compute hours by using auto scaling. Auto scaling can minimize the cost by reducing the total compute hours so that the number of nodes on Azure scales up or down based on demand.

6. When an end-user’s PC makes a DNS query, recursive DNS servers run by enterprises and ISPs cache the DNS responses. These cached responses don’t incur charge as they don’t reach the Traffic Manager Name servers. The caching duration is determined by the “TTL” parameter in the original DNS response. With larger TTL value, you can reduce DNS query charges but it would result in longer end-user failover times. On the other hand, shorter TTL value will reduce caching resulting in more query counts against Traffic Manager Name server. Hence, configure TTL in Traffic Manager based on your business needs.

7. Blob storage offers a cost-effective solution for storing graphics data. For 2 GB, storage of type Table and Queue costs $0.14/month, while block blob storage costs just $0.05/month.


A SQL Database of similar capacity will cost $4.98/month. Hence, use Blob storage to store images, videos and text files instead of storing them in SQL Database.


To reduce the cost and increase the performance, put the large items in the blob storage and store the blob record key in SQL database.

The above tips will definitely help you cut costs on Azure and leverage the power of cloud computing to the fullest!

 

Cost Optimization Tips for Azure Cloud - Part I

In general there are quite a few driving forces behind the rapid adoption of cloud platforms of late, but doing it within the industry cost budget is the actual challenge. The key benefit of public cloud providers like Azure is the pay-as-you-go pricing model, which frees customers from capital investment; however, there are chances that expenses in the cloud start to add up and can soon get out of control if you are not practicing effective cost management. It takes attention and care to take control over your cloud costs and decide on a better cost management strategy.

In these articles I will try to outline a few of Azure's cost-saving and optimization considerations. It is going to be a three-part series; this first part can be subtitled "7 considerations for highly effective Azure architecture" because it covers things from an architect's point of view.

1. Design for Elasticity

Elasticity has been one of the fundamental properties of Azure that drives many of its economic benefits. By designing your architecture for elasticity you avoid over-provisioning of resources; restrict yourself to using only what is needed. There is an umbrella of services in Azure which helps customers get rid of under-utilized resources (always make use of services like VM scale sets and autoscaling).

2. Leverage Azure Application Services (Notification, Queue, Service Bus etc.)
Application services in Azure don't only help with performance optimization; they can greatly affect the cost of the overall infrastructure. Decide judiciously which services your workload needs and provision them in an optimal way. Make use of existing services; don't try to reinvent the wheel.
When you install software to satisfy a requirement, you get the benefit of customized features, but the trade-off is immense: you have to run an instance for it, which in turn restricts the availability of that software by tying it to a particular VM. Whereas if you choose the corresponding Azure services, you enjoy built-in availability, scalability and high performance, with the option to pay as you go.

3. Always Use Resource Group
Keep related resources in close proximity; that way you save money on communication among the services, and in addition the application gets a performance boost, as latency is no longer a factor. In later articles I will talk specifically about the other benefits this particular service can offer.

4. Off Load From Your Architecture
Try to offload as much as possible by distributing things to their best-suited services; it doesn't just reduce the maintenance headache but helps optimize cost too. Move session-related data out of the server, and optimize the infrastructure for performance and cost by caching and edge-caching static content.

Combine multiple JS and CSS files into one, minify them, and perform compression; once bundled into compressed form, move them to Azure Blob storage. When your static content is popular, front it with Azure Content Delivery Network; Blob + Azure CDN reduces cost as well as latency (depending on the cache-hit ratio). For anything related to media streaming, make use of Azure CDN, as it frees you from running Adobe FMS.

5. Caching And Compression For CDN Content
After analyzing multiple customer subscriptions, we can see a pattern of modest to huge CDN spends. As a common practice, customers forget to enable caching for CDN resources at origin servers like Azure Blob. You should enable compression for content like CSS, JavaScript, text files, JSON, HTML etc. to ensure cost savings on bandwidth. Customers also frequently deploy production changes and forget to re-enable caching and compression for static resources and dynamic content like text/HTML/JSON. We recommend a post-deploy job as part of your release automation to ensure client-side caching, server-side compression etc. are enabled for your application and resources.
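On the static-content side, here is a sketch of uploading a pre-compressed bundle to Blob storage with caching and compression headers set (azure-storage-blob SDK; names and the max-age are illustrative):

```python
# Sketch: upload a gzipped JS bundle with Content-Encoding and Cache-Control set,
# so the CDN and browsers can cache it and serve it compressed.
import gzip
from azure.storage.blob import BlobClient, ContentSettings

with open("bundle.min.js", "rb") as f:
    data = gzip.compress(f.read())

blob = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="static", blob_name="bundle.min.js"
)
blob.upload_blob(
    data,
    overwrite=True,
    content_settings=ContentSettings(
        content_type="application/javascript",
        content_encoding="gzip",
        cache_control="public, max-age=86400",  # let CDN/browsers cache for a day
    ),
)
```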

6. Continuous Optimization In Your Architecture
If you have been using Azure for the past few years, there is a high possibility you are running outdated services. Though once designed you should not tinker too much with the architecture, it is good to take a look and see whether anything can be replaced with a new-generation service. It might be a better fit for the workload and offer the same results at lower expense. Always match resources to the workload.
This doesn't only give you instant benefits; it offers recurring savings in next month's bill.

7. Optimize The Provisioning Based On Consumption Trend

You need to be aware of what you are using. There is no need to waste money on expensive instances or services if you don't need them. Automatically turn off what you don't need; services like Azure Automation can help you achieve that. Make use of Azure services like autoscaling, VM scale sets and Azure Automation for uninterrupted service even when traffic increases beyond expectations. A special mention goes to Azure DevTest, a service specially designed for development and testing scenarios: with this service Azure lets end users model their infrastructure so that they are charged only for office hours (usually 8x5), and these settings are customizable, which makes it even more flexible. While dealing with Azure Storage, make use of appropriate storage classes with the required redundancy options. Services like File storage, page blobs and block blobs each have their specific purpose, so be clear about them while designing your architecture.

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here

25 Best Practice Tips for architecting your Amazon VPC

In my view, Amazon VPC is one of the most important features introduced by AWS. We have been using AWS since 2008 and Amazon VPC from the day it was introduced, and I strongly feel that customer adoption of the AWS cloud gained real momentum only after the introduction of VPC into the market.
Amazon VPC comes with lots of advantages over the limitations of the Amazon Classic cloud: static private IP addresses; Elastic Network Interfaces (it is possible to bind multiple Elastic Network Interfaces to a single instance); internal Elastic Load Balancers; advanced network access control; the ability to set up a secure bastion host; DHCP options; predictable internal IP ranges; moving NICs and internal IPs between instances; VPN connectivity; heightened security; and more. Each of these is an interesting topic on its own, and I will be discussing them in detail in the future.
Today I am sharing some of our implementation experience from working with hundreds of Amazon VPC deployments, as best-practice tips for the AWS user community. You can apply the relevant ones in your existing VPC, or use these points as part of your migration approach to Amazon VPC.

Practice 1) Get your Amazon VPC combination right: Select the right Amazon VPC architecture first. You need to decide the right Amazon VPC and VPN setup combination based on your current and future requirements. It is tough to modify/re-design an Amazon VPC at a later stage, so it is better to design it taking into consideration your network and expansion needs for the next ~2 years. Different types of Amazon VPC setups are currently available, like a public-facing VPC; a VPC with public and private subnets; a VPC with public and private subnets and hardware VPN access; a VPC with private subnets and hardware VPN access; software-based VPN access; etc. Choose the one which fits where you expect to be in the next 1-2 years.

Practice 2) Choose your CIDR blocks: While designing your Amazon VPC, the CIDR block should be chosen in consideration of the number of IP addresses needed and whether you are going to establish connectivity with your data center. The allowed block size is between a /28 netmask and a /16 netmask, so an Amazon VPC can contain from 16 to 65,536 IP addresses. Currently an Amazon VPC cannot be modified once created, so it is usually best to choose a CIDR block with more IP addresses. Also, when you design the Amazon VPC architecture to communicate with on-premise/data-center networks, ensure the CIDR range used in the Amazon VPC does not overlap or conflict with the CIDR blocks in your on-premise/data center. Note: if you use the same CIDR blocks while configuring the customer gateway, they may conflict.
E.g., if your VPC CIDR block is 10.0.0.0/16 and you have a 10.0.25.0/24 subnet in a data center, communication from instances in the VPC to the data center will not happen, since the subnet is part of the VPC CIDR. To avoid these consequences it is good to have the IP ranges in different classes. For example, the Amazon VPC is in the 10.0.0.0/16 range and the data center is in the 172.16.0.0/24 range.
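As an illustration with boto3, here is a sketch that creates a VPC and subnets whose CIDRs deliberately avoid the on-premises range discussed above (region and ranges are examples):

```python
# Sketch: a 10.0.0.0/16 VPC, which does not overlap an on-premises 172.16.0.0/24.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.10.0/24",
                  AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.11.0/24",
                  AvailabilityZone="us-east-1b")
```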

Practice 3) Isolate according to your use case: Create separate Amazon VPCs for development, staging and production environments, or create one Amazon VPC with separate subnets/security/isolated network groups for production, staging and development. We have observed 60% of customers preferring the second choice. Choose the right one according to your use case.

Practice 4) Securing Amazon VPC: If you are running a mission-critical workload with demanding, complex security needs, you can secure the Amazon VPC like your on-premise data center, or sometimes more. Some tips to secure your VPC are:

  • Secure your Amazon VPC using a firewall virtual appliance or web application firewall available from the Amazon Web Services Marketplace. You can use Check Point, Sophos, etc. for this.
  • You can configure Intrusion Prevention or Intrusion Detection virtual appliances and secure the protocols and take preventive/corrective actions in your VPC
  • Configure VM encryption tools which encrypt your root and additional EBS volumes. The key can be stored inside AWS or in your data center outside Amazon Web Services, depending on your compliance needs. http://harish11g.blogspot.in/2013/04/understanding-Amazon-Elastic-Block-Store-Securing-EBS-TrendMicro-SecureCloud.html
  • Configure Privileged Identity access management solutions on your Amazon VPC to monitor and audit the access of Administrators of your VPC.
  • Enable CloudTrail to audit ACL policies in your VPC environments: http://harish11g.blogspot.in/2014/01/Integrating-AWS-CloudTrail-with-Splunk-for-managed-services-monitoring-audit-compliance.html
  • Apply antivirus for cleansing specific EC2 instances inside the VPC. Trend Micro has a very good product for this.
  • Configure Site to Site VPN for securely transferring information between Amazon VPC in different regions or between Amazon VPC to your On premise Data center
  • Follow the Security Groups and NW ACL’s best practices listed below

Practice 5) Understand Amazon VPC limits: Always design VPC subnets with future expansion in mind, and understand Amazon VPC's limits before using it. AWS has various limits on VPC components like rules per security group, number of route tables, subnets, etc. Some of them may be increased by raising a request with the Amazon support team, while a few components cannot be increased. Ensure these limits do not affect your overall design. Refer to:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html

Practice 6) IAM your Amazon VPC: When you are going to assign people to maintain your Amazon VPC, you can create Amazon IAM accounts with fine-grained permissions, or use sophisticated privileged identity management solutions available in the AWS Marketplace, to IAM your VPC.

Practice 7) Disaster recovery or geo-distributed Amazon VPC setup: When you are designing a disaster recovery setup using VPC, or expanding to another Amazon VPC region, you can follow these simple rules. Create your production site VPC with CIDR 10.0.0.0/16 and your DR region VPC with CIDR 172.16.0.0/16. Make sure they do not conflict with the on-premises subnet CIDR block in the event both need to be integrated with the on-premise DC as well. After creating the CIDR blocks, set up a VPC tunnel between the regions and to your on-premise DC. This will help you replicate your data using private IPs.

Practice 8) Use security groups and network ACLs wisely: It is advisable to prefer security groups over network ACLs inside an Amazon VPC wherever applicable, for better control. Security groups apply at the EC2 instance level, while network ACLs apply at the subnet level. Security groups are mostly used for whitelisting; to blacklist IPs, one can use network ACLs.

Practice 9) Tier your security groups: Create different security groups for different tiers of your infrastructure architecture inside your VPC. If you have web, app and DB tiers, create a different security group for each of them. Creating tier-wise security groups increases the infrastructure security inside the Amazon VPC: EC2 instances in each tier can talk only on application-specified ports, not on all ports. If you create Amazon VPC security groups for each tier/service separately, it is easier to open a port to a particular service. Don't use the same security group for multiple tiers of instances; this is a bad practice.
Example: open ports to security groups instead of IP ranges. People have a tendency to open port 8080 to the 10.10.0.0/24 (web layer) range. Instead of that, open port 8080 to web-security-group. This makes sure only web security group instances can connect on port 8080; if someone launches a NAT instance with NAT-Security-Group in 10.10.0.0/24, it won't be able to connect on port 8080, as access is allowed only from the web security group (see the sketch below).
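A boto3 sketch of that group-to-group rule (names, VPC id and port are illustrative):

```python
# Sketch: the app tier accepts port 8080 only from the web tier's security
# group, not from an IP range, so other instances in the subnet get nothing.
import boto3

ec2 = boto3.client("ec2")
vpc_id = "<vpc-id>"  # placeholder

web_sg = ec2.create_security_group(
    GroupName="Prod_Web_SG", Description="Web tier", VpcId=vpc_id)["GroupId"]
app_sg = ec2.create_security_group(
    GroupName="Prod_App_SG", Description="App tier", VpcId=vpc_id)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": web_sg}],  # source = web SG, not a CIDR
    }],
)
```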
Practice 10) Standardize your security group naming conventions: Following a security group naming convention inside Amazon VPC improves operations and management for large-scale deployments. It also avoids manual errors and leaks, and saves cost and time overall.
For example: Simple ones like Prod_DMZ_Web_SG or Dev_MGMT_Utility_SG (or) complex coded ones for large scale deployments like
USVA5LXWEBP001- US East Virginia AZ 5 Linux Web Server Production 001
This helps in better management of security groups.
Practice 11) ELB on Amazon VPC: When using Amazon ELB for web applications, put all other EC2 instances (tiers like app, cache, DB, BG, etc.) in private subnets as much as possible. Unless there is a specific requirement where instances need outside-world access and an EIP attached, put all instances in private subnets only. Only ELBs should be provisioned in the public subnet, as a secure practice in an Amazon VPC environment.
Practice 12) Control your outgoing traffic in Amazon VPC: If you are looking for better security, route traffic bound for the internet gateway through software like Squid or Sophos to restrict ports, URLs, domains, etc., so that all traffic goes through a controlled proxy tier and also gets logged. Using these proxy/security systems you can also restrict unwanted ports; by doing so, if there is any security compromise of the application running inside the Amazon VPC, it can be detected by auditing the restricted connections captured in the logs. This helps as a corrective security measure.
Practice 13) Plan your NAT instance type: Whenever your application EC2 instances residing inside the private subnets of your Amazon VPC make web service/HTTP/S3/SQS calls, they go through a NAT instance. If you have designed auto scaling for your application tier and there is a chance that tens of app EC2 instances will make lots of web calls concurrently, the NAT instance can become a performance bottleneck, so size its capacity according to application needs. Using NAT instances also saves the cost of Elastic IPs and provides extra security by not exposing the instances to the outside world for internet access.
Practice 14) Spread your NAT instances across multiple subnets: What if you have hundreds of EC2 instances inside your Amazon VPC making lots of heavy web service/HTTP calls concurrently? A single NAT instance, even of the largest EC2 size, sometimes cannot handle that bandwidth and may become a performance bottleneck. In such scenarios, span your EC2 instances across multiple subnets and create a NAT for each subnet. This way you spread your outgoing bandwidth and improve performance in your VPC-based deployments.
Practice 15) Use EIPs when needed: At times you may need to keep part of your application services in a public subnet for external communication. It is recommended practice to associate them with Amazon Elastic IPs and whitelist these IP addresses in the target services they use.
Practice 16) NAT instance practices: If needed, enable multi-factor authentication on the NAT instance. Open SSH and RDP ports only to specific source and destination IPs, not the global network (0.0.0.0/0), and only to static exit IPs, not dynamic exit IPs.
Practice 17) Plan your tunnel between your on-premise DC and Amazon VPC:
Select the right mechanism to connect your on-premises DC to Amazon VPC. This will help you connect your EC2 instances via private IPs in a secure manner.
  • Option 1: Secure IPSec tunnel to connect a corporate network with Amazon VPC (http://aws.amazon.com/articles/8800869755706543)
  • Option 2 : Secure communication between sites using the AWS VPN CloudHub (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html)
  • Option 3: Use Direct connect between Amazon VPC and on premise when you have lots of data to be transferred with reduced latency (or) you have spread your mission critical workloads across cloud and on premise. Example: Oracle RAC in your DC and Web/App tier in your Amazon VPC. Contact us if you need help on setting up direct connect between Amazon VPC and DC.
Practice 18) Always span your Amazon VPC across multiple subnets in multiple Availability Zones inside a region. This helps in architecting high availability inside your Amazon VPC properly. Example classification of VPC subnets: web tier subnet: 10.0.10.0/24 in AZ1 and 10.0.11.0/24 in AZ2; application tier subnet: 10.0.12.0/24 and 10.0.13.0/24; DB tier subnet: 10.0.14.0/24 and 10.0.15.0/24; cache tier subnet: 10.0.16.0/24 and 10.0.17.0/24; etc.
Practice 19) A good security practice is to have only the public subnet's route table carry a route to the internet gateway. Apply this wherever applicable.
Practice 20) Keep your Data closer : For small scale deployments in AWS where cost is critical than high availability, It is better to keep the Web/App in same availability zone as of ElastiCache , RDS etc inside your Amazon VPC. Design your subnets accordingly to suit this. This is not a recommended architecture for applications demanding High Availability.
Practice 21) Allow and deny network ACLs: Create internet-outbound allow and deny network ACLs in your VPC.
First network ACL: allow all HTTP and HTTPS outbound traffic on the public, internet-facing subnet.
Second network ACL: deny all HTTP/HTTPS traffic; allow all traffic to the Squid proxy server or other virtual appliance.
Practice 22) Restricting network ACLs: Block all inbound and outbound ports, and only allow the ports your application requires. Network ACLs are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within the VPC (a sketch follows below). AWS-recommended outbound rules: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_NACLs.html
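A boto3 sketch of one such restrictive entry (ACL id, rule number and port are examples):

```python
# Sketch: stateless NACL entry allowing only HTTPS egress from the subnet;
# anything not explicitly allowed is denied by the ACL's default deny rule.
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="<nacl-id>",  # placeholder
    RuleNumber=100,
    Protocol="6",              # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
```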
Practice 23) Create route tables only when needed and use the Associations option to map subnets to the route table in your Amazon VPC
Practice 24) Use Amazon VPC peering (new): Amazon Web Services has introduced the VPC peering feature, which is quite useful. An AWS VPC peering connection is a networking connection between two Amazon VPCs that enables you to route traffic between them using private IP addresses. Currently both VPCs must be in the same AWS region; instances in either VPC can then communicate with each other as if they were within the same network. Since AWS uses the existing infrastructure of a VPC to create a VPC peering connection, it is neither a gateway nor a VPN connection, and it does not rely on a separate piece of physical hardware (which essentially means there is no single point of failure for communication or a bandwidth bottleneck).

We have seen it to be useful in the following scenarios (a boto3 sketch follows the list):
  1. Large enterprises usually run multiple Amazon VPCs in a single region, and some of their applications are so interconnected that they may need to access each other privately and securely inside AWS. Examples: Active Directory, Exchange and common business services are usually interconnected.
  2. Large enterprises have different AWS accounts for different business units/teams/departments; at times, systems deployed by one business unit in a different AWS account need to be shared, or need to consume a shared resource privately. Example: CRM, HRMS, file sharing, etc. can be internal and shared. In such scenarios VPC peering comes in very useful.
  3. Customers can peer their VPC with their core suppliers to have more tightly integrated access to their systems.
  4. Companies offering infrastructure/application managed services on AWS can now safely peer into customer Amazon VPCs and provide monitoring and management of AWS resources.
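A boto3 sketch of establishing a peering connection (VPC ids, CIDR and route table are placeholders):

```python
# Sketch: request and accept a peering connection, then route to the peer CIDR.
import boto3

ec2 = boto3.client("ec2")

pcx = ec2.create_vpc_peering_connection(
    VpcId="<vpc-a-id>", PeerVpcId="<vpc-b-id>")
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC still needs a route to the other's CIDR via the peering connection.
ec2.create_route(RouteTableId="<vpc-a-route-table-id>",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```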

Practice 25) Use Amazon VPC: It is highly recommended that you migrate all your new workloads inside Amazon VPC rather than the Amazon Classic cloud. I also strongly recommend migrating your existing workloads from the Amazon Classic cloud to Amazon VPC, in phases or in one shot, whichever is feasible. In addition to the benefits of VPC detailed at the start of this article, AWS has started introducing lots of features which are compatible only inside a VPC, and in the AWS Marketplace as well there are lots of products which are compatible only with Amazon VPC. So make sure you leverage this strength of VPC. If you require any help with this migration, please contact me.

Readers, feel free to suggest more; I will link relevant ones in this article.