Solutions in Azure : Azure CDN for Maximum Bandwidth & Reduced Latency – Part II

The first part of this article concluded that a CDN's job is to enhance regular hosting by reducing bandwidth consumption, minimizing latency, and providing the scalability needed to handle abnormal traffic loads. It cuts down on round-trip time (RTT), effectively giving end users a similar response time irrespective of their geographical location.

A recent MicroMarketMonitor report states that the North American content delivery network market alone is expected to grow from $1.95 billion in 2013 to $7.83 billion in 2019. One significant factor driving this growth is end-user interaction with online content, so going forward a CDN is going to be a major consideration when architecting any application.

Azure CDN Highlights

  1. Improve rendering speed & Handle high traffic loads: Azure CDN servers manage the content by making use of its large network of POPs. This dramatically increases the speed and availability, resulting in significant user experience improvements.
  2. Designed for today’s web: Azure CDN is specifically designed for the dynamic, media-centric web of today, and caters to the requirements of users who expect everything to be fast, high quality, and always on.
  3. Streaming aware: Azure CDN can help in all three common ways of serving video over HTTP: progressive download and play, HTTP pseudo-streaming, and live streaming.
  4. Dynamic content acceleration: Under the hood, Azure CDN also uses a series of techniques to serve uncacheable content faster. For example, it can route all communication from a client in India to a server in the US through an edge in India and an edge in the US. It then maintains a persistent connection between those two edges and applies WAN-optimization techniques to accelerate traffic over it.
  5. Block spammers, scrapers and other bad bots: Azure Content Delivery Network is built on a highly scalable, reverse-proxy architecture with sophisticated DDoS identification and mitigation technologies to protect your website from DDoS attacks.
  6. When the expectations are at peak, Azure CDN delivers: Thanks to its distributed global scale, Azure Content Delivery Network handles sudden traffic spikes and heavy loads, like the start of a major product launch or global sporting event.

Working With Azure Storage

Once the CDN is enabled on an Azure storage account, any blobs that are in public containers and available for anonymous access will be cached via the CDN; only publicly available blobs can be cached. To make a blob publicly available for anonymous access, you must mark its container as public. When you do so, all blobs within that container become available for anonymous read access, and you have the option of making the container data public as well, or restricting access to only the blobs within it.

For best performance, use CDN edge caching for delivering blobs less than 10 GB in size. When you enable CDN access for a storage account, the Management Portal provides you with a CDN domain name in the following format:

This domain name can be used to access blobs in a public container. For example, given a public container named music in a storage account named myaccount, users can access the blobs in that container using either of the following two URLs:
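As a sketch, the two access paths for the music example can be written as plain URL construction. The account and container names come from the example above; the CDN endpoint hostname used here is a hypothetical placeholder for the name the portal assigns you.

```python
# Sketch: the same public blob reachable via its origin URL or a CDN URL.
# "myaccount" and "music" come from the example above; the endpoint
# hostname "myendpoint.azureedge.net" is a hypothetical placeholder.

def storage_url(account: str, container: str, blob: str) -> str:
    """Direct (origin) URL for a blob in a public container."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

def cdn_url(endpoint: str, container: str, blob: str) -> str:
    """The same blob served through the CDN endpoint instead of the origin."""
    return f"https://{endpoint}/{container}/{blob}"

origin = storage_url("myaccount", "music", "track01.mp3")
cached = cdn_url("myendpoint.azureedge.net", "music", "track01.mp3")
print(origin)
print(cached)
```

Both URLs return the same bytes; only the path the request takes (and who serves it) differs.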

Working With Azure Websites

You can enable CDN from your websites to cache your web content, such as images, scripts, and stylesheets. See Integrate an Azure Website with Azure CDN. When you enable CDN access for a website, the Management Portal provides you with a CDN domain name in the following format:

This domain name can be used to retrieve objects from a website. For example, given a public container named cdn and an image file called music.png, users can access the object using either of the following two URLs:

  • Azure Website URL:
  • Azure CDN URL:

Working With Azure Cloud Services

You can cache objects to the CDN that are provided by an Azure cloud service. Caching for cloud services has the following constraints:

  • The CDN should be used to cache static content only.
  • Your cloud service must be deployed in a production deployment.
  • Your cloud service must provide the object on port 80 using HTTP.
  • The cloud service must place the content to be cached in, or delivered from, the /cdn folder on the cloud service.

When you enable CDN access for a cloud service, the Management Portal provides you with a CDN domain name in the following format:

This domain name can be used to retrieve objects from a cloud service. For example, given a cloud service named myHostedService and an ASP.NET web page called music.aspx that delivers content, users can access the object using either of the following two URLs:

  • Azure cloud service URL:
  • Azure CDN URL:

Accessing Cached Content over HTTPS

Azure allows you to retrieve content from the CDN using HTTPS calls. This allows you to incorporate content cached in the CDN into secure web pages without receiving warnings about mixed security content types.

To serve your CDN assets over HTTPS, there are a couple of constraints worth mentioning:

  • You must use the certificate provided by the CDN. Third party certificates are not supported.
  • You must use the CDN domain to access content. HTTPS support is not available for custom domain names (CNAMEs) since the CDN does not support custom certificates at this time.

Even when HTTPS is enabled, content from the CDN can be retrieved using both HTTP and HTTPS.

Note: If you’ve created a CDN endpoint for an Azure Cloud Service (e.g. http://[XYZ]), it’s important that you create a self-signed certificate for your Azure domain ([XYZ]). If you’re using Azure Virtual Machines, this can be done through IIS.

Custom Domain to Content Delivery Network (CDN) endpoint

If you want to access the cached content with a custom domain, Azure lets you map your domain to a particular CDN endpoint. With that in place, you can use your own domain name in URLs to retrieve the cached content.

For detailed implementation information, see Map CDN to Custom Domain.

CDNs are an essential part of the current generation's Internet, and their importance will only increase. Even now, companies are trying hard to figure out ways to move more functionality to edge servers (POP locations) in order to provide users with the fastest possible experience. Azure CDN plays a vital role here, as it satisfies current-generation CDN requirements. When implementing Azure CDN (or any CDN, for that matter), the important thing is to formulate a strategy for the maximum lifespan of an object beforehand.

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8K Miles Software Services, and originally published here.

There is a Microsoft Azure event happening on 16th September 2017 and it is great opportunity for all Azure enthusiasts to meet, greet and share ideas on latest innovation in this field. Click here for more details.



Solutions in Azure : Azure CDN for Maximum Bandwidth & Reduced Latency – Part I

Under the current ecosystem of Microsoft Cloud, Azure CDN is widely recognized as CDaaS (Content Delivery as a Service); with its growing network of POP locations, it can be used for offloading content to a globally distributed network of servers. Its prime function is to cache static content at strategically placed locations, helping distribute content with low latency and higher data-transfer rates to ensure faster throughput for your end users.
Azure CDN offers developers a global solution for delivering high-bandwidth content by caching content at physical nodes across the world. Requests for this content then travel a shorter distance, reducing the number of hops in between. With a CDN in place, you can be sure that static files (images, JS, CSS, videos, etc.) and website assets are sent from the servers closest to your website visitors. For content-heavy websites such as e-commerce, this latency saving can be a significant performance factor.

In essence, Azure CDN puts your content in many places at once, providing superior coverage to your users. For example, when someone in London accesses your US-hosted website, the request is served through an Azure UK POP. This is much quicker than having the visitor's requests, and your responses, travel the full width of the Atlantic and back.

Two providers, Verizon and Akamai, supply the edge locations for Azure CDN, and each builds its CDN infrastructure differently. Verizon has been quite happy to disclose its locations; by contrast, the POP locations for Azure CDN from Akamai are not individually disclosed. For the updated list of locations, keep checking Azure CDN POP Locations.

How Does Azure CDN Work?

Today, over half of all internet traffic is already served by CDNs. Those numbers are trending rapidly upward with every passing year, and Azure has been a significant contributor.

As with most Azure services, Azure CDN is not magic; it actually works in a pretty simple and straightforward manner. Let's walk through an actual case:
1) A user (XYZ) requests a file (also called an asset) using a URL with a special CDN domain name. DNS routes the request to the best-performing Point-of-Presence (POP) location; usually this is the POP that is geographically closest to the user.
2) If the edge servers in the POP do not have the file in their cache, the edge server requests the file from the origin. The origin can be an Azure Web App, Azure Cloud Service, Azure Storage account, or any publicly accessible web server.
3) The origin returns the file to the edge server, including optional HTTP headers describing the file’s Time-to-Live (TTL).
4) The edge server caches the file and returns it to the original requestor (XYZ). The file remains cached on the edge server until the TTL expires. If the origin didn't specify a TTL, the default TTL is 7 days.
5) Additional users (e.g. ABC) may then request the same file using the same URL, and may also be directed to the same POP.
6) If the TTL for the file hasn’t expired, the edge server returns the file from the cache. This results in a faster, more responsive user experience.
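The six steps above can be sketched as a toy edge-cache model. This is an illustration of the TTL logic only, with invented file names and timestamps; real POPs handle headers, eviction, and far more.

```python
# Toy model of steps 1-6: an edge server caches files fetched from the
# origin until their TTL expires. Timestamps are passed in explicitly so
# the behaviour is easy to follow.

DEFAULT_TTL = 7 * 24 * 3600  # 7 days: the default when the origin sets no TTL

class EdgePop:
    def __init__(self, origin):
        self.origin = origin        # path -> (content, ttl or None)
        self.cache = {}             # path -> (content, expires_at)
        self.origin_fetches = 0

    def get(self, path, now):
        entry = self.cache.get(path)
        if entry and now < entry[1]:          # cache hit, TTL not expired
            return entry[0]
        content, ttl = self.origin[path]      # miss: fetch from the origin
        self.origin_fetches += 1
        self.cache[path] = (content, now + (ttl or DEFAULT_TTL))
        return content

pop = EdgePop({"/img/logo.png": ("logo-bytes", 300)})
pop.get("/img/logo.png", now=0)      # first user: origin fetch
pop.get("/img/logo.png", now=100)    # second user: served from cache
pop.get("/img/logo.png", now=400)    # TTL expired: origin fetch again
print(pop.origin_fetches)            # 2
```

The second request never reaches the origin, which is exactly the latency and bandwidth saving the article describes.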

Reasons for using a CDN

1) To understand why Azure CDN is so widely used, we first have to recognize the issue it is designed to solve: LATENCY. This is the annoying delay between the moment you request a web page and the moment its content actually appears onscreen, especially in applications where many "internet trips" are required to load content. Quite a few factors contribute to it, many specific to a given web page. In all cases, however, the delay is affected by the physical distance between you and the website's hosting server. Azure CDN's mission is to virtually shorten that physical distance, with the goal of improving site rendering speed and performance.
2) Another obvious reason for using Azure CDN is throughput. If you look at a typical web page, about 20% of it is HTML that is dynamically rendered based on the user's request. The other 80% is static files such as images, CSS, and JavaScript. Your server has to read those static files from disk and write them to the response stream, both of which consume resources on your virtual machine. By moving static content to the Azure CDN, your virtual machine has more capacity available for generating dynamic content.
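To make the 80/20 point concrete, here is a back-of-the-envelope estimate of the bytes a CDN offloads from the origin; all inputs are invented example numbers, not Azure measurements.

```python
# Rough illustration of the static-content offload described above:
# bytes per day served from the edge instead of the origin server.
# Every input here is a made-up example figure.

page_weight_kb = 2048          # total bytes per page view (hypothetical)
static_fraction = 0.80         # share that is images/CSS/JS, per the text
views_per_day = 100_000
cache_hit_ratio = 0.95         # fraction of static requests the CDN absorbs

offloaded_gb = (page_weight_kb * static_fraction * cache_hit_ratio
                * views_per_day) / (1024 * 1024)
print(f"{offloaded_gb:.1f} GB/day served from the edge instead of the origin")
```

Even with modest traffic, the origin is relieved of the bulk of its disk reads and response-stream writes.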

When a request for an object is first made to the CDN, the object is retrieved directly from the Blob service or from the cloud service. When a request is made using the CDN syntax, the request is redirected to the CDN endpoint closest to the location from which the request was made to provide access to the object. If the object is not found at that endpoint, then it is retrieved from the service and cached at the endpoint, where a time-to-live (TTL) setting is maintained for the cached object.

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here

Cost Optimization Tips for Azure Cloud-Part III

Continuing from my previous blog, I am going to jot down more tips on how to optimize cost when moving into the Azure public cloud.


With Microsoft introducing the next generation of Azure deployment via Azure Resource Manager (ARM), we can gain a significant performance improvement just by upgrading VMs to the latest versions (from Azure V1 to Azure V2). In all cases the price is either the same or nearly the same.
For example, upgrading a DV1-series VM to a DV2-series VM gives you 35-40% faster processing for the same price point.


It is not enough to shut down VMs from within the instance to avoid being billed because Azure continues to reserve the compute resources for the VM including a reserved public IP. Unless you need VMs to be up and running all the time, shut down and deallocate them to save on cost. This can be achieved from Azure Management portal or Windows Powershell.


If you delete a VM, its VHDs are not deleted, so you can safely delete the VM without losing data. However, you will still be charged for storage; to delete a VHD, delete the file from Blob storage.

  •  When an end-user’s PC makes a DNS query, it doesn’t contact the Traffic Manager Name servers directly. Instead, these queries are sent via “recursive” DNS servers run by enterprises and ISPs. These servers cache the DNS responses, so that other users’ queries can be processed more quickly. Since these cached responses don’t reach the Traffic Manager Name servers, they don’t incur a charge.

The caching duration is determined by the "TTL" parameter in the original DNS response. This parameter is configurable in Traffic Manager; the default is 300 seconds, and the minimum is 30 seconds.

By using a larger TTL, you can increase the amount of caching done by recursive DNS servers and thereby reduce your DNS query charges. However, increased caching will also impact how quickly changes in endpoint status are picked up by end users, i.e. your end-user failover times in the event of an endpoint failure will become longer. For this reason, we don't recommend using very large TTL values.

Likewise, a shorter TTL gives more rapid failover times, but since caching is reduced, the query counts against the Traffic Manager name servers will be higher.

By allowing you to configure the TTL value, Traffic Manager enables you to make the best choice of TTL based on your application’s business needs.
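As a rough illustration of this trade-off, the model below assumes each recursive resolver re-queries Traffic Manager about once per TTL window. The resolver count and the per-million-query price are hypothetical figures, not Azure's published rates.

```python
# Back-of-the-envelope model of the TTL trade-off: a larger TTL means
# fewer billable DNS queries but slower failover. The resolver count and
# price below are illustrative assumptions only.

def monthly_queries(resolvers: int, ttl_seconds: int, days: int = 30) -> int:
    # One query per resolver per TTL window, over the whole month.
    return resolvers * (days * 24 * 3600) // ttl_seconds

PRICE_PER_MILLION = 0.54   # hypothetical $/million DNS queries

for ttl in (30, 300, 3600):
    q = monthly_queries(resolvers=50_000, ttl_seconds=ttl)
    print(f"TTL={ttl:>4}s  queries/month={q:>13,}  "
          f"cost~${q / 1_000_000 * PRICE_PER_MILLION:,.2f}")
```

The 30-second minimum TTL costs roughly ten times more in queries than the 300-second default, which is the quantitative side of the failover-versus-cost choice described above.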

  • If you provide write access to a blob, a user may choose to upload a 200GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2TB in egress costs for you. Again, provide limited permissions to help mitigate the potential of malicious users. Use short-lived Shared Access Signatures (SAS) to reduce this threat (but be mindful of clock skew on the end time).
  • Azure App Service charges apply to apps in a stopped state. Delete apps that are not in use, or move them to the Free tier, to avoid charges.
  • In Azure Search, the Stop button is meant to stop traffic to your service instance. As a result, your service is still running and will continue to be charged the hourly rate.
  • Use Blob storage to store images, videos, and text files instead of storing them in a SQL database. Blob storage costs much less: a 100GB SQL Database costs $175 per month, while the same amount of Blob storage costs only $7 per month. To reduce cost and increase performance, put large items in blob storage and store the blob record key in the SQL database.
  • Cycle out old records and tables in your database. This saves money, and knowing what you can or cannot delete is important if you hit your database Max Size and you need to quickly delete records to make space for new data.
  • If you intend to use a substantial amount of Azure resources for your application, you can choose a volume purchase plan. These plans allow you to save 20 to 30% of your data-center cost for larger applications.
  • Use a strategy for removing old backups that maintains history while reducing storage needs. If you keep backups for the last hour, day, week, month, and year, you have good backup coverage while not incurring more than 25% of your database costs for backup. With a 1GB database, your cost would be $9.99 per month for the database and only $0.10 per month for the backup space.
  • A benefit of Azure DocumentDB stored procedures is that they enable applications to perform complex batches and sequences of operations directly inside the database engine, closer to the data, so the network latency cost of batching and sequencing operations can be completely avoided. Another advantage of stored procedures is that they are implicitly pre-compiled to byte-code format upon registration, avoiding script-compilation costs at each invocation.
  • The default cloud service size is 'small'. You can change it to 'extra small' under your cloud service's properties and settings; this reduces your cost from $90 to $30 a month at the time of writing. The difference between 'extra small' and 'small' is that the virtual machine has 780 MB of memory instead of 1780 MB.
  • Windows Azure Diagnostics may burst your Storage Transaction bill if you do not control it properly.

You need to define what kinds of logs (IIS logs, crash dumps, FREB logs, arbitrary log files, performance counters, event logs, etc.) are collected and sent to Windows Azure Storage, either on a schedule or on demand.

However, if you do not carefully define what you really need for diagnostics, you might end up paying an unexpected bill.

Assuming the following figures:

  • You have an application that requires the high processing power of 100 instances
  • You apply 5 performance-counter logs (Processor\% Processor Time, Memory\Available Bytes, PhysicalDisk\% Disk Time, Network Interface\Bytes Total/sec, Processor\Interrupts/sec)
  • Performing a scheduled transfer every 5 seconds
  • The instance will run 24 hours per day, 30 days per month

How much does it cost in storage transactions per month?

5 counters X 12 times X 60 min X 24 hours X 30 days X 100 instances = 259,200,000 transactions

$0.01 per 10,000 transactions X 259,200,000 transactions = $259.20 per month

To bring that down, ask yourself: do you really need to monitor all 5 performance counters every 5 seconds? What if you reduce them to 3 counters and monitor every 20 seconds?

3 counters X 3 times X 60 min X 24 hours X 30 days X 100 instances = 38,880,000 transactions

$0.01 per 10,000 transactions X 38,880,000 transactions = $38.88 per month

You can see how much you save with these numbers. Windows Azure Diagnostics is genuinely needed, but using it improperly may cost you unnecessary money.
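The two scenarios above reduce to a simple calculation, sketched here with the article's $0.01-per-10,000-transactions rate:

```python
# Reusable form of the diagnostics-cost arithmetic above.
# The rate comes from the article's figures, not current Azure pricing.

def diag_transactions(counters, transfers_per_min, instances,
                      hours=24, days=30):
    """Storage transactions per month for scheduled counter transfers."""
    return counters * transfers_per_min * 60 * hours * days * instances

def cost(transactions, rate_per_10k=0.01):
    return transactions / 10_000 * rate_per_10k

# 5 counters transferred every 5 seconds (12 times/minute), 100 instances:
heavy = diag_transactions(counters=5, transfers_per_min=12, instances=100)
# 3 counters transferred every 20 seconds (3 times/minute), 100 instances:
light = diag_transactions(counters=3, transfers_per_min=3, instances=100)

print(f"{heavy:,} transactions -> ${cost(heavy):.2f}/month")
print(f"{light:,} transactions -> ${cost(light):.2f}/month")
```

This reproduces the $259.20 and $38.88 figures worked out above.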

  • An application organizes blobs into a different container per user and allows users to check the size of each container. For that, a function is created that loops through every file inside the container and returns the size. This functionality is exposed on a UI screen, and an admin can typically call it a few times a day.

Assuming the following figures for illustration:

  • I have 1,000 users.
  • I have 10,000 files on average in each container.
  • The admin calls this function 5 times a day on average.

How much does it cost in storage transactions per month?

Remember: a single Get Blob request is considered 1 transaction!

1,000 users X 10,000 files X 5 queries X 30 days = 1,500,000,000 transactions

$ 0.01 per 10,000 transactions X 1,500,000,000 transactions = $ 1,500 per month

Well, that's not cheap at all, so let's bring it down.

Do not expose this functionality as a real-time query to the admin. Consider automatically running the function once a day and saving the result somewhere, letting the admin view the daily result (day by day). By limiting the admin to one view per day, the monthly cost looks like this:

1,000 users X 10,000 files X 1 query X 30 days = 300,000,000 transactions

$ 0.01 per 10,000 transactions X 300,000,000 transactions = $ 300 per month
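The same arithmetic, parameterized; this reproduces the figures above under the article's premise that each Get Blob request is one billable transaction:

```python
# Monthly cost of the naive "container size" loop described above.
# Every Get Blob call counts as one storage transaction.

def monthly_size_query_cost(users, files_per_container, queries_per_day,
                            days=30, rate_per_10k=0.01):
    tx = users * files_per_container * queries_per_day * days
    return tx, tx / 10_000 * rate_per_10k

tx_rt, cost_rt = monthly_size_query_cost(1_000, 10_000, queries_per_day=5)
tx_dy, cost_dy = monthly_size_query_cost(1_000, 10_000, queries_per_day=1)
print(f"real-time: {tx_rt:,} transactions -> ${cost_rt:,.2f}/month")
print(f"daily:     {tx_dy:,} transactions -> ${cost_dy:,.2f}/month")
```

Caching the result once a day cuts the bill from $1,500 to $300 a month, exactly as worked out above.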

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here

Cost Optimization Tips for Azure Cloud-Part II

Cloud computing comes with myriad benefits with its various as-a-service models and hence most businesses consider it wise to move their IT infrastructure to cloud. However, many IT admins worry that hidden costs will lower their department’s total cost of ownership.

We believe that it is more about estimating your requirements correctly and managing resources in the right way.

Microsoft Azure Pricing

Microsoft Azure allows you to quickly deploy infrastructures and services to meet all of your business needs. You can run Windows and Linux based applications in 22 Azure data-center regions, delivered with enterprise grade SLAs. Azure services come with:

  • No upfront costs
  • No termination fees
  • Pay only for what you use
  •  Per minute billing

You can calculate your expected monthly bill using Pricing Calculator and track your actual account usage and bill at any time using the billing portal.

1. Azure allows you to set a monthly spending limit on your account. So, if you forget to turn off your VMs, your Azure account will get disabled before you run over your predefined monthly spending limit. You can also set email billing alerts if your spend goes above a preconfigured amount.

2. It is not enough to shut down VMs from within the instance to avoid being billed because Azure continues to reserve the compute resources for the VM including a reserved public IP. Unless you need VMs to be up and running all the time, shut down and deallocate them to save on cost. This can be achieved from Azure Management portal or Windows Powershell.

3. Delete unused VPN gateways and application gateways, as they are charged whether they run inside a virtual network or connect to other virtual networks in Azure. Your account is charged based on the time the gateway is provisioned and available.

4. To avoid reserved IP address charges, at least one VM must be running at all times (one reserved IP is included in the first 5 reserved public IPs in use). If you shut down all the VMs in a service, Microsoft is likely to reassign that IP to another customer's cloud service, which can hamper your business.

5. Minimize the number of compute hours by using auto scaling. Auto scaling can minimize the cost by reducing the total compute hours so that the number of nodes on Azure scales up or down based on demand.

6. When an end-user’s PC makes a DNS query, recursive DNS servers run by enterprises and ISPs cache the DNS responses. These cached responses don’t incur charge as they don’t reach the Traffic Manager Name servers. The caching duration is determined by the “TTL” parameter in the original DNS response. With larger TTL value, you can reduce DNS query charges but it would result in longer end-user failover times. On the other hand, shorter TTL value will reduce caching resulting in more query counts against Traffic Manager Name server. Hence, configure TTL in Traffic Manager based on your business needs.

7. Blob storage offers a cost-effective solution for storing graphics data. 2 GB of Blob storage of type Table or Queue costs $0.14/month, while 2 GB of block blob storage costs just $0.05/month.


A SQL Database of similar capacity will cost $4.98/month. Hence, use blob storage to store images, videos and text files instead of storing in SQL Database.


To reduce the cost and increase the performance, put the large items in the blob storage and store the blob record key in SQL database.
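Using the per-month prices quoted above (the article's figures, not current Azure pricing), the saving for 2 GB of graphics data works out as follows:

```python
# Comparing the 2 GB storage prices quoted in the text: a SQL Database
# versus block blob storage. Figures are the article's, and will differ
# from current Azure pricing.

PRICES_2GB = {
    "sql_database": 4.98,   # $/month, per the text
    "block_blob": 0.05,     # $/month, per the text
}

savings = PRICES_2GB["sql_database"] - PRICES_2GB["block_blob"]
ratio = PRICES_2GB["sql_database"] / PRICES_2GB["block_blob"]
print(f"Blob storage saves ${savings:.2f}/month (~{ratio:.0f}x cheaper for 2 GB)")
```

This is why the recommended pattern is to keep the large binary in blob storage and hold only the blob record key in the SQL database.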

Above tips will definitely help you cut cost on Azure and leverage the power of cloud computing to the best!


Cost Optimization Tips for Azure Cloud-Part I

In general, there are quite a few driving forces behind the rapid adoption of cloud platforms of late, but doing it within budget is the actual challenge. The key benefit of public cloud providers like Azure is the pay-as-you-go pricing model, which frees customers from capital investment, but cloud expenses can start to add up and soon get out of control without effective cost management. It takes attention and care to take control of your cloud costs and decide on a better cost-management strategy.

In these articles I will try to outline a few of Azure's cost-saving and optimization considerations. This is going to be a three-part series; this first part could be subtitled "7 considerations for highly effective Azure architecture", because it covers things from an architect's point of view.

1. Design for Elasticity

Elasticity is one of the fundamental properties of Azure and drives many of its economic benefits. By designing your architecture for elasticity, you avoid over-provisioning of resources; restrict yourself to using only what is needed. There is an umbrella of services in Azure that helps customers get rid of under-utilized resources (always make use of services like VM scale sets and autoscaling).

2. Leverage Azure Application Services (Notification, Queue, Service Bus etc.)
Application services in Azure don't just help with performance optimization; they can greatly affect the cost of the overall infrastructure. Judiciously decide which services your workload needs and provision them optimally. Make use of existing services; don't try to reinvent the wheel.
When you install software to meet a requirement, you get the benefit of customizable features, but the trade-off is immense: you need an instance for it, which in turn restricts the availability of that software by tying it to a particular VM. If you instead choose the corresponding Azure services, you get built-in availability, scalability, and high performance, with the option of pay-as-you-go.

3. Always Use Resource Group
Keep related resources in close proximity; that way you save money on communication among the services, and the application gets a performance boost because latency is no longer a factor. In later articles I will talk specifically about the other benefits this particular service can offer.

4. Off Load From Your Architecture
Try to offload as much as possible by distributing things to their better-suited services; this not only reduces maintenance headaches but helps optimize cost too. Move session-related data out of the server, and optimize the infrastructure for performance and cost by caching and edge-caching static content.

Combine multiple JS and CSS files into one, then compress them for minification. Once bundled into compressed form, move them to Azure Blob storage. When your static content is popular, front it with Azure Content Delivery Network. Use Blob + Azure CDN, as it reduces cost as well as latency (depending on the cache-hit ratio). For anything related to media streaming, make use of Azure CDN, as it frees you from running Adobe FMS.
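The "combine, then compress" advice can be sketched in a few lines. The file contents here are stand-ins, and a real pipeline would also minify the sources before gzipping them.

```python
import gzip

# Sketch: concatenate several small JS/CSS files into one bundle and
# gzip it before uploading to blob storage. One request replaces three,
# and the transfer size shrinks. File contents are illustrative stand-ins.

files = {
    "jquery-helpers.js": b"function a(){return 1;}\n" * 50,
    "site.js":           b"function b(){return 2;}\n" * 50,
    "theme.css":         b".nav{color:#333;}\n" * 50,
}

bundle = b"".join(files.values())      # one asset instead of three requests
compressed = gzip.compress(bundle)     # smaller payload over the wire

print(len(bundle), "->", len(compressed), "bytes")
```

The compressed bundle is what you would upload to blob storage and front with the CDN.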

5. Caching And Compression For CDN Content
After analyzing multiple customer subscriptions, we see a pattern of modest to huge CDN spends. A common cause is customers forgetting to enable caching for CDN resources at origin servers such as Azure Blob storage. You should enable compression for content like CSS, JavaScript, text files, JSON, and HTML to save on bandwidth costs. Customers also frequently deploy production changes and forget to re-enable caching and compression for static resources and dynamic content such as text/HTML/JSON. We recommend a post-deploy job as part of your release automation to ensure client-side caching, server-side compression, etc. are enabled for your application and resources.

6. Continuous Optimization In Your Architecture
If you have been using Azure for the past few years, there is a high possibility you are using outdated services. Once designed, you should not tinker too much with the architecture, but it is good to look and see whether there are things that can be replaced with new-generation services; they might be a better fit for the workload and offer the same results at lower expense. Always match resources with the workload.
This not only gives you instant benefits but also recurring savings in your next month's bill.

7. Optimize The Provisioning Based On Consumption Trend

You need to be aware of what you are using; there is no need to waste money on expensive instances or services you don't need. Automatically turn off what you don't need; services like Azure Automation can help you achieve that. Make use of Azure services like autoscaling, VM scale sets, and Azure Automation for uninterrupted service even when traffic increases beyond expectations. A special mention goes to Azure DevTest, a service designed specifically for development and testing scenarios: with it, end users can model their infrastructure so they are charged only for office hours (usually 8x5), and these settings are customizable, making it even more flexible. When dealing with Azure Storage, make use of the appropriate storage classes with the required redundancy options. Services like File storage, page blobs, and block blobs each have a specific purpose, so be clear about them while designing your architecture.

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here

Azure Storage – High Level Architecture


Windows Azure Storage is a cloud storage service that is highly durable, available, and scalable. Once your data is stored in Azure Storage, you can access it any time and from anywhere. It provides four abstractions (services): Blob storage, Table storage, Queue storage, and File storage. Each has a different role to play; you can get more information here…

In conjunction with the aforementioned services, it also provides the storage foundation for Azure Virtual Machines in the form of persistent data disks.
The goal of this article is not to explain the offering but to understand the fundamentals by which Azure Storage achieves the design goals it sets out. Let's try to decipher how Microsoft does some of these things in Azure Storage under the covers.
Any data stored in Azure Storage is triplicated by default; in normal circumstances, at least 3 copies are created inside a storage stamp (and potentially in another region when geo-resiliency is enabled). So, before diving into the storage architecture, it's only fair to shed some light on stamps.

Azure Storage Stamp

Azure divides things into stamps, where each stamp has its own fabric controller. A single storage stamp is best understood as a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Clusters typically range from 10 to 20 racks, with 18 disk-heavy storage nodes per rack.

Microsoft deploys these stamps in its Azure data centers across the world, and adds more stamps as demand grows. Inside the datacenter it’s used as a unit of deployment and management. They provide huge assistance in achieving fault-tolerance.

When a user creates a storage account, all of its data is stored on a single stamp. Accounts are migrated between stamps only when the need arises; Microsoft makes sure a single storage stamp is only utilized to ~75% of its capacity and bandwidth, keeping ~20% in reserve for:

  1. Disk short stroking to gain better seek times and higher throughput by utilizing the outer tracks of the disks
  2. To continue providing storage capacity and availability in the presence of a rack failure within a stamp. When the storage stamp reaches ~75% utilization, the location service migrates accounts to different stamps using Inter-Stamp replication
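The capacity-driven migration described above can be sketched as a simple rebalancing loop. This is our own illustration, not Azure's actual code: the stamp names, sizes, and the "move the smallest account" heuristic are all assumptions.

```python
# Illustrative sketch (not Azure's real implementation) of keeping each
# stamp below the ~75% utilization target by migrating accounts away.
TARGET_UTILIZATION = 0.75  # threshold taken from the article

class Stamp:
    def __init__(self, name, capacity_tb):
        self.name = name
        self.capacity_tb = capacity_tb
        self.accounts = {}  # account name -> size in TB

    @property
    def utilization(self):
        return sum(self.accounts.values()) / self.capacity_tb

def rebalance(stamps):
    """Move accounts off over-utilized stamps; in the real system,
    inter-stamp replication copies the data in the background first."""
    for stamp in stamps:
        while stamp.utilization > TARGET_UTILIZATION and len(stamp.accounts) > 1:
            # pick the smallest account to move (one possible heuristic)
            acct, size = min(stamp.accounts.items(), key=lambda kv: kv[1])
            dest = min((s for s in stamps if s is not stamp),
                       key=lambda s: s.utilization)
            stamp.accounts.pop(acct)
            dest.accounts[acct] = size

stamps = [Stamp("stamp-1", 100), Stamp("stamp-2", 100)]
stamps[0].accounts = {"acct-a": 50, "acct-b": 30}   # stamp-1 is 80% full
rebalance(stamps)
print([round(s.utilization, 2) for s in stamps])    # [0.5, 0.3]
```

After rebalancing, every stamp sits back under the target, leaving headroom for short stroking and rack-failure recovery.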

Location Service

The Location Service (LS) in Azure is responsible for managing the storage stamps. Another notable responsibility is managing the account namespaces across all stamps. The Location Service itself is distributed across two geographical locations for its own disaster recovery, which makes it immune to a geo failure.

As shown in the architecture diagram, we have a Location Service with two storage stamps, and within the stamps we have all three layers mentioned. While tracking resources, the Location Service looks at each storage stamp in production across all locations.

When an application requests a new account for storing data, it specifies the location affinity for the storage (e.g., US North). The LS then chooses a storage stamp within that location as the primary stamp for the account, using heuristics based on the load information across all stamps (which consider the fullness of the stamps and other metrics such as network and transaction utilization). Next, the LS stores the account metadata in the chosen storage stamp, which tells the stamp to start taking traffic for the assigned account. Finally, the LS updates DNS so that requests route from the account name to the IP address that stamp has exposed for external traffic.
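That account-placement flow can be sketched in a few lines. Everything below is hypothetical (stamp names, load figures, the plain dict standing in for DNS); only the overall shape, pick the least-loaded stamp in the requested location and then point DNS at it, comes from the text above.

```python
# Hypothetical sketch of the Location Service's account placement:
# choose a primary stamp by load, then map the DNS name to its IP.
stamps = [
    {"name": "us-north-1", "location": "US North", "load": 0.62, "ip": "10.0.0.1"},
    {"name": "us-north-2", "location": "US North", "load": 0.41, "ip": "10.0.0.2"},
    {"name": "eu-west-1",  "location": "EU West",  "load": 0.30, "ip": "10.1.0.1"},
]
dns = {}  # account hostname -> stamp IP (stand-in for real DNS updates)

def create_account(account_name, location):
    candidates = [s for s in stamps if s["location"] == location]
    primary = min(candidates, key=lambda s: s["load"])  # load heuristic
    # (the real LS also weighs network and transaction utilization)
    dns[f"{account_name}.blob.core.windows.net"] = primary["ip"]
    return primary["name"]

print(create_account("myacct", "US North"))  # us-north-2 (least loaded)
```

Once the DNS entry exists, all traffic for `myacct` lands on the chosen stamp without the client ever knowing which stamp that is.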

Architecture Layers inside Stamps

To understand how the data is kept durable, consistent, and available within a specific Azure region, let's look at the different layers which constitute a stamp. Windows Azure Storage is a layered system, so let's take it from the bottom up.

Stream Layer: The first, or lowest, layer of this subsystem, also referred to as the DFS (Distributed File System) layer. This layer stores the bits on disk and is in charge of handling the disks; it is this layer's responsibility to persist your data by distributing and replicating it across many servers to provide durability within a storage stamp. You can think of the underlying system as a JBOD within the stamp: data stored in the Azure Storage service is written by the DFS into files called extents, and these extents are replicated three times across update domains (UDs) and fault domains (FDs). The unique thing about this file system is that it is append-only, so when you overwrite any data, Azure keeps appending the new data to these extents.
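A toy model helps make the append-only, triply-replicated extent idea concrete. This is a deliberate simplification (three Python lists stand in for three storage nodes, and a tiny extent size forces sealing early):

```python
# Toy model of the stream layer: every append goes synchronously to
# three replicas, and a full extent is sealed and replaced by a new one.
REPLICA_COUNT = 3
EXTENT_SIZE = 4  # records per extent; tiny on purpose, for illustration

class Stream:
    def __init__(self):
        # each extent is replicated on 3 "nodes" (here, 3 parallel lists)
        self.extents = [[[] for _ in range(REPLICA_COUNT)]]

    def append(self, record):
        if len(self.extents[-1][0]) >= EXTENT_SIZE:
            # current extent is full -> sealed; start a new one
            self.extents.append([[] for _ in range(REPLICA_COUNT)])
        for replica in self.extents[-1]:   # synchronous: all 3 replicas
            replica.append(record)

s = Stream()
for i in range(6):
    s.append(f"rec-{i}")
print(len(s.extents))                                          # 2
print(s.extents[0][0] == s.extents[0][1] == s.extents[0][2])   # True
```

Note there is no "update in place" method at all: an overwrite at a higher layer would simply arrive here as another append.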

Partition Layer: Next comes the partition layer, which can be called the brain of the Azure Storage service; most of the decision making happens here. It serves two unique purposes. First, this layer is built for managing and understanding the higher-level data abstractions (Blob, Table, Queue, File), so it is this layer that understands what a blob or a table is and how to perform transactions on those objects. Second, it provides a scalable object namespace and is responsible for the massively scalable index. In addition, it provides transaction ordering and strong consistency for objects, storing object data on top of the stream layer.
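One way a layer like this can provide a scalable namespace is range partitioning: the key space is split into ordered ranges, each served by one partition server. The range boundaries and server names below are invented for illustration:

```python
# Sketch (our simplification) of routing a PartitionName to a partition
# server via range partitions over a sorted key space.
import bisect

# each entry: low key of a range (inclusive) -> the server that owns it
range_starts = ["", "h", "p"]          # sorted low keys of the ranges
servers      = ["ps-1", "ps-2", "ps-3"]

def lookup(partition_name):
    idx = bisect.bisect_right(range_starts, partition_name) - 1
    return servers[idx]

print(lookup("container1/blob.txt"))  # "ps-1" ('c' falls before 'h')
print(lookup("photos/cat.jpg"))       # "ps-3" ('p' range)
```

Because ranges are contiguous, a hot range can be split and handed to another server without rehashing the whole namespace, which is what makes the index scale.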

Front-End Layer: Finally, at the top of the Azure Storage service we have the front-end layer, which provides a REST protocol for those abstractions (Blob, Table, Queue, Files). The front-end layer consists of a set of stateless servers that take incoming requests. Upon receiving a request, an FE looks up the AccountName, authenticates and authorizes the request, then routes it to a partition server in the partition layer (based on the PartitionName). Every write request that comes into the system has a partition key associated with it. So, in a nutshell, this layer also covers authentication and authorization.
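The stateless FE flow described above can be sketched as a single function. The account table, key check, and hash-based routing stand-in are all hypothetical; the point is only that the FE keeps no per-request state and hands off by PartitionName:

```python
# Hypothetical sketch of a stateless front-end: resolve the account,
# check authorization, then forward the request by PartitionName.
accounts = {"myacct": {"key": "secret", "stamp": "stamp-1"}}

def route_to_partition(partition_name):
    # stand-in for a real partition-map lookup in the partition layer
    return "ps-" + str(hash(partition_name) % 4)

def handle_request(account, presented_key, partition_name):
    meta = accounts.get(account)
    if meta is None or meta["key"] != presented_key:
        return 403                       # authn/authz failure
    server = route_to_partition(partition_name)
    return f"forwarded to {server}"      # FE keeps no per-request state

print(handle_request("myacct", "wrong", "container/blob"))   # 403
```

Statelessness is what lets the FE tier scale horizontally: any FE server can handle any request, since all the state lives in the layers below.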
Global Name-space

One of the key design goals of the Azure Storage service is to provide a single global namespace that allows data to be stored and accessed in a consistent manner from any location in the world. To provide this capability, Microsoft leverages DNS as part of the storage namespace and breaks the namespace into three parts: an account name, a partition name, and an object name. As a result, all data is accessible via a uniform URI of the form:
http(s)://AccountName.&lt;service&gt;.core.windows.net/PartitionName/ObjectName


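Splitting such a URI back into its three namespace parts is mechanical; the helper below is our own illustration (and a simplification: for Blobs, for instance, the full blob name acts as the PartitionName):

```python
# Sketch: decomposing a storage URI into AccountName / PartitionName /
# ObjectName. Simplified: real services split the path differently
# (e.g. for Blobs the entire blob name serves as the PartitionName).
from urllib.parse import urlparse

def split_namespace(uri):
    parsed = urlparse(uri)
    account = parsed.netloc.split(".")[0]            # AccountName from DNS
    partition, _, obj = parsed.path.lstrip("/").partition("/")
    return account, partition, obj

print(split_namespace("https://myacct.blob.core.windows.net/photos/cat.jpg"))
# ('myacct', 'photos', 'cat.jpg')
```

The AccountName rides in DNS (so it can route to any stamp in the world), while the PartitionName and ObjectName are resolved inside the stamp.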
Two Replication Engines

To address the replication challenge, which is the backbone of all the Azure Storage service design goals, Azure in general uses two types of replication:
Intra-Stamp Replication: Under this replication model, Azure keeps your data durable within a region or stamp. It provides synchronous replication and is focused on making sure all the data written into a stamp is kept durable within that stamp. It keeps three replicas of the data across different nodes in different fault domains/update domains, so the data stays durable within the stamp in the face of disk, node, and rack failures. Intra-stamp replication is done completely by the stream layer and is on the critical path of the customer's write requests. Once a transaction has been replicated successfully with intra-stamp replication, success can be returned to the customer. Under this engine Azure provides strong consistency: until the data is written in all three places, the transaction is not committed. And because the replication is implemented entirely at the stream layer, it can provide this durability with minimal overhead on the write path.
Inter-Stamp Replication: Under this replication model, Azure provides asynchronous replication focused on replicating data across stamps. Inter-stamp replication is done in the background and is off the critical path of the customer's request. Because the replication is asynchronous, the write is not strongly consistent across stamps. This replication operates at the object level, where either the whole object or the recent delta changes are replicated for a given account. Inter-stamp replication is configured for an account by the Location Service and performed by the partition layer.

“Inter-stamp replication is focused on replicating objects and the transactions applied to those objects, whereas intra-stamp replication is focused on replicating blocks of disk storage that are used to make up the objects.”
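The contrast between the two engines can be sketched as one synchronous write path plus one background queue. This is a toy model under obvious assumptions (lists as replicas, a deque as the geo-replication channel), not the real machinery:

```python
# Toy contrast: intra-stamp replication is synchronous and on the write
# path; inter-stamp replication drains asynchronously in the background.
from collections import deque

primary_replicas = [[], [], []]   # 3 intra-stamp replicas
secondary_stamp = []              # geo-replicated copy in another stamp
geo_queue = deque()               # pending inter-stamp replication

def write(record):
    for replica in primary_replicas:   # synchronous: commit needs all 3
        replica.append(record)
    geo_queue.append(record)           # async: merely enqueue for geo
    return "committed"                 # success before the geo copy exists

def drain_geo_queue():                 # runs in the background, off the
    while geo_queue:                   # customer's critical path
        secondary_stamp.append(geo_queue.popleft())

write("obj-v1")
print(len(secondary_stamp))   # 0 -> the secondary lags (eventual)
drain_geo_queue()
print(secondary_stamp)        # ['obj-v1'] after background replication
```

The `write` returning "committed" before `drain_geo_queue` runs is exactly the trade-off the article describes: strong consistency inside the stamp, eventual consistency across stamps.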

When Windows Azure Storage was being conceived, it had some design goals to achieve:

Consistency: One of Microsoft's design goals for WAS was to provide strong consistency within a region. Thanks to the mechanisms described above, all data written to Azure Storage gets triplicated, and all committed data across all 3 replicas is identical.
Durability: The second design goal was that the data must be durable, which requires a proper replication mechanism in place; to address this, all data is stored with at least 3 replicas.
Availability: Data can be read from any of the 3 replicas; if there is any issue writing, the current extent is sealed and appends continue on a new extent.
Performance/Scale: Retries are based on 95th-percentile latencies; the system automatically scales out and load balances based on load and capacity, so it can meet customers' peak demand.
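The "seal and continue" trick mentioned under Availability can be sketched as follows. This is our own illustration of the idea: when an append to the current extent fails, the extent is sealed (made immutable) and writes continue on a fresh one, so a bad replica never blocks new writes:

```python
# Sketch of "seal and continue": a failed append seals the current
# extent and the writer moves on to a brand-new extent immediately.
class ExtentWriter:
    def __init__(self):
        self.extents = [{"records": [], "sealed": False}]

    def append(self, record, fail_current=False):
        current = self.extents[-1]
        if fail_current:                 # simulate a replica write failure
            current["sealed"] = True     # seal: extent becomes immutable
            current = {"records": [], "sealed": False}
            self.extents.append(current) # continue on a new extent
        current["records"].append(record)

w = ExtentWriter()
w.append("a")
w.append("b", fail_current=True)   # failure -> seal old, append to new
print(len(w.extents), w.extents[0]["sealed"])  # 2 True
```

Because extents are append-only anyway, sealing costs nothing: the sealed extent stays fully readable while new writes land elsewhere.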

*You also need a global namespace to access the data from around the world.

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here .