Azure Resource Lock: Safeguard Your Critical Resources

Prevention is better than cure. There have been quite a few occasions when I wished I had applied that logic, and it matters even more when you are working in the public cloud, especially with mission-critical resources. Put bluntly, we often need a way to prevent other users in the organization from accidentally deleting or modifying critical resources.

Azure gives us a couple of ways to apply that level of control. The first is role-based access control (RBAC). With the Reader and the various Contributor roles, RBAC is a great way to help protect resources in Azure: you can effectively limit the actions a user can take against a resource. However, even with one of the Contributor roles, it is still possible to delete specific resources, which makes it easy to delete an item accidentally.

Azure Resource Lock gives you the means to effectively prevent any such mishap. Unlike RBAC, management locks apply a restriction across all users and roles. (To learn about setting permissions for users and roles, see Azure Role-Based Access Control.) Using a resource lock you can lock a subscription, a resource group, or even a specific resource. With a lock in place, authorized users can still read or (depending on the lock level) modify the resource, but they CANNOT bypass the lock and delete it.

To make this happen, you apply a lock level to one of the scopes mentioned above. You can set the lock level to CanNotDelete or ReadOnly (as of now, these are the only two options supported). CanNotDelete means authorized users can still read and modify a resource, but they can't delete it. ReadOnly means authorized users can read a resource, but they can't modify or delete it.

When you apply a lock at a parent scope, all child resources inherit the same lock.

One point worth mentioning: you also need to be in either the Owner or the User Access Administrator role for the desired scope, because working with resource locks requires access to the Microsoft.Authorization/* or Microsoft.Authorization/locks/* actions, and only those two roles have the appropriate permissions.

Create Resource Lock Using ARM Template

With an Azure Resource Manager template we can lock resources at the time of their creation. An ARM template is a JSON-formatted file that provides a declarative way to define the deployment of Azure resources. Here is an example of how to create a lock on a particular storage account:
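The template screenshot from the original post is missing here. Based on the description that follows (the storage account name arrives via a parameter, and the lock "utLock" is named by concatenating the resource name with /Microsoft.Authorization/), a reconstruction might look like the following; the API versions and the storage account properties are assumptions:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    },
    {
      "type": "Microsoft.Storage/storageAccounts/providers/locks",
      "name": "[concat(parameters('storageAccountName'), '/Microsoft.Authorization/utLock')]",
      "apiVersion": "2015-01-01",
      "dependsOn": [
        "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]"
      ],
      "properties": { "level": "CanNotDelete" }
    }
  ]
}
```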



Looking at the example closely, the name of the storage account comes in via a parameter, while the most important detail to notice is how the lock (utLock) is created by concatenating the resource name with /Microsoft.Authorization/ and the name of the lock.

Create Resource Lock using PowerShell

Placing a resource lock on an entire group is helpful when you want to ensure that no resources in that group are deleted. In the example below I create a resource lock on a particular resource group, "UT-RG":
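The screenshot of the command is missing; a sketch consistent with the Azure PowerShell of that era (pre-1.0 AzureResourceManager mode, with a hypothetical lock name) might be:

```powershell
# Switch to the Resource Manager module (pre-1.0 Azure PowerShell)
Switch-AzureMode AzureResourceManager

# Lock the resource group "UT-RG" so its resources cannot be deleted.
# "utRGLock" is a hypothetical lock name.
New-AzureResourceLock -LockLevel CanNotDelete `
    -LockName "utRGLock" `
    -ResourceGroupName "UT-RG"
```

In current Azure PowerShell the equivalent cmdlet is New-AzResourceLock.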


To remove the resource lock, use the Remove-AzureResourceLock cmdlet, making sure you provide the proper ResourceId:
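A rough sketch of the removal, again with era-appropriate cmdlets (the lock's ResourceId below is a truncated placeholder, and the exact parameter name may differ by module version):

```powershell
# List existing locks to find the ResourceId of the lock to remove
Get-AzureResourceLock

# Remove the lock by its ResourceId (placeholder path shown)
Remove-AzureResourceLock `
    -LockId "/subscriptions/.../resourceGroups/UT-RG/providers/Microsoft.Authorization/locks/utRGLock"
```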


Of late, Azure has brought this support to the portal as well. To achieve the same thing via the portal, open the Settings blade for the resource, resource group, or subscription that you wish to lock and select Locks. When prompted, give the lock a name and a lock level, and you are protected from the unwanted situations discussed above. You can even lock an entire subscription to ReadOnly if malicious activity is detected.


Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here.


Azure Virtual Machine – Architecture

Microsoft Azure is built on Microsoft's definition of commodity infrastructure. The most intriguing part of Azure is the cloud operating system at its heart. In its early days, Azure ran on a fork of Windows as its underlying platform, named the Red Dog operating system and Red Dog hypervisor; indeed, the project that became Azure was originally called Project Red Dog. David Cutler was the brain behind the design and development of the core Red Dog components, and in his own words: "the premise of Red Dog (RD) is being able to share a single compute node across several properties. This enables better utilization of compute resources and the flexibility to move capacity as properties are added, deleted, and need more or less compute power. This in turn drives down capital and operational expenses."

It was actually a custom version of Windows, and the driving reason for the customization was that Hyper-V at the time didn't have the features Azure needed (particularly support for booting from VHD). The main components of the architecture rest on four pillars:

  • Fabric Controller
  • Storage
  • Integrated Development Tools and Emulated Execution Environment
  • OS and Hypervisor

Those were the initial (early 2006) days of Azure. As the platform matured, running a fork of an OS proved not ideal in terms of cost and complexity, so the Azure team worked with the Windows team to use Windows itself. Windows eventually caught up, and today Azure runs on Windows.

Azure Fabric Controller
Among these, one component that contributed immensely to Azure's success is the fabric controller. The fabric controller owns all the resources in the entire cloud and runs on a subset of nodes in a durable cluster. It manages the placement, provisioning, updating, patching, capacity, load balancing, and scale-out of nodes in the cloud, all without any operational intervention.

The Fabric Controller, which is still the backbone of Azure compute, is the kernel of the Microsoft Azure cloud operating system. It regulates the creation, provisioning, de-provisioning, and supervision of all the virtual machines and their back-end physical servers. In other words, it provisions, stores, delivers, monitors, and commands the virtual machines (VMs) and physical servers that make up Azure. As an added benefit, it also detects and responds to both software and hardware failures automatically.

Patch Management
When we try to understand the mechanism Microsoft follows for patch management, the common misconception is that it patches all the nodes in place, just as we do in our own environments. Things in the cloud are a little different. Azure hosts are image-based (hosts boot from VHD) and follow image-based deployment, so instead of delivering individual patches, Azure rolls out a new VHD of the host operating system. Rather than patching every host separately, Azure updates the image in one place and, because the update is orchestrated, uses that image to update the whole environment.

This offers a major advantage for host maintenance, since the volume itself can be replaced, enabling quick rollback. Host updates roll out every few weeks (4-6 weeks), with an approach where updates are well tested before being rolled out broadly to the data centers. It is Microsoft's responsibility to ensure that each rollout is tested before the data center servers are updated. To do so, it starts the rollout on a few fabric controller stamps, which could be called a pilot cluster, and once those pass, gradually pushes the update to the production (data center) hosts. The underlying technology behind this is the update domain (UD). When you create VMs and put them in an availability set, they get bucketed into update domains (by default you get 5, but there are provisions to increase this to 20), and all the VMs in the availability set are distributed evenly among these UDs. Patching then takes place in batches, and Microsoft ensures that only a single update domain is patched at a time; you could call this a staged rollout. To understand this in more detail, let's see how the Fabric Controller manages partitioning.

Azure's Fabric Controller has two types of partitions: update domains (UDs) and fault domains (FDs). Together they are responsible not only for high availability but also for the resiliency of the infrastructure, empowering Azure with the ability to recover from failures and continue to function. It's not about avoiding failures, but about responding to failures in a way that avoids downtime or data loss.

Update Domain: An Update Domain is used to upgrade a service’s role instances in groups. Azure deploys service instances into multiple update domains. For an in-place update, the FC brings down all the instances in one update domain, updates them, and then restarts them before moving to the next update domain. This approach prevents the entire service from being unavailable during the update process.

Fault Domain: Fault Domain defines potential points of hardware or network failure. For any role with more than one instance, the FC ensures that the instances are distributed across multiple fault domains, in order to prevent isolated hardware failures from disrupting service. All exposure to server and cluster failure in Azure is governed by fault domains.
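As an illustration, in the classic (Service Management) model of that era, two VMs placed in the same availability set were automatically spread across fault and update domains by the FC. A rough sketch, with all names, image, and credentials hypothetical and assuming the cloud service already exists:

```powershell
# Hypothetical Windows image name from Get-AzureVMImage
$img = "<windows-image-name>"

# Two VM configs sharing one availability set ("web-avset"), so the FC
# places them in different fault domains and update domains.
$vm1 = New-AzureVMConfig -Name "web1" -InstanceSize Small -ImageName $img `
           -AvailabilitySetName "web-avset" |
       Add-AzureProvisioningConfig -Windows -AdminUsername "utadmin" -Password "<password>"

$vm2 = New-AzureVMConfig -Name "web2" -InstanceSize Small -ImageName $img `
           -AvailabilitySetName "web-avset" |
       Add-AzureProvisioningConfig -Windows -AdminUsername "utadmin" -Password "<password>"

# Deploy both into the existing cloud service
New-AzureVM -ServiceName "utcloudsvc" -VMs $vm1, $vm2
```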

Azure Compute Stamp
In Azure, things get divided into stamps, where each stamp has one fabric controller, and that fabric controller is responsible for managing the VMs inside the stamp. There are only two types of stamps: compute stamps and storage stamps. The fabric controller itself is not a single instance; it is distributed. Based on the available information, Azure runs 5 replicas of the fabric controller and uses a synchronous mechanism to replicate state. One replica acts as the primary, and the control plane talks to it. It is the primary's responsibility to act on an instruction (for example, provision a VM) and to let the other replicas know about it; only when at least 3 of them acknowledge that the operation is going to happen does the operation take place (this is called a quorum-based approach).

VM Availability
While discussing Azure Virtual Machine (VM) resiliency with customers, they typically assume it is comparable to their on-prem VM architecture, and as such they expect on-prem features in Azure. That is not the case, so I wanted to put this together to provide more clarity on the VM construct in Azure and show how VM availability in Azure is typically more resilient than most on-prem configurations.
"Talking about Azure Virtual Machines, there are three major components (compute, storage, networking) which constitute an Azure VM. So when we talk about a virtual machine in Azure, we must take two dependencies into consideration: Windows Azure Compute (to run the VMs) and Windows Azure Storage (to persist the state of those VMs). What this means is that you don't have a single SLA; you actually have two SLAs, and they need to be aggregated, since a failure in either could render your service temporarily unavailable."
In this article, let's focus our discussion on the compute (VM) and storage components.

Azure Storage: You can check my other article, where I talk in great detail about how an Azure Storage stamp is a cluster of servers hosted in an Azure datacenter. These stamps follow a layered architecture with built-in redundancy to provide high availability. Multiple replicas (most of the time 3) of each file, referred to as an extent, are maintained on different servers partitioned between update domains and fault domains. Each write operation is performed synchronously (as long as we are talking about intra-stamp replication), and control is returned only after all 3 copies complete the write, making the write operation strongly consistent.

Virtual Machine:


Microsoft Azure provides a means to detect the health of virtual machines running on the platform and to auto-recover those virtual machines should they ever fail. This process of auto-recovery is referred to as "service healing", as it is a means of "healing" your service instances. Virtual machines and the hypervisor's physical hosts are monitored and managed by the Fabric Controller, which has the ability to detect failures.

It can perform this detection in two modes: reactive and proactive. If the FC detects a failure in reactive mode (missing heartbeats) or proactive mode (known situations leading to a failure) on a VM or a hypervisor host, it initiates recovery by redeploying the VM on a healthy host (the same host or another one), marks the failed resource as unhealthy, and removes it from rotation for further diagnosis. This process is also known as self-healing or auto-recovery.
The diagram in the original article shows the different layers of the system where faults can occur and the health checks Azure performs to detect them.

Note: the auto-recovery mechanism is enabled and available on virtual machines across all the different VM sizes and offerings, in all Azure regions and datacenters.

Author Credits: This article was written by Utkarsh Pandey, Azure Solution Architect at 8KMiles Software Services and originally published here

For more interesting information, follow us on LinkedIn by clicking here.

8K Miles Tweet Chat 2: Azure

If you missed our latest Twitter chat on Azure or wish to once again go through the chat, this is the right place! Here’s a recap on what happened during the 12th April Tweet chat, as a compilation of all the questions asked and answers as given by the tweet chat participants. The official tweet chat handle of 8K Miles being @8KMilesChat shared frequently asked questions (FAQs) related to Azure and here’s how they were answered.
















We received clear answers to every question asked and it was an informative chat on Azure. For more such tweet chats on cloud industry follow our Twitter handle @8KMiles.

The active participants during the tweet chat were cloud experts Utkarsh Pandey and Harish CP. Here’s a small brief on their expertise:

Utkarsh Pandey

Utkarsh is a Solutions Architect, certified on both AWS and Azure, who in his current role is responsible for cloud development services.


Harish CP

HarishCP is a Cloud Engineer. He works in the Cloud Infrastructure team, helping customers with infrastructure management and migration.

Top 10 Azure Glossary: Demystified

1. Affinity Group

An "affinity group", a.k.a. scale unit, helps co-locate related resources in close proximity to reduce network latency. For example, when you launch a multi-tiered web application with a front-end tier, a business-logic tier, and a database server, you don't want to place these resources in different parts of the datacenter; instead you want to group them together for better network performance. Azure highly recommends affinity groups for grouping related resources, but doesn't mandate them.

Azure datacenters consist of multiple affinity groups, and not all affinity groups contain all of Azure's services; for example, new high-power VM families, internal load balancers, and reserved IPs may not be available in all affinity groups.

2. Regional VNet

A regional VNet is the enhanced version of the VNet. Until 2014, a VNet was bound to an affinity group, which is just a subsection of an Azure datacenter. An affinity group has a limited set of resources and doesn't contain all the services offered within a region. As of this writing, Azure has 17 regions spread globally and is planning to power up many more datacenters. When you create a regional virtual network, it can span the entire region, so you can avail yourself of all the services available within the region rather than being limited to an affinity group.

3. Availability Set

Azure's main promise is high availability. To achieve HA for your applications, it is always recommended to run at least two instances of your solution, which qualifies you for the 99.95% Azure SLA.

Availability Set has two main concepts called Fault Domain & Upgrade Domain.

As the name suggests, a fault domain is an individual or group of containers/racks inside the Azure datacenter that share power and network switches. Two virtual machines placed in an availability set will be deployed in two different fault domains, so that problems in one fault domain will not affect the other.

Upgrade domains are a categorization of resources used to manage host operating system updates and patches. They ensure that both VMs do not get updated or patched at the same time.

4. Resource Group

A resource group helps you group all related services together for better resource management, tagging, and billing. Not to be confused with affinity groups, which keep related resources in close physical proximity.

For example, suppose you manage two different projects: 1. an internal SharePoint portal, and 2. a public-facing corporate website built on PHP. Each of these solutions has a different set of resources, so you may want to group them separately.

Key pointers about resource groups at the moment:

  1. The default and maximum number of resource groups you can create within a subscription is 500.
  2. A resource group should not be confused with an Active Directory group; functionally, they are two different services.
  3. Linking of shared resources between groups is not fully functional yet.
  4. A resource group can span regions.

5. Endpoint

By default, VMs launched within a virtual network can communicate with each other using their private addresses, but if you want to reach VMs placed in different networks, whether within Azure, on-premise, or in another cloud, you need public IPs, otherwise called endpoints. When you create VMs, ports for Remote Desktop, Windows PowerShell Remoting, and Secure Shell (SSH) are automatically opened, but you can also open other ports such as FTP, SMTP, DNS, HTTP, POP3, IMAP, LDAP, HTTPS, SMTPS, IMAPS, POP3S, MSSQL, and MySQL as required.

Each endpoint on a VM has two ports: a public port and a private port. The public port is used for incoming traffic from the internet, and the private port is for internal communication with other services within the virtual network.
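As a sketch of the public/private port mapping, the classic-model Add-AzureEndpoint cmdlet can open an endpoint on an existing VM; the service, VM, and endpoint names below are hypothetical:

```powershell
# Open an HTTP endpoint on a classic VM: public port 80 on the cloud
# service maps to private port 8080 inside the VM.
Get-AzureVM -ServiceName "utcloudsvc" -Name "web1" |
    Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 8080 |
    Update-AzureVM
```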

6. Public Virtual IP Address/Dynamic IP Address

When you first create a cloud service in Azure, you are assigned a virtual public IP address (VIP). This VIP will not be released until all the VMs placed inside the cloud service are successfully deleted or stopped (de-allocated).

Dynamic IP addresses (DIPs) are nothing but private IP addresses allocated by DHCP (Dynamic Host Configuration Protocol); note that they are bound to the VNet CIDR block defined by the user. Similar to the public VIP, DIPs are not released until all the VMs placed inside the cloud service are successfully deleted or stopped (de-allocated).

Reserved Virtual IP Address

Users can reserve IP addresses for their subscription. This gives them a predictable IP address that can be associated with their cloud services and virtual machines. By default, when you delete or stop (de-allocate) your instances the VIP is released to the Azure IP address pool, but a reserved IP remains in your subscription until you remove it.
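A sketch of reserving a VIP and using it at deployment time with the classic-era cmdlets (the reserved IP name, service name, and region are hypothetical, and $vmConfig is assumed to be an existing VM configuration):

```powershell
# Reserve a VIP in the subscription
New-AzureReservedIP -ReservedIPName "utReservedIP" -Location "East US"

# Associate the reserved IP when deploying a new VM / cloud service
New-AzureVM -ServiceName "utcloudsvc" -ReservedIPName "utReservedIP" `
    -Location "East US" -VMs $vmConfig
```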


7. Instance Level Public IP Address

An instance-level IP address is associated directly with a virtual machine instance rather than with the cloud service that contains the virtual machines. Currently you can allocate only one PIP per VM instance, and multi-NIC VMs are not yet supported.

Instance-level IP addresses can be used when you simply want to reach your VM by IP instead of opening cloud service endpoints individually for each port. Other benefits include receiving traffic on any port instead of selected ports, which is best suited for passive FTP, where the selection of ports is completely dynamic; similarly, outbound traffic from the VM can be routed via the PIP.

At the moment, requesting instance-level IPs, as well as allocating them, can only be done using Windows PowerShell and the REST APIs.
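A PowerShell sketch of assigning a PIP to an existing classic VM (the PIP, service, and VM names are hypothetical):

```powershell
# Assign an instance-level public IP (PIP) directly to a classic VM
Get-AzureVM -ServiceName "utcloudsvc" -Name "web1" |
    Set-AzurePublicIP -PublicIPName "utPIP" |
    Update-AzureVM
```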


8. X-PLAT CLI

It's a command-line interface for the Windows, Linux, and OS X platforms. You might be familiar with the Windows PowerShell CLI, the favorite scripting utility IT pros use to automate and execute remote commands, but it's meant only for Windows. The X-PLAT CLI, built using JavaScript/Node.js, is an alternate solution that brings the same power to non-Microsoft platforms. You can download the Windows installer here and the OS X installer here, and find Linux installation instructions here.

9. Cloud Service

Out of all of Microsoft Azure's naming conventions, "cloud service" is the single most confusing and ubiquitous term. It is a very broad term used by everyone, everywhere, basically for one reason: anything hosted off-premise is generally called a cloud service.

A cloud service in Azure is essentially a DNS name, e.g. http://<<contoso>> or http://<<contoso>>.azure, which can be mapped to a custom domain. Creating a cloud service is the first step in creating public interfaces such as a web app, mobile services, or an Azure VM.


10. App Services

Azure App Service is a new term recently coined by Microsoft, consolidating Websites (Web Roles/Worker Roles), WebJobs, Mobile Services, and API Services into a single package. As of this writing, it's available only in the preview portal. There was a lot of confusion in the developer community about when to choose Web Roles, Websites, Mobile Services, etc., because they closely resemble each other; in fact, you can create a mobile service using a Worker Role or a Web Role.

Now let’s look at what these individual services can do

Web App

This is Azure Websites, which helps developers quickly build websites using a variety of programming languages, then host and scale them seamlessly using Azure's PaaS offering.

Mobile App

The Azure Mobile App service is purpose-built for three key scenarios: 1. enterprise SSO with AD, 2. push messaging, and 3. social integration. It is completely platform- and technology-agnostic, meaning you can build mobile back ends for a variety of platforms such as iOS, Android, and Windows, with either .NET or JavaScript on the back end.

Logic App

It's a new breed of service targeted at developers and technical business users to orchestrate and create API workflows. APIs are found everywhere; almost all services expose APIs. Logic Apps helps you connect various APIs together in a secure and organized manner. Logic App provides out-of-the-box social media connectors for Twitter, Facebook, and Yammer; enterprise connectors for SAP, Marketo, and Salesforce; and Azure data service connectors for SharePoint, Mobile Services, Storage, etc. If you don't find a connector for your favorite service, you can build one yourself using the API App service.


API App

It's an API hosting service where you can build APIs using various programming languages, including C#, Java, Python, Node.js, and PHP, and host them with Azure App Service. An API App connects seamlessly with Azure Web Apps, Mobile Apps, and Logic Apps. The two major benefits of API Apps are 1. simplified security using AD/SSO and OAuth, and 2. quick API deployments with automated versioning support.

About the Author

Ilyas is a Cloud Solution Architect at 8K Miles specializing in the Microsoft Azure and AWS clouds. He is also passionate about Big Data, analytics, and machine learning technologies.

LinkedIn || Twitter