Modify a Block Blob with the Pessimistic Concurrency approach in Azure

In this example, we’ll work with a Block Blob and an example class named Assignment. The pessimistic concurrency approach acquires a Lease on the blob via a lease client and allows an overwrite only while the caller holds an unexpired Lease; otherwise the request fails with an HttpStatusCode.PreconditionFailed error. For more details, check the following document.

The code below is a .Net 6 Console App.

The Assignment Class has the following Properties:

public class Assignment
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Kind { get; set; }
    public double pehe { get; set; }
}

The Console App’s Program.cs code fetches the blob content at each step and manually adds another Assignment. In the fourth step, it fetches content from another blob, appends the deserialized objects to the list of Assignments built in the previous steps, and finally overwrites the first blob with all Assignments.

using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;
using Newtonsoft.Json;
using System.Net;
using System.Text;

await PessimisticConcurrencyBlob();


async Task PessimisticConcurrencyBlob()
{
    Console.WriteLine("Demonstrate pessimistic concurrency");
    string connectionString = "xxxx"; //ConfigurationManager.ConnectionStrings["storage"].Con;
    string filename = "testAssignment.json";
    string containerName = "mycontainer";
    BlobServiceClient _blobServiceClient = new BlobServiceClient(connectionString);
    BlobContainerClient containerClient = _blobServiceClient.GetBlobContainerClient(containerName);

    BlobClient blobClient = containerClient.GetBlobClient(filename);
    BlobLeaseClient blobLeaseClient = blobClient.GetBlobLeaseClient();

    string filename2 = "assignments.json";
    BlobClient blobClient2 = containerClient.GetBlobClient(filename2);

    try
    {
        // Create the container if it does not exist.
        await containerClient.CreateIfNotExistsAsync();

        // Fetch the current content, add an Assignment and upload the json to the blob.
        var blobAssList = await RetrieveBlobContentAsync(blobClient);
        Assignment assignment1 = new Assignment()
        {
            Id = 8,
            Code = "ABC",
            Kind = "Lead",
            pehe = 10.0
        };
        blobAssList.Add(assignment1);

        var blobContents1 = JsonConvert.SerializeObject(blobAssList);
        byte[] byteArray = Encoding.ASCII.GetBytes(blobContents1);
        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, overwrite: true);
        }

        // Acquire a lease on the blob.
        BlobLease blobLease = await blobLeaseClient.AcquireAsync(TimeSpan.FromSeconds(60));
        Console.WriteLine("Blob lease acquired. LeaseId = {0}", blobLease.LeaseId);

        // Set the request condition to include the lease ID.
        BlobUploadOptions blobUploadOptions = new BlobUploadOptions()
        {
            Conditions = new BlobRequestConditions()
            {
                LeaseId = blobLease.LeaseId
            }
        };

        // Write to the blob again, providing the lease ID on the request.
        // The lease ID was provided, so this call should succeed.
        blobAssList = await RetrieveBlobContentAsync(blobClient);
        Assignment assignment2 = new Assignment()
        {
            Id = 9,
            Code = "DEF",
            Kind = "Assignment",
            pehe = 20.0
        };
        blobAssList.Add(assignment2);

        var blobContents2 = JsonConvert.SerializeObject(blobAssList);
        byteArray = Encoding.ASCII.GetBytes(blobContents2);
        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, blobUploadOptions);
        }

        // This code simulates an update by another client.
        // The lease ID is not provided, so this call fails.
        blobAssList = await RetrieveBlobContentAsync(blobClient);
        Assignment assignment3 = new Assignment()
        {
            Id = 10,
            Code = "GHI",
            Kind = "Assignment",
            pehe = 30.0
        };
        blobAssList.Add(assignment3);

        var blobContents3 = JsonConvert.SerializeObject(blobAssList);
        byteArray = Encoding.ASCII.GetBytes(blobContents3);
        try
        {
            using (MemoryStream stream = new MemoryStream(byteArray))
            {
                // This call should fail with error code 412 (Precondition Failed)
                // because the blob is leased and no lease ID is provided.
                BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, overwrite: true);
            }
        }
        catch (RequestFailedException e) when (e.Status == (int)HttpStatusCode.PreconditionFailed)
        {
            Console.WriteLine(@"Precondition failure as expected. The lease ID was not provided.");
        }

        // Fetch content from another blob, append it to the list of Assignments
        // and overwrite the first blob, providing the lease ID.
        var blobAssList2 = await RetrieveBlobContentAsync(blobClient2);
        blobAssList.AddRange(blobAssList2);

        var blobContents4 = JsonConvert.SerializeObject(blobAssList);
        byteArray = Encoding.ASCII.GetBytes(blobContents4);
        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, blobUploadOptions);
        }
    }
    catch (RequestFailedException e)
    {
        Console.WriteLine("Request failed. Status: {0}, Message: {1}", e.Status, e.Message);
    }
    finally
    {
        await blobLeaseClient.ReleaseAsync();
    }
}

The code for fetching the Blob Content is as follows:

async Task<List<Assignment>> RetrieveBlobContentAsync(BlobClient blobClient)
{
    // Return an empty list if the blob does not exist yet.
    if (!await blobClient.ExistsAsync())
        return new List<Assignment>();

    var response = await blobClient.DownloadAsync();
    string content;
    using (var streamReader = new StreamReader(response.Value.Content))
    {
        content = await streamReader.ReadToEndAsync();
    }

    var assignments = JsonConvert.DeserializeObject<List<Assignment>>(content);

    return assignments ?? new List<Assignment>();
}
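When another client genuinely needs to write while a lease is held, it can wait for the lease to expire and retry on 412 instead of failing outright. The sketch below is an illustrative pattern, not code from this post; the retry budget and delay are assumptions, and it reuses the `BlobClient` and `RequestFailedException` types shown above.

```csharp
// Sketch: a second client retrying an upload while another client holds a lease.
// The retry count and delay are illustrative placeholders.
async Task UploadWithRetryAsync(BlobClient blobClient, byte[] payload)
{
    for (int attempt = 1; attempt <= 3; attempt++)
    {
        try
        {
            using var stream = new MemoryStream(payload);
            await blobClient.UploadAsync(stream, overwrite: true);
            return; // Upload succeeded; the lease had expired or was released.
        }
        catch (RequestFailedException e) when (e.Status == (int)HttpStatusCode.PreconditionFailed)
        {
            // The blob is leased by someone else; wait and try again.
            Console.WriteLine($"Attempt {attempt}: blob is leased, retrying...");
            await Task.Delay(TimeSpan.FromSeconds(20));
        }
    }
    // As a last resort an operator could break the lease:
    // await blobClient.GetBlobLeaseClient().BreakAsync();
}
```

Breaking a lease defeats the point of pessimistic concurrency, so it should be reserved for recovering from a crashed lease holder.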

Append to a text file with AppendBlock in Azure Blob Storage

The example below takes input from the user in a .Net 6 Console App and appends each input to a text file in Blob Storage using an Append Blob. The connectionString grants access to the Blob Storage account (for example via SAS) and should be managed in the appSettings.json file.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;
using System.Text;

Console.WriteLine("please enter text to add to the blob: ");
string text = Console.ReadLine();

await AppendContentBlobAsync(text);


async Task AppendContentBlobAsync(string content)
{
    string connectionString = "xxxx";
    string filename = "test.txt";
    string containerName = "mycontainer";
    BlobServiceClient _blobServiceClient = new BlobServiceClient(connectionString);
    BlobContainerClient container = _blobServiceClient.GetBlobContainerClient(containerName);
    await container.CreateIfNotExistsAsync();

    AppendBlobClient appendBlobClient = container.GetAppendBlobClient(filename);
    if (!await appendBlobClient.ExistsAsync())
    {
        await appendBlobClient.CreateAsync();
    }

    using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(content)))
    {
        await appendBlobClient.AppendBlockAsync(ms);
    }
}
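A single AppendBlockAsync call is capped in size (the SDK exposes the limit, roughly 4 MB), so larger content needs to be split into chunks. A hedged sketch, assuming an `appendBlobClient` created as in the example above and a hypothetical `largeContent` string:

```csharp
// Sketch: append large content in chunks, since one append block has a size cap.
// "largeContent" is a placeholder; appendBlobClient is assumed from the example above.
int maxBlock = appendBlobClient.AppendBlobMaxAppendBlockBytes; // SDK-exposed limit
byte[] data = Encoding.UTF8.GetBytes(largeContent);
for (int offset = 0; offset < data.Length; offset += maxBlock)
{
    int count = Math.Min(maxBlock, data.Length - offset);
    using var ms = new MemoryStream(data, offset, count);
    await appendBlobClient.AppendBlockAsync(ms);
}
```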

Microservices with .Net Core API and Azure Service Bus

Basic Microservice Architecture with Azure Service Bus and API Gateway

The above Architecture has the following components:

Front-end: ReactJS Application with a simple form hosted on App Service Plan which will hit the API Gateway with GET/POST requests.

Name Microservice: First Microservice hosted on App Service Plan that will take User Details from the front-end and save PersonId and Name in its own Azure SQL database.

Azure Service Bus: The Name Microservice will forward the PersonId created and Address details from front-end to the Azure Service Bus Topic.

Address Microservice: Second Microservice hosted on App Service Plan that subscribes to the Topic, takes PersonId and Address data from the Name service and stores the details in its own Azure SQL database.

This is not an ideal microservice architecture, but it shows how the individual components interact with each other.
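The hand-off from the Name service to the Topic can be sketched with the Azure.Messaging.ServiceBus SDK. The topic name and payload shape below are illustrative assumptions, not the repository’s actual code:

```csharp
using Azure.Messaging.ServiceBus;
using System.Text.Json;

// Sketch: publish the new PersonId plus address details to a Service Bus topic.
// "person-topic" and the payload shape are placeholders.
await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusSender sender = client.CreateSender("person-topic");

var payload = new { PersonId = 42, Address = "1 Example Street" };
var message = new ServiceBusMessage(JsonSerializer.Serialize(payload));

await sender.SendMessageAsync(message);
```

The Address Microservice then receives this message from its topic subscription and writes the details to its own database.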

*In addition to this, another Azure Function subscribes to the Queue for new User creation and sends an email with the details. This is, however, not shown in the above image.
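The mail-sending function could look roughly like this (in-process Azure Functions model; the queue name, connection setting name and the actual mail call are placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NotifyOnUserCreated
{
    // Sketch: fires when a message lands on a hypothetical "user-created" queue.
    [FunctionName("NotifyOnUserCreated")]
    public static void Run(
        [ServiceBusTrigger("user-created", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("New user created: {message}", message);
        // Send the email here, e.g. via SendGrid or SMTP (not shown).
    }
}
```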

The Front-end Application is a form created in ReactJS that takes in following fields from the User:

FirstName, MiddleName, LastName, Email, Phone Number and Address

The following front-end and back-end repositories have all the code for connecting to the required components.

Further, a separate Authentication Microservice can be created which returns a JWT token based on the credentials provided. The token can then be validated by the API Gateway configured in the API Management service, which then allows the requests through to the other Microservices.

The front-end code does not have a Login page in this sample; it directly passes the credentials to the Auth Service to generate the token. The token is then passed through on the subsequent calls to the API Management gateway.

Select Validate JWT policy for Inbound Processing at the Operation Level for APIs under API Management.

Change the issuer signing key while adding the Inbound Policy under API Management as per the key used while generating the token from your Authentication Microservice.

        <base />
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
            <issuer-signing-keys>
                <key>{base64-encoded signing key}</key>
            </issuer-signing-keys>
        </validate-jwt>

Also, make sure the CORS policy is updated under the respective API’s Inbound Processing, at the required operations or at the All Operations level:

<cors allow-credentials="false">
    <allowed-origins>
        <origin>*</origin>
    </allowed-origins>
    <allowed-methods>
        <method>*</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
</cors>

Update the connection strings as per your Database and Service Bus Services from the Code Repository provided above.

Another example of Microservices Architecture for a real world scenario is as below:

Microservices Architecture for Online Shopping Application

Automate ARM templates deployment with Azure DevOps pipelines

In continuation of my previous post, I’m deploying the Hub and Spoke model using the ARM templates from an Azure Repo with an Azure Pipeline. For this example, I’m only showing the deployment of two ARM templates, for Log Analytics and the Hub VNet. The rest can be done in the same way.

Create a Service Connection in your Azure Project as shown below:

Below is the YAML file that I’m using to create a multi-stage pipeline, where each stage runs a job:

name: $(BuildDefinitionName)_$(date:yyyyMMdd)$(rev:.r)
trigger: none
pr: none
stages:
  - stage: arm_loganalytics_deploy
    jobs:
      - job: arm_loganalytics_deploy
        steps:
          - checkout: self
          - task: AzureResourceManagerTemplateDeployment@3
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: 'AzureVSE'
              subscriptionId: '1bcd68af-e392-4b66-9558-697bd7e8dc91'
              action: 'Create Or Update Resource Group'
              resourceGroupName: 'azhubspoke-rg'
              location: 'Japan East'
              templateLocation: 'Linked artifact'
              csmFile: '$(System.DefaultWorkingDirectory)/loganalytics-workbook/loganalytics.json'
              deploymentMode: 'Incremental'
  - stage: arm_hubvnet_deploy
    jobs:
      - job: arm_hubvnet_deploy
        steps:
          - checkout: self
          - task: AzureResourceManagerTemplateDeployment@3
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: 'AzureVSE'
              subscriptionId: '1bcd68af-e392-4b66-9558-697bd7e8dc91'
              action: 'Create Or Update Resource Group'
              resourceGroupName: 'azhubspoke-rg'
              location: 'Japan East'
              templateLocation: 'Linked artifact'
              csmFile: '$(System.DefaultWorkingDirectory)/hub-vnet/hub-vnet.json'
              deploymentMode: 'Incremental'

You can also supply a parameters file for the ARM template by adding the following property below csmFile in the YAML file, e.g.:

csmParametersFile: '$(System.DefaultWorkingDirectory)/hub-vnet/hub-vnet.parameters.json'

Run the pipeline and verify the results.

Create Azure Hub and Spoke model using ARM templates

The Hub and Spoke model is a popular architecture for teams who are migrating their workloads to the cloud incrementally while still keeping some workloads on-prem. The following are the main components that make up the Hub and Spoke model:

  1. Hub Virtual Network that holds your common components like VPN or Express Route Gateway, Azure Firewall, Azure Bastion Host etc. These components can be common to different environments like Dev, Staging, Prod etc. for better cost management.
  2. Spoke Virtual Networks, which hold isolated workloads. These can hold VMs or other PaaS services like App Service that connect to the on-prem network via the Hub network’s gateway transit. There can be any number of Spokes.
Example Hub and Spoke Architecture

The benefits of hub and spoke configuration include cost savings, overcoming subscription limits and workload isolation.

Another example from the MS docs I found useful is as shown below:

I’m going to create the architecture shown above using ARM templates. You can find the templates for the different components here. I’ve broken down the combined template into multiple templates that can be deployed as Custom templates in the Azure Portal, into an existing Resource Group, in the following order:

  1. Log Analytics workbook
  2. Hub (includes vpn gateway, firewall, bastion)
  3. Spoke1
  4. Spoke2
  5. Vnet Peerings
  6. Azure Sentinel
  7. Azure KeyVault

The MS docs URL shared above contains more details about these components. Breaking the templates down into separate components gives you more control in creating an automated flow using, say, Azure Pipelines. You can also add or remove components later as per your requirements.
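As an illustration of how small a per-component template can be, a VNet peering resource looks roughly like this. The VNet names here are placeholders; check the linked repository for the actual templates:

```json
{
  "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
  "apiVersion": "2021-05-01",
  "name": "hub-vnet/hub-to-spoke1",
  "properties": {
    "remoteVirtualNetwork": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks', 'spoke1-vnet')]"
    },
    "allowVirtualNetworkAccess": true,
    "allowForwardedTraffic": true,
    "allowGatewayTransit": true,
    "useRemoteGateways": false
  }
}
```

A matching peering is declared on the spoke side, typically with useRemoteGateways set to true so the spoke uses the hub’s gateway transit.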

Another option is to use Azure Blueprints to create a minimal architecture and build from there.

A similar architecture can be created using Terraform as per the MS docs here. In my experience, though, running Terraform in Cloud Shell can be error-prone, and it sometimes becomes unresponsive.

Change ApplicationInsights Azure resource configuration in existing Web App

Application Insights is a Service on Microsoft Azure that lets you understand what users are actually doing on your App.
It also lets you diagnose any issues with its powerful analytics tools and works with platforms including .Net, Java and Node.js.

The App Insights Instrumentation key is what is required to link your App with the resource on Azure.
If you already have an existing App Insights resource created through Visual Studio and you need to change it, then you can create another resource manually from the Azure Portal.

Once the App Insights resource is created, copy the Instrumentation key and replace it in your ApplicationInsights.config file. This lets you switch the ApplicationInsights resource for your Application.

Look for the InstrumentationKey tag in your ApplicationInsights.config file and replace the value. You might also need to change the InstrumentationKey in the HomePage JavaScript under the Views folder added by the App Insights SDK.
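In ApplicationInsights.config the key lives in a tag like the following; the GUID below is a placeholder for your own resource’s key:

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
  <!-- ... rest of the configuration ... -->
</ApplicationInsights>
```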

Start debugging your App and verify with your Live Metrics Stream in the App Insights resource that it is working.

Accessing an Ubuntu VM created on Azure via VNC server on Mac

I’ve set up an Ubuntu Server 18.04 LTS VM from the Azure Marketplace and I’m going to access it via a VNC server set up on the Linux machine. You’ll also need a VNC client like RealVNC, or you can use the Screen Sharing client available on your Mac.

Login via SSH:

First you need to log in to your Linux VM as the non-root user you created while setting up the VM. To spin up a new Linux VM, you can check out this post. You can use Cloud Shell to connect to your machine over SSH using the non-root username and password. Use the Connect menu of your VM and copy the SSH command to run in Cloud Shell.

ssh your_user_name@IP_Address

You just need to replace the your_user_name and IP_Address parts in the above command. Enter the password you’re prompted for to complete the SSH login.

Install the required packages:

We now need to install the required packages, like the Xfce desktop environment and a VNC server, which are not bundled with Ubuntu by default. Xfce is a free and open-source desktop environment for Unix and Unix-like operating systems.

Update list of packages:
$ sudo apt update
Install Xfce Desktop environment and wait for the installation to complete:
$ sudo apt install xfce4 xfce4-goodies
Install the VNC Server:

$ sudo apt install tightvncserver

Complete the initial configuration and provide the setup password:
$ vncserver

Providing a view-only password is optional. You’ll get output like the below as the initial configuration completes:

Creating default startup script /home/your_user_name/.vnc/xstartup
Starting applications specified in /home/your_user_name/.vnc/xstartup
Log file is /home/your_user_name/.vnc/your_hostname:1.log

Configure VNC Server:

The VNC server is by default configured on port 5901 and display :1. VNC can launch multiple instances on other displays like :2, :3 and so on.

Let’s first kill the current instance for further configuration that we require:

$ vncserver -kill :1


Killing Xtightvnc process ID <ID>

Backup the xstartup file before modifying:

$ mv ~/.vnc/xstartup ~/.vnc/xstartup.bak

Create a new xstartup file and open in editor:

$ nano ~/.vnc/xstartup

Add the following lines to your file in the nano editor and save it:

#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &

The xrdb line applies settings from the .Xresources file to the graphical desktop, such as colours, themes and fonts. The last line starts the Xfce desktop. Now, let’s make the file executable and restart the server:

$ sudo chmod +x ~/.vnc/xstartup
$ vncserver

Now, let’s connect to the VNC Server from your Mac by creating a SSH tunnel and use Screen-sharing client to connect.

Run this command on your Mac terminal:

$ ssh -L 5901:localhost:5901 -C -N -l your_user_name your_server_ip

Do replace the your_user_name with your sudo non-root username and your_server_ip with the IP Address of your Linux VM. Provide the password when prompted for your username.

Now, open the Screen Sharing app on your Mac via the Finder Go menu item that says “Connect to Server…”.

Click on Connect and provide your password when prompted again and you’ll see the Xfce Desktop running via Screen-sharing.

Spin up Linux VM on Microsoft Azure

Microsoft Azure provides multiple ways to create Virtual Machines for Windows or Linux, along with their many marketplace variations. They are:

  1. Azure portal UI
  2. Azure CLI
  3. Azure PowerShell commands

I’ll be using the Azure portal UI to create a VM from a marketplace image for Ubuntu Linux. Some of the UI features may change in future, but the overall flow will remain much the same.

Open Marketplace for VM Images on Azure portal

Marketplace image

Fill up the VM Image details

Create Image1

  1. Create or select Resource Group.
  2. Give a suitable name.
  3. Select region based on your geographic availability.
  4. For personal use redundancy is not required. You can change Availability options based on Availability Zone or Availability set.
  5. Select the Marketplace image for the available Ubuntu version.
  6. Select a machine size based on vcpus, memory and IOPS requirement. Of course, check the cost factor.

Setup Authentication using Password or SSH public key

You can simply use Username and Password for authentication, or else use an SSH public/private key pair.

For generating an SSH public key, use PuTTYgen on Windows or ssh-keygen on Linux and macOS. You can download a suitable PuTTY client for Windows here.

  1. Generate RSA 2048-bit key and follow the instructions by the tool.
  2. Save the Private key file as .ppk
  3. Save the Public key file as .pub
  4. Export the Private key file as .openssh format using the Conversions menu if this key will be used by an external SSH client such as on Linux.


For the Admin account, put a suitable Username and SSH public key generated above starting with “ssh-rsa” as shown below. Make sure the key is copied as is without any modifications.


Add Disks information

It’s better to use Premium SSD for optimal performance. If you have additional disks already created, you can attach them to the VM at this step, or you can also do this later.

Create Image2


This step creates a Virtual Network, a subnet and a Public IP for the VM. All of these new resources are added to the same Resource Group. You can also select existing resources if you have them.

Create Image3

Allow Inbound Ports

You can select the required ports e.g. SSH for connecting using SSH public/private key-pair or RDP to connect using Username and Password.

inbound ports


Keep these at their defaults if you prefer; you can enable or disable any option based on your requirements. I turned off Boot diagnostics, as it requires creating a Storage account, which I don’t need for a test VM.

Create Image4

Guest Config

You can provide additional post-deployment configuration using extensions like Chef and Puppet, or cloud-init for Linux.

Create Image5


You can add various tags to categorize resources for consolidated billing and automation management.

Create Image6

Review your provided details in the next step and click on Create. Wait for the deployment to succeed.

Accessing the VM

For accessing the VM, check that you have inbound port rules set up to allow access via the Public IP address with RDP or SSH. On a Windows machine, use the PuTTY client to SSH into the VM on port 22. From a Unix-like system, including macOS, use the following command:

ssh <username>@<computer name or IP address>

For details on how to connect to your Ubuntu Linux VM from your Mac machine, check out this post.