Unable to load the service index for source in Azure DevOps

While creating a NuGet package through an Azure DevOps pipeline and publishing it to Artifacts, you might get the following or a similar error:

error NU1301: Unable to load the service index for source https://xxxx.dev.azure.com/MyApps/_packaging/MyPackage%40Local/nuget/v3/index.json

This usually happens when you publish the package to NuGet in a separate job or stage from the build step.

To resolve this, run a NuGet package restore before pushing the package:

  - task: DotNetCoreCLI@2
    displayName: NuGet package restore
    inputs:
      command: restore
      projects: '$(workingDirectory)/**/$(projectName)*.csproj'
      feedsToUse: 'config'
      nugetConfigPath: $(workingDirectory)/nuget.config
  - task: DotNetCoreCLI@2
    displayName: Create NuGet package
    inputs:
      command: pack
      versioningScheme: byBuildNumber
      arguments: '--configuration $(buildConfiguration)'
      packagesToPack: '$(workingDirectory)/**/$(projectName).csproj'
      packDestination: '$(Build.ArtifactStagingDirectory)'
  - task: NuGetAuthenticate@0
    displayName: 'NuGet Authenticate'
  - task: NuGetCommand@2
    displayName: 'NuGet push'
    inputs:
      command: push
      publishVstsFeed: 'MyFramework'
      allowPackageConflicts: true

The variables in the above template can be passed from a Pipeline that is triggered by a change to a branch or run manually.

Costs involved in deleting/purging a blob in the Archive access tier

A blob cannot be read directly from the Archive tier. To read an archived blob, you must first change its tier to Hot or Cool. For example, to retrieve and read a single 1,000-GB blob that has been in the Archive tier for 90 days, the following charges would apply:

Data retrieval (per GB) from the Archive tier: $0.022/GB x 1,000 GB = $22
Rehydrate operation (SetBlobTier Archive to Hot): $5.50/10k = $0.00055
Early deletion charge: (180 – 90 days)/30 days x $0.0018/GB-month x 1,000 GB = $5.40
Read blob operation from Hot: $0.0044/10k = $0.00000044
Total = $22 + $0.00055 + $5.40 + $0.00000044 ≈ $27.40
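As a sanity check, the line items above can be reproduced with a small calculation. The rates are the illustrative figures used in this example, not current Azure list prices:

```python
# Illustrative cost of rehydrating, reading and deleting a 1,000 GB blob
# that has been in the Archive tier for 90 days. All rates are example
# figures, not live Azure prices.
blob_gb = 1000
days_in_archive = 90
early_deletion_period = 180  # days

retrieval = 0.022 * blob_gb                           # $/GB retrieval from Archive
rehydrate_op = 5.50 / 10_000                          # one SetBlobTier operation
months_remaining = (early_deletion_period - days_in_archive) / 30
early_deletion = months_remaining * 0.0018 * blob_gb  # prorated $/GB-month
read_op = 0.0044 / 10_000                             # one read from the Hot tier

total = retrieval + rehydrate_op + early_deletion + read_op
print(f"Total: ${total:.2f}")  # → Total: $27.40
```

The early deletion term dominates everything except the per-GB retrieval charge; the per-operation costs are negligible at this scale.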

  • In addition to the per-GB, per-month charge, any blob that is moved to the Archive tier is subject to an Archive early deletion period of 180 days. Additionally, for general-purpose v2 storage accounts, any blob that is moved to the Cool tier is subject to a Cool tier early deletion period of 30 days. This charge is prorated. For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, the customer is charged an early deletion fee for 135 (180 minus 45) days of storage in the Archive tier.
  • When a blob is moved from one access tier to another, its last modification time doesn’t change. If you manually rehydrate an archived blob to the Hot tier, it may be moved back to the Archive tier by the lifecycle management engine. Temporarily disable the rule that affects this blob to prevent it from being archived again, and re-enable the rule once the blob can safely move back to the Archive tier. You may also copy the blob to another location if it needs to stay in the Hot or Cool tier permanently.
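The prorated early deletion charge described above can be sketched as a small helper. The $0.0018/GB-month Archive rate is an illustrative assumption, not a live Azure price:

```python
def early_deletion_fee(days_stored, size_gb, rate_per_gb_month=0.0018,
                       period_days=180):
    """Prorated Archive early deletion fee (illustrative rate)."""
    remaining_days = max(period_days - days_stored, 0)
    return remaining_days / 30 * rate_per_gb_month * size_gb

# A 1,000 GB blob moved to Archive and deleted after 45 days is charged
# for the remaining 135 (180 - 45) days of Archive storage.
fee = early_deletion_fee(days_stored=45, size_gb=1000)
print(f"${fee:.2f}")  # → $8.10
```

Once the blob has stayed in the Archive tier past the 180-day period, the fee drops to zero.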

Azure Archive Operations and pricing

Regular storage capacity charges continue to apply to the storage account even after the 180-day early deletion period has passed.

  • Storage capacity is billed in units of the average daily amount of data stored, in gigabytes (GB), over a monthly period. For example, if you consistently used 10 GB of storage for the first half of the month, and none for the second half of the month, you would be billed for your average usage of 5 GB of storage. However, using the Cool tier (GPv2 accounts only) or the Archive tier for less than 30 or 180 days respectively will incur an additional charge.
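The average-usage example above works out as follows (a sketch; the 30-day month is an assumption for illustration, and the $/GB-month rate is left out since it simply multiplies the average):

```python
days_in_month = 30
# 10 GB stored for the first half of the month, 0 GB for the second half.
daily_usage_gb = [10] * 15 + [0] * 15

average_gb = sum(daily_usage_gb) / days_in_month
print(average_gb)  # → 5.0, i.e. billed as if 5 GB were stored all month
```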

For more information, refer to this article: https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers#pricing-and-billing
For more information on pricing, see: https://azure.microsoft.com/en-us/pricing/details/storage/blobs/.

Prices may vary and are subject to change by Azure.

Modify a Block Blob with the Pessimistic Concurrency approach in Azure

In this example, we’ll work with a block blob and a sample class named Assignment. The pessimistic concurrency approach takes a lease on the blob through a lease client and allows an overwrite only while the lease is held and its ID is supplied; a write without the active lease ID fails with an HttpStatusCode.PreconditionFailed (412) error. For more details, check the following document.

The code below is a .NET 6 console app.

The Assignment class has the following properties:

public class Assignment
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Kind { get; set; }
    public double pehe { get; set; }
}

The console app’s Program.cs code fetches the blob content each time and manually adds another Assignment. In the fourth step, it fetches content from another blob, appends the deserialized objects to the list of Assignments built in the previous steps, and finally overwrites the first blob with all Assignments.

using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;
using Newtonsoft.Json;
using System.Net;
using System.Text;

await PessimisticConcurrencyBlob();

async Task PessimisticConcurrencyBlob()
{
    Console.WriteLine("Demonstrate pessimistic concurrency");
    string connectionString = "xxxx"; //ConfigurationManager.ConnectionStrings["storage"].Con;
    string filename = "testAssignment.json";
    string containerName = "mycontainer";
    BlobServiceClient _blobServiceClient = new BlobServiceClient(connectionString);
    BlobContainerClient containerClient = _blobServiceClient.GetBlobContainerClient(containerName);

    BlobClient blobClient = containerClient.GetBlobClient(filename);
    BlobLeaseClient blobLeaseClient = blobClient.GetBlobLeaseClient();

    string filename2 = "assignments.json";
    BlobClient blobClient2 = containerClient.GetBlobClient(filename2);

    try
    {
        // Create the container if it does not exist.
        await containerClient.CreateIfNotExistsAsync();

        // Step 1: fetch the current content, add an Assignment and upload the JSON.
        var blobAssList = await RetrieveBlobContentAsync(blobClient);
        Assignment assignment1 = new Assignment()
        {
            Id = 8,
            Code = "ABC",
            Kind = "Lead",
            pehe = 10.0
        };
        blobAssList.Add(assignment1);

        var blobContents1 = JsonConvert.SerializeObject(blobAssList);
        byte[] byteArray = Encoding.ASCII.GetBytes(blobContents1);
        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, overwrite: true);
        }

        // Acquire a lease on the blob.
        BlobLease blobLease = await blobLeaseClient.AcquireAsync(TimeSpan.FromSeconds(60));
        Console.WriteLine("Blob lease acquired. LeaseId = {0}", blobLease.LeaseId);

        // Set the request condition to include the lease ID.
        BlobUploadOptions blobUploadOptions = new BlobUploadOptions()
        {
            Conditions = new BlobRequestConditions()
            {
                LeaseId = blobLease.LeaseId
            }
        };

        // Step 2: write to the blob again, providing the lease ID on the request.
        // The lease ID was provided, so this call should succeed.
        blobAssList = await RetrieveBlobContentAsync(blobClient);
        Assignment assignment2 = new Assignment()
        {
            Id = 9,
            Code = "DEF",
            Kind = "Assignment",
            pehe = 20.0
        };
        blobAssList.Add(assignment2);
        var blobContents2 = JsonConvert.SerializeObject(blobAssList);
        byteArray = Encoding.ASCII.GetBytes(blobContents2);

        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, blobUploadOptions);
        }

        // An update by another client at this point, without the lease ID,
        // would fail with error code 412 (Precondition Failed) and land in
        // the catch block below.

        // Step 3: re-acquire the lease (the same BlobLeaseClient reuses its
        // lease ID, so this succeeds while the lease is active) and add
        // another Assignment.
        BlobLease blobLease2 = await blobLeaseClient.AcquireAsync(TimeSpan.FromSeconds(60));
        Console.WriteLine("Blob lease acquired. LeaseId = {0}", blobLease2.LeaseId);

        // Set the request condition to include the lease ID.
        BlobUploadOptions blobUploadOptions2 = new BlobUploadOptions()
        {
            Conditions = new BlobRequestConditions()
            {
                LeaseId = blobLease2.LeaseId
            }
        };

        blobAssList = await RetrieveBlobContentAsync(blobClient);
        Assignment assignment3 = new Assignment()
        {
            Id = 10,
            Code = "GHI",
            Kind = "Assignment",
            pehe = 30.0
        };
        blobAssList.Add(assignment3);
        var blobContents3 = JsonConvert.SerializeObject(blobAssList);
        byteArray = Encoding.ASCII.GetBytes(blobContents3);

        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            // The lease ID was provided, so this call should succeed.
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, blobUploadOptions2);
        }

        // Step 4: fetch content from another blob, append it to the list and
        // overwrite the first blob with all Assignments.
        BlobLease blobLease3 = await blobLeaseClient.AcquireAsync(TimeSpan.FromSeconds(60));
        Console.WriteLine("Blob lease acquired. LeaseId = {0}", blobLease3.LeaseId);

        // Set the request condition to include the lease ID.
        BlobUploadOptions blobUploadOptions3 = new BlobUploadOptions()
        {
            Conditions = new BlobRequestConditions()
            {
                LeaseId = blobLease3.LeaseId
            }
        };

        var blobAssList2 = await RetrieveBlobContentAsync(blobClient2);
        blobAssList.AddRange(blobAssList2);
        var blobContents4 = JsonConvert.SerializeObject(blobAssList);
        byteArray = Encoding.ASCII.GetBytes(blobContents4);

        using (MemoryStream stream = new MemoryStream(byteArray))
        {
            BlobContentInfo blobContentInfo = await blobClient.UploadAsync(stream, blobUploadOptions3);
        }
    }
    catch (RequestFailedException e)
    {
        if (e.Status == (int)HttpStatusCode.PreconditionFailed)
        {
            Console.WriteLine(
                @"Precondition failure as expected. The lease ID was not provided.");
        }
        else
        {
            throw;
        }
    }
    finally
    {
        await blobLeaseClient.ReleaseAsync();
    }
}

The code for fetching the Blob Content is as follows:

async Task<List<Assignment>> RetrieveBlobContentAsync(BlobClient blobClient)
{
    // Return an empty list if the blob has not been created yet.
    if (!await blobClient.ExistsAsync())
    {
        return new List<Assignment>();
    }

    var response = await blobClient.DownloadAsync();
    string content;
    using (var streamReader = new StreamReader(response.Value.Content))
    {
        content = await streamReader.ReadToEndAsync();
    }

    var assignments = JsonConvert.DeserializeObject<List<Assignment>>(content);

    return assignments ?? new List<Assignment>();
}

Append to a text file with AppendBlock in Azure Blob Storage

The example below takes input from the user in a .NET 6 console app and appends each input to a text file in Blob Storage using an append blob. The connectionString is a SAS-based connection string for access to Blob Storage and should be managed in the appSettings.json file.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;
using System.Text;

Console.WriteLine("please enter text to add to the blob: ");
string text = Console.ReadLine();

await AppendContentBlobAsync(text);


async Task AppendContentBlobAsync(string content)
{
    string connectionString = "xxxx";
    string filename = "test.txt";
    string containerName = "mycontainer";
    BlobServiceClient _blobServiceClient = new BlobServiceClient(connectionString);
    BlobContainerClient container = _blobServiceClient.GetBlobContainerClient(containerName);
    await container.CreateIfNotExistsAsync();
    AppendBlobClient appendBlobClient = container.GetAppendBlobClient(filename);

    // Create the append blob if it does not already exist.
    if (!await appendBlobClient.ExistsAsync())
    {
        await appendBlobClient.CreateAsync();
    }

    using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(content)))
    {
        await appendBlobClient.AppendBlockAsync(ms);
    }
}

Change ApplicationInsights Azure resource configuration in existing Web App

Application Insights is a service on Microsoft Azure that lets you understand what users are actually doing in your app.
It also lets you diagnose issues with its powerful analytics tools, and it works with platforms including .NET, Java and Node.js.

The App Insights Instrumentation Key is what links your app with the resource on Azure.
If you already have an existing App Insights resource created through Visual Studio and need to change it, you can create another resource manually from the Azure portal.

Once the App Insights resource is created, copy the Instrumentation key and replace it in your ApplicationInsights.config file. This lets you switch the ApplicationInsights resource for your Application.

Look for the InstrumentationKey tag in your ApplicationInsights.config file and replace its value. You might also need to change the Instrumentation Key in the HomePage JavaScript under the Views folder added by the App Insights SDK.
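For reference, the tag looks roughly like this in ApplicationInsights.config (the key below is a placeholder for your own resource's key, and the rest of the file's elements are omitted):

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <!-- Replace the value with the Instrumentation Key copied from the new resource -->
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
  <!-- TelemetryInitializers, TelemetryModules, etc. remain unchanged -->
</ApplicationInsights>
```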

Start debugging your app and verify in the Live Metrics Stream of the App Insights resource that it is working.

Accessing an Ubuntu VM created on Azure via VNC server on a Mac

I’ve set up the Ubuntu Server 18.04 LTS image from the Azure Marketplace, and I’ll access it via a VNC server installed on the Linux machine. You’ll also need a VNC client like RealVNC, or you can use the Screen Sharing client built into your Mac.

Login via SSH:

First, you need to log in to your Linux VM as the non-root user that you created while setting up the VM. To spin up a new Linux VM, you can check out this post. You can use Cloud Shell to connect to your VM over SSH with the non-root username and password. Use the Connect menu of your VM and copy the SSH command to run in Cloud Shell.

ssh your_user_name@IP_Address

Just replace the your_user_name and IP_Address parts in the above command, then enter the password you’re prompted for to complete the SSH login.

Install the required packages:

We now need to install the required packages, such as the Xfce desktop environment and a VNC server, which are not bundled with Ubuntu by default. Xfce is a free and open-source desktop environment for Unix and Unix-like operating systems.

Update list of packages:
$ sudo apt update
Install Xfce Desktop environment and wait for the installation to complete:
$ sudo apt install xfce4 xfce4-goodies
Install the VNC Server:

$ sudo apt install tightvncserver

Complete the initial configuration and provide the setup password:
$ vncserver

Providing a view-only password is optional. You’ll get the below Output as the initial configuration completes:

Creating default startup script /home/your_user_name/.vnc/xstartup
Starting applications specified in /home/your_user_name/.vnc/xstartup
Log file is /home/your_user_name/.vnc/your_hostname:1.log

Configure VNC Server:

By default, the VNC server listens on port 5901, which corresponds to display :1. VNC can launch further instances on displays :2, :3 and so on (ports 5902, 5903, …).

Let’s first kill the current instance for further configuration that we require:

$ vncserver -kill :1


Killing Xtightvnc process ID <ID>

Backup the xstartup file before modifying:

$ mv ~/.vnc/xstartup ~/.vnc/xstartup.bak

Create a new xstartup file and open in editor:

$ nano ~/.vnc/xstartup

Add the following lines to your file in the nano editor and save it:

#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &

The xrdb line applies settings from your .Xresources file, such as colours, themes and fonts, to the graphical desktop, and the startxfce4 line starts the Xfce desktop. Now, let’s make the file executable and restart the server:

$ sudo chmod +x ~/.vnc/xstartup
$ vncserver

Now, let’s connect to the VNC Server from your Mac by creating a SSH tunnel and use Screen-sharing client to connect.

Run this command on your Mac terminal:

$ ssh -L 5901:127.0.0.1:5901 -C -N -l your_user_name your_server_ip

Do replace your_user_name with your non-root sudo username and your_server_ip with the IP address of your Linux VM. Provide the password for your username when prompted.

Now, open the Screen Sharing app on your Mac from the Finder Go menu’s “Connect to Server…” option and enter vnc://localhost:5901 as the server address.

Click Connect, provide your VNC password when prompted, and you’ll see the Xfce desktop running via Screen Sharing.

Spin up Linux VM on Microsoft Azure

Microsoft Azure provides multiple ways to create virtual machines for Windows or Linux, along with their many marketplace variations. They are:

  1. Azure portal UI at https://portal.azure.com
  2. Azure CLI
  3. Azure PowerShell commands

I’ll be using the Azure portal UI to create a VM from a marketplace image for Ubuntu Linux. Some of the UI features may change in future, but the overall flow will remain much the same.

Open Marketplace for VM Images on Azure portal

Marketplace image

Fill up the VM Image details

Create Image1

  1. Create or select Resource Group.
  2. Give a suitable name.
  3. Select region based on your geographic availability.
  4. For personal use, redundancy is not required. You can change the Availability options to use an Availability Zone or an Availability Set.
  5. Select the Marketplace image for the available Ubuntu version.
  6. Select a machine size based on your vCPU, memory and IOPS requirements. Of course, check the cost factor.

Setup Authentication using Password or SSH public key

You can simply use Username and Password for authentication otherwise use SSH public/private key pair.

To generate an SSH public key, use PuTTYgen on Windows or ssh-keygen on Linux and macOS. You can download a suitable PuTTY client for Windows here.

  1. Generate RSA 2048-bit key and follow the instructions by the tool.
  2. Save the Private key file as .ppk
  3. Save the Public key file as .pub
  4. Export the Private key file as .openssh format using the Conversions menu if this key will be used by an external SSH client such as on Linux.


For the Admin account, provide a suitable username and the SSH public key generated above, starting with “ssh-rsa”, as shown below. Make sure the key is copied as-is without any modifications.


Add Disks information

It’s better to use Premium SSD for optimal performance. If you already have additional disks, you can attach them to the VM at this step, or you can do this later.

Create Image2


This step creates a virtual network, a subnet and a public IP for the VM. These new resources are added to the same resource group. You can also select existing resources if you have them.

Create Image3

Allow Inbound Ports

You can select the required ports, e.g. SSH for connecting using the SSH public/private key pair, or RDP to connect using a username and password.

inbound ports


Keep these at their defaults if you prefer; you can enable or disable any option based on your requirements. I turned off Boot diagnostics because it requires creating a Storage account, which I don’t need for a test VM.

Create Image4

Guest Config

You can provide additional post-deployment configuration using extensions such as Chef and Puppet, or cloud-init for Linux.

Create Image5


You can add various tags to categorize resources for consolidated billing and automation management.

Create Image6

Review your provided details in the next step and click on Create. Wait for the deployment to succeed.

Accessing the VM

To access the VM, check that you have inbound port rules set up for access over the public IP address with RDP or SSH. On a Windows machine, use the PuTTY client to SSH into the VM on port 22. From a Unix-like system, including macOS, use the following command:

ssh <username>@<computer name or IP address>

For details on how to connect to your Ubuntu Linux VM from your Mac machine, check out this post.