Continuous delivery for Azure Workbooks using Azure DevOps

Scenario Link to heading

Azure Workbooks are great: an easy-to-use graphical designer to put together interactive queries and reports, no code required, and available directly in the portal, so there is no need to host a new application.

I have been using Azure Workbooks for the last couple of months to show summaries and details on failures for several business applications, some running as Logic Apps Standard, some as Azure Container Apps.

In this blog post I share the way I automate the deployment of these Workbooks, following patterns similar to the ones used in continuous deployment for regular applications. A workbook is not really an application in the sense of needing to pull dependencies, build and deploy, but following these patterns still satisfies the need for inspection, approvals and reuse between environments.

Creating log entries to populate the workbook Link to heading

This section only addresses the need for data to show in the workbook; real applications are probably already creating all these logs. The code below is a simple console application. Starting from Microsoft's sample code, I added console output to be able to check the outcome live while running. The program creates log entries simulating a process that runs every 5 seconds and performs 5 tasks each time, with some tasks taking longer and the occasional failure.

using System.Diagnostics;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

using var channel = new InMemoryChannel();

try
{
    IServiceCollection services = new ServiceCollection();
    services.Configure<TelemetryConfiguration>(config => config.TelemetryChannel = channel);
    services.AddLogging(builder =>
    {
        // Only Application Insights is registered as a logger provider
        builder.AddApplicationInsights(
            configureTelemetryConfiguration: (config) => config.ConnectionString = "---the connection string---",
            configureApplicationInsightsLoggerOptions: (options) => { }
        );
        builder.AddJsonConsole(options =>
        {
            options.IncludeScopes = true;
            options.TimestampFormat = "HH:mm:ss ";
        });
    });

    var serviceProvider = services.BuildServiceProvider();
    var logger = serviceProvider.GetRequiredService<ILogger<Program>>();
    logger.LogInformation("Logger is working...");

    var cancellationTokenSource = new CancellationTokenSource();
    await MainLoop(cancellationTokenSource.Token,
        logger, 5000).ConfigureAwait(false);
}
finally
{
    // Explicitly call Flush() followed by Delay, as required in console apps.
    // This ensures that even if the application terminates, telemetry is sent to the back end.
    channel.Flush();

    await Task.Delay(TimeSpan.FromMilliseconds(1000));
}

return;

static async Task MainLoop(
    CancellationToken cancellationToken, 
    ILogger<Program> logger,
    int frequencyInMilliSeconds)
{
    var totalRuns=0;
    while (!cancellationToken.IsCancellationRequested && totalRuns < 200)
    {
        // Start each run as a fire-and-forget child task so runs can overlap;
        // it is deliberately not awaited here.
        _ = Task.Run(async () =>
        {
            var transactionId = DateTime.UtcNow.ToString("yyyyMMddHHmmssfff");
            using (logger.BeginScope("{transactionId}", transactionId))
            {
                var totalTimeTaken = new Stopwatch();
                totalTimeTaken.Start();
                logger.LogInformation(
                    "Pest finder {eventTypeName}",
                    "started");
                try
                {
                    var stopWatch = new Stopwatch();
                    for (var i = 1; i < 6; i++)
                    {
                        stopWatch.Restart();
                        await Task.Delay(RandomNumber(1000, 20000), cancellationToken);
                        if (RandomNumber(0, 100) > 90) throw new Exception("Pest finder overrun!");
                        logger.LogInformation(
                            "Pest finder task {taskNumber} took {timeTaken}ms",
                            i,
                            stopWatch.ElapsedMilliseconds);
                    }
                    logger.LogInformation(
                        "Pest finder {eventTypeName} and took {totalTimeTaken}ms",
                        "completed", totalTimeTaken.ElapsedMilliseconds);
                }
                catch (Exception ex)
                {
                    logger.LogError(ex,
                        "Pest finder {eventTypeName} with error {errorMessage} and took {totalTimeTaken}ms",
                        "completed", ex.Message, totalTimeTaken.ElapsedMilliseconds);
                }
            }
        });
        await Task.Delay(frequencyInMilliSeconds, cancellationToken).ConfigureAwait(false);
        totalRuns++;
    }
}

static int RandomNumber(int min, int max)
{
    // Use the shared Random instance instead of creating a new one per call
    return Random.Shared.Next(min, max);
}
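After letting the app run for a few minutes, a quick sanity check in the Application Insights Logs blade confirms the entries are arriving; a simple query along these lines is enough:

traces
| where message startswith "Pest finder"
| order by timestamp desc
| take 20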

Creating the workbook Link to heading

Now that we have some logs in place, it's time to create the workbook. In the Azure Portal, find the Application Insights instance that is receiving these logs, then navigate to Workbooks.

Navigate to workbooks link

The workbooks screen offers two templates; select the default template.

Template options

Workbooks are composed of blocks, added vertically one after the other. The default template adds two blocks: a text block and a query block.

Default template

I updated the query in the query block to match the logs I created:

union *
| where customDimensions.eventTypeName == 'completed'
| summarize count() by bin(timestamp, 1m), itemType
| render barchart

First query

And this already shows the value of workbooks: the ability to produce very nice reports and summaries from logs, without code and without deploying an additional application. This first query shows a summary.

I used the +Add option to add a new query block.

add query

and in this next query I'm getting the details of all the process runs that failed:

exceptions | project timestamp, TransactionId = customDimensions.transactionId, Error = customDimensions.errorMessage

To add some interactivity, I configured the "export parameter" feature so that when a row in the results is selected, the selected value is made available as a parameter for the next query.

Adding parameters

Next I added the last query, to show all the log entries for that process run:

union traces,exceptions | where customDimensions.transactionId == {txid} 
| order by timestamp asc 
| project timestamp, message = iif(itemType=='trace', message, customDimensions.FormattedMessage), timetaken = customDimensions.timeTaken

The final result looks like this:

Final result

Workbooks also allow "link" columns, which can directly open Azure Portal blades or invoke actions, as shown here for Logic Apps Standard, where the workbook includes links to the run details and to the "resubmit" action for the workflow.

Creating the pipeline Link to heading

The workbook is now in place and working. This, however, was done manually, directly in the Azure Portal, something that I don't want to repeat for my TEST, STAGING, PERF or PRODUCTION environments. Just like with code, ideally this needs to be version managed and go through deployment pipelines that parameterize by environment when needed, and control the approvals process to promote from lower to higher environments.

The workbooks user interface in the portal already provides some help for this by producing the ARM template needed to deploy the workbook via the Azure CLI. To obtain the ARM template, in the workbook editor view select the Advanced Editor.

Advanced editor

Then use the ARM Template option from the options offered by the Azure Portal.

Export options

This option works; however, the content of the workbook ends up in one single property, serializedData, which makes it harder to inspect during code reviews and pull requests. The option I use is the first one, the "Gallery Template" option, which provides the full content of the workbook in an easy to read and inspect JSON format.

To use this option I save the content as a JSON file, add it to source control, and load it during deployment using the loadTextContent Bicep function.
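For reference, the gallery template content is a plain JSON document that lists each block of the workbook; an abbreviated sketch of its shape looks roughly like the snippet below (the exact properties come from the export itself, so treat your own exported file as the source of truth):

{
  "version": "Notebook/1.0",
  "items": [
    {
      "type": 1,
      "content": { "json": "## Pest control runs" },
      "name": "text - 0"
    },
    {
      "type": 3,
      "content": {
        "version": "KqlItem/1.0",
        "query": "union * | where customDimensions.eventTypeName == 'completed' | summarize count() by bin(timestamp, 1m), itemType | render barchart",
        "queryType": 0,
        "resourceType": "microsoft.insights/components"
      },
      "name": "query - 1"
    }
  ]
}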

Assuming separate subscriptions per environment, my Bicep template looks like the one below, and a critical piece is the name of the workbook resource. This name needs to be unique, and stay the same between pipeline runs, so that the existing workbook is updated instead of a new one being added. The guid function accepts as many parameters as needed, so depending on the project I might need to add more parameters to make it unique.

@description('The datacenter to use for the deployment.')
param location string = resourceGroup().location
param environmentName string
param environmentShortNameUpperCase string = toUpper(environmentName)
param workbookSourceId string = '/subscriptions/${subscription().subscriptionId}/resourceGroups/${resourceGroup().name}/providers/microsoft.insights/components/appinsights-${environmentName}'
resource existing_application_insights 'Microsoft.Insights/components@2020-02-02' existing = {
  name: 'appinsights-${environmentName}'
  scope: resourceGroup()
}

resource ProcessRunsSummaryWorkbook 'Microsoft.Insights/workbooks@2023-06-01' = {
  name: guid(subscription().id, resourceGroup().id, existing_application_insights.id)
  location: location
  tags: {
    costCenter: 'Demos'
    project: 'Demos'
  }
  kind: 'shared'
  properties: {
    category: 'workbook'
    displayName: 'Pest control runs - ${environmentShortNameUpperCase}'
    serializedData: loadTextContent('PestControlWorkbook.json')
    sourceId: workbookSourceId
    version: '1.0'
  }
  dependsOn: [
    existing_application_insights
  ]
}
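Before wiring the template into a pipeline, it can be exercised locally with the Azure CLI; the resource group and parameter values below are just placeholders:

# Validate first, then deploy to a sandbox resource group
az deployment group validate \
  --resource-group rg-pestcontrol-dev \
  --template-file template-workbooks.bicep \
  --parameters environmentName=dev

az deployment group create \
  --resource-group rg-pestcontrol-dev \
  --template-file template-workbooks.bicep \
  --parameters environmentName=dev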

To deploy this Bicep template I use the Azure DevOps ARM template deployment task:

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Workbook'
  inputs:
    azureResourceManagerConnection: ${{ parameters.serviceConnection }}
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: $(resourceGroupName)
    location: $(resourceGroupLocation)
    csmFile: '$(Pipeline.Workspace)/$(artifactName)/template-workbooks.bicep'
    overrideParameters: >-
      -environmentName $(environmentShortName)
    deploymentMode: 'Incremental'
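The csmFile path assumes the Bicep template and the workbook JSON were published as a pipeline artifact by an earlier build stage, with something along these lines (the folder layout and artifact name are placeholders, and the artifact name should match $(artifactName)):

# Publish the IaC folder (Bicep template plus PestControlWorkbook.json) as a pipeline artifact
- publish: '$(Build.SourcesDirectory)/infrastructure'
  artifact: 'workbooks-iac'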

This task is called from a multistage pipeline that takes care of each of the environments, where a typical stage looks like this:

- stage: STAGING
  displayName: 'STAGING Deployment'
  variables:
  - template: pipeline-variables.yml
    parameters:
      environmentShortName: 'stg'
      subscriptionId: '---my guid---'
  jobs:
  - template: templates/iac-template.yml
    parameters:
      azDevOpsEnvironment: 'Pest Control Staging'
      serviceConnection: 'azure-staging-service-connection'
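The pipeline-variables.yml template is not shown here; a minimal sketch of its shape, assuming the variable names used by the deployment task above, would be:

# pipeline-variables.yml - hypothetical sketch of the variable template
parameters:
  - name: environmentShortName
    type: string
  - name: subscriptionId
    type: string

variables:
  environmentShortName: ${{ parameters.environmentShortName }}
  subscriptionId: ${{ parameters.subscriptionId }}
  resourceGroupName: 'rg-pestcontrol-${{ parameters.environmentShortName }}'
  resourceGroupLocation: 'australiaeast'
  artifactName: 'workbooks-iac'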

A complete example of multistage pipelines for infrastructure as code (IaC) and continuous integration and continuous delivery (CI/CD) can be found in the Microsoft guidance for DevOps with Azure Logic Apps Standard.