
Azure DevOps: use the api to copy logs to linux

Intro

As far as I can tell there’s no way to search through multiple pipeline logs with a single command. In Linux it’s trivial. Seeing the lack of this basic functionality, I decided to copy all my pipeline logs over to a Linux server using the Azure DevOps (ADO) API.
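
Once the logs sit in a directory on a Linux server, searching all of them really is a single command. A hypothetical example, using the log directory from my config file below (the search string is made up):

grep -il 'connection timed out' /var/tmp/rawlogs/*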

The details

This is the main program which I’ve called get_raw_logs.py.

#!/usr/bin/python3
# fetch raw pipeline logs to the local machine
# for the relevant api section, see:
# https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/get-build-log?view=azure-devops-rest-7.1
import urllib.request, json, sys
from datetime import datetime, timedelta
from modules import aux_modules

conf_file = sys.argv[1]

# the pipeline uses UTC so we must follow suit or we will miss files
a_day_ago = (datetime.utcnow() - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
print('a day ago (UTC)', a_day_ago)

# dump config file into a dict
config_d = aux_modules.parse_config(conf_file)

url = config_d['url_base'] + config_d['organization'] + '/' + config_d['project'] + '/_apis/build/builds?minTime=' + a_day_ago + config_d['url_params']
req = urllib.request.Request(url)
req.add_header('Authorization', 'Basic ' + config_d['auth'])

# get buildIds for all pipeline runs from the last 24 hours
with urllib.request.urlopen(req) as response:
    html = response.read()
txt_d = json.loads(html)
# the response looks like:
# {"count":215,"value":[{"id":xxx,"buildNumber":"20221011.106","definition":{"name":"PAN-Usage4Mgrs-2"...
value_l = txt_d['value']
for builds in value_l:
    buildId = builds['id']
    build_number = builds['buildNumber']
    build_def = builds['definition']
    name = build_def['name']
    # only fetch logs for the two pipelines named in the config file
    if name == config_d['pipeline1'] or name == config_d['pipeline2']:
        aux_modules.get_this_log(config_d, name, buildId, build_number)

In the modules directory this is aux_modules.py.

import json
import os
import urllib.request

def parse_config(conf_file):
    # config file should be a json file
    with open(conf_file) as f:
        config_d = json.load(f)
    return config_d

def get_this_log(config_d, name, buildId, build_number):
    # leaving out the api-version etc works better
    # GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/logs/{logId}?api-version=7.1-preview.2
    buildId_s = str(buildId)
    log_name = config_d['log_dir'] + "/" + name + "-" + build_number
    # check if we already got this one
    if os.path.exists(log_name):
        return
    url = config_d['url_base'] + config_d['organization'] + '/' + config_d['project'] + '/_apis/build/builds/' + buildId_s + '/logs/' + config_d['logId']
    print('url for this log', url)
    req = urllib.request.Request(url)
    req.add_header('Authorization', 'Basic ' + config_d['auth'])
    with urllib.request.urlopen(req) as response:
        html = response.read()
    print('Getting (name,build_number,buildId,logId)', name, build_number, buildId_s, config_d['logId'])
    # save the raw log under a name built from pipeline name and build number
    with open(log_name, 'wb') as f:
        f.write(html)

Unlike the programs I usually write, this one keeps some of the key logic in the config file. My config file looks something like this.

{
"organization":"drj4ServicesCoreSystems",
"project":"Connectivity",
"pipeline1":"PAN-Usage4Mgrs",
"pipeline2":"PAN-Usage4Mgrs-2",
"logId":"7",
"auth":"Yaskaslkasjklaskldslkjsasddenxisv=",
"url_base":"https://dev.azure.com/",
"url_params":"&api-version=7.1-preview.7",
"log_dir":"/var/tmp/rawlogs"
}

It runs very efficiently, so I run it every three minutes.
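
If you drive it from cron, a crontab entry like this would do it; the script and config file paths here are hypothetical:

*/3 * * * * /usr/local/bin/get_raw_logs.py /etc/ado/get_raw_logs.json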

In my pipelines, all the interesting stuff is in logId 7, so I’ve hardcoded that; it could have turned out differently for you. Notice that I am getting the logs from two pipelines. That’s due to the limitation, discussed previously, that you can only run 1000 pipeline jobs a week: I was forced to run two identical pipelines, staggered, every 12 minutes, with pipeline-2 sleeping the first six minutes.

The auth is the base64-encoded text of any:<my_auth_token>, that is, the literal username any, a colon, and then your auth token.
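
If you’re wondering how to produce that string, here is a minimal python sketch; the token value is a placeholder, not a real one:

#!/usr/bin/python3
# turn an ADO auth token into the basic auth string for the config file
import base64
token = 'my_auth_token'  # placeholder: substitute your real token
auth = base64.b64encode(('any:' + token).encode('ascii')).decode('ascii')
print(auth)  # paste the output into the auth field of the config file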

Conclusion

I showed how to copy the logs from Azure DevOps pipeline runs over to a local Linux system, where you can run the normal cool Linux commands on them.

References and related

Running an ADO pipeline more than 1000 times a week.

ADO Rest api reference section relevant for this post: https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/get-build-log?view=azure-devops-rest-7.1

How to secure a sensitive variable in ADO.


Azure DevOps: pipeline tips

Intro

I’ve made a lot of mistakes and learned something from every one. I am trying to pass on some of what I learned.

Limit of 1000 pipeline runs per week

This seems pretty crazy. In my unit we were going all gung-ho, imagining Azure DevOps pipelines as an elegant replacement for running cron jobs on our own Linux servers. But something as simple as “I need to run this job every five minutes” seems to be overwhelming for a pipeline. What? Yes, there is a hard limit of 1000 pipeline jobs per week. This limit is discussed here.

I wanted a job to run every six minutes, which would still hit that limit. So what I am trying instead is to create two pipelines. Each is scheduled to run every 12 minutes. The yaml files are almost the same, except that in one of them I sleep for six minutes, as sketched below. I also needed to remember to re-create the pipeline variables I was using.
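
For concreteness, here is a sketch of the relevant yaml, with a hypothetical branch name. Both pipelines get the same schedule; only pipeline-2 gets the sleep step:

schedules:
- cron: "*/12 * * * *"
  displayName: Every 12 minutes
  branches:
    include:
    - main
  always: true   # run on schedule even when nothing was committed

steps:
- script: sleep 360
  displayName: 'Sleep six minutes (pipeline-2 only)'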

Getting the raw logs from your pipeline runs

I wrote a little script to get the raw logs and copy them to a Linux filesystem, where I can use the Linux command-line tools I know and love to examine them in bulk. The main point I wish to share right now is the not-at-all-obvious fact that you need to use the get builds section of the API, not the log section! Who would have guessed? https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/get-build-log?view=azure-devops-rest-7.1

Errors I am seeing in my pipeline

[error] We stopped hearing from agent dsc-adosonar-drjohns4servicescoresystems-agent-549c476959-whd72. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error. For more information, see: https://go.microsoft.com/fwlink/?linkid=846610

We still need to figure this one out. The error appears only randomly.

I also saw a lot of more subtle errors, which amounted to my variables not being defined correctly in the yaml. Indentation is important! I used variables to set up secret environment variables, among other things. The resulting behavior does not always make the root cause obvious; a correctly indented sketch follows.
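
Here is the shape of a variables section that works, with made-up names. Note how everything lines up:

variables:
- name: my_var
  value: 'not a secret'

steps:
- script: echo $(my_var)
  displayName: 'Show the variable'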

Don’t run the pipeline for every commit

In your commit comment, put

[skip ci]

somewhere on its own line. This prevents the pipeline from running each time you do a commit, which quickly gets annoying.
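
From the git command line that can look like this; each -m becomes its own paragraph in the commit message, which puts [skip ci] on its own line:

git commit -m 'small comment fix' -m '[skip ci]'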

References and related

How to use the Azure DevOps API to, for instance, fetch the raw logs

Microsoft’s api documentation pertinent to this topic:

https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/get-build-log?view=azure-devops-rest-7.1


Azure DevOps: How to work in a subfolder of a project

Intro

Our repo corresponds to a project. Within it are subfolders corresponding to individual apps we want to run in a pipeline.

Someone provided me with a starter yaml file to run my code in a pipeline. Originally my code was running just fine on my own Linux server. In the pipeline, not so much: it became apparent that the pipeline expected the current working directory to be the subfolder (directory in Linux-speak) for references to modules, config files, etc. The documentation is kind of thin. So I spent hours checking things out and creating the solution which I now present.

The solution

The critical thing is to set the workingDirectory. Here is that section of the yaml file.

- script: python getlogdata.py 'PAN4-5min.aql'
  displayName: 'Run script'
  workingDirectory: $(System.DefaultWorkingDirectory)/PAN_Usage_4_Mgrs
  env:
    AUTH_TOKEN: $(auth_token)
#   PYTHONPATH: $(System.DefaultWorkingDirectory)/PAN_Usage_4_Mgrs/modules

Note that the PYTHONPATH environment variable (commented out above) is another possible way out, if all you need is to include modules. But it won’t help you with other things, like finding your config file.
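
A minimal python sketch of the difference, using names from my own setup. Imports consult sys.path, which PYTHONPATH feeds into, while a plain open() resolves against the current working directory:

from modules import aux_modules   # found via PYTHONPATH
f = open('PAN4-5min.aql')         # resolved against the cwd, so this still
                                  # fails unless workingDirectory is set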

Errors

Now suppose you see an error like I got:

ImportError: cannot import name 'ZabbixMetric' from 'jhpyzabbix' (unknown location).

I had tried to put the jhpyzabbix folder at the same level as my subfolder, so, right under the top project level. At first I was getting module not found errors, so I put back my PYTHONPATH like so:

    PYTHONPATH: $(System.DefaultWorkingDirectory)/PAN_Usage_4_Mgrs:$(System.DefaultWorkingDirectory)

And that’s when I got that cannot import name error. What caused it was that although I had copied over the needed .py files to jhpyzabbix, I forgot one whose purpose had seemed irrelevant to me: __init__.py. Turns out that tiny Python file is quite important after all. School of hard knocks… It sets up the namespace mapping, I guess. To be concrete, mine looks like this:

from .api import ZabbixAPI, ZabbixAPIException, ssl_context_compat
from .sender import ZabbixMetric, ZabbixSender, ZabbixResponse
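
With that in place, the package-level import my program relies on resolves as expected:

from jhpyzabbix import ZabbixMetric, ZabbixSender
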
References and related

Passing secure variable in Azure DevOps to your program


Securing a sensitive variable in Azure DevOps

Intro

I’m a newbie at Azure DevOps. They sort of have to drag me kicking and screaming to give up the comfort of my personal Linux to endure the torture of learning something brand new. But I guess that’s the new way, so I gotta go along with it. I work within a project that has a repo. And we intend to run a pipeline. Question is, how to deal with sensitive parameters like passwords?

One approach

If you just do raw Internet searches you can fall into any number of rabbit holes. Key vault, anyone? How about relying on secure files within the library? I guess, very tentatively, that using a key vault might be the right way to deal with this issue, but it all depends on the ACLs available and I do not know that. I also do not see a simple-minded way to set up a key vault. So what the guy who seems to know a lot more than I do suggests is to set up a hidden variable for the pipeline itself.

The thing is that even that has its own gotchas. I find that, depending on where you start from, you may or may not see the option to declare a pipeline variable as hidden.

If I am in the Pipelines section looking at Recently Run Pipelines, and click on my pipeline, I can add variables before I run it. But doing it that way, you only get the option to include a Name and Value. There is no option for declaring it to be hidden.

So how do you do it?

Instead of Run Pipeline > Variables, go to Edit Pipeline > Variables. Then the variables you create will have the option to be kept secret.

It is a bad idea to pass sensitive information on the command line: anyone with access to that agent could do a process listing to read the secret. So instead you use an environment variable. It’s a little tricky, as you have to understand yaml variable interpolation versus script interpolation.

My secret variable is auth_token for the Zabbix api token. So in my yaml file I have a reference to it which sets up an environment variable:

- script: python Delete_Unused_VEdges/activity_check.py 'SA'
  displayName: 'Run script'
  env:
    AUTH_TOKEN: $(auth_token)

And in activity_check.py I read that environment variable like this:

token_zabbix = os.environ['AUTH_TOKEN']

And, voila, it’s all good.

But how secure is it really?

Question is, is this just performative security? Because I’m working within a project, and anyone else with access to the project could alter my Python program temporarily to print out the value of the secret variable. I’m assuming they can both alter the repo I use and run a pipeline. Yes, they make it a tiny bit challenging, because you need to break up a secret variable into two pieces in order to print it out, but come on. So maybe we’ve mostly achieved security by obscurity.
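
To make that concrete: ADO masks a secret in the job log only when it appears verbatim, so a hypothetical two-line edit to my program would defeat the masking:

token_zabbix = os.environ['AUTH_TOKEN']
print(token_zabbix[:4], token_zabbix[4:])  # two pieces, so the log mask never matches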

But again, I don’t really know the richness of this environment and perhaps more can be done to restrict who can alter the repo and run it in a pipeline.

Conclusion

People who know more than I, a newbie, have suggested a certain approach to dealing with sensitive variables in our Azure DevOps pipelines. I have described here what I, as a newbie, see as potential pitfalls, as well as how to implement their chosen method.