Our repo corresponds to a project. Within it are subfolders corresponding to individual apps we want to run in a pipeline.
Someone provided me with a starter YAML file to run my code in a pipeline. My code had been running just fine on my own Linux server, but in the pipeline it failed: it became apparent the code expected the current working directory to be the app's subfolder (directory, in Linux-speak) when resolving modules, config files, and the like. The documentation is thin, so I spent hours investigating and arrived at the solution I now present.
The critical thing is to set the workingDirectory. Here is that section of the yaml file.
```yaml
- script: python getlogdata.py 'PAN4-5min.aql'
  displayName: 'Run script'
  workingDirectory: $(System.DefaultWorkingDirectory)/PAN_Usage_4_Mgrs
  env:
    AUTH_TOKEN: $(auth_token)
    # PYTHONPATH: $(System.DefaultWorkingDirectory)/PAN_Usage_4_Mgrs/modules
```
Note that the PYTHONPATH environment variable is another possible way out if all you need is to find modules, but it won't help with other things like locating your config file, because imports are resolved via sys.path while relative file paths are resolved against the current working directory.
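To make that distinction concrete, here is a small defensive sketch (the file and directory names are made-up examples, not from my pipeline): instead of opening a config file with a bare relative path, anchor the lookup to an explicit base directory, such as the script's own folder, so the script survives being launched from anywhere.

```python
from pathlib import Path

def resolve_config(name: str, base_dir: str) -> Path:
    """Build a path to a config/data file under base_dir instead of
    relying on whatever the current working directory happens to be."""
    return Path(base_dir) / name

# In a real script you might pass Path(__file__).resolve().parent as
# base_dir; the repo layout below is a hypothetical example.
print(resolve_config("PAN4-5min.aql", "/repo/PAN_Usage_4_Mgrs"))
# prints "/repo/PAN_Usage_4_Mgrs/PAN4-5min.aql"
```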
Now suppose you see an error like the one I got:
ImportError: cannot import name 'ZabbixMetric' from 'jhpyzabbix' (unknown location).
I had tried putting the jhpyzabbix folder at the same level as my subfolder, i.e., right under the top project level. At first I was getting module-not-found errors, so I put my PYTHONPATH back like so
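The exact line isn't shown above, but presumably it looked something like this (the path is my assumption), pointing PYTHONPATH at the project root so that jhpyzabbix, sitting directly under it, becomes importable:

```yaml
env:
  AUTH_TOKEN: $(auth_token)
  # Assumed layout: jhpyzabbix sits directly under the repo root
  PYTHONPATH: $(System.DefaultWorkingDirectory)
```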
And that’s when I got that cannot import name error. What caused it is that although I had copied over the needed .py files to jhpyzabbix, I had skipped one whose purpose seemed irrelevant to me: __init__.py. Turns out that tiny Python file is quite important after all. School of hard knocks… It sets up the package's namespace, mapping top-level names to the submodules that define them. To be concrete, mine looks like this:
```python
from .api import ZabbixAPI, ZabbixAPIException, ssl_context_compat
from .sender import ZabbixMetric, ZabbixSender, ZabbixResponse
```
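To see why the missing __init__.py produces exactly that "cannot import name" error, here is a minimal reproduction (the package name demo_pkg is a stand-in for jhpyzabbix). Without __init__.py, Python 3 treats the folder as a namespace package, so the import machinery finds the package but the top-level name isn't there; a one-line __init__.py that re-exports from the submodule fixes it.

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway package named demo_pkg (stand-in for jhpyzabbix).
root = Path(tempfile.mkdtemp())
pkg = root / "demo_pkg"
pkg.mkdir()
(pkg / "sender.py").write_text("class ZabbixMetric:\n    pass\n")

# Without the next line, `from demo_pkg import ZabbixMetric` raises:
# ImportError: cannot import name 'ZabbixMetric' from 'demo_pkg'
(pkg / "__init__.py").write_text("from .sender import ZabbixMetric\n")

sys.path.insert(0, str(root))
from demo_pkg import ZabbixMetric  # works thanks to __init__.py

print(ZabbixMetric.__name__)  # prints "ZabbixMetric"
```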