All right, let's trace the data flows that the current interface to MOTR would require for some of my other noxfiles.
The first thing it all needs is to gather the package information. Let's gloss over that.
- Given all of the package roots, build the packages, and write a constraints file and per-package requirements files.
- The noxfile I'm looking at also has constraints on third-party packages, so that needs to factor into the constraints somehow. I could just read in a separate file, or add handling for any number of constraint files.
- Running coverage erase doesn't need any package data.
- Running flake8 requires the command line to account for whether or not there's a src directory, so the package root data needs to record that.
- Running mypy needs the same information.
- Running pytest needs to optionally pass additional environment variables based on the package root.
- Running pytest under coverage requires a custom configuration file generated from the requirements file. (The package name has to be converted into an importable module name. Also, the coverage needs to be recorded in a unique data file; I'd say, slugify the package root.)
- One thing I need to keep in mind for actually processing the coverage output is not this data flow, but being able to optionally plug in limit-coverage. What I forgot at first about this data flow is that coverage combine is going to need its own coveragerc, deriving its paths directives from the package root, whether there is a src directory, and the module name.
- The next session is about building an executable with shiv. I think the only portable way to accomplish this is to extract the entry point data from the wheel, specifically the console_scripts section. This means that generating the specific command lines will have to happen at run time instead of graph time.
- Running the profiler doesn't need anything extra here...
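A few of the items above boil down to small pure helpers. Here's a minimal sketch; importable_name, slugify, and coveragerc_for_run are names I'm inventing for illustration, not anything MOTR provides:

```python
import re


def importable_name(package_name: str) -> str:
    """Convert a distribution name like 'my-package' to a module name."""
    return package_name.replace("-", "_")


def slugify(package_root: str) -> str:
    """Reduce a package root path to a filesystem-safe slug for coverage data files."""
    return re.sub(r"[^A-Za-z0-9]+", "-", package_root).strip("-")


def coveragerc_for_run(package_name: str, package_root: str) -> str:
    """Render a coveragerc for running pytest under coverage for one package,
    measuring the importable module and writing to a per-root data file."""
    return (
        "[run]\n"
        f"source = {importable_name(package_name)}\n"
        f"data_file = .coverage.{slugify(package_root)}\n"
    )
```

The per-root data_file is what keeps parallel coverage runs from clobbering each other before coverage combine picks them up.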
So, we have:
- the package roots
- whether they use src
- the paths to each requirements file
- the requirements file provides access to the package names
(The constraints file is also required to get the path to the wheel file for shiv.)
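Extracting the console_scripts data can be done with just the stdlib, since a wheel is a zip archive whose *.dist-info/entry_points.txt holds the entry points in INI form. A sketch under that assumption (the function name is mine):

```python
import configparser
import zipfile


def console_scripts(wheel_path: str) -> dict[str, str]:
    """Read the console_scripts entry points out of a wheel's entry_points.txt.

    Returns a mapping of script name -> "module:function" reference, or an
    empty dict if the wheel declares no console scripts.
    """
    with zipfile.ZipFile(wheel_path) as wheel:
        for name in wheel.namelist():
            if name.endswith(".dist-info/entry_points.txt"):
                parser = configparser.ConfigParser()
                parser.read_string(wheel.read(name).decode("utf-8"))
                if parser.has_section("console_scripts"):
                    return dict(parser["console_scripts"])
    return {}
```

Since this reads the built wheel, it naturally has to run at run time, after the build, rather than at graph time.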
For aesthetic purposes, I kind of want to divide the package roots into common and unique parts. So the data structure looks something like:
- common prefix
- unique suffix
- src bool
- generated path to requirements file, as an Input
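That structure could be sketched as a frozen dataclass. All the names here are hypothetical, and I substitute a plain Path where MOTR's Input type would go:

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class PackageRoot:
    """One package root, split into shared and distinct path segments."""

    common_prefix: Path  # segments shared by all package roots (may be empty)
    unique_suffix: Path  # segments that distinguish this root
    use_src: bool        # whether sources live under a src/ directory
    requirements: Path   # generated requirements file (an Input, in MOTR terms)

    @property
    def root(self) -> Path:
        return self.common_prefix / self.unique_suffix

    @property
    def source_dir(self) -> Path:
        """Where flake8 and mypy should point, honoring the src layout."""
        return self.root / "src" if self.use_src else self.root
```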
My inclination is to start with the common prefix empty, then process all of the instances together at once to make the common prefix as long as possible. This could also be accomplished by having two distinct data types.
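The process-all-at-once step might look like this: lift the longest run of leading path segments shared by every root, and return the remainders (the helper name is mine):

```python
from pathlib import Path


def split_common_prefix(roots: list[Path]) -> tuple[Path, list[Path]]:
    """Split roots into their longest shared leading segments plus remainders."""
    parts = [root.parts for root in roots]
    prefix_len = 0
    # zip stops at the shortest root, so we never overrun any parts tuple.
    for segments in zip(*parts):
        if len(set(segments)) != 1:
            break
        prefix_len += 1
    prefix = Path(*parts[0][:prefix_len]) if prefix_len else Path()
    suffixes = [Path(*p[prefix_len:]) for p in parts]
    return prefix, suffixes
```

Working on whole path segments (rather than characters) avoids splitting a prefix mid-directory-name the way a naive string commonprefix would.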
I'll have to work on this tomorrow. For now, I'm just glad to have worked through this analysis. From this, I can properly write the functions I need.
But at the moment, I just want to be ready for bed.