Okay, I made a bit more progress on MOTR, partially by doing weird hacks to the code. I've now got a basic test working for the mypy workflow. At this point, I could either try to plug some coverage holes by applying this approach to the pytest workflow code I've written, or I could step back and try to figure out which bits of the code I've written should be their own functions, etc.
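To make "a basic test for the mypy workflow" concrete: the kind of test I mean checks that the workflow code assembles a sensible mypy invocation from its inputs. This is just a sketch with invented names (`mypy_command` isn't real MOTR code), but it's the shape of thing I'm testing:

```python
from pathlib import Path


def mypy_command(src_dir: Path, strict: bool = False) -> list[str]:
    """Build the argv for a mypy run (hypothetical stand-in for the workflow code)."""
    argv = ["mypy", str(src_dir)]
    if strict:
        argv.append("--strict")
    return argv


def test_mypy_command_strict() -> None:
    # The test pins down the generated command rather than running mypy itself.
    assert mypy_command(Path("src"), strict=True) == ["mypy", "src", "--strict"]
```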
For the latter course of action, I've got five-ish modules to review, to see if I can make modules like them more pleasant to write. Making those modules more pleasant matters because... okay, so, imagining a hypothetical future where MOTR takes off, I've categorized the developers who will do things with MOTR into four groups:
- Core developers, aka just me for now.
- Extension developers, aka just me for now.
- Developers configuring MOTR to run against a repository, aka just me for now.
- Developers running tests against a repository that uses MOTR, aka just me for now.
Right now, MOTR is good enough for the fourth category, and I'm getting there for the third, though it could certainly be better.
These five-ish modules are important to the first two categories because the third category needs high-level representations of automatable tasks to be able to write a concise and clear motrfile. (Sadly, judging by the tests I've written, I can't expect any improvements in clarity just yet.) Right now, there are two ways these modules can be improved:
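For a sense of what "a concise and clear motrfile" might look like: I don't have the final interface yet, so everything below (`Target`, `Registry`, `register`) is an invented stand-in, not real MOTR API. The point is that the third group should only have to name high-level tasks, with the modules from the first two groups doing the plumbing:

```python
from dataclasses import dataclass, field


@dataclass
class Target:
    """A named, automatable task (invented stand-in for whatever MOTR registers)."""
    name: str
    argv: list[str]


@dataclass
class Registry:
    """Collects targets the way a motrfile might declare them."""
    targets: dict[str, Target] = field(default_factory=dict)

    def register(self, target: Target) -> None:
        self.targets[target.name] = target


# What a concise motrfile could read like, given high-level helpers:
registry = Registry()
registry.register(Target("mypy", ["mypy", "src"]))
registry.register(Target("pytest", ["pytest", "tests"]))
```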
- Factor out common code to cut down on boilerplate that the second group has to write.
- Write helper functions to cut down on the freeform nature of the current interface. Some of the test code is "obvious" conversions from input types, and there's no "static" analysis of whether the generated commands make sense. I think both shortcomings can be addressed together, but I should plan the details carefully.
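The second bullet could look something like the sketch below: a helper that both absorbs the "obvious" conversions and does a cheap sanity check on the generated command before anything runs. The allow-list and the function name are assumptions for illustration, not a design decision:

```python
KNOWN_TOOLS = {"mypy", "pytest", "flake8"}  # invented allow-list, just for the sketch


def checked_command(tool: str, *args: str) -> list[str]:
    """Build an argv, rejecting obviously nonsensical commands up front."""
    if tool not in KNOWN_TOOLS:
        raise ValueError(f"unknown tool: {tool!r}")
    for arg in args:
        # Catch arguments that could never survive a real command line.
        if any(ch in arg for ch in "\n\0"):
            raise ValueError(f"suspicious argument: {arg!r}")
    return [tool, *args]
```

A motrfile author would then write `checked_command("mypy", "src")` instead of hand-assembling a list, and typos like an unknown tool name fail at definition time rather than mid-run.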
For now, I'm going to take things easy, and pick back up with planning the boilerplate reduction first.