I ended up not having the patience to write out all of the end-to-end cases that the requirements changes in the Gilded Rose kata... require, so I just wrote tests for the basic case, and assumed it should compose as expected with all of the other cases. I'm going to say this is "learning to finish", and get on with my life.
Anyway, I've been investigating the upcoming changes to pip and how they might, at least temporarily, break my workflow. The more I've thought about it, the more I think it would make sense to implement my desired workflow as a policy against a local PyPI mirror. That sidesteps the question of "how should pip interpret URL constraints" by taking it more or less out of pip's hands. Looks like devpi is the tool I'd like to try for that.
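As a sketch of the end state (assuming devpi-server's defaults: port 3141 and the built-in `root/pypi` mirror index that proxies and caches pypi.org), after a one-time `devpi-init` and a running `devpi-server`, pip would only need its index URL pointed at the mirror:

```
# pip.conf (~/.config/pip/pip.conf on Linux) — assumes devpi-server is
# running locally on its default port 3141, serving the root/pypi
# mirror index.
[global]
index-url = http://localhost:3141/root/pypi/+simple/
```

From there, the "policy" part becomes something I configure on the devpi side rather than something pip has to interpret.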
The other thing that line of thinking made me realize is that I don't have a plan for getting the libraries I'm writing into a usable state for me, like having an executable. For that, I'd either need to maintain a cache and a venv, and make sure the venv's pip always points at the cache, or... I could try out something like shiv, and make the executables explicit artifacts that get built. In a similar vein, I'm kind of curious what happens if I try shoving some of these into something like Brython. I'm... not sure if Brython is compatible with trio's guest mode, but if I make sure the business logic doesn't refer to specific async libraries, and move that concern out to whatever executes them, I should be okay. If I go this route, I'll want to break up my modules even more, so I end up with, like, risus and risus-cli, and I can hopefully be confident that I'm not mixing event loops in an unprincipled fashion.
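The "don't refer to specific async libraries" split might look something like this — `roll_dice` and the module layout are hypothetical stand-ins for illustration, not real risus APIs:

```python
import asyncio
import random

# Hypothetical "risus" business logic: plain async/await only — it never
# imports trio, asyncio, or any other event-loop library.
async def roll_dice(count: int, roll=None) -> list[int]:
    """Roll `count` d10s; `roll` is injected so callers control randomness."""
    roll = roll or (lambda: random.randint(1, 10))
    return [roll() for _ in range(count)]

# Hypothetical "risus-cli" entry point: this layer, and only this layer,
# picks the event loop. asyncio here, but trio.run (or Brython's loop,
# or trio's guest mode) could slot in without touching the logic above.
def main() -> list[int]:
    return asyncio.run(roll_dice(3, roll=lambda: 10))

print(main())  # → [10, 10, 10]
```

If risus-cli were its own package with a console script, shiv could then freeze it into a single-file artifact, e.g. `shiv -c risus-cli -o risus-cli.pyz risus-cli` (`-c` names the console script, `-o` the output archive).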
So, there's a plan for the future:
- stand up a local devpi mirror and point pip at it
- build executables as explicit artifacts, probably with shiv
- keep the business logic free of references to specific async libraries
- split modules further, e.g. risus and risus-cli
- maybe Brython
I'm trying to get a jump on understanding how best to configure devpi; it looks like I might want to try using devpi-constrained to have an intermediate index that filters out all of the packages I'm installing myself. Maybe. I'll figure this out later. For now, I'll wind down.
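For future reference, my current, unverified reading of devpi-constrained is that it adds a "constrained" index type, so the intermediate index might be created along these lines — the `type=constrained` option and the exact filtering syntax are assumptions to check against the plugin's docs:

```
# Assumption about devpi-constrained's interface — verify against the
# plugin README before relying on any of this.
devpi index -c filtered type=constrained bases=root/pypi
# then install through http://localhost:3141/<user>/filtered/+simple/
```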