- Wednesday: I complained about Sphinx, and devised workarounds for my complaints.
- Thursday: I thought about potential changes I could make to my Cryptopals code that wouldn't currently be worth it.
- Friday: I sketched out my plans for Challenge 11, and justified them, because I felt they needed justification.
- Saturday: I gloated a bit about the quality of my old low-comment code, from the perspective of maintaining/rewriting it years later.
- Sunday: I ran into a problem with my Challenge 12 code. I assumed it was slow.
- Monday: It was nonterminating.
Next week, I'm going to focus on tooling, documentation layout, and general quality improvements. I may also try to figure out a mystery I discovered: under coverage, the Challenge 12 test runs significantly slower as a Hypothesis test than as a parametrized test. The overall time does correlate with the number of examples, but the correlation should be linear, and 100 examples takes nowhere near 100 times as long as 1 example, which implies some kind of massive constant term. Since the constant term is dominating, it's not the test body. But it shouldn't be Hypothesis either, because a strictly more complex test signature runs at an acceptable speed, and other tests aren't hitting this slowdown. It's like every section of the test lifecycle is pointing accusatory fingers at every other section.
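To make the "massive constant term" reasoning concrete: two timing measurements at different example counts are enough to fit a linear model t(n) = c + n·m and see how much of the total is fixed overhead versus per-example work. A minimal sketch, with entirely made-up timings (not my real measurements):

```python
def fit_linear(n1, t1, n2, t2):
    """Fit t(n) = c + n * m through two (example count, seconds) points.

    c is the constant overhead, m is the per-example cost.
    """
    m = (t2 - t1) / (n2 - n1)
    c = t1 - n1 * m
    return c, m


# Hypothetical numbers for illustration only: suppose 1 example takes
# 4.1 s under coverage and 100 examples take 6.0 s. Then almost all of
# the runtime is the constant term, not the examples themselves.
c, m = fit_linear(1, 4.1, 100, 6.0)
print(f"constant term: {c:.2f}s, per-example cost: {m:.3f}s")
```

If the constant term comes out that dominant, the per-example test body is effectively exonerated, which is exactly why the suspicion shifts to setup, collection, or the coverage machinery itself.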