I more or less took things easy today, because I woke up tired and that never really turned around. I intend to rewrite the tests for the task runner tomorrow, and also go over some details of conlang phonotactics. Neither of those will look like much from the blog's perspective; oh well.
I don't have anything else to write about right now; I'm just really tired. Here's an idea, though: in a few weeks, I should have my task runner in good enough shape to run against its own repository, and that'll give me the freedom to add all kinds of crazy opt-in testing and analysis.
What I have currently:
- Normal tests, code coverage, and profiling
- A variety of lints, including vanilla flake8 and isort (I haven't added black, and I would like to)
- Type checking
- Barebones HTML report generation from xunit results
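For flavor, the xunit-to-HTML step really can be barebones; the stdlib covers it. This is just a sketch of the shape of the thing, not my actual generator, and the XML here is a made-up sample:

```python
import xml.etree.ElementTree as ET

def xunit_to_html(xml_text):
    """Render a minimal HTML summary table from an xunit/JUnit XML report."""
    root = ET.fromstring(xml_text)
    rows = []
    for case in root.iter("testcase"):
        # A <failure> child means the test failed; otherwise call it passed.
        status = "failed" if case.find("failure") is not None else "passed"
        rows.append(f"<tr><td>{case.get('classname')}</td>"
                    f"<td>{case.get('name')}</td><td>{status}</td></tr>")
    return ("<table><tr><th>class</th><th>test</th><th>status</th></tr>"
            + "".join(rows) + "</table>")

# Invented sample report, just to exercise the function.
sample = """<testsuite>
  <testcase classname="demo" name="test_ok"/>
  <testcase classname="demo" name="test_bad"><failure>boom</failure></testcase>
</testsuite>"""
print(xunit_to_html(sample))
```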
What I'm planning to add:
- Property-based testing using Hypothesis
- Mutation testing with some tool or other; I haven't thought too hard about which one yet
- Documentation-related lints
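To give a sense of what the Hypothesis item buys: the core idea of property-based testing is asserting an invariant over lots of generated inputs instead of a few hand-picked cases. Here's a stdlib-only sketch of that idea; Hypothesis does the input generation and failure-shrinking far better, and the run-length encoder is just an invented example:

```python
import random
import string

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def rle_decode(pairs):
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

# The property: decode(encode(s)) == s for *any* s, not just chosen examples.
# With Hypothesis this would be a @given(st.text()) test; this loop is the
# bargain-basement version of the same idea.
for _ in range(200):
    s = "".join(random.choices(string.ascii_letters, k=random.randrange(20)))
    assert rle_decode(rle_encode(s)) == s
```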
What I might add:
- Something like pytest-bdd? I dunno.
- There's something called semgrep? It might help? With something?
What I'm interested in suggestions for:
- Literature on developing better workflows for profiling
- Linters besides anything listed above, and also besides wemake-python-styleguide
- Other things that plug into flake8
- Other static or semantic analysis
- Other things that evaluate the code by running it
- Fancier xunit -> html conversion
Basically, I'm going to want stuff that runs non-interactively from the command line, and can be convinced to output all relevant feedback in a form that, one way or another, ends up as a webpage. It should also be configurable in terms of what files it writes to, if any, such that I can run multiple processes in parallel in the same directory.
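Concretely, the parallel-with-separate-output-files requirement looks roughly like this. The "tools" below are placeholder commands, not real linters; in practice flags like flake8's `--output-file` and pytest's `--junitxml` give each process its own report file natively:

```python
import subprocess
import sys
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run_tool(cmd, report_path):
    """Run one tool non-interactively, capturing all output to its own file."""
    with open(report_path, "w") as out:
        return subprocess.run(cmd, stdout=out, stderr=subprocess.STDOUT).returncode

outdir = Path(tempfile.mkdtemp())
# Placeholder commands standing in for real tools (flake8, mypy, etc.).
tools = {
    "lint": [sys.executable, "-c", "print('lint report')"],
    "types": [sys.executable, "-c", "print('type report')"],
}
# Each process writes only to its own file, so they can share a directory.
with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(run_tool, cmd, outdir / f"{name}.txt")
               for name, cmd in tools.items()}
codes = {name: fut.result() for name, fut in futures.items()}
```

The per-tool report files are then exactly the kind of thing a later step can sweep up into a webpage.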
Anyway, if anyone has any suggestions there, my Mastodon account is right down there, so have at it. For now, I desperately need more sleep.