I didn't work on the tracer yesterday, for what I assume are obvious reasons. Looking over it now, I see that I'd gotten up to the point in the do_run function where it's validating the arguments in the context of using multiprocessing. I was thinking about this some yesterday, and now I'm remembering some trains of thought that didn't get too far.
Basically, it comes down to whether the options for controlling the covered files are relevant. At first, I didn't think so, but now I think that's wrong. If the intent of trace analysis is to detect invariants in code that the user controls, then it makes sense to restrict the scope of measurement to files specified by the user.
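To make that concrete for myself, here's a minimal sketch of what restricting measurement scope to user-specified files could look like. This is an assumption about the eventual design, not Coverage.py's actual implementation: the `make_file_filter` helper and its pattern semantics are hypothetical, loosely modeled on Coverage.py's include/omit options.

```python
import fnmatch
import os

def make_file_filter(include_patterns, exclude_patterns=()):
    """Hypothetical helper: build a predicate that decides whether a
    source file is in scope for tracing. Loosely modeled on the spirit
    of Coverage.py's include/omit options, not its implementation."""
    def in_scope(filename):
        path = os.path.normpath(filename)
        # A file must match at least one include pattern...
        if not any(fnmatch.fnmatch(path, p) for p in include_patterns):
            return False
        # ...and no exclude pattern.
        return not any(fnmatch.fnmatch(path, p) for p in exclude_patterns)
    return in_scope

# Only trace the user's own code, skipping vendored dependencies.
in_scope = make_file_filter(["src/*.py"], exclude_patterns=["src/vendor/*"])
```

A trace callback would then consult `in_scope(frame.f_code.co_filename)` before recording anything, so files outside the user's stated scope never enter the data at all.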
I'm still taking things somewhat easy, but I believe what I should do next is to sketch out high-level descriptions of what Coverage.py does, so I can figure out where the tracer needs to diverge.
One design decision I was considering is to roughly duplicate the way Coverage.py creates, updates, and processes a database as an intermediate format. A Daikon tracer doesn't have the same diversity of output formats, but if I'm reading the documentation correctly, the list of list types has to come quite early on, and I'd like to populate that dynamically from the execution data. There's also the question of how to actually handle concurrency; I assume "give each copy of the tracer its own output file, and aggregate them in a post-processing step" makes sense.
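The per-copy-output-file idea is simple enough to sketch now, under some loud assumptions: the JSON-lines record format, the `trace.<pid>.jsonl` naming scheme, and all of the function names below are hypothetical placeholders, not anything Coverage.py or Daikon actually does.

```python
import glob
import json
import os

def trace_output_path(base_dir):
    """Each copy of the tracer writes to its own file, keyed by process
    id, so concurrent writers never share a file handle. (Hypothetical
    naming scheme, for illustration only.)"""
    return os.path.join(base_dir, f"trace.{os.getpid()}.jsonl")

def write_records(path, records):
    """Append one JSON object per line; a format that is cheap to emit
    incrementally and trivial to merge later."""
    with open(path, "a") as fh:
        for rec in records:
            fh.write(json.dumps(rec) + "\n")

def aggregate(base_dir):
    """The post-processing step: sweep up every per-process file and
    merge the records into a single list."""
    merged = []
    for path in sorted(glob.glob(os.path.join(base_dir, "trace.*.jsonl"))):
        with open(path) as fh:
            merged.extend(json.loads(line) for line in fh)
    return merged
```

The appeal of this shape is that no locking or IPC is needed during the run; all coordination is deferred to a single-threaded merge at the end, which is also where the dynamically discovered type information could be consolidated.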
I'll think about it some tomorrow. Or take it easy. Either is good.