Coding 2023-02-02
I'm not touching MOTR's code, but I did think some about how I want to reason about the different sets of labels, and I came up with some ideas that I think will be helpful.
Considering a Parametric in isolation, there are four basic possibilities when it comes to extracting a value from it:
- The metadata is overspecified. This happens when the metadata requires iteration over a selection that is not allowed to have multiple values, but the selection actually has multiple values.
- The metadata is underspecified. This happens when the metadata requires iteration over a selection, but nothing iterates over the selection.
- The metadata is incompatible with the input Box. This happens when the metadata requires iteration over a selection, but the selection is not defined by the Box.
- The metadata is compatible with the input Box. This happens when the metadata is not underspecified and the Box contains a superset of the required values, which is not overspecified in conjunction with the metadata. (A rough sketch of all four checks follows below.)
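Just to pin the logic down for myself, here's one way I could encode those four outcomes as a single check. None of these names (`MetadataStatus`, `classify`, `required`, `iterated`, `single_valued`, `box`) are real MOTR identifiers, and the precedence among the failure cases is arbitrary in the sketch; it's just the shape of the check I have in mind.

```python
from enum import Enum, auto


class MetadataStatus(Enum):
    OVERSPECIFIED = auto()   # an iterated selection may not hold multiple values, but does
    UNDERSPECIFIED = auto()  # iteration is required, but nothing iterates the selection
    INCOMPATIBLE = auto()    # a required selection isn't defined by the Box
    COMPATIBLE = auto()      # everything required is present and nothing is over-constrained


def classify(required, iterated, single_valued, box):
    """Hypothetical check of a Parametric's metadata against an input Box.

    required      -- selections the metadata requires iteration over
    iterated      -- selections that something actually iterates over
    single_valued -- selections that are not allowed to hold multiple values
    box           -- mapping of selection name -> values the Box defines
    """
    for sel in required:
        if sel not in box:
            return MetadataStatus.INCOMPATIBLE
        if sel not in iterated:
            return MetadataStatus.UNDERSPECIFIED
        if sel in single_valued and len(box[sel]) > 1:
            return MetadataStatus.OVERSPECIFIED
    return MetadataStatus.COMPATIBLE
```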
This is helpful to think about, because I'm realizing that some of the information required to distinguish these cases could be tracked, but currently isn't.
Also, this is all in terms of selections, but there should be a compatible/incompatible check around non-selection labels. Furthermore, iterating over a label that is added via include_box should render the Parametric incoherent.
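For non-selection labels, the analogous check would be simpler, and the include_box rule could slot in right next to it. Again, these names (`metadata_labels`, `box_labels`, `included_labels`) are placeholders for illustration, not anything that exists yet:

```python
def check_labels(metadata_labels, iterated, box_labels, included_labels):
    """Hypothetical extension of the check to plain (non-selection) labels."""
    # Non-selection labels only need a compatible/incompatible check.
    for label in metadata_labels:
        if label not in box_labels:
            return "incompatible"
    # Iterating over a label that arrived via include_box makes the Parametric incoherent.
    for label in iterated:
        if label in included_labels:
            return "incoherent"
    return "compatible"
```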
Fun note to myself because I keep forgetting: include_box prefers the upstream values over the included ones, which means that, actually, it just sort of cancels out and turns into a requirement. Whether this is "the right choice" is a question that, like, needs more usage data to answer. It's hard for me to work out from first principles whether I care about the behavior of "two different values get included under the same label in different parts of the Parametric". One possible mitigation would be to pass the included box up and have a sentinel value for collisions. I'll keep that in mind as something I can do, but the bigger win right now is in formalizing the relationships between the different outcomes. (Like, you can use parametric reduction to stop a Parametric from being overspecified.)
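As a reminder to myself of what those two behaviors look like side by side (the current "upstream wins" merge versus the sentinel-on-collision idea), here's a toy sketch; `COLLISION` and both merge functions are made up for illustration and don't correspond to anything in the codebase:

```python
COLLISION = object()  # sentinel meaning "two different values were included under the same label"


def merge_upstream_wins(upstream, included):
    """Current behavior: upstream values take precedence over included ones,
    so an included value effectively becomes a requirement the upstream must satisfy."""
    merged = dict(included)
    merged.update(upstream)  # upstream overrides included on any shared label
    return merged


def merge_with_collision_sentinel(included_boxes):
    """Possible mitigation: pass the included boxes up, flagging labels that collide."""
    merged = {}
    for box in included_boxes:
        for label, value in box.items():
            if label not in merged:
                merged[label] = value
            elif merged[label] != value:
                merged[label] = COLLISION  # different values under the same label
    return merged
```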
The tl;dr of this is that I need to make sure that I'm actually tracking all of the data required to carry out the proper runtime checks; at this point, I highly doubt that that's the case.
For now, I'm going to mess with other stuff, and get generally towards bed.
Good night.