Diary 2021-07-30

By Max Woerner Chase

Work is a little calmer (for now), so I'm... doing better. I need to stand up more, lest I mess up my torso muscles. Or torso something. Predictable surface-level chest pains, eugh.

Anyway, I spent some time today messing with the code I was talking about earlier. The basic idea was to try to code the geometric product concept in Python in a way that used very little code. I ran into a few issues, which I'll lay out after I explain the basic principle.

So, something that becomes clear from reading that article is, once you have the basis (uni?)vectors for a space, then the higher- and lower-dimensional bases will either contain a given basis vector or not, and this lets you cut the set of bases in half once for each basis vector. One way to act on this is to divide them into groups of "does not contain the last basis" and "does contain the last basis". That process can then be repeated on each group until all that's left is pairs of numbers, distinguished by whether they're associated with the first basis.
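To make that concrete, here's roughly how the eight basis blades of a three-dimensional space fall out under that repeated split (just the grouping, written as nested pairs; the names are only for illustration):

```python
# The eight basis blades of a 3D space, split on e3 first, then e2, then e1.
# In each pair, the first element lacks that level's basis vector, the second has it.
blades = (
    (("1",     "e1"),         # no e2  \  no e3
     ("e2",    "e1 e2")),     # has e2 /
    (("e3",    "e1 e3"),      # no e2  \  has e3
     ("e2 e3", "e1 e2 e3")),  # has e2 /
)
```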

I represented this in code with a class that represents such a basis division, and therefore contains two objects of identical type, the first being "everything that does not contain the relevant basis vector", and the second being "everything that does". So, the second element is effectively multiplied at the end by a basis that comes after every element it contains, and therefore doesn't need to negate anything.
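Something like this minimal sketch, to pin down the shape (not my actual code, and the names are made up for the occasion; leaves are plain numbers):

```python
from dataclasses import dataclass
from typing import Union

Multivector = Union[float, "Node"]

@dataclass
class Node:
    lo: Multivector  # everything that does not contain this level's basis vector
    hi: Multivector  # everything that does, with that basis vector factored out

def add(a: Multivector, b: Multivector) -> Multivector:
    """Componentwise sum of two multivectors with the same shape."""
    if isinstance(a, Node):
        return Node(add(a.lo, b.lo), add(a.hi, b.hi))
    return a + b

def neg(a: Multivector) -> Multivector:
    """Negate every leaf."""
    if isinstance(a, Node):
        return Node(neg(a.lo), neg(a.hi))
    return -a
```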

Getting this object to multiply requires the definition of a function that negates every sub-element associated with an odd number of basis vectors. I think. One of the obstacles I hit was that it's kind of hard to reason, first, about what this code should even do in very basic cases, and second, about how to scale it up arbitrarily. I golfed the code to negate the vector component at the same level, then subtract the result in the actual multiplication. (So the more dot-product-y part of the multiplication is a difference of products...)
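For reference, I think the un-golfed version of that multiplication looks something like the following, assuming every basis vector squares to +1; my golfed version shuffles the negation around, which is where the difference of products comes in:

```python
def involute(m: Multivector) -> Multivector:
    """Negate every component built from an odd number of basis vectors."""
    if isinstance(m, Node):
        # The hi half carries one extra basis vector, so its parity is flipped.
        return Node(involute(m.lo), neg(involute(m.hi)))
    return m  # a bare scalar involves zero basis vectors: unchanged

def gp(a: Multivector, b: Multivector) -> Multivector:
    """Geometric product of two multivectors with the same shape."""
    if not isinstance(a, Node):
        return a * b
    # a = a.lo + a.hi * e_n and b = b.lo + b.hi * e_n, with
    # e_n * x == involute(x) * e_n and e_n * e_n == 1, so:
    return Node(
        add(gp(a.lo, b.lo), gp(a.hi, involute(b.hi))),
        add(gp(a.lo, b.hi), gp(a.hi, involute(b.lo))),
    )
```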

I imagine that didn't make too much sense. Part of the problem with all of this is that, currently, I'm not sure I understand the concepts I'm trying to represent well enough to evaluate my representation. The 3-vector multiplication example from Marc ten Bosch's article does come through properly, which is encouraging.
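Running the sketch above on two 3-vectors splits the same way: the scalar leaf is the dot product, and the bivector leaves are the wedge product.

```python
def vec3(x, y, z):
    # x*e1 + y*e2 + z*e3, laid out as nested Nodes (split on e3, then e2, then e1)
    return Node(Node(Node(0.0, x), Node(y, 0.0)),
                Node(Node(z, 0.0), Node(0.0, 0.0)))

ab = gp(vec3(1, 2, 3), vec3(4, 5, 6))
print(ab.lo.lo.lo)  # 32.0 -> the dot product 1*4 + 2*5 + 3*6
print(ab.lo.hi.hi)  # -3.0 -> the e1 e2 component 1*5 - 2*4
```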

The final issue I have with all of this is how incredibly sparse the general representation is. The generic representation for three dimensions requires eight "leaves" to hold all the data, while a specialized representation requires three leaves for vectors and four leaves for rotors. So, this is acceptable for prototyping, but extending it into higher dimensions is going to be hampered by the exponential blowup in memory consumption. (Even if the leaves are mostly memoized, there are still the intermediate nodes to contend with, unless those are also memoized.)
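Back-of-the-envelope, if nothing is shared:

```python
# Dense tree cost for n basis vectors: 2**n scalar leaves plus 2**n - 1 interior
# Nodes, versus n numbers for a plain vector in a specialized representation.
for n in (3, 4, 8):
    print(n, 2 ** n, 2 ** n - 1)
```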

After I said all that, though, something occurred to me. It seems to me unreasonable to expect a statically typed, compiled language to specialize a concrete type based on the specific values it contains, but what if the specialized types were user-generated, but delegated their implementations to the generic version? Then, potentially, inlining and constant propagation could essentially destructure the generic types into data flows at compile time, generating a correct implementation of geometric algebra operations for the specific types, without needing any specifically-written arithmetic. This sounds like a neat trick, and I'd imagine someone has already tried something along these lines, and maybe gotten it to work.
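In Python terms, which of course misses the point (nothing here will fold any constants away), the delegation I'm picturing is roughly this, reusing the hypothetical `gp` and `vec3` from the sketches above:

```python
class Vector3:
    """A user-facing 3-vector that stores only its three components."""
    __slots__ = ("x", "y", "z")

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __mul__(self, other):
        # Build the generic trees and delegate all arithmetic to gp(). A static
        # optimizer could, in principle, inline this, propagate the zero leaves,
        # and leave behind only the handful of products that matter.
        return gp(vec3(self.x, self.y, self.z),
                  vec3(other.x, other.y, other.z))
```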

I'll have to poke around with that more later, but I should wrap up for now. Eh, it's all set, best get to bed.

Good night.