Coding 2026-04-10
All right, it's getting kind of late, so let's wrap this up quickly, since I don't want a repeat of last night. Instead of working on sequencing notes in space, I made some plans related to phase modulation. In all likelihood, getting it to work like a physical synthesizer will require some kind of compromise, and it will be a prime candidate for Numba-based acceleration. While I was thinking about all of this advanced stuff, I tossed together a sawtooth sample function. I'll try to work out the sequencing stuff later, but first, some notes on the calendar stuff.
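For reference, a naive sawtooth sample function in the spirit of that might look something like this (function name and signature are my own guesses; this is the aliasing, non-band-limited version):

```python
import numpy as np

def sawtooth(freq: float, duration: float, sample_rate: int = 44100) -> np.ndarray:
    """Naive sawtooth: ramps from -1.0 up to 1.0 once per cycle, then wraps."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    # (freq * t) % 1.0 gives a 0..1 ramp per cycle; rescale to -1..1.
    return 2.0 * ((freq * t) % 1.0) - 1.0
```

Good enough for plumbing tests; a band-limited version would be where the Numba work comes in.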
I experimented with customizing __init__ in an attrs class hierarchy. It didn't really work out, so I just went with a classmethod. In order to test this, I tossed some junk implementations at the various abstract methods I defined. The first order of business here is to replace those with actual implementations.
Now then, let's see what I need to do back in audio synthesis land. From an interface perspective, I believe I want to be talking about start times and durations, in seconds. The prototype code just did a bunch of np.concatenate calls, which, eh, doesn't scale. Let's ignore the question of streaming for now, since it's just going to confuse me. The samples for each individual sound have to start at zero, and everything gets added together at the right offsets. On reflection, it might actually be easier to think of this in terms of constructing chunks and yielding them out to something higher-level to process, so maybe I get streaming for free-ish. The interface would, I think, look something like pushing a combination of start time, duration, and sample function into a coroutine, and getting chunks of audio data out. I'll have to sketch out some diagrams or something tomorrow.
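Before the diagrams, a rough sketch of how that coroutine might work. Everything here is assumption: the names, the protocol, and in particular the requirement that events arrive sorted by start time (which is what lets chunks before the newest event be emitted as final):

```python
import numpy as np

def chunk_mixer(chunk_size: int = 1024, sample_rate: int = 44100):
    """Coroutine: send (start_seconds, duration_seconds, sample_fn) tuples,
    sorted by start time, where sample_fn(duration, sample_rate) returns the
    sound's samples starting at time zero. Each send yields back the list of
    chunks that can no longer change. Send None to flush the tail."""
    buffer = np.zeros(0)
    base = 0  # absolute sample index of buffer[0]
    event = yield []
    while event is not None:
        start, duration, sample_fn = event
        i = int(start * sample_rate) - base
        samples = sample_fn(duration, sample_rate)
        # Grow the buffer if this sound runs past its current end.
        needed = i + len(samples)
        if needed > len(buffer):
            buffer = np.concatenate([buffer, np.zeros(needed - len(buffer))])
        buffer[i:i + len(samples)] += samples
        # Everything before this event's start is final; emit whole chunks.
        ready = []
        while i >= chunk_size:
            ready.append(buffer[:chunk_size].copy())
            buffer = buffer[chunk_size:]
            base += chunk_size
            i -= chunk_size
        event = yield ready
    # Flush: whatever remains, in chunk_size pieces (last may be short).
    yield [buffer[j:j + chunk_size] for j in range(0, len(buffer), chunk_size)]
```

Usage would be: prime with next(), send events, collect chunk lists as they come out, send None at the end. Streaming for free-ish, as long as the caller keeps up.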
For now, um, it got super-late.
Good night.