Still super tired and out of it, but something interesting occurred to me earlier today.
As you may know, there are recurring arguments over whether 0.99999... is "really" equal to 1. I noticed an interesting perspective to take on this. In decimal notation, it's not just 1 that has this issue: there's 0.5 and 0.49999..., and 0.2 and 0.19999.... And it extends to any integer times powers of one half and one fifth — in other words, to every number whose decimal expansion terminates.
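A quick way to check that claim (a sketch, using exact rational arithmetic; `has_dual_decimal` is just a name I'm making up here): a nonzero rational has two decimal representations exactly when its reduced denominator divides a power of ten, i.e. has no prime factors besides 2 and 5.

```python
from fractions import Fraction

def has_dual_decimal(q: Fraction) -> bool:
    """True if the nonzero rational q has a terminating decimal
    expansion, and hence a second, 9-repeating representation."""
    d = q.denominator  # Fraction keeps this in lowest terms
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

assert has_dual_decimal(Fraction(1, 2))      # 0.5  = 0.49999...
assert has_dual_decimal(Fraction(1, 5))      # 0.2  = 0.19999...
assert has_dual_decimal(Fraction(1, 1))      # 1    = 0.99999...
assert not has_dual_decimal(Fraction(1, 3))  # 0.333... is the only one
```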
The same thing happens in other bases, just with different numbers. In ternary, besides 1 and 0.22222..., it's any integer times a power of 0.1 (that is, a power of one third).
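The ternary case can be checked the same way (a sketch with exact fractions): truncating 0.22222...₃ after n digits leaves a gap to 1 of exactly 3⁻ⁿ, so the partial sums converge to 1 just as 0.99999... does in decimal.

```python
from fractions import Fraction

# 0.222... in ternary, truncated after n digits:
# 2/3 + 2/9 + ... + 2/3^n = 1 - 3^-n, so the gap to 1 is exactly 3^-n.
n = 25
partial = sum(Fraction(2, 3**k) for k in range(1, n + 1))
assert 1 - partial == Fraction(1, 3**n)
```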
But it's different again in balanced ternary. Suppose we use the digits -, 0, +. Now, the highest balanced ternary number with a 0 in the ones place is 0.++++++..., whose digit values are all +1, so it sums to 1/3 + 1/9 + 1/27 + ... = one half. In balanced ternary, flipping every digit negates a number, so flip the digits and add one and we get another representation of one half: +.------.... And one is only +, uniquely. By switching around the notation, we've ended up with a completely non-overlapping set of problem numbers! This means that either the "problem" isn't real, or it's lurking behind literally every rational number and is simply obscured by the choice of notation.
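Both balanced-ternary representations of one half can be verified the same way (again a sketch with exact fractions): truncating 0.++++... after n digits falls short of 1/2 by exactly 3⁻ⁿ/2, while truncating +.----... overshoots by the same amount, so the two squeeze in on 1/2 symmetrically.

```python
from fractions import Fraction

half = Fraction(1, 2)
n = 20

# 0.++++... truncated after n digits: each + contributes +3^-k
lower = sum(Fraction(1, 3**k) for k in range(1, n + 1))

# +.----... truncated after n digits: 1, then each - contributes -3^-k
upper = 1 - sum(Fraction(1, 3**k) for k in range(1, n + 1))

# Both gaps are exactly 3^-n / 2, one below and one above.
assert half - lower == Fraction(1, 2 * 3**n)
assert upper - half == Fraction(1, 2 * 3**n)
```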
If the "problem" isn't real, then we should understand the multiple representations as forming equivalence classes. Which isn't so strange. Consider that 1/2 and 3/6 have no digits in common, yet they're normally treated as the same number.
In the alternative... There are infinitely many distinct numbers that typical positional notation collapses into a single representation. To handle the difference between these distinct numbers, some more precise representation is required to track them. In other words, if you believe that 1 ≠ 0.99999..., then point notation is literally not good enough for you, and you should be looking for alternatives!