ok, wavelets are not deterministic. they are more composable for natural forms than sine waves because of their decay, which gives you more flexibility when overlaying them to compose an image. sine waves let you do circles, but wavelets make elliptical, asymmetric forms easier. still, it seems to me you could derive a more deterministic pattern out of them.
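a minimal sketch of why the decay matters, assuming a real Morlet-style wavelet (a cosine under a gaussian envelope): the plain wave has the same amplitude everywhere, while the wavelet's influence dies off away from its center, which is what makes overlaying local, asymmetric forms easier. the function names here are illustrative, not from any library.

```python
import math

def wave(t, freq=1.0):
    # a plain cosine carrier: same amplitude at every t
    return math.cos(2 * math.pi * freq * t)

def morlet(t, freq=1.0, width=1.0):
    # the same carrier under a gaussian envelope: it decays away
    # from t = 0, so its contribution to a composition is local
    envelope = math.exp(-t * t / (2 * width * width))
    return envelope * wave(t, freq)

# far from the center the wavelet is effectively zero; the wave is not
print(abs(wave(10.5)), abs(morlet(10.5)))
```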
as a child, when i got my first computer, i was fascinated by the way you convert a circle into a pixel grid. small ones look like squares, slightly bigger ones like octagons, and as you increase the size, the sequence of pixel counts approximating the curve gradually progresses towards the limit between discrete and continuous.
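that progression can be sketched with the classic midpoint circle algorithm, which is how a circle typically gets rasterized onto a grid. this is a from-scratch illustration, not any particular graphics library's code:

```python
def circle_pixels(r):
    """return the set of grid pixels on a circle of radius r
    centered at the origin (midpoint circle algorithm)."""
    pixels = set()
    x, y = r, 0
    err = 1 - r
    while x >= y:
        # mirror the computed octant into all eight octants
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.add((px, py))
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return pixels

# r=1 gives a tiny diamond, r=2 already reads as an octagon, and as r
# grows the pixel count grows roughly like the circumference, edging
# toward the continuous curve
for r in (1, 2, 4, 8, 16):
    print(r, len(circle_pixels(r)))
```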
circles are a concept, and the pixels represent the discretized version. it's not the most expensive quantization, but it is a quantization.
it's funny, i hadn't really thought about the meaning of the words. when i read "this model is quantized" and that quantization makes the model smaller, that's what it's talking about: down-sampling from effectively infinite precision to a size you can actually generate algorithmically faster than you need to paint it on the screen. for this task, it's about how to take what is effectively discrete data (not sampled like a pixel bitmap, but structured like a mesh or network), so the path to this algorithmic, algebraic decomposition maps to the process of quantization on a graph.
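the model-shrinking sense of the word can be shown in a few lines: uniform quantization of floats down to 8-bit codes, trading precision for size. this is a generic plain-python sketch, not the scheme any specific model format uses:

```python
def quantize(values, levels=256):
    """map floats onto `levels` evenly spaced codes (8-bit by default)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (levels - 1)
    codes = [round((v - lo) / scale) for v in values]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """map the small integer codes back to approximate floats."""
    return [lo + c * scale for c in codes]

weights = [0.03, -1.7, 0.42, 2.1, -0.88]
codes, lo, scale = quantize(weights)
restored = dequantize(codes, lo, scale)
# every restored value lands within half a quantization step of the
# original: the information lost is bounded by the step size
```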
i have been doing some work with the Brainstorm WoT stuff, and i gave them a database driver for neo4j because the main dev straycat (npub1u5n…ldq3) was familiar with it. i understand graph data representations well enough that, as i have repeatedly said to them (to zero response), i can make algorithms that optimize for their queries if they just define the queries. i already implemented a vertex table with the property of being "adjacency free", which is a fancy way of saying that searching the table does not require unique keys forming nodes. instead, the nodes sit in a sorted list: you use something like a bisection search to get to the start of one node's section, iterate it, then bisect again to find the second node (all done by the database index iterator code). you can then traverse, effectively, in "one" iteration, which is really just a random scan across a sorted linear index where each node and its edges to other nodes are found in a contiguous section of the table.
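my reading of that layout can be sketched in a few lines: edges stored as one sorted list of (source, target) pairs, so finding a node's neighbors is a bisection to the start of its section plus a short linear scan, with no per-node key lookup. the names and data here are illustrative, not the actual Brainstorm/neo4j driver code:

```python
import bisect

# the whole "table": one sorted linear index of (source, target) pairs,
# so every node's outgoing edges sit in a contiguous section
edges = sorted([
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"),
    ("dave", "alice"),
])

def neighbors(node):
    """bisect to the first edge whose source is `node`,
    then iterate that node's contiguous section."""
    i = bisect.bisect_left(edges, (node,))
    out = []
    while i < len(edges) and edges[i][0] == node:
        out.append(edges[i][1])
        i += 1
    return out

def two_hop(node):
    """a two-hop traversal is just two rounds of bisect-and-scan."""
    return {t for mid in neighbors(node) for t in neighbors(mid)}
```

the design point is that the ordering of the list itself encodes adjacency, so the index iterator does all the work and no separate node-keyed lookup structure is needed.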
i want to impress upon everyone i've linked into this conversation that building geometric and algorithmic traversals for data with an easy visual representation is my super power. i can SEE what i'm doing. many people are better at the linear stuff, parsing out the code; i'm better at writing it than reading it. once you clarify the description, i can SEE the pattern as i scan around the visual representation my brain generates from the linear description.
the simple fact that i can see how all of these things boil down to graphs is, i think, harder for everyone else to do than it is for me. not in theory, but in practice.
