I've been playing around with a small project that gets its data from 256 rows in a table. Accessing that data is... not slow, but there's a barely perceptible delay. I don't want that; it's bound to be worse when I pile on rules and daemons.
So I thought I'd subdivide the (sorted) giant table into a tree structure and then do a binary search. The idea seemed solid. I built it to look rows up by a number (called x): the game loads the giant table, chunks it into 32 equal-sized tables at the leaf nodes, and then recursively lets the max-x values bubble up the tree in fixed-size node tables (basically, the top-node table points to 2 subtables, each of which points to 2 subtables, and so on), with each node table looking something like this:
Code:
Table of Top-Node
max-x	table-name
102	Table of Sub-node a
230	Table of Sub-node b

Table of Sub-node a
max-x	table-name
63	Table of Sub-node a a
102	Table of Sub-node a b

Table of Sub-node b
max-x	table-name
160	Table of Sub-node b a
230	Table of Sub-node b b
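The lookup then just walks down the node tables until it reaches a leaf. Here's roughly what that looks like; this is a simplified sketch rather than my exact code, reusing the max-x and table-name columns above, a phrase name I made up for this post ("the leaf table holding"), and a hard-coded depth of 5 (32 leaves from repeated 2-way splits):

Code:
To decide which table name is the leaf table holding (target - a number):
	[Descend the tree: at each node table, take the first sub-table whose max-x covers the target.]
	[Assumes target is no bigger than the largest max-x in the top node.]
	let T be the Table of Top-Node;
	repeat with depth running from 1 to 5: [32 leaves behind 2-way splits = 5 levels of node tables]
		let child be T;
		repeat through T:
			if target is at most the max-x entry:
				now child is the table-name entry;
				break;
		now T is child;
	decide on T.

After that it's just an ordinary choose-a-row-with-x against the 8-row leaf table (256 rows over 32 leaves) and reading off whichever entry I need.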
While the finished code does run, I still notice a small but perceptible delay in the built-in I7 terp (which has always been slow as molasses on my end, but still). So I guess what I'm asking is: am I just reinventing something I7 already does under the hood, or could this method help reduce the lag to a useful degree?