Dannii, was the “minimum timer resolution” actually the value you wanted me to test? It always returns “1”, the minimum value, so I assume that the value is not measured in microseconds…?
I’m using the latest public release of Gargoyle, 2011.1.
When the 95–96,000-sample run reaches its end, the interpreter sits for a few minutes as if it has hung, then finally spits out the results. I assume that during that time it is processing the massive list you mentioned! (Even the smaller run, 25–28,000 samples, has a noticeable delay, but it’s not as bad.) I’m no math whiz, so apologies if this is silly, but would it be possible to process the results iteratively, so that you don’t actually need to store all the values? I’m sure the mean could be computed that way, and probably the variance too…?
I’m not noticing a whole lot of difference between your new update and the original. Here are before and after results:
For the record, the code I’m testing in both cases is:
[spoiler][code]Include Benchmarking by Dannii Willis.

Search list is a list of numbers variable.

First when play begins:
	try switching the story transcript on;
	repeat with x running from 1 to 99:
		add a random number between 1 and 99 to search list;
	add 100 at entry 90 in search list;
	say “The search list is [number of entries in search list] items long: [search list in brace notation].”;
	wait for any key.

Searching a simple list is a test case.
The author is “Erik Temple”.
The description is “Looks for the number 100 in a list of random numbers 100 entries long.”

To run simple-list search (this is running simple search):
	if 100 is listed in the search list:
		do nothing.

The run phrase is running simple search.

Selecting from a simple list is a test case.
The author is “Erik Temple”.
The description is “Grabs entry 90 from a list of random numbers 100 entries long.”

To run simple-list select (this is running simple selection):
	if entry 90 in search list is 100:
		do nothing.

The run phrase is running simple selection.

After running the benchmark framework:
	say “The minimum timer resolution was: [the minimum timer resolution].”[/code][/spoiler]
I seem to recall Ron mentioning that different systems might return the system clock in different units (was it nanoseconds vs. microseconds?). That couldn’t be a factor here, could it? I wouldn’t think so, given that the results look pretty reasonable, but…
I do think it would be useful to see the raw number of times a function was run, and it might also be useful to be able to cap that number (running 90,000 iterations takes a long time!).
FYI, here is a typical result on my system for your example from the extension docs: