Crystal apple of harmony?

What games in the IFComp have gotten the lowest standard deviation over time? I know I could just look it up, but I’d like to hear people’s feedback on why they are the lowest, too.

Among entries in the past three years, it looks like Reels (2013) has the lowest standard deviation, probably because it was broken (or so reviews say; I haven't played it). My guess is that even a great game will probably be disliked by some people, because tastes differ. But a very broken game will pretty reliably annoy everyone.

Looks like you’re right for most years. Although it seems like Fish Bowl did pretty well in its year, and had the lowest standard deviation. That’s pretty interesting…

Fish Bowl is pretty damned good. Atmospheric. It sets out to create an ambiance and does it well. For that alone I recommend it.

Different comp years aren’t comparable, because different populations of voters may have voted differently for different reasons.

But that’s also true of different voters within the same comp year, really.

Scores are comparable in one competition because voters are rating games against each other; scores are not comparable across competitions because voters weren’t rating the respective games against each other. If variance is related to score then we would expect that variance is (weakly) comparable in one competition but not across competitions.

If you’re interested in non-comp games too, it’s possible to get a list of games from the IFDB sorted by standard deviation, using the advanced search feature. Here’s a list of all games with at least 15 ratings, sorted by standard deviation. At the top of the list we find The Absolute Worst IF Game in History by Dean Menezes, which managed to get a standard deviation of exactly 0 (!), based on 22 ratings. (But there are some very good games on the list too.)
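A standard deviation of exactly 0 can only happen when every single rating is identical, which is why that result is so striking across 22 votes. A quick sketch in Python (with made-up rating lists for illustration, not actual IFDB data):

```python
from statistics import mean, pstdev

# Hypothetical rating lists, just to show the arithmetic (not real IFDB data).
unanimous = [1] * 22            # 22 voters all giving the same score
divisive = [1, 10, 2, 9, 1, 10] # a love-it-or-hate-it spread

# pstdev is the population standard deviation: sqrt(mean of squared
# deviations from the mean). Identical ratings deviate by 0, so std = 0.
print(pstdev(unanimous))   # 0.0
print(mean(divisive))      # 5.5
print(pstdev(divisive))    # large, reflecting the polarized votes
```

So a flat 0 tells you the 22 raters were literally unanimous, which almost never happens unless the game's (lack of) quality is beyond dispute.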

This assumes that, on the whole, people are grading on a curve. But I’m not sure that’s true. (I try not to, and I’ve heard others say the same.)

Me neither. I mean, for close scores maybe (“I gave a 6 to both X and Y but actually I think Y is better; is there enough difference for me to up Y to a 7?” kind of thing), but if everything feels so-so I’m not going to give the best of the so-so a 10, and if everything is mind-blowingly stupendous I’m not going to give the least of the brilliance a 1, you know?

I think this is possible if you can avoid the comp games filling up your head space. If you can keep up your normal gaming and not get overloaded or obsessed with the comp, that’s possible. I would be a fan of a maximum ten-vote limit to make voters more equal. But we obviously need a larger electorate, or fewer entrants, to pull that off.

I guess the gotta-rate-'em-all maniacs are ruining the comp. Sorry! Seriously, I try to rate each game independently but it’s plain easier to compare two games to decide small differences and that can lead to a curve. I’ve been thinking I should refrain from voting more often when I’m not sure of the rating, but I didn’t accomplish that this time.