[I7] Learning Testing by Interactive Fiction

Okay, so some of you may have been suitably embarrassed on my behalf when I brought up my Exploratory Testing (using Inform 7) material before.

But, as the saying would have it: You ain’t seen nothin’ yet!

So my next attempt is shown in my new repo on Exploring Testing.

At the time I post this, you can check out an example of what I mean by looking at the current story.ni file. Notice how I’ve done nothing here but specify a test script and spec test. The idea is then to provide implementations that make these tests pass.
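For anyone who hasn’t played with Inform 7’s built-in testing support: the room, object, and commands below are my own invented example, not the actual contents of story.ni, but a test script looks roughly like this, using the standard “Test … with” syntax:

[code]
"Testing Sketch"

The Lab is a room. "A cluttered laboratory."

A beaker is in the Lab.

Test me with "take beaker / examine beaker / drop beaker".
[/code]

Typing TEST ME at the prompt replays those commands in order, so the resulting transcript can be checked against what the spec expects.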

One of the ideas will be having people in the class write up those tests. I’m going to be providing implementations, so this isn’t a case of really teaching Inform 7 (although that can be a nice by-product), but rather having people learn how to think about testing by considering the kind of script that has to be built. As almost everyone here will know, it’s very easy to write a lot of stuff in interactive fiction that isn’t tested for at all. It’s very easy to add just a tiny bit of implementation that, all of a sudden, throws everything out of whack.

I think this is going to be a very interesting experiment in learning. This is particularly the case because there are many tools out there in the development and testing world that are promoting concepts like living documentation or executable specifications. For example, consider SpecFlow or Cucumber. The key thing there, of course, is a human-language abstraction on top of code. There’s also a high degree of interest in the wider IT world in “gamification” of concepts. (Check out my own attempt at this with a repurposed SCI game for interviewing testers via a game challenge.) There’s an interesting, and I would argue growing, niche out there for tools like Inform 7 that allow people to explore their own thinking.

Interesting!

I haven’t had time/energy/whatnot to really deliver on it, but I started some work on testing in Inform 7 that’s published on github.com/i7/extensions, specifically Checkpoints and Unit Testing.

The Unit Testing extension is (now) an add-on for Simple Unit Tests by Dannii Willis, while Checkpoints works by checking the state of things at given points in the game, using data stored in tables.

Maybe this is of some interest to you.

Anyway, it’s nice to see that somebody else is thinking about testing and Inform 7!

(Edit: actually, now that I think about it, Checkpoints was never updated for version 2 of Unit Testing. It still seems to work, but I should update it.)

Does my Simple Unit Tests still even work? I haven’t touched it in a long while, and I thought that the changes to how rules were compiled (now called from one I6 function) would have meant it wouldn’t work anymore.

I had read another forum post here about exactly that issue with Simple Unit Tests, although I didn’t try it out, to be honest. So currently the i7Spec I’m using is based on a slightly modified Command Unit Testing by Xavid.

The core challenge I have in general is that it’s not a unit testing tool or even a spec testing tool in one key respect, which I indicate in the documentation:

In other words, I don’t really have a provision for a ‘setup’ or ‘teardown’ style mechanism. You can handle this to an extent by just using “do” instructions in various other tests but, of course, that starts to create a web of dependencies between tests, which is never good.

I’m still working out how best to handle this.

Ah, the easiest way to do resets would be through VM_Save_Undo and VM_Undo.

Yeah, that was my initial thought. I originally had lines from your extension in place as such:

[code]
To decide what number is the result of saving before running the spec tests:
	(- VM_Save_Undo() -).

To restore back to before running the spec tests:
	(- VM_Undo(); -).
[/code]
For context, here’s the execution of the ‘spec’ command, I put some “STARTING” and “FINISHING” statements in place to show where certain aspects take place:

The finishing part comes from “To process current expectation:” and that’s where I tried to restore from UNDO. The starting part comes from ‘Before reading a command when the command queue is not empty (this is the i7Spec handle queued echoes rule):’ and that’s where I initially tried to save the undo.

The idea being that the save undo and restore undo have to bracket the execution of each individual test. So far I haven’t gotten that to work, however. I’ll keep plugging away.

You could also try Autosave, though you’d want it to not delete the autosave after using it once. The easiest way to do that is to just empty out the AS_Delete routine, making it do nothing. (I should add a use-option for that…)
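Until that use option exists, one way to try this would be to stub the routine out from I7 source. (Hedging here: I’m assuming the routine really is named AS_Delete and takes no arguments, as described above; this sketch uses Inform’s standard mechanism for replacing an I6 routine.)

[code]
Include (-
[ AS_Delete;
	! Deliberately do nothing, so the autosave file survives
	! and can be restored more than once.
];
-) replacing "AS_Delete".
[/code]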

Thanks for all the ideas. I tried a few things to get the undo / restore working, but it keeps interfering with the output, blocking further execution of scenarios. I’ll try Autosave next. The challenge is coming in with scenarios like this:

Scenario:
	context "Kitty can take the message without opening the glass box";
	verify that "Kitty, get the message" produces "As her hands pass through the glass box, Kitty Pryde picks up the message.";
	verify that "examine box" produces "In the glass box are some poison gas."

Here once that scenario runs, Kitty has taken the message from the glass box. That means any other scenarios that run are now operating in that context.

That being said, each test could be made to establish its own context, even using some not-for-release commands like purloin or abstract. So, for example, if another test relied on the message still being in the glass box, I can do this:

Scenario:
	context "Kitty can take the message without opening the glass box";
	do "abstract message to glass box";
	verify that "examine box" produces "In the glass box are some poison gas and a message."

That’s acting sort of like a ‘setup’ would in a standard unit or spec testing runner. Still investigating possibilities.

I just wanted to add that Inform 7 and Cucumber really seem like a nice combination. Natural language all the way.

Edit: actually, I guess I mean Gherkin.