AI for IF games – question about different kinds of IF games

Hello :slight_smile:

I’m a computer science and artificial intelligence student and I’m interested in creating an AI system capable of good natural language understanding and representation. What my advisor and I think is that IF could be a great source of data and learning material for such a system. Why? IF stories or games usually include some sort of feedback to the player, which could help the system learn useful representations of natural language. The dream scenario would be to let the system train on a set of IF games, then test it on games it has never seen before and see it perform well (or at least human-like in some ways).

There are already two papers that do something similar [1][2], except they simply test their systems on the same games they have been trained on – which makes very little sense to me, as this makes the task virtually trivial.

Now I have a couple of questions about the availability of IF games with specific features which might help us create the system described above (they may or may not all be necessary):

  1. Are there any games (or perhaps servers) with user input data available? More specifically, it would be extremely useful to know how exactly users play IF games (simply put, to see their input and actions) or, in other words, how well they play. Using this data, it might be possible to bootstrap the system to behave at least partially human-like. Even more importantly, it would be great to have such data for evaluation, so that we could see how the AI system’s performance compares to that of human players.

  2. Some (perhaps most?) games have different endings. In that case, the endings can usually be associated with some kind of reward. The problem is that the reward is specified in the form of words, perhaps a couple of sentences. To evaluate human or AI players, though, we would need to assign numeric values to the different endings in different games. Are there any (sets of) games that actually have their endings annotated with numbers (a score)?

  3. Are there any random games? The randomness can either come from: A. a random description of the same inner world state (e.g. ‘You see a bird on the window’ and ‘There is a bird on the window’); or B. random transitions between the world states (one action from a given world state can lead to two or more different world states, based on some random distribution).

Ideally, I’d like to have a set of games with all the properties above, but I realise such a dataset (or in this case, such a collection of IF games) may not be available at all. At any rate, any help or feedback is greatly appreciated!

As a final note, I do know about MUD games that even offer interaction with human players. The problem is that the system needs to have a very fast simulation of the environment (in the case of IF games, perhaps something like a simple HTML page with hyperlinks) which is something that the server-based MUD games don’t really allow.

Thank you very much :slight_smile:

Adventure and the original Zork come to mind. They have a numerical scoring system and random elements through the wandering NPCs. I don’t know how much input data is available, but the source code in a variety of languages is available on the Archive.

There are ClubFloyd transcripts for lots of games here: allthingsjacq.com/interactive_fi … #clubfloyd

IFComp saves user transcripts, with dozens of transcripts per game. Individual authors may have downloaded their transcripts, but I don’t know if they are still kept by the organizers.

Aaron Reed has collected and analysed player transcripts for a couple of his games, Whom the Telling Changed and Blue Lacuna. Neither game is really standard parser IF, though.

While I agree that the Zork games are excellent exemplars for playability and scoreability, I don’t remember much randomness in them, other than the behaviour of the thief, grues etc. which aren’t what I think you mean. However, more recent games (those written in Inform 7, for example) often have facilities for random speech.

It occurs to me that you might meet your objectives by learning an IF creation language (not too difficult, I promise you) and building games ideally suited to the training function you envisage. Then you could test your trained automaton on a wider selection of games.

Thanks for the responses!

Thank you – I found Zork on the Archive, and although it does have the ‘I7 source available’ tag, I really couldn’t find an I7 source in the Downloads. I also found a SourceForge project from 2008, but the source file won’t compile in Inform 7 (I tried fixing the bugs that were probably caused by differences between I7 and previous versions, but gave up after about half an hour – more issues kept popping up). Does anyone know about an I7-compatible version of Zork’s source, please?

Thanks, this looks really promising :slight_smile: Though the main problem is finding the ‘correct’ games first and only then looking for the user traces for those games. Do you perhaps know about any (concrete) games submitted to IFComp that would fulfill my requirements (i.e. preferably large worlds with a scoring system and random elements, ideally random transitions between states or at least a partially randomly generated world)? Not all of these are necessary, but the more, the better (the least needed requirement is probably the scoring system, since one could probably add that to a game fairly easily).

Thank you – this looks very interesting but I have yet to think about how these two games could fit into the chosen frame.

You’re absolutely right about the randomness. I was thinking, though, that it might be possible to add a random arrangement of the game world to Zork (basically randomly generating the map while keeping the original transitions). Not sure if it’s really possible, though, I’ll have to look at I7 if I get a working version of Zork.
Also, could you recommend some of the games with random speech, please? Or even better – is there any way to look for games with such features other than asking others or simply trying to play them all? :slight_smile: (It would be great if the Archive had something like a ‘random’ tag.)

I agree this would be the best solution in an ideal world, but creating such a dataset of games would take too much time. Of course, if I can’t find any suitable (set of) game(s), this is what I’ll have to do :slight_smile:

It’s hard for me to think of any game that meets all of your requirements.

I don’t think I’ve ever played an IF with numbered endings. I’m not sure if any exist. I’m not sure even unnumbered multiple endings are especially common in parser IF.

Untold Riches from IFComp 2015 has randomly generated text passages that appear throughout the story. And there’s a ClubFloyd transcript for it. It’s not that big a world though. And as far as I know there is only one ending. I don’t remember if there’s scoring.

Lobster Bucket is a small Shufflecomp game with a randomly generated map, but no transcripts that I know of.

The Dreamhold has a score, I think (a point for each mask?) and at least one random part of the map. But no transcripts that I know of.

If you want scoring, you might have better luck with older games. At one point I believe Inform, for instance, switched from making scoring on by default to making it off by default, because it wasn’t as popular to use scoring anymore.

The only way I can think of for figuring out whether a game has random elements, without asking or playing, is to search the source code for the word “random” (if it’s in Inform 7). In Inform 7, the relevant phrases will be “at random” or “if a random chance of X in Y succeeds.”
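As a rough illustration, here is how such a source scan could look in Python. The file extensions and the exact patterns are assumptions based on the phrases mentioned above (Inform 7 source is usually a single story.ni file):

```python
import re
from pathlib import Path

# Phrases that signal randomness in Inform 7 source, per the post above.
# These patterns are an assumption; adjust them for the games you scan.
RANDOM_PATTERNS = [
    r"\bat random\b",
    r"\ba random chance of \d+ in \d+\b",
    r"\brandom\b",  # catch-all
]

def find_random_lines(text: str):
    """Return (line_number, line) pairs that mention a random phrase."""
    hits = []
    for i, line in enumerate(text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in RANDOM_PATTERNS):
            hits.append((i, line.strip()))
    return hits

def scan_sources(directory: str):
    """Scan all .ni / .txt files under a directory (path is hypothetical)."""
    for path in Path(directory).rglob("*"):
        if path.suffix in {".ni", ".txt"}:
            for lineno, line in find_random_lines(path.read_text(errors="ignore")):
                print(f"{path}:{lineno}: {line}")
```

This would at least let you triage a large pile of downloaded sources before playing anything.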

If you’re planning on making changes to the game, does that mean you’re limited to games with the source code public?

Another idea: maybe look at the source code of some of Andrew Schultz’s games? I believe he sometimes publishes source code, and I believe some have been played by ClubFloyd (for instance, Threediopolis), and I think he might like to use random cycling text? And his wordplay games might use scoring. I can’t remember.

Oh, wait, I know. Check out Six by Wade Clarke. I think it has random elements, scoring, and a ClubFloyd transcript. And it was in IFComp 2011, so maybe there are IFComp transcripts. And the source is public.

Thank you very much, bg :slight_smile:

It is not that important for the game to have multiple endings as long as there is another metric that can be used to evaluate the player’s performance (e.g. number of steps, occurrence of ‘good’ states, reactions of NPCs, etc.). But of course, the existence (and ‘quality’) of multiple endings is one of the most natural metrics.

I’ll look at the games you’ve suggested, greatly appreciated.

It might or might not be that case – it depends on what changes would be needed. For example, only adding something like scoring or evaluation of player’s performance could be done externally, without actually modifying the game.

The most important feature I’m looking for in the game(s) is probably the following: in order for the AI to be interesting, it should be able to generalise (for example, learn by playing one set of IF games and then successfully play another set of games it has never seen before). In the case of one game (so far it seems that it would be much easier to take one complex game, something similar to Zork), this would correspond to not being able to visit all possible states of that one game during learning – and then, during testing, whenever it encounters a state it hasn’t seen before, it should still perform well.

So that’s why random description of states or random states or random transitions between them are important – that way we’d be sure that the AI can’t brute-force its way through the game by simply remembering it all. These features would make the game either literally or at least virtually (so big that it can’t all be seen in reasonable time) infinite and one could then prove that the AI is actually learning something meaningful.

Ok–hmm. I mentioned Six earlier but Augmented Fourth by Brian Uri might also be worth looking at. It’s relatively large and old-school style, and I think there are some random elements in that spells may or may not work when you cast them, depending on how skilled you are, and NPCs that move around (I assume with some degree of randomness). It also looks like its source is available. I don’t know if there are transcripts available though.

Sounds like a very interesting project - and thank you for the arXiv articles, I had no idea this had been considered before! :slight_smile:

More specifically re: your questions:

  • If you want to run some kind of automated play, there are several interpreters which may be useful (including the HTML ones), but you could also take a look at dumbfrotz (in the source for Frotz): it takes input on stdin and output on stdout, which could be interesting.
  • In the Inform language, the endings are triggered by a “deadflag” variable (see for instance this page). Usually 1 is death (bad) and 2 is winning (good), but authors can define more. If you have the time and technical know-how, you could modify an interpreter (such as Frotz) so that when the value of deadflag changes, it signals to your controller (or whatever you call the program that monitors the AI) that you have reached an ending (and can assign a score or something); and since it’s kind of standardised, that would save you some work. An alternative would be to detect a message like " *** You have won ***" (i.e. detect bold and the ***), or the “would you like to restore, restart” etc. messages (or the default response “please choose one of the above” that you get in that case).
  • I think there are many ways a game can use randomness, and it depends on what you want to do with it: do you just want a little variation in the text/descriptions, or do you want randomness to actually affect the puzzles or the game logic? In any case: if you’re looking for random, changing descriptions, there are a few games (such as, as bg mentioned, the games by Andrew Schultz) that have a bookcase or a TV or something like that, and interacting repeatedly with them will give you different, often cyclic text; however, I believe that most times it’s just flavor text, so it’s easy to overlook or bypass (although it’d be interesting to see how your AI copes with it). I think there are quite a few games that have NPCs who trigger random descriptive text in the room (“Gus is scratching his head”, “Gus looks like he’s going to say something, but just sighs”, etc.), which makes the actual room description change and is thus more likely to throw off the AI. (I’m really not sure which specific games have that, maybe Guess the Verb? It’s a fairly standard trick, though. Ah, maybe Planetfall? Deadline?) If you’re looking for random room connections, I don’t know many games which use them (because they can be unfair to the player), but there might be some mazes like that (I’d be curious to see if an AI can figure out the standard tricks to solve mazes).
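To make the dumbfrotz idea above concrete, here is a minimal controller sketch in Python. The binary name, the game file path, and the ending-banner regex are assumptions based on the description above (dumbfrotz reads commands on stdin and writes output on stdout):

```python
import re
import subprocess

# The standard Z-machine ending banners look like "*** You have won ***",
# "*** You have died ***", and so on.
ENDING_RE = re.compile(r"\*\*\*\s*(.+?)\s*\*\*\*")

def detect_ending(output: str):
    """Return the ending text if this transcript chunk contains one, else None."""
    m = ENDING_RE.search(output)
    return m.group(1) if m else None

def play(game_path: str, commands):
    """Feed a command list to dumbfrotz and return (transcript, ending).

    Assumes a dumbfrotz binary is on PATH; for an interactive AI you would
    read and write incrementally instead of using communicate().
    """
    proc = subprocess.Popen(
        ["dumbfrotz", game_path],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    transcript, _ = proc.communicate("\n".join(commands) + "\n", timeout=60)
    return transcript, detect_ending(transcript)
```

The banner detection is the part that saves you from interpreter hacking; modifying Frotz to report deadflag directly would be the more robust route.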

And of course, if you have the time and the computational resources, I think throwing as many games as you can at your system is a good idea and will only make it better :slight_smile: If not, you might want to look at games set in the same universe or in the same series: there are similar elements, probably the same writing style, and the same kinds of puzzles. Off the top of my head, you can look at the Zork & Enchanter trilogies, the Unnkulia series, the Earth & Sky series, Muldoon Legacy & Muldoon Murders, and the games by Larry Horsfield. It’d be pretty neat if an AI could be trained on Zork I and Zork II and then manage to solve Zork III, or something like that :slight_smile:

Thanks for the suggestions, guys :slight_smile: Before I get to them in detail, there is probably a better way of formulating my requirements for the ‘randomness’:

I would like there to be no sequence of commands (a sequence of commands being simply “n, e, x house, enter house…”) in the game that would always lead to the perfect ending.

For example, in the game Six, four out of six children can always be tipped with the exact same sequence of commands. The remaining two can’t, since one runs in random directions and one hides behind a tree with a random description.

Obviously, if such a sequence exists, the game does not provide much challenge for the AI, since one could just search the space of all possible actions to get the optimal action sequence. That being said, do you know about any games utilising randomness in such a way that the optimal action sequence is (ideally) different every time? Another way of formulating this is that the walkthrough for the game would have to be branched (it would have to use conditions, such as if, when, whenever…), not linear.
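The linear-vs-branched distinction can be sketched as code: a fixed command list versus a policy that picks the next command from the game’s last output. The trigger phrases and commands below are made up for illustration (loosely inspired by the Six example above):

```python
# A linear walkthrough is a fixed command list that always works:
LINEAR_WALKTHROUGH = ["north", "east", "x house", "enter house"]

def branched_walkthrough(observation: str) -> str:
    """A branched walkthrough: the next command depends on what the game
    just printed. All trigger phrases here are hypothetical."""
    if "runs off to the " in observation:
        # Chase in whatever random direction the child ran off.
        direction = observation.split("runs off to the ")[1].split(".")[0]
        return direction
    if "hiding behind" in observation:
        return "look behind tree"
    return "wait"
```

A game that requires the second kind of walkthrough is exactly the kind that can’t be brute-forced by memorising one action sequence.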

Okay, that helps. Two games that come to mind are Paper Bag Princess, which I believe has a fake maze that’s actually random, and Child’s Play, which requires you to interact a lot with NPCs who move around independently. I remember reading in the notes for Child’s Play that the random elements made it difficult to test. (It looks like Child’s Play, at least, has a transcript and source?)

When in Rome II randomly selects an opponent character from several different types at the beginning of the game; different opponents show different characteristics and strategies, so what works for one won’t work on others.

ifdb.tads.org/viewgame?id=qx277z01nf6adwan , if you’re curious.

Thank you again :slight_smile:

Child’s Play does look very interesting indeed, although I do have some trouble understanding the game – I have read a couple of walkthroughs and none of them actually work (I assume that the reason is the randomness). So I’ll probably have to have a thorough look at the source code in order to understand what’s going on under the hood to see if it would fit my requirements. But so far it looks like some events occur randomly, which is exactly what I wanted. Does anyone perhaps know about a more detailed description of the game? I ‘only’ found the hints and a couple of walkthroughs which weren’t complete or only described one instance of the game.

Also, I stumbled upon an AI poll in the IFDB which has Galatea on the top of the list with Child’s Play being second – it also looks very interesting, but I have yet to think about how it could be used.

Emily, is this the list of all the different opponent characters from When in Rome 2? I tried playing a couple of games and I’m a bit unsure whether the optimal strategy against any given visitor can be derived from their description in the game (i.e. does the game give enough hints about which actions, or which types of actions, you should take, or do you need to use the in-game book/guide)? Thank you.

There are some tests in the source. I don’t know if they’d work as a walkthrough or not.

granades.com/games/cplay/source_67.html
granades.com/games/cplay/source_66.html

I forgot to mention I saw those too; the problem is they don’t work either (or at least don’t work consistently, in my tries). They might be helpful in getting some insight, though :slight_smile:

Maybe you’ve already seen it, but there’s an “orderizing” action that seeds the random-number generator with a particular number. Presumably that’s the version that’ll work with the tests. I don’t know if you’ve compiled it from the source, or are playing the already-compiled version, but it looks like orderizing is only possible in a non-released version of the game (i.e. in the Inform IDE). Other than that I don’t have any ideas other than contacting the author, sorry.

Ah, that could be it – I’ll try messing around with the source and also contact the author if needed. Cheers! :slight_smile: