Purposes of archiving

We just had a game (by Jmac) disqualified from ShuffleComp because he used my multiplayer MUD platform to build it. The ShuffleComp rules say, basically, “Your game must be archivable.”

These sorts of issues will continue to come up, as “the IF community” extends to cover more diverse tools and platforms and assumptions about how games are handled.

My blog post on this:

gameshelf.jmac.org/2014/05/purpo … archiving/

I’ll repeat some of it, to spark additional discussion…

Assume that in the future, more IF will come along that undermines our archiving assumptions. We can imagine many possible reasons:

  • Inherently multiplayer games
  • Games that draw on real-time Internet data (Twitter, current news headlines, etc.)
  • ARG-style games that are hosted on social media services for reasons of authenticity
  • “Living” games, where only one “live” copy exists and is passed from player to player
  • Games built in proprietary web services
  • Commercial games, where the author does not want free copies distributed
  • Games that run afoul of parochial laws
  • …?

These are not hypotheticals in the IF world. Consider Blueful, Winterstrike, Naked Shades, the whole Versu story, etc.

The question is, how do we support our desire to save stuff without stifling or rejecting IF in these categories?

Archiving covers many needs, so let’s split that up as well.

  • Long-term preservation: you want to play a game years after the original web site has vanished.
  • Short-term offline use: you want to download a game and play it without direct Internet access.
  • Medium-term reference: you are writing a web page about IF games (e.g., your own games, or the entries in a specific competition) and you want to link to the games without hosting your own copies.
  • Discoverability: having all games on the same web site makes it easier to find things.
  • Academic study: you want to learn how a game was constructed.

As the author of Winterstrike, I am selfishly interested in this topic because I can’t figure out a good way to archive the doggone thing. Everything got typed or copy-pasta’d from a Word .doc into a web interface. To my knowledge there’s no good way to extract the text, and my local copy (text only) is no longer up to date, because “syncing” the corrections (after playtesting, plus Failbetter Games’ adjustments of game balance) would have had to be done manually, by more copypasta, and I have an RSI that got triggered very badly when I was coding the thing. I should probably still copypasta the game text before it vanishes forever, even if it won’t be playable as a .doc or text file.

But in general, yes; these are really good points. I hate the thought of games evanescing because there’s no good way to save them.

In my small way, I’m privately taking care of long-term preservation whenever I can.

Richard Evans and I did a number of things (conference talks, public demos, blog posts, a peer-reviewed journal article) to share and preserve as much information as we could about what Versu was doing, given that neither the code nor the games could themselves be made freely available. Of course, a lot of that information then got locked away behind academic paywalls or in GDC Vault – so a number of the papers and talks are archived, but only if you have the right kind of access, often an expensive kind of access that comes with already being in a privileged industry or academic position.

This sucks, but we would have had to pay something like $1400 out of pocket (I can’t recall the exact figure, but it was on that order) to make the academic paper open-access rather than paywalled. For GDC Vault, I’m not sure the speaker has the option to make their talk public-access, even for money. (I haven’t tried, though.)

Conversely, I can make stuff available on my blog or by uploading a PDF to the IF archive, but then it won’t turn up in journal searches when academic researchers are considering similar projects, and it won’t necessarily look plausible enough to cite in literature reviews.

Using a client-side applet might be a way; all you have to do is preserve the downloadable applet. And by applet, I don’t necessarily mean Java; even a Glulx file would work, which can either be downloaded every time you play the game, cached (along with the save file), or interpreted on the server directly (which might require smart use of separate processes {“process” as in Linux’s fork()} for each player). My 2 cents (not that a penny is worth anything nowadays).
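For what it’s worth, the fork-per-player idea is easy to sketch. Below is a minimal stand-in (not anyone’s actual platform – the port, greeting, and echo behavior are all invented) showing the shape: the parent process accepts connections, and each forked child serves one player, where a real server would run that player’s interpreter session.

```python
# Sketch of "one process per player": a TCP server that fork()s a child
# for each connecting player. Unix-only, since it relies on os.fork().
# Everything concrete here (port 4000, the messages) is invented.
import os
import signal
import socket

signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # don't leave zombie children

def handle_player(conn):
    """Child process: run one player's session (here, just an echo loop)."""
    conn.sendall(b"Welcome! (stand-in for a server-side Glulx session)\n")
    while data := conn.recv(1024):
        conn.sendall(data)  # a real server would feed this to the interpreter
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 4000))
server.listen()

while True:
    conn, _addr = server.accept()
    if os.fork() == 0:    # child: serve this one player, then exit
        server.close()
        handle_player(conn)
        os._exit(0)
    conn.close()          # parent: the child owns the socket now; keep listening
```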

EDIT1: In the particular case of Twine, would it be possible to compile all the HTMLs and CSSs (and whatever else) into a single CHM?

I don’t think that’s really an issue; at the very least, I’ve never had a problem playing Twine games offline.

There are Twine games which include images, video, etc. Downloading all the linked files can be a problem. I know cklimas was thinking about it, though – the plan may be to shove everything into the HTML with data: URLs.
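(For anyone wanting to do that by hand today, here’s a rough sketch of the idea – the file names are invented, and it assumes the HTML references the image by the same relative path you pass in:)

```python
# Sketch: inline a linked image into a Twine HTML file as a data: URL,
# so the game becomes a single self-contained file. File names are examples.
import base64
import mimetypes

def inline_image(html_path, image_path, out_path):
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    mime = mimetypes.guess_type(image_path)[0] or "application/octet-stream"
    data_url = f"data:{mime};base64,{b64}"
    with open(html_path, encoding="utf-8") as f:
        html = f.read()
    # Swap the external reference for the embedded copy.
    html = html.replace(f'src="{image_path}"', f'src="{data_url}"')
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(html)

inline_image("game.html", "cover.png", "game-archivable.html")
```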

A Twine game that cares about getting archived will include all the files - some already do.

A Twine game that doesn’t include them either doesn’t care about getting archived or is dependent on non-downloadable files. Either way, there’s no point in trying to get all those files.

Also, with the latest version of Twine, images are now part of the file rather than having to be linked externally, I believe. That takes care of most linked-to files right there.

Importing images is optional with the newest versions of Twine - Surface, my Spring Thing entry, takes advantage of this.

“Built in web services” and “multiplayer” are two things I’m working with in some of my newer experiments. One, for instance, only uses a primitive IF parser as a component of a graphical game… so I don’t think it’s even appropriate for something like ShuffleComp or any other major IF comp.

However, another one I’m working on (which sort of sounds like it’s tapping into your MUD platform?) is a text adventure that is heavy on combat and stats and has a graphical window… but all character/monster/etc. statistical data is stored and loaded via a remote MySQL database and JSON. Such a thing would be totally unplayable standalone – you couldn’t get the whole experience. In such a case the only thing I can think of to do is run the server facilitating the game as long as possible, then hand out the “server source”.
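(To make that dependence concrete, here’s a minimal sketch of the pattern – the table, fields, and /monsters route are all invented, and sqlite stands in for the remote MySQL server. “Handing out the server source” would mean handing out something like this plus the real data:)

```python
# Sketch: game stats live in a database and are served to the client as
# JSON. sqlite3 keeps the example self-contained; a real deployment would
# point at the remote MySQL server. Schema and route are invented.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE monsters (name TEXT, hp INT, attack INT)")
db.execute("INSERT INTO monsters VALUES ('grue', 30, 7)")

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/monsters":
            self.send_error(404)
            return
        rows = db.execute("SELECT name, hp, attack FROM monsters").fetchall()
        body = json.dumps(
            [{"name": n, "hp": hp, "attack": a} for n, hp, a in rows]
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)  # the game client parses this JSON

HTTPServer(("", 8080), StatsHandler).serve_forever()
```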

I definitely found importing images helpful for a few of my recent projects. I just wish there was a way to import sound files as well. That’s probably a bit beyond Twine’s development team, though.

I guess my thoughts are: if people want to make non-archivable games that’s okay but they should be aware that they’re making something less permanent that way.

It’s like live performance art, here today, Cold as Death tomorrow. We still see plays and go to concerts, watch live poetry performances regardless of the fact that the event will never occur exactly again.

As a courtesy to archivists and future historians, we could encourage Let’s Play videos of non-archivable games.

Em, I’m not sure if I completely understood your comments above. But it seems to me that if you created papers, presentation materials, etc. on the subject of Versu or any other topic, you own the copyright and can do whatever you like with those materials. I’ve spoken at many conferences and written many articles for publication and have never had a problem posting the materials.

You may not be able to continue to count on that if you publish in an academic journal. For-profit academic publishers, in particular, can be very aggressive about making sure that no one gets to read an article unless they are affiliated with an institution that has paid buckets of money to allow them to do so.

There was a righteous rant somewhere that I can’t find right now, about how the social contract between academics and publishers used to be that we work for them for free (no one gets paid for publishing in or reviewing for a journal) and they help us make our work accessible, but that increasingly they are making their money by making our work inaccessible.

Matt w, I get your points. I understand why academics who are interested in tenure or recognition from their peers would choose to publish in an academic journal. But I’m not sure why anyone else would bother. And I wonder how likely it is that a publisher (academic or otherwise) would take any action (legal or a takedown notice) against an individual for publishing their own work on their own web site. Sometimes, I think you gotta just say wtf…

If a paper is worth publishing, then it’s worth publishing in a journal that at the very least allows self-archiving.

The publishers will, in fact, send takedown notices against authors publishing their own papers on their own websites. See here for example.

It’s worth noting this passage from the referenced link’s Q&A about Elsevier:

I thought self archiving was fine. Don’t we have an open access policy now?
UC’s Open Access Policy, adopted July 24, 2013, enables faculty to self-archive their articles published after this date. Articles published before this date can often be self-archived depending on the rules of the journal. However, it is important that authors self-archive the “author’s final version” rather than the “published version” of their article (see below). These published versions are being targeted by the takedown notices.

So perhaps there are ways to make the content available if the authors choose to do so.

It’s a complicated issue, but as an author you should always be clear what rights you’re giving up and what you’re getting in return.

Also, if you are not allowed to publish the journal’s official PDF, you can still publish one with the same page numbers and page breaks.
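(In LaTeX, for instance, matching the pagination can be as simple as starting the page counter where the journal version starts and forcing breaks where the published copy breaks – the page number here is an invented example:)

```latex
% Sketch: typeset a self-archived copy whose pagination matches the
% journal version. The starting page (213) is an invented example.
\documentclass{article}
\setcounter{page}{213}  % begin numbering where the journal version begins
\begin{document}
% ... article text, with \newpage inserted wherever the published
% version breaks pages, so page-number citations still line up ...
\end{document}
```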

So! We’ve now posted a variety of content here:

versu.com/about/how-versu-works/

to the extent that we could do that. (In some cases we have just slides from a talk or a version of a paper somewhat different from the published journal version.) Speaking of which, the project is not dead after all:

versu.com/2014/06/06/news-about- … d-laurels/