Development tools for incorporating generative sound/visuals

I’m new to IF development and I’m wondering if anyone here knows of a development environment that could be used in conjunction with software that creates visuals and sound in real time, such as Processing, Max/MSP, or SuperCollider?

What kind of IF game are you imagining? Something where you type in commands, or choose with a mouse, or something else?

I was thinking of one where you type in commands, and either the commands themselves or the resulting text could trigger different variables or functions in the media software. Sorry if this is too abstract.

Not at all, it’s a neat idea. I think in general, the more full-featured the IF system (e.g. Inform, TADS), the more difficult the communication between the IF system and the audio/visuals will be. There are IF systems in programming languages like JavaScript, Python, Ruby, etc., so talking to the audio/visual clients will be easier, but usually those IF systems aren’t as full-featured.

A reasonable choice might be Quest, since you can use JavaScript somewhat more easily in that system, and I’m guessing there are JS clients for audio/visual software (maybe using node.js or other tools). The Quest desktop IDE only runs on Windows, however, and the Quest web IDE is not as good as the desktop one.
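For example, once you’re in node.js territory, getting a message over to SuperCollider or Max mostly means sending OSC. Here’s a rough sketch, assuming the node-osc package (its API may vary between versions) and SuperCollider’s sclang listening on its default port 57120; the OSC address and cue name are just placeholders:

```js
// Rough sketch: forward a game event to SuperCollider/Max as an OSC message.
// Assumes the node-osc package; the exact API may differ by version.
const { Client } = require('node-osc');

// sclang listens on 57120 by default; for Max, point this at a [udpreceive] port.
const client = new Client('127.0.0.1', 57120);

// Call this from wherever the IF layer reports events (command parsed,
// room entered, etc.). The OSC address and cue name are placeholders.
function sendCue(name) {
  client.send('/if/cue', name, (err) => {
    if (err) console.error(err);
  });
}

sendCue('thunder');
```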

Another possibility might be Inform, since you could run it in the web interpreter Parchment, which I think gives you access to external JavaScript. The presentation might be tricky; I guess you could put the IF in an iframe within a larger framing page with the visuals.
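If you go the iframe route, the standard way for the game page and the framing page to talk is window.postMessage. This is just a generic sketch of that pattern; how you actually hook the game’s output so it posts the messages (and the triggerVisual function below) is hypothetical:

```js
// Framing page script (generic sketch): the game runs in an <iframe>,
// the visuals run out here in the parent page.
window.addEventListener('message', (event) => {
  // Only accept messages from our own origin (adjust if the game is hosted elsewhere).
  if (event.origin !== window.location.origin) return;
  // Expect something like { cue: 'thunder' } posted from inside the game page.
  if (event.data && event.data.cue) {
    triggerVisual(event.data.cue);
  }
});

// Hypothetical hook; replace with whatever drives your visuals (Processing.js etc.).
function triggerVisual(cue) {
  console.log('visual cue:', cue);
}

// Inside the game page, wherever you manage to run your own JavaScript,
// you would post cues up to the parent:
//   window.parent.postMessage({ cue: 'thunder' }, window.location.origin);
```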

I don’t really have a good recommendation for a system in a language like Python or Ruby; maybe PyF is the best IF system in Python, but that project isn’t maintained or widely used anymore as far as I know.

I think any way you slice it there’ll be some work wiring it all up.

On second thought, perhaps Aetheria is a possibility – code.google.com/p/aetheria/ . It’s in Java, so interfacing with the audio/visual software might be easier, but then again, depending on how Aetheria works (I haven’t used it), it might not be. I also can’t speak to how robust its IF features are.

Yeah, the more developed systems seem to have features for incorporating multimedia but not in the way I’m looking for. I’ll try looking at something like a JS system. Thanks.

It’s possible to embed JavaScript code in Twine, in what is termed a “passage”. You might be able to use Processing.js this way. However, the difficulty lies not so much in how to trigger a sound or change a variable, but in how the application communicates back to the client. For sound specifically, I would think a library like SoundJS might be more suitable. You might take a look at it.

createjs.com/#!/SoundJS
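To give a rough idea, here’s a minimal sketch of the SoundJS side, assuming the library is already loaded on the page and that your Twine version/story format lets you run plain JavaScript from a passage; the file name and sound id are placeholders:

```js
// Minimal SoundJS sketch; assumes createjs/SoundJS is loaded on the page.
// "thunder.mp3" and the "thunder" id are placeholders.
createjs.Sound.addEventListener("fileload", function (event) {
  // Play the sound as soon as it has loaded; in practice you'd trigger
  // this from a passage transition or a parsed command instead.
  if (event.id === "thunder") {
    createjs.Sound.play("thunder");
  }
});
createjs.Sound.registerSound("thunder.mp3", "thunder");
```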

Cheers.

I suppose you could try Zifmia with Inform 7.

A standard Inform 7 trick is to write text to an external file; an outside process watches the file for changes and then triggers whatever else you want to happen. This is simplistic (you’re basically shoving lines of text out of I7) but it works out of the box (no interpreter hacks) and it’s portable. Or as portable as the other end is, anyhow.
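As a concrete sketch of the watching side, here’s roughly what that outside process could look like in node.js; the file name is a placeholder, and where the cue lines actually end up on disk depends on your interpreter and project settings:

```js
// Minimal node.js sketch of the "watch the file the story writes to" approach.
// The file name is a placeholder; where I7 actually puts it depends on the interpreter.
const fs = require('fs');

const CUE_FILE = 'cues.glkdata';
let lastSize = 0;

fs.watchFile(CUE_FILE, { interval: 250 }, (curr) => {
  if (curr.size <= lastSize) return;  // nothing new (or the file was reset)
  const stream = fs.createReadStream(CUE_FILE, { start: lastSize, end: curr.size - 1 });
  let chunk = '';
  stream.on('data', (d) => { chunk += d; });
  stream.on('end', () => {
    lastSize = curr.size;
    chunk.split('\n').filter(Boolean).forEach(handleCue);
  });
});

function handleCue(line) {
  // Forward the cue to whatever renders the sound/visuals,
  // e.g. as an OSC message to SuperCollider or Max.
  console.log('cue from story:', line.trim());
}
```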