Lectrote 1.2.0

I have replaced the older Parchment engine with Dannii’s current ZVM (for Z-code games). The interface is now consistent with the other VMs, and it supports Z-code V3/5/8.

github.com/erkyrath/lectrote/releases

I’ve posted 1.2.1, which fixes some hangs and status-line screwups in Inform 6 Z-code games.

github.com/erkyrath/lectrote/releases

If nobody ran into any hangs or status-line screwups, that tells us something about how much Lectrote is being used for Z-code games. :slight_smile:

I did just play Savoir Faire, but I realized in hindsight that I was using Lectrote 1.1.8.

So if the hangs/status screwups were newly introduced in 1.2.0, then maybe nobody noticed because not many people upgraded.

(Electron has an auto-updater/update-detector thingy, doesn’t it? Maybe turn that on?)

That’s correct. We switched to a new Z-code VM.

I don’t trust auto-updaters. In this case, it would have prevented you from playing Savoir Faire!

(And, in general, the times when I am not updating Lectrote will outweigh the times that I am.)

If you really need to, provide three Z-code VMs (four when I make XJSZM). You can then add VM implementations for other systems too if/when needed.

But I agree with not using auto-update; I also don’t like auto-updates.

Maybe you should also change the title of the top message to say 1.2.1 as well; then it’s easier to see when a new version has been released. (That’s what I do with my own releases, and I think everyone probably should.)

Unfortunately, Electron (and so Lectrote) doesn’t support Windows XP, which may still be in use among IF aficionados, since some of them love everything retro :slight_smile:
I was thinking of adding a couple of interpreters written in C as Native Client modules, but 1) Electron doesn’t support those either, and 2) they would not be able to read/write files from/to disk, as they are sandboxed.
However, it seems that they could be added as Pepper plugins, which Electron runs as trusted non-sandboxed code. Or perhaps just as a subprocess of the Node.js side, if I’m understanding correctly how it works.

There’s a competing project, NW.js, which works on XP and even supports Native Client.
However, what I was thinking about is using the PyWebkitGtk library, running interpreters as subprocesses, and having a smaller download size.
So I’m asking for opinions on this:

  1. Is there a real need to replace zvm.js and Quixe with faster VMs? Or will improvements in V8, together with decompiling into an AST and so on, reduce the difference over time?
  2. What looks to you like the better platform choice for this: NW.js, if I insist on WinXP support, or Python with GTK and WebKit bindings? (All kinds of comparisons welcome: ease of installation, ease of packaging for three or more operating systems, App Store support, …)

Compiling a Glk interpreter with RemGlk and connecting to it through a subprocess would probably be the easiest way to get additional VMs into Lectrote, but it would need to be done for each OS. I’m not sure how the bundling procedure would work for that.

Compiling to JS with Emscripten would also work, and would allow them to be used on the web as well.

Once I adapt Emscripten’s relooper algorithm, I expect ZVM (and a future GVM interpreter I plan to write) to be as fast as or faster than the C terps.

Using NW.js would allow you to reuse most of the electronfs.js Dialog library. Switching to Python would mean reimplementing it from scratch, and probably also changing how the JS connects to it. And hopefully it’s improved by now, but last time I used PyWebkitGtk it was a real mess. But NW.js wasn’t much better then either…

And forget about using Native Client. If you really want something like that, use WebAssembly.

Faster interpreters are never a bad idea. However, there are already interpreter solutions for WinXP. Lectrote isn’t aimed at that target. So I don’t plan to replace Lectrote’s Javascript engines with native-client engines. But if you want to try making such an interpreter, have at it.

I understand that PNaCl gives you the benefits of native-client engines without the portability problems, but I have no experience with it.

So let me sum it up:

  1. ZVM will potentially run at least as fast as Bocfel, and an analogous GVM (once implemented) will run as fast as or faster than Git, so using Bocfel/Git with WebKit could be a lower priority. (I am assuming, maybe wrongly, that the default choices Gargoyle offers are the fastest terps: thus Bocfel and Git.)
  2. The easiest way to target the WinXP desktop with a web UI would be: NW.js + ZVM + Quixe.

And if I don’t go with a web UI and just want somewhat more user-friendly software, I could consider this:
  3. Another possibility is to enhance Gargoyle with GUI dialogs for tweaking its configuration file. This way I could reach Debian users without additional effort (the gargoyle-free package is currently present in the Jessie repository), and I wouldn’t need to distribute Windows builds myself. I would just need them to accept my pull request.

WebAssembly is not fully alive yet, and won’t be for another year. If you want to use it, asm.js is its current analogue.

A desirable project in its own right.

I see. I wasn’t sure what Dannii meant by advising against PNaCl, but my reasons not to use it would be:

  1. it requires some work reorganizing makefiles;
  2. the modules have no access to the local file system;
  3. it’s kind of “proprietary-ish”: if Google decides to abandon it, it is likely to die. Also, the modules wouldn’t be runnable from a Firefox plugin, or in any other browser.

But what I’m interested in is a slightly broader topic, beyond just IF interpreter GUI shells: if I install Lectrote today and then need another Electron-based application on my computer, I end up with two additional Chromium installations, even though I already use Chromium as my default browser.
Another thing is upstream interpreter updates: if I use either Lectrote or Gargoyle, I have to reinstall them completely to get those updates, and whether they even ship such updates depends on their authors or maintainers.

What I would like to have is a standardised platform where other C/C++ programs can also have an external web UI in a similar way: easily installable like Chrome apps, once the user has the framework installed, and easily upgradeable when their modules have new versions. One example: a board game with a chess-like minimax algorithm written in C, for which I have written a jQuery-based UI. I don’t think I can compile it with Emscripten so that it runs fast enough (not that it’s theoretically impossible, but I lack the expertise), so I would like to integrate my UI with its binary in some way that perhaps resembles Glk.
Another theoretical example: an OCR package that is already in the Debian repository, Cuneiform, could have an additional package that uses it as a library and handles the communication with WebKit.

Is there a way to build this upon Glk by adding more primitives to it?
I have no idea how packaging would work outside of Linux; that may be another interesting project, bringing Debian, CentOS, or Gentoo-like repositories to Windows and Mac.
(It’s possible to install programs Windows-fashion on Linux, dragging along disparate copies of every version of all dependencies; but not the other way round!)
The benefit of this modular structure would be instant upgrades of, let’s say, Bocfel soon after a new version is out, without having to wait for the GUI-shell maintainer to update the bundled sources.
Probably an off-topic here, so feel free to banish me to the off-topic subforum with this :slight_smile:

It is possible to add more primitives to the Glk API, but that is set up around the idea of extended input and output capabilities. Trying to cram a general application interface or update system into that API is not going to work out well.

Mac and Windows have good reason to treat applications as sealed bundles with all their dependencies included (aside from core OS libraries). It’s easier for users to understand, easier for users to manage, and hard drive space is cheap. You can use Linux-style package managers – I use Homebrew – but most people don’t.

If you want to manage Electron’s dependencies in a unified way, you could rely on the Node level. Install Electron as a global Node package, install Lectrote’s source somewhere, and then launch it with “npm start foo.z5”.
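Sketched as shell commands, that setup might look like the following. The repository URL is the one from this thread; the `--` separator before the story-file argument is an assumption about how npm passes arguments through, so check Lectrote’s README:

```sh
# Hypothetical npm-level setup: one shared Electron for all apps.
npm install -g electron
git clone https://github.com/erkyrath/lectrote.git
cd lectrote
npm install          # Lectrote's own JS dependencies
npm start -- foo.z5  # recent npm wants "--" before app arguments
```

The trade-off is that you now manage Electron/Node versions yourself instead of getting a sealed bundle.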

Emscripten output runs at about 0.5x native speed as long as you avoid certain pitfalls (I think 64-bit integers and exceptions are the only ones of any relevance). kripken.github.io/emscripten-si … lines.html
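One reason 64-bit integers are a pitfall: asm.js has no 64-bit integer type, so Emscripten has to legalize each i64 into a pair of 32-bit halves with explicit carry handling, roughly like this sketch (illustrative only, not Emscripten’s actual generated code):

```javascript
// 64-bit add on (lo, hi) pairs of unsigned 32-bit values -- the kind of
// expansion needed because asm.js only has 32-bit integer operations.
function addU64(aLo, aHi, bLo, bHi) {
  const lo = (aLo + bLo) >>> 0;    // wrap the low word to 32 bits
  const carry = lo < aLo ? 1 : 0;  // wrap-around means a carry occurred
  const hi = (aHi + bHi + carry) >>> 0;
  return [lo, hi];
}

// 0xFFFFFFFF + 1 carries into the high word:
console.log(addU64(0xFFFFFFFF, 0, 1, 0));  // [ 0, 1 ]
```

Every i64 operation in the hot path pays this kind of overhead, which is why i64-heavy code falls well below the usual 0.5x figure.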

Just use Emscripten. Your browser is that standardized platform. asm.js is supported by all desktop browsers, and whether it dies or not, Emscripten’s abilities will never disappear; they can only be supplanted by something that does the same thing better.
Emscripten’s asm.js output will literally run faster than a Java or Python application, at least in desktop browsers. This isn’t some kind of “theoretically, Java is just as fast as C++” nonsense; you get this performance without doing anything special.
On mobile, performance depends on whether the browsers support asm.js (they don’t). So your performance will suck on mobile, about the same as a normal browser webpage; too bad, I guess. But at least it will run.

You can start an Emscripten project from scratch, or steal code from my project here: github.com/ad8e/Text-Game-Standard-Engine
It’s public domain, so no need for any kind of credit. Its shell file is designed for text output of any kind, not just IF, which is pretty useful.

Actually, I shouldn’t have said all desktop browsers: old versions of IE don’t support asm.js either. I think Edge is the only Microsoft browser that does, and it’s only available on Windows 10.

I agree with zarf that having N copies of Chromium on a desktop machine isn’t a big deal. For Electron apps that don’t run third-party JS, just the JS provided with the application itself, it isn’t even very important to keep Chromium up to date. My perception is that Lectrote is “fast enough” on desktop.

Electron/Lectrote doesn’t work on WinXP, but XP is dead now, completely unsupported by Microsoft for almost two years; it’s already rare, and getting rarer.

IE11 (the most popular version of IE) “supports” asm.js, in the sense that asm.js is just ES5 JavaScript. But it’s pretty slow.

For a while I thought that Emscripten/asm.js on mobile would be the best way forward, and I still kinda think that, but I think we’re at least five years off from that being a good solution for Android devices. Android hardware is seriously underpowered in ways that really matter to single-threaded JS performance.

youtube.com/watch?v=4bZvq3nodf4
twitter.com/slightlylate/status … 5543740416
“much of this is down to L2/L3. iPhones have gobs of it (3MB L2, 4MB L3), Android’s don’t (Pixel: 1MB L2, no L3)”

You can’t buy an Android phone with an L3 cache at any price today in 2017. Even if Qualcomm realized its mistake and started shipping bigger caches now, most Android users wouldn’t see the benefit for years.

For the foreseeable future, mobile CPUs will benefit from tightly constraining memory usage; in practice, that means C compiled to native code. iosglulxe+iosglk are probably as good as it gets on iOS. (iosglulxe doesn’t do JIT compilation like GIT can, but Apple forbids JIT in apps, so you’re stuck there.)

On Android, IMO RemGlk + Git Glulx is our best hope; hence allensocket’s heroic work on exactly this.
https://intfiction.org/t/help-c-devs-android-blorb-image-sound-saving-in-remglk/10985/1

You’d have to turn off the USE_DIRECT_THREADING option in Git, but that’s always been a bit shaky. The Makefile has it turned off on MacOS and on anything using GCC3. I have a feeling it won’t work on Android either.

Hmm, that’s interesting about Android’s poor/missing cache. I wonder how that would affect IF parser terps, which are generally running games with <1MB RAM or stack.

ifvms.js is now using typed arrays, so memory use there will be nicely constrained. What isn’t constrained is the JIT. I could add an LRU cache, but I doubt it would really be effective. It’s very unlikely for the terp to max out a phone’s RAM, and even if the CPU is missing L3, it will handle caching the JIT output better than I could.
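A sketch of the two techniques mentioned here, game memory held in a typed array and a decoded instruction sequence compiled into a JS function, might look like this (illustrative only, not ifvms.js’s actual code; the opcodes and addresses are made up):

```javascript
// Game memory in a typed array: bounds-checked, compact, no boxed numbers.
const mem = new Uint8Array(64 * 1024);   // Z-machine-sized memory
const locals = new Uint16Array(16);      // 16-bit local variables

// "JIT" in the ifvms.js sense: decode a run of instructions once, then
// compile them into one JS function via the Function constructor so the
// engine's optimizer can work on straight-line code.
// Pretend we decoded:  add local0, 5 -> local0 ; storeb 0x10, local0
const compiled = new Function('mem', 'locals', `
  locals[0] = (locals[0] + 5) & 0xffff;
  mem[0x10] = locals[0] & 0xff;
`);

locals[0] = 7;
compiled(mem, locals);
console.log(locals[0], mem[0x10]);  // 12 12
```

The cache-pressure question above is about how many of these compiled fragments accumulate, since each one is a separate JS function the engine must hold compiled code for.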

I think it will come down to what I’ve been saying for years: minimising function calls. Since discovering zmach, I’ve got an even better idea of how to adapt the relooper algorithm.

Of course, choice-based fiction has a whole other set of performance questions, being less JS-intensive but often more DOM-intensive.

As we close out August, things have really improved on this front.

  1. Chris Spiegel has taken the terps from Gargoyle into their own GitHub project and created fresh CMake files. github.com/WakeRealityDev/Thunderquake_Qt

What we are lacking is a user interface. The backend is all there and solid: you can run multiple terps, and RemGlk is extremely flexible in terms of license issues when mixing closed-source and open-source code, since you are not linking against anything, just exchanging JSON data. Performance is perfect; the overhead of JSON is not significant.
What’s missing is visual design and user interface: how to cope with mobile keyboards and screen sizes, and so on. Popular apps are out there, like Son of HunkyPunk / Text Fiction / etc., that could be adapted to work with RemGlk, and I’ve released some demonstrations of that with Text Fiction with some success. But I really encourage people with better visual skills than I have to make some fresh front-ends for presentation of the JSON. Especially on mobile screens, where people want pinch-to-zoom, different font sizes, rotation, and keyboard hinting, there is a lot of opportunity for creativity.

I made some big Android-specific mistakes in the Thunderword design in the first half of the year with the JSON interface, and that really discouraged me and slowed me down, but I’ve solved that this week. A friendly and elegant user interface is really the thing we need most. Ideas, drawings, mockups, and of course complete apps, are welcome and encouraged :wink: HTML front-ends are welcome too; there is no real performance reason to avoid a WebView front end. I hit this thread to look at some parts of Lectrote, in terms of front-end.

Still working away :wink: More to come.