This is really nice. It would have been even cooler if it were running on the HDMI-attached monitor, and then massively cooler if you could write lisp primitives that let you draw on the screen; then you could create an entire environment written in lisp (or smalltalk, or forth).
If you are reading this and you work for Broadcom or have some sway there, why not encourage them to make that possible by freeing the GPU from its proprietary bindings?
From: Hans Hübner, Sep 10, 2012 (OP of that G+ post)
I have not looked into OpenGL, but it'd certainly be good to have a faster way to draw on the screen. VECTO and bitmap copying are not suitable for anything that is supposed to move.
This is my experience as well. You can mmap /dev/fb0 and draw on it, and you can use something like directfb or even Cairo if you're not into lisp, but if you want to do even trivial 2D acceleration you are out of luck.
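For anyone curious, the mmap-/dev/fb0 part looks roughly like this in C (a minimal sketch with error handling omitted; it assumes a 32 bpp mode, which is not guaranteed on every setup):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

        size_t len = finfo.line_length * vinfo.yres;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* paint a 100x100 white square in the top-left corner (32 bpp assumed) */
        for (int y = 0; y < 100; y++)
            for (int x = 0; x < 100; x++)
                *(uint32_t *)(fb + y * finfo.line_length + x * 4) = 0xFFFFFFFF;

        munmap(fb, len);
        close(fd);
        return 0;
    }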
Re-rasterizing and blitting the whole screen on every change is clearly unacceptable, but there's a large gap between that and outright HW acceleration. Being smarter about redraws gets you a long way. I have used things like web browsers on unaccelerated X11, and it's been totally usable even on ARM.
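To make "smarter about redraws" concrete, the usual trick is to track a dirty rectangle per update and copy only that region from an off-screen back buffer, instead of re-blitting the whole screen. A rough sketch (the names and the 32 bpp layout are my assumptions, not from any particular toolkit):

    #include <stdint.h>
    #include <string.h>

    struct rect { int x, y, w, h; };

    /* Copy only the dirty rectangle from the back buffer to the framebuffer.
       Both buffers share the same pitch (bytes per scanline) and are 32 bpp. */
    static void blit_dirty(uint8_t *fb, const uint8_t *back, int pitch,
                           struct rect d)
    {
        for (int row = 0; row < d.h; row++) {
            size_t off = (size_t)(d.y + row) * pitch + (size_t)d.x * 4;
            memcpy(fb + off, back + off, (size_t)d.w * 4);
        }
    }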
But apart from educational purposes, why would you do that? Sure, the graphics driver is an ugly blob, but if you care that much about libre hardware/software, it would be odd to buy a Raspberry Pi in the first place.
Granted, "for educational reasons" is a valid subject for the Pi. Get that old Abrash book out and let the OpenGL-spoiled kids feel the pain.
An SBCL port to a new architecture is far from easy.
Portability might have been a design goal, but ease of porting wasn't. The amount of platform-specific code is only small in relation to the system as a whole. In the classic paper describing CMUCL, Rob MacLachlan has a great line about "porting taking 2-4 wizard-months". Things have progressed a bit since then, so you don't need a wizard. But the time estimate is probably still valid.
So why is it a non-trivial task? Well, first of all you'll need to add instruction descriptions so that the table-driven assembler and disassembler work for that new arch. There might be regularities in the instruction set that make this easier, but on the other hand any mistakes will make debugging much harder than you'd like.
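As a toy illustration of what "instruction descriptions" means (a simplified C sketch, not SBCL's actual machinery, which defines these with Lisp macros): each entry pairs a mnemonic with the fixed bits of its encoding, and the assembler and disassembler walk the same table in opposite directions.

    #include <stdint.h>

    /* Toy description of a register-register ALU op.  A real backend needs
       an entry like this (plus operand constraints) for every instruction
       the compiler may emit or the disassembler may meet. */
    struct insn_desc {
        const char *name;
        uint32_t    opcode;   /* fixed bits of the encoding */
    };

    static const struct insn_desc alu_ops[] = {
        { "add", 0x00800000 },   /* simplified, loosely ARM-shaped encodings */
        { "sub", 0x00400000 },
        { "and", 0x00000000 },
    };

    /* Encode "op rd, rn, rm" by or-ing the register fields into the fixed
       bits; the disassembler matches the fixed bits back to a table entry. */
    static uint32_t encode(const struct insn_desc *d, int rd, int rn, int rm)
    {
        return d->opcode | (rn << 16) | (rd << 12) | rm;
    }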
Then you get to translate 5k-10k lines of assembler templates from whatever existing backend you decided to start from. That's actually just tedious, not hard, unless none of the backends is really close. My understanding is that ARM has lots of warts about e.g. which values are easily representable as immediate values. Some parts of this work will be trickier, like adding the support for the platform's native ABI.
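On the ARM immediate wart, concretely: classic 32-bit ARM data-processing instructions can only encode an 8-bit value rotated right by an even amount, so the backend needs a predicate along these lines (a sketch, not SBCL's code) to decide whether a constant fits in one instruction or has to be synthesized in several or loaded from memory:

    #include <stdbool.h>
    #include <stdint.h>

    /* Rotate a 32-bit value right by n bits (n in 0..31). */
    static uint32_t ror32(uint32_t v, unsigned n)
    {
        return n ? (v >> n) | (v << (32 - n)) : v;
    }

    /* True if val is encodable as an ARM data-processing immediate:
       an 8-bit value rotated right by an even shift. */
    static bool arm_immediate_p(uint32_t val)
    {
        for (unsigned s = 0; s < 32; s += 2)
            if (ror32(val, s) <= 0xFFu)
                return true;
        return false;
    }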
That gets you far enough to theoretically compile the system. It will almost certainly crash on startup, before it has even managed to load enough of the state required for debugging itself. So the workflow will consist of figuring out where something odd is happening, and then collecting and cross-correlating various bits of information (single stepping in gdb, gdb disassemblies, annotated assembler code from the compiler) around the problematic bit of code.
There will be dozens of bugs like that between this stage and a working Lisp prompt. The first time floating point code gets used, the first time the compiler is called during startup, the first explicit call to C code, the first call to the garbage collector (or rather what happens right after that first call, when e.g. the GC has mangled some relocations), and so on. Doing the bring-up for a merely compiling SBCL port was probably the hardest programming I've ever done.
I'd bet that none of the CCL ports were just "easy" either.
A small hack could be to use a modern, sophisticated compiler like clang to produce optimized assembly for an idiom you need, written in C. For example, run clang -S a.c and then change and hard-code what you like.
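Something like this (the function is just an illustration of "an idiom you need", not anything from the port):

    /* a.c -- the idiom you want fast assembly for */
    unsigned popcount32(unsigned x)
    {
        unsigned n = 0;
        for (; x; x &= x - 1)    /* clear the lowest set bit each pass */
            n++;
        return n;
    }

Then clang -O2 -S a.c gives you a.s to tweak and hard-code; if you are cross-compiling for the Pi, add a --target= triple for your toolchain.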
Yes, it is a non-trivial task, but it might make you a much better engineer, the way a difficult journey into the unknown improves you.
Any system based on a C VM is going to be a lot easier to port than something that compiles to native instructions. Also look at the Gambit-C implementation of Scheme, which compiles to C. The C itself is arranged like a VM, and it's very fast (the biggest issue with this approach is incremental compilation, where you need trampolines; the more code you block-compile, the more of this overhead you can avoid).
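The trampoline idea, in a rough C sketch (not Gambit's actual generated code, just the general shape): each compiled procedure returns the next procedure instead of calling it, so tail calls never grow the C stack, and a driver loop does the dispatch.

    #include <stdio.h>

    /* Each "compiled" procedure returns the next one to run instead of
       calling it, so tail calls never grow the C stack. */
    typedef struct thunk thunk;
    struct thunk { thunk (*fn)(long *acc); };

    static thunk step_done(long *acc);

    static thunk step_count(long *acc)
    {
        if (*acc >= 1000000)
            return (thunk){ step_done };
        (*acc)++;
        return (thunk){ step_count };   /* "tail call" to ourselves */
    }

    static thunk step_done(long *acc)
    {
        printf("%ld\n", *acc);
        return (thunk){ NULL };         /* NULL stops the trampoline */
    }

    int main(void)
    {
        long acc = 0;
        thunk next = { step_count };
        while (next.fn)                 /* the trampoline / driver loop */
            next = next.fn(&acc);
        return 0;
    }

Within a block-compiled unit the compiler can jump between these states directly (e.g. with goto) and only pays for the trampoline when control leaves the unit, which is why block-compiling more code reduces the overhead.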
Thanks for the link, it was PreScheme I was thinking of. You can make a PreScheme to native code compiler. Portable Standard Lisp also uses the same approach now (PSL->SYSLISP->C), but I think when it started out it may have compiled to machine code.