Accuracy takes power: a 3GHz quest to build a perfect SNES emulator (2010) (arstechnica.com)
64 points by shawndumas on Dec 28, 2013 | 22 comments



Interesting comment from the first link (590 days ago, before the PS4/Xbox One had been announced) (source: https://news.ycombinator.com/item?id=3988628):

> What worries me is if future generations will be able to enjoy the games of today. Will it ever be feasible to emulate a PS3 to the level demanded here? Will my grandchildren in 50 years be able to play GTA 6 on a PS4 emulator? Processing power does not appear to scale to allow this, and there will barely be any of today's consoles still alive by then (also, I doubt 2060s television sets will have HDMI input).

My response would be that the trend towards more PC-like architectures (traditional x86 systems rather than something like http://en.wikipedia.org/wiki/Cell_(microprocessor)), plus the fact that it's already easy to build a similarly priced ($400-$500) PC with performance superior to the new consoles (historically rare at launch), may mean it's easier to accurately run these games in the future than it is for a Nintendo 64 or SNES. On the other hand, the original Xbox (circa 2000) was a traditional x86 system running an only slightly modified Windows with DirectX, and as far as I know it isn't emulated well.

I think the search for accuracy will probably be resolved in the coming decades by producing accurate, cheap clone-hardware "dongles" that offload the processing, interfaced to a computer via some future fast bus.


As for the Xbox, I think a lot of it just comes down to documentation. It's clearly technically possible: Xbox backwards compatibility on the 360 was achieved with software emulation.


Not only is it possible, it's been done: http://www.caustik.com/cxbx/

It runs a few commercial games, but probably not all that well. I wonder why more people haven't bothered to contribute.


It wasn't generic though, IIRC; it was done on a per-game basis. (The Xbox was an x86 arch, but the Xbox 360 was a PPC arch.)


Never mind PS3, we don't have Sega Saturn yet.


There are Saturn emulators that play some portion of the system's library: http://www.zophar.net/saturn.html

Complete speculation, but most emulator devs probably just see it as a waste of time considering how few good games exist for the system.


I think the obsession with accuracy is actually a bad thing. The author glosses over it, but many games are improved by slightly higher clock speeds. Game-breaking bugs are usually only a problem in lower-end games where the devs were doing something hacky.

Personally I always found it frustrating when new versions of MAME would make games unplayably slow on my system in the name of accuracy.

The really important question is: are we trying to preserve the games or the hardware? If we're trying to preserve the games, then fast emulation with special-case hacks for the games that need them seems like the way to go. There were a limited number of games released for each system, so accurate emulation is just a waste of time.


It's very true that a bit of extra horsepower in the right place can make a game better than the original, even if it's something as simple as a boosted framerate.

But I don't think accuracy is a waste of time. Accuracy can even be the easier method if you have proper specifications for the chips, with the difficulty moving into optimization.


I don't see why it is a bad thing. There are multiple types of emulators out there, which is awesome! This way, if someone has a low-powered computer, he/she can use one of the "less accurate but speedy" emulators, but if someone has the compute power or wants to play a rare game the others can't handle: bsnes to the rescue.


It has always seemed to me that the reason accurate multi-chip emulation is slow has to do with trying to serialize an effectively parallel computation—a bit like why symbolic AI never got off the ground.

Wouldn't a better approach to this problem involve some sort of FPGA expansion card (or OpenCL code to get the GPU to mimic one) programmed at runtime, for each game, with the traces of the original chips composing it?
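To illustrate the serialization problem, here's a minimal C sketch of the lockstep scheduling an accurate multi-chip emulator has to do; the chip names, clock ratios, and step bodies are all invented for illustration:

    #include <stdint.h>

    typedef struct { uint64_t clock; } Chip;  /* position on the shared master clock */

    static Chip cpu, ppu, apu;

    /* Invented clock ratios: each step advances its chip by that step's
       cost in master-clock cycles. */
    static void cpu_step(Chip *c) { c->clock += 6; }   /* one CPU cycle */
    static void ppu_step(Chip *c) { c->clock += 4; }   /* one video dot */
    static void apu_step(Chip *c) { c->clock += 24; }  /* one audio tick */

    /* Always step whichever chip is furthest behind, so each chip sees the
       others' state at exactly the right instant. The real chips run in
       parallel; the host performs all of this strictly one step at a time. */
    void run_until(uint64_t target) {
        while (cpu.clock < target || ppu.clock < target || apu.clock < target) {
            if (cpu.clock <= ppu.clock && cpu.clock <= apu.clock)
                cpu_step(&cpu);
            else if (ppu.clock <= apu.clock)
                ppu_step(&ppu);
            else
                apu_step(&apu);
        }
    }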


I agree.

I believe OpenCL is the way to go. I wrote a very simple Sega Genesis emulator, and basically what you do is use a very slow CPU to control very fast hardware that does very simple things, like drawing tiles.

The CPU controls a tile with something like a pointer, but the tile is drawn by the hardware. Same with sound.

It is not hard to do, it just takes time.
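For illustration, a hedged C sketch of that division of labor (the packing is tile-like in the Genesis style, but the names and details are invented, not the real VDP registers): the emulated CPU only stores a tile number, and this routine plays the part of the fast video hardware:

    #include <stdint.h>

    /* Genesis-style tiles: 8x8 pixels, 4 bits per pixel, two pixels packed
       per byte, so each tile row is 4 bytes. */
    void draw_tile(uint32_t *fb, int fb_pitch, int x, int y,
                   const uint8_t *tile, const uint32_t *palette) {
        for (int row = 0; row < 8; row++) {
            for (int col = 0; col < 8; col += 2) {
                uint8_t px = tile[row * 4 + col / 2];
                fb[(y + row) * fb_pitch + (x + col)]     = palette[px >> 4];   /* left pixel  */
                fb[(y + row) * fb_pitch + (x + col + 1)] = palette[px & 0x0F]; /* right pixel */
            }
        }
    }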


PS: For those interested, programming in OpenCL/OpenGL is way harder, more painful, and slower (in developer time) than programming on the CPU.

In particular, in OpenCL you extract all the performance from knowing how to manage memory. We're talking orders of magnitude of difference (100x-200x faster).

This memory management is hell for anybody trained on garbage collectors, object-oriented constructors, and so on. It is way harder than pure C, and the feedback loop is way longer (unless you invest a lot in hardware).
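To illustrate (these two OpenCL C kernels are invented examples, and the exact numbers vary by GPU), the only difference below is the memory access pattern, yet on most GPUs the coalesced version is dramatically faster:

    /* Coalesced: work-item i touches word i, so neighbouring work-items
       read neighbouring memory and the GPU services a whole group of
       them in one transaction. */
    __kernel void copy_coalesced(__global const float *in, __global float *out) {
        size_t i = get_global_id(0);
        out[i] = in[i];
    }

    /* Strided: the same copy with scattered addresses defeats coalescing
       and can cost an order of magnitude or more in bandwidth. */
    __kernel void copy_strided(__global const float *in, __global float *out,
                               uint stride) {
        size_t i = get_global_id(0);
        out[i * stride] = in[i * stride];
    }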


In this case, I don't think the emulator would be "programmed" in OpenCL; instead, what would be programmed in OpenCL would be a generalized cellular-automata-style FPGA "emulator." (This would be deep magic, but it would only need to be done once.)

Then you'd "flash" the trace of the chips you want emulated to this virtual FPGA, just as if you were deploying it to a real one.


> serialize an effectively parallel computation

Our computers pretend to run hundreds to thousands of processes simultaneously using only one to eight physical cores (numbers common for personal use at this point in the historical record).

It's easy to pretend to be parallel when you are operating at 2 microseconds per loop.


Interesting. The mentioned emulator (bsnes) appears to have source code in the Ubuntu repositories, so I'm going to try taking a look at it.

On an unrelated note, does anyone here have a sense of how useful semi-specialized hardware could be for emulators? Obviously, emulating a specific system in hardware is easy (at least in principle). However, would it be possible to create hardware specialized for emulators in general? This would probably be a multi-core device whose cores could synchronize with each other at a configurable rate. The obvious downside to any hardware approach is that if the assumptions are wrong, it is likely far more difficult to patch the problem without sacrificing most of the benefits hardware gives you.


FYI: this project is now called Higan http://byuu.org/higan/


Many commenters (in this and the two previous threads for the same link) have already suggested an FPGA solution for accurate and high performing emulation.

I wonder if memristors[1] would enable an even more powerful approach: just download the model for the chip to emulate and have the memristors configure themselves accordingly.

[1] https://en.wikipedia.org/wiki/Memristor


I love this article! The extremes are very interesting. I think this is the edge case of a continuum of preservation practices. I reference the author in my article on software preservation, offering other examples of different approaches and efforts: http://www.emeraldinsight.com/journals.htm?articleid=1709237...


There is a Sega Genesis emulator, blastem, whose premise is that even when extreme accuracy is the primary goal, it can still be done efficiently.

http://rhope.retrodev.com/files/blastem.html


Hmm, I wonder how much speed it would take to emulate an Amiga, then.




