Ian wrote: Edit: well I tried messing with the multiplier, but it made no difference to playback. So yeah, it could well be some sort of floating point bug. The question is, if it is, how can we even check this? Buy an old Mac with a 603e processor? lol
If I recall correctly, there is some sort of PowerPC emulation test suite written by Gwenole Beauchesne (who wrote the SheepShaver Mac emulator). We used part of it at some point when writing our PPC emulators ca. 2003-2006 during the first Supermodel project. I don't know if it thoroughly exercises the floating point instructions, though.
[quote]
The PowerPC architecture provides for hardware to implement a floating-point system as defined in ANSI/IEEE Standard 754-1985, IEEE Standard for Binary Floating-Point Arithmetic.
Sounds similar to what I assume modern CPUs would comply with?
[/quote]
Yup. I do think that there can be some implementation differences, though. For example, Intel chips have that weird 80-bit floating point type, and I thought I had read somewhere once that double precision operations on Intel are performed in this 80-bit format internally before being down-converted back to 64 bits.
The standard also recommends extended format(s) to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors: the standard only specifies minimum precision and exponent requirements for such formats. The x87 80-bit extended format is the most commonly implemented extended format that meets these requirements.
Maybe these older CPUs didn't use any kind of extended float format?
Hah! I had just written about that above. Yes, this is indeed true. But could that really make such a large difference?
I wonder if a lower-hanging fruit is to check whether there are any instructions performing compound operations (like multiply-and-add) where the emulator is doing something that could be causing a loss of precision. There's also always the pesky issue of whether some godforsaken floating point status flag is actually being used somewhere but being emulated incorrectly.
Another project that would have encountered this kind of stuff is the Dolphin emulator. The GameCube and Wii libraries are vast and they've had to deal with all kinds of weird behavior. They have a dev log I think and it'd be interesting to do some searching to see if they've documented any unusual floating point issues. We could even try to finally just port the Dolphin core (I think Nik had done so at some point but I never saw the code).
Finally, we can see if there is a C library that implements the IEEE standard with 64-bit precision directly (using integer math, not falling back on processor-specific floats and doubles).
EDIT:
4.0-2729 - Fix Floating-Point Multiply-Add
EDIT 2:
I looked at the Dolphin code. The interpreter doesn't seem to be doing anything special, but the JIT is definitely doing something. It's hard to follow, but I can probably figure it out. If I understand correctly, the trick is that the PowerPC internally truncates the mantissa to 25 bits. Embarrassingly, I've never really studied floating point encodings thoroughly, but I think the mantissa is the fraction part plus an implicit leading bit. IEEE 64-bit has a 52-bit fraction, so if you drop 28 bits, as a comment in the Dolphin code suggests, you are left with 24 bits, and the implicit bit then makes 25. This might not be that difficult to implement. The comment says this happens to the RHS of every multiplication instruction. Their PowerPC is a different model, though, and a cursory search doesn't turn up any documentation of this phenomenon, so I am not sure how they deduced it.