Master: Frame 1
send duration 0 size 7 offset 128
rec duration 0
send duration 0 size 7 offset 128
rec duration 0
END FRAME
Slave: Frame 1
rec duration 1676
send duration 0 size 7 offset 128
rec duration 0
send duration 0 size 7 offset 128
rec duration 57
send duration 0 size 7 offset 128
END FRAME
send duration 0 size 15872 offset 256
rec duration 0
send duration 0 size 7 offset 128
rec duration 0
send duration 0 size 15872 offset 256
rec duration 59
send duration 0 size 7 offset 128
rec duration 1
NETBOARD TIME 73
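For anyone else poking at these dumps: below is a quick, throwaway helper (my own sketch, not netboard or Supermodel code) that parses the send/rec lines above and sums the logged durations and payload sizes so they can be compared against the NETBOARD TIME value at the end of each frame.

// Hypothetical log-analysis helper for the trace above.
// It only understands the "send duration ... size ... offset ..." and
// "rec duration ..." lines; everything else (Frame headers, END FRAME,
// NETBOARD TIME) is skipped.
#include <cstdio>

int main() {
    char line[256];
    unsigned dur = 0, size = 0, off = 0;
    unsigned long totalDur = 0, totalBytes = 0;
    while (fgets(line, sizeof(line), stdin)) {
        if (sscanf(line, "send duration %u size %u offset %u", &dur, &size, &off) == 3) {
            totalDur += dur;        // time spent in the send
            totalBytes += size;     // 7-byte control packets, 15872-byte data blocks
        } else if (sscanf(line, "rec duration %u", &dur) == 1) {
            totalDur += dur;        // time spent waiting on the receive
        }
    }
    printf("summed duration = %lu, bytes sent = %lu\n", totalDur, totalBytes);
    return 0;
}

Pipe the trace into it on stdin; the sizes and offsets it reports are taken straight from the log.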
Ian wrote: That's a classic problem with emulation. Normally processors and the like work in parallel off the same clock, or closely related clocks, but in emulation things are generally run serially.
Does irq5 trigger the send/receive functions? I can see that in scud it sends two distinct packets every frame.
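To illustrate Ian's point about parallel hardware versus serial emulation, here is a minimal sketch (all names are mine; none of this is Supermodel or netboard code) of stepping the master and slave boards one after the other in frame-sized slices, with a stand-in irq5 handler queuing the two packets per frame that the trace shows:

#include <cstdio>
#include <vector>

struct Packet { int size; int offset; };

struct Board {
    std::vector<Packet> outbox;

    // Stands in for the irq5 handler that appears to queue two distinct
    // packets per frame (sizes/offsets taken from the master frame above).
    void OnIrq5() {
        outbox.push_back({ 7, 128 });
        outbox.push_back({ 7, 128 });
    }

    void RunSlice(int cycles) {
        (void)cycles;  // placeholder for this board's CPU work in the slice
        OnIrq5();
    }
};

void EmulateFrame(Board &master, Board &slave, int cyclesPerFrame) {
    // Serial emulation: the master runs its whole slice, then the slave does.
    // On real hardware both slices happen at the same time off (nearly) the
    // same clock, which is why tiny clock differences matter there.
    master.RunSlice(cyclesPerFrame);
    slave.RunSlice(cyclesPerFrame);
}

int main() {
    Board master, slave;
    EmulateFrame(master, slave, 266666);  // arbitrary cycles-per-frame for illustration
    std::printf("master queued %zu packets, slave queued %zu\n",
                master.outbox.size(), slave.outbox.size());
    return 0;
}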
gm_matthew wrote: My concern with one machine running 0.0001% slower or faster isn't about the video board falling behind with rendering graphics; the vsync signal happens regardless. My concern is that vsync must ultimately be derived from the pixel clock, which is defined as 16 MHz. What if, due to age, temperature variation or something else, one system's pixel clock actually runs at 16.000016 MHz? That would cause the 0.0001% drift I've been talking about.
Maybe Sega used especially accurate clock crystals, or maybe they have some means of compensating for frequency variations. I am certainly not an electronics expert; my theories as to how the netboard works come mainly from studying the disassembly of the netboard code.
On a modern PC it doesn't matter so much anyway, as neither TCP nor UDP requires the precise timing that the real netboard does.
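To put numbers on that drift, here is a quick back-of-envelope sketch using gm_matthew's figures. The 16 MHz and 16.000016 MHz values are from his post; the roughly 57.5 Hz refresh rate is my assumption and only affects the last line of output.

#include <cstdio>

int main() {
    const double nominalHz = 16000000.0;
    const double actualHz  = 16000016.0;                  // hypothetical aged/warm crystal
    const double drift     = actualHz / nominalHz - 1.0;  // fractional frequency error

    printf("relative drift = %.6f%% (%.1f ppm)\n", drift * 100.0, drift * 1e6);

    const double refreshHz   = 57.5;              // assumed Model 3 refresh rate
    const double framePeriod = 1.0 / refreshHz;   // seconds per frame
    // Each second the faster machine gains drift * 1 second on the slower one,
    // so slipping a whole frame period apart takes framePeriod / drift seconds.
    const double secsToFullFrame = framePeriod / drift;
    printf("time to drift by one full frame: %.0f s (about %.1f hours)\n",
           secsToFullFrame, secsToFullFrame / 3600.0);
    return 0;
}

At 1 ppm the two cabinets would only slip a full frame apart after several hours of uptime, so the practical question is how tight a timing window the real netboard protocol actually tolerates.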
Bart wrote: Timing could be controlled by a signal passed over the fiber optic cable. Nik mentioned once that if one of the interrupts wasn't fired correctly, a "no carrier" error was indicated. It is quite likely that there is an even lower-level encoding protocol on the fiber link.