gm_matthew wrote: ↑Mon Nov 13, 2023 2:35 am
Literally the only actual mention of "S-buffer" in the patent itself (other than the title) is this paragraph:
With reference to FIG. 7, if 2×2 S buffered subpixels per pixel 60 are desired for better sampling, the pixel resolution can be increased by a factor of 4, and then a post filter 62 applied to filter the pixels down to the display resolution.
One of the Jupiter modeword register flags is "Disable S-Buffer", which I can only presume must refer to the edge crossing values and translucency flags; if this is not the S-buffer then what is?
I guess with S-buffering turned off you'd have a lack of blending at the edges and presumably a lack of transparency if this is how it's implemented.
By the way,
this patent was filed in 1994 and granted in 1997 and is therefore probably exactly what found its way into the Pro-1000. They describe the polygon rasterization algorithm, including translucency, in considerable detail. It's not exactly as you describe it, but close: translucency is achieved by disabling sampling points. But they don't necessarily use a horizontal stipple pattern, which would halve the horizontal resolution of the texture map, as Ian pointed out. Check out Fig. 5a and page 18:
Code: Select all
played or further processed, as required).
The edge intercept calculator performs the process steps
shown in the flow chart of FIG. 5a: the pixel row line Lp and
column Pp data signals are received in step 50 and the
polygon 36 vertex data signals are received in step 51. In
step 52, the beginning of a display line L loop, the calculator
determines which display lines Lp, from the top line (where
L=L0) to the bottom line (L=Lmax), have the polygon present
in at least part of one pixel; these line numbers L are
temporarily recorded and the first of the line numbers Lp is
set. This line number is entered into step 53, where the left
and right polygon edge-rowline intersections JL and JR are
found for that line; these limits are also temporarily stored.
Step 54 begins a nested pixel loop, considering each pixel Jp
along the line Lp established in step 52, from the limits (left
to right) found in step 53. Inside the nested loop are found
steps 55, 56, 57 and 58, respectively acting for: finding, for
the top-left corner C0 of the pixel, the four crossings C and
the associated normalized four distances D of that corner
constellation, clamping the distance D to a maximum 1
pixel value, and setting a "covered" flag if the particular
distance D is greater than 1 pixel distance (step 55);
operating on some aspect of the pixel data if the polygon is
translucent (translucency considerations will be discussed in
detail hereinbelow) (step 56); computing the corner point C0
color as a function of all related factors, such as polygon
color, texture effects (color, modulation, and the like) and so
forth (step 57); and then temporarily storing the relevant data
signals in video memory (the frame buffer 44) (step 58). Thereafter,
step 59 is entered and the J loop determines: will the J value
for a next-sequential pixel exceed the righthand limit
JR? If not, further pixel processing on that Lp line can
proceed; output 59a is used and step 60 is entered, to return
the loop to step 54, with Jp=Jp+1. If the present pixel was at
JR, the next pixel contains no part of the present polygon,
and output 59b is exited so that the process moves to the next
line Lp=(Lp+1). Step 61 is entered and the line loop determines
whether the next-sequential line will exceed the lower line
limit Lmax for that polygon. If not, further pixel processing
on that line can proceed; output 61a is used and step 62 is
entered so that the line loop actually now returns to step 54,
with Lp=Lp+1; the pixel loop for that new line will be
traversed. If the just-completed line was at the bottom of the
polygon, the next line contains no part of the present
polygon, and output 61b is exited so that the process moves
to the next polygon 36'. When the present polygon processing
is finished, step 63 is entered and the polygon loop
determines whether all polygons in the present view window 34
have been considered. If other polygons remain to be
processed, exit 63a is taken, the polygon designator is
advanced at step 64 and the set of next-polygon vertices
fetched (step 51). If no other polygons remain for
consideration, further pixel processing in this system portion
is not necessary for that display frame; output 63b is exited
so that the process moves to the next image window frame
of data signals and begins the edge intercept calculation
process for the pixel data of that display frame.
TRANSLUCENCY
Each face polygon 36 has a certain, known degree of
translucency. We normalize the translucency level so that the
level can be denoted by a number between 0.0 (perfectly transparent) and 1.0 (completely opaque). Translucency is
accomplished by disregarding or disabling particular pixel
corner sample points, so that even though the polygon may
lie on (i.e., cover) the sample corner point, the polygon characteristics data is not written into the video memory for
that pixel corner; the edge crossings associated with the
omitted corner are also not written into memory. By
disabling sampling points depending on the amount of
translucency, polygons visually behind the translucent
polygon can be seen. The more translucent the polygon, the more
sample points are disabled. This is analogous to poking
holes in the translucent polygon to see what is behind. A
pattern select code and translucency value are assigned to a
polygon. Translucency value indicates how many sample points to disable, and the pattern select indicates which
sample points to disable.
The disablement process may be understood by reference
to FIG. 6a, [b]where five different transparency levels T are[/b]
shown, for each of the two levels of a pattern select (PS) bit.
It will be seen that for different ranges of T levels, the
translucency disablement pattern for the pattern select bit at
a low binary level (PS=0) is complementary to the selected
pattern with the pattern select bit at a high binary level
(PS=1). While only five levels of translucency T are
illustrated, more translucency levels are computable. The
larger number of T levels is achieved by modifying the edge
crossings, as shown in FIGS. 6b-6e, as a function of
translucency. The more translucent the face polygon, the less
the area assigned to a sample point. This procedure increases
or decreases the area of the polygon on each sample corner
point: as seen in FIG. 6b, the translucency level T is
sufficiently low (0<=T<1/4) that only one pixel corner C0 is
part of the selected pattern, and the other corners C1, C2 and
C3 of that same pixel are disabled and not considered;
the left and right crossings are moved toward the
sample point, so that distances DL and DR respectively
become 1.0 and 0.0, while the modified bottom crossing
distance Db' becomes (4*T*Db), where Db is the
unmodified bottom crossing 40" distance to corner C0.
Similarly, the top crossing 40' distance Dt is modified to
Dt'=1-(4*T*(Du-Dt)), where Du is the unit pixel edge
segment distance. As the translucency level T increases, the
edge crossings are moved away from the corner sample
point, increasing the effective area of the face polygon in
the pixel being considered.
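For my own sanity I tried turning that crossing-modification math into code. This is just my reading of FIG. 6b for the lowest translucency band; the variable names are mine, not the patent's, and I'm assuming the unit pixel edge distance Du is 1.0:

```python
def modify_crossings(t, d_bottom, d_top):
    """My reading of the FIG. 6b crossing modification for the lowest
    translucency band (0 <= t < 1/4), where only corner C0 stays enabled.
    d_bottom/d_top are the unmodified crossing distances in [0.0, 1.0]."""
    d_left = 1.0                                  # left crossing snapped onto the sample point
    d_right = 0.0                                 # right crossing snapped onto the sample point
    d_bottom_mod = 4.0 * t * d_bottom             # Db' = 4*T*Db
    d_top_mod = 1.0 - 4.0 * t * (1.0 - d_top)     # Dt' = 1 - 4*T*(Du - Dt), with Du = 1
    return d_left, d_right, d_bottom_mod, d_top_mod
```

A sanity check: at T=0 the modified bottom/top distances go to 0.0/1.0 (no effective polygon area at that corner), and at T=1/4 they reproduce the unmodified distances, which lines up with the pattern stepping up to the next band there.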
5 different translucency levels, huh? That sounds familiar
Looks to me like the rasterizer takes the polygon vertices, computes ymin and ymax of the polygon, then loops over those scanlines. For each scanline, it loops from xmin to xmax. For each pixel, it computes the distances to the polygon edge, as in your diagram, and clamps each distance to 1 pixel. The crossing distances, translucency, and color (polygon color and texture color, with lighting and modulation applied) are stored to the frame buffer along with a "covered" flag.
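In code, I'd sketch those nested loops something like this (grossly simplified; the per-line span callback and the flat color stand in for the edge intercept and shading steps, and real polygons obviously aren't dicts):

```python
def rasterize(polygons, width, height):
    """Sketch of the FIG. 5a loop structure: polygons -> lines -> pixels.
    Each toy polygon carries its line range and a per-line span function;
    the real hardware computes edge intercepts and corner crossings here."""
    frame_buffer = [[None] * width for _ in range(height)]
    for poly in polygons:                                    # polygon loop (steps 63/64)
        for line in range(poly["l_min"], poly["l_max"] + 1): # line loop (steps 61/62)
            j_left, j_right = poly["span"](line)             # step 53: edge intercepts
            for j in range(j_left, j_right + 1):             # pixel loop (steps 59/60)
                # steps 55-58: compute crossings/color for the pixel's
                # top-left corner and store them in video memory
                frame_buffer[line][j] = poly["color"]
    return frame_buffer

# a 3-line-tall right triangle as a toy polygon
tri = {"l_min": 0, "l_max": 2, "span": lambda l: (0, l), "color": 7}
fb = rasterize([tri], 4, 3)  # fb fills a lower-left triangle with 7s
```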
EDIT:
Yes, stipple
is used. I *think* this is what's happening
per pixel:
- 8 neighboring pixels are considered.
- Our pixel is in the center.
- This means there are 4x4 corners in play. These are the "sample points."
- The color value in the frame buffer for each pixel is its top-left sample point.
- The rasterizer draws to the frame buffer and stores the translucency flag, some "covered" flag, and edge mumbo jumbo per pixel in addition to the color, which again, will be used for the top-left sample point.
- During a post-processing step (after all polygons have been drawn?), we have all of our sample values set up in each pixel of the frame buffer, but now we need to compute the actual color of the pixel at its center point. This will be some mix of all 4 of its sample point corners, but those sample points will be modulated by the edge calculations. And that is why 16 (4x4) sample points are involved: each of those 4 sample points can be influenced by its neighbors in arriving at the final color.
So... if we draw only opaque polygons, each polygon pixel overwrites the previous one if it's nearer to the camera. Translucent polygons are no different except that they also set the T flag and some pixels are
not written (the disabled sample points). But disabling sample points alone is not all that happens. Fig. 6a shows that there are in fact 8 different stipple patterns. Yet we have 32 levels of translucency. The patent explains the additional levels are achieved by modulating the edge values for the pixels that are written.
Because of the T flag, if another translucent polygon overwrites one, the old color value is lost completely. This also means that depending on the stipple pattern used you might be able to overlay two translucent polygons and have them create an opaque image by writing to alternate pixels.
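To illustrate the complementary pattern-select trick, here's a toy version (these 2x2 masks are made up for the example; the actual FIG. 6a patterns are surely different):

```python
# Hypothetical 2x2 stipple masks per coarse translucency band; 1 = sample
# point enabled. Per the patent, PS=1 uses the complement of the PS=0 mask.
PATTERNS_PS0 = {
    0.25: [[1, 0], [0, 0]],
    0.50: [[1, 0], [0, 1]],
    0.75: [[1, 1], [0, 1]],
}

def pattern(t_band, ps):
    """Return the stipple mask for a translucency band and pattern-select bit."""
    base = [row[:] for row in PATTERNS_PS0[t_band]]
    if ps == 0:
        return base
    return [[1 - b for b in row] for row in base]  # complementary pattern

# two 50%-translucent polygons with opposite PS bits enable every sample
# point between them, i.e. the overlay comes out opaque
a = pattern(0.50, 0)
b = pattern(0.50, 1)
combined = [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

With these masks, `combined` is all ones, which is exactly the "two translucent polygons making an opaque image" effect described above.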
Once
everything is done rendering, a post-processing step mixes all the colors together. This I think means 8 pixels have to be read to establish the color of each individual pixel in the frame buffer.
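A naive version of that final mixing step, ignoring the edge-crossing weighting entirely, might be a plain box filter over the corner samples (pure guesswork on the weighting; the real filter presumably folds in the stored edge data):

```python
def post_filter(corners, width, height):
    """Average the four corner sample points surrounding each pixel center.
    corners is (height+1) x (width+1); the real hardware would weight each
    corner by the polygon coverage derived from its stored edge crossings."""
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            total = (corners[y][x] + corners[y][x + 1] +
                     corners[y + 1][x] + corners[y + 1][x + 1])
            out[y][x] = total / 4.0
    return out

corners = [[0, 0, 4],
           [0, 4, 4],
           [4, 4, 4]]                    # 3x3 corner samples -> 2x2 pixels
filtered = post_filter(corners, 2, 2)    # -> [[1.0, 3.0], [3.0, 4.0]]
```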