There are 4 viewport priorities, and the depth buffer is wiped between each priority layer. Because the layers are never depth tested against each other, each layer can have its own near/far value pair.
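Roughly what that render loop looks like, as a minimal OpenGL sketch. CalcNearFar and DrawLayer are hypothetical stand-ins for the real renderer, not actual emulator functions:

#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical stand-ins for the real renderer:
void CalcNearFar(int priority, float &zNear, float &zFar);  // per-layer pair
void DrawLayer(int priority, const glm::mat4 &proj);        // draws one priority layer

void RenderFrame(float fovY, float aspect)
{
    for (int priority = 0; priority < 4; priority++) {
        // Depth is wiped between layers, so priority layers are never
        // depth tested against each other...
        glClear(GL_DEPTH_BUFFER_BIT);

        // ...which means every layer can use its own projection.
        float zNear, zFar;
        CalcNearFar(priority, zNear, zFar);
        glm::mat4 proj = glm::perspective(glm::radians(fovY), aspect, zNear, zFar);
        DrawLayer(priority, proj);
    }
}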
I found in ECA and Harley that each layer really needs a different pair: the sky is drawn at a distance in the millions, while the 2D stuff sits at a distance between 0 and 1. You can't represent such an extreme range with a single near/far pair without some sort of z-fighting.
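You can put numbers on that. For a standard perspective projection, the smallest depth step a buffer can resolve at view distance z grows with z², so a pair spanning 0.01 to 1,000,000 has essentially no precision left at the sky. A quick check, assuming a 24-bit depth buffer (the real hardware's depth format is an assumption here):

#include <cstdio>

// Smallest resolvable depth step at view distance z for a 24-bit depth
// buffer with a standard perspective projection:
//   dz ~= z^2 * (far - near) / (near * far * 2^24)
double DepthStep(double z, double zNear, double zFar)
{
    return z * z * (zFar - zNear) / (zNear * zFar * 16777216.0);
}

int main()
{
    // One pair covering everything: ~6 million units of error at the sky
    printf("%g\n", DepthStep(1.0e6, 0.01, 1.0e6));
    // A per-layer pair for the distant layer: ~600 units, perfectly usable
    printf("%g\n", DepthStep(1.0e6, 100.0, 1.0e6));
}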
To calculate them, originally I did a min/max over the entire data set every frame. It worked and gave accurate Z values, but it was too expensive. Given the discovery of the node distances, though, we can now do this extremely fast.
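For reference, the brute-force version was something like this sketch: transform every vertex into view space each frame and track the min/max depth. Accurate, but far too slow per frame:

#include <glm/glm.hpp>
#include <algorithm>
#include <cfloat>
#include <vector>

// Brute force: min/max view-space depth over every vertex in the scene.
void MinMaxZ(const std::vector<glm::vec3> &verts, const glm::mat4 &modelView,
             float &zMin, float &zMax)
{
    zMin =  FLT_MAX;
    zMax = -FLT_MAX;
    for (const auto &v : verts) {
        float z = -(modelView * glm::vec4(v, 1.0f)).z;  // view space looks down -z
        zMin = std::min(zMin, z);
        zMax = std::max(zMax, z);
    }
}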
Calculating the far plane from the node distances is super easy: you just find the max distance of any node that is either fully inside or crossing the view frustum. Calculating the near plane is much more complex, and I struggled with it. I tried just taking the min distance of nodes fully inside the view frustum. This worked most of the time, but geometry that crossed the front clipping plane, and should still have been visible, wasn't drawn.
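The far plane part is just a max over the frustum test results. A sketch of that, with Node and Clip as hypothetical names rather than the emulator's actual types:

#include <algorithm>
#include <vector>

enum class Clip { Outside, Inside, Intersecting };

struct Node {
    float distance;  // node distance from the scene graph
    Clip  clip;      // result of the frustum test
};

// Far plane = max distance of any node fully inside or crossing the frustum.
float CalcFarPlane(const std::vector<Node> &nodes)
{
    float zFar = 0.0f;
    for (const auto &n : nodes) {
        if (n.clip == Clip::Inside || n.clip == Clip::Intersecting)
            zFar = std::max(zFar, n.distance);
    }
    return zFar;
}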
Anyway, I had a different idea about how the hardware might be doing it: basically just getting the 'scale' of the scene from the top-level nodes. If we know the scale, i.e. the max distance is 1000, we can probably guess a near plane. My thinking currently is something like this (there's a sketch of the heuristic after the table):
Near    Far
0.01    100
0.1     1000
1       10000
10      100000
100     1000000
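In other words, the near plane sits four decades below the far plane, snapped to a power of ten. A sketch of that heuristic under those assumptions (a guess at what the hardware might do, not confirmed behaviour):

#include <cmath>

// Guess a near plane from the scene 'scale' (max top-level node distance),
// following the table above. Assumes maxDistance > 0.
float GuessNearPlane(float maxDistance)
{
    // Round the far plane up to the next power of ten, e.g. 845 -> 1000
    float decade = std::pow(10.0f, std::ceil(std::log10(maxDistance)));
    return decade / 10000.0f;  // 1000 -> 0.1, 10000 -> 1, etc.
}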