  #183  
Old 09-15-2012, 08:38 PM
PiB

Quote:
Originally Posted by PixelEngineer
Is that your program? Nice work. Do you have a copy of the source?
It is. Thanks! I pushed my local Git repo to GitHub if you want to take a look. It should build on both Windows and Linux (I haven't tested Mac OS X) using CMake. I really should add a README with build instructions.

Quote:
Originally Posted by PixelEngineer
The BSP tree has done pretty much all of the work for you. A BSP tree is made up of arbitrarily sized regions divided by split planes. Things in front of each split plane are found on the left side of a tree node, and things behind it are found on the right. The way to render correctly front to back is to recursively iterate the tree, visiting the child node the camera is in front of first.
I didn't know that the planes in the BSP tree always split the space into front / back half-spaces. Thanks for the clarification.
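Just to make sure I follow, that traversal could be sketched roughly like this (my own C++ sketch with made-up node and plane types, not PixelEngineer's actual code):

```cpp
#include <cassert>
#include <vector>

// Hypothetical minimal BSP node: a split plane (n . p + d = 0) with a
// polygon batch at the leaves. Not the actual zone file layout.
struct Vec3 { float x, y, z; };

struct BspNode {
    Vec3 normal;            // split plane normal
    float d;                // plane distance term
    const BspNode* front;   // subtree in front of the plane
    const BspNode* back;    // subtree behind the plane
    int batchId;            // -1 for internal nodes; leaf draw batch otherwise
};

// Signed distance of a point from the node's split plane; the sign tells
// which side of the plane the point is on.
static float planeSide(const BspNode& n, const Vec3& p) {
    return n.normal.x * p.x + n.normal.y * p.y + n.normal.z * p.z + n.d;
}

// Visit leaves front to back relative to the camera: recurse into the
// side the camera is on first, then into the far side.
static void traverseFrontToBack(const BspNode* node, const Vec3& camera,
                                std::vector<int>& order) {
    if (!node) return;
    if (node->batchId >= 0) {           // leaf: record (or draw) its batch
        order.push_back(node->batchId);
        return;
    }
    if (planeSide(*node, camera) >= 0.0f) {
        traverseFrontToBack(node->front, camera, order);
        traverseFrontToBack(node->back, camera, order);
    } else {
        traverseFrontToBack(node->back, camera, order);
        traverseFrontToBack(node->front, camera, order);
    }
}
```

Swapping the recursion order when the camera is behind the plane is what makes the same tree work from any viewpoint.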

Quote:
Originally Posted by PixelEngineer
In terms of rendering transparency back to front, as I mentioned, I use a stack. It holds the offset into my VBO as well as the number of polygons. Because a stack is a last-in, first-out data structure, the polygon batches that go in first while rendering front to back come out last.

Here is some tree traversal code demonstrating what happens:

...

I suppose you could use an octree, but you will still have to take the BSP tree into consideration for information about region types (water, lava, PvP) and if you want to use the PVS. It is true that the PVS is pretty inefficient at culling regions you can't see, but it's a very inexpensive check to do.
Yeah, I'm working on using this BSP tree now. It should be faster to traverse than my octree, and I need it anyway to determine a region's ambient light (even though many zones seem to use the same ambient light in all regions). The traversal code looks really straightforward, thanks.
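Here is how I picture the stack idea in code (hypothetical names, not the actual engine code): transparent batches get deferred during the front-to-back walk, and popping them afterwards naturally reverses the order.

```cpp
#include <stack>
#include <vector>

// Hypothetical record of a deferred transparent draw: an offset into the
// VBO plus the number of polygons in the batch, as described above.
struct TransparentBatch {
    int vboOffset;
    int polyCount;
};

// Pushed during the front-to-back BSP walk whenever a transparent batch
// is encountered; opaque batches would be drawn immediately instead.
std::stack<TransparentBatch> deferred;

void deferTransparent(int vboOffset, int polyCount) {
    deferred.push(TransparentBatch{vboOffset, polyCount});
}

// After the walk, popping yields the batches in back-to-front order,
// which is the order alpha blending needs.
std::vector<TransparentBatch> drainBackToFront() {
    std::vector<TransparentBatch> order;
    while (!deferred.empty()) {
        order.push_back(deferred.top());  // real code would issue the draw call here
        deferred.pop();
    }
    return order;
}
```

Since the walk visits near geometry first, the nearest batch is pushed first and therefore drawn last, exactly what blending wants.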

Quote:
Originally Posted by PixelEngineer
I think you are misunderstanding what skydomes really are. EverQuest's skydomes are small half-spheres that translate and rotate with the camera. They are drawn first and give the impression that the sky is very large, despite being exactly the opposite. Picture someone walking around with a sky-textured bowl on their head. That is essentially the idea, and because it moves with them, it gives the illusion that the sky is vastly infinite. If you instead stretched the dome over the entire zone and walked the distance, you would notice yourself approaching the edge of its diameter, and it would look a bit weird.

The first thing you should render is the skydome. Then, clear the depth buffer. Because the skydome is only inches away from the camera, if you didn't clear the depth buffer, none of the zone would render: it's all farther away than the skydome actually is. After clearing the depth buffer, render as you usually do. This gives the illusion that behind everything rendered is a vast sky.
Ah, that makes sense. I think I understand now. Does this mean you render the skydome in camera space?
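For what it's worth, one common way to get that camera-following behavior (an assumption on my part, not necessarily what the original client does) is to draw the dome with the view matrix's rotation but with its translation stripped, which is equivalent to keeping the dome centered on the eye:

```cpp
#include <array>

// 4x4 column-major matrix in the OpenGL convention: the translation
// components live in elements 12, 13 and 14.
using Mat4 = std::array<float, 16>;

// Copy of the view matrix with the translation removed, so geometry
// drawn with it stays centered on the camera while rotation still
// applies (you can look around the dome, but never approach it).
Mat4 skyViewMatrix(const Mat4& view) {
    Mat4 sky = view;
    sky[12] = 0.0f;  // strip camera translation...
    sky[13] = 0.0f;
    sky[14] = 0.0f;  // ...so the dome never gets closer or farther
    return sky;
}
```

The frame would then go: draw the dome with skyViewMatrix(view), clear the depth buffer with glClear(GL_DEPTH_BUFFER_BIT) as described above, then draw the zone with the full view matrix.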