  #182  
Old 09-14-2012, 09:39 PM
PixelEngineer
Sarnak
 
Join Date: May 2011
Posts: 96
Default

Quote:
Originally Posted by PiB View Post
I think one issue you will probably run into is that many characters have few or no animations you can load through 0x14 fragments. I am pretty sure the reason was to save space instead of duplicating animations. Many characters seem to share animations. For example, barbarians, dark/high/half elves, erudites and humans all use wood elf animations with some additions (see video). Same for dragons and quite a lot of mobs. I have made a list of the most common vanilla/Kunark/Velious characters I could find and which animations they use.
Is that your program? Nice work. Do you have a copy of the source?

Quote:
Originally Posted by PiB View Post
How do you sort from front to back, do you do it per object/region using its AABB or per face? Or can you traverse the BSP front-to-back somehow? I thought the divisions between planes were arbitrary. Anyway that's one more thing I have to implement, right now I'm using an octree and frustum culling for this. I guess this is not the most efficient. But it will probably come in handy for keeping track of characters. One thing I was wondering, isn't the usefulness of the PVS limited in outdoor zones like the Karanas where you can see from very far away? Obviously I'm sure this works pretty well in dungeons.
The BSP tree has already done pretty much all of the work for you. A BSP tree is made up of arbitrarily sized regions divided by split planes. Things in front of a node's split plane are found in its left subtree, and things behind it are found in its right subtree. The way to correctly render front to back is to recursively traverse the tree, at each node visiting the child on the camera's side of the split plane first.

In terms of rendering transparency back to front, as I mentioned, I use a stack. Each entry holds an offset into my VBO as well as a polygon count. Because a stack is a last-in, first-out data structure, the polygon batches pushed while rendering front to back come back off the stack in back-to-front order.

Here is some tree traversal code demonstrating what happens:

Code:
    // Signed distance from the camera position to the node's split plane
    float distance = (camera.getX() * tree[node].normal[0])
                   + (camera.getY() * tree[node].normal[1])
                   + (camera.getZ() * tree[node].normal[2])
                   + tree[node].splitdistance;

    if (distance > 0)
    {
        // Camera is in front of the split plane: visit the front (left) child first
        renderGeometry(cameraMat, tree[node].left, curRegion);
        renderGeometry(cameraMat, tree[node].right, curRegion);
    }
    else
    {
        // Camera is behind the split plane: visit the back (right) child first
        renderGeometry(cameraMat, tree[node].right, curRegion);
        renderGeometry(cameraMat, tree[node].left, curRegion);
    }
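To make the transparency stack concrete, here is a minimal sketch of the idea; TransparentBatch, transparentStack, regionVboOffset, regionPolyCount and drawBatch are placeholder names for this post, not my actual code:

Code:
    #include <stack>

    // Placeholder batch record: where the batch lives in the VBO and how large it is
    struct TransparentBatch
    {
        unsigned int vboOffset;
        unsigned int polyCount;
    };

    std::stack<TransparentBatch> transparentStack;

    // During the front-to-back traversal above, draw opaque geometry immediately
    // and push each transparent batch for later.
    transparentStack.push(TransparentBatch{regionVboOffset, regionPolyCount});

    // After the traversal, pop everything: last in, first out, so the batches
    // come back in back-to-front order, which is what alpha blending needs.
    while (!transparentStack.empty())
    {
        TransparentBatch batch = transparentStack.top();
        transparentStack.pop();
        drawBatch(batch.vboOffset, batch.polyCount);
    }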
I suppose you can use an octree, but you will still have to take the BSP tree into consideration for region type information (water, lava, PvP) and if you want to use the PVS. It is true that in open zones the PVS does not cull many of the regions you can't see, but it is a very inexpensive check to do.

Quote:
Originally Posted by PiB View Post
How did you determine the scale of the dome? Do you use some kind of scaling factor that you multiply with the width/length of the zone?
I think you are misunderstanding what skydomes really are. EverQuest's skydomes are small half-spheres that translate and rotate with the camera. They are drawn first and give the impression that the sky is very large when in fact it is exactly the opposite. Picture someone walking around with a sky-textured bowl on their head: because it moves with them, it creates the illusion that the sky is vast and infinite. If you instead stretched the dome over the entire zone and walked across it, you would notice yourself approaching its edge and it would look a bit odd.

The first thing you should render is the skydome. Then clear the depth buffer. Because the skydome is only inches away from the camera, none of the zone would render if you didn't clear the depth buffer; everything is farther away than the skydome actually is. After clearing the depth buffer, render as you usually do. The result is the illusion that a vast sky sits behind everything you render.
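In OpenGL terms, a frame ends up looking roughly like this; drawSkyDome and drawZone stand in for whatever draw functions you have:

Code:
    // Sketch of the sky-first render order (drawSkyDome/drawZone are placeholders)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Draw the small dome, kept centered on the camera position
    drawSkyDome();

    // Throw away the dome's depth values so it no longer occludes anything
    glClear(GL_DEPTH_BUFFER_BIT);

    // Draw the zone as usual; the sky now appears to be behind everything
    drawZone();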

Quote:
Originally Posted by PiB View Post
I think I have tried something similar. I started with determining which lights affect an object and send the array of lights used in the vertex shader for the object. This approach didn't scale very well with a lot of lights. I tried to compute the per-vertex lighting once, when loading the zone files. Then I didn't have to do any lighting in shaders but the result was quite ugly (since it's done per-vertex and old zones have very large polygons). I will try deferred shading next (so I can do per-fragment lighting in one pass) but I think this will be quite a lot of work.
First, determine what your goal is. Mine is to have the zones render as close to classic EverQuest as possible. The original lighting was simply precomputed vertex colors blended with the textures to give the appearance of lighting. Objects also have vertex colors, since the EverQuest client did not dynamically shade objects. I assume lights.wld contains lighting details just for shading player and mob models.
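For illustration, that classic look can be approximated in fixed-function OpenGL by modulating the texture with the precomputed vertex colors; vertexColors and drawZoneGeometry are placeholders here, not how my renderer is actually structured:

Code:
    // Feed the precomputed vertex colors alongside the geometry
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, vertexColors);

    // Blend (modulate) the texture with the per-vertex color: tex * color
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    drawZoneGeometry();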

After I have everything rendering, I will move on to per-pixel lighting. You are correct that per-pixel lighting with the provided surface normals will not look good at all. You really need normal maps for any surface rendered with Phong shading.