Archive::Development Archive area for Development's posts that were moved here after an inactivity period of 90 days.

11-24-2003, 08:50 AM
Sarnak
Join Date: Apr 2003
Posts: 72
Why not divide each grid into subgrids when you load the zone up? They would have to be sorted once upon load (based on location coords). Then, depending upon the location of the person, you could start the search with a subgrid of localized data. If it isn't found there, search the nearest subgrids, until you're finally searching the farthest ones.
In theory you could kill two birds with one stone and use subgrids to further reduce distance-check time by reducing the possible sets of data you have to compare against.
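A rough sketch of the subgrid idea (all names here, and the 100-unit cell size, are made up for illustration - nothing is from the actual zone code): entities are bucketed by coarse cell coordinates once at load, and a proximity query starts at the local cell and widens outward.

```cpp
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Illustrative only: bucket entities into coarse cells at zone load,
// then search the local cell first and widen outward as needed.
struct Ent { int id; float x, y; };

const float CELL = 100.0f;  // assumed cell size; tune per zone

// Map a world position to its cell coordinates.
std::pair<int,int> cellOf(float x, float y) {
    return { (int)std::floor(x / CELL), (int)std::floor(y / CELL) };
}

class SubGrid {
    std::map<std::pair<int,int>, std::vector<Ent>> cells;
public:
    void Add(const Ent& e) { cells[cellOf(e.x, e.y)].push_back(e); }

    // Collect entities in the (2r+1) x (2r+1) block of cells around (x, y).
    // Callers start with r = 0 and widen r until they find what they need.
    std::vector<Ent> Near(float x, float y, int r) const {
        std::vector<Ent> out;
        std::pair<int,int> c = cellOf(x, y);
        for (int dx = -r; dx <= r; ++dx)
            for (int dy = -r; dy <= r; ++dy) {
                auto it = cells.find({c.first + dx, c.second + dy});
                if (it != cells.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }
};
```

Distance checks then only compare against the handful of entities in nearby cells instead of the whole zone, which is the "two birds with one stone" part.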

11-24-2003, 10:51 AM
Fire Beetle
Join Date: Nov 2003
Posts: 7
Regarding the original proposition of faster access to the EntityList - I've not looked at the code for too long, but it seems there is a lot of commitment in terms of coding/time/learning invested in the existing structure; it would be painful to discard that.
Another problem is that the number of elements of a particular type varies enormously by zone. For example, in Freeport the number of door elements is quite high; in Lavastorm, not so many. So dividing into separate doors, items, etc. lists wouldn't be effective for many zones. Short lists are as wasteful as long ones in time terms.
One way of handling the even breaking up of long lists, where you have a quickly accessible int-based ID, is just to use a hashtable structure over the linked lists. The lists still remain; we just have an array of n EntityLists, then use (ID MOD n) to allocate an object with a given ID to the appropriate list. This gives a fairly even distribution of elements to lists without caring too much about their type, the hashing operation is very quick so it shouldn't introduce excessive overhead, and in larger zones n can be set to a greater number.
Excuse my C++ being rusty - I've been writing only Java for the last few years - but just to give the rough idea:
class EntityFarm
{
private:
    static const int n = 20;
    EntityList* lists[n];
public:
    EntityFarm() { }
    ~EntityFarm() { }
    int16 Hash(int16 id) { return id % n; }
    Mob* GetMob(int16 id) { return lists[Hash(id)]->GetMob(id); }
    // ...
};
The initial cost of deciding on the right list to use *should* be paid off by the fairly even allocation of entities to lists, and it also gives another tuning layer (by changing n) without adding too much complexity. The EntityFarm is just a wrapper on the EntityList, so most if not all of the methods can be inline.
From your timings it doesn't look as though this aspect is *that* time critical though. Just offering this up as an idea for consideration in case you do need to address the list problem.

11-24-2003, 06:39 PM
Hill Giant
Join Date: Nov 2003
Posts: 168
Quote:
Originally Posted by Valtin
Regarding the original proposition of faster access to the EntityList - I've not looked at the code for too long, but it seems there is a lot of commitment in terms of coding/time/learning invested in the existing structure; it would be painful to discard that.
I'm not convinced that there is a definite need to optimize yet (which explains why I haven't implemented/suggested code changes). OTOH, changing list<> to set<> wouldn't be so very difficult. The container is already part of the standard library, and the semantics are almost identical. The base entity would only need to implement some sort of arbitrary comparison operation.
Quote:
Short lists are as wasteful as long ones in time terms.
*brushes off his math books* Actually, for element searches like the ones Trump mentioned at the beginning of the thread, your statement is exactly opposite the truth. Searching unsorted linked lists is an O(n) operation. That means the cost goes up in step with the number of elements. Using an STL set (which is implemented with a red-black binary search tree) would reduce search time to O(log n). The difference for small n is negligible. So, to make a long story short - short lists are not as wasteful as long ones.
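For illustration, here's the contrast in miniature (the Entity struct and its comparison operator are invented - the real base entity would supply its own): a list search has to walk every node, while std::set's red-black tree finds the element in O(log n).

```cpp
#include <list>
#include <set>

// Illustrative entity with the "arbitrary comparison" a std::set needs.
struct Entity {
    int id;
    bool operator<(const Entity& other) const { return id < other.id; }
};

// O(n): walk the whole list until the id matches.
bool InList(const std::list<Entity>& l, int id) {
    for (const Entity& e : l)
        if (e.id == id) return true;
    return false;
}

// O(log n): the red-black tree inside std::set does the search.
bool InSet(const std::set<Entity>& s, int id) {
    return s.find(Entity{id}) != s.end();
}
```

With 20 entities the two are effectively the same speed; with 700+ the tree wins clearly, which is the point about short lists.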
Quote:
One way of handling the even breaking up of long lists, where you have a quickly accessible int-based ID, is just to use a hashtable structure over the linked lists.
I fully agree. Hashtables are sexy. An item search in a hash table drops to O(1) time - it doesn't matter how many items you store. The bad news is that ANSI C++ doesn't yet include a hash-table container (although proposals are currently being considered). Most compilers still ship some form of hash map, but I find them hard to recommend for cross-platform solutions.
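Since standard C++ of the time shipped no hash container, a hand-rolled chained table along the lines Valtin sketched might look like this (IdHash and every detail here are illustrative, not from any actual codebase):

```cpp
#include <list>
#include <utility>
#include <vector>

// Minimal chained hash table keyed by entity id: average O(1) lookup
// regardless of how many entries are stored. Illustrative only.
template <typename T>
class IdHash {
    std::vector<std::list<std::pair<int, T>>> buckets;
public:
    explicit IdHash(int n = 20) : buckets(n) {}

    // Drop the entry into the bucket chosen by id % n.
    void Put(int id, const T& value) {
        buckets[id % buckets.size()].push_back({id, value});
    }

    // Only one (short) chain is ever scanned, not the whole table.
    T* Get(int id) {
        for (auto& kv : buckets[id % buckets.size()])
            if (kv.first == id) return &kv.second;
        return nullptr;  // not found
    }
};
```

Ids that collide (e.g. 5 and 13 with n = 8) simply share a chain, which is why the distribution stays even without caring about entity type.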
Quote:
From your timings it doesn't look as though this aspect is *that* time critical though. Just offering this up as an idea for consideration in case you do need to address the list problem.
I agree. My time is better spent adding/improving functionality at this point.

11-24-2003, 09:58 PM
Fire Beetle
Join Date: Nov 2003
Posts: 7
Agreed with your point about lists being linear, so shorter ones are quicker to search than long ones, Eglin. What I meant is that the pre-search code for list allocation becomes more significant in the total time, i.e. pre-search list allocation makes things slower, especially when you have a large number of shorter lists. Conversely, the pre-search time tends to be absorbed more into a smaller number of longer-list searches.
That's the point I was trying to make - that a balance is needed. The ultimate step is to have n lists of length 1, where the ID is a one-to-one allocator for the list holding the single element. I didn't express myself at all well though (and still am not sure that I have, LOL).
Not a big issue really. I agree with you basically; I just didn't want you thinking I didn't know it takes longer to search a linked list of 1000 things than 10 things.
I'm sort of confused as to why a hashtable or any other kind of structure not being a standard container should be a problem at this point. It's nice if there is a standard implementation of some structure, but there is nothing wrong with creating your own. As long as you do it in a disciplined way, the cross-platform element isn't an issue.
That said, I absolutely agree your time is better spent elsewhere - from experience it's only worth messing with something this central if you have an explicit problem. I just noticed that nobody had addressed Trumpcard's original question, so I thought I'd throw some thoughts in for what it's worth.
Anyway back to working through the code for me so I can discuss the issues a bit more intelligently.

11-25-2003, 12:32 AM
Demi-God
Join Date: Jan 2002
Location: Charlotte, NC
Posts: 2,614
The type of collection you use really needs to be defined by the bulk of the operations you are going to be performing...
Are you searching, adding/removing elements to the front/back of the collection, etc.?
Our list does all of those pretty regularly...
Consider the performance impact of how we do it now: every time you make a combat hit, the iterator has to loop through the entire entity list (let's say 20 clients, 400 mobs, 200 doors, 100 objects), find all the clients (performing an IsClient check on each entity), check their distance from you, and if they're within a certain range, spit out a message to your client to inform you of the hit...
As you can imagine, this is highly unoptimized. Will a new collection help? Only for searches (look at GetMob), really...
I think probably the simplest and best solution is to just split the lists up - entity_list for mobs, client_list, etc. - and change the calls to use the right one. This way you're drastically reducing the number of objects you're having to compare against to find the one you want.
Since we regularly insert/delete (spawn, mob dies, etc.), the efficiency boost of a sorted structure is questionable (will the speedup in searches offset the decrease in efficiency of inserts/deletes? a sorted list has to be re-sorted every time something changes).
Ultimately, probably not. A hash map or binary tree is probably the best option, and splitting the lists out, even if we stuck to linked lists, would improve the performance immensely... That in itself would probably give us the most bang for the buck (the biggest hike in performance with the least amount of effort).
I like an RB binary tree personally, but it might be a pain in the ass to implement...
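A hypothetical sketch of the split, just to make the shape concrete (Client, Mob, and ClientsInRange are invented names, and the distance count stands in for sending the combat message):

```cpp
#include <list>

// Illustrative split: each entity kind gets its own list, so a
// "message clients in range" pass only touches the ~20 clients
// instead of all 700+ entities, with no IsClient() checks.
struct Client { int id; float x, y; };
struct Mob    { int id; float x, y; };

struct EntityLists {
    std::list<Client> clients;  // small per zone
    std::list<Mob>    mobs;     // hundreds per zone
    // doors, objects, ... would get lists of their own

    // Count clients within `range` of (x, y) - a stand-in for
    // "spit out the combat message to each nearby client".
    int ClientsInRange(float x, float y, float range) const {
        int n = 0;
        for (const Client& c : clients) {
            float dx = c.x - x, dy = c.y - y;
            if (dx * dx + dy * dy <= range * range) ++n;
        }
        return n;
    }
};
```

The mobs, doors, and objects never enter the loop at all, which is where the "drastic reduction" comes from even with plain linked lists.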
__________________
Quitters never win, and winners never quit, but those who never win and never quit are idiots.

11-25-2003, 01:29 AM
Demi-God
Join Date: Mar 2003
Location: USA
Posts: 1,067
Quote:
Originally Posted by Trumpcard
I think probably the simplest and best solution is to just split the lists up - entity_list for mobs, client_list, etc. - and change the calls to use the right one
What if you use a pseudo-index... when the list is loaded, save a pointer to the start of each type of data contained at that point - a pointer for the start of mobs, clients, doors, etc. - and a count of how many there are.
You could then start searching at that point in the list, for a count equal to the number of entries of that type. It seems like, if this is possible, it would be the smallest amount of change to the program, making it the fastest method to implement with the least amount of potential bugs introduced by the change.
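A sketch of how the pseudo-index might look, assuming the list can be kept sorted by type at load (all names are invented for illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative pseudo-index: one contiguous list, sorted by type once
// at zone load, with the start offset and count recorded per type.
enum Type { MOB = 0, CLIENT = 1, DOOR = 2, NTYPES = 3 };
struct Ent { Type type; int id; };

struct IndexedList {
    std::vector<Ent> all;
    int start[NTYPES];
    int count[NTYPES];

    // Sort once on load and record where each type's run begins.
    void BuildIndex() {
        std::sort(all.begin(), all.end(),
                  [](const Ent& a, const Ent& b) { return a.type < b.type; });
        for (int t = 0; t < NTYPES; ++t) { start[t] = 0; count[t] = 0; }
        for (std::size_t i = 0; i < all.size(); ++i) {
            Type t = all[i].type;
            if (count[t] == 0) start[t] = (int)i;
            ++count[t];
        }
    }

    // Scan only the run belonging to this type, not the whole list.
    const Ent* Find(Type t, int id) const {
        for (int i = start[t]; i < start[t] + count[t]; ++i)
            if (all[i].id == id) return &all[i];
        return nullptr;
    }
};
```

As Valtin notes below, this only works if the list really is ordered by type; in an unsorted list the doors could sit anywhere.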
__________________
Maybe I should try making one of these servers...

11-25-2003, 03:52 AM
Fire Beetle
Join Date: Nov 2003
Posts: 7
Hmm, it makes sense to split them out if the functionality is so different, Trumpcard. I see your point about having to decide what every entity is before performing an operation, rather than just addressing the clients, as per your example. I agree that just by not doing that every round you'll drastically increase performance.
Sorted structures will only help if you are searching for a specific item; if you are walking the whole structure anyway (for distance checks) there is no advantage, only overhead. If the lists are unsorted, though, Scorpious2k's indexing won't work, as the doors, say, could be at any point in the list rather than starting at n and extending to n+x. If they are ordered, it's an excellent solution though.
Would there be an advantage in an ordered representation for data such as items or doors, which are more likely to be looked up individually, while leaving ones that need walking, such as clients and mobs, unordered? I could see advantages in an ordered representation for doors, as they are pretty static, so the sorting overhead would be negligible. If so, that would be a good argument for breaking them out as you say.

11-25-2003, 04:10 AM
Dragon
Join Date: May 2003
Location: Seattle, WA
Posts: 609
A couple of things to keep in mind when splitting up the entity list...
There is a thread in the application that acts as the "game pulse". This means all of the logic for the application is run repeatedly, with a 1/60th-second pause between each pulse.
1. While optimizing things like attack can be beneficial, optimizing operations that happen on each pulse is more critical. Look in net.cpp to see what happens on each pulse. Since this main thread is the CPU hog for the app, it is the first place to look to cut down on CPU usage.
2. One critical bit of functionality inside the pulse is where each entity in the entity_list is told to save itself. If you split up the entity_list, make sure to keep this in mind. Preserving polymorphism here would be best.
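A minimal sketch of point 2 - keeping the per-pulse save polymorphic even after the split (Save() here is a stand-in for whatever the real save does; all names are illustrative, not from net.cpp):

```cpp
#include <list>

// Illustrative: split lists can still be walked polymorphically on
// each pulse if every entity type shares a virtual Save().
struct Entity {
    virtual ~Entity() {}
    virtual bool Save() { return true; }  // stand-in for the real save
};
struct Mob : Entity    { bool Save() override { return true; } };
struct Client : Entity { bool Save() override { return true; } };

struct EntityLists {
    std::list<Mob*>    mobs;
    std::list<Client*> clients;

    // Called once per pulse: every entity saves itself through the
    // virtual call, whichever list it lives in.
    int SaveAll() {
        int saved = 0;
        for (Mob* m : mobs)       if (m->Save()) ++saved;
        for (Client* c : clients) if (c->Save()) ++saved;
        return saved;
    }
};
```

The split changes where entities are stored, but the pulse code still treats them all as Entity via the virtual dispatch, which is the polymorphism worth preserving.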