
View Full Version : Memory issue


Angelox
09-07-2006, 04:52 AM
I don't know where this should go; I tried the bugs section, but I can't post there.
I'm not sure what created this or how long ago it started - since I have 1.3 GB of memory on my server, I never noticed it before (it was brought to my attention).
I start the server and mount 5 dynamic zones - each runs at about 6 MB, but one (the first) runs at 650 MB. When I kill the 650 MB zone process and it auto-remounts, it then runs fine like all the rest. I log in a character, run through all the zones, and memory usage goes up around 30 MB (normal) on whichever zone I'm in, but I never see the 650 MB issue again. I don't know if it's related to extra content I added, a bug in the emu, or what.
If anyone has any idea what this is, please post.

WildcardX
09-07-2006, 05:11 AM
This thread contains a theory that may explain the memory consumption you're seeing. Read FNW's posts and mine...

http://www.eqemulator.net/forums/showthread.php?t=21441

Angelox
09-07-2006, 09:03 AM
Well, I guess my "big idea" to use high numbers on my sql updates was crap. I did this because wanted to share everything i did with the public here, and I figured if, my numbers were high, they would not interfere with any work you had on your database.
But I'm not a quitter, so I'll start over again from scratch (if i can't find a way to alter all these updates). It's sorta frustrating.
The database still works, you just need a lot of memory :(
That was the problem with the "Scars of Nalgora" dude , his server only has 350 meg. He came out of hiding and tried to add my stuff to his "top secret" project.
Would be nice to one day see a lot more people working together here on a "public" manner, Imagine how far advanced the database would be?

WildcardX
09-07-2006, 09:41 AM
I feel your pain... but don't get discouraged. I looked at your work and I liked it. For one person, you added a lot of content, and it looks pretty accurate to me!

eq4me
09-07-2006, 10:36 AM
Well, I guess my "big idea" to use high numbers on my sql updates was crap. I did this because wanted to share everything i did with the public here, and I figured if, my numbers were high, they would not interfere with any work you had on your database.
But I'm not a quitter, so I'll start over again from scratch (if i can't find a way to alter all these updates). It's sorta frustrating.


Do not do that yet! There might be other solutions to this problem. Maybe Zephyr's tool can help you.
IMHO it would be better to fix this problem on the server side. We have some high-powered (My)SQL gurus at work; maybe they can give me a hint about what can be done.

eq4me
09-08-2006, 11:56 PM
Please bear with me if the below is a bit vague and filled with lots of assumptions. I am occupied with other stuff right now, and I am just feeling my way back into C programming after 8 years of doing other things.
My assumption is that right now the size of the shared memory is set by the number of table entries. That means even if 90% of the table entries are empty, the space is still reserved in memory.
There are multiple ways this could be remedied.
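To illustrate the assumption (the struct name and numbers below are made up):

// Suppose each item entry occupies roughly 1 KB.
struct ItemEntry { char data[1024]; };

// If the table is sized by the ID range, allowing IDs up to 200000
// reserves about 200 MB of shared memory up front, even when 90%
// of the slots never hold a real row.
static ItemEntry item_table[200000];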

First, you could use the C++ std::map container; according to a C++ expert I know, though, that might not be a good idea inside a shared memory segment.

Here's a small scrap of code:

#include <map>

int main()
{
    // Only keys that are actually inserted take up memory.
    typedef std::map<int, double> id_map;
    id_map foo;

    foo[17] = 3.5;
    foo[19] = 7;   // the int is converted to double
    ++foo[17];

    // Walk all stored entries in key order.
    for (id_map::iterator i = foo.begin(); i != foo.end(); ++i)
    {
        int idx = i->first;
        double val = i->second;
    }

    // Look up a key and erase it if present.
    id_map::iterator i = foo.find(5);
    if (i != foo.end())
        foo.erase(i);

    return 0;
}


The second possibility would be a double pointer: for each table entry, keep a pointer that points to the actual data entry. By default the pointer is NULL, so you only need to allocate memory if the MySQL table entry actually holds data. (Sorry for being very vague here, but our resident C expert didn't have much time and just outlined it briefly - too briefly for me to understand it properly.)
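Here is a minimal sketch of what I mean (the struct, function names, and maximum ID are all made up for illustration):

#include <cstddef>

struct ItemData { /* the actual item fields would go here */ };

const int MAX_ITEM_ID = 200000; // assumed upper bound on item IDs

// One pointer slot per possible ID; every slot starts as NULL,
// so an empty entry costs only the size of a pointer.
ItemData* item_table[MAX_ITEM_ID] = { NULL };

// Allocate the full entry only when the MySQL row actually exists.
void LoadItem(int id)
{
    if (item_table[id] == NULL)
        item_table[id] = new ItemData();
    // ... copy the row's fields into *item_table[id] ...
}

ItemData* LookupItem(int id)
{
    return (id >= 0 && id < MAX_ITEM_ID) ? item_table[id] : NULL;
}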


The third possibility would be to allow multiple tables.

Let's take the items table as an example.

items (the one we have right now = the default, so it's either PEQ's or Cavedude's DB)
items.ax (angelox db additions)
items.custom (local customized stuff)

The big advantage would be that these tables can grow without interfering with each other.
The problem here would be how to allow, e.g., the items.ax table to override entries in the items table. Maybe an additional table entry holding the ID number and table name of the entry you want to override would do the trick.

Like this:



CREATE TABLE `items.ax` (
  id int(11) NOT NULL default '0',
  overridetable varchar(64) NOT NULL default '',
  overrideid int(11) NOT NULL default '0',
  overridepriority int(11) NOT NULL default '0',
  PRIMARY KEY (id)
);



The overridepriority field would, for example, allow local customizations that build on more than one table.
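A rough sketch of how such an override record might be resolved at load time (purely illustrative - the struct mirrors the proposed items.ax columns, and the function name is made up):

#include <cstring>
#include <cstddef>

struct OverrideEntry
{
    int  id;
    char overridetable[64];
    int  overrideid;
    int  overridepriority;
};

// Find the winning override for one base-table row: the matching
// entry with the highest priority wins; NULL means "use the base row".
const OverrideEntry* ResolveOverride(const OverrideEntry* entries, int count,
                                     const char* table, int baseId)
{
    const OverrideEntry* best = NULL;
    for (int i = 0; i < count; ++i)
    {
        if (strcmp(entries[i].overridetable, table) == 0 &&
            entries[i].overrideid == baseId &&
            (best == NULL || entries[i].overridepriority > best->overridepriority))
        {
            best = &entries[i];
        }
    }
    return best;
}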

Well, enough of my ramblings. Maybe someone with more knowledge about the inner workings of EQEmu can comment on this.

John Adams
09-09-2006, 11:09 AM
Well, I guess my "big idea" of using high numbers in my SQL updates was crap.
Angelox, you should know I am one of your biggest fans :) but I did fear this all along. I'm sorry you're having this problem, and I hope there is an easy solution (like Zephyr's tool).

Let me know if I can be of any assistance. I can help convert some data if you need.

Angelox
09-09-2006, 11:24 AM
Angelox, you should know I am one of your biggest fans :) but I did fear this all along. I'm sorry you're having this problem, and I hope there is an easy solution (like Zephyr's tool).

Let me know if I can be of any assistance. I can help convert some data if you need.

Thank you for your help :)

I did get the problem solved - my database is optimized and running fine now; you must have missed my post. It took me a few days, but now that I know what to do, it's no problem.
I posted my new, updated database on my page for anyone who wants it.

fathernitwit
09-11-2006, 02:02 AM
but I never see the 650 MB issue again
For what it's worth, I suspect that this is just Windows lying to you. My guess is that the first zone got the shared memory accounted for in its displayed size, but the other zones did not, because they didn't "create" it. When you killed it, the shared memory would have seemed to disappear, but it was still allocated; Windows just didn't know whom to "blame" it on, so it didn't display it.
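To illustrate the theory with a generic Win32 sketch (not our actual code; the mapping name is made up): the process that creates the named mapping would get the pages counted against it, while processes that merely open the existing mapping would not.

#include <windows.h>

// First zone to start: creates the named mapping, so the ~650 MB
// shows up under this process in the task manager.
HANDLE hCreate = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                    PAGE_READWRITE, 0,
                                    650 * 1024 * 1024, "EQEmuSharedMem");

// Every later zone: opens the existing mapping; the same physical
// pages are shared, so their displayed sizes barely grow.
HANDLE hOpen = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, "EQEmuSharedMem");
void*  view  = MapViewOfFile(hOpen, FILE_MAP_ALL_ACCESS, 0, 0, 0);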


Well, enough of my ramblings. Maybe someone with more knowledge about the inner workings of EQEmu can comment on this.

Not to derail the thread, but I don't want to leave this unanswered.

Anywhere we can, we use a map, and where we cannot (shared memory), we already use a double-pointer system. There is no reasonable way to run a map against shared memory, however. Further, a map is still an O(log n) data structure, and with the number of items we have, log(n) is on the order of 15-16... and that's way too many operations (not even considering the constant factor) for the frequency at which we do item lookups.
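For concreteness, a generic comparison of the two lookup costs (illustrative only, not our actual item code):

#include <map>
#include <cstddef>

std::map<int, double> treeIndex;     // O(log n): ~15-16 comparisons at 2^15-2^16 entries
double* flatIndex[65536] = { NULL }; // O(1): a single array dereference

double LookupTree(int id)
{
    std::map<int, double>::iterator it = treeIndex.find(id);
    return (it != treeIndex.end()) ? it->second : 0.0;
}

double LookupFlat(int id)
{
    return (id >= 0 && id < 65536 && flatIndex[id]) ? *flatIndex[id] : 0.0;
}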

The other things that are stored in shared memory, and hence have similar space constraints, have a similar story behind them. They are there for one of two reasons: runtime efficiency or space efficiency. Items happens to be both. We hate shared memory and try to get rid of it whenever possible, but not using it would be worse.

Angelox
09-11-2006, 03:04 AM
Quote:
Originally Posted by Angelox
but I never see the 650 MB issue again

For what it's worth, I suspect that this is just Windows lying to you. My guess is that the first zone got the shared memory accounted for in its displayed size, but the other zones did not, because they didn't "create" it. When you killed it, the shared memory would have seemed to disappear, but it was still allocated; Windows just didn't know whom to "blame" it on, so it didn't display it.

I figured that out eventually, but forgot about it since it's corrected anyway.