Possible help
This may not solve your problem, but it may help (especially if you use InnoDB as your storage engine).
I took a look at how the Save method in client.cpp works and saw some issues in the way locking happens and transactions are used (or not used). From what I can tell, between the lock in front of the DB connection and the lack of bulking statements up into a transaction, this could cause some serious issues for anyone not running a very good SSD and a fast network connection to their database of choice. You can easily get 'fsync' choked. Symptoms are low I/O and CPU usage while everything lags out, waiting for the locks to be released so fsync can happen on the database for transaction purposes.

I've created a pull request: https://github.com/EQEmu/Server/pull/827

**WARNING** my testing has been minimal, so use at your own risk, but I am encouraged by the results (over 2x faster with zero load on the system).

**Note** you will also need to include two indexes on the tables; they are in the pull request. This is important, as we are doing table scans during the save without them.

Hope this can help some of you. Right now it's a stop gap, and hopefully I can come up with a better solution in the near future. I would like feedback on whether it helps resolve the lag issues with pets out.

This is an example of bulking the transactions of a simple save:

Code:

START TRANSACTION;

REPLACE INTO `character_currency` (id, platinum, gold, silver, copper, platinum_bank, gold_bank, silver_bank, copper_bank, platinum_cursor, gold_cursor, silver_cursor, copper_cursor, radiant_crystals, career_radiant_crystals, ebon_crystals, career_ebon_crystals) VALUES (685273, 184, 153, 149, 101, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

REPLACE INTO `character_bind` (id, zone_id, instance_id, x, y, z, heading, slot) VALUES (685273, 189, 0, 18.000000, -147.000000, 20.000000, 64.000000, 0);
REPLACE INTO `character_bind` (id, zone_id, instance_id, x, y, z, heading, slot) VALUES (685273, 41, 0, -980.000000, 148.000000, -38.000000, 64.000000, 1);
REPLACE INTO `character_bind` (id, zone_id, instance_id, x, y, z, heading, slot) VALUES (685273, 41, 0, -980.000000, 148.000000, -38.000000, 64.000000, 2);
REPLACE INTO `character_bind` (id, zone_id, instance_id, x, y, z, heading, slot) VALUES (685273, 41, 0, -980.000000, 148.000000, -38.000000, 64.000000, 3);
REPLACE INTO `character_bind` (id, zone_id, instance_id, x, y, z, heading, slot) VALUES (685273, 41, 0, -980.000000, 148.000000, -38.000000, 64.000000, 4);

DELETE FROM `character_buffs` WHERE `character_id` = '685273';
DELETE FROM `character_pet_buffs` WHERE `char_id` = 685273;
DELETE FROM `character_pet_inventory` WHERE `char_id` = 685273;

INSERT INTO `character_pet_info` (`char_id`, `pet`, `petname`, `petpower`, `spell_id`, `hp`, `mana`, `size`) VALUES (685273, 0, 'Labann000', 0, 632, 3150, 0, 5.000000) ON DUPLICATE KEY UPDATE `petname` = 'Labann000', `petpower` = 0, `spell_id` = 632, `hp` = 3150, `mana` = 0, `size` = 5.000000;

DELETE FROM `character_tribute` WHERE `id` = 685273;

REPLACE INTO character_activities (charid, taskid, activityid, donecount, completed) VALUES (685273, 22, 1, 0, 0), (685273, 22, 2, 0, 0), (685273, 22, 3, 0, 0), (685273, 22, 4, 0, 0), (685273, 22, 5, 0, 0);
REPLACE INTO character_activities (charid, taskid, activityid, donecount, completed) VALUES (685273, 23, 0, 0, 0);
REPLACE INTO character_activities (charid, taskid, activityid, donecount, completed) VALUES (685273, 138, 0, 0, 0);

REPLACE INTO `character_data` (id, account_id, `name`, last_name, gender, race, class, `level`, deity, birthday, last_login, time_played, pvp_status, level2, anon, gm, intoxication, hair_color, beard_color, eye_color_1, eye_color_2, hair_style, beard, ability_time_seconds, ability_number, ability_time_minutes, ability_time_hours, title, suffix, exp, points, mana, cur_hp, str, sta, cha, dex, `int`, agi, wis, face, y, x, z, heading, pvp2, pvp_type, autosplit_enabled, zone_change_count, drakkin_heritage, drakkin_tattoo, drakkin_details, toxicity, hunger_level, thirst_level, ability_up, zone_id, zone_instance, leadership_exp_on, ldon_points_guk, ldon_points_mir, ldon_points_mmc, ldon_points_ruj, ldon_points_tak, ldon_points_available, tribute_time_remaining, show_helm, career_tribute_points, tribute_points, tribute_active, endurance, group_leadership_exp, raid_leadership_exp, group_leadership_points, raid_leadership_points, air_remaining, pvp_kills, pvp_deaths, pvp_current_points, pvp_career_points, pvp_best_kill_streak, pvp_worst_death_streak, pvp_current_kill_streak, aa_points_spent, aa_exp, aa_points, group_auto_consent, raid_auto_consent, guild_auto_consent, RestTimer, e_aa_effects, e_percent_to_aa, e_expended_aa_spent, e_last_invsnapshot, mailkey) VALUES (685273, 90536, 'Rekka', '', 0, 6, 13, 50, 396, 1550636815, 1552018689, 22507, 0, 70, 0, 1, 0, 17, 255, 4, 4, 2, 255, 0, 0, 0, 0, '', '', 164708608, 345, 2299, 1589, 60, 80, 60, 75, 134, 90, 83, 3, -1831.625000, -225.750000, 3.127999, 37.500000, 0, 0, 0, 0, 0, 0, 0, 0, 4480, 4480, 0, 22, 0, 0, 0, 0, 0, 0, 0, 0, 4294967295, 0, 0, 0, 0, 1291, 0, 0, 0, 0, 60, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 'BF01A8C0586571A2');

COMMIT;

If this doesn't help, I'm sorry if I have muddied the waters a bit.
Is there data that makes you think the DB is acting as a bottleneck?
While I don't want to undermine the helpful suggestions, I did try a lot of DB performance tuning and tried changing storage engines. I also set up fairly robust DB monitoring to make sure it wasn't DB performance. When I was having massive lag spikes, there was virtually no load, locks, waits, or anything really on the DB. The DB was on-box and on SSD. I didn't save the reporting, but nothing about it in my case pointed to DB performance issues. As others said, a more methodical approach is probably in order to make sure the situation doesn't get more complex.
From what I can tell from just looking at the code, the locking isn't at the database layer per se; it's at the MySQL connection per zone in zonedb.cpp. There is only one connection per zone, at least from what I see.
Fsync is a delay or latency issue on the DB when dealing with transactions. Every single query in the save is its own transaction (all 13+ of them). You can have a system that can only do 500 transactions per second but can do 100,000 inserts per second if you bulk up statements. A small latency can have a massive impact when locks are involved. If there is latency at the DB, it queues up on the zone, depending on how many players are in each zone; the fewer people in the zone, the less this impacts them. Lowering the latency by limiting the fsyncs (one per transaction instead of one per statement) can ease the pressure on the connection lock, which prevents character saves from stalling. Or that is at least the idea. **Note** this lock blocks all queries in the zone, not just during saves. Also note that removing the table scanning of the pet tables (by adding indexes) helps lower the latency of the call as well. A minimal sketch of the bulking idea follows below.

It can be easy to confuse work with latency/locks. You can have a slow system doing no work. Honestly, I would like to do more work on this and I know it's a stop gap, but I figured doing 13x fewer transactions per save was a win when someone in this thread commented that some of the saves improved their latency. (Mine improved 2-3x.) I know it can be many issues and I may be barking up the wrong tree, but this is simply another option that does show a very clear improvement in performance around the zone locks.

Side note: a single MySQL connection for a process is generally a less than ideal situation. It is too much of a choke point for network I/O. Locks should be held for nano/microseconds, not milliseconds. Possibly make separate connections for reads/writes, depending on how the threading is set up in the zone process. (Note: I have not really looked at the threading model of the zone yet, so this may be moot and may be my misunderstanding.)

Note: on a phone, so sorry for formatting/bad grammar. Very small window :)
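To make the bulking idea concrete, here is a minimal sketch using the raw MySQL C API (illustrative only; the actual EQEmu database wrapper differs, and the statement list is a stand-in for the individual save queries):

Code:

#include <mysql/mysql.h>
#include <string>
#include <vector>

// Execute a batch of save statements as one transaction so InnoDB
// performs a single fsync at COMMIT instead of one per statement.
bool SaveCharacterBulk(MYSQL *conn, const std::vector<std::string> &statements)
{
    if (mysql_query(conn, "START TRANSACTION") != 0) {
        return false;
    }

    for (const auto &sql : statements) {
        if (mysql_query(conn, sql.c_str()) != 0) {
            mysql_query(conn, "ROLLBACK"); // undo the partial save on error
            return false;
        }
    }

    // One fsync here covers every statement above.
    return mysql_query(conn, "COMMIT") == 0;
}

The win isn't in the number of statements; it's that the disk flush happens once per save instead of once per statement.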
There are quite a few things I want to highlight here, though, and contrast with what the real problems are.

On the performance standpoint: EQEmu is not database heavy at all. Once upon a time we did all kinds of stupid things, but hundreds of hours have been poured into reducing our performance bottlenecks across the board in the departments of CPU, I/O and even network.

To illustrate, here is PEQ: http://peq.akkadius.com:19999/#menu_...late;help=true

With PEQ at 800-1000 toons on the daily, we barely break 100 IOPS, with barely 1MB/s in writes and minimal spikes here and there. That is virtually nothing.

Also, with your benchmark (I assume this was you on the PR), the current stock code on some middle-of-the-line server hardware produces the following timings:

HTML Code:
[Debug] ZoneDatabase::SaveCharacterData 1, done... Took 0.000107 seconds

(These timings will depend entirely on your hardware and MySQL server configuration, of course.) These operations also happen so infrequently that it is not even going to matter.

There are many, many factors that play into the overall performance of a server, and since the server is essentially an infinite loop, anything within that loop can influence the amount of time that a CPU is not spent idling (network, I/O, overly CPU-intensive operations, etc.). Hardware is of influence, your MySQL server configuration is of influence, and most of all software is of influence.

EQEmu used to be way, way more resource intensive and we've come a long way to where that is not even an issue anymore. We have one outstanding bug that is isolated to the networking layer; it made its way through because we never saw it on PEQ during our normal QA routines.

We are currently working on code to measure application-layer network stats so folks can give us metric dumps and we can produce a proper fix. We've seen what happens during the network anomaly during a CPU profile, and there's not much a profile alone is going to show beyond where the process is spending most of its time.

We folks at EQEmu definitely have jobs that have been a deterrent to resolving said bug, but we will have it on lockdown soon enough, as we know exactly what we need to do; the only thing in our way is time as a resource.

We are not grasping at straws to fix this, folks, so please just be patient, as this is just not a quick fix with our schedules.
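For anyone who wants to reproduce that timing on their own hardware, here is a minimal sketch of how such a measurement can be taken (DoCharacterSave is a hypothetical stand-in for the real save call; the server's own debug logging produces the line above):

Code:

#include <chrono>
#include <cstdio>

// Hypothetical stand-in for the real ZoneDatabase::SaveCharacterData call.
void DoCharacterSave() { /* ... run the save queries ... */ }

void BenchmarkSave()
{
    auto start = std::chrono::steady_clock::now();

    DoCharacterSave();

    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    std::printf("SaveCharacterData done... Took %f seconds\n", elapsed.count());
}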
Update to this: KLS and I have been working on our stats internals so we can start dumping out data.
You can see a preview of what KLS has working from the stats internals here: https://media.discordapp.net/attachm...4/EQ000016.png

I have built out an API which will pipe these stats to a web front-end to display graphs and metrics of all types, so we can perform some analysis against affected servers and zones. From there we should make some good progress, and we've seen a handful of things from it already.

We will update when we have more.
With these new changes, what time-frame are we looking at for a Windows fix in the source?
Not sure if this is still being looked at or anything's been done, but I thought I'd mention this because it might bring some things to light.
I took code that was based around 2010 or so (note that was before the _character table changed to character_data). I added custom changes from that code to newer code I grabbed from your Git at around 8-2017 to use on our server. I didn't notice the difference right away because most of my testing involved a few toons here and there. Since I noticed the problem with logging in more than 24 toons, I adjusted things in my own code and started updating it by incorporating each change from your Git. It is now up to date with 9-2018, at which time I noticed one change that really helped, but it was buried in a merge.

I thought that was all there was to it. But since then, I logged 48 toons in one zone on my server. All seemed fine and there was no lag... until I buffed all my toons. This added to the lag because of the added work for the database.SaveBuffs(this) call.

Now, I have begun comparing the code difference between the old version (before the database split) and the new. In the old version, the only database call in the Client::Save(uint8 iCommitNow) function went through the DBAsyncWork class unless iCommitNow was > 0. That class actually ran its own thread to work through a queue of database queries without slowing down the main thread. I don't see that thread anywhere in the new code. Maybe I'm mistaken, but I think it would maybe help. Don't know for sure, but just a thought.
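For reference, the general shape of that kind of write-behind queue looks something like the sketch below (illustrative only, not the original DBAsyncWork code; ExecuteQuery is a hypothetical stand-in for running a query on the worker's own DB connection):

Code:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Write-behind query queue: the game thread pushes SQL and returns
// immediately; a dedicated worker thread drains the queue against the DB.
class AsyncQueryQueue {
public:
    AsyncQueryQueue() : worker_(&AsyncQueryQueue::Run, this) {}

    ~AsyncQueryQueue()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Called from the main/zone thread; never blocks on the database.
    void Push(std::string sql)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(sql));
        }
        cv_.notify_one();
    }

private:
    void Run()
    {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            if (queue_.empty()) {
                return; // done_ was set and nothing is left to flush
            }
            std::string sql = std::move(queue_.front());
            queue_.pop();
            lock.unlock();
            ExecuteQuery(sql); // hypothetical: runs on the worker's own connection
        }
    }

    static void ExecuteQuery(const std::string & /*sql*/) { /* ... */ }

    std::mutex              mutex_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    bool                    done_ = false;
    std::thread             worker_; // declared last so it starts after the members above
};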
A lot has been done; if you read the thread, you'll see I have been giving updates on exactly what we've been working on. For the umpteenth time, this is not a database problem.
We built metrics into the server, and we built a web admin panel and a comms API to the server so we can visually see the problem:

https://media.discordapp.net/attachm...080&height=247
https://media.discordapp.net/attachm...080&height=876

Below is a visual of the exact problem we are running into. This is the cascading resend problem that chokes the main thread of the application: if the processor can't keep up, the core/process collapses upon itself.

We had 60 toons come in (which isn't a problem at all for the hardware we run on) and they all ran macros that generated an excess amount of spam. It all runs fine until the process gets behind on resends, then it cascades in its ability to keep up with the resends because of all of the packet loss:

https://media.discordapp.net/attachm...080&height=538
https://media.discordapp.net/attachm...080&height=726

Here is when the resends get to the point where the server can no longer send keepalives to the clients; the clients disconnect, and then the process eventually catches up again and everything flatlines:

https://media.discordapp.net/attachm...080&height=798

TLDR; the server keeps up just fine until the process buckles.

The reason for this is that the packet communications happen in the main thread, which hadn't been a problem until we discovered this scenario in recent months.

We are working on removing the client communications from the main thread so that we don't run into this buckling problem from back-pressure. Our networking should not be occurring on the main thread regardless, and getting two threads to communicate networking responsibilities over an internal queue isn't the most trivial of processes either, so it's taking us some time. (A minimal sketch of that handoff follows at the end of this post.)

We also can't measure changes without having taken the time that we have to build out metrics, both to show the problem and to know that it has actually been resolved.

Also, for context and clarity, the network code was completely overhauled at this time. While we've ironed out most things, this is our last outstanding issue, and it hasn't been an easy one from a time and resource perspective because it has been incredibly elusive, harder to reproduce, and didn't have any way to measure or capture the problem.

Code:
== 4/16/2017 ==

We'll let you know when we have the code fixes in place.
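For readers curious what handing packets between two threads over an internal queue looks like, here is a minimal sketch of the inbound half (illustrative only, not the actual EQEmu networking code):

Code:

#include <mutex>
#include <optional>
#include <queue>
#include <vector>

using Packet = std::vector<unsigned char>;

// Inbound packet queue: the network thread pushes, the main loop polls
// without blocking, so a flood of resends can't stall the game logic.
class InboundPacketQueue {
public:
    // Called from the network thread.
    void Push(Packet p)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(p));
    }

    // Called once per tick from the main loop; returns immediately.
    std::optional<Packet> TryPop()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) {
            return std::nullopt;
        }
        Packet p = std::move(queue_.front());
        queue_.pop();
        return p;
    }

private:
    std::mutex         mutex_;
    std::queue<Packet> queue_;
};

The outbound direction mirrors this, and the hard part alluded to above is everything around it: back-pressure limits, resend bookkeeping, and shutdown ordering.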
Thank you kindly for the very detailed update. I believe this was a side effect of the Thanos snap.
I for one am very pleased with the current build that was released to me (shared with the Varlydra server). Akkadius, KLS, and anyone else I may not know who was involved worked very hard; they kept their promise to find a fix, and they delivered. Tested this with 24 clients in zone. Keep in mind your MS bar may say one thing, but in reality you can cast spells with normal refresh time. My server is a 2-box server but permitted 3 for the purpose of our testing. Once more, thank you for taking this problem seriously and investing time and resources to see it fixed.
Keep in mind this is not merged into mainline yet, but we currently have a general fix in a working branch; we have a handful of things we need to take care of before merging mainline.
If you're interested in the build for your server, download it at the following and report back: https://www.dropbox.com/s/2s2mput1q4...aries.zip?dl=0

Also, keep in mind you will need to run this update manually: https://github.com/EQEmu/Server/blob...date_range.sql
Hey guys, just checking in to see if there is any update to this issue and if the fix will be pushed out.
Thank you
I meant to ask you, Akka: that fix, was that the "compression level" update that was committed? The only reason I ask is that one of my "toy boxes" is sticking to slightly older code, but picking away at manually applying feasible updates when I can get away with it. :)
Hey Folks,
Posted this in Discord, but things can drift up on there, so just posting here as well.

We've just completed a server update and merged in the latest changes from the eqemu master branch. Everything is great, with the exception that we seem to have now run into the dreaded Windows server lag bug as reported here: http://www.eqemulator.org/forums/sho...t=42311&page=4

According to that thread it was fixed, but it seems it's still happening to our server for some reason. Symptoms are exactly the same: a resend cascade leading to big lag spikes (going by netstats). We have Windows Server 2019, two 2.4 Xeon processors, 32GB RAM, and more bandwidth than you can shake a stick at.

Any ideas as to settings to tweak or otherwise are highly welcome! Meanwhile, I'll see if I can utilise those metrics in the thread to get more insights.
I haven't been on Discord yet..

..but I would start with ensuring that you have the correct zlib dll. If it's not from 2019, I wouldn't trust it to be correct. Make sure that you're using the one acquired from the eqemu_server.pl download option.

The new vcpkg method seems to install a zlib dll from 2018 into the build directory, and that dll seems to cause this issue. If you do a select all -> copy -> paste from build to server install and this dll is present in build, it will overwrite your current server copy. As well, any older dependency-related copy will do it too.

The issue is related to build flags (mostly) and forces the compression to operate in single-thread mode. The copy obtained through the eqemu_server download is known to be correctly flagged (as of my commit to that repo).
Thanks again, Uleat.
So after a bit of experimenting, here are some notes:

I've been compiling with the Build Zlib flag set in CMake, so I didn't actually need or use a zlib1.dll in the folder. Not sure what that means; shouldn't it compile zlib in multithreaded mode in that case, or is it not configured for that in CMake?

I can untick the build-with-zlib option, in which case I do need the zlib1.dll in the folder to run it. However, I've been compiling in 64-bit mode, so the x86 zlib1.dll from the installer doesn't work with it. I can just compile an x86 version instead and use that zlib1.dll, which is what I'll try next; at least then I know for sure it is using the right zlib.