  #1
11-21-2003, 10:53 AM
Merth
Dragon
Join Date: May 2003
Location: Seattle, WA
Posts: 609
Zone Threads

I'm trying to track down the LD (link dead) issues encountered when a server is under heavy load. It could be any number of things, so we might as well start at the bottom.

Here's a breakdown of the threads in EQEMu's zone servers. If you see something that should be shifted, or potential improvements to the logic, please comment.

Main Thread
Function: main()
Quote:
Handles main() and all core program logic.
  1. Parses command line
  2. Loads various cache stores
  3. Creates TCP connection to world server
  4. Starts infinite processing loop:
    1. Process world server socket data received
      NOTE: Only processes data already received on the TCP thread
    2. Process whatever logic is necessary for each entity in zone (i.e., AI, client HandlePacket(), etc.)
      NOTE: Sending/receiving of client data is not handled on this thread. This thread only processes the data.
    3. Refresh world server ping, db variables, /who all
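For reference, a minimal sketch of the shape of that loop. This is illustrative only, not the actual EQEmu code; the class names (WorldConnection, EntityList, Timer) and the ping interval are placeholders:
Code:
// Minimal sketch of the main-thread loop described above. All names here
// (WorldConnection, EntityList, Timer) are placeholders, not actual EQEmu classes.
#include <chrono>
#include <thread>

struct WorldConnection {
    void ProcessReceivedData() {}   // drain data the TCP thread already queued
    void SendPing() {}
};
struct EntityList {
    void Process() {}               // NPC AI, client HandlePacket(), timers, etc.
};
struct Timer {
    explicit Timer(std::chrono::milliseconds i)
        : interval(i), last(std::chrono::steady_clock::now()) {}
    bool Check() {
        auto now = std::chrono::steady_clock::now();
        if (now - last >= interval) { last = now; return true; }
        return false;
    }
    std::chrono::milliseconds interval;
    std::chrono::steady_clock::time_point last;
};

int main() {
    WorldConnection world;
    EntityList entities;
    Timer ping_timer(std::chrono::minutes(1));   // interval is a guess

    bool run_loops = true;
    while (run_loops) {
        world.ProcessReceivedData();   // 1. process world server data already received
        entities.Process();            // 2. per-entity logic; no client socket I/O here
        if (ping_timer.Check())        // 3. periodic refresh: world ping, db vars, /who all
            world.SendPing();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return 0;
}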
TCP Thread
Function: TCPConnectionLoop()
Quote:
Handles receiving and sending of data between zone and world servers, using TCP. Not much goes on here.

UDP Thread
Function: EQNetworkServerLoop()
Quote:
Handles receiving and sending of data between zone and all clients. This is the thread that must be highly optimized to work with client/zone UDP traffic.
  1. Socket is opened in SOCK_DGRAM mode (UDP, connectionless)
  2. SO_RCVBUF is set to 64kb
  3. SO_SNDBUF is set to 64kb
  4. Socket is bound to INADDR_ANY (all IP addresses)
  5. Socket is set to nonblocking mode
  6. Enters infinite processing loop:
    1. recvfrom() is called on the server socket with a buffer size of 1,518 bytes. Not sure why this number was chosen.
    2. A virtual connection is established for each packet of data received. A virtual connection is treated as a 'Client' class.
    3. Verifies the checksum on the received packet, if any
    4. Iterates through all client connections, looking for a match on who the packet belongs to
    5. If a match is found, the entire packet is decrypted. If the packet is a fragment of a larger packet, all fragmented-packet processing is done as well.
      NOTE: This is a part I believe we can optimize
    6. All (virtual) connections are checked for validity. Connections deemed no longer valid are removed.
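For illustration, the socket setup steps above look roughly like this with POSIX calls. This is a Linux-flavoured sketch only; the real code also has a Winsock path, and the port number here is just an example:
Code:
// Sketch of the UDP socket setup steps listed above, using POSIX calls on Linux.
// Illustrative only; the actual EQEmu code also has a Windows (Winsock) path.
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);            // connectionless UDP socket

    int bufsize = 64 * 1024;                               // 64 KB kernel buffers
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize));
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize));

    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);              // listen on all interfaces
    addr.sin_port        = htons(7000);                    // example zone port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    fcntl(sock, F_SETFL, O_NONBLOCK);                      // nonblocking mode

    char buffer[1518];                                     // one Ethernet frame's worth
    for (;;) {
        sockaddr_in from{};
        socklen_t fromlen = sizeof(from);
        ssize_t len = recvfrom(sock, buffer, sizeof(buffer), 0,
                               reinterpret_cast<sockaddr*>(&from), &fromlen);
        if (len > 0) {
            // match 'from' against known virtual connections, verify checksum,
            // decrypt, handle fragments, etc. (steps 2-6 above)
        }
        usleep(1000);                                      // don't spin the CPU
    }
    close(sock);
}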

Async TCP Thread
Quote:
I didn't review this thread
Async DB Thread
Quote:
I didn't review this thread
  #2
11-21-2003, 10:56 AM
Merth
Dragon
Join Date: May 2003
Location: Seattle, WA
Posts: 609

One optimization I have seen so far is on the UDP Thread.

Currently, we are receiving data on the socket and blocking remaining thread operations until that packet has been decrypted. I believe this could be optimized by pushing the processing onto another thread, and allowing the UDP thread to continue socket operations.

Thoughts?
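A rough sketch of that hand-off, assuming a simple mutex-protected queue between the UDP thread and a worker thread. The names and structure here are illustrative, not the actual zone code:
Code:
// Hypothetical sketch of handing received packets off to a worker thread so the
// UDP thread can go straight back to recvfrom(). Names are illustrative only.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct RawPacket { std::vector<char> data; /* plus source address, length, ... */ };

std::queue<RawPacket>   packet_queue;
std::mutex              queue_mutex;
std::condition_variable queue_cv;

// Called by the UDP thread right after recvfrom(): just copy and enqueue.
void EnqueuePacket(RawPacket pkt) {
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        packet_queue.push(std::move(pkt));
    }
    queue_cv.notify_one();
}

// Worker thread: does the decryption / fragment reassembly off the UDP thread.
void PacketWorker() {
    for (;;) {
        std::unique_lock<std::mutex> lock(queue_mutex);
        queue_cv.wait(lock, [] { return !packet_queue.empty(); });
        RawPacket pkt = std::move(packet_queue.front());
        packet_queue.pop();
        lock.unlock();

        // DecryptPacket(pkt); MatchToClient(pkt); HandleFragments(pkt); ...
    }
}

int main() {
    std::thread worker(PacketWorker);
    // ... UDP thread loop calls EnqueuePacket() after each recvfrom() ...
    worker.join();   // never returns in this sketch; the worker loops forever
}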
  #3
11-21-2003, 11:50 AM
krich
Hill Giant
Join Date: May 2003
Location: The Great Northwest
Posts: 150
Re: Zone Threads

Quote:
Originally Posted by Merth
recvfrom() is called on server socket with buffer size of 1,518 bytes. Not sure why this number was chosen.
That's the MTU of an Ethernet frame... Perhaps that's why.

Hey! This means that EQEMu won't be efficient on Token Ring or FDDI! Bah! :P

Regards,

krich
  #4
11-21-2003, 12:12 PM
kai_shadowbane
Sarnak
Join Date: Sep 2003
Posts: 67

1) Shouldn't the /who all only be refreshed when it is actually called, or is that refresh for server-side use, so the server itself knows who is online?

2) Wouldn't it be easier to keep a single (or multiple) virtual connection(s) alive (statically) and route through that, rather than opening one per packet and then closing it?
__________________
The downside of being better than everyone else is that people have a tendency to think you're pretentious.
  #5
11-30-2003, 05:38 PM
DeletedUser
Fire Beetle
Join Date: Sep 2002
Posts: 0

Async TCP Thread
If I remember correctly, this is a brief thread that handles the connecting process of the TCP socket and ends once the connection is established. I thought it was no longer used, however.

Async DB Thread
Allowed database queries to be run without blocking the main thread.
Quote:
  1. Maintains a queue of commands to be processed.
  2. Allows a configurable wait period before executing the database call, and passes back a job id# that can be used to cancel the query.
  3. After the query is complete, can either:
    1. Call a function in the DBThread execution timeslice.
    2. Pass back an event to the main thread (the pending event list is checked as part of main's loop).
    3. Do nothing.
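A stripped-down sketch of that kind of job queue, purely for illustration; the class and field names here are hypothetical and not taken from the actual DB thread code:
Code:
// Hypothetical sketch of the async DB job queue described above: each job gets an
// id (so it can be cancelled), an optional delay before it runs, and one of the
// three completion behaviours. None of these names come from the actual EQEmu code.
#include <chrono>
#include <functional>
#include <map>
#include <mutex>
#include <string>

enum class OnDone { CallInDBThread, PostEventToMain, Nothing };

struct DBJob {
    int                       id = 0;
    std::string               query;
    std::chrono::milliseconds delay{0};   // configurable wait before executing
    OnDone                    action = OnDone::Nothing;
    std::function<void()>     callback;   // used when action == CallInDBThread
};

class DBAsyncQueue {
public:
    // Queue a job; returns the job id the caller can later use to cancel it.
    int Add(DBJob job) {
        std::lock_guard<std::mutex> lock(mutex_);
        int id = next_id_++;
        job.id = id;
        jobs_[id] = std::move(job);
        return id;
    }
    // Cancel a pending job; only succeeds if the DB thread hasn't run it yet.
    bool Cancel(int id) {
        std::lock_guard<std::mutex> lock(mutex_);
        return jobs_.erase(id) > 0;
    }
    // The DB thread would pop jobs whose delay has elapsed, run the query, then
    // either call the callback in its own timeslice, post an event for the main
    // loop to pick up, or do nothing, per 'action'.
private:
    std::mutex           mutex_;
    std::map<int, DBJob> jobs_;
    int                  next_id_ = 1;
};

int main() {
    DBAsyncQueue queue;
    DBJob job;
    job.query  = "SELECT * FROM variables";   // example query only
    job.delay  = std::chrono::seconds(5);
    job.action = OnDone::PostEventToMain;
    int id = queue.Add(std::move(job));
    queue.Cancel(id);                          // usage: cancel before it runs
    return 0;
}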
While the network code is logically complex, I don't think it's very slow in its execution. There's some performance counter code kicking around somewhere in the project that you can use as a high-precision count of the processor time used by a chunk of code (search for "QueryPerformanceCounter"; only works on Windows).

However, on high-use servers, perhaps the OS-level buffer on the UDP socket is being overflowed because we're not clearing it fast enough. That's the buffer SO_RCVBUF is already making larger than the default. Having another thread clear it the same way we're doing now won't help, because the minimum sleep(1) time on Windows machines is 10ms, which may be enough time for the buffer to fill up. The fix for this on Windows is to use completion ports (that is, a blocking function on the socket that only returns when there's data; faster, but it requires its own thread). Someone help me out if there's a Linux equivalent.
Reference: http://msdn.microsoft.com/msdnmag/issues/1000/Winsock/
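For reference, the usual QueryPerformanceCounter pattern looks roughly like this. Windows only, and this is the generic Win32 idiom, not the project's own perf-counter code:
Code:
// Rough sketch of timing a block of code with QueryPerformanceCounter (Windows only).
// Generic Win32 pattern, not the performance-counter code already in the project.
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);

    // ... code you want to measure, e.g. one pass of the UDP receive loop ...

    QueryPerformanceCounter(&end);
    double ms = 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}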
  #6
12-06-2003, 06:47 PM
Aaburog
Fire Beetle
Join Date: May 2003
Posts: 13
Re: Zone Threads

Quote:
Originally Posted by krich
Quote:
Originally Posted by Merth
recvfrom() is called on server socket with buffer size of 1,518 bytes. Not sure why this number was chosen.
That's the MTU of an Ethernet frame... Perhaps that's why.

Hey! This means that EQEMu won't be efficient on Token Ring or FDDI! Bah! :P

Regards,

krich
That's the *default* value for the MTU of an Ethernet frame. And while it is by far the most common value, it's by no means a given that every node has that value. Perhaps read the correct/current MTU in the given runtime environment to avoid possible fragmentation? Or at least debug this code while sniffin' in a nonstandard environment to see if this is a real problem?
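For what it's worth, on Linux the current MTU of an interface can be read with an ioctl(); a minimal sketch, assuming the interface name is known ("eth0" here is just an example). Windows would need a different API entirely:
Code:
// Minimal sketch of reading an interface's current MTU on Linux via SIOCGIFMTU.
// The interface name "eth0" is an example; this is Linux-specific.
#include <cstdio>
#include <cstring>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    ifreq ifr{};
    std::strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    if (ioctl(sock, SIOCGIFMTU, &ifr) == 0)
        printf("MTU of %s: %d\n", ifr.ifr_name, ifr.ifr_mtu);
    else
        perror("SIOCGIFMTU");

    close(sock);
    return 0;
}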
  #7
12-08-2003, 03:02 AM
krich
Hill Giant
Join Date: May 2003
Location: The Great Northwest
Posts: 150
Re: Zone Threads

Quote:
Originally Posted by Aaburog
That's the *default* value for the MTU of an Ethernet frame. And while it is by far the most common value, it's by no means a given that every node has that value. Perhaps read the correct/current MTU in the given runtime environment to avoid possible fragmentation? Or at least debug this code while sniffin' in a nonstandard environment to see if this is a real problem?
Actually, you might have something there... for dialup guys or PPPoE (PPP over Ethernet) DSL subscribers. Isn't the MTU for a PPP connection (on Windows) something like 576? I don't claim to be a Windows guru by any means, though...

Regards,

krich
  #8
12-08-2003, 04:04 AM
Trumpcard
Demi-God
Join Date: Jan 2002
Location: Charlotte, NC
Posts: 2,614

Most people tweak their MTUs to reduce fragmentation, and you're right, most are a lot lower than 1500.

I'm not sure how Windows handles it, but I think 576 is considered an 'optimized' MTU for a dialup connection.
__________________
Quitters never win, and winners never quit, but those who never win and never quit are idiots.
  #9
12-08-2003, 07:40 AM
kathgar
Discordant
Join Date: May 2002
Posts: 434

Note: I'm not 100% sure of any of this, and it's being written rather hastily as I'm busy, but meh. I also have not had the time to look over the net code.
The 1518-byte recvfrom() buffer is fine. This buffer is for what the server is receiving, not what it is sending. Even if someone is using a higher MTU than normal, the packet may still be getting fragmented by a router along the path(?). Reading the MTU of the server really does you no good either; you would need to know the MTU of the clients. I also believe that the netcode reuses this buffer for every packet. It would be bothersome to figure out which connection the packet belongs to, then allocate the buffer size you want and read the packet in. Also note that I can only think of a handful of packets that will even come close to this size. On the server sending side: items, mass spawns, player profile, guild list, /who, petitions, and maybe some of the GM commands. As for packets approaching this size that the client sends, the only one that comes to mind is /bug, and obviously it is not used often.
__________________
++[>++++++<-]>[<++++++>-]<.>++++[>+++++<-]>[<
+++++>-]<+.+++++++..+++.>>+++++[<++++++>-]<+
+.<<+++++++++++++++.>.+++.------.--------.>+.
  #10
12-08-2003, 08:09 PM
Aaburog
Fire Beetle
Join Date: May 2003
Posts: 13

I believe the only pertinent MTU in any connection is the one between *here* and *there* :p.

To be more precise, here := the node you're on, and there := the next hop (usually your nearest friendly router).

Along the way, even if the packet traverses 23 hops or whatever, each point-to-point MTU can be different. Hence fragmentation, the flags to deal with it, and the entire TCP suite. But when coding, one needn't worry about the rest of the net, just your neighbor. You can reasonably expect each hop to be as optimized MTU-wise as its admin can work out.

Anywhoo, I still think it might be worthwhile to test one server with an odd MTU between it and its default gateway, and see whether that server, when used, produces *way* too many fragmented packets or just a 'reasonable' amount, whatever you want that reasonable threshold to be.

I don't know your code like you do, but kathgar was right in focusing on the packet types the server deals with that are larger than said MTU. Smaller ones don't matter, of course. So if you do a /who all and it generates a packet, say, 2k large, the buffer identified by Merth might cause problems due to the CPU time required to defragment, and due to the various bits of network lag that slow down the receipt of all the fragments of a given packet. Gotta wait for 'em all to be there, and all.

Just food for thought. Enjoy the meal. And I'm tired and likely blithering on too long to folks who already got the gist of the post... g'nite y'all.
  #11
12-09-2003, 07:18 AM
kathgar
Discordant
Join Date: May 2002
Posts: 434

Again, that is the size of the buffer on *RECEIVING* packets NOT *SENDING* packets. The command the client sends for /who all is NOT that big. The RESPONSE which would NOT BE IN THIS BUFFER would be much larger (depending on your admin level and the number of players). Also note that this is all UDP, with custom control code.
__________________
++[>++++++<-]>[<++++++>-]<.>++++[>+++++<-]>[<
+++++>-]<+.+++++++..+++.>>+++++[<++++++>-]<+
+.<<+++++++++++++++.>.+++.------.--------.>+.
  #12
12-14-2003, 02:32 PM
DeletedUser
Fire Beetle
Join Date: Sep 2002
Posts: 0

Actually, the EQ network layer never allows a packet of more than ~550ish bytes, so we could lower it to that. ;p (Probably so that there would never be IP-layer fragmentation on dialup connections.) But it's irrelevant; there's no downside to having it oversized (wasting what, 1k per zone load? It's a static buffer, so the oversize isn't passed along).

Thinking about this, the fix is probably to change the socket to blocking mode, make another thread, and leave it blocked on recv() forever. That's probably the Linux way of handling this, and it should work on Windows too. However, I'm guessing it'll cause problems on zone shutdown getting that recv call to unblock for a reason other than incoming data.
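A sketch of that blocking-recv-on-its-own-thread idea, in POSIX terms; the shutdown problem mentioned above is deliberately ignored here, and the port is just an example:
Code:
// Sketch of the "blocking recvfrom() on a dedicated thread" idea described above.
// POSIX/Linux flavour, illustrative only; shutdown handling (unblocking the recv
// when the zone closes) is deliberately left out, as noted in the post.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void ReceiveLoop(int sock) {
    char buffer[1518];
    for (;;) {
        sockaddr_in from{};
        socklen_t fromlen = sizeof(from);
        // Socket is left in (default) blocking mode: this call sleeps in the kernel
        // until a datagram arrives, so the OS receive buffer is drained promptly
        // without any sleep(1)/polling delay.
        ssize_t len = recvfrom(sock, buffer, sizeof(buffer), 0,
                               reinterpret_cast<sockaddr*>(&from), &fromlen);
        if (len < 0)
            break;                 // error (or the socket was closed out from under us)
        // hand the datagram off to the processing side (queue, etc.)
    }
}

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7000);   // example port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    std::thread receiver(ReceiveLoop, sock);
    // ... rest of the zone server runs here ...
    receiver.join();
    close(sock);
}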

And technically krich was right: PPP or PPPoE frames != "ethernet frame". ;p
  #13
12-14-2003, 04:08 PM
kathgar
Discordant
Join Date: May 2002
Posts: 434

Linux has both POSIX.4 async I/O and a port of completion ports. I also think that read() is non-reentrant and that we should use mmap()... I don't know... I'm sick, this is a quick post, and I don't have my resources available at the moment.
__________________
++[>++++++<-]>[<++++++>-]<.>++++[>+++++<-]>[<
+++++>-]<+.+++++++..+++.>>+++++[<++++++>-]<+
+.<<+++++++++++++++.>.+++.------.--------.>+.