
Parallelized client / server think on listen server? #441

Open
MoeMod opened this issue Mar 2, 2019 · 6 comments

Comments

@MoeMod
Contributor

MoeMod commented Mar 2, 2019

void Host_Frame( float time )
{
#ifndef NO_SJLJ
	if( setjmp( host.abortframe ))
		return;
#endif

	Host_Autosleep();

	// decide the simulation time
	if( !Host_FilterTime( time ))
		return;

	rand (); // keep the random time dependent

	Sys_SendKeyEvents (); // call WndProc on WIN32

	Host_InputFrame ();	// input frame

#ifndef XASH_DEDICATED
	Host_ClientBegin(); // prepare client command
#endif

	Host_GetConsoleCommands ();

	Host_ServerFrame (); // server frame

	if ( !Host_IsDedicated() )
		Host_ClientFrame (); // client frame

	HTTP_Run();

	host.framecount++;
}

How about calling Host_ServerFrame and Host_ClientFrame concurrently on different threads to achieve better performance?
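
For illustration, here is a minimal sketch of that split using POSIX threads, assuming (which the replies below dispute) that the two frame functions were already safe to run concurrently. Everything except Host_ServerFrame and Host_ClientFrame is hypothetical.

#include <pthread.h>

void Host_ServerFrame( void );	// engine
void Host_ClientFrame( void );	// engine

static void *ServerFrame_Thread( void *unused )
{
	(void)unused;
	Host_ServerFrame();
	return NULL;
}

static void Host_ParallelFrame( void )
{
	pthread_t sv;

	// run the server tick on a worker thread...
	pthread_create( &sv, NULL, ServerFrame_Thread, NULL );

	// ...while the client tick runs on the main thread
	Host_ClientFrame();

	// both must finish before the next Host_Frame iteration
	pthread_join( sv, NULL );
}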

@mittorn
Member

mittorn commented Mar 2, 2019

The engine has many thread-unsafe places that can be reached from client/server exports, and two threads will not give much of a performance improvement.
It is better to run the renderer in an external thread, but that still needs many changes. We are working on an external render module. It will be easier to move it to a separate thread, but the client can still update entity data while it renders, so it will require waiting for the render to finish before parsing the next message. And all 2D and triapi drawing must be queued and drawn after the world/entity render ends. I do not know how to combine that with transparent/non-transparent passes. Maybe different queues?
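
As a rough illustration of that queuing idea, the client thread could record 2D/triapi calls into a command list that the render thread replays after the 3D passes. All of the types and functions below are invented for this sketch and are not engine API.

#include <stddef.h>

typedef enum { DRAW_PIC, DRAW_STRING, DRAW_TRIAPI } drawcmd_type_t;

typedef struct
{
	drawcmd_type_t	type;
	float		x, y;
	int		handle;	// picture / string / triapi batch id
} drawcmd_t;

#define MAX_DRAW_CMDS 4096

typedef struct
{
	drawcmd_t	cmds[MAX_DRAW_CMDS];
	size_t		count;
} drawqueue_t;

// client thread: record the call instead of drawing immediately
static void DrawQueue_Push( drawqueue_t *q, drawcmd_t cmd )
{
	if( q->count < MAX_DRAW_CMDS )
		q->cmds[q->count++] = cmd;
}

// render thread: replay everything after the world/entity passes have finished
static void DrawQueue_Flush( drawqueue_t *q, void (*execute)( const drawcmd_t * ))
{
	for( size_t i = 0; i < q->count; i++ )
		execute( &q->cmds[i] );
	q->count = 0;
}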

@MoeMod
Contributor Author

MoeMod commented Mar 2, 2019

Another way is to run Think() for multiple entities at once. However, that requires a thread-safe hlsdk/server.
Since most Android devices have good multi-core performance but weaker single-core performance, it should perform better when multiple threads are used.
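
A rough sketch of what fanning Think() out over worker threads could look like, assuming a thread-safe game DLL (which stock hlsdk is not). SV_RunThink stands in for the engine's per-entity think dispatch; the other names are hypothetical.

#include <pthread.h>

typedef struct edict_s edict_t;
void SV_RunThink( edict_t *ent );	// engine-side think dispatch (assumed)

#define THINK_THREADS 4

typedef struct
{
	edict_t	**ents;
	int	start, end;	// half-open range [start, end)
} think_slice_t;

static void *Think_Worker( void *arg )
{
	think_slice_t *s = (think_slice_t *)arg;

	for( int i = s->start; i < s->end; i++ )
		SV_RunThink( s->ents[i] );
	return NULL;
}

static void SV_ParallelThink( edict_t **ents, int count )
{
	pthread_t	threads[THINK_THREADS];
	think_slice_t	slices[THINK_THREADS];
	int		per = ( count + THINK_THREADS - 1 ) / THINK_THREADS;

	for( int t = 0; t < THINK_THREADS; t++ )
	{
		slices[t].ents  = ents;
		slices[t].start = t * per;
		slices[t].end   = ( t + 1 ) * per < count ? ( t + 1 ) * per : count;
		pthread_create( &threads[t], NULL, Think_Worker, &slices[t] );
	}

	// wait for every slice before the server frame continues
	for( int t = 0; t < THINK_THREADS; t++ )
		pthread_join( threads[t], NULL );
}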

@mittorn
Member

mittorn commented Mar 2, 2019

Is the server really a bottleneck? I do not see any performance difference between a listen server and client-only in HL. Is there any difference in CS? Maybe profile it first? If bots load the server that much, maybe it is simpler to move the bots to another thread?
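
One quick way to answer that would be to time the two halves of Host_Frame and print the split. A hypothetical sketch, assuming engine timer and console print functions with these shapes.

double	Sys_DoubleTime( void );		// engine high-resolution timer (assumed shape)
void	Host_ServerFrame( void );
void	Host_ClientFrame( void );
void	Con_Printf( const char *fmt, ... );	// engine console output (assumed shape)

static void Host_ProfileFrame( void )
{
	double t0 = Sys_DoubleTime();
	Host_ServerFrame();

	double t1 = Sys_DoubleTime();
	Host_ClientFrame();

	double t2 = Sys_DoubleTime();

	// compare where the frame time actually goes
	Con_Printf( "server %.2f ms, client %.2f ms\n",
		( t1 - t0 ) * 1000.0, ( t2 - t1 ) * 1000.0 );
}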

@MoeMod
Contributor Author

MoeMod commented Mar 3, 2019

Yep. Those zbots are so laggy; there is a significant fps drop after some zbots join.
Moving the bots to their own thread would still require the whole module to be thread-safe, so it would not be much easier than applying it to all entities.

Some APIs would have to be redesigned:

  1. AngleVectors (modifies gpGlobals->v_forward, etc.)
  2. TraceAttack (modifies gMultiDamage)
  3. MessageBegin

And there would be more; maybe thread_local in C11 helps (a sketch follows below).
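
A minimal C11 sketch of the thread_local idea applied to a gMultiDamage-style accumulator; the struct layout here is illustrative, not the real hlsdk type.

#include <threads.h>	// C11; _Thread_local also works without this header

typedef struct
{
	void	*pEntity;	// victim currently being accumulated against
	float	amount;		// summed damage
	int	type;		// damage bits
} multidamage_t;

// one accumulator per thread instead of one shared global
static thread_local multidamage_t gMultiDamage;

// sketch of a per-thread reset, not the real hlsdk implementation
void ClearMultiDamage( void )
{
	gMultiDamage.pEntity = NULL;
	gMultiDamage.amount  = 0.0f;
	gMultiDamage.type    = 0;
}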

@mittorn
Member

mittorn commented Mar 3, 2019

There is AngleVectorsPrivate, which does not modify global state, and it can be implemented in the server; it does not require modifying the engine.
The bots part can be rewritten to not affect global state; it is not connected with the engine very much.
Messages should not be sent from other threads; it is better to wrap the message API around a separate buffer in the bot's thread.
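
A sketch of that wrapping idea: the bot thread appends its message bytes into a private buffer, and the main server thread replays them through the real network code between frames. All names below are hypothetical.

#include <string.h>

#define MSG_QUEUE_SIZE 16384

typedef struct
{
	unsigned char	data[MSG_QUEUE_SIZE];
	int		used;
} msgqueue_t;

// bot thread: append raw message bytes instead of calling the engine message API
static int MsgQueue_Write( msgqueue_t *q, const void *buf, int len )
{
	if( q->used + len > MSG_QUEUE_SIZE )
		return 0;	// queue full; caller decides whether to drop or grow
	memcpy( q->data + q->used, buf, len );
	q->used += len;
	return 1;
}

// main thread: hand the buffered bytes to the real network code, then reset
static void MsgQueue_Flush( msgqueue_t *q, void (*send)( const void *, int ))
{
	if( q->used > 0 )
		send( q->data, q->used );
	q->used = 0;
}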

@a1batross
Member

Moving the whole server frame to a separate thread is possible and maybe easier than you think. Unkle Mike also did this, but didn't get any improvement and never published it.

There was an engine feature with different FPS for server and client in newer original engine builds, but it was removed later. You can use that as a base for the timers. In any case, keep memory access in mind to avoid race conditions. You will probably need to rewrite local networking to make it thread safe.
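
For reference, a minimal accumulator-based sketch of decoupled server and client rates in the spirit of that feature; the tick rate and wrapper name are assumptions, not the removed engine code.

void Host_ServerFrame( void );
void Host_ClientFrame( void );

#define SV_TICKRATE 60.0

static double sv_accumulator;

static void Host_DecoupledFrame( double frametime )
{
	const double sv_dt = 1.0 / SV_TICKRATE;

	// client runs every host frame at whatever rate the renderer manages
	Host_ClientFrame();

	// server catches up in fixed-size steps
	sv_accumulator += frametime;
	while( sv_accumulator >= sv_dt )
	{
		Host_ServerFrame();
		sv_accumulator -= sv_dt;
	}
}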
