Hardware Requirements for a complete high performance backend #18

Open
sistein opened this issue Jun 1, 2020 · 12 comments

@sistein

sistein commented Jun 1, 2020

After getting the news that the HPB got open sourced and is free for everyone to use (thank you!), I'm considering an installation for the local student representation (to start things ;-) ).

Unfortunately, I found no information at all regarding the server requirements and the number of servers needed. What I did understand is that it should be separated from the normal NC since the same ports will be used.

I would propose updating the readme regarding:

  • Which services can be combined and which should be separated.
  • What requirements exist for small/medium/large demands, especially which parts need what kind of resource (CPU/bandwidth/etc.).
@kennymario3012


For sure, I need this too.

@peacepenguin

Since this is a recently open-sourced product, there may not be any documentation to share, even from the vendor. So we, the community, can help fill this void for now:

In my setup, the host has a Xeon Silver 2110 processor, with 3 cores assigned to the HPBE VM (separate from the Nextcloud VM). Each user I add to a room seems to add about 2% CPU load to the HPBE VM and about 1.5 Mbps for their video feed, both ways. RAM usage didn't change noticeably in my testing with just a few users.

A system requirements doc is pretty important, but the good news is that the HPBE app on its own is really small and lightweight. The resource that gets used up quickly is bandwidth, and that's easy to calculate: roughly 1.5 Mbps per user in both directions.

Also, the HPBE app itself is essentially just an API interface for Nextcloud to use Janus and the VideoRoom SFU functionality that Janus provides, so most of the system requirements will be related to the Janus app/daemon. That question, too, has somewhat limited info, but a good place to start is the FAQ entry on the performance of the Janus system that supports the WebRTC and A/V aspects of the HPBE:

https://janus.conf.meetecho.com/docs/FAQ.html#benchmark
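
Putting those rough numbers together, a quick back-of-the-envelope estimate could look like this; the per-user CPU and bandwidth figures are just the observations from the small test above, so treat them as ballpark only:

```python
# Ballpark sizing from the observations above: ~2% of the HPBE VM's CPU and
# ~1.5 Mbps per user in each direction. These are rough numbers from one
# small test, not official requirements.

CPU_PER_USER = 0.02     # fraction of the HPBE VM's CPU per connected user (observed)
MBPS_PER_USER = 1.5     # Mbps per connected user, each direction (observed)

def estimate(users):
    cpu_load = users * CPU_PER_USER        # rough CPU load on the HPBE VM
    bandwidth = users * MBPS_PER_USER      # Mbps needed in each direction
    return cpu_load, bandwidth

for users in (5, 10, 25, 50):
    cpu_load, bandwidth = estimate(users)
    print(f"{users:3d} users: ~{cpu_load:.0%} CPU, ~{bandwidth:.0f} Mbps each way")
```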

@fancycode
Member

The signaling server itself requires very few resources (CPU / bandwidth), as it only forwards messages between clients.

In most cases, the environment will be network-bound by the audio/video/screensharing streams between the clients. The maximum bandwidth per stream can be configured in server.conf at

# The maximum bitrate per publishing stream (in bits per second).

and

# The maximum bitrate per screensharing stream (in bits per second).

As @peacepenguin mentioned above, most of the requirements will be for running Janus.
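
For reference, these should be the maxstreambitrate and maxscreenbitrate options in the [mcu] section of the sample server.conf.in; the numbers below are only example values in bits per second, not recommendations:

```ini
[mcu]
# The maximum bitrate per publishing stream (in bits per second).
# Example value: 1 MBit/s.
maxstreambitrate = 1048576

# The maximum bitrate per screensharing stream (in bits per second).
# Example value: 2 MBit/s.
maxscreenbitrate = 2097152
```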

@lbdroid

lbdroid commented Jan 4, 2021

"What I did understand is that it should be separated from the normal NC since same ports will be used."

Well... the part you say you understood, you apparently did not understand correctly. I think I read somewhere that it is recommended to run it on a separate server, but it is by no means required. Certainly not because of port conflicts. The only user facing ports that could be in conflict are 80/443, but your web server can handle this conflict using the domain name (assign a separate domain name for signaling). Backend services can, of course, be assigned available port numbers as needed.

@jakobroehrl

# The maximum bitrate per publishing stream (in bits per second).

and

# The maximum bitrate per screensharing stream (in bits per second).

@fancycode
Thanks for these parameters. This will change the rate for all conferences, right?
Would it be possible to change this value automatically for a conference if there is a participant with a bad connection?
Or would it even be possible to send a lower-quality stream only to that participant with the bad connection?

@fancycode
Member

@jakobroehrl yes, these settings apply to all streams of the signaling server instance.

Changing the stream per connection is more of a client-side thing; you can follow nextcloud/spreed#5535 for the implementation of simulcast in Nextcloud Talk, which will allow exactly this: letting the client decide which quality / bandwidth it wants to subscribe to.

@jakobroehrl

@fancycode
Thanks, that sounds nice!

Last question: the last conference was bad, so I want to reduce the bitrate for the next try.
How can I find a good bitrate? Could every participant do a speed test (e.g. https://librespeed.org/) and I take the lowest value?
Is it true that every participant needs at least 1 MBit/s in up- and download if I want to set the bitrate to 1 MBit/s?

@fancycode
Member

Each client that publishes needs the configured bandwidth as upstream, and that bandwidth times the number of streams it subscribes to as downstream. For example, in a meeting with 5 users where everybody publishes audio/video with a maximum of 1 MBit/s, everybody needs 1 MBit/s upstream and 4 MBit/s downstream. The server will need 5 MBit/s downstream and 20 MBit/s upstream.

Without the HPB, each client would need 4 MBit/s upstream and 4 MBit/s downstream (+ more CPU requirements to perform the 4 individual video encodings).
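
To redo this arithmetic for other room sizes and bitrates, here is a small sketch of the same calculation, assuming N participants who all publish one audio/video stream at the configured maximum bitrate:

```python
def bandwidth(participants, bitrate_mbps=1.0):
    """Per-client and server bandwidth (in MBit/s) when every participant
    publishes one audio/video stream at the configured maximum bitrate."""
    others = participants - 1

    # With the HPB (SFU): publish once, subscribe to every other stream.
    client_up_hpb = bitrate_mbps                       # 1 published stream
    client_down_hpb = others * bitrate_mbps            # N-1 subscribed streams
    server_down = participants * bitrate_mbps          # N incoming streams
    server_up = participants * others * bitrate_mbps   # N*(N-1) forwarded streams

    # Without the HPB (full mesh): send and receive N-1 streams directly.
    client_up_p2p = others * bitrate_mbps
    client_down_p2p = others * bitrate_mbps

    return {
        "client up/down with HPB": (client_up_hpb, client_down_hpb),
        "server down/up with HPB": (server_down, server_up),
        "client up/down without HPB": (client_up_p2p, client_down_p2p),
    }

# The 5-user example from above: 1/4 MBit/s per client, 5/20 MBit/s on the server.
for name, (first, second) in bandwidth(5, 1.0).items():
    print(f"{name}: {first:g} / {second:g} MBit/s")
```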

@lbdroid

lbdroid commented May 10, 2021

(+ more CPU requirements to perform the 4 individual video encodings)

Would it really do 4 individual video encodings? That seems like a very poor design -- it should just do one encoding and send it to 4 recipients.

@fancycode
Member

Well, that's how peer-to-peer works. A client might have a different connection to each of the other participants, so it makes sense to potentially send different streams to different peers.

@jakobroehrl

Each client that publishes needs the configured bandwidth as upstream, and that bandwidth times the number of streams it subscribes to as downstream. For example, in a meeting with 5 users where everybody publishes audio/video with a maximum of 1 MBit/s, everybody needs 1 MBit/s upstream and 4 MBit/s downstream. The server will need 5 MBit/s downstream and 20 MBit/s upstream.

Thanks, that was very helpful.
Would it be possible to send all data to the server and not directly to all the users?
If yes, this would lead to 1 MBit/s upstream and 1 MBit/s downstream for every user, right?

@fancycode
Member

Would it be possible to send all data to the server and not directly to all the users?

With the HPB, this is what already happens: a publisher sends its stream to the server only once, and each subscriber receives it from there. The server doesn't mix the streams into one combined video, though, so every subscriber still has to receive each published stream individually.

It is only without an HPB that publishers send their streams to each participant individually.
