Setting up a load-balanced Jitsi Meet instance
Jitsi Meet is a self-hosted Free and Open-Source Software (FOSS) video conferencing solution. During the recent COVID-19 pandemic the project became quite popular and many companies decided to host their own Jitsi instance.
There are many different ways to install and run Jitsi on a machine. A popular choice in the DevOps space is Docker via docker-compose, which is the method we used in our scenario.
While we at cynkra had been running our own Jitsi instance quite happily for some months, a slightly more challenging task was coming up: hosting a virtual meeting for approximately 100 participants.
The Challenge
cynkra actively supports the local Zurich R User Group. For one of their recent meetings, about 100 people RSVP’ed.
When browsing reports on the load capacity of a single Jitsi instance, one finds that the stock setup gets into trouble at around 35 participants and tends to go down at around 70. The limiting factor is said to be the "videobridge". One solution is to add a second videobridge to the Jitsi instance. Jitsi can then distribute the load across the bridges and should be able to host more than 100 people in a meeting.
The best approach to do this is to deploy the second videobridge on a new instance to avoid running into CPU limitations on the main machine. While there is a guide in the Jitsi Wiki and a video about it, many people struggle (1, 2) to get this set up successfully.
Hence, we thought it would be valuable to take another, hopefully simple and understandable, stab at explaining this task to the community.
Load-balancing Jitsi Meet
In the following, we will refer to the main machine on which Jitsi runs as MAIN. The second machine, which will only host a standalone videobridge, will be named BRIDGE.
1. The first step is to create a working installation on MAIN, following the official Docker guide from the Jitsi developers. There is no need to use Docker; an installation on the host system will also work. At this point we assume that you have already installed Jitsi with SSL support at a fictitious domain `<DOMAIN>`.
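For reference, the stock Docker setup on MAIN boils down to a handful of commands (a sketch of the official jitsi/docker-jitsi-meet quickstart as of the time of writing; check the upstream guide for the current steps, and note that the config directory layout may have changed since):

```
git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet
cp env.example .env        # base configuration, including JVB_AUTH_* variables
./gen-passwords.sh         # generate strong internal service passwords in .env
docker-compose up -d       # start web, prosody, jicofo and jvb containers
```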
2. To be able to connect to the XMPP server (managed by `prosody`) on MAIN from BRIDGE (details in point 4 below), port 5222 needs to be exposed to the public. This requires adding `ports: - "5222:5222"` to the `prosody` section in `docker-compose.yml` and ensuring that the port is open in the firewall (`ufw allow 5222`).
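In `docker-compose.yml` on MAIN, the addition can look as follows (a sketch; the service and image names follow the jitsi/docker-jitsi-meet defaults, so your file may differ slightly):

```yaml
# Excerpt from docker-compose.yml on MAIN -- only the prosody service is shown.
services:
  prosody:
    image: jitsi/prosody:latest
    ports:
      # Publish the XMPP client port so the videobridge on BRIDGE can reach it.
      - "5222:5222"
```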
3. On BRIDGE, start with the same `.env` and `docker-compose.yml` as on MAIN. In `docker-compose.yml`, remove all services besides `jvb`; the videobridge will later connect to all services on MAIN. Make sure that `JVB_AUTH_USER` and `JVB_AUTH_PASSWORD` in `.env` are the same as on MAIN, otherwise the authentication will fail.
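After trimming, the compose file on BRIDGE could look roughly like this (a sketch; keep whatever ports and volumes the stock file defines for `jvb`, as the defaults vary between releases):

```yaml
# Sketch of docker-compose.yml on BRIDGE: only the jvb service remains.
version: '3'
services:
  jvb:
    image: jitsi/jvb:latest
    restart: unless-stopped
    ports:
      - "10000:10000/udp"  # media port used by the videobridge
    env_file:
      - .env  # must contain the same JVB_AUTH_USER/JVB_AUTH_PASSWORD as MAIN
```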
4. On BRIDGE, change `XMPP_SERVER=xmpp.<DOMAIN>` to `XMPP_SERVER=<DOMAIN>` in `.env`.
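The change can be made by hand or scripted, for example with `sed`. The snippet below demonstrates the substitution on a throwaway copy using a made-up domain; in practice you would run it against the real `.env` on BRIDGE:

```shell
# Demonstration with a throwaway file; example.com stands in for <DOMAIN>.
printf 'XMPP_SERVER=xmpp.example.com\n' > /tmp/env-demo

# Drop the "xmpp." prefix so the bridge targets MAIN's public hostname.
sed -i 's/^XMPP_SERVER=xmpp\./XMPP_SERVER=/' /tmp/env-demo

cat /tmp/env-demo  # XMPP_SERVER=example.com
```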
5. Run `docker-compose up` and observe what happens. The videobridge should successfully connect to `<DOMAIN>`. On MAIN, in `docker logs jitsi_jicofo_1`, an entry should appear denoting that a new videobridge was successfully connected. It looks like this:

   ```
   Jicofo 2020-10-23 19:01:52.173 INFO: [29] org.jitsi.jicofo.bridge.BridgeSelector.log() Added new videobridge: Bridge[jid=jvbbrewery@internal-muc.<DOMAIN>/d789de303e9b, relayId=null, region=null, stress=0.00]
   ```
Jicofo 2020-10-23 19:01:52.173 INFO: [29] org.jitsi.jicofo.bridge.BridgeSelector.log() Added new videobridge: Bridge[jid=jvbbrewery@internal-muc.<DOMAIN>/d789de303e9b, relayId=null, region=null, stress=0.00]