Current server issues explained by a Backend Developer

I am an experienced backend developer and have worked for major banks and insurance companies. I have had my fair share of overloaded servers, server crashes, API errors and so on.

Let's start with some basic insight into server infrastructure and how the game's architecture might be designed.

Escape from Tarkov consists of multiple parts:

0.) PROXY Server

The proxy server distributes requests from game clients to the different servers (which I explain below). It performs basic authorization checks (launcher validity, client version, MAC address and so on) to decide whether a client is allowed to reach the servers at all. It also acts as a basic protection against DDoS attacks: proxy servers are usually able to detect when they are being targeted by bots and block or defer that traffic. This is a very complex field though, and there are providers that specialize in security and DDoS protection.
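
To make that concrete, here is a minimal sketch (in Python) of the kind of gatekeeping a proxy layer might perform before forwarding a request. The allowed version string, rate limit and function names are assumptions I made up for illustration, not anything BSG actually uses.

```python
import time
from collections import defaultdict

ALLOWED_CLIENT_VERSIONS = {"0.12.3.5985"}   # hypothetical version string
MAX_REQUESTS_PER_MINUTE = 60                # hypothetical rate limit

_request_log = defaultdict(list)            # ip -> timestamps of recent requests

def accept_request(ip: str, client_version: str, launcher_token: str | None) -> bool:
    """Return True if the request may be forwarded to the backend servers."""
    # 1. Basic client validation (launcher token present, version supported).
    if launcher_token is None or client_version not in ALLOWED_CLIENT_VERSIONS:
        return False

    # 2. Naive per-IP rate limiting as a crude bot/DDoS mitigation.
    now = time.time()
    recent = [t for t in _request_log[ip] if now - t < 60]
    if len(recent) >= MAX_REQUESTS_PER_MINUTE:
        _request_log[ip] = recent
        return False

    recent.append(now)
    _request_log[ip] = recent
    return True
```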

1.) Authorization / Login Server

When you start the launcher you need to log in, and then you start the game. The client receives a token which is used for your gaming session until you close the game again. Every time your client makes an API call to one of the servers I mention below, it also sends this token as identification. This is basically the first hurdle to clear when you want to get into the game. Once authorization is complete, the game starts and begins communicating with the following server:
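
As a rough illustration, a session token scheme could look like the sketch below, assuming an HMAC-signed token. The secret, payload fields and function names are invented for the example; I don't know how BSG actually signs their tokens.

```python
import base64
import hashlib
import hmac
import json
import time

SERVER_SECRET = b"example-secret-key"   # hypothetical signing key

def issue_session_token(account_id: str) -> str:
    """Login server: create a signed token after the launcher credentials check out."""
    payload = json.dumps({"account": account_id, "issued": int(time.time())}).encode()
    signature = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + signature

def verify_session_token(token: str) -> str | None:
    """Any backend server: check the token the client attaches to each API call."""
    try:
        encoded_payload, signature = token.split(".")
        payload = base64.urlsafe_b64decode(encoded_payload)
    except Exception:                       # malformed token
        return None
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None
    return json.loads(payload)["account"]   # caller gets the account id back
```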

2.) Item Server

Every time a player collects an item on a map and brings it out of the raid, these items need to be synced to the item server. The same happens when buying from traders or the flea market. The client or game server makes an API request (or several) to transfer items into the ownership of a player. The item server needs to work globally because we share inventory across all regions (NA / EU / Oceanic). The item server then updates a database in the background. Your PMC is really just an entry in that database, whose stash is modeled completely inside it. After the server has moved all the items in the database, it sends a confirmation to the client that these items have been moved successfully (or it sends an error, like that backend move error we get from time to time).
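
Here is a minimal sketch of what such an "item move" endpoint could look like, assuming a relational database behind it; the table and column names are invented for the example.

```python
import sqlite3

def move_items_to_stash(db: sqlite3.Connection, account_id: str, item_ids: list[str]) -> dict:
    """Persist raid loot into the player's stash, one transaction per request."""
    try:
        with db:  # one transaction: commits on success, rolls back on error
            for item_id in item_ids:
                db.execute(
                    "UPDATE items SET owner_id = ?, location = 'stash' WHERE id = ?",
                    (account_id, item_id),
                )
        return {"status": "ok", "moved": len(item_ids)}
    except sqlite3.Error:
        # Roughly the equivalent of the "backend move error" players see.
        return {"status": "error", "code": "backend_move_error"}
```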

The more people play the game, the more concurrent requests go to the server and database, potentially creating issues like overload or database write contention. Keep in mind that database consistency is of extreme importance: you don't want people to lose their gear or duplicate gear. This is why these database updates probably happen sequentially most of the time. For example, while you are moving gear (which hasn't been confirmed by the server yet) you can't buy anything from traders; these requests will queue up on the server side.
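
One plausible way to enforce that ordering is to serialize all inventory operations per player, as in this toy sketch; the real servers more likely use proper per-session queues, so treat this only as an illustration of the idea.

```python
import threading
from collections import defaultdict

# One lock per player: while a gear move is still unconfirmed, a trader
# purchase for the same player simply waits its turn.
_player_locks: dict[str, threading.Lock] = defaultdict(threading.Lock)

def run_inventory_operation(account_id: str, operation) -> None:
    """Run one inventory-changing operation; later calls for the same player block."""
    with _player_locks[account_id]:
        operation()   # e.g. move_items_to_stash(...) or a trader purchase
```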

Adding to the server load are people logging into the game, which triggers a "GET" request to the item server to fetch all their gear, insurance and so on. Depending on the PMC character this is A LOT OF DATA. You can optimize this by lazy loading some of it, but usually you just try to cache the data so that subsequent requests don't need to contain all the information again.
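
Caching that profile payload could look roughly like this; the TTL and names are assumptions, and a real system would also have to invalidate the cache whenever the inventory changes.

```python
import time

_profile_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 300   # assumed: cached profiles stay valid for 5 minutes

def get_profile(account_id: str, load_from_db) -> dict:
    """Return the cached profile if it is still fresh, otherwise reload and cache it."""
    cached = _profile_cache.get(account_id)
    if cached is not None:
        stored_at, profile = cached
        if time.time() - stored_at < CACHE_TTL_SECONDS:
            return profile                       # cheap: no database read
    profile = load_from_db(account_id)           # expensive: full database read
    _profile_cache[account_id] = (time.time(), profile)
    return profile
```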

The solution to this problem would be to create a so-called microservice architecture where you have multiple endpoints on the servers (let's call them API gateways), so that different regions (EU, NA and so on) query different endpoints of the item server, which then funnel the database updates into the same database server. It is of extreme importance that these API calls can be spread across different endpoints without breaking the ordering and consistency guarantees for a single client, and this is not easily done. This problem is not simply fixed by "GET MORE SERVERS!!!111"; the underlying architecture needs to support these servers. At first you would have more success by giving that one server very good hardware.
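
One simple way to keep a single player's calls consistent while still spreading load is to pin each account to one gateway within its region, for example via hashing; the gateway URLs below are placeholders, and this is only one of several possible designs.

```python
import hashlib

# Placeholder endpoints, not real BSG infrastructure.
GATEWAYS = {
    "EU": ["https://eu-1.items.example", "https://eu-2.items.example"],
    "NA": ["https://na-1.items.example"],
}

def pick_gateway(region: str, account_id: str) -> str:
    """Deterministically map a player to one gateway within their region,
    so all of that player's calls keep arriving at the same endpoint."""
    endpoints = GATEWAYS[region]
    digest = hashlib.sha256(account_id.encode()).digest()
    return endpoints[int.from_bytes(digest[:4], "big") % len(endpoints)]
```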

3.) Game Server

A game (or raid) can last anywhere from 45 to 60 minutes until all players and player scavs are dead and the raid has concluded. Just because you die in the first 10 minutes doesn't mean the game has ended. The more players have logged in to a server instance and the longer it needs to stay alive, the more load it carries. You need to find a balance between AI count, player count and player scav count: the more you allow to join a server, the faster the server quality degrades. This can be handled by smarter AI routines and by adjusting how many players and scavs can join. The game still needs to feel alive, so that is something which needs to be adjusted carefully.
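
To make the trade-off a bit more tangible, here is a toy heuristic for how many raids a single server might carry; every weight and the capacity number are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class RaidConfig:
    pmc_count: int          # real players
    player_scav_slots: int  # player scavs allowed to join later
    ai_scav_count: int      # AI scavs

def estimated_load(cfg: RaidConfig) -> float:
    """Relative cost of one raid; assumes an AI scav is cheaper than a real player."""
    return cfg.pmc_count * 1.0 + cfg.player_scav_slots * 0.8 + cfg.ai_scav_count * 0.4

def fits_on_server(raids: list[RaidConfig], capacity: float = 30.0) -> bool:
    """Check whether a set of raids stays under an assumed per-server budget."""
    return sum(estimated_load(r) for r in raids) <= capacity
```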

Every time you queue into a raid, a new server instance needs to be found together with all the people who queue at the same time. These instances are hosted on many servers across the globe in a one-to-many relationship, meaning that one server hosts multiple raids. To distribute this we have the so-called:

4.) Matchmaking Server

This is the one server responsible for distributing your desperate need to play the game to an actual game server. The matchmaking server tries to get several people with the same request (play Customs at daytime) together and reserves an instance of a game server (the matching phase). Once the instance has been found, the loot tables are created, the players are synchronized (we wait for people with slow PCs or network connections) and finally everyone is spawned onto the map. The loot table will probably be built by the item server again, because you want a centrally orchestrated loot economy, so again there is some communication going on.

When you choose your server region in the launcher and select a very specific region like MIAMI or something, it will only look for server instances in Miami and nowhere else. Since those might all be full while many other players are waiting, this can take a while. It is therefore beneficial to add more servers to your list; the chance of getting a game is a lot higher then.

What adds to the complexity are player groups. People who want to join a raid together usually have a lower queue priority and might see longer matching times.
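
A toy matchmaker for this could bucket waiting players by (region, map, time of day) and fill a raid once enough seats are taken, with groups waiting behind solo players; the raid size and all of the structure here are my assumptions, not BSG's actual logic.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

RAID_SIZE = 10   # assumed player count per raid, purely illustrative

@dataclass
class Ticket:
    account_id: str
    region: str       # e.g. "EU"
    map_name: str     # e.g. "Customs"
    daytime: str      # e.g. "day"
    group_size: int = 1

# Two FIFO queues per (region, map, time) bucket: solos are seated first,
# groups wait behind them, mirroring the lower group priority described above.
_solo_queues: dict[tuple, deque] = defaultdict(deque)
_group_queues: dict[tuple, deque] = defaultdict(deque)

def enqueue(ticket: Ticket) -> None:
    key = (ticket.region, ticket.map_name, ticket.daytime)
    (_solo_queues if ticket.group_size == 1 else _group_queues)[key].append(ticket)

def try_fill_raid(region: str, map_name: str, daytime: str) -> list[Ticket] | None:
    """Return a full lobby if enough players are waiting, otherwise None."""
    key = (region, map_name, daytime)
    lobby, seats = [], 0
    for q in (_solo_queues[key], _group_queues[key]):
        while q and seats + q[0].group_size <= RAID_SIZE:
            ticket = q.popleft()
            lobby.append(ticket)
            seats += ticket.group_size
    if seats == RAID_SIZE:
        return lobby        # hand this lobby to a reserved game server instance
    # Not full yet: requeue everyone in their original order and keep waiting.
    for ticket in reversed(lobby):
        target = _solo_queues[key] if ticket.group_size == 1 else _group_queues[key]
        target.appendleft(ticket)
    return None
```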

So you have some possibilities to reduce queue times here:

  • Add more game servers in each region (it usually takes time to order the servers, install the game server software and configure them to talk to all the correct APIs). This alone takes a few weeks of manpower and money.
  • Add more matchmaking servers. This is also not easily done, because they must not be allowed to interfere with each other (two matchmaking servers trying to load the same game server instance, for example; see the sketch after this list).
  • Allow more raid instances per game server. This might lead to bad gameplay experiences though (players warping, invisible players, bad hit registration, unlootable scavs and so on). It can be partially tackled by increasing server hardware specs.
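
For the second point, the core requirement is that reserving a game server slot must be a single atomic "claim it if it is still free" step. In a real deployment that claim would live in a shared store (for example a database unique constraint or a Redis SETNX), not in process memory as in this toy version.

```python
import threading

_reservation_lock = threading.Lock()
_reserved_instances: dict[str, str] = {}   # instance_id -> matchmaker_id

def try_reserve_instance(instance_id: str, matchmaker_id: str) -> bool:
    """Atomically claim a raid slot; returns False if another matchmaker won the race."""
    with _reservation_lock:
        if instance_id in _reserved_instances:
            return False
        _reserved_instances[instance_id] = matchmaker_id
        return True
```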

Conclusion:

If BSG started building Tarkov TODAY, they would probably handle things differently and try a different architecture (cloud microservices). But when the game first started out, they probably thought the game would be played by 30,000 players at most. You can handle those numbers with one central item server and one matchmaking server. Game servers are scalable anyway, so that shouldn't be a problem (or so they thought).

Migrating away from such a "monolithic" infrastructure takes a lot of time. There are hosting providers around the world who can help a lot (AWS, Azure, Gcloud), but they weren't that prevalent or reliable when BSG started developing Tarkov. The political situation also probably makes it harder to get a contract with these companies.

So before the Twitch event, the item servers were handling the load just fine. They had problems in the past which they were able to fix by adjusting logic on the server (need-to-know principle, reducing payload sizes, and things like that). Then they needed to add security to the API calls because of the flea market bots, all of which is very taxing on the item server. During the Twitch event things got worse because the item server was at its limit, therefore not allowing players to log in. The influx of new players resulted in high stress on the item server and its underlying database.

When they run into problems like these, it is not just a matter of adding more servers or upgrading their hardware. There are many more problems lying beneath, and many more components which can throw errors. All of that is hard to fix for a "small" company in Russia. You need money and, more importantly, the manpower to do it while also developing your game. This means that Nikita (whose primary job should be to write down his gameplay ideas as user stories) needs to get involved with server issues, slowing the progress of the game. So there is a trade-off here as well.

I want to add that I am not involved with BSG at all; a lot of this information comes from looking at network traffic and from experience.

And for the future: please just cut them some slack. This is highly complex stuff which is hard to fix if you didn't think about the problem a long time ago. It is sometimes hard to plan for the future (and for its success) when you are developing a "small" indie game.

Source: https://www.reddit.com/r/EscapefromTarkov/comments/erdozr/current_server_issues_explained_by_a_backend/
