Get SSH access and log in to the server

We’ll use an Nginx server and configure it to work as a load balancer. To do that, we first need to install Nginx.
Step 1 – Installing Nginx
sudo apt update
sudo apt install nginx
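
Once the packages are installed, you can confirm which version you got (the exact version number depends on your Ubuntu release):
nginx -v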

Step 2 – Adjusting the Firewall
sudo ufw app list

sudo ufw allow 'Nginx HTTP'

sudo ufw status
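
If the rule was added, sudo ufw status should show something along these lines (assuming ufw is active; other rules on your server may also appear):

Status: active

To                         Action      From
--                         ------      ----
Nginx HTTP                 ALLOW       Anywhere
Nginx HTTP (v6)            ALLOW       Anywhere (v6)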

Step 3 – Checking your Web Server
systemctl status nginx
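
You can also make sure the default page is being served. Assuming a stock installation, a HEAD request to localhost should return HTTP/1.1 200 OK:
curl -I http://localhost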

Step 4 – Configuring Nginx to work as a load balancer

sudo nano /etc/nginx/sites-available/default
Change the config; in my case it looked like this:
# Define which servers to include in the load-balancing scheme.
# It’s best to use the servers’ private IPs for better performance and security.
# You can find the private IPs in your UpCloud control panel's Network section.
# Note: this file is already included inside the http block of nginx.conf,
# so the upstream and server blocks go at the top level here (no http { } wrapper).
upstream server {
    server domain-1.com;
    server domain-2.com;
    server domain-3.com;
}

# This server block accepts all HTTPS traffic on port 443 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass target need to match.
server {
    # Add these listen directives if you terminate HTTPS at the load balancer
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name localhost;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        proxy_pass http://server;
        # Pass through WebSocket upgrade requests
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        # Forward the real client address to the upstream servers
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Very important: controls the proxied WebSocket connection timeout
        proxy_read_timeout 600s;
    }
}
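
Before restarting, you can check the new configuration for syntax errors. Running nginx -t validates the config files without affecting the running service:
sudo nginx -t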

Now restart Nginx
sudo service nginx restart

Load balancing methods
Load balancing with Nginx uses a round-robin algorithm by default if no other method is defined, as in the example above. With the round-robin scheme, each server is selected in turn according to the order you list them in the upstream block. This balances the number of requests equally, which works well for short operations.
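
For reference, a plain round-robin upstream needs no extra directives; the addresses below are placeholders for your own backends:
upstream backend {
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}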

Least Connection
Least connections based load balancing is another straightforward method. As the name suggests, this method directs the requests to the server with the least active connections at that time. It works more fairly than round-robin would with applications where requests might sometimes take longer to complete.

To enable the least-connections balancing method, add the least_conn parameter to your upstream section as shown in the example below.
upstream backend {
    least_conn;
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}

Weight
One way to allocate traffic to servers with more precision is to assign a specific weight to certain machines. Nginx lets you attach a number to each server that sets the proportion of traffic it should receive: a server with weight=2 receives roughly twice as many requests as one with weight=1.

A load-balanced setup that includes server weights could look like this:
upstream backend {
    server backend1.example.com weight=1;
    server backend2.example.com weight=2;
    server backend3.example.com weight=4;
}

Hash
IP hash routes requests according to the client's IP address, sending a visitor back to the same VPS on each visit (unless that server is down). If a server is known to be inactive, it should be marked as down; all IPs that would have been routed to the down server are then directed to an alternate one.

The configuration below provides an example:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
}

Max Fails
With the default round-robin settings, nginx will keep sending requests to the virtual private servers even if they are not responding. Max fails can automatically prevent this by taking unresponsive servers out of rotation for a set amount of time.

There are two parameters associated with max fails: max_fails and fail_timeout. max_fails sets the maximum number of failed attempts to connect to a server that may occur before it is considered inactive. fail_timeout specifies how long the server is considered inoperative; once that time expires, new attempts to reach the server start again. The default timeout value is 10 seconds.

A sample configuration might look like this:
upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=15s;
    server backend2.example.com weight=2;
    server backend3.example.com weight=4;
}