Nginx with Python: Load Balancing, Proxies, and Authentication in One Place

Obi kastanya
8 min read · Jul 10, 2023


What is Nginx and why is it so popular?

Nginx is an open-source web server designed to handle high traffic. It was originally created to address the ‘C10K’ problem: the challenge of handling around 10,000 simultaneous connections on a single server.

Apart from its high performance, Nginx is also known for its flexibility. We can use it as a web server or a proxy server, and it is a popular choice for load balancers, reverse proxies, and API gateways.

What are we going to do?

In this tutorial, we are going to create a simple microservices architecture where Nginx will operate as the API Gateway.

To improve the functionality of our API gateway, we will implement an authentication mechanism in Nginx using an NJS script.

This is what the app will look like at the end of this tutorial:

Nginx with Python: API Gateway with Load balancer and Auth

In this tutorial, I will use Python to build the services. So if you are a Python developer working on web applications, this article should be a good fit for you.

Prerequisites:

These are the tools you need to install before you follow this tutorial.

  • Docker
  • Postman

We are going to build everything inside containers, so you need a basic understanding of Docker. If you don't have one yet, you can read my Docker article: Flask and Docker: An Intro for Containerization.

Nginx Basics

Let's try to install Nginx and do some basic configuration.

Create a folder named api_gateway_with_auth.

Create the following files.

  • nginx.conf
  • docker-compose.yaml
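To make this step concrete, here is a minimal sketch of what docker-compose.yaml could contain, assuming we mount nginx.conf into the official Nginx image. The volume mount and image tag are assumptions; the 8881:80 port mapping matches what we will check in the browser below. The contents of nginx.conf itself are explained later in this section.

services:
  nginx:
    image: nginx:latest
    ports:
      - "8881:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro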

Make sure you have Docker running on your computer. In my case, I used Docker Desktop on Windows.

Open a command line and run the following command.

docker compose up -d

It will pull the Nginx image from Docker Hub and create our Nginx container.

To check if the container is running well, run the following command.

docker ps -a

It will show you all the available containers.

As you can see, our Nginx container is running on port 80, which is forwarded to port 8881 on our machine.

Let’s check it from the browser.

  • localhost:8881/
  • localhost:8881/login
  • localhost:8881/product

Now let’s talk about the Nginx configuration.

Nginx configuration is composed of directives and contexts. A directive is a single configuration instruction: a name followed by its parameters. For example:

server_name localhost;

Meanwhile, a context is a block that groups related directives. For example:

http {
...
}

That’s called the http context. Inside a context, we can add our directives.

In our basic Nginx configuration, we use the events context.

events {
    worker_connections 1024;
}

The events context determines how Nginx will handle connections. This context is required; you must include it in your configuration.

Next, we use the http context to define our server. Here, we listen on localhost:80 and create some API endpoints using location blocks.

http {
    server {
        listen 80;
        server_name localhost;

        location / {
            return 200 '{"message":"Hello world"}';
        }

        location /product {
            return 200 '{"message":"OK", "data":{"id":"1","name":"iPhone 9"}}';
        }

        location /login {
            return 200 '{"message":"OK", "token":"jdasahfas.dshakfdah.hadkfaskh"}';
        }
    }
}

That's a basic configuration for serving RESTful API responses. Let’s continue and make it more complex.

Proxy.

A proxy is a server that sits between the client and our actual server. It adds a new layer of abstraction on top of our application.

With a proxy, we can process a request before it reaches our actual server, or modify a response before it is sent back to the client.

For example, we could rewrite/redirect the URLs, add headers, add security checks, etc.

In a microservices architecture, each service has its own host or location. It’s impractical for the client to keep track of the location of every API.

Furthermore, if an API location changes, every client would have to be updated. That’s very inefficient.

The simplest solution for that case is to create an API Gateway that works as a reverse proxy. Each service is registered with the API gateway.

The client will only communicate with the API Gateway, and the API Gateway will forward each request to the actual service.

So if a service's location changes, we only have to update the API Gateway.

Now let’s code.

Here we are going to create a simple microservices architecture with two services: an auth service and a product service. Then we will register them with the API Gateway.

Product service.

Create a new folder named app_product.

Create the following files.

  • requirements.txt
  • Dockerfile
  • main.py
  • public_api.py
  • protected_api.py
  • middleware.py
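As a rough sketch of how these pieces could fit together, assuming Flask (the blueprint layout and port 5000 are assumptions, not the exact code), main.py and public_api.py might look like this:

# main.py - wires the public and protected blueprints into one Flask app.
# Flask, the blueprint names, and port 5000 are assumptions for illustration.
from flask import Flask

from public_api import public_api
from protected_api import protected_api

app = Flask(__name__)
app.register_blueprint(public_api)
app.register_blueprint(protected_api)

if __name__ == "__main__":
    # Listen on all interfaces so the port can be published from the container.
    app.run(host="0.0.0.0", port=5000)

# public_api.py - a public product endpoint.
from flask import Blueprint, jsonify

public_api = Blueprint("public_api", __name__)

@public_api.route("/product")
def get_product():
    return jsonify({"message": "OK", "data": {"id": "1", "name": "iPhone 9"}})

Under these assumptions, requirements.txt would list flask, and the Dockerfile would simply install the requirements and run main.py.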

Auth Service.

Next, let’s create a new folder for auth service.

Add the following files.

  • Dockerfile
  • requirements.txt
  • main.py
  • public_api.py
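A minimal sketch of the login endpoint, again assuming Flask (the request format, credential check, and hard-coded token are placeholders, not the exact implementation):

# public_api.py - a sketch of the auth service's login endpoint.
from flask import Blueprint, request, jsonify

public_api = Blueprint("public_api", __name__)

@public_api.route("/login", methods=["POST"])
def login():
    credentials = request.get_json(silent=True) or {}
    # A real service would validate against a user store and issue a signed
    # token; here we only sketch the response shape used in this tutorial.
    if credentials.get("username") and credentials.get("password"):
        return jsonify({"message": "OK", "token": "jdasahfas.dshakfdah.hadkfaskh"})
    return jsonify({"message": "Invalid credentials"}), 401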

Also don't forget to modify the docker-compose.yaml and Nginx configuration files.

  • nginx.conf
  • docker-compose.yaml
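As a sketch of the reverse-proxy part of nginx.conf (the service hostnames and port 5000 are assumptions based on typical compose service names; any additional endpoints would be proxied the same way):

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;

        # Forward auth traffic to the auth service container.
        location /login {
            proxy_pass http://app_auth:5000;
        }

        # Forward product traffic to the product service container.
        location /product {
            proxy_pass http://app_product:5000;
        }
    }
}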

Now it’s time to run the program.

Run this command to build the product service and auth service images.

docker compose build

Then create the containers.

docker compose up -d --force-recreate

The '--force-recreate' flag will force Docker to recreate the containers if they already exist.

To check if the program is working well, open Postman and create some requests.

  • login
  • get token information
  • get product
  • get permissions

Perfect.

Now we can access all our services through our API Gateway. If there is a new service, all we have to do is update the API Gateway.

Load Balancer.

We have already implemented Nginx as an API Gateway and reverse proxy. That’s one of the most common uses of Nginx.

Besides that, Nginx can also be used as a load balancer.

Imagine that the product service is growing and receiving very high traffic. Sometimes the service goes down under the load.

To solve this issue, we have to scale horizontally. Let’s add a new instance of the product service.

So we will have 2 product services running.

All the traffic to the product service will be split across the two product service instances.

If one of the instances goes down, the other one will still be able to serve requests, so we can also maintain our application's availability.

Let’s implement the load balancer.

First, we need to modify the docker-compose.yaml. We need to add another instance of the product service.

  • docker-compose.yaml
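A sketch of what the relevant part of the compose file could look like (the build contexts and the auth service name are assumptions; the two product container names match the ones we will see in the responses):

services:
  nginx:
    image: nginx:latest
    ports:
      - "8881:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

  app_product_service_1:
    build: ./app_product
    container_name: app_product_service_1

  app_product_service_2:
    build: ./app_product
    container_name: app_product_service_2

  app_auth:
    build: ./app_auth
    container_name: app_auth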

Modify the Nginx configuration.

  • nginx.conf
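The key change is an upstream block inside the http context that lists both instances; location /product then proxies to the upstream name instead of a single host. A sketch (port 5000 and the upstream name are assumptions):

upstream product_service {
    # With no algorithm specified, Nginx distributes requests round robin.
    server app_product_service_1:5000;
    server app_product_service_2:5000;
}

server {
    listen 80;

    location /product {
        proxy_pass http://product_service;
    }
}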

Modify the protected_api.py and public_api.py in the app_product folder.

  • public_api.py
  • protected_api.py
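To tell which instance handled a request, the product responses can include the container's hostname. A sketch of that change (socket.gethostname() and the response field name are assumptions):

# public_api.py - include the serving instance's hostname in the response.
import socket

from flask import Blueprint, jsonify

public_api = Blueprint("public_api", __name__)

@public_api.route("/product")
def get_product():
    return jsonify({
        "message": "OK",
        # Container hostname; set `hostname:` in docker-compose for a friendly name.
        "served_by": socket.gethostname(),
        "data": {"id": "1", "name": "iPhone 9"},
    })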

Rebuild the images and the containers.

docker compose build
docker compose up -d --force-recreate

Check the API to make sure that the load balancer is working.

As you can see, my first request is served by app_product_service_1 and the second one is served by app_product_service_2.

Nginx supports several load-balancing algorithms, such as IP Hash, Least Connections, Least Time, and Round Robin.

If we don't specify an algorithm, Nginx uses Round Robin by default, which is what we did here.

Now, what happens if one of our services goes down?

Run this command to stop app_product_service_1.

docker container stop app_product_service_1

This is what happens.

Nginx will try to send requests to app_product_service_1 until the fail_timeout is reached.

If Nginx doesn't receive a response, it will forward the request to app_product_service_2 instead.

After that, all requests will be sent to app_product_service_2.
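This failover behavior is controlled per upstream server by the max_fails and fail_timeout parameters; the values below are examples, not this tutorial's exact settings:

upstream product_service {
    # After 3 failed attempts, the server is considered unavailable for 30 seconds.
    server app_product_service_1:5000 max_fails=3 fail_timeout=30s;
    server app_product_service_2:5000 max_fails=3 fail_timeout=30s;
}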

Now our users won't notice that one of our services has gone down.

Perfect.

Authentication.

Another challenge in microservices architectures is authentication and authorization. Without a central place for them, we must implement authentication in every service, and if the access rules change, we must update every service.

By implementing an API gateway, we can solve this issue: we can implement the authentication on the API Gateway itself.

The API gateway will reject every unauthorized request. Now each service can focus on its own business logic.

Then, what about the authorization?

Each API or resource may have its own policy for deciding whether a user is allowed to access it.

Instead of writing all the authorization rules in the API Gateway, it’s a better approach to let each service determine whether the user is allowed or not.

The API Gateway will handle the authentication. If it succeeds, the gateway will forward the request along with some user data to the target service, so the service can handle authorization by itself.

Let's code.

Update the Nginx configuration.

  • nginx.conf
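A sketch of how NJS-based authentication can be wired into the gateway with the auth_request directive (the module path, import name, internal location, and ports are assumptions under the same setup as before):

load_module modules/ngx_http_js_module.so;

events {
    worker_connections 1024;
}

http {
    js_import auth from app/auth.js;

    upstream product_service {
        server app_product_service_1:5000;
        server app_product_service_2:5000;
    }

    server {
        listen 80;

        # Internal location that runs the JavaScript auth check as a subrequest.
        location /_auth {
            internal;
            js_content auth.verify;
        }

        location /login {
            proxy_pass http://app_auth:5000;
        }

        location /product {
            auth_request /_auth;   # reject the request unless /_auth returns 2xx
            proxy_pass http://product_service;
        }
    }
}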

In the nginx folder, create a new folder named app.

Add the following files.

  • Dockerfile
  • auth.js
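A minimal sketch of what the NJS handler could look like; the header name and the placeholder token check are assumptions, and a real gateway would verify a signed token or call the auth service instead:

// auth.js - a sketch of an NJS handler used with auth_request.
function verify(r) {
    var authHeader = r.headersIn['Authorization'] || '';

    if (!authHeader.startsWith('Bearer ')) {
        r.return(401, JSON.stringify({message: 'Missing token'}));
        return;
    }

    var token = authHeader.slice(7);

    // Placeholder check; decode and verify the token here in a real gateway.
    if (token.length === 0) {
        r.return(401, JSON.stringify({message: 'Invalid token'}));
        return;
    }

    // Expose data to the gateway so it can be copied onto the proxied request
    // (for example with auth_request_set and proxy_set_header).
    r.headersOut['X-User-Token'] = token;
    r.return(200);
}

export default { verify };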

Don't forget to update the product service.

  • middleware.py
  • protected_api.py
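Inside the product service, the middleware can read the user data forwarded by the gateway and decide whether the request is authorized. A sketch, assuming the gateway forwards a header named X-User-Token (the header name and the check are assumptions):

# middleware.py - per-service authorization in the product service.
from functools import wraps

from flask import request, jsonify

def require_user(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        # The API Gateway is expected to forward user data after authentication.
        user = request.headers.get("X-User-Token")
        if not user:
            return jsonify({"message": "Unauthorized"}), 401
        return view(*args, **kwargs)
    return wrapper

protected_api.py can then decorate its routes with @require_user.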

Also, update the docker-compose.yaml file.

Rebuild the images and recreate the containers.

docker compose build
docker compose up -d --force-recreate

Try to log in as gues.

Try to access product data using the ‘gues’ token.

Try to access public API.

Perfect.

We have implemented authentication in Nginx using the NJS module. NJS is a module that ships with Nginx and allows you to use JavaScript to extend Nginx's functionality.

With JavaScript, we are able to handle incoming requests with more complex logic. If you want to know more about NJS, you can read the official documentation here.

Besides JavaScript, Nginx also supports the Lua programming language. But my personal recommendation is to use JavaScript, since it's a more popular language with a bigger community.

Another thing to consider is that Lua is not a built-in module in Nginx, so you need to install it yourself, or you could use OpenResty instead. OpenResty is a web platform built on top of Nginx and Lua.

That’s all.

If you want to take a look at the full source code, visit my GitHub account here.

Thanks for reading my articles.

Leave a clap if it's useful.
