
How to use internal redirects in NGINX

Peter Bex

NGINX is a popular and flexible web server and proxy. It has a few tricks up its sleeve which are worth knowing. Today I'll explain how and why to use internal redirects.

Internal redirects for efficiency

In almost every web application, you would like to serve static files to your users. Some of those files are available only to certain users, so all requests must go through your web application. If you were to expose the content directly through NGINX, it would be available to anonymous users. You probably want to avoid that, especially if the files are confidential!

One option is to let your web application read the files and serve them to your users. This works fine, but it has several disadvantages, depending on your web framework (a sketch of this naive approach follows the list):

  • Done naively, the file is read into memory and then served. If the files are large, this could cause your server to run out of memory.
  • Caching headers are often not set correctly. This causes web browsers to re-download the file multiple times even if it hasn't changed.
  • HEAD requests and range requests are typically not supported out of the box.
  • Serving large files ties up a worker process or thread for the duration of the download. This can lead to starvation if only a limited number of workers is available, and increasing the number of workers can cause your server to run out of memory.
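
For illustration, here is a minimal, hypothetical sketch of that naive approach in Django (the view name and directory are made up, and real code would also have to guard against path traversal):

from django.http import HttpResponse
from django.core.exceptions import PermissionDenied

def naive_download(request, path):
    # Naive sketch: the whole file is read into memory before a single byte
    # reaches the client; no caching headers are set and HEAD/range requests
    # get no special treatment.
    if not request.user.is_authenticated:
        raise PermissionDenied()
    with open('/srv/hidden-files/' + path, 'rb') as f:
        data = f.read()
    return HttpResponse(data, content_type='application/octet-stream')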

NGINX handles all of these things properly. So let's handle permission checks in the application and let NGINX serve the actual file. This is where internal redirects come in. The idea is simple: you can configure a location entry as usual when serving regular files. Then you simply add the keyword internal to the configuration block to hide it from external requests. Here's a simple example where the web application server is running on port 8000:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8000/;
    }

    location /hidden-files/ {
        internal; # This tells nginx it's not accessible from the outside
        alias /srv/hidden-files/;
    }
}

The files are served from the directory /srv/hidden-files by the path prefix /hidden-files/. Pretty straightforward. The internal declaration tells NGINX that this path is accessible only through rewrites in the NGINX config, or via the X-Accel-Redirect header in proxied responses.

To use this, just return an empty response which contains that header. The content of the header should be the location you want to "redirect" to. Here's a concrete example of how to do that in a Django application:

from django.http import HttpResponse
from django.core.exceptions import PermissionDenied

def redirect_test(request, path):
    if request.user.is_authenticated:
        response = HttpResponse()
        response['X-Accel-Redirect'] = '/hidden-files/' + path
        return response
    else:
        raise PermissionDenied()

Add it to your urls.py like so:

from django.contrib import admin
from django.urls import path
from . import views

urlpatterns = [
    path('file/<path>', views.redirect_test),
    path('admin/', admin.site.urls),
]

With this, you can request http://localhost/file/my-hidden-file.txt which will serve up /srv/hidden-files/my-hidden-file.txt, but only if the user is currently logged in.
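
If you want to convince yourself that the internal location really is hidden, a quick check against such a local setup could look like this (a hypothetical snippet using the requests library; the file name is just an example):

import requests

# Requesting the internal location directly: NGINX refuses it with a 404
# because of the internal declaration.
r = requests.get('http://localhost/hidden-files/my-hidden-file.txt')
print(r.status_code)  # 404

# Going through the application route with an authenticated session cookie
# (how you obtain that cookie depends on your login setup) serves the file:
# session.get('http://localhost/file/my-hidden-file.txt')  # 200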

Internal redirects to hide credentials

Another use case for internal redirects in NGINX is hiding credentials. Often you need to make requests to third-party services, for example to send text messages or to access a paid maps server. It would be most efficient to send these requests directly from your JavaScript front end. However, doing so means you would have to embed an access token in the front end, which savvy users could extract and use to make requests on your account!

An easy fix is to make an endpoint in your back end which initiates the actual request, using an HTTP client library inside the back end. However, this will again tie up workers, especially if you expect a barrage of requests and the third-party service responds slowly.
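
To make the comparison concrete, such a naive proxy endpoint might look like the sketch below, using the third-party requests library (the endpoint, credentials and view name are placeholders). The worker stays occupied for the entire upstream round trip:

import requests  # third-party HTTP client, used only in this naive sketch

from django.http import HttpResponse
from django.core.exceptions import PermissionDenied

def external_api_naive(request):
    if not request.user.is_authenticated:
        raise PermissionDenied()
    # The worker blocks here until the third-party service has sent its whole
    # response, and the full body is held in memory before being passed on.
    upstream = requests.get('http://example.com/some-api/endpoint',
                            auth=('username', 'password'), timeout=30)
    return HttpResponse(upstream.content, status=upstream.status_code,
                        content_type=upstream.headers.get('Content-Type', 'text/plain'))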

To fix this, you could again use internal redirects. Here's another example NGINX config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8000/;
    }

    location /external-api/ {
        internal;
        set $redirect_uri "$upstream_http_redirect_uri";
        set $authorization "$upstream_http_authorization";

        proxy_buffering off; # For performance
        proxy_set_header Authorization $authorization; # Pass on secret from back end
        proxy_pass $redirect_uri; # Use URI determined by back end
    }
}

Setting proxy_buffering to off tells NGINX to pass the response directly back to the client as it arrives. Otherwise, it will first try to buffer the whole response in memory or on disk. I recommend turning buffering off if the upstream response can be large.

To use this, you set the external API's URI in the redirect_uri header. The Authorization header contains your username and password or access token. Again, here's a quick example of how to do that in Django:

import base64

from django.http import HttpResponse
from django.core.exceptions import PermissionDenied

def external_api(request):
    if request.user.is_authenticated:
        response = HttpResponse()
        response['X-Accel-Redirect'] = '/external-api/'
        response['redirect_uri'] = 'http://example.com/some-api/endpoint'
        # HTTP Basic auth: base64-encode "username:password" for the Authorization header
        credentials = base64.b64encode(b'username:password').decode('ascii')
        response['Authorization'] = 'Basic ' + credentials
        return response
    else:
        raise PermissionDenied()
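
As before, the view needs an entry in urls.py; one possible route, mirroring the earlier example (the path name here is just a suggestion), would be:

from django.contrib import admin
from django.urls import path
from . import views

urlpatterns = [
    path('file/<path>', views.redirect_test),
    path('proxy/external-api/', views.external_api),  # hypothetical route for the view above
    path('admin/', admin.site.urls),
]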

As you can see, using internal redirects with NGINX is not hard. It can make your applications substantially faster and more robust!