The Complete Nginx Guide: Installation, Configuration, Reverse Proxy, SSL, Load Balancing & Performance Tuning
Master Nginx from beginner to production: installation, server blocks, location directives, reverse proxy, load balancing, SSL/TLS, caching, rate limiting, security headers, WebSocket proxy, gzip compression, performance tuning, and common configuration patterns.
- Nginx is an event-driven, non-blocking web server — a single worker handles tens of thousands of concurrent connections
- Server blocks define virtual hosts; location blocks control URI routing and request handling
- proxy_pass enables reverse proxying; upstream blocks enable multi-backend load balancing
- Let's Encrypt + Certbot provides free, automated SSL/TLS certificates
- limit_req_zone controls rate limiting to prevent API abuse and DDoS
- Gzip compression reduces text resource size by 60-80%
- worker_processes auto + worker_connections 4096 is the starting point for high concurrency
- Nginx architecture: master process manages workers; each worker handles thousands of connections via epoll/kqueue
- Location priority: exact (=) > preferential prefix (^~) > regex (~ / ~*) > regular prefix (longest match)
- Production requires security headers: HSTS, CSP, X-Frame-Options, X-Content-Type-Options
- Load balancing strategies: round-robin (default), least_conn, ip_hash, weighted distribution
- WebSocket proxying requires proxy_http_version 1.1 and Upgrade/Connection headers
- Caching: proxy_cache_path + proxy_cache drastically reduces backend load
1. Installation & Getting Started
Nginx can be installed through your system package manager or the official repository. The official repo is recommended for the latest stable release.
Ubuntu / Debian Installation
# Install prerequisites
sudo apt update
sudo apt install -y curl gnupg2 ca-certificates lsb-release
# Add official Nginx signing key and repo
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list
# Install Nginx
sudo apt update
sudo apt install -y nginx
# Start and enable
sudo systemctl start nginx
sudo systemctl enable nginx
sudo systemctl status nginx
CentOS / RHEL / Rocky Installation
sudo yum install -y epel-release
sudo yum install -y nginx
sudo systemctl start nginx
sudo systemctl enable nginx
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
macOS (Homebrew)
brew install nginx
brew services start nginx
# Config: /opt/homebrew/etc/nginx/nginx.conf (Apple Silicon; /usr/local/etc/nginx/ on Intel)
# Default port: 8080
Key Directory Structure
| Path | Purpose |
|---|---|
| /etc/nginx/nginx.conf | Main config file |
| /etc/nginx/conf.d/ | Additional configs (recommended for server blocks) |
| /etc/nginx/sites-available/ | Available sites (Debian-style) |
| /etc/nginx/sites-enabled/ | Enabled sites (symlinks) |
| /var/log/nginx/ | Access and error logs |
| /var/www/html/ | Default web root |
# Essential Nginx commands
nginx -t # Test config syntax
sudo systemctl reload nginx # Reload without downtime
sudo systemctl restart nginx # Full restart
nginx -V # Show compile-time modules
nginx -T # Dump full merged config
2. Core Configuration Structure
Nginx configuration is organized into nested contexts: the main (top-level) context contains the events and http blocks; http contains server blocks; each server contains location blocks. Understanding this hierarchy is fundamental to correct configuration, since most directives are inherited from outer contexts.
# /etc/nginx/nginx.conf — Main configuration skeleton
user nginx;
worker_processes auto; # One worker per CPU core
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096; # Max connections per worker
multi_accept on; # Accept multiple connections at once
use epoll; # Linux: epoll, macOS: kqueue
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log main;
# Performance
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
# Include server blocks
include /etc/nginx/conf.d/*.conf;
}
3. Server Blocks (Virtual Hosts)
Server blocks allow hosting multiple domains on a single Nginx instance. Each server block matches requests based on server_name and listen port.
# /etc/nginx/conf.d/example.com.conf
server {
listen 80;
listen [::]:80; # IPv6
server_name example.com www.example.com;
root /var/www/example.com/public;
index index.html index.htm;
# Access & error logs per site
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
location / {
try_files $uri $uri/ =404;
}
# Custom error pages
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
WWW to Non-WWW Redirect
server {
listen 80;
server_name www.example.com;
return 301 $scheme://example.com$request_uri;
}
4. Location Directives Deep Dive
Location directives determine how Nginx handles different URIs. Understanding match priority is the key to avoiding routing issues.
| Modifier | Type | Priority | Description |
|---|---|---|---|
| = | Exact | 1 | Exact URI match, highest priority |
| ^~ | Preferential prefix | 2 | Stops regex checking if prefix matches |
| ~ | Case-sensitive regex | 3 | Regex match (case-sensitive) |
| ~* | Case-insensitive regex | 3 | Regex match (case-insensitive) |
| (none) | Prefix | 4 | Regular prefix match (longest wins) |
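Before the config examples, the matching order in the table can be sketched as a tiny resolver. This is a simplified model (it ignores nested and named locations, and nginx-specific regex details) but it follows the same priority rules:

```python
import re

def match_location(uri, locations):
    """Pick the location for a URI following nginx's documented priority.

    locations: list of (modifier, pattern) where modifier is one of
    '=', '^~', '~', '~*', or '' (plain prefix). Simplified model.
    """
    # 1. An exact match wins immediately.
    for mod, pat in locations:
        if mod == '=' and uri == pat:
            return (mod, pat)
    # 2. Find the longest matching prefix ('' or '^~').
    best = None
    for mod, pat in locations:
        if mod in ('', '^~') and uri.startswith(pat):
            if best is None or len(pat) > len(best[1]):
                best = (mod, pat)
    # 3. If the longest prefix used ^~, regexes are skipped entirely.
    if best and best[0] == '^~':
        return best
    # 4. Regexes are tried in configuration order; first match wins.
    for mod, pat in locations:
        if mod == '~' and re.search(pat, uri):
            return (mod, pat)
        if mod == '~*' and re.search(pat, uri, re.IGNORECASE):
            return (mod, pat)
    # 5. Otherwise, fall back to the longest plain prefix.
    return best

locs = [('=', '/'), ('^~', '/static/'), ('~', r'\.php$'),
        ('~*', r'\.(jpg|png)$'), ('', '/')]
print(match_location('/static/app.PNG', locs))  # ('^~', '/static/'): regex skipped
print(match_location('/img/photo.PNG', locs))   # the ~* regex wins
```

Running URIs through a model like this is a quick way to predict which block nginx will choose before reloading a config.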
# Location matching examples
server {
# 1. Exact match — highest priority
location = / {
# Only matches exactly "/"
return 200 "Homepage";
}
# 2. Preferential prefix — skips regex
location ^~ /static/ {
# Any URI starting with /static/
root /var/www/assets;
}
# 3. Case-sensitive regex
location ~ \.php$ {
# URIs ending in .php
fastcgi_pass unix:/run/php/php-fpm.sock;
}
# 4. Case-insensitive regex
location ~* \.(jpg|jpeg|png|gif|ico|svg)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
# 5. Regular prefix (fallback)
location / {
try_files $uri $uri/ /index.html;
}
}
5. Reverse Proxy Configuration
Reverse proxying is one of the most common uses of Nginx. It forwards client requests to backend application servers (Node.js, Python, Go, etc.) while handling SSL termination, static file serving, and load balancing.
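One detail worth being precise about is $proxy_add_x_forwarded_for: it appends $remote_addr to any X-Forwarded-For header the request already carries, so the backend sees the whole proxy chain. A small sketch of that behavior:

```python
def proxy_add_x_forwarded_for(existing, remote_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for variable: append the
    connecting client's address to the incoming X-Forwarded-For header,
    or start a new list if the header was absent."""
    return f"{existing}, {remote_addr}" if existing else remote_addr

# Direct client: header starts fresh
print(proxy_add_x_forwarded_for(None, "203.0.113.7"))            # 203.0.113.7
# Request already passed through another proxy: address is appended
print(proxy_add_x_forwarded_for("198.51.100.1", "203.0.113.7"))  # 198.51.100.1, 203.0.113.7
```

Because clients can send a forged X-Forwarded-For, backends should only trust the rightmost entries added by proxies they control.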
# Reverse proxy to a Node.js / Next.js app
server {
listen 80;
server_name myapp.com;
# Proxy all requests to backend
location / {
proxy_pass http://127.0.0.1:3000;
# Preserve original client info
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 16k;
}
# Serve static files directly (bypass proxy)
location /static/ {
alias /var/www/myapp/static/;
expires 30d;
add_header Cache-Control "public, immutable";
}
}
6. Load Balancing
Nginx distributes traffic across multiple backend servers via upstream blocks, enabling high availability and horizontal scaling.
| Strategy | Directive | Use Case |
|---|---|---|
| Round Robin (default) | — | Backends with equal capacity |
| Least Connections | least_conn | Varying request processing times |
| IP Hash | ip_hash | Session persistence required |
| Weighted | weight=N | Backends with different capacities |
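The weighted shares in the table can be sanity-checked with a short simulation. Note that real nginx uses a smooth weighted round-robin so consecutive requests are interleaved rather than batched, but the long-run traffic split is the same:

```python
import itertools
from collections import Counter

def weighted_round_robin(servers):
    """Yield backends in proportion to their weights.

    servers: list of (name, weight). Simple expansion model; nginx's
    smooth weighted round-robin interleaves differently but produces
    the same long-run shares.
    """
    pool = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(pool)

backends = [('10.0.0.1:3000', 3), ('10.0.0.2:3000', 2), ('10.0.0.3:3000', 1)]
rr = weighted_round_robin(backends)
tally = Counter(next(rr) for _ in range(600))
print(tally)  # 300 / 200 / 100 requests: a 3:2:1 split, matching the weights
```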
# Load balancing with passive health checks
upstream backend {
least_conn; # Use least connections strategy
# max_fails: failed attempts before a server is marked unavailable
# fail_timeout: how long it then stays marked unavailable
server 10.0.0.1:3000 weight=3 max_fails=3 fail_timeout=30s; # 3x traffic share
server 10.0.0.2:3000 weight=2 max_fails=3 fail_timeout=30s; # 2x traffic share
server 10.0.0.3:3000 weight=1 max_fails=3 fail_timeout=30s; # 1x traffic share
server 10.0.0.4:3000 backup; # Only used when the others fail
}
server {
listen 80;
server_name myapp.com;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_next_upstream error timeout http_500 http_502 http_503;
proxy_next_upstream_tries 3;
}
}
7. SSL/TLS Configuration
Modern websites require HTTPS. Let's Encrypt provides free certificates, and Certbot automates certificate acquisition and renewal.
Certbot Automatic Setup
# Install Certbot
sudo apt install -y certbot python3-certbot-nginx
# Obtain certificate and auto-configure Nginx
sudo certbot --nginx -d example.com -d www.example.com
# Test auto-renewal
sudo certbot renew --dry-run
# Certificates stored at:
# /etc/letsencrypt/live/example.com/fullchain.pem
# /etc/letsencrypt/live/example.com/privkey.pem
Production-Grade SSL Config
server {
listen 443 ssl http2; # on nginx 1.25.1+, prefer "listen 443 ssl;" plus "http2 on;"
listen [::]:443 ssl http2;
server_name example.com;
# Certificate files
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Modern TLS settings
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
# SSL session cache
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# ... your location blocks
}
# HTTP to HTTPS redirect
server {
listen 80;
server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}
8. Caching Configuration
Nginx provides two caching mechanisms: browser caching for static files (expires directive) and reverse proxy caching (proxy_cache). Proper caching significantly reduces backend load and response latency.
Static File Browser Caching
# Cache static assets with long expiration
location ~* \.(css|js|woff2|woff|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|avif)$ {
expires 30d;
add_header Cache-Control "public";
access_log off;
}
# HTML files — no cache or short cache
location ~* \.html$ {
expires 1h;
add_header Cache-Control "public, must-revalidate";
}
Reverse Proxy Cache
# Define cache zone in http block
http {
proxy_cache_path /var/cache/nginx levels=1:2
keys_zone=app_cache:10m
max_size=1g
inactive=60m
use_temp_path=off;
}
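For intuition about what levels=1:2 does on disk: nginx names each cache file after the MD5 of the cache key (by default $scheme$proxy_host$request_uri) and builds subdirectories from the end of that hash. A sketch of the layout (the key string below is illustrative):

```python
import hashlib
import os.path

def cache_file_path(cache_dir, key, levels=(1, 2)):
    """Reproduce how nginx lays out proxy_cache files on disk.

    The file name is the hex MD5 of the cache key; 'levels=1:2' takes
    1 hex char, then the 2 preceding chars, from the END of the hash
    as nested subdirectory names.
    """
    h = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(h)
    for n in levels:
        parts.append(h[pos - n:pos])  # slice characters from the end
        pos -= n
    return os.path.join(cache_dir, *parts, h)

# Illustrative key: $scheme=http, $proxy_host=backend, $request_uri=/api/items
key = "httpbackend/api/items"
print(cache_file_path("/var/cache/nginx", key))
```

The two directory levels keep any single directory from accumulating millions of files, which matters for filesystem performance on large caches.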
# Use cache in server/location
server {
location / {
proxy_pass http://backend;
proxy_cache app_cache;
proxy_cache_valid 200 302 10m; # Cache 200/302 for 10 min
proxy_cache_valid 404 1m; # Cache 404 for 1 min
proxy_cache_use_stale error timeout updating
http_500 http_502 http_503;
proxy_cache_lock on; # Prevent thundering herd
# Show cache status in response header
add_header X-Cache-Status $upstream_cache_status;
}
# Bypass cache for authenticated users
location /api/ {
proxy_pass http://backend;
proxy_cache app_cache;
proxy_cache_bypass $http_authorization;
proxy_no_cache $http_authorization;
}
}
9. Rate Limiting
Rate limiting protects backends from traffic spikes, brute-force attacks, and API abuse. Nginx uses a leaky bucket algorithm to smooth request rates.
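Before the config, here is the leaky bucket in miniature. This is a simplified model of limit_req with nodelay, not the exact nginx implementation, but it shows how rate and burst interact:

```python
def limit_req(arrivals, rate, burst):
    """Simplified model of nginx's limit_req (nodelay case).

    Each request adds 1 to a bucket that drains at `rate` per second;
    a request arriving while more than `burst` excess is queued is
    rejected (nginx returns 503, or the limit_req_status you set).
    arrivals: sorted request timestamps in seconds.
    """
    excess, last, out = 0.0, None, []
    for t in arrivals:
        if last is not None:
            excess = max(0.0, excess - (t - last) * rate)  # bucket leaks
        last = t
        if excess <= burst:        # room for this request plus the queue
            excess += 1
            out.append('ok')
        else:
            out.append('rejected')
    return out

# A spike of 6 simultaneous requests at 10 r/s with burst=3:
print(limit_req([0, 0, 0, 0, 0, 0], rate=10, burst=3))
# 4 accepted (the request itself plus a burst of 3), 2 rejected.
# Half a second later the bucket has fully drained, so requests pass again:
print(limit_req([0, 0, 0, 0, 0, 0, 0.5], rate=10, burst=3)[-1])
```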
# Define rate limit zones in http block
http {
# 10 requests/sec per IP for general traffic
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
# 5 requests/min for login endpoints
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
# 30 requests/sec per IP for API
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
# Connection limiting
limit_conn_zone $binary_remote_addr zone=perip:10m;
}
server {
# General rate limit
location / {
limit_req zone=general burst=20 nodelay;
limit_req_status 429;
proxy_pass http://backend;
}
# Strict rate limit on auth endpoints
location /api/login {
limit_req zone=login burst=3 nodelay;
limit_req_status 429;
proxy_pass http://backend;
}
# Connection limit (max 10 simultaneous per IP)
location /downloads/ {
limit_conn perip 10;
limit_rate 500k; # Throttle to 500KB/s per connection
alias /var/www/files/;
}
}
10. Security Headers
Security response headers are a critical defense against XSS, clickjacking, and protocol downgrade attacks. These headers should be set in all production environments.
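As a quick audit, here is a small helper (hypothetical, not part of nginx) that reports which of the headers configured below are missing from a set of response headers, e.g. ones captured with curl -I:

```python
# Recommended header names, taken from the configuration in this section.
RECOMMENDED = {
    "Strict-Transport-Security",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Content-Security-Policy",
}

def missing_security_headers(headers):
    """Return the recommended security headers absent from a response.

    `headers` maps response header names to values. Comparison is
    case-insensitive, since HTTP header names are case-insensitive.
    """
    present = {name.lower() for name in headers}
    return sorted(h for h in RECOMMENDED if h.lower() not in present)

resp = {"Content-Type": "text/html", "X-Frame-Options": "SAMEORIGIN"}
print(missing_security_headers(resp))  # the four headers still to add
```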
# Security headers — add to server or http block
# HSTS: Force HTTPS for 1 year, include subdomains
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# Prevent clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;
# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;
# XSS protection (legacy browsers)
add_header X-XSS-Protection "1; mode=block" always;
# Referrer policy
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Permissions policy (restrict browser features)
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# Content Security Policy (customize per site)
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com;" always;
# Hide Nginx version
server_tokens off;
11. WebSocket Proxy
WebSocket connections are established via the HTTP Upgrade mechanism. Nginx must explicitly pass Upgrade and Connection headers to correctly proxy WebSocket traffic.
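For context on what those headers negotiate: per RFC 6455, the server answers the client's Sec-WebSocket-Key by hashing it together with a fixed GUID and returning the result as Sec-WebSocket-Accept, proving it understood the upgrade:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key):
    """Compute the Sec-WebSocket-Accept value for a client's
    Sec-WebSocket-Key: base64(SHA-1(key + GUID))."""
    digest = hashlib.sha1((client_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The example key/accept pair given in RFC 6455 itself:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Nginx does not compute this itself; it simply relays the Upgrade/Connection headers so client and backend can complete this handshake end to end.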
# WebSocket proxy configuration
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl http2;
server_name ws.example.com;
location /ws {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
# Required for WebSocket
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# Preserve client info
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Keep connection alive (default 60s may be too short)
proxy_read_timeout 86400s; # 24 hours
proxy_send_timeout 86400s;
}
# Socket.IO specific path
location /socket.io/ {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
}
}
12. Gzip Compression
Gzip compression reduces text resource transfer size (HTML, CSS, JS, JSON) by 60-80%, significantly improving page load speed. Configure globally in the http block.
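The 60-80% figure is easy to sanity-check with Python's gzip module. Repetitive markup like the sample below is close to the best case; real pages vary:

```python
import gzip

# Repetitive HTML compresses extremely well; real pages land lower.
html = ("<div class='row'><span class='cell'>value</span></div>\n" * 200).encode()

compressed = gzip.compress(html, compresslevel=5)  # matches gzip_comp_level 5
ratio = 100 * (1 - len(compressed) / len(html))
print(f"{len(html)} -> {len(compressed)} bytes ({ratio:.0f}% smaller)")
```

This is also why gzip_min_length matters: for tiny responses the gzip header overhead can outweigh any savings.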
http {
# Enable gzip
gzip on;
gzip_vary on; # Add Vary: Accept-Encoding header
gzip_proxied any; # Compress for proxied requests too
gzip_comp_level 5; # Balance: 1=fast/low, 9=slow/high
gzip_min_length 1000; # Skip files < 1KB
gzip_http_version 1.1;
# MIME types to compress
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml
application/xml+rss
application/atom+xml
image/svg+xml;
# Note: woff/woff2 are deliberately omitted - those font formats are already compressed
# Disable gzip for old IE
gzip_disable "msie6";
}
13. Performance Tuning
Under high concurrency, tuning Nginx worker parameters, buffer sizes, and timeout settings is crucial. Below are production-proven tuning parameters.
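As a rough rule of thumb before diving into directives: the concurrency ceiling is about worker_processes times worker_connections, halved when proxying because each client request also holds an upstream connection. A sketch of that estimate (it ignores file descriptor limits and keepalive effects):

```python
def max_clients(worker_processes, worker_connections, reverse_proxy=True):
    """Back-of-the-envelope concurrency estimate.

    Every client connection consumes one worker connection; when nginx
    also proxies to a backend, each in-flight request holds a second
    (upstream) connection, halving effective client capacity.
    """
    per_worker = worker_connections // (2 if reverse_proxy else 1)
    return worker_processes * per_worker

print(max_clients(4, 4096))                       # 8192 concurrent proxied clients
print(max_clients(4, 4096, reverse_proxy=False))  # 16384 static-file clients
```

This is also why worker_rlimit_nofile must be raised alongside worker_connections: each connection needs a file descriptor.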
# ---- Main context ----
worker_processes auto; # Match CPU cores
worker_rlimit_nofile 65535; # Max open file descriptors
# ---- Events context ----
events {
worker_connections 4096; # Increase for high traffic
multi_accept on; # Accept all pending connections
use epoll; # Linux: epoll; macOS: kqueue
}
# ---- HTTP context ----
http {
# Sendfile: kernel-level file transfer
sendfile on;
tcp_nopush on; # Optimize packet size
tcp_nodelay on; # Disable Nagle's algorithm
# Keepalive
keepalive_timeout 65;
keepalive_requests 1000; # Max requests per keepalive
# Client body / headers
client_max_body_size 50m; # Max upload size
client_body_buffer_size 128k;
client_header_buffer_size 1k;
large_client_header_buffers 4 16k;
# Proxy buffers (for reverse proxy)
proxy_buffer_size 4k;
proxy_buffers 8 32k;
proxy_busy_buffers_size 64k;
# Open file cache
open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
System Kernel Tuning
# /etc/sysctl.conf — Linux kernel tuning for Nginx
# Increase max open files
fs.file-max = 2097152
# Network tuning
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
# TCP keepalive
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5
# Reuse sockets
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
# Apply: sudo sysctl -p
14. Common Configuration Patterns
SPA (Single Page App) Deployment
server {
listen 443 ssl http2;
server_name spa.example.com;
root /var/www/spa/dist;
# All routes fall back to index.html
location / {
try_files $uri $uri/ /index.html;
}
# Cache hashed static assets forever
location /assets/ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# API proxy
location /api/ {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
WordPress / PHP-FPM
server {
listen 443 ssl http2;
server_name blog.example.com;
root /var/www/wordpress;
index index.php index.html;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php/php8.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
fastcgi_read_timeout 300;
}
# Block access to sensitive files
location ~ /\.(ht|git|env) {
deny all;
}
location = /xmlrpc.php {
deny all;
}
}
Next.js / PM2 Deployment
upstream nextjs {
server 127.0.0.1:3000;
keepalive 64;
}
server {
listen 443 ssl http2;
server_name nextapp.example.com;
# Next.js static assets (/_next/static/)
location /_next/static/ {
alias /var/www/nextapp/.next/static/;
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Public folder static files
location /static/ {
alias /var/www/nextapp/public/static/;
expires 30d;
}
# Proxy everything else to Next.js
location / {
proxy_pass http://nextjs;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
CORS Configuration
location /api/ {
# CORS headers
add_header Access-Control-Allow-Origin "https://frontend.example.com" always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type, X-Requested-With" always;
add_header Access-Control-Allow-Credentials "true" always;
add_header Access-Control-Max-Age 86400 always;
# Handle preflight requests
if ($request_method = OPTIONS) {
return 204;
}
proxy_pass http://backend;
}
Basic Authentication
# Create password file
# sudo apt install apache2-utils
# sudo htpasswd -c /etc/nginx/.htpasswd admin
location /admin/ {
auth_basic "Restricted Area";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_pass http://backend;
}
# Whitelist IPs (skip auth for trusted IPs)
location /admin/ {
satisfy any;
allow 10.0.0.0/8;
deny all;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}
Custom Log Format & Conditional Logging
http {
# JSON log format for log aggregation (ELK, Loki)
log_format json_log escape=json
'{"time":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"method":"$request_method",'
'"uri":"$request_uri",'
'"status":$status,'
'"body_bytes":$body_bytes_sent,'
'"request_time":$request_time,'
'"upstream_time":"$upstream_response_time",'
'"user_agent":"$http_user_agent"}';
# Skip logging for health checks and static assets
map $request_uri $loggable {
~*\.(?:css|js|ico|png|jpg|svg|woff2)$ 0;
/health 0;
default 1;
}
access_log /var/log/nginx/access.json json_log if=$loggable;
}
15. Complete Production Configuration Template
Below is a complete production template combining all best practices: SSL, security headers, caching, gzip, rate limiting, and reverse proxy.
# /etc/nginx/conf.d/production.conf
# Complete production-ready configuration
upstream app_backend {
least_conn;
server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
keepalive 64;
}
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
# HTTP → HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}
# WWW → non-WWW redirect
server {
listen 443 ssl http2;
server_name www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
return 301 https://example.com$request_uri;
}
# Main server
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com;
# SSL
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s; # stapling needs a resolver to fetch OCSP responses
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
server_tokens off;
# Logging
access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log warn;
# Static assets
location ~* \.(css|js|woff2|woff|ttf|svg|ico)$ {
root /var/www/example.com/public;
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(jpg|jpeg|png|gif|webp|avif)$ {
root /var/www/example.com/public;
expires 30d;
access_log off;
}
# API with rate limit
location /api/ {
limit_req zone=api burst=50 nodelay;
limit_req_status 429;
proxy_pass http://app_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Default proxy
location / {
limit_req zone=general burst=20 nodelay;
proxy_pass http://app_backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Block hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
16. Nginx vs Apache vs Caddy Comparison
| Feature | Nginx | Apache | Caddy |
|---|---|---|---|
| Architecture | Event-driven, async | Process/thread model | Go goroutines |
| Memory Usage | Very low | Higher | Low |
| Auto HTTPS | Needs Certbot | Needs Certbot | Built-in automatic |
| .htaccess | Not supported | Supported (perf overhead) | Not supported |
| Config Syntax | Custom directives (concise) | Directives with XML-like sections | Caddyfile (minimal) |
| Best For | Reverse proxy, high concurrency | Shared hosting, .htaccess | Quick deploy, auto TLS |
17. Troubleshooting
# Check config syntax before reload
nginx -t
# View error log in real time
tail -f /var/log/nginx/error.log
# Check if Nginx is listening
ss -tlnp | grep nginx
# Test specific endpoints
curl -I https://example.com
curl -vk https://example.com 2>&1 | grep -i "ssl\|tls\|subject"
# Check worker processes
ps aux | grep nginx
# Dump full merged config (includes all includes)
nginx -T
# Check file permissions for web root
namei -l /var/www/example.com/public/index.html
# Common issues:
# 502 Bad Gateway → Backend is down or proxy_pass target unreachable
# 504 Gateway Timeout → Backend too slow, increase proxy_read_timeout
# 413 Entity Too Large → Increase client_max_body_size
# 403 Forbidden → Check file permissions and SELinux/AppArmor
# Permission denied on socket → Check nginx user and socket file perms
Nginx is a cornerstone of modern web infrastructure. From simple static file serving to complex microservice reverse proxying and load balancing, mastering the configuration patterns in this guide will help you build high-performance, secure, and scalable production environments. Start with a basic configuration, incrementally add SSL, caching, rate limiting, and security headers, and always validate changes with nginx -t before reloading.
Frequently Asked Questions
What is the difference between Nginx and Apache?
Nginx uses an asynchronous, event-driven architecture that handles thousands of concurrent connections in a single worker process with minimal memory. Apache uses a process/thread-per-connection model by default (prefork/worker MPMs). Nginx excels at serving static files and reverse proxying, while Apache offers more flexible per-directory configuration via .htaccess files. For modern high-traffic sites, Nginx is generally preferred for its lower resource usage and higher concurrency.
How do I set up a reverse proxy with Nginx?
Use the proxy_pass directive inside a location block: location / { proxy_pass http://127.0.0.1:3000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; }. This forwards all requests to a backend running on port 3000 while preserving the original client information through headers.
How do I configure SSL/TLS with Nginx and Let's Encrypt?
Install Certbot (apt install certbot python3-certbot-nginx), then run certbot --nginx -d yourdomain.com. Certbot automatically obtains a certificate and modifies your Nginx config to listen on port 443 with ssl_certificate and ssl_certificate_key directives. For best security, add ssl_protocols TLSv1.2 TLSv1.3, ssl_prefer_server_ciphers on, and enable HSTS. Certbot sets up auto-renewal via a systemd timer or cron job.
What is the difference between location /, location =, location ~, and location ^~ in Nginx?
location / is a prefix match (matches any URI starting with /). location = /path is an exact match with highest priority. location ~ pattern is a case-sensitive regex match, and location ~* is case-insensitive. location ^~ /path is a prefix match that stops regex checking if matched. Priority order: exact (=) > preferential prefix (^~) > regex (~ / ~*) > regular prefix (longest match). Understanding this order is crucial for correct routing.
How do I configure Nginx load balancing across multiple backends?
Define an upstream block with your backend servers: upstream myapp { server 127.0.0.1:3001; server 127.0.0.1:3002; server 127.0.0.1:3003; }. Then proxy_pass to it: proxy_pass http://myapp;. Nginx supports round-robin (default), least_conn (fewest active connections), ip_hash (sticky sessions by client IP), and weighted distribution (server host weight=3). Add max_fails and fail_timeout for health checking.
How do I enable gzip compression in Nginx?
Add gzip on; in the http block, then configure: gzip_types text/plain text/css application/json application/javascript text/xml application/xml text/javascript; gzip_min_length 1000; gzip_comp_level 5; gzip_vary on;. This compresses responses over 1KB for the specified MIME types. Level 5 balances CPU usage and compression ratio. Always set gzip_vary on so CDNs cache compressed and uncompressed versions separately.
How do I proxy WebSocket connections through Nginx?
WebSocket requires HTTP Upgrade headers. Add: location /ws { proxy_pass http://backend; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_read_timeout 86400s; }. The key directives are proxy_http_version 1.1 and the Upgrade/Connection headers. Increase proxy_read_timeout to prevent Nginx from closing idle WebSocket connections.
How do I implement rate limiting in Nginx?
Use the limit_req module. First define a zone in the http block: limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;. Then apply it in a location block: limit_req zone=api burst=20 nodelay;. This allows 10 requests per second per IP with a burst queue of 20. The nodelay flag processes burst requests immediately rather than throttling them. Use limit_req_status 429 to return a proper Too Many Requests status code.