If you encode video on your local machine, your CPU gets pegged and you can't do anything else for the next hour. We've all been there — the machine grinds to a halt while FFmpeg churns through footage.
This article walks through building a dedicated FFmpeg encoding server on a VPS. By the end you'll have a watch folder that triggers encoding automatically, a systemd service keeping it alive 24/7, and a simple REST API for submitting jobs remotely.
## What you'll learn
- Why offloading encoding to a VPS makes sense
- How to pick the right VPS specs
- Installing and configuring FFmpeg from scratch
- Building a watch folder with `inotifywait`
- Running the watcher as a systemd service
- Exposing a minimal REST API with FastAPI
- Production tips: disk management, resource limits, logging
## Why Run Encoding on a VPS
The main problem with local encoding isn't just the CPU load — it's reliability. Leave an overnight encode running on a laptop and there's a good chance it wakes from sleep halfway through, or the fan spins up so loudly that you kill the process.
A VPS solves this with three concrete benefits.
Your local machine stays free. Once you upload the file, encoding runs entirely on the VPS. You can close your laptop, switch contexts, and come back to a finished file.
Guaranteed uptime. VPS instances run in data centers with no sleep, no power cuts, no accidental shutdowns. Long encodes finish reliably.
Elastic scaling. When throughput isn't enough, upgrade the VPS or spin up a second one. That's cheaper and faster than buying a new local machine.
Cost-wise, a VPS with enough CPU for typical video work runs $3–15/month depending on region and provider. Far less than a dedicated encoding workstation.
## Choosing a VPS and Estimating Specs
CPU core count is the single most important spec for FFmpeg. It uses all available threads for software encoding, so more cores directly means faster encodes.
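Once a box is up, you can sanity-check what you paid for and decide how much of it to hand to FFmpeg. A small sketch that derives a thread cap leaving one core free — leaving headroom is my suggestion, not a requirement:

```shell
# How many cores does this VPS actually expose?
CORES=$(nproc)

# Leave one core free for SSH and monitoring (assumption: you want headroom)
THREADS=$(( CORES > 1 ? CORES - 1 : 1 ))
echo "encode with $THREADS of $CORES cores"

# You could then cap FFmpeg explicitly, e.g.:
#   ffmpeg -i in.mp4 -threads "$THREADS" -c:v libx264 out.mp4
```

By default FFmpeg grabs every thread, which is what you want on a dedicated encoding box; the cap only matters if the VPS also runs other services.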
### Spec guidelines
| Use case | CPU | RAM | Storage |
|---|---|---|---|
| Testing / personal | 2 cores | 2 GB | 50 GB SSD |
| Mid-scale (several files/day) | 4 cores | 4 GB | 100 GB SSD |
| Production | 8+ cores | 8+ GB | 200+ GB SSD |
GPU encoding (NVENC, VAAPI) is significantly faster but requires a GPU-enabled plan, which costs considerably more. For most workflows, libx264 or libx265 on CPU delivers excellent quality without the premium.
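If you're unsure which encoders your ffmpeg build actually ships, you can list them. A small check, safe to run even before ffmpeg is installed:

```shell
# List H.264/H.265 encoders available in this ffmpeg build, if any
if command -v ffmpeg >/dev/null 2>&1; then
  echo "H.264/H.265 encoders available:"
  ffmpeg -hide_banner -encoders | grep -E '264|265' || echo "  (none found)"
else
  echo "ffmpeg not installed yet"
fi
```

Hardware encoders like `h264_nvenc` or `h264_vaapi` only appear here when the build and the underlying hardware support them.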
### Recommended providers
- DigitalOcean — straightforward pricing, good documentation, easy to resize
- Hetzner — excellent price-to-performance in Europe
- Vultr — competitive pricing globally, hourly billing
Pick Ubuntu 22.04 LTS as the OS. It has long-term support through 2027, extensive documentation, and most FFmpeg guides target it.
## Installing and Configuring FFmpeg
SSH into your VPS and start with a system update.
```shell
# Update the system
sudo apt update && sudo apt upgrade -y

# Install FFmpeg and required tools
sudo apt install -y ffmpeg inotify-tools python3-pip python3-venv

# Verify the install
ffmpeg -version
```
You should see `ffmpeg version 6.x` or similar. If the command isn't found, the install failed — rerun `sudo apt install -y ffmpeg`.
Set up the directory structure.
```shell
# Create the encoding server directories
sudo mkdir -p /opt/encoder/{watch,processing,done,failed,logs}

# Create a dedicated system user (never run as root)
sudo useradd -r -s /bin/false encoder

# Transfer ownership
sudo chown -R encoder:encoder /opt/encoder
```
Directory roles:
- `watch/` — drop files here to trigger encoding
- `processing/` — files move here while encoding runs
- `done/` — completed encodes land here
- `failed/` — files that errored out land here
- `logs/` — log files for the watcher and FFmpeg
Next, write the encode script that does the actual transcoding.
```shell
# Create /opt/encoder/encode.sh
sudo tee /opt/encoder/encode.sh > /dev/null << 'EOF'
#!/bin/bash
set -euo pipefail

INPUT="$1"
BASENAME=$(basename "$INPUT" | sed 's/\.[^.]*$//')
OUTPUT="/opt/encoder/done/${BASENAME}_encoded.mp4"

ffmpeg -i "$INPUT" \
    -c:v libx264 \
    -preset slow \
    -crf 23 \
    -c:a aac \
    -b:a 128k \
    -movflags +faststart \
    "$OUTPUT" \
    2>> /opt/encoder/logs/ffmpeg.log

echo "Done: $OUTPUT"
EOF

sudo chmod +x /opt/encoder/encode.sh
sudo chown encoder:encoder /opt/encoder/encode.sh
```
`-crf 23` is a good starting point for quality/size balance. Lower values (18–20) give higher quality at larger file sizes. Higher values (26–28) compress more aggressively. The sweet spot for most content is 20–26.
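To find your own sweet spot, encode a short clip at a few CRF values and compare file size against perceived quality. This sketch only prints the commands so you can run the ones you want — `sample.mp4` is a hypothetical test clip:

```shell
# Print comparison encodes at three CRF values; run them manually
for crf in 18 23 28; do
  echo ffmpeg -i sample.mp4 -c:v libx264 -preset slow -crf "$crf" -c:a copy "sample_crf${crf}.mp4"
done
```

Copying the audio stream (`-c:a copy`) keeps the comparison focused on video size; a 30–60 second clip is plenty to judge quality by eye.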
## Automating Encoding with a Watch Folder
The watch script listens for new files in `watch/` and runs the encode script on each one.
```shell
# Create /opt/encoder/watch.sh
sudo tee /opt/encoder/watch.sh > /dev/null << 'EOF'
#!/bin/bash
set -euo pipefail

WATCH_DIR="/opt/encoder/watch"
PROCESSING_DIR="/opt/encoder/processing"
FAILED_DIR="/opt/encoder/failed"
LOG="/opt/encoder/logs/watch.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"
}

log "Encoder watch started. Watching: $WATCH_DIR"

# close_write fires once an uploaded file is fully written; moved_to catches
# files mv'd into the folder (mv emits no close_write)
inotifywait -m -e close_write -e moved_to --format '%f' "$WATCH_DIR" | while read -r FILENAME; do
    FILEPATH="${WATCH_DIR}/${FILENAME}"

    # Only process supported video formats
    EXT="${FILENAME##*.}"
    EXT_LOWER=$(echo "$EXT" | tr '[:upper:]' '[:lower:]')
    if [[ ! "$EXT_LOWER" =~ ^(mp4|mkv|mov|avi)$ ]]; then
        log "Skipped (unsupported format): $FILENAME"
        continue
    fi

    log "Detected: $FILENAME"

    # Move to processing to prevent double-triggering
    PROC_PATH="${PROCESSING_DIR}/${FILENAME}"
    mv "$FILEPATH" "$PROC_PATH"
    log "Moved to processing: $FILENAME"

    # Run the encode
    if /opt/encoder/encode.sh "$PROC_PATH"; then
        rm -f "$PROC_PATH"
        log "Success: $FILENAME"
    else
        mv "$PROC_PATH" "${FAILED_DIR}/${FILENAME}"
        log "Failed: $FILENAME — moved to failed/"
    fi
done
EOF

sudo chmod +x /opt/encoder/watch.sh
sudo chown encoder:encoder /opt/encoder/watch.sh
```
Test it manually before wiring up systemd.
```shell
# Start the watcher in the background
sudo -u encoder /opt/encoder/watch.sh &

# Drop a test file into the watch folder
cp /path/to/test.mp4 /opt/encoder/watch/

# Tail the log
tail -f /opt/encoder/logs/watch.log
```
## Running the Watcher as a systemd Service
A script running in the background isn't production-grade. Register it as a systemd service so it starts on boot and restarts automatically if it crashes.
```ini
# /etc/systemd/system/encoder.service
[Unit]
Description=FFmpeg Encoding Watch Service
After=network.target

[Service]
Type=simple
User=encoder
Group=encoder
ExecStart=/opt/encoder/watch.sh
Restart=on-failure
RestartSec=5s
StandardOutput=append:/opt/encoder/logs/systemd.log
StandardError=append:/opt/encoder/logs/systemd.log

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/encoder
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```
Enable and start the service.
```shell
# Reload systemd to pick up the new unit file
sudo systemctl daemon-reload

# Enable auto-start on boot
sudo systemctl enable encoder.service

# Start it now
sudo systemctl start encoder.service

# Check status
sudo systemctl status encoder.service
```
`Active: active (running)` means it's working. Use `journalctl` for deeper log inspection.
```shell
# Follow logs in real time
sudo journalctl -u encoder.service -f

# View logs from the last hour
sudo journalctl -u encoder.service --since "1 hour ago"
```
## Adding a Remote Job Submission API
SCP to the watch folder is reliable but requires SSH access. A REST API lets you submit jobs from scripts, CI pipelines, or any HTTP client.
Install FastAPI into a virtual environment.
```shell
# Create a Python virtual environment
python3 -m venv /opt/encoder/venv

# Install FastAPI and Uvicorn
/opt/encoder/venv/bin/pip install fastapi uvicorn python-multipart
```
Write the API server.
```python
# /opt/encoder/api.py
import os
import shutil
from pathlib import Path

from fastapi import Depends, FastAPI, File, HTTPException, Security, UploadFile
from fastapi.responses import JSONResponse
from fastapi.security import APIKeyHeader

app = FastAPI(title="FFmpeg Encoding API")

API_KEY = os.environ.get("ENCODER_API_KEY", "")
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

WATCH_DIR = Path("/opt/encoder/watch")
DONE_DIR = Path("/opt/encoder/done")
FAILED_DIR = Path("/opt/encoder/failed")
PROCESSING_DIR = Path("/opt/encoder/processing")


def verify_api_key(key: str = Security(api_key_header)) -> str:
    if not API_KEY:
        raise HTTPException(status_code=500, detail="API key not configured on server")
    if key != API_KEY:
        raise HTTPException(status_code=403, detail="Invalid API key")
    return key


@app.post("/encode")
async def submit_encode(
    file: UploadFile = File(...),
    _: str = Depends(verify_api_key),
) -> JSONResponse:
    """Upload a video file and queue it for encoding."""
    allowed_exts = {".mp4", ".mkv", ".mov", ".avi"}
    # Strip any directory components so a crafted filename can't escape
    # the watch folder (path traversal)
    safe_name = Path(file.filename).name
    ext = Path(safe_name).suffix.lower()
    if ext not in allowed_exts:
        raise HTTPException(
            status_code=400,
            detail=f"Unsupported file type: {ext}. Allowed: {allowed_exts}",
        )
    dest = WATCH_DIR / safe_name
    with dest.open("wb") as f:
        shutil.copyfileobj(file.file, f)
    return JSONResponse(
        status_code=202,
        content={"status": "queued", "filename": safe_name},
    )


@app.get("/status")
async def get_status(_: str = Depends(verify_api_key)) -> JSONResponse:
    """Return file counts for each directory."""
    return JSONResponse(
        content={
            "watching": len(list(WATCH_DIR.iterdir())),
            "processing": len(list(PROCESSING_DIR.iterdir())),
            "done": len(list(DONE_DIR.iterdir())),
            "failed": len(list(FAILED_DIR.iterdir())),
        }
    )


@app.get("/health")
async def health_check() -> JSONResponse:
    """Health check endpoint — no auth required."""
    return JSONResponse(content={"status": "ok"})
```
Register the API as a systemd service.
```shell
# Create the API service unit file
sudo tee /etc/systemd/system/encoder-api.service > /dev/null << 'EOF'
[Unit]
Description=FFmpeg Encoding API Server
After=network.target

[Service]
Type=simple
User=encoder
Group=encoder
WorkingDirectory=/opt/encoder
Environment="ENCODER_API_KEY=your-secret-key-here"
ExecStart=/opt/encoder/venv/bin/uvicorn api:app --host 127.0.0.1 --port 8000
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable encoder-api.service
sudo systemctl start encoder-api.service
```
Generate a strong API key with openssl.
```shell
# Generate a random 32-byte hex key
openssl rand -hex 32
```
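Rather than hard-coding the key in the unit file, you can keep it in a drop-in override — `sudo systemctl edit encoder-api.service` opens one for you. A hypothetical override:

```ini
# /etc/systemd/system/encoder-api.service.d/override.conf
[Service]
Environment="ENCODER_API_KEY=paste-the-generated-key-here"
```

Drop-ins take precedence over the base unit, so the key never has to live in the world-readable service file; run `sudo systemctl daemon-reload` and restart the service after editing.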
Submit a job via curl.
```shell
# Upload a file for encoding
curl -X POST http://your-vps-ip:8000/encode \
  -H "X-API-Key: your-secret-key-here" \
  -F "file=@/path/to/video.mp4"

# Check queue status
curl http://your-vps-ip:8000/status \
  -H "X-API-Key: your-secret-key-here"
```
For production, put Nginx in front of the API as a reverse proxy and add TLS. Binding Uvicorn to `127.0.0.1` means it only listens on localhost — Nginx handles the public-facing HTTPS connection.
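A sketch of that reverse proxy, assuming the API on `127.0.0.1:8000` and certificates already obtained (the paths below are the typical Let's Encrypt layout — the domain and paths are placeholders, adjust to yours):

```nginx
# /etc/nginx/sites-available/encoder-api — hypothetical reverse proxy config
server {
    listen 443 ssl;
    server_name encoder.example.com;

    ssl_certificate     /etc/letsencrypt/live/encoder.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/encoder.example.com/privkey.pem;

    # Raise the upload limit — video files far exceed Nginx's 1 MB default
    client_max_body_size 10g;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `client_max_body_size` directive is the one people forget: without it, Nginx rejects any upload over 1 MB with a 413 before the request ever reaches the API.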
## Production Considerations

### Disk space management
Video files fill disks fast. Set a policy for the `done/` folder before you go live. The simplest approach is a cron job that deletes files older than N days.
```shell
# Delete files in done/ older than 7 days (add to crontab -e)
0 3 * * * find /opt/encoder/done -type f -mtime +7 -delete
```
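It's worth dry-running the find expression before trusting it with `-delete`. A throwaway sketch against a temp directory, using `-print` so nothing is actually removed:

```shell
# Simulate the cleanup: one file backdated 8 days, one fresh
TMP=$(mktemp -d)
touch -d "8 days ago" "$TMP/old.mp4"   # GNU touch: set mtime 8 days in the past
touch "$TMP/new.mp4"

# Same predicate as the cron job, with -print instead of -delete
find "$TMP" -type f -mtime +7 -print

rm -rf "$TMP"
```

Only `old.mp4` is listed; once the output matches your expectations, swap `-print` for `-delete` in the crontab entry.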
### Limiting concurrent encodes
If multiple files land in the watch folder simultaneously, the watcher starts parallel encodes and saturates the CPU. Add a lock file or use `flock` in the encode script to enforce one encode at a time.

Lower the process priority with `nice` so the OS stays responsive even during heavy encoding.
```shell
# In encode.sh — wrap the ffmpeg call with nice
nice -n 10 ffmpeg -i "$INPUT" ...
```
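One way to serialize encodes is `flock` around the whole invocation. A sketch — the lock path is my choice, any writable file works (on the server, `/opt/encoder/encode.lock` would be the natural spot):

```shell
# Run a command under an exclusive lock so only one encode runs at a time.
# Other invocations block on flock until the lock is released.
LOCKFILE="/tmp/encode.lock"
(
  flock -x 200
  # ... the nice/ffmpeg invocation goes here ...
  echo "holding the lock"
) 200>"$LOCKFILE"
```

The subshell holds file descriptor 200 on the lock file for its whole lifetime, so the lock releases automatically even if the encode inside crashes.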
### Log rotation
Logs in `/opt/encoder/logs/` grow indefinitely without rotation. Set up logrotate.
```shell
# Create /etc/logrotate.d/encoder
sudo tee /etc/logrotate.d/encoder > /dev/null << 'EOF'
/opt/encoder/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
EOF
```
### Transferring files
SCP is the simplest way to push files from a local machine to the watch folder.
```shell
# Push a local file to the watch folder
scp /path/to/video.mp4 user@your-vps-ip:/opt/encoder/watch/
```
Set up SSH public key authentication so you don't need to type a password on every transfer. Add an alias to `~/.ssh/config` if you're doing this frequently.
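A hypothetical `~/.ssh/config` entry that shortens the transfer command — hostname, user, and key path are placeholders:

```
Host encoder
    HostName your-vps-ip
    User user
    IdentityFile ~/.ssh/id_ed25519
```

With that in place, the transfer becomes `scp video.mp4 encoder:/opt/encoder/watch/`.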
## Wrapping Up
Here's what we built and why it works.
- VPS over local: frees your machine, guarantees uptime, scales without hardware changes
- Spec choice: 2 cores for personal use, 8+ for production throughput
- FFmpeg install: `apt install ffmpeg inotify-tools` covers the essentials
- Watch folder: `inotifywait -e close_write` fires only after a file is fully written — SCP-safe
- systemd service: auto-starts on boot, restarts on crash, logs to journald
- REST API: FastAPI + API key authentication for remote job submission
- Production: disk cleanup cron, `nice` for CPU courtesy, logrotate for log hygiene
Start with the watch folder and systemd service — that's the core of the system and covers most use cases. Add the API later if you need programmatic job submission. When the VPS runs out of headroom, upgrade the instance; the code doesn't change.