32blog by Studio Mitsu

Build an FFmpeg Encoding Server on a VPS

Set up a dedicated FFmpeg encoding server on a VPS. Covers installation, watch folder automation, systemd service, and a simple REST API.

by omitsu · 15 min read

This article contains affiliate links.


To build an FFmpeg encoding server on a VPS, install FFmpeg and inotify-tools on an Ubuntu 24.04 instance, create a watch folder that triggers encoding via inotifywait -e close_write, register the watcher as a systemd service for 24/7 uptime, and optionally add a FastAPI REST endpoint for remote job submission.

If you encode video on your local machine, your CPU gets pegged and you can't do anything else for the next hour. We've all been there — the machine grinds to a halt while FFmpeg churns through footage.

This article walks through building a dedicated FFmpeg encoding server on a VPS. By the end you'll have a watch folder that triggers encoding automatically, a systemd service keeping it alive 24/7, and a simple REST API for submitting jobs remotely.

What you'll learn

  • Why offloading encoding to a VPS makes sense
  • How to pick the right VPS specs
  • Installing and configuring FFmpeg from scratch
  • Building a watch folder with inotifywait
  • Running the watcher as a systemd service
  • Exposing a minimal REST API with FastAPI
  • Production tips: disk management, resource limits, logging

Why Run Encoding on a VPS

The main problem with local encoding isn't just the CPU load — it's reliability. Leave an overnight encode running on a laptop and there's a good chance the machine went to sleep halfway through, or the fan spun up so loud you killed the process yourself.

A VPS solves this with three concrete benefits.

Your local machine stays free. Once you upload the file, encoding runs entirely on the VPS. You can close your laptop, switch contexts, and come back to a finished file.

Guaranteed uptime. VPS instances run in data centers with no sleep, no power cuts, no accidental shutdowns. Long encodes finish reliably.

Elastic scaling. When throughput isn't enough, upgrade the VPS or spin up a second one. That's cheaper and faster than buying a new local machine.

Cost-wise, a VPS with enough CPU for typical video work runs $3–15/month depending on region and provider. Far less than a dedicated encoding workstation. Providers like Kamatera offer a 30-day free trial, so you can benchmark real encoding jobs before committing to a plan.

If you're weighing the economics of self-hosted encoding against managed cloud services, check out FFmpeg vs AWS MediaConvert: Cost Comparison for a breakdown with real numbers.

Choosing a VPS and Estimating Specs

CPU core count is the single most important spec for FFmpeg. It uses all available threads for software encoding, so more cores directly translate into faster encodes.

Spec guidelines

| Use case | CPU | RAM | Storage |
| --- | --- | --- | --- |
| Testing / personal | 2 cores | 2 GB | 50 GB SSD |
| Mid-scale (several files/day) | 4 cores | 4 GB | 100 GB SSD |
| Production | 8+ cores | 8+ GB | 200+ GB SSD |

GPU encoding (NVENC, VAAPI) is significantly faster but requires a GPU-enabled plan, which costs considerably more. For most workflows, libx264 or libx265 on CPU delivers excellent quality without the premium.

Recommended providers

  • Kamatera — 13 global data centers, pay-as-you-go from $4/month, 30-day free trial
  • DigitalOcean — straightforward pricing, good documentation, easy to resize
  • Hetzner — excellent price-to-performance in Europe
  • Vultr — competitive pricing globally, hourly billing

Pick Ubuntu 24.04 LTS as the OS. Standard support through April 2029, extensive documentation, and apt install ffmpeg gives you FFmpeg 6.1.x out of the box.

Installing and Configuring FFmpeg

SSH into your VPS and start with a system update.

bash
# Update the system
sudo apt update && sudo apt upgrade -y

# Install FFmpeg and required tools
sudo apt install -y ffmpeg inotify-tools python3-pip python3-venv

# Verify the install
ffmpeg -version

On Ubuntu 24.04, you should see ffmpeg version 6.1.x. On 22.04, the default is 4.4.x — still fully functional for this guide. If the command isn't found, the apt install failed — rerun with sudo apt install -y ffmpeg. For detailed installation options including building from source, see the Complete FFmpeg Installation Guide.

Set up the directory structure.

bash
# Create the encoding server directories
sudo mkdir -p /opt/encoder/{watch,processing,done,failed,logs}

# Create a dedicated system user (never run as root)
sudo useradd -r -s /bin/false encoder

# Transfer ownership
sudo chown -R encoder:encoder /opt/encoder

Directory roles:

  • watch/ — drop files here to trigger encoding
  • processing/ — files move here while encoding runs
  • done/ — completed encodes land here
  • failed/ — files that errored out land here
  • logs/ — log files for the watcher and FFmpeg
Pipeline: upload (SCP / API) → watch/ → inotify → processing/ → FFmpeg encode → done/ (output)

Next, write the encode script that does the actual transcoding.

bash
# Create /opt/encoder/encode.sh
sudo tee /opt/encoder/encode.sh > /dev/null << 'EOF'
#!/bin/bash
set -euo pipefail

INPUT="$1"
BASENAME=$(basename "$INPUT" | sed 's/\.[^.]*$//')
OUTPUT="/opt/encoder/done/${BASENAME}_encoded.mp4"

ffmpeg -i "$INPUT" \
  -c:v libx264 \
  -preset slow \
  -crf 23 \
  -c:a aac \
  -b:a 128k \
  -movflags +faststart \
  "$OUTPUT" \
  2>> /opt/encoder/logs/ffmpeg.log

echo "Done: $OUTPUT"
EOF

sudo chmod +x /opt/encoder/encode.sh
sudo chown encoder:encoder /opt/encoder/encode.sh

-crf 23 is a good starting point for quality/size balance. Lower values (18–20) give higher quality at larger file sizes. Higher values (26–28) compress more aggressively. The sweet spot for most content is 20–26. For a deep dive into CRF tuning and use-case-specific settings, see the FFmpeg Video Compression Guide.
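To find your own sweet spot, it can help to encode the same clip at a few CRF values and compare the resulting sizes. A minimal sketch, assuming a local sample.mp4 (the echo prints each command rather than running it; drop the echo to actually encode):

```shell
#!/bin/bash
# CRF comparison sketch. The input filename is a placeholder, and
# 'echo' prints each command instead of running it -- remove it to encode.
crf_sweep() {
  local input="$1" crf
  for crf in 18 23 28; do
    echo ffmpeg -i "$input" -c:v libx264 -preset slow -crf "$crf" \
      "${input%.*}_crf${crf}.mp4"
  done
}

crf_sweep sample.mp4
```

Compare the output files with ls -lh afterwards and pick the highest CRF that still looks acceptable for your content.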

Want to automate encoding for a batch of files instead of one at a time? The Python + FFmpeg Batch Automation guide covers scripting multi-file workflows with progress tracking.

Automating Encoding with a Watch Folder

The watch script listens for new files in watch/ and runs the encode script on each one.

bash
# Create /opt/encoder/watch.sh
sudo tee /opt/encoder/watch.sh > /dev/null << 'EOF'
#!/bin/bash
set -euo pipefail

WATCH_DIR="/opt/encoder/watch"
PROCESSING_DIR="/opt/encoder/processing"
FAILED_DIR="/opt/encoder/failed"
LOG="/opt/encoder/logs/watch.log"

log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"
}

log "Encoder watch started. Watching: $WATCH_DIR"

inotifywait -m -e close_write --format '%f' "$WATCH_DIR" | while read -r FILENAME; do
  FILEPATH="${WATCH_DIR}/${FILENAME}"

  # Only process supported video formats
  EXT="${FILENAME##*.}"
  EXT_LOWER=$(echo "$EXT" | tr '[:upper:]' '[:lower:]')
  if [[ ! "$EXT_LOWER" =~ ^(mp4|mkv|mov|avi)$ ]]; then
    log "Skipped (unsupported format): $FILENAME"
    continue
  fi

  log "Detected: $FILENAME"

  # Move to processing to prevent double-triggering
  PROC_PATH="${PROCESSING_DIR}/${FILENAME}"
  mv "$FILEPATH" "$PROC_PATH"
  log "Moved to processing: $FILENAME"

  # Run the encode
  if /opt/encoder/encode.sh "$PROC_PATH"; then
    rm -f "$PROC_PATH"
    log "Success: $FILENAME"
  else
    mv "$PROC_PATH" "${FAILED_DIR}/${FILENAME}"
    log "Failed: $FILENAME — moved to failed/"
  fi
done
EOF

sudo chmod +x /opt/encoder/watch.sh
sudo chown encoder:encoder /opt/encoder/watch.sh

Test it manually before wiring up systemd.

bash
# Start the watcher in the background
sudo -u encoder /opt/encoder/watch.sh &

# Drop a test file into the watch folder
cp /path/to/test.mp4 /opt/encoder/watch/

# Tail the log
tail -f /opt/encoder/logs/watch.log

Running the Watcher as a systemd Service

A script running in the background isn't production-grade. Register it as a systemd service so it starts on boot and restarts automatically if it crashes.

ini
# /etc/systemd/system/encoder.service
[Unit]
Description=FFmpeg Encoding Watch Service
After=network.target

[Service]
Type=simple
User=encoder
Group=encoder
ExecStart=/opt/encoder/watch.sh
Restart=on-failure
RestartSec=5s
StandardOutput=append:/opt/encoder/logs/systemd.log
StandardError=append:/opt/encoder/logs/systemd.log

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/encoder
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Enable and start the service.

bash
# Reload systemd to pick up the new unit file
sudo systemctl daemon-reload

# Enable auto-start on boot
sudo systemctl enable encoder.service

# Start it now
sudo systemctl start encoder.service

# Check status
sudo systemctl status encoder.service

Active: active (running) means it's working. Use journalctl for deeper log inspection.

bash
# Follow logs in real time
sudo journalctl -u encoder.service -f

# View logs from the last hour
sudo journalctl -u encoder.service --since "1 hour ago"

Adding a Remote Job Submission API

SCP to the watch folder is reliable but requires SSH access. A REST API lets you submit jobs from scripts, CI pipelines, or any HTTP client.

Install FastAPI into a virtual environment.

bash
# Create a Python virtual environment
python3 -m venv /opt/encoder/venv

# Install FastAPI and Uvicorn
/opt/encoder/venv/bin/pip install "fastapi[standard]" python-multipart

Write the API server.

python
# /opt/encoder/api.py
import os
import shutil
from pathlib import Path

from fastapi import Depends, FastAPI, File, HTTPException, Security, UploadFile
from fastapi.responses import JSONResponse
from fastapi.security import APIKeyHeader

app = FastAPI(title="FFmpeg Encoding API")

API_KEY = os.environ.get("ENCODER_API_KEY", "")
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

WATCH_DIR = Path("/opt/encoder/watch")
DONE_DIR = Path("/opt/encoder/done")
FAILED_DIR = Path("/opt/encoder/failed")
PROCESSING_DIR = Path("/opt/encoder/processing")


def verify_api_key(key: str = Security(api_key_header)) -> str:
    if not API_KEY:
        raise HTTPException(status_code=500, detail="API key not configured on server")
    if key != API_KEY:
        raise HTTPException(status_code=403, detail="Invalid API key")
    return key


@app.post("/encode")
async def submit_encode(
    file: UploadFile = File(...),
    _: str = Depends(verify_api_key),
) -> JSONResponse:
    """Upload a video file and queue it for encoding."""
    allowed_exts = {".mp4", ".mkv", ".mov", ".avi"}
    if not file.filename:
        raise HTTPException(status_code=400, detail="Missing filename")

    # Use only the base name so a crafted filename can't escape WATCH_DIR
    safe_name = Path(file.filename).name
    ext = Path(safe_name).suffix.lower()

    if ext not in allowed_exts:
        raise HTTPException(
            status_code=400,
            detail=f"Unsupported file type: {ext}. Allowed: {allowed_exts}",
        )

    dest = WATCH_DIR / safe_name
    with dest.open("wb") as f:
        shutil.copyfileobj(file.file, f)

    return JSONResponse(
        status_code=202,
        content={"status": "queued", "filename": file.filename},
    )


@app.get("/status")
async def get_status(_: str = Depends(verify_api_key)) -> JSONResponse:
    """Return file counts for each directory."""
    return JSONResponse(
        content={
            "watching": len(list(WATCH_DIR.iterdir())),
            "processing": len(list(PROCESSING_DIR.iterdir())),
            "done": len(list(DONE_DIR.iterdir())),
            "failed": len(list(FAILED_DIR.iterdir())),
        }
    )


@app.get("/health")
async def health_check() -> JSONResponse:
    """Health check endpoint — no auth required."""
    return JSONResponse(content={"status": "ok"})

Register the API as a systemd service.

bash
# Create the API service unit file
sudo tee /etc/systemd/system/encoder-api.service > /dev/null << 'EOF'
[Unit]
Description=FFmpeg Encoding API Server
After=network.target

[Service]
Type=simple
User=encoder
Group=encoder
WorkingDirectory=/opt/encoder
Environment="ENCODER_API_KEY=your-secret-key-here"
ExecStart=/opt/encoder/venv/bin/uvicorn api:app --host 127.0.0.1 --port 8000
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable encoder-api.service
sudo systemctl start encoder-api.service

Generate a strong API key with openssl and use it to replace your-secret-key-here in the unit file.

bash
# Generate a random 32-byte hex key
openssl rand -hex 32

Submit a job via curl. Since Uvicorn binds to 127.0.0.1, run these from the VPS itself, or through the Nginx proxy described below.

bash
# Upload a file for encoding (from the VPS itself)
curl -X POST http://127.0.0.1:8000/encode \
  -H "X-API-Key: your-secret-key-here" \
  -F "file=@/path/to/video.mp4"

# Check queue status
curl http://127.0.0.1:8000/status \
  -H "X-API-Key: your-secret-key-here"

For production, put Nginx in front of the API as a reverse proxy and add TLS. Binding Uvicorn to 127.0.0.1 means it only listens on localhost — Nginx handles the public-facing HTTPS connection.
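A minimal sketch of such an Nginx server block, assuming a placeholder domain (encoder.example.com) and Let's Encrypt certificate paths; adjust client_max_body_size to your largest uploads:

```nginx
# Sketch: /etc/nginx/sites-available/encoder (domain and cert paths are placeholders)
server {
    listen 443 ssl;
    server_name encoder.example.com;

    ssl_certificate     /etc/letsencrypt/live/encoder.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/encoder.example.com/privkey.pem;

    # Allow large video uploads (Nginx default is 1 MB)
    client_max_body_size 10g;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Uploads return 202 quickly, but slow links need a generous timeout
        proxy_read_timeout 300s;
    }
}
```

Enable the site, reload Nginx, and your curl calls work against https://encoder.example.com instead of localhost.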

Production Considerations

Disk space management

Video files fill disks fast. Set a policy for the done/ folder before you go live. The simplest approach is a cron job that deletes files older than N days.

bash
# Delete files in done/ older than 7 days (add to crontab -e)
0 3 * * * find /opt/encoder/done -type f -mtime +7 -delete

Limiting concurrent encodes

If multiple files land in the watch folder simultaneously, the watcher starts parallel encodes and saturates the CPU. Add a lock file or use flock in the encode script to enforce one encode at a time.
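A sketch of the flock approach, as it could be wired into watch.sh (the sleep stands in for the real encode.sh call, and /tmp/encoder.lock is an assumed lock path):

```shell
#!/bin/bash
# Serialize encodes with flock: a second caller blocks until the
# first finishes, so only one ffmpeg runs at a time.
LOCK=/tmp/encoder.lock

encode_locked() {
  # -w 7200: give up after two hours instead of queueing forever.
  # Swap 'sleep 0.1' for: /opt/encoder/encode.sh "$1"
  flock -w 7200 "$LOCK" sleep 0.1
}

encode_locked clip_a.mp4
encode_locked clip_b.mp4
echo "both ran sequentially"
```

Files that arrive while an encode is running simply wait their turn, which keeps the CPU saturated by exactly one job.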

Lower the process priority with nice so the OS stays responsive even during heavy encoding.

bash
# In encode.sh — wrap the ffmpeg call with nice
nice -n 10 ffmpeg -i "$INPUT" ...

Log rotation

Logs in /opt/encoder/logs/ grow indefinitely without rotation. Set up logrotate.

bash
# Create /etc/logrotate.d/encoder
sudo tee /etc/logrotate.d/encoder > /dev/null << 'EOF'
/opt/encoder/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
EOF

Transferring files

SCP is the simplest way to push files from a local machine to the watch folder.

bash
# Push a local file to the watch folder
scp /path/to/video.mp4 user@your-vps-ip:/opt/encoder/watch/

Set up SSH public key authentication so you don't need to type a password on every transfer. Add an alias to ~/.ssh/config if you're doing this frequently.
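For example, a hypothetical ~/.ssh/config entry (the alias, user, and key path are placeholders):

```
# ~/.ssh/config ('encoder-vps' is an assumed alias; substitute your details)
Host encoder-vps
    HostName your-vps-ip
    User youruser
    IdentityFile ~/.ssh/id_ed25519
```

With that in place, the transfer shortens to scp video.mp4 encoder-vps:/opt/encoder/watch/.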

Storage and Distribution

Once encoding finishes, you need somewhere to put the output. Match the service to your use case.

Cloud object storage

| Service | Price | Best for |
| --- | --- | --- |
| Backblaze B2 | $6/TB/month (10 GB free) | Cheapest large-scale storage |
| Google Drive | 15 GB free | Quick sharing with a Google account |
| Cloudflare R2 | ~$0.015/GB/month, no egress fees | CDN delivery without bandwidth costs |
| Dropbox | 2 GB free | Team collaboration |

For large video archives, Backblaze B2 or Cloudflare R2 offer the best value. For casual sharing, Google Drive is fine.

Video platforms

| Platform | Type | Best for |
| --- | --- | --- |
| YouTube | Free, unlimited, massive reach | Public content, monetization |
| Vimeo | Ad-free, high-quality streaming | Portfolios, client reviews |
| Bunny.net | Affordable CDN + video streaming | Embedding video in your own service |

Self-hosted NAS

If you want full control over your video library, a Synology or QNAP NAS works well. Install Jellyfin or Plex for FFmpeg-based on-demand transcoding and a complete home media server.

If you're encoding for web delivery, a natural next step is HLS streaming with a CDN — your VPS encodes, the CDN distributes.


FAQ

How many CPU cores does FFmpeg actually use?

FFmpeg scales across all available CPU threads for software encoding with libx264 and libx265. A 4-core VPS uses 4 threads by default. You can limit this with -threads N if you want to leave headroom for other processes. For hardware encoding (NVENC, QSV), core count matters less — the GPU does the heavy lifting. See the GPU encoding guide for details.

Can I use this setup for HLS/DASH streaming output?

Yes. Modify encode.sh to output HLS segments instead of a single MP4: add -hls_time 6 -hls_list_size 0 to the FFmpeg command and change the output file to output.m3u8. The watcher will then generate HLS-ready files automatically. The HLS streaming guide covers multi-bitrate adaptive streaming from this kind of setup.
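As a concrete sketch, the modified ffmpeg call in encode.sh could look like this (the per-file output directory is an assumption; echo prints the commands rather than running them, so drop it in the real script):

```shell
#!/bin/bash
# HLS variant of the encode.sh ffmpeg call (sketch). Segments and the
# playlist land in a per-file directory under done/.
hls_encode_cmd() {
  local input="$1" basename="$2"
  local outdir="/opt/encoder/done/${basename}"
  echo mkdir -p "$outdir"
  echo ffmpeg -i "$input" \
    -c:v libx264 -preset slow -crf 23 \
    -c:a aac -b:a 128k \
    -hls_time 6 -hls_list_size 0 \
    "${outdir}/index.m3u8"
}

hls_encode_cmd /opt/encoder/processing/clip.mp4 clip
```

Point any HLS-capable player at index.m3u8 and it picks up the segments from the same directory.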

Is a $5/month VPS fast enough for video encoding?

For personal use with occasional 1080p encodes, a 2-core/$5 VPS handles it fine — a 10-minute 1080p clip takes roughly 15–25 minutes with -preset slow -crf 23. For 4K or batch encoding, you'll want 4+ cores. The beauty of VPS is you can resize instantly when demand spikes.

How do I transfer large files to the VPS faster?

SCP works but can be slow for multi-gigabyte files over long distances. Use rsync --partial --progress for resumable transfers. For bulk uploads, consider rclone to sync a local folder with the VPS watch folder. Compression before transfer (ffmpeg -c copy to strip unnecessary streams) also helps.

What happens if the VPS reboots during an encode?

The systemd service restarts automatically after boot. However, the file that was mid-encode will be stuck in processing/ with a partially written output in done/. Add a startup check to watch.sh that moves any files in processing/ back to watch/ so they get re-queued automatically.
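A sketch of that startup check, written as a function (directories are passed as arguments here; in watch.sh you would call it with $PROCESSING_DIR and $WATCH_DIR before the inotifywait loop):

```shell
#!/bin/bash
# Re-queue files stranded in processing/ after an unclean shutdown.
requeue_stuck() {
  local processing="$1" watch="$2" f
  for f in "$processing"/*; do
    [ -e "$f" ] || continue          # glob matched nothing
    mv "$f" "$watch/"
    echo "Re-queued: $(basename "$f")"
  done
}
```

You may also want to delete the matching partial *_encoded.mp4 in done/ before re-queueing, so the rerun starts from a clean slate.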

Should I compile FFmpeg from source on the VPS?

For most use cases, the apt package is sufficient. Compile from source only if you need codecs not included in the default build (like libfdk-aac or libsvtav1). The FFmpeg installation guide covers both apt and source compilation.

How do I monitor encoding progress remotely?

The /status API endpoint shows queue counts. For real-time progress on individual encodes, add -progress pipe:1 to the FFmpeg command and parse the output in your watch script. You can also use watch -n 5 curl -s http://localhost:8000/status on the VPS itself.
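A small parser for that -progress output might look like this (fed from a sample string here; in practice you would pipe FFmpeg's pipe:1 output into it):

```shell
#!/bin/bash
# Parse ffmpeg -progress output (key=value lines). Prints the current
# timestamp each time ffmpeg flushes a progress block.
parse_progress() {
  local key value current=""
  while IFS='=' read -r key value; do
    case "$key" in
      out_time) current="$value" ;;
      progress) echo "encoded up to ${current:-?} (${value})" ;;
    esac
  done
}

# Sample of what ffmpeg emits with: ffmpeg -i in.mp4 ... -progress pipe:1 out.mp4
printf 'frame=120\nout_time=00:00:05.000000\nprogress=continue\n' | parse_progress
# -> encoded up to 00:00:05.000000 (continue)
```

Appending each line to a status file gives the /status endpoint something richer to report than bare queue counts.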

Is this setup production-ready for a video SaaS?

The watch folder + systemd architecture handles low-to-medium volume reliably. For a proper SaaS with concurrent users, job priorities, and retry logic, you'll want a message queue (Redis + Celery, or RabbitMQ) instead of filesystem-based triggering. This guide gives you the encoding layer — the orchestration layer is a separate concern.

Wrapping Up

Here's what we built and why it works.

  • VPS over local: frees your machine, guarantees uptime, scales without hardware changes
  • Spec choice: 2 cores for personal use, 8+ for production throughput
  • FFmpeg install: apt install ffmpeg inotify-tools covers the essentials
  • Watch folder: inotifywait -e close_write fires only after a file is fully written — SCP-safe
  • systemd service: auto-starts on boot, restarts on crash, logs to journald
  • REST API: FastAPI + API key authentication for remote job submission
  • Production: disk cleanup cron, nice for CPU courtesy, logrotate for log hygiene

Start with the watch folder and systemd service — that's the core of the system and covers most use cases. Add the API later if you need programmatic job submission. When the VPS runs out of headroom, upgrade the instance; the code doesn't change.


Tired of memorizing FFmpeg commands? Try ffmpeg-quick — an open-source CLI that wraps common tasks like compression, HLS, and GIF creation into simple presets you can run with npx.

Kamatera

Enterprise-grade cloud VPS with global data centers

  • 13 data centers (US, EU, Asia, Middle East)
  • Starting at $4/month for 1GB RAM — pay-as-you-go
  • 30-day free trial available