32blog by Studio Mitsu

Building a Multi-Camera Surveillance Dashboard with FFmpeg

Build a browser-based surveillance dashboard using FFmpeg and Node.js. Convert multiple RTSP camera feeds to HLS and display them in a grid layout.

by omitsu · 14 min read

You can build a multi-camera surveillance dashboard by running one FFmpeg process per camera to convert RTSP streams to HLS, managing those processes with Node.js, and displaying the feeds in a browser grid using hls.js. No proprietary software required.

"We need to see all our cameras on one screen" — this is a challenge I actually tackled over five years ago on a project for a TV station. The request was simple: eight cameras in common areas, and one manager who needed to monitor all the feeds from a browser at his desk. A commercial VMS (video management system) was overkill at that scale. I built it with FFmpeg, Java, and Nginx back then, and it took about a month to finish. This article covers the same architecture using Node.js instead.

We'll walk through everything from architecture design to frontend implementation, all running on a single server.

System Architecture Overview

Let's map out the complete system.

Cam 1 (Hikvision) ──┐
Cam 2 (Dahua)     ──┼── RTSP ──→ Stream Manager (Node.js + FFmpeg, spawn() per camera)
Cam 3 (USB/RPi)   ──┘                 │  -f tee
                                      ├──→ HLS (.m3u8 + .ts) ── HTTP ──→ Dashboard (hls.js grid view)
                                      └──→ Recording (MP4 archive)

Components:

| Component | Role |
| --- | --- |
| IP Cameras (multiple) | Deliver H.264/H.265 video via RTSP |
| Node.js Stream Manager | Spawn and manage FFmpeg processes per camera |
| FFmpeg (one per camera) | Convert RTSP to HLS in parallel |
| Nginx | Serve HLS segments as static files |
| Browser Dashboard | Play each camera's HLS stream using hls.js |

The key design decision is running one FFmpeg process per camera. While a single FFmpeg process can handle multiple inputs, if one stream freezes, it can take down the others. Process isolation minimizes the blast radius of failures.

Converting Multiple RTSP Streams to HLS Simultaneously

Let's start with raw FFmpeg commands before adding the Node.js management layer.

Directory Structure

bash
mkdir -p /var/www/hls/{cam01,cam02,cam03}

FFmpeg Commands Per Camera

bash
# Camera 1 (Hikvision)
ffmpeg -rtsp_transport tcp \
  -i "rtsp://admin:pass1@192.168.1.64:554/Streaming/Channels/101" \
  -c:v copy -c:a aac -b:a 128k \
  -f hls -hls_time 2 -hls_list_size 10 \
  -hls_flags delete_segments+append_list \
  -hls_segment_filename "/var/www/hls/cam01/seg_%03d.ts" \
  "/var/www/hls/cam01/index.m3u8" &

# Camera 2 (Dahua)
ffmpeg -rtsp_transport tcp \
  -i "rtsp://admin:pass2@192.168.1.108:554/cam/realmonitor?channel=1&subtype=0" \
  -c:v copy -c:a aac -b:a 128k \
  -f hls -hls_time 2 -hls_list_size 10 \
  -hls_flags delete_segments+append_list \
  -hls_segment_filename "/var/www/hls/cam02/seg_%03d.ts" \
  "/var/www/hls/cam02/index.m3u8" &

# Camera 3 (ONVIF)
ffmpeg -rtsp_transport tcp \
  -i "rtsp://admin:pass3@192.168.1.100:554/onvif1" \
  -c:v copy -c:a aac -b:a 128k \
  -f hls -hls_time 2 -hls_list_size 10 \
  -hls_flags delete_segments+append_list \
  -hls_segment_filename "/var/www/hls/cam03/seg_%03d.ts" \
  "/var/www/hls/cam03/index.m3u8" &

wait

The & runs each command in the background, and wait blocks until all processes finish. This works, but monitoring, restarting, and configuration management are all manual — which is why we wrap it in Node.js for production.

Resource Estimates

With -c:v copy (no re-encoding), per-stream resource usage is minimal.

| Cameras | CPU Usage (approx.) | Memory | Network Bandwidth |
| --- | --- | --- | --- |
| 1-4 | 5-10% | ~50MB/process | 2-8 Mbps/camera |
| 5-10 | 10-25% | ~500MB | 10-40 Mbps |
| 10-20 | 20-50% | ~1GB | 20-80 Mbps |

If H.265→H.264 transcoding is required, CPU load increases dramatically. Consider GPU acceleration with NVENC or QSV — see the GPU Encoding Guide for details. The FFmpeg HLS muxer documentation covers all available options for tuning segment behavior.
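The bandwidth and storage figures in this article come from simple arithmetic, and it's handy to have them scriptable when sizing a deployment. A quick sketch in Node.js (the helper names are ours, not part of the dashboard code):

```javascript
// Back-of-envelope sizing helpers (hypothetical names, not part of the
// system): convert a per-camera bitrate into hourly storage and aggregate
// network bandwidth.
function hourlyStorageGB(mbps) {
  // megabits/s → bits/hour → bytes → GB
  return (mbps * 1e6 * 3600) / 8 / 1e9;
}

function aggregateBandwidthMbps(cameraCount, mbpsPerCamera) {
  return cameraCount * mbpsPerCamera;
}

console.log(hourlyStorageGB(4));            // 1.8 GB/hour at 4 Mbps
console.log(aggregateBandwidthMbps(20, 8)); // 160 Mbps for 20 cameras at 8 Mbps
```

These are the same numbers the resource table and the FAQ quote: a 4 Mbps 1080p stream is 1.8 GB per hour, and 20 cameras at the high end saturate 160 Mbps.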

Building a Stream Manager with Node.js

We need a Node.js server to spawn, stop, and monitor FFmpeg processes.

Project Structure

bash
rtsp-hls-dashboard/
├── server/
│   ├── index.mjs          # Entry point
│   ├── stream-manager.mjs # FFmpeg process management
│   └── config.mjs         # Camera configuration
├── public/
│   └── index.html          # Dashboard UI
├── package.json
└── .env                    # Environment variables (credentials)

Camera Configuration

javascript
// server/config.mjs
export const cameras = [
  {
    id: "cam01",
    name: "Entrance",
    rtspUrl: process.env.CAM01_RTSP_URL,
    hlsDir: "/var/www/hls/cam01",
  },
  {
    id: "cam02",
    name: "Server Room",
    rtspUrl: process.env.CAM02_RTSP_URL,
    hlsDir: "/var/www/hls/cam02",
  },
  {
    id: "cam03",
    name: "Parking Lot",
    rtspUrl: process.env.CAM03_RTSP_URL,
    hlsDir: "/var/www/hls/cam03",
  },
];

RTSP URLs are managed via .env. Never hardcode credentials in source code.

bash
# .env
CAM01_RTSP_URL=rtsp://admin:pass1@192.168.1.64:554/Streaming/Channels/101
CAM02_RTSP_URL=rtsp://admin:pass2@192.168.1.108:554/cam/realmonitor?channel=1&subtype=0
CAM03_RTSP_URL=rtsp://admin:pass3@192.168.1.100:554/onvif1
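A missing or mistyped environment variable silently yields `undefined`, and FFmpeg then fails with a confusing error, so it's worth failing fast at startup. A small guard (a sketch; `validateCameras` is our name and not part of the config module):

```javascript
// Fail fast when any camera has no RTSP URL (e.g. .env not loaded, or a
// variable name was mistyped). Call this on the cameras array from
// config.mjs before starting any streams.
function validateCameras(cameras) {
  const missing = cameras.filter((cam) => !cam.rtspUrl).map((cam) => cam.id);
  if (missing.length > 0) {
    throw new Error(`Missing RTSP URL for: ${missing.join(", ")}`);
  }
  return cameras;
}

// Passes: the URL is present.
validateCameras([{ id: "cam01", rtspUrl: "rtsp://example" }]);
```

Throwing at startup turns a hard-to-diagnose stream failure into an immediate, named configuration error.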

Stream Manager

javascript
// server/stream-manager.mjs
import { spawn } from "node:child_process";
import { mkdir } from "node:fs/promises";
import path from "node:path";

export class StreamManager {
  #processes = new Map();
  #restartTimers = new Map();

  async startStream(camera) {
    if (this.#processes.has(camera.id)) {
      console.log(`[${camera.id}] Already running`);
      return;
    }

    await mkdir(camera.hlsDir, { recursive: true });

    const args = [
      "-rtsp_transport", "tcp",
      "-timeout", "5000000",
      "-i", camera.rtspUrl,
      "-c:v", "copy",
      "-c:a", "aac", "-b:a", "128k",
      "-f", "hls",
      "-hls_time", "2",
      "-hls_list_size", "10",
      "-hls_flags", "delete_segments+append_list",
      "-hls_segment_filename",
      path.join(camera.hlsDir, "seg_%03d.ts"),
      path.join(camera.hlsDir, "index.m3u8"),
    ];

    const proc = spawn("ffmpeg", args, {
      stdio: ["ignore", "pipe", "pipe"],
    });

    proc.stderr.on("data", (data) => {
      const line = data.toString().trim();
      if (line.includes("Error") || line.includes("error")) {
        console.error(`[${camera.id}] ${line}`);
      }
    });

    proc.on("exit", (code) => {
      console.log(`[${camera.id}] FFmpeg exited with code ${code}`);
      this.#processes.delete(camera.id);
      this.#scheduleRestart(camera);
    });

    this.#processes.set(camera.id, proc);
    console.log(`[${camera.id}] Started (PID: ${proc.pid})`);
  }

  stopStream(cameraId) {
    const proc = this.#processes.get(cameraId);
    if (proc) {
      proc.kill("SIGTERM");
      this.#processes.delete(cameraId);
    }
    const timer = this.#restartTimers.get(cameraId);
    if (timer) {
      clearTimeout(timer);
      this.#restartTimers.delete(cameraId);
    }
  }

  #scheduleRestart(camera) {
    console.log(`[${camera.id}] Restarting in 5 seconds...`);
    const timer = setTimeout(() => {
      this.#restartTimers.delete(camera.id);
      this.startStream(camera);
    }, 5000);
    this.#restartTimers.set(camera.id, timer);
  }

  getStatus() {
    const status = {};
    for (const [id, proc] of this.#processes) {
      status[id] = { pid: proc.pid, running: !proc.killed };
    }
    return status;
  }

  stopAll() {
    for (const [id] of this.#processes) {
      this.stopStream(id);
    }
  }
}

Three key design choices:

  • Process isolation: Each camera gets its own spawn. One crash doesn't affect others
  • Auto-restart: The exit event schedules a restart after 5 seconds
  • Clean shutdown: SIGTERM asks FFmpeg to terminate gracefully so it can finalize the current segment
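One refinement worth considering (not implemented in the manager above): exponential backoff instead of a fixed 5-second delay, so a camera that stays offline for hours isn't retried aggressively forever. A sketch of the delay schedule, with hypothetical names:

```javascript
// Hypothetical backoff schedule: double the delay per consecutive failure,
// capped at maxMs. Reset attempt to 0 once a stream has run cleanly for a
// while, so a brief outage still recovers in ~5 seconds.
function restartDelayMs(attempt, baseMs = 5000, maxMs = 60000) {
  return Math.min(baseMs * 2 ** Math.min(attempt, 10), maxMs);
}

console.log(restartDelayMs(0)); // 5000  — first retry
console.log(restartDelayMs(3)); // 40000
console.log(restartDelayMs(9)); // 60000 — capped
```

Wiring this in only requires the `#scheduleRestart` method to track a per-camera failure count and pass it to `restartDelayMs` instead of hardcoding 5000.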

Entry Point

javascript
// server/index.mjs
import "dotenv/config";
import http from "node:http";
import { cameras } from "./config.mjs";
import { StreamManager } from "./stream-manager.mjs";

const manager = new StreamManager();

// Start all camera streams
for (const cam of cameras) {
  manager.startStream(cam);
}

// Status API
const server = http.createServer((req, res) => {
  if (req.url === "/api/status") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({
      cameras: cameras.map((cam) => ({
        id: cam.id,
        name: cam.name,
        hlsUrl: `/hls/${cam.id}/index.m3u8`,
        ...manager.getStatus()[cam.id],
      })),
    }));
    return;
  }
  res.writeHead(404);
  res.end("Not Found");
});

server.listen(3001, () => {
  console.log("Stream manager API running on port 3001");
});

// Graceful shutdown
for (const signal of ["SIGTERM", "SIGINT"]) {
  process.on(signal, () => {
    console.log("Shutting down...");
    manager.stopAll();
    server.close();
    process.exit(0);
  });
}

A Browser-Based Frontend for Camera Feeds

A simple dashboard using hls.js to display camera feeds in a responsive grid.

html
<!-- public/index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Surveillance Dashboard</title>
  <script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body {
      background: #0a0a0a;
      color: #e0e0e0;
      font-family: "SF Mono", "Fira Code", monospace;
    }
    .header {
      padding: 1rem 2rem;
      border-bottom: 1px solid #1a1a1a;
      display: flex;
      justify-content: space-between;
      align-items: center;
    }
    .header h1 { font-size: 1.2rem; color: #b4f0a0; }
    .status { font-size: 0.8rem; color: #666; }
    .grid {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(480px, 1fr));
      gap: 1px;
      background: #1a1a1a;
      padding: 1px;
    }
    .camera-cell {
      background: #0a0a0a;
      position: relative;
    }
    .camera-cell video {
      width: 100%;
      display: block;
      background: #000;
    }
    .camera-label {
      position: absolute;
      top: 8px;
      left: 8px;
      background: rgba(0, 0, 0, 0.7);
      color: #b4f0a0;
      padding: 4px 8px;
      font-size: 0.75rem;
      border: 1px solid #b4f0a033;
    }
    .camera-status {
      position: absolute;
      top: 8px;
      right: 8px;
      width: 8px;
      height: 8px;
      border-radius: 50%;
      background: #4caf50;
    }
    .camera-status.offline { background: #f44336; }

    @media (max-width: 768px) {
      .grid { grid-template-columns: 1fr; }
    }
  </style>
</head>
<body>
  <div class="header">
    <h1>&gt;_ Surveillance Dashboard</h1>
    <div class="status" id="clock"></div>
  </div>
  <div class="grid" id="grid"></div>

  <script>
    async function init() {
      const res = await fetch("/api/status");
      const data = await res.json();
      const grid = document.getElementById("grid");

      for (const cam of data.cameras) {
        const cell = document.createElement("div");
        cell.className = "camera-cell";
        cell.innerHTML = `
          <video id="video-${cam.id}" muted autoplay playsinline></video>
          <div class="camera-label">${cam.name} [${cam.id}]</div>
          <div class="camera-status ${cam.running ? "" : "offline"}"
               id="status-${cam.id}"></div>
        `;
        grid.appendChild(cell);

        const video = cell.querySelector("video");
        if (Hls.isSupported()) {
          const hls = new Hls({
            liveSyncDurationCount: 3,
            liveMaxLatencyDurationCount: 6,
            enableWorker: true,
          });
          hls.loadSource(cam.hlsUrl);
          hls.attachMedia(video);
          hls.on(Hls.Events.ERROR, (event, data) => {
            if (data.fatal) {
              console.error(`[${cam.id}] HLS error:`, data.type);
              setTimeout(() => {
                if (data.type === Hls.ErrorTypes.NETWORK_ERROR) {
                  hls.startLoad();          // resume loading after a network error
                } else if (data.type === Hls.ErrorTypes.MEDIA_ERROR) {
                  hls.recoverMediaError();  // try to recover a decode/buffer error
                } else {
                  hls.loadSource(cam.hlsUrl);  // last resort: reload from scratch
                  hls.attachMedia(video);
                }
              }, 3000);
            }
          });
        } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
          video.src = cam.hlsUrl;
        }
      }

      setInterval(() => {
        document.getElementById("clock").textContent =
          new Date().toLocaleString("en-US");
      }, 1000);
    }

    init();
  </script>
</body>
</html>

Key features:

  • Responsive grid: grid-template-columns: repeat(auto-fit, minmax(480px, 1fr)) adapts to the number of cameras
  • Low latency: hls.js liveSyncDurationCount: 3 minimizes delay
  • Error recovery: Auto-reconnect after 3 seconds on HLS errors
  • Mobile-friendly: Switches to single column below 768px
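One gap in the page above: it fetches `/api/status` only once, at load, so the status dots never change afterwards. If you want them to stay current, poll the API and apply the result to the DOM. The decision logic can be kept as a pure function (a sketch; `statusUpdates` is our name):

```javascript
// Pure helper for a status-polling loop: map the /api/status payload to a
// list of DOM updates. The DOM wiring is kept out of this function so the
// logic can be tested without a browser.
function statusUpdates(cameras) {
  return cameras.map((cam) => ({
    elementId: `status-${cam.id}`,
    offline: !cam.running,
  }));
}

// In the page, something like:
// setInterval(async () => {
//   const { cameras } = await (await fetch("/api/status")).json();
//   for (const { elementId, offline } of statusUpdates(cameras)) {
//     document.getElementById(elementId)?.classList.toggle("offline", offline);
//   }
// }, 10000);
```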

Recording and Automatic Cleanup

Beyond live streaming, you often need simultaneous recording. FFmpeg's -f tee lets you create multiple outputs from a single input.

Simultaneous HLS Streaming and Recording

bash
mkdir -p /var/recordings/cam01  # the segment muxer won't create this directory

ffmpeg -rtsp_transport tcp \
  -i "rtsp://admin:pass1@192.168.1.64:554/Streaming/Channels/101" \
  -c:v copy -c:a aac -b:a 128k \
  -f tee -map 0:v -map 0:a \
  "[f=hls:hls_time=2:hls_list_size=10:hls_flags=delete_segments+append_list:hls_segment_filename=/var/www/hls/cam01/seg_%03d.ts]/var/www/hls/cam01/index.m3u8|[f=segment:segment_time=3600:segment_format=mp4:reset_timestamps=1:strftime=1]/var/recordings/cam01/%Y%m%d_%H%M%S.mp4"

This produces both "live HLS streaming" and "hourly MP4 recordings" from a single FFmpeg process.
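The tee output string is easy to get wrong by hand — one misplaced colon or bracket and FFmpeg rejects the whole argument. If you fold recording into the Node.js manager, assembling it programmatically helps; a sketch (the function name is ours) that mirrors the options from the shell command above:

```javascript
// Build the -f tee output argument from the HLS and recording directories.
// The bracketed prefix sets per-output muxer options; "|" separates outputs.
function teeOutput(hlsDir, recDir) {
  const hls =
    `[f=hls:hls_time=2:hls_list_size=10:` +
    `hls_flags=delete_segments+append_list:` +
    `hls_segment_filename=${hlsDir}/seg_%03d.ts]${hlsDir}/index.m3u8`;
  const rec =
    `[f=segment:segment_time=3600:segment_format=mp4:` +
    `reset_timestamps=1:strftime=1]${recDir}/%Y%m%d_%H%M%S.mp4`;
  return `${hls}|${rec}`;
}

console.log(teeOutput("/var/www/hls/cam01", "/var/recordings/cam01"));
```

The returned string slots directly into the spawn args after `"-f", "tee", "-map", "0:v", "-map", "0:a"`.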

Auto-Deleting Old Recordings

Manage disk space with a cron job that deletes old files.

bash
#!/bin/bash
# /etc/cron.daily/cleanup-recordings
# Delete recordings older than 30 days
find /var/recordings/ -name "*.mp4" -mtime +30 -delete

# Log the cleanup
echo "[$(date)] Cleanup completed" >> /var/log/recording-cleanup.log
bash
chmod +x /etc/cron.daily/cleanup-recordings

For more on automating batch operations, see Automating Video Batch Processing with FFmpeg and Python.

Production Considerations — systemd, Logging, Monitoring

systemd Service

Register the Node.js stream manager as a systemd service for automatic recovery on server restarts.

ini
# /etc/systemd/system/surveillance-dashboard.service
[Unit]
Description=Surveillance Dashboard Stream Manager
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=surveillance
Group=surveillance
WorkingDirectory=/opt/rtsp-hls-dashboard
ExecStart=/usr/bin/node server/index.mjs
Restart=always
RestartSec=10
Environment=NODE_ENV=production
EnvironmentFile=/opt/rtsp-hls-dashboard/.env

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/www/hls /var/recordings
ProtectHome=true

[Install]
WantedBy=multi-user.target
bash
sudo systemctl enable surveillance-dashboard
sudo systemctl start surveillance-dashboard
sudo systemctl status surveillance-dashboard

Log Management

FFmpeg produces verbose output. Use journald for management and filter for errors.

bash
# Stream errors in real time
journalctl -u surveillance-dashboard -f | grep -i error

# Last hour of logs
journalctl -u surveillance-dashboard --since "1 hour ago"

Health Checks

A simple script that verifies each camera's HLS playlist is being updated.

bash
#!/bin/bash
# /opt/rtsp-hls-dashboard/healthcheck.sh
CAMERAS=("cam01" "cam02" "cam03")
ALERT_THRESHOLD=30  # seconds

for cam in "${CAMERAS[@]}"; do
  playlist="/var/www/hls/${cam}/index.m3u8"
  if [ ! -f "$playlist" ]; then
    echo "[ALERT] ${cam}: playlist not found"
    continue
  fi

  age=$(( $(date +%s) - $(stat -c %Y "$playlist") ))
  if [ "$age" -gt "$ALERT_THRESHOLD" ]; then
    echo "[ALERT] ${cam}: playlist is ${age}s old (threshold: ${ALERT_THRESHOLD}s)"
  else
    echo "[OK] ${cam}: last updated ${age}s ago"
  fi
done

Run this via cron and integrate with email or Slack alerts for production monitoring.
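If you'd rather keep this check inside the Node.js process than in cron, the same freshness logic is only a few lines. A sketch (names are ours), with the pure rule split out so it's testable without touching the filesystem:

```javascript
import { stat } from "node:fs/promises";

// A playlist is stale when its mtime is older than thresholdSec —
// the same rule as the shell script above.
function isStale(mtimeMs, nowMs, thresholdSec = 30) {
  return (nowMs - mtimeMs) / 1000 > thresholdSec;
}

async function checkPlaylist(path, thresholdSec = 30) {
  try {
    const { mtimeMs } = await stat(path);
    return isStale(mtimeMs, Date.now(), thresholdSec) ? "stale" : "ok";
  } catch {
    return "missing"; // playlist not written yet, or stream never started
  }
}
```

Run `checkPlaylist` on a timer for each camera and feed the result into the same alerting path.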

For building a similar setup in the cloud, see Building an FFmpeg Encoding Server on a VPS.

Nginx Configuration for Serving HLS

The architecture diagram shows Nginx serving HLS segments, but you need specific MIME types and CORS headers for browsers to play the streams correctly.

nginx
# /etc/nginx/sites-available/surveillance
server {
    listen 8080;
    server_name localhost;

    location /hls/ {
        alias /var/www/hls/;

        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }

        add_header Cache-Control "no-cache, no-store";
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Methods "GET, OPTIONS";
    }

    location / {
        root /opt/rtsp-hls-dashboard/public;
        index index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:3001;
    }
}

Cache-Control: no-cache is essential for live streams — without it, browsers cache stale .m3u8 playlists and the video freezes. The proxy_pass directive forwards API requests to the Node.js stream manager.

FAQ

How many cameras can a single server handle?

With -c:v copy (no transcoding), a modest server (4-core CPU, 8 GB RAM) can handle 15-20 cameras. Each FFmpeg process uses about 50 MB of memory and minimal CPU. The bottleneck is usually network bandwidth — at 4-8 Mbps per 1080p camera, 20 cameras need 80-160 Mbps of sustained throughput.

What's the typical latency of this HLS-based setup?

Expect 4-8 seconds of latency with the configuration in this article (hls_time=2, hls_list_size=10, liveSyncDurationCount=3). HLS is inherently higher-latency than protocols like WebRTC or RTMP because it buffers multiple segments. For surveillance use cases, this delay is usually acceptable. If you need sub-second latency, consider LL-HLS or a WebRTC-based approach.
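The 4-8 second figure follows from segment math: the player holds roughly `liveSyncDurationCount` segments of `hls_time` seconds each, plus encoding and delivery overhead. A rough estimator (the one-second overhead default is our assumption, not a measurement):

```javascript
// Rough glass-to-glass latency estimate for segmented HLS. overheadSec
// covers muxing, HTTP delivery, and player slack — a coarse guess that
// varies by network and hardware.
function estimatedLatencySec(hlsTime, liveSyncDurationCount, overheadSec = 1) {
  return hlsTime * liveSyncDurationCount + overheadSec;
}

console.log(estimatedLatencySec(2, 3)); // 7 — within the 4-8 s range above
```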

Can I use any IP camera brand with this setup?

Yes — any camera that supports RTSP works. Hikvision, Dahua, Reolink, Amcrest, Axis, and ONVIF-compliant cameras all output standard RTSP streams. The only thing that varies is the RTSP URL path, which you can find in each manufacturer's documentation. Check our RTSP fundamentals guide for common URL patterns by brand.

How much storage does continuous recording require?

A 1080p H.264 camera at 4-8 Mbps generates approximately 1.8-3.6 GB per hour. For 10 cameras recording 24/7 for 30 days, budget 13-26 TB. Using H.265 cameras cuts storage by roughly 40%. The cron-based cleanup script in this article automatically deletes recordings older than 30 days.

What happens when a camera goes offline?

The StreamManager detects the FFmpeg process exit and automatically schedules a restart after 5 seconds. The health check script monitors playlist freshness — if a playlist hasn't been updated in 30 seconds, it raises an alert. On the frontend, the status indicator turns red, and hls.js retries the connection every 3 seconds.

Can I access the dashboard remotely over the internet?

Yes, but never expose it directly. Use a VPN like WireGuard or a reverse proxy with authentication (Nginx basic auth, Authelia, or Cloudflare Access). The dashboard has no built-in authentication, so securing the network layer is critical.

Do I need the Nginx RTMP module?

No. This setup doesn't use RTMP at all. FFmpeg reads RTSP directly from cameras and writes HLS segments as files. Nginx serves those files as plain static content. The nginx-rtmp-module is only needed if you're ingesting RTMP streams from OBS or similar tools.

How do I add or remove cameras without restarting the service?

The current implementation requires a restart to change the camera list. For hot-reload capability, extend the status API with POST /api/cameras endpoints that call manager.startStream() or manager.stopStream(). Store the camera configuration in a JSON file or database instead of config.mjs, and watch for changes with fs.watch().
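The start/stop decision for hot-reload is easiest to get right as a pure diff between the running set and the desired set; the API handlers then just act on the plan. A sketch (`reconcile` is our name, not part of the StreamManager):

```javascript
// Given the camera ids currently running and a freshly loaded camera list,
// return what to start and what to stop. The caller then invokes
// manager.startStream() / manager.stopStream() accordingly.
function reconcile(runningIds, cameras) {
  const wanted = new Set(cameras.map((cam) => cam.id));
  return {
    start: cameras.filter((cam) => !runningIds.includes(cam.id)),
    stop: runningIds.filter((id) => !wanted.has(id)),
  };
}

const plan = reconcile(["cam01", "cam02"], [{ id: "cam02" }, { id: "cam03" }]);
console.log(plan.start.map((cam) => cam.id)); // [ 'cam03' ]
console.log(plan.stop);                       // [ 'cam01' ]
```

Because the function is pure, the same code serves both a `POST /api/cameras` handler and an `fs.watch()`-driven reload.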

Wrapping Up

The system I built for the TV station ran stably with 8 cameras. Even at small scale, process isolation and auto-recovery proved their worth. Over on r/homelab, it's a common sentiment that "FFmpeg + HLS is more flexible than a VMS and scales better as you add cameras."

Here's what makes this system work:

  • Architecture: One FFmpeg process per camera. When one drops, the rest keep rolling
  • Stream management: Node.js handles spawning, stopping, and auto-restarting FFmpeg with a 5-second retry
  • Nginx: Serves HLS segments as static files with proper CORS and cache headers
  • Frontend: hls.js renders camera feeds in a responsive browser grid
  • Recording: -f tee for simultaneous HLS streaming and MP4 segment recording
  • Operations: systemd service + health checks for stable production operation

The source code for this system is available at GitHub: omitsu-dev/rtsp-hls-dashboard.

For enterprise deployment inquiries, get in touch.
