32blog by Studio Mitsu

fqmpeg Format Conversion & Streaming: GIF, HLS, DASH

Seven fqmpeg verbs for format work: convert containers, make GIFs, generate HLS/DASH for adaptive streaming, split segments, extract streams by index.

by omitsu · 18 min read

fqmpeg's C2 cluster is seven verbs that handle every "change the format" or "prepare it for delivery" task: convert swaps containers, gif and gif-to-video move between GIF and MP4, hls and dash produce adaptive-streaming manifests, segment slices a long file into fixed chunks, and extract-stream pulls a single video, audio, or subtitle track out by index. None of them are codec-decision verbs (that's C1); they answer the question: what shape does this file need to be in next?

This guide walks through each command — the FFmpeg flags it produces, defaults, output naming — then chains several of them into real delivery workflows. Everything below is verified against the source in src/commands/ of fqmpeg 3.0.1.

What you'll get out of this guide

  • Which of the 7 verbs to pick for which job (decision matrix)
  • Exact FFmpeg invocation each verb generates (verified --dry-run output)
  • Defaults, allowed values, and output filenames for every command
  • Three end-to-end recipes: HLS for a static-host CDN, GIF preview from a long edit, multi-track surgery on a single MKV

Format & Delivery: Which Verb for Which Job

The mental shortcut for this cluster splits along two axes — single file vs. fragmented output, and changing pixels vs. just changing the wrapper.

| Goal | Verb | What it does | Output |
|---|---|---|---|
| Change container, keep codec | convert --copy | Stream-copy remux | <name>.<ext> |
| Change container, re-encode | convert | Decode + re-encode | <name>.<ext> |
| Make a GIF preview | gif | Time-trim + palette + scale | <name>-gif.gif |
| Promote a GIF to MP4 | gif-to-video | Even-dim scale + yuv420p | <name>.mp4 |
| HLS adaptive playlist | hls | .m3u8 + .ts segments | <name>.m3u8 + _%03d.ts |
| MPEG-DASH manifest | dash | .mpd + DASH segments | <name>.mpd |
| Cut into fixed chunks | segment | Stream-copy -f segment | segment_%03d.<ext> |
| Pull one stream by index | extract-stream | -map 0:<spec> -c copy | <name>-stream-<spec>.<ext> |

Two things to internalize before reading on:

  1. Stream-copy vs. re-encode. convert --copy, segment, and extract-stream all use -c copy — they don't touch pixels. They finish in seconds because they're only rewriting the container. convert (without --copy), gif, gif-to-video, hls, and dash all re-encode and run at FFmpeg's normal encoding speed. When in doubt, try the stream-copy form first.
  2. Single file vs. segmented output. hls, dash, and segment produce many files in the working directory: a manifest plus N media segments. Run them inside a dedicated output directory, or you'll dump dozens of .ts chunks next to your source.

For the broader codec / quality side of these decisions, the video compression guide is the companion piece. For an end-to-end HLS pipeline that takes you from fqmpeg hls to a CDN, see the HLS CDN streaming guide.

Container Conversion

convert — Switch container (mp4 ↔ mov ↔ mkv ↔ webm ↔ mp3 ...)

Renames the wrapper. By default FFmpeg picks codecs to match the new container; with --copy it stream-copies the existing audio/video tracks unchanged.

  • Source: src/commands/convert.js
  • Default: decode + re-encode using FFmpeg's container-default codecs
  • --copy: add -c copy to remux without touching pixels (fast, lossless, but only valid when the source codec is legal for the target container)
| Argument / Option | Default | Notes |
|---|---|---|
| <format> (positional) | required | Target extension. Leading dot is stripped (.mov and mov both work) |
| --copy | off | Stream-copy mode — no re-encoding |
| -o, --output <path> | <input-stem>.<format> | Override output path |
bash
$ npx fqmpeg convert input.mp4 mov --dry-run

  ffmpeg -i input.mp4 input.mov

$ npx fqmpeg convert input.mp4 mov --copy --dry-run

  ffmpeg -i input.mp4 -c copy input.mov

When --copy works, it finishes in under a second on multi-GB files because no decoding happens. The catch is codec/container compatibility:

  • MP4 ↔ MOV: usually both copy fine (H.264 + AAC is legal in both)
  • MP4 ↔ MKV: copy works for almost anything (MKV is permissive)
  • WebM: requires VP8/VP9/AV1 video + Vorbis/Opus audio — copying H.264 into .webm will fail. Use encode-vp9 from C1 instead

If convert --copy errors with Could not find tag for codec ... in stream, the source codec isn't legal for the target container. Drop --copy to fall back to re-encoding.
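The compatibility rules above can be sketched as a small lookup. The allow-lists below are illustrative and deliberately simplified (real FFmpeg muxers accept more codecs than this), but they capture the cases the bullets describe:

```python
# Illustrative allow-lists: which video codecs can be stream-copied
# into which containers. Simplified; real muxers accept more.
COPY_COMPAT = {
    "mp4":  {"h264", "hevc", "av1", "mpeg4"},
    "mov":  {"h264", "hevc", "prores", "mpeg4"},
    "mkv":  {"h264", "hevc", "av1", "vp8", "vp9", "mpeg4", "prores"},
    "webm": {"vp8", "vp9", "av1"},
}

def can_stream_copy(video_codec: str, target_ext: str) -> bool:
    """Return True when `convert --copy` is likely to succeed."""
    target = target_ext.lstrip(".").lower()
    return video_codec.lower() in COPY_COMPAT.get(target, set())

print(can_stream_copy("h264", "mkv"))   # True: MKV is permissive
print(can_stream_copy("h264", "webm"))  # False: WebM wants VP8/VP9/AV1
```

When the function returns False, drop --copy and let the re-encode path pick container-legal codecs.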

GIF I/O

gif — Make a GIF from a video clip

Takes a time slice of a video and writes a high-quality GIF. The default 480 px × 15 fps × 5 s is calibrated for embed-in-readme territory — small enough for a 4-MB README budget, big enough to read.

  • Source: src/commands/gif.js
  • Quality trick: uses the two-pass palettegen / paletteuse filter chain. This generates an optimal 256-color palette from the clip and applies it on the second pass. Result is dramatically better than naive GIF encoding (no banding, no dithering noise)
  • Scaler: scale=<width>:-1:flags=lanczos — height is auto, lanczos is the high-quality upsampler/downsampler
  • Loop: -loop 0 (infinite)
| Option | Default | Range / Format | Notes |
|---|---|---|---|
| -s, --start <sec> | 0 | non-negative number | Start offset in seconds |
| -d, --duration <sec> | 5 | positive number | Length of the GIF |
| --fps <n> | 15 | positive integer | Higher = smoother but bigger file |
| --width <px> | 480 | positive integer | Height auto-derived to keep aspect |

Default output filename: <input-stem>-gif.gif.

bash
$ npx fqmpeg gif input.mp4 --dry-run

  ffmpeg -ss 0 -t 5 -i input.mp4 -vf fps=15,scale=480:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse -loop 0 input-gif.gif

# Custom: start at 10s, 3-second clip, 20 fps, 600 px wide
$ npx fqmpeg gif input.mp4 --start 10 --duration 3 --fps 20 --width 600 --dry-run

  ffmpeg -ss 10 -t 3 -i input.mp4 -vf fps=20,scale=600:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse -loop 0 input-gif.gif
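When tuning these options against a size budget, a rough back-of-envelope model helps: GIF size grows roughly linearly with frame count (fps × duration) and with pixel area (width squared at fixed aspect). This is an assumption about typical content, not a measurement — palette complexity matters too — but it predicts the direction and rough magnitude of each knob:

```python
def gif_frame_count(duration_s: float, fps: int) -> int:
    """Total frames in the output GIF."""
    return round(duration_s * fps)

def relative_size(width, fps, duration, base=(480, 15, 5)):
    """Rough size multiplier vs. the defaults, assuming size scales
    with pixel area (width^2) and frame count (fps * duration)."""
    base_width, base_fps, base_dur = base
    return (width / base_width) ** 2 * (fps / base_fps) * (duration / base_dur)

print(gif_frame_count(5, 15))                 # 75 frames at the defaults
print(round(relative_size(600, 20, 3), 2))    # 1.25x the default's size
```

So the "custom" invocation above (600 px, 20 fps, 3 s) lands around 1.25× the default's size: the shorter duration nearly pays for the wider, smoother output.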

gif-to-video — Promote a GIF to MP4

The reverse operation. Converts a GIF to an H.264 MP4 — much smaller, playable in <video> tags everywhere, and capable of carrying an audio track (though a GIF source has none).

  • Source: src/commands/gif-to-video.js
  • Pixel format: -pix_fmt yuv420p (required for broad browser/codec compatibility — most decoders reject 8-bit RGB MP4)
  • Even-dimension fix: -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 — H.264 requires even width/height; this rounds down to the nearest even number when needed
  • Faststart: -movflags faststart so the file starts streaming before fully buffered
  • Loop: uses -stream_loop <n> to repeat the source n extra times, so the resulting MP4 plays n+1 times (--loop 0 means a single play)
| Option | Default | Range | Notes |
|---|---|---|---|
| --loop <n> | 0 | non-negative integer | 0 = single play; 3 = source plays 4 times |

Default output filename: <input-stem>.mp4.

bash
$ npx fqmpeg gif-to-video input.gif --dry-run

  ffmpeg -i input.gif -movflags faststart -pix_fmt yuv420p -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 input.mp4

# Loop the GIF 4 times in the resulting MP4
$ npx fqmpeg gif-to-video input.gif --loop 3 --dry-run

  ffmpeg -stream_loop 3 -i input.gif -movflags faststart -pix_fmt yuv420p -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 input.mp4

The most common use is shrinking a GIF that's grown too large — a 12-MB tutorial GIF often becomes a 600-KB MP4 with no perceptible quality loss.

Adaptive Streaming

These two verbs produce the kind of output you'd serve from a CDN behind a <video> tag with hls.js or dash.js.

hls — HLS playlist + segments

HTTP Live Streaming. Apple's adaptive-streaming format, native on Safari and iOS, supported via hls.js on every other browser. Output is a .m3u8 playlist plus a sequence of MPEG-TS (.ts) segments.

  • Source: src/commands/hls.js
  • Video codec: libx264 (forced — .ts containers and .m3u8 players have the strongest H.264 support)
  • Audio codec: aac
  • -hls_list_size 0: keep every segment in the playlist (VOD mode). For a live rolling window, you'd lower this — but VOD is the common case
  • Segment filename pattern: <playlist-stem>_%03d.ts next to the playlist
| Option | Default | Range | Notes |
|---|---|---|---|
| --segment <sec> | 6 | positive number | Segment duration. Apple recommends 6 s for VOD; 2–4 s for low-latency |
| -o, --output <path> | <input-stem>.m3u8 | path | Sets both the playlist path and the segment-file directory |
bash
$ npx fqmpeg hls input.mp4 --dry-run

  ffmpeg -i input.mp4 -c:v libx264 -c:a aac -hls_time 6 -hls_list_size 0 -hls_segment_filename input_%03d.ts input.m3u8

The output is a single rung of the rendition ladder: hls doesn't generate the multi-bitrate variant playlist that real adaptive streaming needs. For that, you encode the source at 3–5 different bitrates (use bitrate from C1 for each) and stitch the resulting .m3u8 files into a master playlist by hand. The HLS CDN streaming guide walks through that whole pipeline including R2 / S3 deployment.
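A hand-stitched master playlist is short. This is a minimal sketch with illustrative bitrates, resolutions, and directory names — substitute the renditions you actually encoded:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/stream.m3u8
```

Point hls.js at this file instead of a single rendition's playlist and the player handles bitrate switching on its own.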

dash — MPEG-DASH manifest

The W3C-standard adaptive-streaming format. Output is a .mpd manifest plus a set of fragmented MP4 segments. DASH plays via dash.js on most browsers; Safari favors HLS natively, and DASH playback there relies on Media Source Extensions (available in macOS Safari, historically limited on iOS), so HLS is the safer bet for Apple devices.

  • Source: src/commands/dash.js
  • Video / audio: libx264 + aac (same defaults as HLS)
  • -use_timeline 1 -use_template 1: generates a SegmentTemplate-based manifest, which is the modern, smaller format favored over SegmentList for VOD
| Option | Default | Notes |
|---|---|---|
| --segment <sec> | 4 | DASH defaults shorter than HLS (4 s vs. 6 s) — DASH players tolerate smaller segments better |
| -o, --output <path> | <input-stem>.mpd | |
bash
$ npx fqmpeg dash input.mp4 --dry-run

  ffmpeg -i input.mp4 -c:v libx264 -c:a aac -f dash -seg_duration 4 -use_timeline 1 -use_template 1 input.mpd

Same caveat as HLS — single-rendition output. For multi-bitrate DASH ladders, run bitrate at several targets and combine the manifests, or use FFmpeg's adaptation_sets argument directly via --dry-run and a manual edit.

For a quick sanity check on which to pick: ship HLS if your audience is iOS-heavy or Safari-heavy; ship DASH if you're targeting living-room devices, Android TV, or anything that follows the W3C path. Most production stacks ship both.

Splitting & Stream Surgery

segment — Slice a file into fixed-duration chunks

Splits the input into N pieces of a fixed duration each, stream-copying so it's near-instantaneous. The output filenames follow a printf-style numbered pattern.

  • Source: src/commands/segment.js
  • Mode: -f segment -c copy (no re-encoding)
  • -reset_timestamps 1: each segment's PTS resets to 0, so segments play standalone
  • Container: preserves the input extension (segments of input.mp4 are .mp4; segments of input.mkv are .mkv)
| Argument / Option | Default | Notes |
|---|---|---|
| <duration> (positional) | required | Segment length in seconds |
| -o, --output <pattern> | segment_%03d.<input-ext> | printf pattern — %03d becomes 000, 001, … |
bash
$ npx fqmpeg segment input.mp4 10 --dry-run

  ffmpeg -i input.mp4 -c copy -f segment -segment_time 10 -reset_timestamps 1 segment_%03d.mp4

A few practical notes:

  • Cuts at keyframes, not exact seconds. Stream-copy mode can only split on existing GOP boundaries. If your source has a 5-second keyframe interval, a 10-second segment may end at 9.7 s or 10.3 s. For frame-accurate cuts, re-encode (drop -c copy from the printed command) or shorten the source's GOP first
  • Not the same as HLS segments. HLS produces a playlist alongside chunks; segment just produces chunks. For HLS, use hls
  • The default output pattern dumps files in CWD, not next to the input. Run inside an output directory or pass -o /path/to/dir/seg_%03d.mp4
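Before running a long split, it can help to predict what will land on disk. A small sketch, assuming FFmpeg's segment muxer default of numbering from 000 (keyframe drift means real durations wobble, but the file count and names hold):

```python
import math

def segment_plan(total_s: float, seg_s: float,
                 pattern: str = "segment_%03d", ext: str = "mp4"):
    """Predict the files `fqmpeg segment` will emit for a given
    source duration, assuming numbering starts at 000."""
    count = math.ceil(total_s / seg_s)
    return [f"{pattern % i}.{ext}" for i in range(count)]

# A 95-second mp4 split into 30-second chunks:
print(segment_plan(95, 30))
# ['segment_000.mp4', 'segment_001.mp4', 'segment_002.mp4', 'segment_003.mp4']
```

Get the source duration from info or ffprobe first; the last segment holds whatever remainder is left over.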

extract-stream — Pull one stream out by index

Extracts a single video, audio, or subtitle track from a multi-stream file. Stream-copy only — fast and lossless. The argument format follows FFmpeg's stream specifier syntax: v:0 (first video), a:1 (second audio), s:0 (first subtitle).

  • Source: src/commands/extract-stream.js
  • Mapping: -map 0:<spec> -c copy
  • Output extension: chosen from the stream type — v:*.mp4, a:*.aac, s:*.srt. If the actual codec doesn't match (e.g., the audio is Opus, not AAC), pass --output to override
| Argument | Notes |
|---|---|
| <input> | Input file |
| <stream> | Stream specifier: v:N, a:N, or s:N |

Default output filenames:

  • v:0 → <input-stem>-stream-v0.mp4
  • a:1 → <input-stem>-stream-a1.aac
  • s:0 → <input-stem>-stream-s0.srt
bash
$ npx fqmpeg extract-stream input.mkv v:0 --dry-run

  ffmpeg -i input.mkv -map 0:v:0 -c copy input-stream-v0.mp4

$ npx fqmpeg extract-stream input.mkv a:1 --dry-run

  ffmpeg -i input.mkv -map 0:a:1 -c copy input-stream-a1.aac

$ npx fqmpeg extract-stream input.mkv s:0 --dry-run

  ffmpeg -i input.mkv -map 0:s:0 -c copy input-stream-s0.srt

To know which streams exist in a file, use info (one of the four inspection verbs from the hub article) or ffprobe -v error -show_streams input.mkv directly. The output gives you the index map: v:0, v:1, a:0, a:1, etc.
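The mapping from ffprobe's flat stream list to the per-type v:N / a:N / s:N specifiers is easy to get wrong by hand on files with many tracks. A sketch of that bookkeeping, using a hypothetical trimmed ffprobe JSON payload (the real command is ffprobe -v error -show_streams -of json input.mkv):

```python
import json

# Hypothetical ffprobe output, trimmed to the fields that matter.
PROBE = json.loads("""{"streams": [
  {"index": 0, "codec_type": "video",    "codec_name": "h264"},
  {"index": 1, "codec_type": "audio",    "codec_name": "aac"},
  {"index": 2, "codec_type": "audio",    "codec_name": "opus"},
  {"index": 3, "codec_type": "subtitle", "codec_name": "subrip"}
]}""")

def specifiers(probe: dict) -> dict:
    """Map each stream to the v:N / a:N / s:N form extract-stream expects.
    Indices count per type, not globally."""
    counters = {"video": 0, "audio": 0, "subtitle": 0}
    out = {}
    for s in probe["streams"]:
        kind = s["codec_type"]
        out[f"{kind[0]}:{counters[kind]}"] = s["codec_name"]
        counters[kind] += 1
    return out

print(specifiers(PROBE))
# {'v:0': 'h264', 'a:0': 'aac', 'a:1': 'opus', 's:0': 'subrip'}
```

The key detail: a:1 here is the file's global stream index 2, because specifiers count within each type.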

Real-World Recipes

Each recipe chains multiple verbs (across clusters) into a real workflow.

Recipe 1: Static-host HLS pipeline

You want to ship a 30-minute video from a static host (Cloudflare R2, S3 + CloudFront, Vercel) and play it with hls.js in a <video> tag. Single rendition is fine for most cases.

bash
# Step 1: compress to a sensible delivery bitrate
npx fqmpeg compress source.mov --crf 22 --preset slow
# → source-compressed.mp4

# Step 2: package as HLS in a clean output folder
mkdir hls-out
npx fqmpeg hls source-compressed.mp4 -o hls-out/stream.m3u8
# → hls-out/stream.m3u8 + hls-out/stream_001.ts ... stream_NNN.ts

Upload hls-out/ to the static host with public read. In your page:

html
<video controls></video>
<script type="module">
  import Hls from "https://cdn.jsdelivr.net/npm/hls.js@latest/+esm";
  const hls = new Hls();
  hls.loadSource("https://cdn.example.com/hls-out/stream.m3u8");
  hls.attachMedia(document.querySelector("video"));
</script>

For a multi-bitrate ladder and CDN cache settings, the HLS CDN streaming guide covers the production setup including byte-range requests and segment cache headers.

Recipe 2: GIF preview from a long edit

You exported a 4-minute screencast as MP4 and need a GIF for the README hero. The hero needs to be ~2 MB max and ~600 px wide.

bash
# Step 1: pick a representative 5-second window
# Visually scrub to a moment around 1:30 in the edit
# Step 2: extract that window as a GIF
npx fqmpeg gif screencast.mp4 --start 90 --duration 5 --width 600 --fps 12
# → screencast-gif.gif

Why --fps 12 instead of the default 15? Because for talking-head or cursor-movement screencasts, the perceived smoothness from 12 fps is fine and the file is 20 % smaller. If the GIF is still too big, drop --fps to 10 or --width to 480.

If the resulting .gif is over budget, the answer is rarely "tweak GIF settings further." Convert to MP4 with compress and serve a <video autoplay loop muted playsinline> instead — same visual, ~10× smaller, sharper.

Recipe 3: Multi-track MKV surgery

You have an .mkv from a Blu-ray rip with three audio tracks (Japanese, English, English commentary) and two subtitle tracks (English, English SDH). You want a clean MP4 with just the Japanese audio and English subs as a sidecar .srt.

bash
# Step 1: probe the streams (use the hub's info verb, or ffprobe directly)
ffprobe -v error -show_streams source.mkv | grep -E "index=|codec_type=|TAG:language"
# Suppose: v:0 H.264, a:0 jpn, a:1 eng, a:2 eng-comm, s:0 eng, s:1 eng-sdh

# Step 2: extract the Japanese audio
npx fqmpeg extract-stream source.mkv a:0 -o ja.aac

# Step 3: extract the English subtitle
npx fqmpeg extract-stream source.mkv s:0 -o eng.srt

# Step 4: extract the video
npx fqmpeg extract-stream source.mkv v:0 -o video.mp4

# Step 5: stitch video + Japanese audio into a single MP4
# (this part falls outside C2 — uses the underlying ffmpeg)
ffmpeg -i video.mp4 -i ja.aac -c copy -map 0:v -map 1:a final.mp4

The MKV → "video + chosen audio + sidecar sub" pattern is the core of "make this Blu-ray rip play on my phone." extract-stream does the unbundling without re-encoding, so even a 4-GB file is taken apart in seconds. The final stitch is a normal FFmpeg invocation; preview it with --dry-run on whichever fqmpeg verb comes closest (e.g., convert with custom flags), or write the FFmpeg command directly.

Frequently Asked Questions

Why does convert --copy sometimes fail with "Could not find tag for codec"?

Because the source codec isn't legal in the target container. WebM only accepts VP8/VP9/AV1 video and Vorbis/Opus audio — copying H.264 into .webm will refuse. MP4 accepts H.264/H.265/AV1 video but not VP9. Drop --copy to fall back to re-encoding, or pick a more permissive container like .mkv.

Does hls produce a multi-bitrate adaptive ladder?

No — it produces a single rendition (.m3u8 + chunks). Multi-bitrate adaptive streaming requires encoding the source at 2–5 different bitrates (use bitrate from the compression cluster for each), then writing a master playlist that references each rendition's .m3u8. The HLS CDN streaming guide shows the master-playlist format.

Should I ship HLS or DASH?

HLS if your audience skews iOS / Safari / Apple TV — Apple's ecosystem treats HLS as native and DASH as second-class. DASH if you're targeting Android TV, smart TVs running W3C-standard players, or strictly W3C-conformant pipelines. Most large streaming services ship both, multiplexed at request time. For small projects, HLS-only covers ~95 % of viewers thanks to hls.js.

Why does segment cut at slightly off-by-the-second boundaries?

segment runs in stream-copy mode, which can only split at existing keyframes (GOP boundaries). The cut lands on the first keyframe at or after each requested boundary, so with a 5-second GOP a "10-second" segment can run as long as roughly 15 s. For frame-accurate cuts, re-encode by editing the printed --dry-run command to drop -c copy, or shorten the source's keyframe interval first with -g 30 -keyint_min 30 on the encoder.

Why does the default GIF look so much better than ffmpeg -i in.mp4 out.gif?

The naive one-liner uses GIF's native palette (a fixed 256 colors not derived from your clip), which produces banding and dithering noise. gif uses FFmpeg's two-pass palettegen / paletteuse filter to compute an optimal palette from the actual frames of your clip. The same trick is buried in dozens of "high quality GIF in FFmpeg" Stack Overflow posts; gif saves you from typing it.

Can I make extract-stream write to a custom extension?

Yes — pass -o output.<ext>. The default extension assumes AAC/SRT/MP4, which is right for most files but wrong for, say, Opus audio in an MKV (extract-stream input.mkv a:0 -o audio.opus). The -c copy flag means whatever bytes were inside the source stream get written verbatim to the new file, so the extension just needs to match the actual codec.
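Choosing that override extension is mechanical once you know the stream's codec_name from ffprobe. A sketch with an illustrative (not exhaustive) codec-to-extension map — the function name and fallback choice are assumptions for this example, not fqmpeg internals:

```python
# Illustrative codec -> sidecar extension map for stream-copy extraction.
EXT_FOR_CODEC = {
    "aac": "aac", "mp3": "mp3", "opus": "opus", "flac": "flac",
    "h264": "mp4", "hevc": "mp4", "vp9": "webm",
    "subrip": "srt", "ass": "ass",
}

def output_name(stem: str, spec: str, codec_name: str) -> str:
    """Build an extract-stream output path whose extension matches
    the actual codec; fall back to MKV, which wraps almost anything."""
    ext = EXT_FOR_CODEC.get(codec_name, "mkv")
    return f"{stem}-stream-{spec.replace(':', '')}.{ext}"

print(output_name("input", "a:0", "opus"))  # input-stream-a0.opus
print(output_name("input", "v:0", "h264"))  # input-stream-v0.mp4
```

Feed the result to -o and the stream-copied bytes land in a container that actually matches them.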

Can I batch-segment a folder of files?

Yes — fqmpeg is shell-friendly. The standard pattern:

bash
for f in *.mp4; do
  mkdir -p "segments/${f%.mp4}"
  ( cd "segments/${f%.mp4}" && npx fqmpeg segment "../../$f" 60 )
done

Each segment invocation runs in its own subdirectory so chunks don't collide. The outer subshell ( ... ) keeps the cd from leaking into the next iteration.

Is HLS Low-Latency (LL-HLS) supported?

Not via a dedicated flag — hls produces standard HLS. For LL-HLS you'd want --segment 1 or 2, plus the underlying FFmpeg's -hls_flags +program_date_time -hls_segment_type fmp4 and partial-segment options. Run --dry-run, copy the printed command, and add the LL-HLS flags manually. Apple's HTTP Live Streaming spec is the authoritative reference for the partial-segment and rendition-report fields.

Wrapping Up

The seven C2 verbs cover the format-shape decisions you'll make once the codec is settled:

  • convert for "same content, different wrapper" (with --copy for instant remux)
  • gif / gif-to-video for the GIF round-trip
  • hls / dash for adaptive-streaming manifests
  • segment for fixed-duration chunking
  • extract-stream for surgical track removal

Every verb prints its underlying FFmpeg invocation under --dry-run, so you can copy it verbatim, adapt it (LL-HLS, custom segment patterns, NVENC swap), or learn the syntax behind the verb. For the broader fqmpeg map, return to the fqmpeg complete guide. For codec-side decisions, see the compression & encoding deep dive.