fqmpeg's C5 cluster is fourteen verbs that change a video's geometry — its dimensions (resize, scale2x), its visible region (crop, crop-detect), its canvas (pad, aspect, backdrop), its orientation (rotate, transpose, mirror), its motion framing (zoom), or its field structure (deinterlace, interlace, field-order). These are the "make the picture the right shape" operations you reach for between capture and publish.
This guide walks through each verb — the FFmpeg filter it generates, the defaults, the output naming, and the gotchas that come from the underlying filter (the -2 even-dimension trick in scale, the force_original_aspect_ratio chain in pad, the hardcoded 1920×1080 in zoom). Everything below is verified against the source in src/commands/ of fqmpeg 3.0.2.
What you'll get out of this guide
- A decision matrix for the 14 verbs by task (scale / crop / pad / rotate / motion / interlace)
- Exact FFmpeg invocation each verb generates (verified `--dry-run` output)
- Defaults, allowed values, and output filenames for every command
- Three end-to-end recipes including a vertical-phone-to-YouTube pipeline
The 14 Verbs at a Glance
The cluster splits into six task groups. Pick the group, then the verb.
| Group | Verbs | What they do |
|---|---|---|
| Resize & upscale | resize, scale2x | Change pixel dimensions, or 2× upscale with a pixel-art-aware scaler |
| Crop | crop, crop-detect | Cut a sub-region; or detect black-bar borders for an automated crop |
| Pad & aspect | pad, aspect, backdrop | Letterbox/pillarbox to a target canvas, change DAR, or fill a 16:9 canvas with a blurred background |
| Rotate & mirror | rotate, transpose, mirror | Rotate 90/180/270, flip horizontally/vertically, or split-and-mirror |
| Motion | zoom | Ken Burns style zoom-in/zoom-out at 1920×1080@30 |
| Interlacing | deinterlace, interlace, field-order | Remove combing, add interlacing, or set the field order tag |
Three things to know before reading on:
- `resize` uses `-2` for the auto-scaled dimension. Pass `-w 1280` and the height becomes `-2`, which tells FFmpeg "preserve aspect, but snap to a multiple of 2." Most codecs (x264, x265, VP9) require even dimensions, so `-2` saves you from `width not divisible by 2` errors that appear when you'd naively pass `-1`.
- `pad` scales first, then pads. The filter is `scale=W:H:force_original_aspect_ratio=decrease,pad=W:H:...`. So `pad input.mp4 1920x1080` on a 1280×720 source first scales up to 1920×1080-fit, then pads — it's not a pure pad. For a literal pad without scaling, use FFmpeg's `pad` filter directly via `--dry-run` editing.
- `zoom` always outputs 1920×1080 at 30 fps. The filter is hardcoded to `s=1920x1080:fps=30` — there's no option to override. If you need a different output size or frame rate, copy the `--dry-run` output and edit those values before running FFmpeg directly.
Resize & Upscale
resize — Scale to a target width or height
The bread-and-butter scaler. Pass -w for a target width (height auto-scales) or -h for a target height (width auto-scales). The auto-side becomes -2 so the output snaps to even dimensions.
- Source: `src/commands/resize.js`
- Filter: `scale=W:-2` or `scale=-2:H`
- One of `-w` or `-h` is required — otherwise the command exits with `Error: specify --width (-w) or --height (-h)`.
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `-w, --width <px>` | — | Target width; height auto-scales |
| `-h, --height <px>` | — | Target height; width auto-scales |
| `-o, --output <path>` | `<input-stem>-<N>w.<ext>` or `-<N>h.<ext>` | Override output |
$ npx fqmpeg resize input.mp4 -w 1280 --dry-run
ffmpeg -i input.mp4 -vf scale=1280:-2 -c:a copy input-1280w.mp4
$ npx fqmpeg resize input.mp4 -h 720 --dry-run
ffmpeg -i input.mp4 -vf scale=-2:720 -c:a copy input-720h.mp4
If you need both dimensions fixed (e.g. exactly 1280×720 regardless of source aspect), use pad or aspect --mode stretch instead — resize deliberately preserves aspect.
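If you want to know in advance what height `-2` will pick, the arithmetic is easy to reproduce in shell. A sketch of FFmpeg's behavior (close approximation of its rounding), using assumed example dimensions, not fqmpeg code:

```shell
# Predict what `resize -w 1280` will pick for the height on a 1920x804 source.
src_w=1920; src_h=804; target_w=1280

h=$(( (src_h * target_w + src_w / 2) / src_w ))  # proportional height, rounded
h=$(( (h + 1) / 2 * 2 ))                         # snap to a multiple of 2

echo "scale=${target_w}:-2 -> ${target_w}x${h}"
```

Here 804 × 1280 / 1920 = 536.5, which snaps to an even 536, so the output is 1280×536.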
scale2x — 2× upscale with super2xsai
Doubles the pixel dimensions using FFmpeg's super2xsai filter, originally designed for pixel-art and low-resolution game footage. It's edge-preserving in a way that bilinear/bicubic scaling isn't, which makes it useful for retro game captures, sprite sheets, or any source with hard edges. For photographic content, resize -w <2x> with default Lanczos scaling looks better.
- Source: `src/commands/scale2x.js`
- Filter: `super2xsai`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `-o, --output <path>` | `<input-stem>-2x.<ext>` | Override output |
$ npx fqmpeg scale2x input.mp4 --dry-run
ffmpeg -i input.mp4 -vf super2xsai -c:a copy input-2x.mp4
super2xsai is fixed at 2×; for 4× (or 8×), chain it — scale2x, then scale2x again on the output. Modern AI upscalers (Real-ESRGAN, Topaz Video AI) beat super2xsai on detail recovery, but super2xsai runs in real time on a CPU where AI upscalers need a GPU.
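A chained 4× pass can be scripted against scale2x's default output naming; the filename here is an assumption, and the commands are echoed so you can inspect the chain before running it:

```shell
# Chain scale2x twice for a 4x upscale (super2xsai is fixed at 2x per pass).
# Drop the `echo` to actually run the commands.
in="sprite.mp4"
for pass in 1 2; do
  stem="${in%.*}"; ext="${in##*.}"
  out="${stem}-2x.${ext}"   # scale2x's default output name
  echo npx fqmpeg scale2x "$in" -o "$out"
  in="$out"                 # each pass consumes the previous output
done
echo "final output: $in"
```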
Crop
crop — Cut a rectangular region
Crops a fixed-size rectangle from the source. Default position is centered; pass --pos x:y to override the top-left corner.
- Source: `src/commands/crop.js`
- Filter: `crop=W:H:X:Y`
- `<size>` format: `WxH` (e.g. `1280x720`)
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `<size>` | required | Crop size as `WxH` |
| `--pos <x:y>` | center | Top-left corner; center calculates `(in_w-W)/2:(in_h-H)/2` |
| `-o, --output <path>` | `<input-stem>-cropWxH.<ext>` | Override output |
$ npx fqmpeg crop input.mp4 1280x720 --dry-run
ffmpeg -i input.mp4 -vf crop=1280:720:(in_w-1280)/2:(in_h-720)/2 -c:a copy input-crop1280x720.mp4
$ npx fqmpeg crop input.mp4 800x600 --pos 100:50 --dry-run
ffmpeg -i input.mp4 -vf crop=800:600:100:50 -c:a copy input-crop800x600.mp4
--pos only accepts numeric x:y. For "top-right with 20px margin from each edge" or similar relative positions, edit the --dry-run output to use FFmpeg expressions like crop=W:H:in_w-W-20:20.
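If you'd rather keep fqmpeg's numeric `--pos` than edit filter expressions, relative placements can be computed in shell first. A sketch assuming the source dimensions are already known (e.g. from ffprobe):

```shell
# Compute --pos for "top-right corner, 20px margin" on a known-size source.
in_w=1920; in_h=1080     # source dimensions (assumed known)
crop_w=800; crop_h=600   # crop size
margin=20

x=$(( in_w - crop_w - margin ))  # offset from the left edge
y=$(( margin ))                  # offset from the top edge

echo "npx fqmpeg crop input.mp4 ${crop_w}x${crop_h} --pos ${x}:${y}"
```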
crop-detect — Find optimal crop area
Doesn't write a file. Runs FFmpeg's cropdetect filter against the source and prints suggested crop coordinates to stderr. Useful for stripping black bars from letterboxed or pillarboxed downloads — pipe the output into a crop invocation.
- Source: `src/commands/crop-detect.js`
- Filter: `cropdetect=limit=N:round=2:reset=0` to the `null` muxer
- `--limit`: 0–255, default `24`. Pixels darker than this are treated as "black border."
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `--limit <n>` | 24 | Black threshold (0–255). Higher = more aggressive cropping |
$ npx fqmpeg crop-detect input.mp4 --dry-run
ffmpeg -i input.mp4 -vf cropdetect=limit=24:round=2:reset=0 -f null -
The actual run prints lines like [Parsed_cropdetect_0 @ ...] x1:0 x2:1919 y1:140 y2:939 w:1920 h:800 x:0 y:140 crop=1920:800:0:140 — the last token is what you copy into crop --pos. If a noisy source isn't getting detected, raise --limit to 30–40.
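The `crop=` token can be pulled out mechanically. This sketch parses a sample log line; in a real run, pipe the command's stderr through the same `grep`/`sed`:

```shell
# Turn a cropdetect log line into arguments for the crop verb.
log='[Parsed_cropdetect_0 @ 0x55] x1:0 x2:1919 y1:140 y2:939 w:1920 h:800 x:0 y:140 crop=1920:800:0:140'

spec=$(printf '%s\n' "$log" | grep -o 'crop=[0-9:]*' | tail -n 1)
wh=$(printf '%s' "$spec" | sed 's/crop=\([0-9]*\):\([0-9]*\):.*/\1x\2/')   # WxH
xy=$(printf '%s' "$spec" | sed 's/crop=[0-9]*:[0-9]*:\(.*\)/\1/')          # x:y

echo "npx fqmpeg crop movie.mp4 $wh --pos $xy"
```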
Pad & Aspect
pad — Letterbox or pillarbox to a target canvas
Scales the source to fit inside WxH while preserving aspect, then pads the leftover space with a solid color. The filter chain is scale=W:H:force_original_aspect_ratio=decrease,pad=W:H:(ow-iw)/2:(oh-ih)/2:color=<color> — meaning the source is shrunk to fit, never upscaled past the canvas, then centered and padded.
- Source: `src/commands/pad.js`
- Filter: `scale=...:force_original_aspect_ratio=decrease,pad=...:color=<color>`
- `--color` format: any FFmpeg color name or hex (`black`, `white`, `0x808080`)
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `<size>` | required | Target canvas size as `WxH` |
| `--color <hex>` | black | Padding color |
| `-o, --output <path>` | `<input-stem>-padWxH.<ext>` | Override output |
$ npx fqmpeg pad input.mp4 1920x1080 --dry-run
ffmpeg -i input.mp4 -vf scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:color=black -c:a copy input-pad1920x1080.mp4
$ npx fqmpeg pad input.mp4 1920x1080 --color white --dry-run
ffmpeg -i input.mp4 -vf scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:color=white -c:a copy input-pad1920x1080.mp4
force_original_aspect_ratio=decrease matters: it shrinks the source so the larger of the two scaled dimensions equals the target, then the smaller dimension is padded. So a 1280×720 source padded to 1920×1080 ends up as 1920×1080 with no padding (it scales up to fit), while a 720×1280 vertical source ends up as 608×1080 video centered in a 1920×1080 black canvas — pillarbox bars on the sides.
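The fitted size is min-ratio scaling, which you can predict in shell. A sketch (close approximation of FFmpeg's rounding) using the 720×1280 vertical example from above:

```shell
# Predict the fit under force_original_aspect_ratio=decrease:
# scale by min(W/iw, H/ih) so both dimensions fit inside the canvas.
iw=720; ih=1280   # source (vertical phone clip)
W=1920; H=1080    # target canvas

if [ $(( W * ih )) -le $(( H * iw )) ]; then   # width is the limiting side
  fit_w=$W; fit_h=$(( (ih * W + iw / 2) / iw ))
else                                           # height is the limiting side
  fit_h=$H; fit_w=$(( (iw * H + ih / 2) / ih ))
fi

echo "scaled to ${fit_w}x${fit_h}, padded to ${W}x${H}"
```

For this source the height is the limiting side, so the clip lands at 608×1080 inside the 1920×1080 canvas, matching the pillarbox result described above.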
aspect — Change display aspect ratio
Three modes for forcing a target aspect ratio: pad (letterbox), crop (cut to fit), or stretch (squash/stretch the picture).
- Source: `src/commands/aspect.js`
- `<ratio>` format: `W:H` (e.g. `16:9`, `4:3`, `1:1`, `21:9`)
- `--mode`: `pad` (default) / `crop` / `stretch`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `<ratio>` | required | Target aspect as `W:H` |
| `--mode <mode>` | pad | `pad` letterboxes, `crop` cuts to fit, `stretch` distorts |
| `--color <hex>` | black | Padding color (only for `pad` mode) |
| `-o, --output <path>` | `<input-stem>-WxH.<ext>` | Override output |
$ npx fqmpeg aspect input.mp4 16:9 --dry-run
ffmpeg -i input.mp4 -vf scale=iw:iw*9/16:force_original_aspect_ratio=decrease,pad=iw:iw*9/16:(ow-iw)/2:(oh-ih)/2:color=black,setsar=1 -c:a copy input-16x9.mp4
$ npx fqmpeg aspect input.mp4 1:1 --mode crop --dry-run
ffmpeg -i input.mp4 -vf crop='min(iw,ih*1/1)':'min(ih,iw*1/1)' -c:a copy input-1x1.mp4
$ npx fqmpeg aspect input.mp4 21:9 --mode stretch --dry-run
ffmpeg -i input.mp4 -vf scale=iw:iw*9/21,setsar=1 -c:a copy input-21x9.mp4
setsar=1 resets the sample aspect ratio to 1:1 so display aspect equals storage aspect — without it, players that respect SAR would render the result with the original display aspect. The pad mode here differs from the standalone pad verb: aspect --mode pad derives the target dimensions from the input width and the target ratio, while standalone pad takes literal pixel dimensions.
backdrop — Vertical video to 16:9 with blurred background
The "fix vertical phone video for YouTube" verb. Composites the source over a scaled-up, cropped, and blurred copy of itself — so the foreground is the original vertical clip, the background fills the 16:9 canvas with a soft-focus version of the same content. Filter graph is three-stage: scale-and-crop-and-blur for the background, scale-fit for the foreground, then overlay to center them.
- Source: `src/commands/backdrop.js`
- Filter: `[0:v]scale=W:H:force_original_aspect_ratio=increase,crop=W:H,boxblur=N[bg];[0:v]scale=W:H:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Vertical input video |
| `--size <WxH>` | 1920x1080 | Output canvas |
| `--blur <n>` | 20 | boxblur strength (higher = softer) |
| `-o, --output <path>` | `<input-stem>-backdrop.<ext>` | Override output |
$ npx fqmpeg backdrop portrait.mp4 --dry-run
ffmpeg -i portrait.mp4 -filter_complex [0:v]scale=1920:1080:force_original_aspect_ratio=increase,crop=1920:1080,boxblur=20[bg];[0:v]scale=1920:1080:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2 -c:a copy portrait-backdrop.mp4
$ npx fqmpeg backdrop portrait.mp4 --size 1920x1080 --blur 30 --dry-run
ffmpeg -i portrait.mp4 -filter_complex [0:v]scale=1920:1080:force_original_aspect_ratio=increase,crop=1920:1080,boxblur=30[bg];[0:v]scale=1920:1080:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2 -c:a copy portrait-backdrop.mp4
For 4:3 horizontal output, pass --size 1440x1080. The blur is boxblur rather than the prettier (and slower) gblur — if you want gaussian blur, edit boxblur=20 to gblur=sigma=20 in the --dry-run output.
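The boxblur-to-gblur swap is a one-line sed over the captured dry-run output. This sketch rewrites a short sample fragment; in practice, pipe the real `--dry-run` line through the same sed:

```shell
# Swap boxblur for gaussian blur in a dry-run command before running it.
cmd='crop=1920:1080,boxblur=20[bg]'   # fragment of the captured dry-run line

edited=$(printf '%s\n' "$cmd" | sed 's/boxblur=\([0-9]*\)/gblur=sigma=\1/')
echo "$edited"
```

The capture group keeps whatever `--blur` value was used, so the same sed works for any blur strength.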
Rotate & Mirror
rotate — Rotate 90/180/270 or flip
Five modes, picked by argument: 90, 180, 270 (clockwise rotations), hflip (horizontal mirror), vflip (vertical mirror).
- Source: `src/commands/rotate.js`
- Filter: `transpose=1` for 90, `transpose=1,transpose=1` for 180, `transpose=2` for 270, `hflip` for hflip, `vflip` for vflip
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `<angle>` | required | One of 90, 180, 270, hflip, vflip |
| `-o, --output <path>` | `<input-stem>-rot<angle>.<ext>` | Override output |
$ npx fqmpeg rotate input.mp4 90 --dry-run
ffmpeg -i input.mp4 -vf transpose=1 -c:a copy input-rot90.mp4
$ npx fqmpeg rotate input.mp4 hflip --dry-run
ffmpeg -i input.mp4 -vf hflip -c:a copy input-rothflip.mp4
180 is implemented as two 90-cw rotations (transpose=1,transpose=1) rather than an hflip,vflip pair — the result is identical but two transpose passes are slightly slower. For a one-off conversion the difference is invisible; for batch processing a thousand clips, use vflip,hflip directly via --dry-run editing.
transpose — Direct transpose direction
The lower-level cousin to rotate. Exposes FFmpeg's transpose filter directly: 0 = 90 ccw + vflip, 1 = 90 cw, 2 = 90 ccw, 3 = 90 cw + vflip.
- Source: `src/commands/transpose.js`
- Filter: `transpose=<dir>`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `--dir <n>` | 1 | 0/1/2/3 — see above |
| `-o, --output <path>` | `<input-stem>-transposed.<ext>` | Override output |
$ npx fqmpeg transpose input.mp4 --dir 2 --dry-run
ffmpeg -i input.mp4 -vf transpose=2 -c:a copy input-transposed.mp4
When in doubt between rotate and transpose: rotate 90 and transpose --dir 1 produce identical output. Reach for transpose only when you want the "rotate + flip" composite directions (0 or 3) that rotate doesn't expose.
mirror — Original + flipped, side by side
Splits the input into two streams, flips one, and stacks them side-by-side (horizontal) or top-and-bottom (vertical). The output is twice as wide (or tall) as the input, with the original on one side and a mirrored copy on the other — a common look for kaleidoscope-style backgrounds or symmetric music-video shots.
- Source: `src/commands/mirror.js`
- Filter (horizontal): `[0:v]split[left][right];[right]hflip[flipped];[left][flipped]hstack`
- Filter (vertical): `[0:v]split[top][bottom];[bottom]vflip[flipped];[top][flipped]vstack`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `--direction <dir>` | horizontal | `horizontal` (hstack) or `vertical` (vstack) |
| `-o, --output <path>` | `<input-stem>-mirror.<ext>` | Override output |
$ npx fqmpeg mirror input.mp4 --dry-run
ffmpeg -i input.mp4 -filter_complex [0:v]split[left][right];[right]hflip[flipped];[left][flipped]hstack -c:a copy input-mirror.mp4
$ npx fqmpeg mirror input.mp4 --direction vertical --dry-run
ffmpeg -i input.mp4 -filter_complex [0:v]split[top][bottom];[bottom]vflip[flipped];[top][flipped]vstack -c:a copy input-mirror.mp4
If you only want the flipped copy (not a doubled-width side-by-side), use rotate hflip instead. mirror always doubles the output dimension on the chosen axis.
Motion
zoom — Ken Burns zoom-in / zoom-out
The motion-graphics verb. Applies FFmpeg's zoompan filter to slowly zoom in (or out) on the center of the frame. Output is hardcoded to 1920×1080 at 30 fps — see "three things to know" #3 above.
- Source: `src/commands/zoom.js`
- Filter: `scale=8000:-1,zoompan=z='min(zoom+<speed>,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1920x1080:fps=30` (zoom-in)
- Output is always 1920×1080@30 — even if your source is a different size or frame rate.
- The pre-scale to `8000:-1` gives `zoompan` a high-resolution working copy so the zoomed result stays sharp.
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video (or a still image) |
| `--direction <dir>` | in | `in` zooms toward 1.5×, `out` starts at 1.5× and pulls back |
| `--speed <n>` | 0.002 | Per-frame zoom step, 0.001–0.01 (lower = slower) |
| `-o, --output <path>` | `<input-stem>-zoom-<dir>.<ext>` | Override output |
$ npx fqmpeg zoom input.mp4 --direction in --dry-run
ffmpeg -i input.mp4 -vf scale=8000:-1,zoompan=z='min(zoom+0.002,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1920x1080:fps=30 -c:a copy input-zoom-in.mp4
$ npx fqmpeg zoom input.mp4 --direction out --speed 0.005 --dry-run
ffmpeg -i input.mp4 -vf scale=8000:-1,zoompan=z='if(eq(on,1),1.5,max(zoom-0.005,1))':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1920x1080:fps=30 -c:a copy input-zoom-out.mp4
Speed 0.002 (the default) reaches 1.5× over 250 frames (≈ 8 seconds at 30 fps). Speed 0.005 reaches 1.5× in 100 frames (≈ 3.3 seconds). For still-image Ken Burns sequences, pre-stretch the input duration with loop or with FFmpeg's -loop 1 -t <seconds> flags — zoom itself doesn't extend duration.
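The frame-count arithmetic generalizes to any `--speed` value: frames = (1.5 − 1.0) / speed. A small helper (awk for the float math) reproduces the numbers above:

```shell
# Frames for a zoom-in to reach 1.5x at a given per-frame step.
frames_for() { awk -v s="$1" 'BEGIN { printf "%.0f", 0.5 / s }'; }

echo "speed 0.002: $(frames_for 0.002) frames at 30 fps"   # the default
echo "speed 0.005: $(frames_for 0.005) frames at 30 fps"
```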
Interlacing
deinterlace — Remove combing artifacts
Two algorithm choices: yadif (the default — fast, good quality) or bwdif (Bob Weaver Deinterlacing Filter — newer, slightly higher quality, slightly slower). For most modern footage you only encounter interlacing on archive/broadcast captures; both modes handle them well.
- Source: `src/commands/deinterlace.js`
- Filter: `yadif` or `bwdif`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `--mode <mode>` | yadif | `yadif` or `bwdif` |
| `-o, --output <path>` | `<input-stem>-deinterlaced.<ext>` | Override output |
$ npx fqmpeg deinterlace input.mp4 --dry-run
ffmpeg -i input.mp4 -vf yadif -c:a copy input-deinterlaced.mp4
$ npx fqmpeg deinterlace input.mp4 --mode bwdif --dry-run
ffmpeg -i input.mp4 -vf bwdif -c:a copy input-deinterlaced.mp4
If your source has visible combing (zigzag edges on horizontal motion), this is the first thing to run — before any compression or color grading. Doing it after re-encoding usually leaves residual artifacts because the comb pattern got compressed into the codec already.
interlace — Make progressive video interlaced
The reverse direction. Useful only for delivery to broadcast pipelines or legacy hardware that requires interlaced input. For everything web/mobile/streaming, you want progressive — leave this command alone.
- Source: `src/commands/interlace.js`
- Filter: `interlace`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `-o, --output <path>` | `<input-stem>-interlaced.<ext>` | Override output |
$ npx fqmpeg interlace input.mp4 --dry-run
ffmpeg -i input.mp4 -vf interlace -c:a copy input-interlaced.mp4
The default field order from FFmpeg's interlace filter is top-field-first (TFF). If you need bottom-field-first for a specific delivery spec, chain field-order after.
field-order — Set TFF or BFF field order
Tags or rearranges fields so the output is top-field-first (tff) or bottom-field-first (bff). Use it when an interlaced source plays with juddery motion because the field order tag is wrong — flipping to the opposite order usually fixes it.
- Source: `src/commands/field-order.js`
- Filter: `fieldorder=tff` or `fieldorder=bff`
| Argument / Option | Default | Notes |
|---|---|---|
| `<input>` | required | Input video |
| `<order>` | required | `tff` or `bff` |
| `-o, --output <path>` | `<input-stem>-<order>.<ext>` | Override output |
$ npx fqmpeg field-order input.mp4 tff --dry-run
ffmpeg -i input.mp4 -vf fieldorder=tff -c:a copy input-tff.mp4
Most modern interlaced sources (1080i broadcast, DV) are TFF. If you're dealing with an old capture that has BFF stored on disk but TFF intended (or vice versa), field-order rewrites the tags so subsequent deinterlace runs apply the correct algorithm.
Real-World Recipes
Each recipe chains multiple verbs into a workflow you'd actually use.
Recipe 1: Vertical phone video to YouTube-friendly 16:9
You shot a clip on your phone (1080×1920 portrait) and want a 16:9 output that doesn't have giant black pillarbox bars. backdrop fills the canvas with a blurred copy of the source as the background, then overlays the original on top.
# Step 1: 1080x1920 vertical → 1920x1080 with blurred backdrop
npx fqmpeg backdrop phone-clip.mp4 --blur 25
# → phone-clip-backdrop.mp4 (1920x1080, blurred bg + foreground video)
# Step 2 (optional): compress for upload
npx fqmpeg compress phone-clip-backdrop.mp4 --crf 23
# → phone-clip-backdrop-compressed.mp4
If the phone clip already has motion you want to preserve, lower the blur to 10–15 (the motion in the background becomes more visible and creates a "moving wallpaper" effect). Higher blur (25–40) makes the background nearly static and keeps viewer attention on the foreground.
Recipe 2: Strip black bars from a downloaded video
Many older downloads come with letterbox bars baked in. crop-detect finds the actual content rectangle, then crop removes everything outside.
# Step 1: detect the crop rectangle
npx fqmpeg crop-detect movie.mp4
# Look for: [Parsed_cropdetect_0 @ ...] crop=1920:800:0:140
# (means: keep a 1920x800 region starting at x=0, y=140)
# Step 2: apply the detected crop
npx fqmpeg crop movie.mp4 1920x800 --pos 0:140
# → movie-crop1920x800.mp4
crop-detect runs on the whole input; the value it prints is the most-common detection across all frames. If the source has a logo or watermark that's letterboxed in early frames but not later, the detection might be off — pass --limit 30 (or higher) to be more aggressive about treating dark pixels as "border."
Recipe 3: Restore old interlaced footage to clean progressive 1080p
You have a DV or 1080i capture from an old camcorder. Goal: clean progressive H.264 ready for upload. Order matters — deinterlace first (before any rescaling or compression locks in the comb pattern), then resize, then crop any dead borders.
# Step 1: deinterlace first - never compress before this
npx fqmpeg deinterlace dv-capture.avi --mode bwdif
# → dv-capture-deinterlaced.avi (still original DV resolution)
# Step 2: resize to 1080p height (width auto-scales to maintain aspect)
npx fqmpeg resize dv-capture-deinterlaced.avi -h 1080
# → dv-capture-deinterlaced-1080h.avi
# Step 3: detect and remove any black borders
npx fqmpeg crop-detect dv-capture-deinterlaced-1080h.avi
# Apply the detected coordinates:
npx fqmpeg crop dv-capture-deinterlaced-1080h.avi <WxH> --pos <x>:<y>
# → dv-capture-deinterlaced-1080h-crop<WxH>.avi
bwdif over yadif is worth the extra time on archival material — the small quality bump prevents "deinterlacing artifacts on top of compression artifacts" from compounding when you re-encode for streaming. After the three geometry steps, run compress --crf 20 to finalize.
Frequently Asked Questions
Why does resize use -2 instead of -1 for the auto-scaled side?
Because most codecs require even dimensions. -1 tells FFmpeg "preserve aspect," which can produce odd numbers like 537. When the encoder rejects that with width not divisible by 2, you've wasted a render. -2 says "preserve aspect, but snap to a multiple of 2," which is what you almost always want. fqmpeg hardcodes -2 so you don't trip on this.
pad input.mp4 1920x1080 makes my 1280×720 source bigger — why?
Because pad's filter chain starts with scale=1920:1080:force_original_aspect_ratio=decrease. That scales the source up to fit the target canvas while preserving aspect — for a 1280×720 source, that's 1920×1080 (no padding needed at the same aspect). For a literal pad without scaling, copy the --dry-run output and remove the scale=... part, leaving only pad=1920:1080:(ow-iw)/2:(oh-ih)/2:color=black.
Why is zoom locked to 1920×1080@30?
The zoompan filter expression hardcodes s=1920x1080:fps=30. There's no exposed option to override. If you need a different output size or frame rate, copy the --dry-run output and edit the s=...:fps=... values directly. For a 1080p@60 zoom, change to s=1920x1080:fps=60.
My source has visible combing. Is deinterlace always safe to run?
It's safe on actually-interlaced sources. On progressive sources that have been mistakenly tagged as interlaced (or vice versa), running deinterlace softens the picture and can introduce artifacts. Run ffprobe -show_streams input.mp4 | grep field_order to check the tag, and check visually for combing on horizontal motion. If the file is progressive but tagged interlaced, run field-order to fix the tag before anything else.
Can crop-detect run on just the first 30 seconds?
Not from fqmpeg directly — crop-detect always reads the whole file. The workaround is to trim first (trim source.mp4 --start 0 --duration 30 -o probe.mp4), then run crop-detect probe.mp4. For long files this can save many minutes.
What's the difference between aspect --mode pad and the pad verb?
aspect 16:9 --mode pad derives the canvas from the source width and the target ratio (so a 1280×720 source becomes a 1280×720 canvas — already 16:9, nothing changes; a 1280×960 source becomes a 1280×720 canvas with letterbox bars). The standalone pad 1920x1080 always pads to literal 1920×1080 regardless of source aspect, scaling first if needed. Use aspect when you care about the ratio, pad when you care about exact pixel dimensions.
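A quick shell check of the two canvases for the 1280×960 example in the answer; the aspect-mode-pad derivation follows the `iw*9/16` expression visible in its dry-run output:

```shell
# Canvas produced by each approach for a 1280x960 source targeting 16:9.
iw=1280; ih=960
rw=16; rh=9

aspect_h=$(( iw * rh / rw ))   # aspect --mode pad: height from source width
echo "aspect ${rw}:${rh} --mode pad -> ${iw}x${aspect_h} canvas (letterboxed)"
echo "pad 1920x1080             -> 1920x1080 canvas (always literal)"
```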
How do I rotate based on EXIF orientation?
You can't do it via fqmpeg — none of the geometry verbs read the rotation metadata. But FFmpeg itself respects rotate metadata when it transcodes, so simply running any encode (e.g. compress) re-renders the video upright if the source has a rotation tag. To explicitly drop the rotation tag without re-rotating, use FFmpeg directly: ffmpeg -i input.mp4 -metadata:s:v:0 rotate=0 -c copy output.mp4.
Can I batch-resize a folder?
Standard shell loop (create the output directory first so -o has somewhere to write):
mkdir -p resized
for v in raw/*.mp4; do
npx fqmpeg resize "$v" -h 720 -o "resized/$(basename "$v" .mp4)-720p.mp4"
done
For batch crop-detect-then-crop, capture the crop=... line from each crop-detect run and feed it back, but the per-file detection means it's not a one-liner. A while read over find works.
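A sketch of that find + while read pattern. The cropdetect parsing is stubbed with a sample value here (the comment shows where the real crop-detect call goes), and the final commands are echoed for review:

```shell
# Batch crop-detect-then-crop over a tree of files.
mkdir -p raw && touch raw/clip-a.mp4 raw/clip-b.mp4   # throwaway sample tree

find raw -name '*.mp4' | while IFS= read -r v; do
  # Real run: spec=$(npx fqmpeg crop-detect "$v" 2>&1 | grep -o 'crop=[0-9:]*' | tail -n 1)
  spec='crop=1920:800:0:140'   # stubbed sample detection
  wh=$(printf '%s' "$spec" | sed 's/crop=\([0-9]*\):\([0-9]*\):.*/\1x\2/')
  xy=$(printf '%s' "$spec" | sed 's/crop=[0-9]*:[0-9]*:\(.*\)/\1/')
  echo npx fqmpeg crop "$v" "$wh" --pos "$xy"   # drop `echo` to execute
done
```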
Wrapping Up
The fourteen C5 verbs cover the geometry operations you reach for in a typical "fix the picture" pass:
- `resize`, `scale2x` for dimension changes (note the `-2` even-dimension trick; `super2xsai` is for pixel art and edge-heavy sources)
- `crop`, `crop-detect` for cutting the visible region (default `crop` position is center; `crop-detect` runs the cropdetect filter and prints suggested coordinates)
- `pad`, `aspect`, `backdrop` for canvas changes (`pad` scales-then-pads, `aspect` has three modes, `backdrop` is the vertical-to-16:9 fix)
- `rotate`, `transpose`, `mirror` for orientation (`rotate` covers 90/180/270/hflip/vflip; `transpose --dir 0` and `--dir 3` add the rotate-and-flip composites)
- `zoom` for Ken Burns motion (always outputs 1920×1080@30 — edit the `--dry-run` output for other sizes)
- `deinterlace`, `interlace`, `field-order` for field structure (deinterlace before any other re-encoding; `field-order` fixes mistagged sources)
Every verb prints its underlying FFmpeg invocation under --dry-run, so when the defaults don't fit (a different zoom size, a literal pad without pre-scaling), you can copy the command, customize, and run FFmpeg directly. For the broader fqmpeg map, see the fqmpeg complete guide.