fqmpeg's C7 cluster is the cleanup toolbox — eight verbs for fixing what's wrong with a clip rather than changing how it looks. Two stabilize shaky footage (stabilize, deshake). Three reduce noise, flicker, and compression artifacts (denoise, deflicker, deblock). One removes a logo or watermark from a fixed region (delogo). Two are analysis-only — they print timestamps to stderr without producing a file (blackdetect, freeze-detect), useful for finding bad cuts in a long capture before you touch the editor.
This guide walks each verb against its source in src/commands/ of fqmpeg 3.0.3 — the underlying FFmpeg filter, the defaults, the output filename, and the gotchas you can't see from --help alone (stabilize is a 2-pass run that drops a temp .trf file, denoise --target audio swaps to a completely different filter, deflicker --size is range-checked to the filter's 2–129 limit, blackdetect and freeze-detect produce no output file at all).
What you'll get out of this guide
- A decision matrix for the 8 verbs by task (stabilize / clean up / remove / detect)
- Exact FFmpeg invocation each verb generates (verified `--dry-run` output)
- Defaults, ranges, and output filenames for every command
- Three end-to-end recipes including a "rescue a shaky low-light handheld clip" pipeline
The 8 Verbs at a Glance
The cluster splits into four task groups. Pick the group, then the verb.
| Group | Verbs | What they do |
|---|---|---|
| Stabilization | stabilize, deshake | Smooth out camera shake — 2-pass vidstab (highest quality) or 1-pass deshake (faster) |
| Noise & artifact removal | denoise, deflicker, deblock | Reduce video grain or audio hiss, even out brightness flicker, remove compression block artifacts |
| Object removal | delogo | Cover a fixed rectangular region (logo, watermark, bug) by interpolating from surrounding pixels |
| Detection (analyze-only) | blackdetect, freeze-detect | Print timestamps of black scenes or frozen frames — no output file, just stderr |
Three things to know before reading on:
1. `stabilize` is 2-pass and writes a temp file. It runs `vidstabdetect` first (motion analysis), drops a `.fqmpeg-transforms-<timestamp>.trf` file next to the input, then runs `vidstabtransform` using that file. The temp file is auto-cleaned on process exit — but if you kill the process mid-run with `kill -9`, you'll have an orphan `.trf` file to delete. `deshake` is the single-pass fallback that doesn't need a temp file.
2. `denoise --target audio` is a different filter entirely. The default `--target video` runs `hqdn3d` (spatial-temporal video denoise). `--target audio` swaps to `afftdn` (FFT-based audio noise reduction) with a different `--strength` mapping. Same verb, two filters, two different output meanings — don't reach for `denoise` expecting `hqdn3d` and discover you fed it `--target audio` from an earlier line.
3. `blackdetect` and `freeze-detect` produce no output file. They use `-f null -` so FFmpeg processes the input and discards everything except the filter's stderr output (timestamps + durations of detected events). Pipe to `grep` to capture just the matches.
Stabilization
stabilize — 2-pass vidstab (highest quality)
The gold-standard stabilizer — analyzes motion in pass 1, applies a smoothed inverse-transform in pass 2. Higher quality than deshake because it sees the whole clip before deciding on the smoothing.
- Source: `src/commands/stabilize.js`
- Filter (pass 1): `vidstabdetect=shakiness=N:result=<tmpfile>`
- Filter (pass 2): `vidstabtransform=smoothing=M:input=<tmpfile>`
- Requires: FFmpeg built with `--enable-libvidstab`
| Argument / Option | Default | Range | Notes |
|---|---|---|---|
<input> | required | — | Input video |
--strength <n> | 10 | 1–30 | Smoothing strength in pass 2 (higher = smoother but may crop more) |
--shakiness <n> | 5 | 1–10 | Pass-1 estimate of how shaky the input is |
-o, --output <path> | <input-stem>-stabilized.<ext> | — | Override output |
$ npx fqmpeg stabilize input.mp4 --strength 10 --shakiness 5 --dry-run
# Pass 1: Analyze motion
ffmpeg -i input.mp4 -vf vidstabdetect=shakiness=5:result=transforms.trf -f null -
# Pass 2: Apply stabilization
ffmpeg -i input.mp4 -vf vidstabtransform=smoothing=10:input=transforms.trf -c:a copy input-stabilized.mp4
A real run prints Pass 1/2: Analyzing motion... then Pass 2/2: Applying stabilization.... The intermediate .trf file is written to the same directory as the input as .fqmpeg-transforms-<timestamp>.trf and deleted at process exit. If the build doesn't have --enable-libvidstab, pass 1 fails with Unknown filter 'vidstabdetect' — use a static build from BtbN's FFmpeg-Builds which includes vidstab, or fall back to deshake.
--strength above 15 aggressively smooths the path, which means more cropping at the edges (the stabilizer needs slack to shift the frame). For a handheld shot of a moving subject, 10 is the sweet spot; for a tripod-bumped clip, 20+ is fine because there's less motion to preserve.
deshake — Single-pass built-in deshake filter
Faster, lower-quality alternative to stabilize. Uses FFmpeg's built-in deshake filter, which works in real-time on a per-frame basis (no analysis pass).
- Source: `src/commands/deshake.js`
- Filter: `deshake=rx=N:ry=M`
- Requires: Nothing extra (built into FFmpeg)
| Argument / Option | Default | Range | Notes |
|---|---|---|---|
<input> | required | — | Input video |
--rx <n> | 64 | 0–64 | Max horizontal shake in pixels the filter will correct |
--ry <n> | 64 | 0–64 | Max vertical shake in pixels |
-o, --output <path> | <input-stem>-deshake.<ext> | — | Override output |
$ npx fqmpeg deshake input.mp4 --rx 64 --ry 64 --dry-run
ffmpeg -i input.mp4 -vf deshake=rx=64:ry=64 -c:a copy input-deshake.mp4
The default 64 pixels of correction range works for most handheld footage. If your clip has small wobble (slight handheld at 4K), drop to --rx 16 --ry 16 for faster processing and less edge cropping. Versus stabilize: deshake is roughly 3-5× faster but produces a slightly less smooth result, especially on long clips with directional motion. Reach for it when iteration speed matters more than the final polish.
Noise & Artifact Removal
denoise — Reduce noise (video OR audio, dual-mode)
The verb behaves differently depending on --target. Default --target video runs FFmpeg's hqdn3d filter (spatial-temporal denoise — preserves detail well, removes both luma and chroma noise). --target audio runs afftdn (FFT-based noise floor reduction). The strength preset maps to different parameters in each mode.
- Source: `src/commands/denoise.js`
- Filter (video): `hqdn3d=<luma_spatial>:<chroma_spatial>:<luma_temporal>:<chroma_temporal>`
- Filter (audio): `afftdn=nf=<noise_floor_dB>`
| --strength | Video (hqdn3d) | Audio (afftdn noise floor) |
|---|---|---|
light | 3:2:3:2 | -20 dB |
medium | 5:4:5:4 | -30 dB |
strong | 7:6:7:6 | -40 dB |
| Argument / Option | Default | Choices | Notes |
|---|---|---|---|
<input> | required | — | Input file |
--target <type> | video | video / audio | Switches between hqdn3d and afftdn |
--strength <level> | medium | light / medium / strong | Preset mapping (see table above) |
-o, --output <path> | <input-stem>-denoised.<ext> | — | Override output |
$ npx fqmpeg denoise input.mp4 --target video --strength medium --dry-run
ffmpeg -i input.mp4 -vf hqdn3d=5:4:5:4 -c:a copy input-denoised.mp4
$ npx fqmpeg denoise input.mp4 --target audio --strength medium --dry-run
ffmpeg -i input.mp4 -af afftdn=nf=-30 -c:v copy input-denoised.mp4
For low-light handheld video, --strength medium (the default) hits the right balance — visible noise reduction without over-smoothing detail. --strength strong (7:6:7:6) is closer to "softening" than "denoising" and you'll notice it on textured surfaces (skin, fabric, foliage). For audio noise reduction, afftdn is the modern standard — -30 dB works for most hiss; bump to -40 (strong) for hum or background HVAC noise.
The mode-switching gotcha worth repeating: --target audio re-encodes video by copying (-c:v copy) and --target video re-encodes audio by copying (-c:a copy). You can chain them sequentially if you need both: denoise input.mp4 --target video -o tmp.mp4 && denoise tmp.mp4 --target audio.
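If you're scripting around denoise, the preset table above is small enough to reproduce directly. A minimal sketch (the helper name and structure are illustrative, not fqmpeg's source) that returns the filter string each target/strength combination maps to:

```shell
# Map a --strength preset to the filter string each target receives.
# Mirrors the preset table above; the helper itself is illustrative,
# not fqmpeg source code.
denoise_filter() {
  case "$1:$2" in
    video:light)  echo "hqdn3d=3:2:3:2" ;;
    video:medium) echo "hqdn3d=5:4:5:4" ;;
    video:strong) echo "hqdn3d=7:6:7:6" ;;
    audio:light)  echo "afftdn=nf=-20" ;;
    audio:medium) echo "afftdn=nf=-30" ;;
    audio:strong) echo "afftdn=nf=-40" ;;
    *) echo "unknown target/strength: $1/$2" >&2; return 1 ;;
  esac
}

denoise_filter video medium   # → hqdn3d=5:4:5:4
denoise_filter audio strong   # → afftdn=nf=-40
```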
deflicker — Even out brightness fluctuations (timelapse)
The flicker fix for timelapses shot in inconsistent light (Aperture Priority mode hunting between frames, or hand-changing exposure). Averages each frame's brightness against an N-frame sliding window.
- Source: `src/commands/deflicker.js`
- Filter: `deflicker=size=N`
- Range: `--size` is 2–129 frames (FFmpeg filter limit, validated by fqmpeg before invoke)
| Argument / Option | Default | Notes |
|---|---|---|
<input> | required | Input video |
--size <n> | 5 | Averaging window size in frames (range 2–129) |
-o, --output <path> | <input-stem>-deflickered.<ext> | Override output |
$ npx fqmpeg deflicker input.mp4 --size 5 --dry-run
ffmpeg -i input.mp4 -vf deflicker=size=5 -c:a copy input-deflickered.mp4
--size 5 (the default) smooths flicker over 5 consecutive frames — enough to even out aperture hunting without blurring real lighting transitions. Bump to --size 7 or 9 for severe flicker (sunset timelapses where the camera couldn't keep up with the sun). fqmpeg pre-validates --size to the FFmpeg filter range 2–129 and fails fast on out-of-range values (Error: size must be between 2 and 129). Even values work — odd sizes (5, 7, 9, 11) are conventional for moving averages because the window centers on the current frame, but FFmpeg doesn't require it. If you need to deflicker a short clip where the window would cover most of the duration, the filter's output gets unstable at the head/tail — trim the safe middle portion first with trim.
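When wrapping deflicker in your own script, the same range check is cheap to replicate up front. A hedged sketch of the 2–129 validation (illustrative, not fqmpeg's actual code):

```shell
# Reject --size values outside the deflicker filter's 2-129 window
# before starting an encode. Illustrative pre-check, not fqmpeg source.
check_deflicker_size() {
  case "$1" in
    ''|*[!0-9]*) echo "Error: size must be an integer" >&2; return 1 ;;
  esac
  if [ "$1" -lt 2 ] || [ "$1" -gt 129 ]; then
    echo "Error: size must be between 2 and 129" >&2
    return 1
  fi
}

check_deflicker_size 5 && echo "size ok"      # → size ok
check_deflicker_size 130 || echo "rejected"   # → rejected (error on stderr)
```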
deblock — Remove block artifacts from compressed video
The fix for the macroblock grid you see in heavily-compressed YouTube re-uploads or old WhatsApp video. Applies FFmpeg's deblock filter in weak mode with a strength-mapped alpha/beta.
- Source: `src/commands/deblock.js`
- Filter: `deblock=filter=weak:alpha=A:beta=A` where `A = strength / 100`
| Argument / Option | Default | Range | Notes |
|---|---|---|---|
<input> | required | — | Input video |
--strength <n> | 50 | 1–100 | Maps linearly to alpha and beta (both = strength / 100) |
-o, --output <path> | <input-stem>-deblocked.<ext> | — | Override output |
$ npx fqmpeg deblock input.mp4 --strength 50 --dry-run
ffmpeg -i input.mp4 -vf deblock=filter=weak:alpha=0.50:beta=0.50 -c:a copy input-deblocked.mp4
--strength 50 (the default) is moderate — visibly reduces block edges without smearing fine detail. --strength 80+ starts to look like "soft focus on the whole frame," which is rarely what you want; --strength 20–30 is better for lightly-compressed sources where you only need to clean up sky and skin gradients. The filter=weak mode is hardcoded — fqmpeg doesn't expose strong mode because it consistently over-smooths in our testing. If you want filter=strong, copy the --dry-run and edit it before running FFmpeg directly.
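The alpha/beta mapping is a straight division by 100, which is handy to reproduce when hand-editing a --dry-run command (for example, to switch to filter=strong). An illustrative helper:

```shell
# Reproduce the strength -> alpha/beta mapping (strength / 100, two
# decimals, matching the --dry-run output). Illustrative helper.
deblock_alpha() {
  awk -v s="$1" 'BEGIN { printf "%.2f", s / 100 }'
}

a=$(deblock_alpha 30)
echo "deblock=filter=strong:alpha=$a:beta=$a"
# → deblock=filter=strong:alpha=0.30:beta=0.30
```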
Object Removal
delogo — Remove a logo from a fixed region
Covers a rectangular region by interpolating from the pixels just outside the box. Use it for fixed-position logos, channel bugs, or watermarks that don't move.
- Source: `src/commands/delogo.js`
- Filter: `delogo=x=X:y=Y:w=W:h=H`
- Positional args: input file and `x:y:w:h` region
| Argument / Option | Default | Notes |
|---|---|---|
<input> | required | Input video |
<region> | required | Logo region as x:y:w:h (pixels from top-left) |
-o, --output <path> | <input-stem>-delogo.<ext> | Override output |
$ npx fqmpeg delogo input.mp4 10:10:120:60 --dry-run
ffmpeg -i input.mp4 -vf delogo=x=10:y=10:w=120:h=60 -c:a copy input-delogo.mp4
The region format is strict — x:y:w:h with four positive integers. Anything else (negative numbers, decimals, missing parts) fails with Error: region must be x:y:w:h. x and y are the top-left corner of the box; w and h are the width and height. Don't add a margin to the box — the filter interpolates from the pixels just outside, so a tight box gives the cleanest result. If the logo has anti-aliased edges, bump the box outward by 2-3 pixels on each side.
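If you generate delogo calls from a script, replicating the strict format check saves a failed invocation. A sketch of the x:y:w:h validation (illustrative; fqmpeg's own check may differ in detail):

```shell
# Check a region string against the strict x:y:w:h format: exactly
# four colon-separated unsigned integers. Illustrative, not fqmpeg
# source code.
valid_region() {
  printf '%s\n' "$1" | awk -F: '
    NF != 4 { exit 1 }
    { for (i = 1; i <= 4; i++) if ($i !~ /^[0-9]+$/) exit 1 }
  '
}

valid_region 10:10:120:60 && echo "ok"       # → ok
valid_region 10:10:120 || echo "malformed"   # → malformed
```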
This works well for static logos and TV channel bugs. It does not work for animated logos, scrolling tickers, or anything that moves — for those you'd need a tracking pipeline outside fqmpeg. For a region that's most of the frame, the interpolation can't reconstruct missing content; delogo works best when the region is a small fraction of the frame.
Detection (Analyze-Only)
These two verbs don't produce an output video file. They run the source through a detection filter, discard the output (-f null -), and print the filter's findings to stderr. The use case: scanning a long capture for bad spots (a 90-minute screen recording that froze somewhere; a TV transfer with black frames between segments) before editing.
blackdetect — Find black scenes
Detects spans where the picture is mostly black. Useful for finding fade-to-black cuts in a long capture, or detecting "dropped signal" segments in a TV transfer.
- Source: `src/commands/blackdetect.js`
- Filter: `blackdetect=d=<duration>:pix_th=<threshold>`
- Output: No file — timestamps print to stderr as `[blackdetect @ ...] black_start:N black_end:M black_duration:D`
| Argument / Option | Default | Range | Notes |
|---|---|---|---|
<input> | required | — | Input video |
--threshold <n> | 0.98 | 0.0–1.0 | Fraction of pixels that must be black-ish to count as a black frame |
--duration <sec> | 0.5 | positive number | Minimum span length to report (seconds) |
$ npx fqmpeg blackdetect input.mp4 --threshold 0.98 --duration 0.5 --dry-run
ffmpeg -i input.mp4 -vf blackdetect=d=0.5:pix_th=0.98 -f null -
Pipe the stderr output to grep to capture just the matches:
npx fqmpeg blackdetect input.mp4 2>&1 | grep "black_start"
# [blackdetect @ 0x...] black_start:12.345 black_end:14.567 black_duration:2.222
# [blackdetect @ 0x...] black_start:120.0 black_end:121.5 black_duration:1.5
Drop --threshold to 0.95 if the "black" is actually dark gray (compression noise, low-light bleed). Raise --duration to 2 or higher if you want only the long pauses (intro/outro fade-to-black), not the brief 0.5 s cuts between scenes. Useful next steps: feed the timestamps into a trim or split invocation to slice on the black points.
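Those black_start/black_end pairs can be parsed straight into trim commands. A sketch that reads captured stderr on stdin and prints one trim per segment between black spans (the parsing follows the line format shown above; the input filename is a placeholder):

```shell
# Turn captured blackdetect stderr into one trim command per segment
# between black spans. Token parsing is a sketch based on the stderr
# format shown above; "input.mp4" is a placeholder.
blackdetect_to_trims() {
  awk '
    BEGIN { prev = 0 }
    /black_start/ {
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^black_start:/) { split($i, a, ":"); start = a[2] }
        if ($i ~ /^black_end:/)   { split($i, b, ":"); end = b[2] }
      }
      printf "npx fqmpeg trim input.mp4 --start %s --end %s\n", prev, start
      prev = end
    }
  '
}

# Usage: npx fqmpeg blackdetect input.mp4 2>&1 | blackdetect_to_trims
```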
freeze-detect — Find frozen / static frames
Detects spans where the picture stops changing (the frame "freezes"). Useful for screen recordings that hit a hang, or video files with a glitched encode where the same frame repeats for several seconds.
- Source: `src/commands/freeze-detect.js`
- Filter: `freezedetect=n=<noise>:d=<duration>`
- Output: No file — `freeze_start`, `freeze_duration`, `freeze_end` print to stderr
| Argument / Option | Default | Range | Notes |
|---|---|---|---|
<input> | required | — | Input video |
--duration <n> | 2 | positive number | Minimum freeze span to report (seconds) |
--noise <n> | 0.001 | 0–1 | Pixel-difference tolerance — frames within this delta count as identical |
$ npx fqmpeg freeze-detect input.mp4 --duration 2 --noise 0.001 --dry-run
ffmpeg -i input.mp4 -vf freezedetect=n=0.001:d=2 -f null -
npx fqmpeg freeze-detect input.mp4 2>&1 | grep "freeze_"
# [freezedetect @ 0x...] lavfi.freezedetect.freeze_start: 45.5
# [freezedetect @ 0x...] lavfi.freezedetect.freeze_duration: 8.3
# [freezedetect @ 0x...] lavfi.freezedetect.freeze_end: 53.8
The default --noise 0.001 is sensitive — useful for catching exact-duplicate frames (the dead giveaway of a hung capture). Raise to 0.01 if you want to find "nearly frozen" spans (a webinar where the slide didn't change for minutes but the encoder kept producing slightly-different frames). --duration 2 skips short legitimate pauses; drop to 0.5 for finer-grained detection.
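The three-line-per-event format is awkward to scan, so pairing the start and end lines helps. A small sketch that reads the captured stderr on stdin (key names follow the output shown above):

```shell
# Collapse freezedetect's three stderr lines per event into a single
# summary line. The lavfi.freezedetect key names follow the sample
# output above.
summarize_freezes() {
  awk '
    /freeze_start:/ { start = $NF }
    /freeze_end:/   { printf "freeze %s -> %s\n", start, $NF }
  '
}

# Usage: npx fqmpeg freeze-detect input.mp4 2>&1 | summarize_freezes
# e.g. "freeze 45.5 -> 53.8" for the sample output above
```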
Real-World Recipes
Each recipe chains multiple verbs into a workflow you'd actually use.
Recipe 1: Rescue a shaky low-light handheld clip
You shot a clip handheld at dusk — visible camera shake, noticeable noise in the shadows, and slight macroblocking from the camera's low bitrate. Goal: stabilize, then clean.
# Step 1: stabilize first - denoise wants stable input for temporal averaging
npx fqmpeg stabilize clip.mp4 --strength 10 --shakiness 5
# → clip-stabilized.mp4
# Step 2: denoise (video) at medium strength
npx fqmpeg denoise clip-stabilized.mp4 --target video --strength medium
# → clip-stabilized-denoised.mp4
# Step 3 (optional): deblock if the source still shows compression artifacts
npx fqmpeg deblock clip-stabilized-denoised.mp4 --strength 30
# → clip-stabilized-denoised-deblocked.mp4
Order matters. Stabilize before denoise — hqdn3d does temporal averaging across frames, and if the camera is shaking, each frame is at a slightly different angle, which makes the temporal denoise less effective (it averages misaligned pixels). On a stable source, the temporal component works as designed.
If the input was shot in genuinely low light, also reach for video brightness correction with the color verb after this pipeline — denoise tends to flatten contrast slightly, and a --gamma 1.1 lift restores midtone snap.
Recipe 2: Find and trim bad sections of a long screen recording
You have a 4-hour screen recording where the capture froze somewhere in the middle. Goal: find the freeze points, then trim around them.
# Step 1: scan for freezes longer than 5 seconds
npx fqmpeg freeze-detect recording.mp4 --duration 5 2>&1 | grep "freeze_"
# [freezedetect @ ...] freeze_start: 7230.5
# [freezedetect @ ...] freeze_duration: 45.2
# [freezedetect @ ...] freeze_end: 7275.7
# Step 2: trim out the section just before the freeze
npx fqmpeg trim recording.mp4 --start 0 --end 7230
# → recording-trimmed.mp4
# Step 3 (optional): also check for black frames (dropped signal)
npx fqmpeg blackdetect recording.mp4 --duration 1 2>&1 | grep "black_start"
grep filters to just the detection lines so you don't have to read FFmpeg's progress noise. If you want both detections in one pass, you'd need to invoke FFmpeg directly with both filters chained — fqmpeg runs one filter per verb by design. For longer captures, also consider splitting first with npx fqmpeg split recording.mp4 --chunks 4 and running detection on each chunk in parallel.
Recipe 3: Remove a channel logo from a TV transfer
You're cleaning up a recording that has a station bug in the top-right corner. Goal: remove the logo and slightly clean compression artifacts in one pass.
# Step 1: identify the logo coordinates with a frame grab + image viewer
npx fqmpeg snapshot recording.mp4 --time 5 -o frame.png
# Open frame.png, measure the logo box in pixels.
# Say it's at x=1750 y=20, 150x80 pixels.
# Step 2: remove the logo
npx fqmpeg delogo recording.mp4 1750:20:150:80
# → recording-delogo.mp4
# Step 3 (optional): deblock to clean macroblocking from the original transfer
npx fqmpeg deblock recording-delogo.mp4 --strength 40
# → recording-delogo-deblocked.mp4
Add 2-3 pixels of margin to the logo box on each side (so 1748:18:154:84 instead of the tight 1750:20:150:80) if the logo has soft edges — the interpolation needs clean reference pixels just outside the box. For an animated bug that pulses or rotates, delogo won't work cleanly — you'd need to capture the full bounding box of the animation, which often leaves a visible patch.
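The margin arithmetic is easy to get backwards (the corner moves by the margin, the size grows by twice the margin), so a helper is worth sketching:

```shell
# Grow an x:y:w:h box outward by m pixels on every side: the corner
# shifts up-left by m, width and height each grow by 2m.
expand_region() {
  printf '%s\n' "$1" | awk -F: -v m="$2" '
    { printf "%d:%d:%d:%d\n", $1 - m, $2 - m, $3 + 2 * m, $4 + 2 * m }
  '
}

expand_region 1750:20:150:80 2   # → 1748:18:154:84
```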
Frequently Asked Questions
stabilize vs deshake — which one should I use?
stabilize is the 2-pass vidstab flow and gives a noticeably smoother result, especially on long clips with directional motion (a walking shot, a vehicle mount). It needs FFmpeg built with --enable-libvidstab. deshake is single-pass, built into stock FFmpeg, and roughly 3-5× faster. Use deshake for iteration (try strengths quickly) or when your FFmpeg build lacks vidstab; use stabilize for the final render of footage that matters.
stabilize leaves a .fqmpeg-transforms-*.trf file behind — is that a bug?
No — the temp file is registered to delete at normal process exit. If you killed the process with kill -9 or a SIGKILL, the cleanup hook never runs and the .trf file remains. Safe to delete manually (rm .fqmpeg-transforms-*.trf in the input's directory). The file holds the motion-detection data from pass 1; pass 2 reads it, so don't delete it mid-run.
What's the valid range for deflicker --size?
FFmpeg's deflicker filter accepts size values from 2 to 129 frames (default 5) — verified against FFmpeg 6.1.1 with ffmpeg -h filter=deflicker. fqmpeg pre-validates this range and rejects out-of-range values with a clear error before invoking FFmpeg. Even values like --size 4 are accepted by the filter; odd sizes (5, 7, 9, 11) are conventional for moving averages because the window centers on the current frame, but it's a convention, not a requirement.
Can denoise --target audio and denoise --target video run in one command?
No — each invocation handles one target only because the underlying filters (hqdn3d for video, afftdn for audio) need different placement (-vf vs -af) and different -c:* copy companion flags. Chain two calls: denoise clip.mp4 --target video -o tmp.mp4 && denoise tmp.mp4 --target audio. If you need both in one pass, the --dry-run output of each gives you the filter strings to combine in a manual FFmpeg invocation.
blackdetect says no black scenes but I can see them visually — what's wrong?
Either --threshold is too high (the "black" in your source is actually dark gray — typical of cheap recorder hardware) or --duration is too long (the cuts are shorter than the floor). Try --threshold 0.90 --duration 0.2 to broaden detection. The pix_th parameter is the fraction of pixels below the black-luma threshold (default 0.10 luma) that must be present — at 0.98 you need 98% of pixels nearly-black to register, which a noisy "black" frame might not hit.
delogo interpolation looks like a smeared patch — can I fix it?
The filter reconstructs from surrounding pixels, so it can't recover detail it can't see. Three tactics that help: (1) tighten the box just to the logo edge — extra margin means more interpolation; (2) for high-detail backgrounds (foliage, water, text), the patch will be visible regardless — accept it or replace the region with a stock background; (3) if the logo is over a flat area (sky, wall), delogo works fine — try it first before doing manual masking in a real editor.
Will these verbs work on a VFR (variable frame rate) source?
stabilize, denoise, deflicker, and deblock all process per-frame so VFR doesn't break them, but the output VFR pattern may not be preserved depending on FFmpeg's defaults. blackdetect and freeze-detect report timestamps based on the source PTS, which works on VFR, but the durations are wall-clock, not frame-count. If you need frame-accurate detection, convert to CFR first with npx fqmpeg fps input.mp4 --fps 30.
Can I combine multiple cleanup verbs in one pass to avoid generation loss?
Not via fqmpeg directly — each verb produces one output and re-encodes. The generation loss from H.264 → H.264 at CRF 23 is small but real over 3-4 passes. Two workarounds: (1) raise the quality for intermediate steps by editing the --dry-run to use -crf 18 or lossless; (2) bypass fqmpeg for the chain — copy the filter strings from each verb's --dry-run and run them as one FFmpeg command with -vf "filter1,filter2,filter3".
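As a concrete example of workaround (2), here's what assembling one pass from the dry-run filter strings might look like, assuming the Recipe 1 chain with deshake standing in for stabilize (whose detect pass can't be folded into a single -vf chain):

```shell
# One-pass restoration chain assembled from per-verb --dry-run filter
# strings. stabilize can't join a single -vf chain because its detect
# pass must finish first, so this sketch uses deshake. Filenames and
# the CRF value are placeholders; the command is printed for
# inspection, not executed.
filters="deshake=rx=64:ry=64,hqdn3d=5:4:5:4,deblock=filter=weak:alpha=0.30:beta=0.30"
echo ffmpeg -i clip.mp4 -vf "\"$filters\"" -crf 18 -c:a copy clip-restored.mp4
```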
Wrapping Up
The eight C7 verbs cover the restoration and detection operations you reach for before grading or compression:
- `stabilize`, `deshake` for camera shake (vidstab 2-pass vs built-in 1-pass — pick on quality vs speed)
- `denoise`, `deflicker`, `deblock` for noise, brightness flicker, and macroblocking (`denoise --target audio` switches to a completely different filter; `deflicker --size` is range-checked to 2–129)
- `delogo` for fixed-region object removal (works on flat backgrounds, struggles on detail)
- `blackdetect`, `freeze-detect` for analysis-only scanning of long captures (no output file — pipe stderr to `grep`)
Every verb prints its underlying FFmpeg invocation under --dry-run, so when the defaults don't fit (a deblock filter=strong, a chained multi-filter pass to skip generation loss, a delogo with a non-rectangular mask), copy the command, customize, and run FFmpeg directly. For the broader fqmpeg map, see the fqmpeg complete guide.