32blog · by Studio Mitsu

fqmpeg Editing: Trim, Speed, Concat, Frames

Fifteen fqmpeg verbs for editing on the time axis: trim, split, concat, crossfade, fade, fade-between, loop, reverse, boomerang, speed, interpolate, fps, freeze, repeat-frame, frame-step. Source-verified.

by omitsu · 23 min read

fqmpeg's C4 cluster is fifteen verbs that operate on the time axis of a video — cutting it (trim, split), joining it (concat, crossfade), repeating or reversing it (loop, reverse, boomerang), changing its speed (speed, interpolate, fps), fading edges (fade, fade-between), and freezing or stretching individual frames (freeze, repeat-frame, frame-step). Together they cover the editing operations you'd reach for between "I have raw footage" and "this is ready to publish."

This guide walks through each verb — the FFmpeg flags it generates, defaults, output naming, and the gotchas that come from the underlying filter (atempo's 0.5–2.0 range, xfade's offset semantics, tpad timestamp arithmetic). Everything below is verified against the source in src/commands/ of fqmpeg 3.0.1.

What you'll get out of this guide

  • A decision matrix for the 15 verbs by task (cut / join / time / transition / frame)
  • Exact FFmpeg invocation each verb generates (verified --dry-run output)
  • Defaults, allowed values, and output filenames for every command
  • Three end-to-end recipes including a smooth-slow-motion pipeline

The 15 Verbs at a Glance

The cluster splits cleanly into four task groups. Pick the group, then the verb.

| Group | Verbs | What they do |
| --- | --- | --- |
| Cuts & joins | trim, split, concat | Cut one section out, slice into equal segments, or stitch multiple files together |
| Time playback | loop, reverse, speed, boomerang | Play N times, backwards, faster/slower, or forward-then-backward |
| Transitions | crossfade, fade, fade-between | Blend two clips, fade in/out the edges, or fade-to-black between clips |
| Frame-level | freeze, repeat-frame, frame-step, interpolate, fps | Hold a single frame, hold the last frame, decimate, interpolate, or change frame rate |

Three things to know before reading on:

  1. trim is keyframe-fast, not frame-accurate. It uses -c copy so the cut happens at the nearest preceding keyframe. For exact-frame cuts you re-encode — see FFmpeg lossless cut for the trade-off in detail.
  2. speed chains atempo for big speed changes. FFmpeg's atempo audio filter only accepts 0.5–2.0 per instance; fqmpeg builds the chain automatically (e.g. 4x becomes atempo=2.0,atempo=2).
  3. interpolate is CPU-heavy. Motion-compensated interpolation runs at ~5–20× real time on a typical laptop. Reach for it only when the visible result genuinely needs smooth motion (slow-mo, frame-rate up-conversion); use fps for everything else, including conformance to a target rate.
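The atempo chaining in point 2 can be sketched as a small pure function. This is a hypothetical illustration of the splitting idea, not the actual fqmpeg source:

```javascript
// Hypothetical sketch of atempo chaining: split an arbitrary positive
// speed factor into steps that each fit FFmpeg's 0.5-2.0 atempo range.
function buildAtempoChain(factor) {
  if (!(factor > 0)) throw new Error("speed factor must be positive");
  const steps = [];
  let remaining = factor;
  while (remaining > 2.0) { steps.push(2.0); remaining /= 2.0; } // speed-up
  while (remaining < 0.5) { steps.push(0.5); remaining /= 0.5; } // slow-down
  steps.push(remaining); // the final step now lies within 0.5-2.0
  return steps.map((s) => `atempo=${s}`).join(",");
}

console.log(buildAtempoChain(4));    // atempo=2,atempo=2
console.log(buildAtempoChain(0.25)); // atempo=0.5,atempo=0.5
```

The real tool prints the chain with slightly different number formatting (atempo=2.0,atempo=2), but the split logic is the same idea.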

Cuts & Joins

trim — Cut a section out (stream-copy)

Cuts a section between --start and either --duration or --to. Uses -c copy so the trim is fast (no re-encode), but the cut snaps to the nearest preceding keyframe. For frame-accurate cuts, re-encode at the seam.

  • Source: src/commands/trim.js
  • Codec: -c copy -map 0 (passthrough, all streams)
  • Time format: seconds (30) or HH:MM:SS (00:01:30) for every time argument
  • One of --duration or --to is required — otherwise the command exits with Error: specify --duration (-d) or --to (-t).
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| -s, --start <time> | 0 | Start time |
| -d, --duration <time> | — | Length of cut from --start |
| -t, --to <time> | — | End time (mutually exclusive with --duration) |
| -o, --output <path> | <input-stem>-trimmed.<ext> | Override output |
bash
$ npx fqmpeg trim input.mp4 --start 00:00:10 --duration 00:00:30 --dry-run

  ffmpeg -i input.mp4 -ss 00:00:10 -t 00:00:30 -c copy -map 0 input-trimmed.mp4

If the cut starts on a non-keyframe, FFmpeg seeks backward to the nearest keyframe and the resulting file may be slightly longer than requested or have a frozen first frame for a few hundred milliseconds. That's the cost of -c copy. The FFmpeg lossless cut guide covers two-pass approaches that re-encode only the GOPs at the seams.
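The dual time format ("seconds or HH:MM:SS for every time argument") normalizes to plain seconds. A hypothetical parser, not the actual trim.js source:

```javascript
// Hypothetical sketch: accept plain seconds ("30", "4.5") or
// colon-separated HH:MM:SS ("00:01:30") and normalize to seconds.
function parseTime(value) {
  if (!value.includes(":")) return Number(value);
  // [HH, MM, SS] folds left: ((HH * 60) + MM) * 60 + SS
  return value.split(":").map(Number).reduce((total, part) => total * 60 + part, 0);
}

console.log(parseTime("00:01:30")); // 90
console.log(parseTime("30"));       // 30
```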

split — Slice into equal segments

Splits the input into N-second segments using FFmpeg's segment muxer. Stream-copy, so it's instant. The default output pattern uses %03d (zero-padded three digits): input-part000.mp4, input-part001.mp4, etc.

  • Source: src/commands/split.js
  • Muxer: -f segment -segment_time <s> -reset_timestamps 1
  • Codec: -c copy -map 0
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <seconds> | required | Segment duration in seconds |
| -o, --output <pattern> | <input-stem>-part%03d.<ext> | Pattern with printf-style placeholder |
bash
$ npx fqmpeg split input.mp4 60 --dry-run

  ffmpeg -i input.mp4 -c copy -map 0 -f segment -segment_time 60 -reset_timestamps 1 input-part%03d.mp4

Like trim, segment boundaries snap to keyframes — so a "60-second" segment may be 58–62 seconds long depending on GOP placement. -reset_timestamps 1 resets each segment's PTS to 0, which is what most downstream tools (HLS, DASH, players) expect.
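As a sanity check on the naming pattern, the %03d placeholder expands like this (hypothetical helper, not part of fqmpeg):

```javascript
// Hypothetical helper: expand the printf-style %03d pattern into the
// filenames the segment muxer will produce for the first `count` parts.
function segmentNames(pattern, count) {
  return Array.from({ length: count }, (_, i) =>
    pattern.replace("%03d", String(i).padStart(3, "0"))
  );
}

console.log(segmentNames("input-part%03d.mp4", 3));
// [ 'input-part000.mp4', 'input-part001.mp4', 'input-part002.mp4' ]
```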

concat — Stitch files together

Concatenates two or more video files. Two modes:

  • Demuxer mode (default, stream-copy): builds a temporary filelist.txt with absolute paths, runs -f concat -safe 0, and cleans up the file on exit. Fast, but requires identical codec/container/resolution across inputs.

  • Re-encode mode (--re-encode): uses filter_complex concat to join inputs that differ in codec or resolution. Slower but tolerant.

  • Source: src/commands/concat.js

  • Default suffix: -joined

| Argument / Option | Default | Notes |
| --- | --- | --- |
| <inputs...> | required | 2 or more video files |
| --re-encode | off | Switch to filter-based concat (use when codecs differ) |
| -o, --output <path> | <first-stem>-joined.<ext> | Override output |
bash
$ npx fqmpeg concat clip1.mp4 clip2.mp4 clip3.mp4 --dry-run

  # File list (auto-generated):
  # file '/abs/path/clip1.mp4'
  # file '/abs/path/clip2.mp4'
  # file '/abs/path/clip3.mp4'

  ffmpeg -f concat -safe 0 -i filelist.txt -c copy clip1-joined.mp4

$ npx fqmpeg concat clip1.mp4 clip2.mp4 --re-encode --dry-run

  ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex [0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[outv][outa] -map [outv] -map [outa] clip1-joined.mp4

The --dry-run output for the demuxer mode includes the auto-generated file list as a comment block — handy when debugging absolute-path issues. The actual file list is written to the input directory with a timestamped name like .fqmpeg-concat-1715450000000.txt and unlinked when the process exits, so you won't see it in your filesystem after a real run.

If demuxer-mode concat fails with Non-monotonous DTS in output stream or Could not write header, the inputs likely have different timestamps, codecs, or resolutions. Add --re-encode and try again.

Time Playback

loop — Play N times

Loops the video N times using -stream_loop. Stream-copy, so it's instant.

  • Source: src/commands/loop.js
  • Filter: -stream_loop <N-1> (FFmpeg's stream_loop counts additional loops, so loop 3 = play 3 times = stream_loop 2)
  • Default suffix: -loop<N>
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <count> | required | Number of times to play (e.g. 3 = play 3 times) |
| -o, --output <path> | <input-stem>-loop<N>.<ext> | Override output |
bash
$ npx fqmpeg loop input.mp4 3 --dry-run

  ffmpeg -stream_loop 2 -i input.mp4 -c copy input-loop3.mp4

The count is parsed as a positive number but only the integer part reaches FFmpeg (Math.floor(n) - 1). Pass 0 or a negative number and you'll get a validator error before FFmpeg starts.
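The count-to-flag arithmetic is worth pinning down. A hypothetical sketch of the validation described above:

```javascript
// Hypothetical sketch: user-facing "play N times" becomes
// -stream_loop (N - 1), since FFmpeg counts *additional* loops.
function toStreamLoop(count) {
  const n = Math.floor(count); // only the integer part reaches FFmpeg
  if (!(n >= 1)) throw new Error("count must be a positive number");
  return n - 1;
}

console.log(toStreamLoop(3));   // 2
console.log(toStreamLoop(3.9)); // 2
```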

reverse — Play backwards

Applies the reverse video filter and areverse audio filter, so video and audio both play in reverse. Use --no-audio to drop audio entirely (much faster on long clips, since areverse buffers the entire audio stream into memory).

| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| --no-audio | (audio kept) | Drop audio (faster on long clips) |
| -o, --output <path> | <input-stem>-reversed.<ext> | Override output |
bash
$ npx fqmpeg reverse input.mp4 --dry-run

  ffmpeg -i input.mp4 -vf reverse -af areverse input-reversed.mp4

speed — Change playback speed

Re-times video with setpts and audio with one or more chained atempo filters. fqmpeg handles the chaining for you so you can pass any positive multiplier.

  • Source: src/commands/speed.js
  • Video filter: setpts=(1/speed)*PTS
  • Audio filter: chained atempo (FFmpeg's atempo only accepts 0.5–2.0 per instance, so 4× becomes atempo=2.0,atempo=2, 0.25× becomes atempo=0.5,atempo=0.5)
  • Default suffix: <N>x for speed-up, slow<N>x for slow-down
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <factor> | required | Speed multiplier (e.g. 2 = 2× faster, 0.5 = half speed) |
| --no-audio | (audio kept) | Drop audio (useful for time-lapse) |
| -o, --output <path> | <input-stem>-<suffix>.<ext> | Override output |
bash
$ npx fqmpeg speed input.mp4 2 --dry-run

  ffmpeg -i input.mp4 -filter:v setpts=0.500000*PTS -filter:a atempo=2 input-2x.mp4

$ npx fqmpeg speed input.mp4 4 --dry-run

  ffmpeg -i input.mp4 -filter:v setpts=0.250000*PTS -filter:a atempo=2.0,atempo=2 input-4x.mp4

$ npx fqmpeg speed input.mp4 0.5 --dry-run

  ffmpeg -i input.mp4 -filter:v setpts=2.000000*PTS -filter:a atempo=0.5 input-slow0.5x.mp4

For time-lapse (high speed-ups, no useful audio), always pass --no-audio — chaining four or five atempo filters works but is slower and produces unlistenable artifacts anyway. For smooth slow-motion (factors below 1.0), speed 0.5 simply duplicates frames. To synthesize new in-between frames instead, chain in interpolate (Recipe 2 below).
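The video side of speed is a single PTS rescale; the six-decimal formatting seen in the dry-run outputs above can be reproduced like this (hypothetical sketch):

```javascript
// Hypothetical sketch: the setpts multiplier is the reciprocal of the
// speed factor, printed with six decimal places.
function setptsFilter(factor) {
  return `setpts=${(1 / factor).toFixed(6)}*PTS`;
}

console.log(setptsFilter(2));   // setpts=0.500000*PTS
console.log(setptsFilter(0.5)); // setpts=2.000000*PTS
```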

boomerang — Forward then reverse

Splits the video stream, reverses one copy, and concatenates forward + reverse for a seamless ping-pong loop. The video portion is re-encoded; audio is stream-copied (or dropped with --no-audio).

bash
$ npx fqmpeg boomerang input.mp4 --dry-run

  ffmpeg -i input.mp4 -filter_complex [0:v]split[fwd][rev];[rev]reverse[reversed];[fwd][reversed]concat=n=2:v=1:a=0 -c:a copy input-boomerang.mp4

The audio handling is intentionally simple: the filter graph only concatenates video, so the copied audio (-c:a copy) plays through once and ends partway into the boomerang. For Instagram-style clips this usually doesn't matter (the format is muted on autoplay anyway). If audio is part of the effect, pass --no-audio to make the silence intentional, or post-process with audio verbs to author a custom track.

Transitions

crossfade — Blend two clips with xfade

Applies one of FFmpeg's built-in xfade transitions between two videos. By default, ffprobe is run on clip1 to detect its duration and the transition is timed to start near the end of clip1 — so clip1 plays in full, then crossfades smoothly into clip2. Audio also crossfades via acrossfade.

  • Source: src/commands/crossfade.js
  • Filter (default): [0:v][1:v]xfade=transition=<type>:duration=<sec>:offset=<auto>[v];[0:a][1:a]acrossfade=d=<sec>[a]
  • Auto offset: offset = ffprobe(clip1).duration - <crossfade duration>. Override with --offset <n>
  • Transitions (21): fade, wipeleft, wiperight, wipeup, wipedown, slideleft, slideright, slideup, slidedown, circlecrop, rectcrop, distance, fadeblack, fadewhite, radial, smoothleft, smoothright, smoothup, smoothdown, squeezev, squeezeh
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input1> | required | First video |
| <input2> | required | Second video |
| <duration> | required | Crossfade duration in seconds |
| --transition <type> | fade | One of the 21 transitions above |
| --offset <seconds> | auto-detected | When the transition starts (in clip1's timeline) |
| --no-audio-fade | off | Stream-copy audio instead of acrossfade (use when clips have no audio) |
| -o, --output <path> | <input1-stem>-crossfade.<ext> | Override output |
bash
# Default: auto-detect clip1 duration via ffprobe (here clip1 is 8.5s)
$ npx fqmpeg crossfade clip1.mp4 clip2.mp4 1.5 --transition wipeleft --dry-run

  ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex [0:v][1:v]xfade=transition=wipeleft:duration=1.5:offset=7[v];[0:a][1:a]acrossfade=d=1.5[a] -map [v] -map [a] clip1-crossfade.mp4

# Manual offset, no audio crossfade (use when one clip has no audio track)
$ npx fqmpeg crossfade clip1.mp4 clip2.mp4 1.5 --offset 5 --no-audio-fade --dry-run

  ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex xfade=transition=fade:duration=1.5:offset=5 -c:a copy clip1-crossfade.mp4

The auto-offset matches the most common expectation — "play clip1 in full, then crossfade into clip2." Pass --offset <n> to start the transition earlier (overlap effect), and --no-audio-fade if either input has no audio (acrossfade errors when an audio stream is missing).
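The auto-offset arithmetic itself is one line. A hypothetical sketch (the real command probes clip1 with ffprobe first; the clamp-to-zero for clips shorter than the fade is an assumption of this sketch):

```javascript
// Hypothetical sketch: start the transition so it *ends* exactly when
// clip1 ends; clamp so short clips never produce a negative offset.
function autoOffset(clip1Duration, fadeDuration) {
  return Math.max(0, clip1Duration - fadeDuration);
}

console.log(autoOffset(8.5, 1.5)); // 7  (matches the dry-run above)
console.log(autoOffset(1.0, 1.5)); // 0  (clip shorter than the fade)
```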

fade — Fade in and/or out the edges

Adds a fade-in at the start, a fade-out at the end, or both. Both video (fade) and audio (afade) filters are emitted in lockstep, so audio levels track the visual fade.

  • Source: src/commands/fade.js
  • At least one of --in or --out is required
  • --out requires --duration (the total video length, so the filter knows when to start the fade)
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| --in <seconds> | 0 | Fade-in duration |
| --out <seconds> | 0 | Fade-out duration |
| --duration <seconds> | — | Total video length (required if --out > 0) |
| -o, --output <path> | <input-stem>-fade.<ext> | Override output |
bash
$ npx fqmpeg fade input.mp4 --in 2 --dry-run

  ffmpeg -i input.mp4 -vf fade=t=in:st=0:d=2 -af afade=t=in:st=0:d=2 input-fade.mp4

$ npx fqmpeg fade input.mp4 --in 2 --out 2 --duration 30 --dry-run

  ffmpeg -i input.mp4 -vf fade=t=in:st=0:d=2,fade=t=out:st=28:d=2 -af afade=t=in:st=0:d=2,afade=t=out:st=28:d=2 input-fade.mp4

Get the total duration first with npx fqmpeg duration input.mp4. The fade-out start time is computed automatically as duration - fadeOut. If the input has no audio track, the -af filter chain will fail — split the audio off with strip-audio first, or run plain FFmpeg without -af.
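The fade-out start arithmetic (duration - fadeOut) can be sketched as follows — a hypothetical helper mirroring the dry-run output above, not the actual fade.js source:

```javascript
// Hypothetical sketch: build the video fade chain. The fade-out start
// time is total duration minus the fade-out length.
function fadeFilters(fadeIn, fadeOut, totalDuration) {
  const parts = [];
  if (fadeIn > 0) parts.push(`fade=t=in:st=0:d=${fadeIn}`);
  if (fadeOut > 0) {
    if (totalDuration == null) throw new Error("--out requires --duration");
    parts.push(`fade=t=out:st=${totalDuration - fadeOut}:d=${fadeOut}`);
  }
  return parts.join(",");
}

console.log(fadeFilters(2, 2, 30));
// fade=t=in:st=0:d=2,fade=t=out:st=28:d=2
```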

fade-between — Fade-to-black between two clips

Fades clip1 to black, then fades from black to clip2, and concatenates. Different from crossfade — there's a black frame in the middle, not a direct blend. Audio is concatenated end-to-end (no audio crossfade).

| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input1> | required | First video |
| <input2> | required | Second video |
| --duration <n> | 1 | Fade duration in seconds (applied to both clips) |
| -o, --output <path> | <input1-stem>-faded.<ext> | Override output |
bash
$ npx fqmpeg fade-between clip1.mp4 clip2.mp4 --duration 1.5 --dry-run

  ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex [0]fade=t=out:st=0:d=1.5[v0];[1]fade=t=in:st=0:d=1.5[v1];[v0][v1]concat=n=2:v=1:a=0[v];[0:a][1:a]concat=n=2:v=0:a=1[a] -map [v] -map [a] clip1-faded.mp4

Note that the fade st=0 (start time = 0) applies to each clip's own timeline — so clip1 begins fading from its first frame, not near its end. For the usual "fade out the last 1.5s of clip1, then fade in the first 1.5s of clip2" you'd need a different filter graph; this verb's semantics fit short clips that can ramp across their whole length.

Frame-Level Operations

freeze — Hold a single frame

Freezes the video at a specified time for a specified duration using a tpad + setpts trick. Audio is stream-copied (which means audio keeps playing through the freeze).

  • Source: src/commands/freeze.js
  • Filter: tpad=stop_mode=clone:stop_duration=0,setpts='if(gte(T,<at>),if(lte(T,<at>+<hold>),<at>/TB,PTS-<hold>/TB),PTS)'
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <at> | required | Time position to freeze (seconds or HH:MM:SS) |
| <hold> | required | How long to hold the freeze (seconds) |
| -o, --output <path> | <input-stem>-freeze.<ext> | Override output |
bash
$ npx fqmpeg freeze input.mp4 5 2 --dry-run

  ffmpeg -i input.mp4 -vf tpad=stop_mode=clone:stop_duration=0,setpts='if(gte(T,5),if(lte(T,5+2),5/TB,PTS-2/TB),PTS)' -c:a copy input-freeze.mp4

The setpts expression freezes time at <at> for <hold> seconds, then resumes playback shifted by <hold>. Because audio is stream-copied, the audio runs straight through the freeze (no audio pad). For a fully synchronized freeze (audio also pauses), drop into FFmpeg directly with -af "asetpts=..." or pause-and-resume with the concat demuxer on three trimmed parts.
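The expression is mechanical enough to generate. A hypothetical sketch matching the dry-run above:

```javascript
// Hypothetical sketch: hold the presentation time at `at` while wall
// time T passes through the hold window, then shift later PTS back.
function freezeExpr(at, hold) {
  return `if(gte(T,${at}),if(lte(T,${at}+${hold}),${at}/TB,PTS-${hold}/TB),PTS)`;
}

console.log(freezeExpr(5, 2));
// if(gte(T,5),if(lte(T,5+2),5/TB,PTS-2/TB),PTS)
```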

repeat-frame — Hold the last frame

Pads the end of the video by repeating the final frame for N seconds. Useful for adding a still title card or extending a short clip to a target duration. Audio is stream-copied (so audio ends at its original length and the held frame is silent).

| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <seconds> | required | Duration to hold the last frame |
| -o, --output <path> | <input-stem>-hold.<ext> | Override output |
bash
$ npx fqmpeg repeat-frame input.mp4 3 --dry-run

  ffmpeg -i input.mp4 -vf tpad=stop_mode=clone:stop_duration=3 -c:a copy input-hold.mp4

frame-step — Keep every Nth frame

Decimates the video by keeping every Nth frame and reflowing timestamps. Always strips audio (-an) — the original timing no longer applies. Because setpts=N/FRAME_RATE/TB reflows the timestamps, the output plays back at the input's frame rate with N× fewer source frames per second of output — i.e. an N× time-lapse with no smoothing.

| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <n> | required | Keep every Nth frame (positive integer) |
| -o, --output <path> | <input-stem>-step.<ext> | Override output |
bash
$ npx fqmpeg frame-step input.mp4 5 --dry-run

  ffmpeg -i input.mp4 -vf select='not(mod(n\,5))',setpts=N/FRAME_RATE/TB -an input-step.mp4

For a smoother time-lapse (rather than a hard decimation), use speed with --no-audio instead — speed rescales the timestamps continuously, while frame-step is effectively nearest-neighbor in the time dimension.
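To see exactly which frames survive select='not(mod(n\,N))', note that FFmpeg indexes frames from zero (hypothetical illustration):

```javascript
// Hypothetical illustration: the source frame indices that pass
// select='not(mod(n\,N))' -- every Nth frame, starting at frame 0.
function keptFrames(totalFrames, n) {
  return Array.from({ length: totalFrames }, (_, i) => i).filter((i) => i % n === 0);
}

console.log(keptFrames(12, 5)); // [ 0, 5, 10 ]
```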

interpolate — Smooth slow motion via motion compensation

Generates synthetic in-between frames using FFmpeg's minterpolate with motion-compensated interpolation. This is the "smooth slow-mo" verb — drop a 30 fps source to half speed and interpolate to 60 fps and the result looks fluid (rather than the duplicated frames speed 0.5 produces).

  • Source: src/commands/interpolate.js
  • Filter: minterpolate=fps=<target>:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1
  • Slow. Motion-compensated interpolation is CPU-intensive — expect 5–20× real-time on a typical laptop.
| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <target-fps> | required | Target frame rate (e.g. 60, 120) |
| -o, --output <path> | <input-stem>-<N>fps-interp.<ext> | Override output |
bash
$ npx fqmpeg interpolate input.mp4 60 --dry-run

  ffmpeg -i input.mp4 -vf minterpolate=fps=60:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1 -c:a copy input-60fps-interp.mp4

mi_mode=mci is motion-compensated interpolation (the highest quality and slowest mode). mc_mode=aobmc is adaptive overlapped block motion compensation, me_mode=bidir is bidirectional motion estimation, vsbmc=1 enables variable-size block motion compensation. These defaults trade speed for quality; if the result has shimmering or ghosting on fast motion, drop to mi_mode=blend or mi_mode=dup by editing the --dry-run output and running FFmpeg directly.

fps — Change frame rate (no interpolation)

The simple version: set the output frame rate by dropping or duplicating frames. No motion compensation, so speed-ups look choppy and slow-downs look stuttery — but it's instant where interpolate is slow.

| Argument / Option | Default | Notes |
| --- | --- | --- |
| <input> | required | Input video |
| <rate> | required | Target frame rate (e.g. 24, 30, 60) |
| -o, --output <path> | <input-stem>-<N>fps.<ext> | Override output |
bash
$ npx fqmpeg fps input.mp4 24 --dry-run

  ffmpeg -i input.mp4 -vf fps=24 -c:a copy input-24fps.mp4

When to use fps vs interpolate: fps for quick conformance to a target rate (e.g. converting 60 fps gameplay capture to 30 fps for a tutorial), interpolate when you actually want smooth motion at a higher rate (slow-motion playback, smooth pan).

Real-World Recipes

Each recipe chains multiple verbs into a workflow you'd actually use.

Recipe 1: Stitch a 3-clip sequence with crossfades and an outro fade

A common edit pattern: chain three clips together with smooth crossfades between each pair, then a 1-second fade-out at the very end. Each crossfade call auto-detects the previous clip's duration and times the transition correctly.

bash
# Step 1: crossfade clip1 and clip2 (1s transition)
npx fqmpeg crossfade clip1.mp4 clip2.mp4 1 -o c12.mp4

# Step 2: crossfade c12 with clip3
npx fqmpeg crossfade c12.mp4 clip3.mp4 1 -o c123.mp4

# Step 3: total length, then add a 1s fade-out
total=$(npx fqmpeg duration c123.mp4 | awk -F: '{print ($1*3600)+($2*60)+$3}')
npx fqmpeg fade c123.mp4 --out 1 --duration "$total" -o final.mp4

Each intermediate file is re-encoded (xfade is a filter graph, not a stream-copy operation), so encoding costs add up across the chain. For long clips, consider doing the whole chain in one filter graph by editing the --dry-run output of step 2 and adding the third input directly. Pass --no-audio-fade if any of the three clips has no audio stream — acrossfade errors out when an input is missing audio.

Recipe 2: Smooth slow-motion from a 30 fps source

Take a clip, slow it to half speed, and interpolate to 60 fps so it doesn't stutter. The two-step pipeline produces something close to what a 120-fps phone capture would look like at half-speed playback.

bash
# Step 1: slow to 0.5x (no audio - slow audio rarely sounds good)
npx fqmpeg speed source.mp4 0.5 --no-audio
# → source-slow0.5x.mp4

# Step 2: interpolate to 60 fps for smoothness
npx fqmpeg interpolate source-slow0.5x.mp4 60
# → source-slow0.5x-60fps-interp.mp4

Step 2 is the slow part — motion-compensated interpolation runs at ~5–20× real time. For batches, do the slow runs overnight or on a beefier machine. If the source is already 60 fps, skip step 2 — speed 0.5 gives you 30-fps-effective output, which is already smooth at 60 fps display.

Recipe 3: Highlight reel from a 1-hour stream

Pull three highlight clips out of a long stream, fade-to-black between them, and add a one-second fade-in at the very start.

bash
# Step 1: cut three highlights with trim
npx fqmpeg trim stream.mp4 --start 00:12:30 --duration 00:00:20 -o h1.mp4
npx fqmpeg trim stream.mp4 --start 00:34:15 --duration 00:00:25 -o h2.mp4
npx fqmpeg trim stream.mp4 --start 00:51:00 --duration 00:00:15 -o h3.mp4

# Step 2: fade-between h1 and h2
npx fqmpeg fade-between h1.mp4 h2.mp4 --duration 0.8 -o h12.mp4

# Step 3: fade-between (h1+h2) and h3
npx fqmpeg fade-between h12.mp4 h3.mp4 --duration 0.8 -o h123.mp4

# Step 4: total length, then add a 1s fade-in
total=$(npx fqmpeg duration h123.mp4 | awk -F: '{print ($1*3600)+($2*60)+$3}')
# → 60 (the trims are 20s + 25s + 15s; fade-between concatenates full clips)
npx fqmpeg fade h123.mp4 --in 1 --duration "$total" -o reel.mp4

Each trim is keyframe-snapped — for cuts that need to land on a specific frame (e.g. mid-action), see the FFmpeg lossless cut guide mentioned at the top. The fades in steps 2–4 re-encode, so the keyframe snapping from step 1 doesn't compound across the rest of the pipeline.

Frequently Asked Questions

Why does loop 3 produce -stream_loop 2?

Because FFmpeg's -stream_loop counts additional loops, not total plays. -stream_loop 0 means "play once," -stream_loop 2 means "play once then loop twice more = 3 total plays." fqmpeg subtracts 1 from the user-facing count to match the user's intent ("play 3 times").

trim is fast but the cut isn't where I asked. What's going on?

trim uses -c copy, which means the output starts at the nearest preceding keyframe before your --start time. If the GOP size is 250 frames and you ask to start at 5 seconds, you might actually start at 4.0 seconds. The trade-off is speed: stream-copy is essentially I/O-bound (instant on SSDs), while frame-accurate cuts re-encode the GOPs at the seams. The FFmpeg lossless cut guide walks through the re-encode-the-edges approach.

concat fails with "Non-monotonous DTS" — what do I do?

The inputs have different timestamps, codecs, or resolutions, and stream-copy concat can't merge them. Re-run with --re-encode. That switches to filter_complex concat, which decodes and re-encodes everything (slower, but tolerates mismatches). If you have many clips and only one is the odd one out, re-encode just that clip first with compress, then run the demuxer concat on the homogenized set.

speed chains atempo for high speed-ups — does that hurt audio quality?

Yes, audibly so. Each atempo instance applies a phase-vocoder time stretch; chaining two or three is fine for casual content but produces noticeable artifacts (warbling on tonal content like music). For time-lapse use cases (high speed-ups), pass --no-audio and add a music track later. For modest speed-ups (1.0–2.0×) you only get one atempo and quality is fine.

Does frame-step give the same result as speed?

No. frame-step N keeps every Nth source frame and reflows timestamps so output plays at the input's frame rate — a hard decimation with no smoothing. speed N rescales timestamps continuously, so the output plays at the input's frame rate but with all source frames preserved (in a faster sequence). For time-lapse: use speed --no-audio for smoother motion, frame-step when you specifically want a stuttery decimated look.

Can I use freeze and keep the audio paused too?

Not with the verb as-is — audio is stream-copied, so it keeps playing through the freeze. For a synchronized pause, the cleanest pattern is to split the input into three pieces with trim (before the freeze, the freeze frame as a 1-frame clip, after the freeze), apply repeat-frame to the freeze frame for the hold duration, and concat the three back together with concat --re-encode.

interpolate is slow — when is it actually worth it?

When the visible result needs to be smooth motion: slow-motion playback of fast action, frame-rate up-conversion of an animated clip going to a high-refresh display, or rescuing 24 fps source for a 60 fps platform. It's not worth it for re-encoding a tutorial screencast (no fast motion to smooth), for time-lapse outputs (the artifacts are wrong-direction anyway), or anywhere you'd be re-uploading to a platform that re-encodes (YouTube, Instagram) — the re-encode will undo most of the smoothness gain.

How do I batch-trim a folder of videos to a fixed length?

Standard shell loop:

bash
for v in raw/*.mp4; do
  npx fqmpeg trim "$v" --start 0 --duration 60 -o "trimmed/$(basename "$v")"
done

Each output lands in trimmed/ with the same filename. For frame-accurate batch cuts, replace trim with the re-encode pattern from the lossless cut guide.

Wrapping Up

The fifteen C4 verbs cover the time-axis editing operations you reach for in a typical edit pass:

  • trim, split, concat for cuts and joins (stream-copy when codecs match, --re-encode when they don't)
  • loop, reverse, speed, boomerang for time playback (note the atempo chaining for speed and the audio-buffer cost of reverse)
  • crossfade, fade, fade-between for transitions (crossfade auto-detects clip1 duration and crossfades audio in parallel; pass --offset or --no-audio-fade to override)
  • freeze, repeat-frame, frame-step, interpolate, fps for frame-level work (use interpolate only when smoothness is actually needed; it's slow)

Every verb prints its underlying FFmpeg invocation under --dry-run, so when the defaults don't fit (the freeze audio handling, a custom crossfade --offset), you can copy the command, customize, and run FFmpeg directly. For frame-accurate trimming or the broader fqmpeg map, see the lossless cut guide and the fqmpeg complete guide.