32blog by Studio Mitsu

FFmpeg HDR to SDR: Tonemapping Guide That Actually Works

Convert HDR video to SDR with FFmpeg using zscale+tonemap or libplacebo. Covers hable, reinhard, mobius algorithms, GPU acceleration, Dolby Vision, and common fixes for washed-out colors.

by omitsu · 17 min read

Converting HDR video to SDR with FFmpeg requires tonemapping — a process that compresses the wide brightness and color range of HDR (BT.2020/PQ) into the narrower SDR range (BT.709). The quickest reliable command is:

bash
ffmpeg -i input_hdr.mp4 \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 18 -preset slow -c:a copy output_sdr.mp4

This guide walks through two pipelines — the CPU-based zscale+tonemap and the GPU-accelerated libplacebo — so you can pick the right approach for your workflow.

What you'll learn

  • Why HDR→SDR conversion needs tonemapping (not just re-encoding)
  • The zscale+tonemap CPU pipeline step by step
  • The libplacebo GPU pipeline with Vulkan
  • Which tonemapping algorithm to use (hable, reinhard, mobius, bt.2390)
  • How to handle HDR10, HLG, HDR10+, and Dolby Vision
  • GPU acceleration with VAAPI, OpenCL, and NVENC
  • Fixing washed-out colors, banding, and metadata issues

Why You Can't Just Re-encode HDR Video

Ever tried converting an HDR video and ended up with something that looks worse than a VHS tape? If you throw an HDR video at ffmpeg -i hdr.mp4 -c:v libx264 output.mp4, the result looks terrible — washed out, with crushed shadows and blown highlights. This is one of the most common questions on Reddit's r/ffmpeg and Doom9 forums, and the answer is always the same: you need tonemapping. Here's why.

HDR video differs from SDR in four key properties that SDR displays can't interpret correctly:

| Property | HDR (typical) | SDR |
| --- | --- | --- |
| Transfer function | PQ (SMPTE ST 2084) or HLG | BT.709 gamma (~2.4) |
| Color primaries | BT.2020 (wide gamut) | BT.709 (standard gamut) |
| Bit depth | 10-bit | 8-bit |
| Peak brightness | 1,000–10,000 nits | ~100 nits |

Simply changing the container or codec doesn't convert these properties. You need a tonemapping step that mathematically remaps the wide brightness range into the SDR range while preserving as much visual detail as possible — plus a color space conversion from BT.2020 to BT.709.

Check Your Input with ffprobe

Before converting, verify that your source is actually HDR:

bash
ffprobe -v quiet -show_streams -select_streams v:0 input_hdr.mp4 2>&1 | grep -E "color_|pix_fmt"

You should see something like:

pix_fmt=yuv420p10le
color_space=bt2020nc
color_transfer=smpte2084
color_primaries=bt2020

  • smpte2084 → HDR10/HDR10+ (PQ transfer)
  • arib-std-b67 → HLG
  • bt2020nc → BT.2020 non-constant luminance matrix
  • yuv420p10le → 10-bit 4:2:0

If you see bt709 for all color fields, the file is already SDR — no conversion needed.

For HDR10 content with static metadata (MaxCLL/MaxFALL), you can also check with ffprobe -v quiet -show_frames -read_intervals "%+#1" input.mp4 | grep -E "mastering|content_light" — this metadata helps tonemapping algorithms make better decisions about how to compress the brightness range.
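
If you convert HDR files often, the checks above can be folded into a small helper script. A sketch, with our own function names (classify_transfer and probe_transfer are not FFmpeg conventions):

```shell
#!/bin/bash
# Sketch: wrap the ffprobe checks in a helper. Function names
# (classify_transfer, probe_transfer) are ours, not FFmpeg conventions.

# Map a color_transfer tag to a human-readable verdict.
classify_transfer() {
  case "$1" in
    smpte2084)    echo "HDR10/HDR10+ (PQ)" ;;
    arib-std-b67) echo "HLG" ;;
    bt709|"")     echo "SDR (or untagged)" ;;
    *)            echo "unknown: $1" ;;
  esac
}

# Read the first video stream's transfer characteristic.
probe_transfer() {
  ffprobe -v quiet -select_streams v:0 \
    -show_entries stream=color_transfer \
    -of default=noprint_wrappers=1:nokey=1 "$1"
}

# Only probe when a file argument was actually supplied.
if [ -n "${1:-}" ] && [ -f "${1:-}" ]; then
  classify_transfer "$(probe_transfer "$1")"
fi
```

Run it as `./check-hdr.sh input.mkv` and branch your conversion script on the verdict.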

Pipeline 1: zscale + tonemap (CPU)

This is the most compatible approach — it works on any FFmpeg build that includes the zscale (from zimg) and tonemap filters. No GPU required.

The full filter chain

bash
ffmpeg -i input_hdr.mp4 \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy -movflags +faststart \
  output_sdr.mp4

Let's break down each filter in the chain:

Step 1: Linearize the transfer function

zscale=t=linear:npl=100

Converts the PQ (perceptual quantizer) curve to linear light. The npl=100 parameter sets the nominal peak luminance to 100 nits (standard SDR reference). This is the anchor point for the tonemapping curve — everything above 100 nits gets compressed.

Step 2: Convert to floating-point RGB

format=gbrpf32le

Switches to 32-bit floating-point planar RGB. This intermediate format prevents precision loss during the color math. The tonemap filter needs RGB input, and floating-point avoids banding artifacts from integer rounding.

Step 3: Convert color primaries

zscale=p=bt709

Maps colors from the BT.2020 wide gamut to the BT.709 standard gamut. Colors that fall outside BT.709's gamut get clipped to the nearest representable color. This is the step where you might lose some ultra-saturated greens and reds that only exist in BT.2020.

Step 4: Apply tonemapping

tonemap=hable:desat=0

The Hable filmic curve compresses the brightness range. desat=0 disables desaturation of bright highlights — without this, bright areas lose their color and turn grayish-white.

Step 5: Set output color properties

zscale=t=bt709:m=bt709:r=tv

Applies the BT.709 gamma curve (t=bt709), sets the YCbCr matrix to BT.709 (m=bt709), and constrains output to TV range (16–235), which most players expect.

Step 6: Convert to 8-bit YUV

format=yuv420p

Final conversion to YUV 4:2:0 8-bit — the standard SDR pixel format with maximum player compatibility.
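
The six steps above can be reassembled from named pieces, which keeps the chain readable in scripts. A sketch; the variable names are ours:

```shell
#!/bin/bash
# Sketch: the six-step chain, assembled from named pieces so each
# stage stays readable. Variable names are ours, not FFmpeg's.
linearize="zscale=t=linear:npl=100"     # 1. PQ -> linear light, 100-nit anchor
to_float="format=gbrpf32le"             # 2. 32-bit float RGB intermediate
to_709_gamut="zscale=p=bt709"           # 3. BT.2020 -> BT.709 primaries
tonemap="tonemap=hable:desat=0"         # 4. filmic compression, no desaturation
tag_709="zscale=t=bt709:m=bt709:r=tv"   # 5. BT.709 gamma/matrix, TV range
to_8bit="format=yuv420p"                # 6. 8-bit 4:2:0 output

chain="$linearize,$to_float,$to_709_gamut,$tonemap,$tag_709,$to_8bit"
echo "$chain"

# Run it (skipped when no input file is present):
[ -f input_hdr.mp4 ] && ffmpeg -i input_hdr.mp4 -vf "$chain" \
  -c:v libx264 -crf 18 -preset slow -c:a copy output_sdr.mp4 || true
```

The assembled string is identical to the one-liner at the top of this guide.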

Pipeline 2: libplacebo (GPU via Vulkan)

libplacebo is the rendering engine behind mpv. It handles tonemapping, gamut mapping, dithering, and color management in a single GPU-accelerated filter — and produces noticeably better results than the CPU pipeline for most content.

Basic libplacebo command

bash
ffmpeg -init_hw_device vulkan \
  -i input_hdr.mp4 \
  -vf "libplacebo=tonemapping=hable:peak_detect=true:gamut_mode=perceptual:colorspace=bt709:color_trc=bt709:color_primaries=bt709:range=limited:dithering=blue:format=yuv420p" \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy output_sdr.mp4

Why libplacebo is better

| Feature | zscale + tonemap | libplacebo |
| --- | --- | --- |
| Peak detection | Static (metadata only) | Dynamic (per-frame histogram) |
| Gamut mapping | Basic desaturation | 6+ modes (perceptual, relative, saturation) |
| Dithering | None (relies on format filter) | Built-in (blue noise, ordered) |
| Algorithms | 7 | 12 (including BT.2390, ST 2094-40) |
| Dolby Vision | Not supported | Profile 5/8.x supported |
| Scene change | None | Detection with threshold tuning |
| Contrast recovery | None | Built-in (default 0.30) |
| Processing | CPU | GPU (Vulkan) |

The dynamic peak detection is the biggest practical difference. Instead of relying on static MaxCLL metadata (which is often inaccurate or missing), libplacebo analyzes each frame's actual brightness histogram and adjusts the tonemapping curve in real time. This prevents scenes from being unnecessarily dark or washed out.

libplacebo with hardware decode (NVIDIA)

bash
ffmpeg -init_hw_device vulkan=vk,disable_multiplane=1 \
  -filter_hw_device vk \
  -hwaccel cuda -hwaccel_output_format cuda \
  -i input_hdr.mp4 \
  -vf "hwupload=derive_device=vulkan,libplacebo=tonemapping=hable:peak_detect=true:colorspace=bt709:color_primaries=bt709:color_trc=bt709:gamut_mode=perceptual:format=yuv420p,hwdownload,format=yuv420p" \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy output_sdr.mp4

This decodes on the NVIDIA GPU (CUDA), uploads to Vulkan for tonemapping, then downloads back for CPU encoding. For a fully GPU-resident workflow, replace libx264 with h264_nvenc -cq 22.

Tonemapping Algorithms Compared

FFmpeg's built-in tonemap filter offers 7 algorithms. Here's when to use each:

| Algorithm | Behavior | Best for |
| --- | --- | --- |
| hable | Filmic S-curve. Preserves shadow and highlight detail | General-purpose. Community default |
| reinhard | Global luminance preservation. Slightly brighter output | Content where brightness matters more than contrast |
| mobius | Preserves in-range color accuracy, smooth rolloff for out-of-range | Color-critical work |
| clip | Hard clip at the boundary. Maximum color accuracy for in-range values | Low dynamic range HDR (peak < 400 nits) |
| linear | Linear scaling of the entire range | Special effects, not for normal viewing |
| gamma | Logarithmic transfer between curves | Niche use cases |
| none | No tonemapping, only desaturation of out-of-range values | Testing/debugging |

For most HDR→SDR conversions, start with hable + desat=0. This combination is practically gospel on Stack Overflow and Doom9 forums — and for good reason. If the result looks too dark (hable compresses aggressively), try reinhard which produces a brighter image at the cost of some contrast. Media servers like Jellyfin default to reinhard for this reason.
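
A quick way to decide for your own footage is to render the same short window with each candidate algorithm and compare them by eye. A sketch, assuming a test file named input_hdr.mp4; the 10-second window and output naming are our choices:

```shell
#!/bin/bash
# Sketch: render the same 10-second window with several algorithms and
# compare by eye. Input name, window, and output naming are our choices.
IN=input_hdr.mp4
for algo in hable reinhard mobius clip; do
  out="sample_${algo}.mp4"
  vf="zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709"
  vf="$vf,tonemap=${algo}:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p"
  echo "$out <- tonemap=$algo"
  # Grab 10 s starting at the 1-minute mark; skipped when no input exists.
  [ -f "$IN" ] && ffmpeg -y -ss 60 -t 10 -i "$IN" -vf "$vf" \
    -c:v libx264 -crf 18 -an "$out" || true
done
```

Pick a window that has both bright highlights and deep shadows, since that is where the algorithms diverge most.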

libplacebo-exclusive algorithms

libplacebo adds several algorithms beyond what the built-in filter offers:

  • bt.2390 — ITU-R BT.2390 EETF (Electrical-Electrical Transfer Function). The broadcast industry standard for HDR→SDR conversion. Uses a Hermite spline rolloff
  • bt.2446a — ITU-R BT.2446 Method A. Designed for mastered HDR content where preserving the creative intent matters
  • st2094-40 — Uses SMPTE ST 2094-40 dynamic metadata (HDR10+) for scene-by-scene tonemapping
  • auto — libplacebo's default. Uses internal heuristics to pick the best algorithm based on input metadata

Handling Different HDR Formats

HDR10 (static metadata)

The most common format. Uses PQ transfer with static MaxCLL/MaxFALL metadata:

bash
# zscale+tonemap — works with all HDR10 content
ffmpeg -i hdr10_input.mkv \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 18 -c:a copy output_sdr.mp4

HLG (Hybrid Log-Gamma)

HLG is backwards-compatible with SDR by design, so the conversion is simpler. You still want to tonemap for best results:

bash
ffmpeg -i hlg_input.mkv \
  -vf "zscale=tin=arib-std-b67:t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 18 -c:a copy output_sdr.mp4

Note the tin=arib-std-b67 to explicitly tell zscale that the input uses HLG transfer.
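
If your sources mix PQ and HLG, you can probe the transfer tag and inject tin only when needed, so one script handles both. A sketch with a hypothetical tin_for helper:

```shell
#!/bin/bash
# Sketch: choose the zscale input-transfer hint from the probed tag so one
# script handles both PQ and HLG sources. tin_for is a hypothetical helper.
tin_for() {
  case "$1" in
    arib-std-b67) echo "tin=arib-std-b67:" ;;  # HLG needs the explicit hint
    *)            echo "" ;;                   # PQ is read from stream metadata
  esac
}

IN=input.mkv
trc=""
if [ -f "$IN" ]; then
  trc=$(ffprobe -v quiet -select_streams v:0 \
        -show_entries stream=color_transfer \
        -of default=noprint_wrappers=1:nokey=1 "$IN")
fi
vf="zscale=$(tin_for "$trc")t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p"
echo "$vf"
```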

HDR10+ (dynamic metadata)

HDR10+ adds per-scene brightness metadata on top of HDR10. The built-in tonemap filter ignores this extra metadata, but libplacebo can use it:

bash
ffmpeg -init_hw_device vulkan \
  -i hdr10plus_input.mkv \
  -vf "libplacebo=tonemapping=st2094-40:peak_detect=true:colorspace=bt709:color_primaries=bt709:color_trc=bt709:format=yuv420p" \
  -c:v libx264 -crf 18 -c:a copy output_sdr.mp4

The st2094-40 algorithm reads the dynamic metadata and adjusts tonemapping per scene — dark scenes stay dark, bright scenes get proper highlight compression.

Dolby Vision

Dolby Vision support in FFmpeg is limited but improving. libplacebo can handle Profile 5 and 8.x:

bash
ffmpeg -init_hw_device vulkan \
  -i dolby_vision_input.mkv \
  -vf "libplacebo=tonemapping=hable:apply_dolbyvision=true:peak_detect=true:colorspace=bt709:color_primaries=bt709:color_trc=bt709:format=yuv420p" \
  -c:v libx264 -crf 18 -c:a copy output_sdr.mp4

GPU Acceleration Options

The CPU tonemap pipeline processes 4K content at roughly 10 fps. If that's too slow, you have several GPU options:
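
Before reaching for a GPU, it's worth measuring what the CPU chain actually does on your content. FFmpeg's -benchmark flag together with the null muxer times the pipeline without writing a file; a sketch, assuming a sample named input_hdr.mp4:

```shell
#!/bin/bash
# Sketch: time the CPU chain on your own hardware. "-f null -" discards the
# encoded output; -benchmark prints real/user time when the run finishes.
VF="zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p"
cmd="ffmpeg -benchmark -i input_hdr.mp4 -vf \"$VF\" -an -f null -"
echo "$cmd"
# Execute only when ffmpeg and a sample file are actually available.
if command -v ffmpeg >/dev/null && [ -f input_hdr.mp4 ]; then
  ffmpeg -benchmark -i input_hdr.mp4 -vf "$VF" -an -f null -
fi
```

Divide the frame count by the reported real time to get your baseline fps before comparing GPU options.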

OpenCL (AMD/NVIDIA/Intel)

bash
ffmpeg -init_hw_device opencl=ocl \
  -filter_hw_device ocl \
  -i input_hdr.mp4 \
  -vf "format=p010,hwupload,tonemap_opencl=tonemap=hable:desat=0:t=bt709:m=bt709:p=bt709:format=nv12,hwdownload,format=nv12" \
  -c:v libx264 -crf 18 -c:a copy output_sdr.mp4

The tonemap_opencl filter works on most GPUs but requires P010 (10-bit) input format.

VAAPI (Intel/AMD)

bash
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
  -i input_hdr.mp4 \
  -vf "tonemap_vaapi=format=nv12:t=bt709:m=bt709:p=bt709" \
  -c:v h264_vaapi -qp 18 -c:a copy output_sdr.mp4

The tonemap_vaapi filter keeps everything on the GPU — decode, tonemap, and encode in hardware.

Full NVIDIA pipeline (NVDEC → Vulkan → NVENC)

bash
ffmpeg -init_hw_device vulkan=vk,disable_multiplane=1 \
  -filter_hw_device vk \
  -hwaccel cuda -hwaccel_output_format cuda \
  -i input_hdr.mp4 \
  -vf "hwupload=derive_device=vulkan,libplacebo=tonemapping=hable:peak_detect=true:colorspace=bt709:color_primaries=bt709:color_trc=bt709:format=yuv420p,hwupload=derive_device=cuda" \
  -c:v h264_nvenc -cq 22 -preset p4 \
  -c:a copy output_sdr.mp4

This is the fastest option on NVIDIA hardware — hardware decode (NVDEC), GPU tonemapping (Vulkan/libplacebo), and hardware encode (NVENC). Expect 60+ fps for 4K content on modern GPUs.

Performance comparison (approximate, 4K HEVC HDR10 → H.264 SDR)

| Pipeline | Speed | Quality |
| --- | --- | --- |
| zscale + tonemap (CPU) | ~10 fps | Good |
| tonemap_opencl (GPU) | ~40 fps | Good |
| tonemap_vaapi (Intel iGPU) | ~30 fps | Acceptable |
| libplacebo Vulkan (GPU) | ~25 fps | Best |
| NVDEC → libplacebo → NVENC | ~60 fps | Best |

Choosing the Right Encoder for SDR Output

After tonemapping, you need to encode the SDR result. Here's a quick reference:

| Encoder | CRF/CQ range | Use case |
| --- | --- | --- |
| libx264 -crf 18 -preset slow | 18–22 | Maximum compatibility. Plays on everything |
| libx265 -crf 22 -preset medium | 20–24 | ~40% smaller files than H.264 at same quality |
| libsvtav1 -crf 32 -preset 4 | 28–36 | Best compression. Growing device support |
| h264_nvenc -cq 22 -preset p4 | 20–26 | Hardware encode, fast but larger files |

For archiving, libx265 or libsvtav1 make sense. For sharing or quick previews, libx264 is safest.

bash
# SVT-AV1 example — great quality-to-size ratio
ffmpeg -i input_hdr.mp4 \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libsvtav1 -crf 32 -preset 4 \
  -svtav1-params tune=0 \
  -c:a libopus -b:a 128k \
  output_sdr.mkv

Batch Processing

Converting multiple HDR files with a shell loop:

bash
#!/bin/bash
# batch-hdr-to-sdr.sh — Convert all .mkv HDR files in current directory

for f in *.mkv; do
  echo "Converting: $f"
  ffmpeg -y -i "$f" \
    -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
    -c:v libx264 -crf 18 -preset medium \
    -c:a copy \
    -movflags +faststart \
    "${f%.mkv}_sdr.mp4"
done
echo "Done. Converted $(ls -1 *_sdr.mp4 2>/dev/null | wc -l) files."

For Python-based batch processing with progress tracking, see the FFmpeg Python batch automation guide.

Fixing Common Problems

If you've searched "ffmpeg hdr to sdr washed out" — you're not alone. This is by far the most asked-about issue in HDR conversion. Here are the problems you'll likely hit and how to fix them.

Washed-out colors

The most common complaint — practically a rite of passage for anyone converting HDR for the first time. Usually caused by one of:

  1. Missing tonemapping — re-encoding without the tonemap filter
  2. High desaturation — the default desat=2.0 is aggressive. Set desat=0
  3. Wrong filter order — linearize before tonemapping, not after

Fix: use the full zscale+tonemap chain with desat=0, or switch to libplacebo which handles this automatically.

Color banding (posterization)

Visible in gradients like skies. Caused by 10-bit → 8-bit quantization.

Fix options:

  • Use libplacebo with dithering=blue (best)
  • Keep output at 10-bit: format=yuv420p10le + libx265 -crf 22 (H.265 and AV1 encoders handle 10-bit natively)
  • Add film grain to mask banding: libsvtav1 -svtav1-params film-grain=8
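
The 10-bit route from the list above, spelled out as a complete command. The chain is the same as before except for the final pixel format; output name is our choice:

```shell
#!/bin/bash
# Sketch: the 10-bit escape hatch. Same chain as before, but the final
# format stays 10-bit and goes to a 10-bit-capable encoder (libx265 here).
VF="zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p10le"
echo "$VF"
# Skipped when no input file is present.
[ -f input_hdr.mp4 ] && ffmpeg -i input_hdr.mp4 -vf "$VF" \
  -c:v libx265 -crf 22 -preset medium -c:a copy output_sdr_10bit.mkv || true
```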

Output is too dark

Hable's filmic curve compresses highlights aggressively. Some content ends up darker than expected.

Fix:

  • Try tonemap=reinhard:desat=0 — produces brighter output
  • Adjust npl (nominal peak luminance): lower values brighten the output. Try npl=50 if the image is too dark
  • With libplacebo: contrast_recovery=0.5 can bring back some midtone contrast

HDR metadata still attached

Some players detect leftover HDR metadata and try to apply their own tonemapping on top of yours. This causes double-processing artifacts.

Fix: strip the side data after tonemapping:

bash
# Append to your filter chain, just before the final format=yuv420p
...,sidedata=delete

ffprobe still shows BT.2020 after conversion

The output file's color metadata might not be set correctly. Add explicit output tagging:

bash
ffmpeg -i input_hdr.mp4 \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 18 \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
  -c:a copy output_sdr.mp4

The -colorspace, -color_primaries, and -color_trc flags set the correct metadata on the output stream.
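
After tagging, it's worth verifying that the output really reports BT.709. A sketch with our own helper names; the count_bt709 check simply expects three bt709 lines from ffprobe:

```shell
#!/bin/bash
# Sketch: verify the converted file reports BT.709 on all three color fields.
# Helper names are ours; a correctly tagged file yields three "bt709" lines.
check_sdr_tags() {
  ffprobe -v quiet -select_streams v:0 \
    -show_entries stream=color_space,color_transfer,color_primaries \
    -of default=noprint_wrappers=1 "$1"
}
count_bt709() { grep -c "bt709"; }  # pure-shell half, testable without ffprobe

# Prints 3 for a correctly tagged file; skipped when the file is absent.
[ -f output_sdr.mp4 ] && check_sdr_tags output_sdr.mp4 | count_bt709 || true
```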

Understanding the Color Science

If you want to understand why the filter chain works the way it does, here's the color science behind it.

The three axes of color conversion

| Axis | HDR value | SDR value | What it controls |
| --- | --- | --- | --- |
| Transfer (EOTF/OETF) | PQ (ST 2084) or HLG (ARIB STD-B67) | BT.709 gamma | How brightness is encoded as signal values |
| Primaries (color gamut) | BT.2020 | BT.709 | Which real-world colors can be represented |
| Matrix (YCbCr coefficients) | bt2020nc | bt709 | How RGB maps to luma + chroma channels |

Each axis is converted independently in the zscale+tonemap pipeline. libplacebo handles all three in one pass internally.

Why linearize first?

The PQ transfer function is perceptually uniform — equal steps in signal value correspond to equal steps in perceived brightness. But tonemapping math works in linear light where doubling the value doubles the physical light intensity. If you tonemap in PQ space, the curve distorts shadows and highlights non-uniformly.

Why floating-point?

10-bit integer gives you 1,024 levels. After converting to linear light, the distribution becomes extremely non-uniform — most values cluster near zero. Floating-point avoids the precision loss that would cause banding in dark areas.

FAQ

What's the difference between HDR10 and HDR10+?

HDR10 uses static metadata — a single brightness value (MaxCLL/MaxFALL) for the entire video. HDR10+ adds dynamic metadata that changes per scene, so a dark movie scene and a bright outdoor scene each get optimized tonemapping. FFmpeg's built-in tonemap ignores the dynamic metadata, but libplacebo's st2094-40 algorithm can use it.

Which tonemapping algorithm should I use?

Start with hable (filmic curve) + desat=0. It preserves detail in both shadows and highlights. If the result is too dark, try reinhard for brighter output. For broadcast work, use bt.2390 via libplacebo — it's the ITU standard. For HDR10+ content, use st2094-40 which leverages dynamic metadata.

Can I tonemap Dolby Vision content?

Partially. libplacebo supports Dolby Vision Profile 5 and 8.x with apply_dolbyvision=true. Profile 7 (dual-layer) is not fully supported — you'll need to extract the base layer with dovi_tool first. The built-in zscale+tonemap pipeline doesn't support Dolby Vision at all.

Why does my converted video look washed out?

Three common causes: (1) You re-encoded without applying a tonemap filter. (2) The desat parameter is too high — set it to 0. (3) The filter order is wrong — you must linearize (zscale=t=linear) before tonemapping. See the Fixing Common Problems section for details.

Is libplacebo better than zscale+tonemap?

For quality, yes. libplacebo's dynamic peak detection, built-in dithering, and advanced gamut mapping produce visually superior results in most cases. The tradeoff is that it requires Vulkan GPU support and an FFmpeg build compiled with --enable-libplacebo (not every distro package includes it). If you just need a quick conversion on a server without a GPU, zscale+tonemap works perfectly fine.

How do I keep 10-bit output to avoid banding?

Replace format=yuv420p with format=yuv420p10le in the filter chain, and use a 10-bit capable encoder such as libx265 or libsvtav1; both accept 10-bit input directly. H.264 is effectively 8-bit only: the High 10 profile exists, but few consumer players support it.

How fast is GPU tonemapping vs CPU?

On a 4K HEVC HDR10 source: CPU zscale+tonemap runs around 10 fps, OpenCL around 40 fps, and NVDEC → libplacebo → NVENC around 60+ fps. The exact speed depends on your GPU, encoder settings, and input complexity. See the GPU Acceleration Options section for pipeline comparisons.

Does tonemapping lose quality?

Yes, any conversion from a wider color/brightness space to a narrower one is inherently lossy — you can't represent 1,000 nits of brightness range in 100 nits without compression. The goal of tonemapping is to minimize the perceived quality loss. Using hable or bt.2390 with desat=0, 10-bit output, and dithering gives you the best achievable result.

Wrapping Up

For quick HDR→SDR conversions, the zscale+tonemap=hable:desat=0 pipeline handles the vast majority of content well. When quality matters or you're dealing with HDR10+/Dolby Vision, libplacebo is worth the setup effort.

The key things to remember:

  • Always tonemap — never just re-encode HDR video without converting the color space
  • Use desat=0 — the default desaturation crushes highlight colors
  • Use format=gbrpf32le — floating-point intermediates prevent banding
  • Check with ffprobe — verify the output actually reports BT.709 color properties

If you're working with FFmpeg regularly, you might also find these useful: