32blog by StudioMitsu · 11 min read

The Complete Guide to 10x Faster Yocto Builds

Learn how to dramatically speed up Yocto Project builds with sstate-cache, parallelism tuning, and storage optimization, based on Scarthgap 5.0 LTS.

Tags: yocto, embedded-linux, build-optimization, scarthgap

If you've ever kicked off a Yocto build and watched it crawl for hours, you're not alone. A default core-image-sato build can take half a day or more depending on your hardware.

This guide covers every practical technique to speed up your Yocto builds, based on Scarthgap 5.0 LTS (supported until April 2028). If you're not sure where to start, just work through the sections from top to bottom — they're ordered by impact.

Why Yocto Builds Are So Slow

Before jumping into fixes, let's understand what makes Yocto builds slow in the first place.

BitBake processes thousands of tasks with complex dependency chains. For core-image-sato, it downloads roughly 6 GB of source code and compiles thousands of packages. Three main bottlenecks dominate build time.

1. Storage I/O

Builds involve massive amounts of small file reads and writes. If you're building on an HDD, this is your biggest bottleneck by far. Switching to NVMe SSD makes an immediate difference.

2. CPU Parallelism Misconfiguration

BitBake auto-detects your CPU core count for parallel builds, but if you don't have enough RAM, the OOM killer steps in. Set parallelism too low and your CPU sits idle.

3. Unused Cache

The shared state cache (sstate-cache) lets you skip unchanged tasks on rebuilds. But the default configuration stores it inside the build directory — so every time you rm -rf build/, your cache goes with it.

Rethink Your Hardware Requirements

Before tuning software settings, make sure your hardware is up to the task. There's a significant gap between the official minimum requirements and what you actually need for a productive workflow.

| Component | Official Minimum | Practical Recommendation |
| --- | --- | --- |
| RAM | 8 GB | 32 GB or more |
| Disk | 90 GB (core-image-sato) | 500 GB NVMe SSD |
| CPU | 4 cores | 8–12 cores |

RAM

Each parallel BitBake thread (BB_NUMBER_THREADS) requires roughly 2 GB of RAM. Running 8 threads means you need at least 16 GB just for the build, plus memory for the OS and page cache. 32 GB is the practical floor.
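The 2 GB-per-thread rule translates into a quick sanity check. This sketch (Linux only; it assumes `nproc` and /proc/meminfo are available) caps the suggested thread count by whichever is scarcer, cores or memory:

```bash
# Rough sizing check based on the ~2 GB-per-thread rule above.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_threads=$(( mem_kb / (2 * 1024 * 1024) ))   # how many 2 GB slices fit in RAM
cpu_threads=$(nproc)
# Take the smaller of the two limits.
threads=$(( mem_threads < cpu_threads ? mem_threads : cpu_threads ))
echo "Suggested BB_NUMBER_THREADS = $threads"
```

On a 32 GB, 8-core machine the memory limit works out to 16 threads, so the core count (8) wins; on a 16 GB, 16-core machine the memory limit (8) wins instead.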

With 64 GB, more files stay in the OS page cache, significantly reducing I/O wait times.

CPU

Yocto builds scale up to about 12 cores. Beyond that, I/O becomes the bottleneck and adding more cores doesn't help much. A fast 8–12 core CPU offers the best cost-to-performance ratio.

Storage

Switching from HDD to NVMe SSD gives the single biggest performance improvement. The constant small-file I/O during builds completely saturates a spinning disk. SATA SSD helps, but NVMe is noticeably faster.

Without rm_work, a core-image-sato build needs about 90 GB of disk space. With rm_work enabled, that drops to roughly 22 GB. Either way, a 500 GB+ SSD gives you room for multiple projects.


Cut Rebuild Time by 80% with sstate-cache

The shared state cache (sstate-cache) is the single most effective way to speed up Yocto builds. It stores the output of completed tasks and reuses them when inputs haven't changed.

The Default Configuration Trap

By default, the sstate-cache lives inside the build directory.

```bash
# Default paths (inside build/)
build/sstate-cache/
build/downloads/
```

The problem: running rm -rf build/ to start fresh also deletes your cache. Wiping the build directory is a common operation when switching branches or troubleshooting — and with the default setup, you lose hours of cached work every time.

Move the Cache Outside the Build Directory

Add these lines to conf/local.conf.

```bash
# conf/local.conf
SSTATE_DIR = "/home/shared/yocto/sstate-cache"
DL_DIR     = "/home/shared/yocto/downloads"
```

This single change preserves your cache across build directory resets and lets multiple projects share the same cache.

Share the Cache Across a Team

For team development, host the sstate-cache on a server and share it via HTTP or NFS.

```bash
# conf/local.conf — fetch from a remote sstate mirror
SSTATE_MIRRORS = "file://.* http://sstate-server.local/sstate/PATH;downloadfilename=PATH"
```

PATH here is a literal placeholder in the mirror URL that BitBake expands to the architecture-specific subdirectory automatically.
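When the cache is shared over NFS rather than HTTP, a file:// mirror pointed at the mount works the same way; a sketch assuming /mnt/yocto-nfs as the mount point:

```bash
# conf/local.conf — read from an NFS-mounted mirror (mount point is an assumption)
SSTATE_MIRRORS = "file://.* file:///mnt/yocto-nfs/sstate-cache/PATH"
```

Keeping SSTATE_DIR local and treating the NFS share as a read-only mirror avoids write contention between team members.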

You can also mirror source downloads.

```bash
# conf/local.conf — source mirror
SOURCE_MIRROR_URL = "http://mirror-server.local/sources/"
INHERIT += "own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"
```

Setting BB_GENERATE_MIRROR_TARBALLS = "1" creates tarballs from Git repository sources and stores them in DL_DIR. This is disabled by default but essential for offline builds and team sharing.
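Once the mirrors are populated, a build can be forced fully offline so that any accidental network fetch fails immediately instead of stalling; BB_NO_NETWORK is the stock BitBake switch for this:

```bash
# conf/local.conf — fail any task that tries to reach the network
BB_NO_NETWORK = "1"
```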

Tuning BB_NUMBER_THREADS and PARALLEL_MAKE

Two variables control build parallelism in BitBake. They're easy to confuse, but they serve different purposes.

| Variable | Controls | Default |
| --- | --- | --- |
| BB_NUMBER_THREADS | Number of parallel BitBake tasks | CPU core count (auto-detected) |
| PARALLEL_MAKE | Number of parallel make jobs (-j N) | CPU core count (auto-detected) |

Both default to your CPU core count. In meta/conf/bitbake.conf, they're defined as:

```bash
BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"
```

When to Adjust

The auto-detected defaults assume all cores are available and you have enough RAM. If you're running out of memory, you need to lower both values. If you have RAM to spare, the defaults are usually fine.

Manual Configuration

```bash
# conf/local.conf — example: 8 cores / 32 GB RAM
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j8"
```

Important: lowering BB_NUMBER_THREADS alone doesn't help if PARALLEL_MAKE is still high; each of the fewer BitBake tasks can still spawn a full complement of make jobs. Always adjust both together.
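A global value also doesn't have to fit every recipe: PARALLEL_MAKE accepts per-recipe overrides, which helps when one memory-hungry package keeps triggering the OOM killer (the recipe name below is just an example):

```bash
# conf/local.conf — throttle one heavy recipe, keep the rest at full speed
PARALLEL_MAKE:pn-webkitgtk = "-j 4"
```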

Pressure-Based Control (Langdale 4.1+)

BitBake supports dynamic parallelism control based on system pressure, instead of fixed values. This feature has been available since Langdale (4.1) and works in Scarthgap.

```bash
# conf/local.conf — dynamic load-based control
BB_PRESSURE_MAX_CPU = "150"
BB_PRESSURE_MAX_IO = "80"
BB_PRESSURE_MAX_MEMORY = "50"
```

This uses Linux's /proc/pressure/ to monitor CPU, I/O, and memory pressure in real time. When pressure exceeds a threshold, BitBake delays starting new tasks. It's more flexible than fixed values and adapts automatically to different hardware.
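To see what BitBake is reacting to, the raw PSI figures can be read directly; a small sketch for Linux kernels with PSI enabled (4.20+, CONFIG_PSI):

```bash
# Print the first line of each PSI file that pressure regulation monitors.
# On kernels without PSI these files simply don't exist.
for res in cpu io memory; do
  f="/proc/pressure/$res"
  if [ -r "$f" ]; then
    printf '%-7s %s\n' "$res:" "$(head -n 1 "$f")"
  else
    printf '%-7s (PSI not available)\n' "$res:"
  fi
done
```

The avg10/avg60/avg300 fields show the percentage of time tasks were stalled over 10-, 60-, and 300-second windows.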

Filesystem and Storage Optimization

Even with NVMe SSD, filesystem settings can squeeze out more performance.

Mount Options

The official documentation recommends noatime and commit= tuning.

```bash
# /etc/fstab — build partition
/dev/nvme0n1p2 /home/build ext4 defaults,noatime,commit=30 0 2
```

  • noatime: Disables access time updates. With thousands of small file reads during a build, this eliminates unnecessary write operations.
  • commit=30: Extends the disk write interval from the default 5 seconds to 30 seconds. Increases power-loss risk but reduces write frequency during builds.

tmpfs (Limited Benefit)

You might think placing TMPDIR on tmpfs (RAM disk) would help, but the official documentation notes:

"While this can help speed up the build, the benefits are limited due to the compiler using -pipe."

Since the compiler avoids writing intermediate files to disk, the gain from tmpfs is minimal. Consider it only as a last resort after applying all other optimizations, and only if you have 64 GB+ of RAM.
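If you do go this route, TMPDIR is the knob to move; a sketch assuming an existing tmpfs mount at /mnt/tmpfs (the path is an assumption, and the mount must be large enough to hold the whole build tree):

```bash
# conf/local.conf — build output on a RAM-backed mount (last resort)
TMPDIR = "/mnt/tmpfs/yocto-build"
```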

Package Format

The default PACKAGE_CLASSES is package_rpm (RPM format).

```bash
# conf/local.conf — IPK is the lightest
PACKAGE_CLASSES = "package_ipk"
```

IPK is lighter than RPM and faster to package. It's the standard choice for embedded targets, so unless you specifically need RPM, switching to IPK saves time.

Strip Unnecessary Features for Leaner Builds

Yocto's default configuration includes many features that are irrelevant for embedded targets. Reducing the build scope cuts both build time and image size.

Reduce DISTRO_FEATURES

Scarthgap's default DISTRO_FEATURES includes:

```bash
# Default Poky DISTRO_FEATURES (Scarthgap)
# acl alsa bluetooth debuginfod ext2 ipv4 ipv6 pcmcia usbgadget
# usbhost wifi xattr nfs zeroconf pci 3g nfc x11 vfat seccomp
# + Poky additions: opengl ptest multiarch wayland vulkan
```

For a headless embedded device, GUI-related features are unnecessary.

```bash
# conf/local.conf — headless configuration example
DISTRO_FEATURES:remove = "x11 wayland opengl vulkan bluetooth wifi 3g nfc pcmcia"
```

Use rm_work to Reduce Disk Usage

The rm_work class automatically deletes work directories after each recipe is built.

```bash
# conf/local.conf
INHERIT += "rm_work"

# Exclude recipes you need to debug
RM_WORK_EXCLUDE += "my-custom-recipe"
```

According to the official documentation, this reduces disk usage for core-image-sato from 90 GB to roughly 22 GB. With less data on disk, more files fit in the page cache, indirectly improving build speed.

The trade-off is that source code and object files are deleted after build, making debugging harder. Use RM_WORK_EXCLUDE to keep work directories for recipes you're actively developing. For debugging techniques, see the build error debugging guide.

Disable Debug Packages

```bash
# conf/local.conf
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
```

This prevents generation of debug symbol packages, which are unnecessary for production images.

The VPS Build Server Option

Building Yocto on your laptop locks it up for hours. A VPS build server lets you offload the work without tying up your local machine.

Why VPS Works for Yocto

  • High specs on demand: No need to own a 16-core / 32 GB workstation full-time
  • Persistent sstate-cache: Keep the cache on the server and share it across team members
  • CI/CD integration: Connect GitHub Actions or GitLab CI to your build server

A full Yocto build needs at least 8 cores and 32 GB of RAM. Use the tiers below as guidelines.

| Use Case | CPU | RAM | Storage |
| --- | --- | --- | --- |
| Minimal (core-image-minimal) | 4 cores | 16 GB | 100 GB SSD |
| Standard (core-image-sato) | 8 cores | 32 GB | 300 GB SSD |
| Large / Team shared | 16 cores | 64 GB | 500 GB+ NVMe |

Look for providers that offer NVMe SSD storage and at least 8 cores with 32 GB RAM. Major cloud providers (AWS, DigitalOcean, Hetzner, Vultr) all have plans in this range.

Setting Up on a VPS

On Ubuntu 24.04 LTS, install the required packages.

```bash
sudo apt update
sudo apt install -y gawk wget git diffstat unzip texinfo gcc build-essential \
  chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils \
  iputils-ping python3-git python3-jinja2 python3-subunit zstd liblz4-tool \
  file locales libacl1
sudo locale-gen en_US.UTF-8
```

Then clone poky and configure the sstate-cache path.

```bash
git clone git://git.yoctoproject.org/poky
cd poky
git checkout -t origin/scarthgap -b my-scarthgap
source oe-init-build-env

# Set external sstate-cache and download paths
echo 'SSTATE_DIR = "/home/build/sstate-cache"' >> conf/local.conf
echo 'DL_DIR = "/home/build/downloads"' >> conf/local.conf

bitbake core-image-minimal
```

Wrapping Up

The key to faster Yocto builds is tackling the highest-impact optimizations first.

| Priority | Technique | Impact |
| --- | --- | --- |
| Critical | Move sstate-cache/DL_DIR outside build tree | 80%+ rebuild time reduction |
| High | Switch to NVMe SSD | Eliminates I/O bottleneck |
| High | Tune BB_NUMBER_THREADS/PARALLEL_MAKE | Prevents OOM, optimizes CPU usage |
| Medium | Enable rm_work | Disk usage 90 GB → 22 GB |
| Medium | Reduce DISTRO_FEATURES | Fewer packages to build |
| Medium | Switch PACKAGE_CLASSES to IPK | Faster packaging |
| Low | noatime/commit= mount options | Small to moderate improvement |
| Low | tmpfs | Limited benefit (per official docs) |

Start by moving your sstate-cache and DL_DIR to a persistent location outside the build directory. That single change transforms the daily rebuild experience. From there, tune as needed for your specific hardware and workflow.

If you want to go deeper into the Yocto build system, these books are excellent references. Mastering Embedded Linux Development covers Scarthgap and walks through everything from build system internals to practical recipe writing.

  • Mastering Embedded Linux Development, 4th Edition (2024)
  • Embedded Linux Development Using Yocto Project, 3rd Edition (2023)
  • Yocto Project Customization for Linux (Apress, 2025)