When you need to download a file from the command line — especially on a remote server where you don't have a browser — wget is the tool for the job.
Whether it's a single file, a batch of hundreds, or mirroring an entire website, wget handles it from the terminal. No GUI needed. Works over SSH. Can run in the background, resume interrupted downloads, and be scripted for automation.
This guide covers everything from the basics to real-world use cases, with ready-to-run examples throughout.
What is wget?
wget is a command-line utility for downloading files over HTTP, HTTPS, and FTP. It's part of the GNU Project, comes pre-installed on virtually every Linux distribution, and is available on macOS via Homebrew.
Key characteristics:
- Non-interactive: runs completely unattended, perfect for scripts and cron jobs
- Resumable: pick up where you left off after an interrupted download
- Recursive: can crawl and download entire websites
- Proxy-aware: works through HTTP proxies
- Background-capable: detach from the terminal and download continues
Installation check
Verify wget is installed:
wget --version
If it's missing:
# Debian / Ubuntu
sudo apt install wget
# CentOS / RHEL
sudo yum install wget
# Fedora
sudo dnf install wget
# macOS (Homebrew)
brew install wget
Basic usage
Download a single file
wget https://example.com/file.zip
The file saves to the current directory. A progress bar shows download speed, amount downloaded, and estimated time remaining.
Specify the output filename or directory
# Save with a different name
wget -O myfile.zip https://example.com/file.zip
# Save to a specific directory
wget -P ~/downloads/ https://example.com/file.zip
# Control both directory and filename: give -O a full path
# (-P is ignored whenever -O is set)
wget -O ~/downloads/setup.zip https://example.com/file.zip
Run in the background
When downloading large files, detach from the terminal so you can keep working:
wget -b https://example.com/largefile.iso
Output goes to wget-log. Monitor progress with:
tail -f wget-log
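Once it's running, you can also check the log for completion instead of tailing it. A minimal sketch, assuming the default wget-log name in the current directory (a finished download writes a line containing "saved"):

```shell
# Report the status of a background download by inspecting wget-log
if grep -q "saved" wget-log 2>/dev/null; then
  status="done"
else
  status="still running (or no log yet)"
fi
echo "$status"
```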
Common options reference
| Option | What it does |
|---|---|
| -O FILE | Save as FILE |
| -P DIR | Save into directory DIR |
| -b | Background mode |
| -c | Continue/resume interrupted download |
| -q | Quiet mode (no output) |
| --limit-rate=RATE | Limit speed (e.g. --limit-rate=1m) |
| -r | Recursive download |
| -l DEPTH | Set recursion depth |
| --no-check-certificate | Skip SSL verification |
| -i FILE | Download URLs listed in FILE |
| --user-agent=STRING | Set custom User-Agent |
| --header=STRING | Add HTTP header |
| -N | Only download if newer than local copy |
| --tries=N | Number of retry attempts |
| --timeout=SECONDS | Set connection timeout |
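Most of these flags compose freely. As an illustration, a defensive download might combine resume, quiet output, bounded retries, and a bandwidth cap (shown as a dry run that just prints the command; the URL is a placeholder):

```shell
# Build and display a defensive download command
# (dry run; run the string itself to actually download)
cmd="wget -c -q --tries=3 --timeout=30 --limit-rate=500k -P ~/downloads/ https://example.com/file.zip"
echo "$cmd"
```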
Real-world use cases
Batch download from a URL list
Create a text file with one URL per line, then pass it to wget with -i:
cat > urls.txt << EOF
https://releases.ubuntu.com/24.04.1/ubuntu-24.04.1-desktop-amd64.iso
https://example.com/data/january.csv
https://example.com/data/february.csv
https://example.com/data/march.csv
EOF
wget -i urls.txt -P ~/downloads/
Each URL is downloaded in sequence. Combine with -b to run the whole batch in the background.
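The list itself doesn't have to be written by hand. For predictable URL patterns (the path below is illustrative), a loop can generate it:

```shell
# Generate urls.txt from a pattern instead of typing each URL
for month in january february march; do
  echo "https://example.com/data/${month}.csv"
done > urls.txt

wc -l urls.txt
# then: wget -i urls.txt -P ~/downloads/
```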
Resume an interrupted download
If a large download gets cut off by a network hiccup, just add -c and re-run the same command:
# Original download (interrupted)
wget https://example.com/bigfile.iso
# Resume from where it stopped
wget -c https://example.com/bigfile.iso
wget checks the local file size and requests only the remaining bytes. If no partial file exists, it starts fresh.
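For connections that drop repeatedly, -c pairs naturally with a retry loop, since wget exits non-zero on failure. A sketch (download() here just echoes the command so the loop is safe to run offline; swap in the real wget call to use it):

```shell
url="https://example.com/bigfile.iso"
# Stand-in for: wget -c -q "$url"
download() { echo wget -c -q "$url"; }

# Loop until the download command exits 0 (i.e. the file is complete)
until download; do
  echo "interrupted, retrying in 10s..." >&2
  sleep 10
done
```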
Throttle the download speed
Avoid saturating your connection or being rate-limited by the server:
# Limit to 1 MB/s
wget --limit-rate=1m https://example.com/file.iso
# Limit to 500 KB/s
wget --limit-rate=500k https://example.com/file.iso
Authenticate with username and password
For HTTP Basic Auth:
wget --user=myusername --password=mypassword https://example.com/protected/file.zip
Passwords in command arguments end up in shell history. If that's a concern, use --ask-password instead and wget will prompt for it interactively:
wget --user=myusername --ask-password https://example.com/protected/file.zip
# wget prompts for the password before connecting
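Another way to keep credentials out of the command line is a ~/.netrc file, which wget reads automatically and looks up by host name. A sketch (written to netrc.example here so it has no side effects; the real file must be ~/.netrc with mode 600):

```shell
# Credentials in netrc format: one "machine" block per host
cat > netrc.example << 'EOF'
machine example.com
login myusername
password mypassword
EOF
chmod 600 netrc.example
```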
For FTP:
wget ftp://ftp.example.com/pub/file.tar.gz
wget --ftp-user=user --ftp-password=pass ftp://ftp.example.com/private/file.tar.gz
Mirror an entire website
Download a complete copy of a site for offline viewing or archiving:
wget --mirror \
--convert-links \
--adjust-extension \
--page-requisites \
--no-parent \
https://example.com/
What each flag does:
- --mirror (or -m): recursive download + timestamps + infinite depth
- --convert-links: rewrite links to work offline (absolute → relative)
- --adjust-extension: add .html to pages that need it
- --page-requisites: fetch CSS, images, JS, and everything else needed to render the page
- --no-parent: don't go above the specified path
The downloaded site will be in a directory named after the domain.
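When mirroring someone else's server, it's polite (and less likely to get you rate-limited or blocked) to space out requests and cap bandwidth. A dry-run sketch of an extended command:

```shell
# --wait pauses between requests, --random-wait jitters that pause,
# --limit-rate caps bandwidth (dry run; run the string itself to execute)
cmd="wget --mirror --convert-links --adjust-extension --page-requisites --no-parent --wait=1 --random-wait --limit-rate=500k https://example.com/"
echo "$cmd"
```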
Only download if the file has changed
Poll a URL and only download when the server's version is newer than your local copy:
wget -N https://example.com/data.csv
This is great for keeping local data files in sync. Pair it with cron for scheduled updates:
# crontab -e: run daily at 3 AM
0 3 * * * wget -N -q -P /var/data/ https://example.com/data.csv
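If the scheduled job should also report whether anything changed, one approach is to compare the file's mtime before and after the fetch. A sketch using GNU stat (the wget line is commented out so it runs without network; file name and URL are examples):

```shell
file="data.csv"
# mtime before the fetch (0 if the file doesn't exist yet); %Y is GNU stat
before=$(stat -c %Y "$file" 2>/dev/null || echo 0)
# wget -N -q "https://example.com/data.csv"   # uncomment for the real fetch
after=$(stat -c %Y "$file" 2>/dev/null || echo 0)
if [ "$after" -gt "$before" ]; then
  echo "$file updated"
else
  echo "$file unchanged"
fi
```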
Set a custom User-Agent
Some servers block requests from wget. Impersonate a browser:
wget --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0" \
https://example.com/file.zip
Add custom HTTP headers
Useful for APIs that require an Authorization token:
wget --header="Authorization: Bearer YOUR_API_TOKEN" \
--header="Accept: application/json" \
https://api.example.com/export/data.json
wget vs curl: which one to use?
Both wget and curl download content from URLs, but they have different strengths:
| Use case | wget | curl |
|---|---|---|
| Simple file download | Great | Fine |
| Recursive / site mirroring | Great | Not supported |
| Resume interrupted downloads | Great | Great |
| API requests | Limited | Great |
| Handling response data | Limited | Great |
| Background download | Built-in (-b) | Needs & disown |
| Batch from URL list | Built-in (-i) | Needs a script |
| HTTP method control | Limited | Full control |
Rule of thumb:
- Downloading files → wget
- Calling APIs, inspecting responses, complex HTTP → curl
- Scripting with flexible output handling → curl
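For reference, the closest curl spellings of the wget one-liners used in this guide (the mappings are approximate; the flags differ in spirit as well as name):

```shell
#   wget URL          →  curl -O URL        (keep the remote filename)
#   wget -O name URL  →  curl -o name URL
#   wget -q URL       →  curl -sS -O URL    (quiet, but still print errors)
# Resuming is the one that trips people up: curl needs "-C -" to detect
# the offset of the partial file automatically
resume_equiv="curl -C - -O https://example.com/bigfile.iso"
echo "$resume_equiv"
```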
Scripting examples
Download multiple versions of a file
#!/bin/bash
BASE_URL="https://example.com/releases"
VERSIONS=("1.0.0" "1.1.0" "1.2.0" "2.0.0")
DEST_DIR="./downloads"
mkdir -p "$DEST_DIR"
for VERSION in "${VERSIONS[@]}"; do
FILE="myapp-${VERSION}.tar.gz"
URL="${BASE_URL}/${VERSION}/${FILE}"
echo "Downloading ${FILE}..."
wget -q --show-progress -P "$DEST_DIR" "$URL"
if [ $? -eq 0 ]; then
echo " OK: ${FILE}"
else
echo " FAILED: ${FILE}"
fi
done
echo "Done."
Download and verify checksum
#!/bin/bash
# Check the official Ubuntu release page for the current version and SHA256 hash:
# https://releases.ubuntu.com/
URL="https://releases.ubuntu.com/24.04.1/ubuntu-24.04.1-desktop-amd64.iso"
EXPECTED_SHA256="<get the current hash from the official Ubuntu site>"
echo "Downloading..."
wget -q --show-progress -O ubuntu.iso "$URL"
echo "Verifying checksum..."
ACTUAL_SHA256=$(sha256sum ubuntu.iso | awk '{print $1}')
if [ "$ACTUAL_SHA256" = "$EXPECTED_SHA256" ]; then
echo "Checksum OK — file is valid"
else
echo "Checksum mismatch! File may be corrupted."
exit 1
fi
Troubleshooting
SSL certificate errors
# Skip verification (fine for testing, avoid in production)
wget --no-check-certificate https://example.com/file.zip
Connection timeouts or flaky servers
# 30 second timeout, retry up to 5 times with a 10 second wait between retries
wget --timeout=30 --tries=5 --waitretry=10 https://example.com/file.zip
# Retry indefinitely (useful for very large downloads on unreliable connections)
wget --tries=0 https://example.com/file.zip
Check redirect chain without downloading
Inspect headers and see where a URL redirects to, without actually downloading anything:
wget --server-response --spider https://example.com/file.zip
Download stalls at 0 bytes
Sometimes servers send a response but no data. Try adding a User-Agent or check if the URL requires authentication.
Wrapping up
wget is one of those tools you reach for daily once you're comfortable with it. The basics take five minutes to learn, and the advanced features are there when you need them.
Quick summary of the most useful options:
- Basic download: wget URL
- Rename output: wget -O filename URL
- Save to directory: wget -P /path/ URL
- Background: wget -b URL
- Resume: wget -c URL
- Batch from list: wget -i urls.txt
- Mirror a site: wget --mirror --convert-links --page-requisites URL
- Conditional update: wget -N URL
Start with the basics, then reach for the advanced flags as specific needs come up. And when you find yourself repeatedly downloading files in a workflow, consider wrapping wget in a short script — it compounds nicely.