You need to extract specific fields from an API response. Parse a package.json to list dependencies. Filter error entries from JSON-formatted logs. All from the terminal, without writing a script.
That's what jq does. Often called the Swiss Army knife of JSON processing, it handles formatting, filtering, transforming, and aggregating JSON data — all in a single command.
This guide walks through everything from basic filters to real-world scripting examples, with ready-to-run commands throughout.
What is jq?
jq is a lightweight, command-line JSON processor written in C. It ships as a single binary with zero external dependencies.
Key features:
- Pretty-printing — format minified JSON into readable output
- Field extraction — pull out exactly the values you need
- Filtering — select data matching specific conditions
- Transformation — reshape data structures and compute aggregates
- Pipelining — chain multiple operations into a single expression
Just as sed and awk are the go-to tools for text processing, jq is the standard for JSON. Combined with curl, it's an essential part of any modern CLI toolkit.
Installation
# winget (recommended)
winget install jqlang.jq
# If using WSL, install via Linux package manager instead
# sudo apt install jq
Verify the installation:
jq --version
jq-1.8.1
Basic usage
Pretty-print JSON
The simplest filter is . (the identity filter) — it passes the input through unchanged, and jq pretty-prints the result by default.
echo '{"name":"test","value":42,"active":true}' | jq '.'
{
"name": "test",
"value": 42,
"active": true
}
To read from a file, pass the filename as an argument:
jq '.' data.json
Extract fields
Use dot notation to access specific fields:
echo '{"name":"jq","version":"1.8.1"}' | jq '.name'
"jq"
For raw string output without quotes, use -r (raw output):
echo '{"name":"jq","version":"1.8.1"}' | jq -r '.name'
jq
Nested fields
Chain dots to access deeply nested values:
echo '{"data":{"users":[{"name":"Alice","age":30}]}}' | jq '.data.users[0].name'
"Alice"
Array operations
# First element
echo '[10,20,30]' | jq '.[0]'
# Last element
echo '[10,20,30]' | jq '.[-1]'
# Array length
echo '[10,20,30]' | jq 'length'
# Slice (index 1 to 2)
echo '[10,20,30,40]' | jq '.[1:3]'
Output formatting
# Compact output (single line)
echo '{"name":"test","value":42}' | jq -c '.'
# Tab indentation
echo '{"name":"test","value":42}' | jq --tab '.'
# Sorted keys
echo '{"b":2,"a":1}' | jq -S '.'
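These flags can be combined. For example, -cS produces compact output with sorted keys, a stable single-line form that is handy for diffing or deduplicating JSON:

```shell
# Compact + sorted keys: a canonical single-line form
echo '{"b":2,"a":1}' | jq -cS '.'
# {"a":1,"b":2}
```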
Filters and pipes
The real power of jq lies in combining filters. Use the pipe operator | to chain operations together.
Array iterator
The iterator .[] expands an array into a stream of its elements:
echo '{"users":[{"name":"Alice"},{"name":"Bob"},{"name":"Charlie"}]}' \
| jq '.users[].name'
"Alice"
"Bob"
"Charlie"
select — conditional filtering
select() keeps only elements matching a condition:
echo '{"users":[{"name":"Alice","age":30},{"name":"Bob","age":25},{"name":"Charlie","age":35}]}' \
| jq '.users[] | select(.age > 28)'
{
"name": "Alice",
"age": 30
}
{
"name": "Charlie",
"age": 35
}
map — transform arrays
map() applies a filter to each element and returns a new array:
echo '{"users":[{"name":"Alice","age":30},{"name":"Bob","age":25}]}' \
| jq '.users | map(.name)'
[
"Alice",
"Bob"
]
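Note that select emits matching elements as a stream of separate outputs. To keep the result as a single array instead, wrap the select inside map:

```shell
# Filter while preserving the array structure
echo '{"users":[{"name":"Alice","age":30},{"name":"Bob","age":25}]}' \
  | jq -c '.users | map(select(.age > 28))'
# [{"name":"Alice","age":30}]
```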
Object construction
Build new objects with only the fields you need:
echo '{"users":[{"name":"Alice","age":30,"email":"alice@example.com","role":"admin"}]}' \
| jq '.users[] | {name: .name, email: .email}'
{
"name": "Alice",
"email": "alice@example.com"
}
String interpolation
Use \() to embed values in strings. Combine with -r for clean output:
echo '{"users":[{"name":"Alice","email":"alice@example.com"},{"name":"Bob","email":"bob@example.com"}]}' \
| jq -r '.users[] | "\(.name): \(.email)"'
Alice: alice@example.com
Bob: bob@example.com
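For collapsing an array into a single delimited string, join pairs well with -r:

```shell
# Join array elements with a separator
echo '{"tags":["cli","json","unix"]}' | jq -r '.tags | join(", ")'
# cli, json, unix
```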
Conditional expressions
echo '{"status":"ok","data":"hello"}' \
| jq 'if .status == "ok" then .data else "error: \(.status)" end'
"hello"
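Additional conditions chain with elif, and every if expression must be closed with end:

```shell
# Grade a score with chained conditions
echo '{"score":72}' \
  | jq 'if .score >= 90 then "A" elif .score >= 70 then "B" else "C" end'
# "B"
```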
Sorting, grouping, and aggregation
# Sort by field
echo '[{"name":"Charlie","age":35},{"name":"Alice","age":30},{"name":"Bob","age":25}]' \
| jq 'sort_by(.age)'
# Unique values
echo '["apple","banana","apple","cherry","banana"]' | jq 'unique'
# Sum
echo '[10,20,30,40]' | jq 'add'
# Average
echo '[10,20,30,40]' | jq 'add / length'
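When you need the element with the smallest or largest value of some key, min_by and max_by pick the whole element without sorting the full array:

```shell
# Find the oldest user
echo '[{"name":"Alice","age":30},{"name":"Bob","age":25}]' | jq -c 'max_by(.age)'
# {"name":"Alice","age":30}
```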
keys and has
# List all keys
echo '{"name":"test","version":"1.0","license":"MIT"}' | jq 'keys'
# Check if a key exists
echo '{"name":"test"}' | jq 'has("name")'
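A related helper, to_entries, turns an object into an array of {key, value} pairs (from_entries goes the other way). It comes in handy whenever you need to iterate over keys and values together:

```shell
# Object -> array of key/value pairs
echo '{"name":"test","version":"1.0"}' | jq -c 'to_entries'
# [{"key":"name","value":"test"},{"key":"version","value":"1.0"}]
```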
Real-world use cases
Processing API responses
Combining jq with curl to extract data from API responses is one of the most common use cases. (The star and fork counts below are illustrative and will drift over time.)
curl -s https://api.github.com/repos/jqlang/jq \
| jq '{name: .name, stars: .stargazers_count, forks: .forks_count, license: .license.spdx_id}'
{
"name": "jq",
"stars": 31000,
"forks": 1600,
"license": "MIT"
}
Combining results across paginated API responses:
for page in 1 2 3; do
curl -s "https://api.github.com/users/octocat/repos?per_page=100&page=${page}"
done | jq -s 'flatten | map({name: .name, stars: .stargazers_count}) | sort_by(.stars) | reverse'
Parsing package.json
Quickly inspect project dependencies without opening the file:
# List dependency names
jq '.dependencies | keys' package.json
[
"next",
"react",
"react-dom"
]
# Show packages with versions
jq -r '.dependencies | to_entries[] | "\(.key)@\(.value)"' package.json
next@^16.0.0
react@^19.0.0
react-dom@^19.0.0
# Count dependencies vs devDependencies
jq '{deps: (.dependencies | length), devDeps: (.devDependencies | length)}' package.json
JSON log analysis
Process log files in JSON Lines format (one JSON object per line):
# Extract error logs with timestamps
cat app.log \
| jq -r 'select(.level == "error") | "\(.timestamp) [\(.level)] \(.message)"'
2026-03-08T10:15:30Z [error] Database connection timeout
2026-03-08T10:18:45Z [error] Failed to process request
# Count entries by log level
cat app.log \
| jq -s 'group_by(.level) | map({level: .[0].level, count: length})'
[
{ "level": "error", "count": 5 },
{ "level": "info", "count": 142 },
{ "level": "warn", "count": 23 }
]
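Because ISO-8601 timestamps sort lexically, a plain string comparison is enough to filter a time window. This sketch assumes the same timestamp and message fields as the sample log entries above:

```shell
# Entries at or after 10:00 UTC on 2026-03-08
cat app.log \
  | jq -r 'select(.timestamp >= "2026-03-08T10:00:00Z") | "\(.timestamp) \(.message)"'
```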
Editing config files
Modify JSON configuration files from the command line:
# Update a value
jq '.database.port = 5433' config.json > tmp.json && mv tmp.json config.json
# Add a field
jq '.database.ssl = true' config.json > tmp.json && mv tmp.json config.json
# Delete a field
jq 'del(.debug)' config.json > tmp.json && mv tmp.json config.json
# Merge two JSON files (override takes precedence)
jq -s '.[0] * .[1]' base.json override.json > merged.json
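To inject shell variables into a filter safely, use --arg (always passed as a string) or --argjson (parsed as JSON) rather than interpolating them into the filter text; this also sidesteps quoting pitfalls. The config.json here is the same hypothetical file as in the examples above:

```shell
# Set the port from a shell variable (--argjson parses 5433 as a number)
PORT=5433
jq --argjson port "${PORT}" '.database.port = $port' config.json > tmp.json && mv tmp.json config.json
```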
Scripting examples
Export GitHub repos to CSV
#!/bin/bash
# github-repos-to-csv.sh
# Export a user's public repositories to CSV format
USERNAME="${1:?Usage: $0 <github-username>}"
OUTPUT="repos.csv"
echo "name,stars,forks,language,updated" > "${OUTPUT}"
page=1
while true; do
response=$(curl -s "https://api.github.com/users/${USERNAME}/repos?per_page=100&page=${page}")
count=$(echo "${response}" | jq 'length')
if [ "${count}" -eq 0 ]; then
break
fi
echo "${response}" \
| jq -r '.[] | [.name, .stargazers_count, .forks_count, (.language // "N/A"), .updated_at[:10]] | @csv' \
>> "${OUTPUT}"
page=$((page + 1))
done
total=$(tail -n +2 "${OUTPUT}" | wc -l)
echo "Exported ${total} repositories to ${OUTPUT}"
Key points:
- The @csv filter handles CSV formatting automatically, including quoting and escaping
- // "N/A" is the alternative operator — it returns a default when a value is null
- Pagination is handled with a simple loop until an empty response
Merge and aggregate JSON reports
#!/bin/bash
# merge-json-reports.sh
# Combine JSON report files and generate summary statistics
REPORT_DIR="${1:?Usage: $0 <report-directory>}"
if [ ! -d "${REPORT_DIR}" ]; then
echo "Error: Directory '${REPORT_DIR}' not found" >&2
exit 1
fi
file_count=$(find "${REPORT_DIR}" -name "*.json" -type f | wc -l)
if [ "${file_count}" -eq 0 ]; then
echo "Error: No JSON files found in '${REPORT_DIR}'" >&2
exit 1
fi
# Merge all JSON files and compute summary
find "${REPORT_DIR}" -name "*.json" -type f -exec cat {} + \
| jq -s '{
total_files: length,
total_records: (map(.records // 0) | add),
total_errors: (map(.errors // 0) | add),
avg_duration_ms: (map(.duration_ms // 0) | add / length | floor),
statuses: (group_by(.status) | map({status: .[0].status, count: length})),
date_range: {
earliest: (map(.timestamp) | sort | first),
latest: (map(.timestamp) | sort | last)
}
}'
echo ""
echo "Processed ${file_count} files from ${REPORT_DIR}"
Key points:
- -s (slurp) combines multiple JSON objects into a single array
- group_by and map together produce per-status counts
- // 0 provides fallback values for missing fields
- floor truncates decimal places in the average
Security note
When processing untrusted JSON, keep these points in mind:
- Keep jq updated — parser vulnerabilities are discovered periodically. If jq --version shows anything below 1.8.1, update immediately
- Limit input size — processing very large JSON files can consume significant memory. Use head -c upstream in the pipeline to cap input size, or consider the streaming parser (--stream) for large datasets
- Guard against shell injection — never pipe jq output directly into eval or bash -c. Always capture values in quoted variables
# Dangerous (never do this)
eval $(curl -s https://example.com/config.json | jq -r '.command')
# Safe (capture in a variable)
value=$(curl -s https://example.com/config.json | jq -r '.setting')
echo "Setting: ${value}"
Wrapping up
jq is the essential tool for JSON processing on the command line.
- Basics — . for pretty-printing, .field for extraction, -r for raw string output
- Filters — select for conditional filtering, map for transformation, pipes for chaining
- Real-world — API response processing, config file editing, log analysis
- Scripting — @csv conversion, -s for merging, shell script integration
Combine curl for data retrieval, jq for JSON processing, and sed & awk for text formatting — and you'll have a powerful data processing pipeline right in your terminal. Start with curl ... | jq '.' and build from there.
For a broader view of CLI tools, check out the CLI Toolkit.