# Log

Analyze log files for errors, spikes, and repeated patterns — no manual searching.

`annave log analyze` parses a log file, detects anomalies, and returns ranked findings. It auto-detects the log format — no configuration is needed for common formats.
## Usage

```bash
annave log analyze [file] [flags]
```

## Flags
| Flag | Short | Default | Description |
|---|---|---|---|
| `--stdin` | — | false | Read from stdin instead of a file argument |
| `--format` | — | plain | Output format: plain, json, table |
| `--since` | — | — | Only include entries after this time (duration, RFC3339, or date) |
| `--level` | — | — | Minimum log level: debug, info, warn, error |
`--since` accepts a Go duration (`1h`, `30m`, `2h30m`), an RFC3339 timestamp (`2026-05-16T10:00:00Z`), or a date (`2026-05-16`).
## Supported formats
| Format | Auto-detected by |
|---|---|
| JSON structured | Keys containing level/msg, severity/message, or similar pairs |
| nginx access log | IP address + HTTP method pattern in first field |
| syslog | Month + day + time prefix (e.g. May 16 10:42:07) |
| Plain text | Fallback — any line with a recognisable severity keyword |
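A rough idea of how first-line detection along these rules can work. The real detector's heuristics may be more involved; these regexes and the `detectFormat` helper are illustrative only:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Illustrative first-line checks, in the order the table suggests:
// JSON with level/severity keys, then nginx, then syslog, then plain.
var (
	nginxRe  = regexp.MustCompile(`^\d{1,3}(\.\d{1,3}){3} .*"(GET|POST|PUT|DELETE|HEAD)`)
	syslogRe = regexp.MustCompile(`^[A-Z][a-z]{2} {1,2}\d{1,2} \d{2}:\d{2}:\d{2}`)
)

func detectFormat(line string) string {
	trimmed := strings.TrimSpace(line)
	if strings.HasPrefix(trimmed, "{") &&
		(strings.Contains(trimmed, `"level"`) || strings.Contains(trimmed, `"severity"`)) {
		return "json"
	}
	if nginxRe.MatchString(trimmed) {
		return "nginx"
	}
	if syslogRe.MatchString(trimmed) {
		return "syslog"
	}
	return "plain"
}

func main() {
	fmt.Println(detectFormat(`{"level":"error","msg":"boom"}`))         // json
	fmt.Println(detectFormat(`May 16 10:42:07 host sshd[412]: Accepted`)) // syslog
}
```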
## What it detects
- Repeated error patterns — messages appearing 3 or more times, ranked by frequency
- Time spikes — one-minute windows with 3× the rolling average error rate
- Message clusters — similar messages normalised by replacing UUIDs, IPs, and numbers with placeholders, then grouped by template
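The normalisation step can be sketched as follows. The placeholder names mirror the sample output below, but the actual replacement rules are not documented here, so treat this as an approximation:

```go
package main

import (
	"fmt"
	"regexp"
)

// Replacement order matters: UUIDs first (they contain digits and
// hyphens), then IPs (dotted digit groups), then remaining numbers.
var (
	uuidRe = regexp.MustCompile(`[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}`)
	ipRe   = regexp.MustCompile(`\b\d{1,3}(\.\d{1,3}){3}\b`)
	numRe  = regexp.MustCompile(`\b\d+`)
)

// templatize collapses variants of the same message onto one template
// so they can be grouped and counted.
func templatize(msg string) string {
	msg = uuidRe.ReplaceAllString(msg, "[TOKEN]")
	msg = ipRe.ReplaceAllString(msg, "[IP]")
	msg = numRe.ReplaceAllString(msg, "[N]")
	return msg
}

func main() {
	fmt.Println(templatize("timeout after 512ms waiting for 3fa85f64-5717-4562-b3fc-2c963f66afa6"))
	// timeout after [N]ms waiting for [TOKEN]
}
```

Messages that collapse onto the same template are counted together, which is how 38 distinct timeout lines become a single MEDIUM finding in the example output.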
## Examples

### Analyze a file

```bash
annave log analyze /var/log/app.log
```

### Read from stdin

```bash
tail -n 50000 /var/log/app.log | annave log analyze --stdin
```

### Last hour only

```bash
annave log analyze /var/log/app.log --since 1h
```

### Errors only, JSON output

```bash
annave log analyze /var/log/app.log --level error --format json
```

### Plain output

```text
Log analysis — /var/log/app.log
format json
lines parsed 48231 / 48231
time range 2026-05-15 08:00:01 → 2026-05-16 07:59:58
3 finding(s):
[1] CRITICAL Spike detected at 2026-05-15 14:32 — 847 errors in 1 minute (avg 12/min)
    top message: connection refused: redis:6379
[2] HIGH Repeated pattern (124×): failed to acquire database lock
[3] MEDIUM Message cluster (38 variants): timeout after [N]ms waiting for [TOKEN]
```

## JSON output shape
```json
{
  "file": "/var/log/app.log",
  "format": "json",
  "total_lines": 48231,
  "parsed_lines": 48231,
  "time_range": { "from": "2026-05-15T08:00:01Z", "to": "2026-05-16T07:59:58Z" },
  "findings": [
    {
      "rank": 1,
      "severity": "critical",
      "summary": "Spike detected at 2026-05-15 14:32 — 847 errors in 1 minute",
      "detail": "top message: connection refused: redis:6379",
      "count": 847
    }
  ]
}
```

## What to watch
- Files over 256 MB are rejected at input (`ERR_INVALID_INPUT`). Change `log.max_file_size_mb` in `limits.yaml` and rebuild to raise this limit.
- Lines beyond 1,000,000 are silently truncated. Lower this with `log.max_lines` for faster analysis at the cost of completeness.
- Time filtering with `--since` requires that entries have parseable timestamps. Plain text logs without timestamps are not filtered, so `--since` has no effect on them.
- Findings exit with code `0` — anomalies are data, not errors. In scripts, detect failures by checking stderr for the `ERR_` prefix.