Log

Analyze log files for errors, spikes, and repeated patterns — no manual searching.

`annave log analyze` parses a log file, detects anomalies, and returns ranked findings. It auto-detects the log format — no configuration needed for common formats.

Usage

```bash
annave log analyze [file] [flags]
```

Flags

| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `--stdin` | | `false` | Read from stdin instead of a file argument |
| `--format` | | `plain` | Output format: `plain`, `json`, `table` |
| `--since` | | | Only include entries after this time (duration, RFC3339, or date) |
| `--level` | | | Minimum log level: `debug`, `info`, `warn`, `error` |

`--since` accepts a Go duration (`1h`, `30m`, `2h30m`), an RFC3339 timestamp (`2026-05-16T10:00:00Z`), or a date (`2026-05-16`).
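The three accepted shapes can be resolved to a single cutoff time. A minimal sketch of that resolution in Python — `parse_since` is a hypothetical helper for illustration, not the tool's actual code, and it handles only the simple `h`/`m`/`s` duration units named above:

```python
import re
from datetime import datetime, timedelta, timezone

def parse_since(value: str) -> datetime:
    """Resolve a --since value to an absolute UTC cutoff.

    Accepts a Go-style duration (1h, 30m, 2h30m), a bare date,
    or an RFC3339 timestamp. Hypothetical sketch, not the CLI's code.
    """
    # Go-style duration: relative to "now"
    m = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", value)
    if m and any(m.groups()):
        h, mi, s = (int(g or 0) for g in m.groups())
        return datetime.now(timezone.utc) - timedelta(hours=h, minutes=mi, seconds=s)
    # Bare date: midnight UTC on that day
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return datetime.fromisoformat(value).replace(tzinfo=timezone.utc)
    # RFC3339 timestamp; rewrite the trailing "Z" for pre-3.11 Pythons
    return datetime.fromisoformat(value.replace("Z", "+00:00"))
```

Entries older than the returned cutoff would then be dropped before analysis.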

Supported formats

| Format | Auto-detected by |
|--------|------------------|
| JSON structured | Keys containing `level`/`msg`, `severity`/`message`, or similar pairs |
| nginx access log | IP address + HTTP method pattern in first field |
| syslog | Month + day + time prefix (e.g. `May 16 10:42:07`) |
| Plain text | Fallback — any line with a recognisable severity keyword |
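The detection heuristics in the table can be approximated in a few lines. This is an illustrative sketch of that kind of per-line probing, not the tool's actual rules — `detect_format` and the regexes are assumptions:

```python
import json
import re

# Syslog: "May 16 10:42:07 " prefix (month, day, time)
SYSLOG_RE = re.compile(r"^[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2} ")
# nginx access log: leading IPv4 address, then a quoted HTTP method later on
NGINX_RE = re.compile(
    r'^\d{1,3}(?:\.\d{1,3}){3} .*"(?:GET|POST|PUT|DELETE|HEAD|PATCH) '
)

def detect_format(line: str) -> str:
    """Guess a log line's format using heuristics like the table above."""
    try:
        obj = json.loads(line)
        if isinstance(obj, dict) and (
            {"level", "msg"} <= obj.keys()
            or {"severity", "message"} <= obj.keys()
        ):
            return "json"
    except ValueError:
        pass
    if NGINX_RE.match(line):
        return "nginx"
    if SYSLOG_RE.match(line):
        return "syslog"
    return "plain"
```

In practice a detector would sample several lines and take a majority vote rather than trust the first one.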

What it detects

  • Repeated error patterns — messages appearing 3 or more times, ranked by frequency
  • Time spikes — one-minute windows with 3× the rolling average error rate
  • Message clusters — similar messages normalised by replacing UUIDs, IPs, and numbers with placeholders, then grouped by template
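The three detectors above can be sketched roughly as follows. This is an illustrative Python outline of the described behaviour — `normalise`, `repeated_patterns`, and `spikes` are hypothetical names, not the tool's internals:

```python
import re
from collections import Counter

UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I
)
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
NUM_RE = re.compile(r"\d+")

def normalise(msg: str) -> str:
    """Replace UUIDs, IPs, and numbers so similar messages share a template."""
    msg = UUID_RE.sub("[UUID]", msg)
    msg = IP_RE.sub("[IP]", msg)
    return NUM_RE.sub("[N]", msg)

def repeated_patterns(messages, threshold=3):
    """Templates seen `threshold` or more times, ranked by frequency."""
    counts = Counter(normalise(m) for m in messages)
    return [(t, n) for t, n in counts.most_common() if n >= threshold]

def spikes(per_minute_counts, factor=3.0):
    """One-minute windows whose error count is `factor`x the running average."""
    out, total = [], 0
    for i, n in enumerate(per_minute_counts):
        avg = total / i if i else 0
        if i and avg > 0 and n >= factor * avg:
            out.append((i, n, avg))
        total += n
    return out
```

The real tool presumably uses a rolling (windowed) average rather than this simple running mean, but the shape of the comparison is the same.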

Examples

Analyze a file

```bash
annave log analyze /var/log/app.log
```

Read from stdin

```bash
tail -n 50000 /var/log/app.log | annave log analyze --stdin
```

Last hour only

```bash
annave log analyze /var/log/app.log --since 1h
```

Errors only, JSON output

```bash
annave log analyze /var/log/app.log --level error --format json
```

Plain output

```text
Log analysis — /var/log/app.log
format          json
lines parsed    48231 / 48231
time range      2026-05-15 08:00:01 → 2026-05-16 07:59:58

3 finding(s):

[1] CRITICAL  Spike detected at 2026-05-15 14:32 — 847 errors in 1 minute (avg 12/min)
         top message: connection refused: redis:6379
[2] HIGH     Repeated pattern (124×): failed to acquire database lock
[3] MEDIUM   Message cluster (38 variants): timeout after [N]ms waiting for [TOKEN]
```

JSON output shape

```json
{
  "file": "/var/log/app.log",
  "format": "json",
  "total_lines": 48231,
  "parsed_lines": 48231,
  "time_range": { "from": "2026-05-15T08:00:01Z", "to": "2026-05-16T07:59:58Z" },
  "findings": [
    {
      "rank": 1,
      "severity": "critical",
      "summary": "Spike detected at 2026-05-15 14:32 — 847 errors in 1 minute",
      "detail": "top message: connection refused: redis:6379",
      "count": 847
    }
  ]
}
```
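This shape is easy to consume in scripts. A sketch of a gate that fails a pipeline when findings reach a given severity — `critical_findings` and the severity ordering are assumptions for illustration, not part of the tool:

```python
def critical_findings(report: dict, min_severity: str = "critical"):
    """Return findings at or above a severity from the JSON report."""
    # Assumed severity scale, inferred from the example output above
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    cutoff = order[min_severity]
    return [
        f for f in report.get("findings", [])
        if order.get(f["severity"], 0) >= cutoff
    ]
```

A wrapper script could read the report from stdin (e.g. `annave log analyze app.log --format json | python check.py`, where `check.py` is your own script) and exit non-zero when the returned list is non-empty.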

What to watch

  • Files over 256 MB are rejected at input (`ERR_INVALID_INPUT`). Raise this limit by changing `log.max_file_size_mb` in `limits.yaml` and rebuilding.
  • Lines beyond 1,000,000 are silently truncated. Lower `log.max_lines` for faster analysis at the cost of completeness.
  • Time filtering with `--since` requires entries with parseable timestamps; plain text logs without timestamps are not filtered, so `--since` has no effect on them.
  • Findings exit with code 0 — anomalies are data, not errors. In scripts, detect real failures by checking stderr for the `ERR_` prefix.