Halley / Minimal Log Aggregation Without Going Full ELK

Created Thu, 29 May 2025 11:05:00 +0000 Modified Sun, 31 Aug 2025 22:17:24 +0000

Logs are the nervous system of your stack.
They’re also scattered across half a dozen machines and containers.

You don’t need an enterprise ELK stack to see what’s happening.
You just need enough centralisation to notice fires before they burn the house down.

Pick the Right Scope

Don’t aim to ingest everything.
Decide:

  • Which services matter most (auth, web, backups, DNS)
  • What log lines you actually read (errors, warnings, critical events)
  • How long you need to keep them (weeks vs months)

Clarity beats quantity.

Ship Logs Lightly

Instead of deploying heavy agents:

  • Use rsyslog or journald forwarding to send logs to one box
  • Or lightweight tools like fluent-bit to push specific log streams
  • Compress and rotate to keep storage sane

The goal is a single pane, not a Hadoop cluster.
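For services you control yourself, even the agent can be optional: Python’s standard syslog handler will send straight to the central box. A minimal sketch, assuming a central host called logs.internal listening on UDP 514 (both placeholders); everything else on the machine can keep going through rsyslog or journald:

  import logging
  import logging.handlers

  # Minimal sketch: a service you control ships its own logs to the central
  # syslog box. "logs.internal" and UDP port 514 are placeholder assumptions;
  # other log sources on the host can stay with rsyslog/journald forwarding.
  handler = logging.handlers.SysLogHandler(address=("logs.internal", 514))
  handler.setFormatter(logging.Formatter("myservice: %(levelname)s %(message)s"))

  log = logging.getLogger("myservice")
  log.setLevel(logging.WARNING)   # ship warnings and up, not debug chatter
  log.addHandler(handler)

  log.warning("backup job exited non-zero")

Either way, the shape is the same: one destination, severity filtering at the source, no heavy agent.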

Normalise Just Enough

Parsing logs helps when they’re inconsistent:

  • Use simple filters or tags for source and severity
  • Timestamp everything in the same format
  • Avoid elaborate pipelines you’ll never maintain

A little structure goes a long way.
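As an illustration, here’s a rough sketch of “just enough” normalisation in Python: tag each line with source and severity, and rewrite the timestamp into one format. The regex assumes classic syslog-style lines and the severity hints are guesses; adapt both to what your services actually emit.

  import re
  from datetime import datetime, timezone

  # Minimal sketch: tag each line with source and severity, and normalise the
  # timestamp to one format (ISO 8601, UTC). The regex assumes classic
  # syslog-style lines, which carry no year, hence the year parameter.
  LINE = re.compile(
      r"^(?P<month>\w{3}) +(?P<day>\d+) (?P<time>[\d:]+) "
      r"(?P<host>\S+) (?P<prog>[\w./-]+)(\[\d+\])?: (?P<msg>.*)$"
  )
  SEVERITY_HINTS = {"critical": "crit", "error": "error",
                    "failed": "error", "warning": "warning"}

  def normalise(raw: str, year: int):
      m = LINE.match(raw)
      if not m:
          return None
      ts = datetime.strptime(
          f"{year} {m['month']} {m['day']} {m['time']}", "%Y %b %d %H:%M:%S"
      ).replace(tzinfo=timezone.utc)
      severity = next((level for hint, level in SEVERITY_HINTS.items()
                       if hint in m["msg"].lower()), "info")
      return {"ts": ts.isoformat(), "host": m["host"], "source": m["prog"],
              "severity": severity, "msg": m["msg"]}

  print(normalise("May 29 11:05:00 web1 sshd[812]: Failed password for root", 2025))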

Make Search Easy

Centralisation is pointless if you can’t query it:

  • Plain text search with grep is fine for small setups
  • Use loki + promtail if you want minimal indexing with a UI
  • Write down basic queries for future you

You want to debug, not build a logging empire.
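For the grep end of that spectrum, a tiny helper goes a long way once logs start rotating into .gz files. A sketch, assuming the forwarded logs land under /var/log/central (an assumption; substitute your own directory):

  import gzip
  import sys
  from pathlib import Path

  # Minimal sketch: case-insensitive search across the central log directory,
  # including rotated .gz files. LOG_DIR is an assumption; point it at
  # wherever the forwarded logs actually land.
  LOG_DIR = Path("/var/log/central")

  def search(needle: str) -> None:
      for path in sorted(LOG_DIR.rglob("*")):
          if not path.is_file():
              continue
          opener = gzip.open if path.suffix == ".gz" else open
          with opener(path, "rt", errors="replace") as fh:
              for line in fh:
                  if needle.lower() in line.lower():
                      print(f"{path.name}: {line.rstrip()}")

  if __name__ == "__main__":
      search(sys.argv[1] if len(sys.argv) > 1 else "error")

Plain zgrep -i over the same directory does much the same from the shell; write down whichever version you’ll actually remember.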

Alert on Patterns, Not Noise

A central log lets you add a few simple triggers:

  • Failed login spikes
  • Service restarts
  • Backup errors

Keep it rare and meaningful. Alerts you ignore are worse than no alerts.
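A trigger doesn’t need to be fancier than a cron job that counts matches. A sketch for the failed-login case, assuming journald and an SSH unit named ssh (it’s sshd on some distros); the threshold, the window, and how you get notified are all yours to pick:

  import re
  import subprocess

  # Minimal sketch: flag a spike in failed SSH logins. The unit name ("ssh";
  # "sshd" on some distros), the threshold, and the window are assumptions;
  # run it from cron and route the output wherever you already look.
  THRESHOLD = 20
  WINDOW = "10 minutes ago"

  out = subprocess.run(
      ["journalctl", "-u", "ssh", "--since", WINDOW, "--no-pager", "-o", "cat"],
      capture_output=True, text=True, check=False,
  ).stdout

  failures = len(re.findall(r"Failed password", out))
  if failures >= THRESHOLD:
      print(f"ALERT: {failures} failed SSH logins since {WINDOW}")

Cron will even mail you the output by default, which is about as boring as alerting gets.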

Keep It Boring

Logs are tools, not trophies.
If you can tail one machine and see what’s on fire across your stack, you’ve won.

Leave the petabyte pipelines to big tech.
A small, boring log hub is all you need.