Log Analysis
Log lines with intent, not volume
Design structured fields and sampling policies so log analysis stays searchable when services get chatty.
- Format: Self-paced with weekly office hours
- Duration: 3 weeks, intensive
- Skill level: Intermediate
- Stack: OpenSearch, Vector pipelines
Tuition (informational): KRW 280,000
Request a syllabus conversation
You work through ingestion pipelines that deliberately misbehave: bursty JSON, accidental printf debugging left on, and cross-service trace IDs that drift. Exercises cover field naming stability, redaction patterns, and when to push detail to tracing instead of logs. The goal is calmer incident threads where engineers trust the first page of results.
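A minimal sketch of the kind of line the field-naming and redaction exercises aim for, using Go's `log/slog` JSON handler (Go 1.21+). The field names, the trace ID value, and the token format are illustrative assumptions, not the course's prescribed schema:

```go
package main

import (
	"log/slog"
	"os"
)

// redactToken masks all but the last four characters of a secret so
// searches stay useful without leaking the credential itself.
func redactToken(tok string) string {
	if len(tok) <= 4 {
		return "****"
	}
	return "****" + tok[len(tok)-4:]
}

func main() {
	// A JSON handler keeps field names stable and machine-searchable.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// trace_id is carried verbatim from the incoming request so the
	// line joins up with traces; the token is redacted before logging.
	logger.Info("payment authorized",
		slog.String("trace_id", "4bf92f3577b34da6a3ce929d0e0e4736"),
		slog.String("token", redactToken("tok_live_51Habcdef9876")),
	)
}
```

Keeping the trace ID verbatim while masking the token is the core trade: one field must join across systems, the other must never leave them.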
What is included
- Parsing exercises with edge-case Unicode and clock skew
- Sampling strategies that preserve error paths
- Correlation fields that align with tracing headers
- Redaction walkthroughs for identifiers and tokens
- Saved search libraries that teams can fork safely
- Dry runs on cost-versus-signal trade-offs
- Peer review of a logging style guide diff
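One shape the error-preserving sampling exercise can take is a deterministic head sampler: error and warning lines always pass, and lower-severity lines are kept one-in-N, keyed on the trace ID so a sampled request keeps all of its lines together. The one-in-rate policy and FNV keying here are assumptions for illustration, not the course's exact design:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// keep decides whether to emit a log line. Error and warning lines are
// always kept verbatim; other levels are deterministically sampled at
// one-in-rate, keyed on traceID so every line of a kept request survives.
func keep(level, traceID string, rate uint32) bool {
	if level == "error" || level == "warn" {
		return true
	}
	h := fnv.New32a()
	h.Write([]byte(traceID))
	return h.Sum32()%rate == 0
}

func main() {
	fmt.Println(keep("error", "4bf92f35", 100)) // errors survive any rate: true
	fmt.Println(keep("debug", "4bf92f35", 1))   // rate 1 keeps everything: true
}
```

Keying on the trace ID rather than rolling a die per line is what makes the surviving debug output coherent: a request is either fully sampled in or fully sampled out.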
Outcomes
- You draft a logging contract your team can adopt without a rewrite.
- You configure at least two guardrails that catch noisy deploys early.
- You can explain when logs should defer to traces for a given failure mode.
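One of the guardrails above can be as simple as a volume check that compares post-deploy log throughput to a pre-deploy baseline. This is a hedged sketch, not a course artifact; the 3x threshold and per-minute window are invented for illustration:

```go
package main

import "fmt"

// VolumeGuard trips when post-deploy log volume exceeds the pre-deploy
// baseline by more than Threshold-fold, catching deploys that left
// debug logging switched on.
type VolumeGuard struct {
	BaselinePerMin float64 // lines/min observed before the deploy
	Threshold      float64 // e.g. 3.0 means "trip at 3x baseline"
}

// Noisy reports whether the observed rate should flag the deploy.
func (g VolumeGuard) Noisy(currentPerMin float64) bool {
	if g.BaselinePerMin <= 0 {
		return currentPerMin > 0 // no baseline: any volume is suspicious
	}
	return currentPerMin/g.BaselinePerMin >= g.Threshold
}

func main() {
	g := VolumeGuard{BaselinePerMin: 1200, Threshold: 3.0}
	fmt.Println(g.Noisy(5000)) // true: roughly 4x baseline
	fmt.Println(g.Noisy(1500)) // false: within normal variation
}
```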
Instructor of record
Leo Han
Lab platform engineer who enjoys breaking parsers on purpose so classes learn gentle failure modes. Provides primary feedback on labs.
Participant questions
Go and Node appear most often, but the patterns apply broadly. Bring your own snippets for async review.
Recent voices
“The sampling lab finally convinced our juniors that dropping debug lines is not shameful if errors stay verbatim.”
“Redaction exercise caught three fields we were leaking quietly. Worth the enrollment alone.”
“Office hours felt crowded one week; still got answers, just plan questions early.”