Data & Automation

Tools and approaches I enjoy when building data-driven and automated backends.

I like mixing pragmatic tooling with modern runtimes. Python stays at the core (FastAPI, async workers) for expressiveness, and I complement it with Go or Node when concurrency or streaming patterns demand tighter control.

For data orchestration and pipelines, plain queues (Redis / SQS) plus lightweight schedulers beat over-engineered DAG monsters most of the time. I lean on SQL (Postgres / MySQL / analytical extensions) as the source of truth, and reach for Redis or in-memory caches only where latency truly matters.

Observability matters early: structured logs, trace ids that flow across tasks, and a few small metrics (p95 latency, queue depth) give fast feedback loops. I prototype with Docker Compose, then push containers to the cloud (Azure / AWS) using IaC templates.

For AI integration, I like using LLMs to augment extraction, validation, or summarization steps, never as an opaque black box. Guardrails plus deterministic fallbacks keep pipelines stable.

Above all, I keep things simple: fewer moving parts, clear contracts, documented behaviors, reproducible local runs. The sketches below illustrate a few of these patterns.
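A minimal sketch of what I mean by the FastAPI + async worker shape: accept work over HTTP, ack immediately, and let an in-process task drain a queue. The endpoint and handler names are illustrative, not from a real project; a production version would point the worker at Redis or SQS instead of an in-memory queue.

```python
import asyncio
from contextlib import asynccontextmanager

from fastapi import FastAPI

queue: asyncio.Queue = asyncio.Queue()

async def worker() -> None:
    # Drain the in-process queue; production would swap this for Redis/SQS.
    while True:
        record = await queue.get()
        try:
            ...  # validate, enrich, persist
        finally:
            queue.task_done()

@asynccontextmanager
async def lifespan(app: FastAPI):
    task = asyncio.create_task(worker())  # start the worker with the app
    yield
    task.cancel()  # stop it on shutdown

app = FastAPI(lifespan=lifespan)

@app.post("/ingest")
async def ingest(record: dict) -> dict:
    # Accept fast, process asynchronously: the caller gets an immediate ack.
    await queue.put(record)
    return {"queued": True}
```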
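The queue-plus-scheduler point deserves a concrete shape. A sketch of the plain-queue pattern, assuming the redis-py client and a Redis list named `tasks` (both illustrative):

```python
import json

import redis  # assumes the redis-py client is installed

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def handle(task: dict) -> None:
    ...  # the actual pipeline step: extract, validate, load

def worker(queue: str = "tasks") -> None:
    while True:
        item = r.blpop(queue, timeout=5)  # block until a task arrives
        if item is None:
            continue  # timeout: loop again, a natural shutdown checkpoint
        _, payload = item
        handle(json.loads(payload))

# Producing a task is one call, no DAG engine required:
# r.rpush("tasks", json.dumps({"kind": "extract", "url": "..."}))
```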
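For trace ids that flow across tasks, a contextvar plus a logging filter goes a long way, since asyncio tasks inherit the context automatically. A stdlib-only sketch (names are illustrative):

```python
import contextvars
import logging
import uuid

trace_id = contextvars.ContextVar("trace_id", default="-")

class TraceFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Stamp every record with the current task's trace id.
        record.trace_id = trace_id.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s trace=%(trace_id)s %(message)s")
)
handler.addFilter(TraceFilter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
log = logging.getLogger("pipeline")

def run_task(payload: dict) -> None:
    trace_id.set(uuid.uuid4().hex[:8])  # one id per task, inherited by sub-calls
    log.info("task started")
    ...  # every step in between logs with the same trace id
    log.info("task finished")
```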
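And the guardrail idea in miniature: the LLM proposes structured output, a strict check accepts or rejects it, and a deterministic extractor is the fallback. `call_llm` and the invoice example are hypothetical stand-ins for whatever client and extraction step you actually use.

```python
import json
import re

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: your LLM client goes here

def deterministic_extract(text: str) -> dict:
    # Boring, predictable fallback: a regex that never hallucinates.
    match = re.search(r"invoice\s*#?\s*(\d+)", text, re.IGNORECASE)
    return {"invoice_id": match.group(1) if match else None}

def extract_invoice_id(text: str) -> dict:
    try:
        raw = call_llm(f'Return JSON {{"invoice_id": ...}} for: {text}')
        data = json.loads(raw)
        # Guardrail: only accept output that has the expected shape
        # and is literally present in the source text.
        if isinstance(data.get("invoice_id"), str) and data["invoice_id"] in text:
            return {"invoice_id": data["invoice_id"]}
    except Exception:
        pass  # malformed output falls through to the deterministic path
    return deterministic_extract(text)
```

The point is that the deterministic path defines the floor: the LLM can only improve on it, never destabilize the pipeline.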
What I build