Prompts, Tokens, and Plot Twists
Notes from the application layer, where LLMs meet reality: latency budgets, token bills, tool failures, and users who don't care that your agent was "almost right."
I break down coding agents and the systems around them, then write about what actually makes them reliable.