6 articles tagged #metrics.
Frameworks and metrics for measuring AI system performance, quality, and safety, from standard benchmarks to domain-specific evaluations.
Practices and tools for monitoring, tracing, and debugging AI systems in production, covering token metrics, latency, response quality, costs, and hallucination detection.
MCP server for analyzing git repository metrics and understanding team health. Published on npm as @jonmatum/git-metrics-mcp-server.
Collection and visualization of numerical system measurements over time to understand performance, detect anomalies, and make data-driven decisions.
The ability to infer a system's internal state from its external outputs (logs, metrics, and traces), enabling problem diagnosis without direct access to the system.
Framework for defining, measuring, and communicating service reliability through service level objectives (SLOs), service level indicators (SLIs), and service level agreements (SLAs).
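The SLO/SLI relationship above can be sketched as a small check: an SLI is a measured ratio, the SLO is the target it is compared against, and the error budget is the allowed shortfall (1 − SLO). The request counts, the 99.9% target, and the function names below are hypothetical examples, not values from any real system.

```python
def availability_sli(good_requests: int, total_requests: int) -> float:
    """SLI: fraction of requests served successfully."""
    if total_requests == 0:
        return 1.0
    return good_requests / total_requests

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Fraction of the error budget (1 - SLO) still unspent."""
    budget = 1.0 - slo_target
    if budget <= 0:
        return 0.0
    spent = 1.0 - sli
    return max(0.0, 1.0 - spent / budget)

# Hypothetical month: 999,100 good requests out of 1,000,000,
# measured against a "three nines" (99.9%) availability SLO.
sli = availability_sli(good_requests=999_100, total_requests=1_000_000)
target = 0.999
print(f"SLI: {sli:.4%}, meets SLO: {sli >= target}")
print(f"Error budget remaining: {error_budget_remaining(sli, target):.1%}")
```

With these numbers the service spent 900 of its 1,000 allowed failures, leaving 10% of the budget; an SLA would typically be a contractual commitment layered on top of such an SLO, with looser targets and defined penalties.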