Stable knowledge
Practice of designing and developing digital products usable by all people, including those with visual, auditory, motor, or cognitive disabilities.
Design patterns where AI agents execute complex multi-step tasks autonomously, combining reasoning, tool use, and iterative decision-making.
Autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.
Tools using LLMs to help developers write, understand, debug, and refactor code, from autocomplete to agents that implement complete features.
Frameworks and metrics for measuring AI system performance, quality, and safety, from standard benchmarks to domain-specific evaluations.
Practices and tools for monitoring, tracing, and debugging AI systems in production, covering token metrics, latency, response quality, costs, and hallucination detection.
Patterns and frameworks for coordinating multiple AI models, tools, and data sources in production pipelines, managing flow between components, memory, and error recovery.
Field dedicated to ensuring artificial intelligence systems behave safely, predictably, and in alignment with human values, minimizing risks of harm.
Practices for configuring effective alerts that flag real problems without causing alert fatigue from excessive notifications.
Principles and practices for designing clear, consistent, and evolvable programming interfaces that facilitate integration between systems.
Practices and tools for documenting APIs clearly, interactively, and maintainably, from OpenAPI specifications to documentation portals.
Pattern providing a single entry point for multiple microservices, handling routing, authentication, rate limiting, and response aggregation.
Field of computer science dedicated to creating systems capable of performing tasks that normally require human intelligence, from reasoning and perception to language generation.
AWS managed service for creating, publishing, and managing REST, HTTP, and WebSocket APIs that act as entry points to Lambda functions and other backend services.
AWS serverless service providing access to foundation models from multiple providers (Anthropic, Meta, Mistral, Amazon) via unified API, without managing ML infrastructure.
AWS infrastructure as code framework that allows defining cloud resources using programming languages like TypeScript, Python, or Java, generating CloudFormation.
AWS native service for defining and provisioning infrastructure as code using YAML or JSON templates, with state management and automatic rollback.
AWS serverless NoSQL database with single-digit millisecond latency at any scale, ideal for applications requiring high performance and automatic scalability.
AWS container orchestration service that runs and scales Docker applications without managing the underlying cluster infrastructure.
AWS serverless event bus connecting applications using events, enabling decoupled event-driven architectures with rule-based routing.
AWS serverless compute engine for containers that eliminates server management, allowing Docker containers to run while paying only for consumed resources.
AWS identity and access management service controlling who can do what in your account, with granular policies based on the principle of least privilege.
AWS serverless compute service that runs code in response to events without provisioning or managing servers, automatically scaling from zero to thousands of concurrent executions.
AWS object storage service with 99.999999999% durability, unlimited scalability, and multiple storage classes for cost optimization.
AWS open-source framework for building serverless applications with simplified CloudFormation syntax, CLI for local development, and integrated deployment.
AWS pub/sub messaging service that distributes messages to multiple subscribers simultaneously, enabling fan-out patterns and notifications at scale.
AWS fully managed message queue service that decouples distributed application components, guaranteeing message delivery with unlimited scalability.
AWS serverless orchestration service that coordinates multiple services into visual workflows using Amazon States Language (ASL), with built-in error handling, retries, and parallel execution.
AWS framework with six pillars of best practices for designing and operating reliable, secure, efficient, and cost-effective cloud systems.
Architectural pattern where each client type has its own dedicated backend adapting microservice APIs to that client's specific needs.
Spotify's open-source platform for building developer portals, with service catalog, templates, and extensible plugin system.
Prompting technique that improves LLM reasoning by asking them to decompose complex problems into explicit intermediate steps before reaching a conclusion.
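As a minimal sketch, a chain-of-thought prompt can be built by wrapping the question in an instruction to show intermediate steps; `build_cot_prompt` is a hypothetical helper, not part of any library:

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction so the model
    writes out its intermediate reasoning before the final answer."""
    return (
        f"Question: {question}\n"
        "Think step by step: decompose the problem into intermediate steps,\n"
        "solve each one explicitly, then give the final answer on the last line."
    )

prompt = build_cot_prompt(
    "A train covers 120 km in 90 minutes. What is its average speed in km/h?"
)
```

The resulting string can be sent as-is to any chat model; the instruction itself, not any API feature, is what elicits the intermediate steps.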
Discipline of experimenting on production systems to discover weaknesses before they cause incidents, by injecting controlled failures.
Continuous Integration and Continuous Delivery/Deployment — practices that automate code integration, testing, and delivery to production. Foundation of modern software engineering.
Principles for designing intuitive, consistent, and productive command-line interfaces that developers enjoy using.
Development approach leveraging cloud advantages: containers, microservices, immutable infrastructure, and declarative automation for scalable and resilient systems.
Practices, tools, and metrics for maintaining readable, maintainable, testable, and defect-free code over time.
Repositories for storing, versioning, and distributing container images, from public registries like Docker Hub to private registries like ECR.
Practices and tools for securing containers throughout their lifecycle: image building, runtime, orchestration, and compliance.
The maximum number of tokens an LLM can process in a single interaction, determining how much information it can consider simultaneously to generate responses.
Practices and strategies to minimize cloud spending without sacrificing performance, including right-sizing, reservations, spot instances, and eliminating idle resources.
Pattern separating read and write operations into distinct models, optimizing each independently for performance and scalability.
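A minimal sketch of the separation, using a hypothetical bank-account domain: the write model validates commands and emits events, while the read model is a denormalized view projected from those events and optimized for one query.

```python
from dataclasses import dataclass, field


@dataclass
class AccountWriteModel:
    """Write side: validates commands and records what happened."""
    balance: int = 0
    events: list = field(default_factory=list)

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount
        self.events.append(("deposited", amount))


class DepositTotalsView:
    """Read side: a view projected from write-side events, which can be
    scaled, indexed, and rebuilt independently of the write model."""
    def __init__(self) -> None:
        self.total_deposited = 0

    def project(self, event: tuple) -> None:
        kind, amount = event
        if kind == "deposited":
            self.total_deposited += amount
```

In a real system the projection would typically consume events asynchronously from a bus, accepting eventual consistency in exchange for independent scaling.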
Collection of reusable components, patterns, and guidelines ensuring visual and interaction consistency in digital products at scale.
Specification for defining reproducible development environments in containers, eliminating 'works on my machine' problems and accelerating onboarding.
Discipline focused on optimizing developer productivity, satisfaction, and effectiveness through well-designed tools, processes, and environments.
Structured process for new developers to become productive quickly, from environment setup to understanding team architecture and processes.
Centralized platforms providing developers with documentation, APIs, tools, and service catalogs in one place.
Culture and set of practices that unify development (Dev) and operations (Ops) to deliver software with greater speed, quality, and reliability. It's not a role — it's a way of working.
Set of technical and cultural practices that implement DevOps principles — from Infrastructure as Code to blameless post-mortems. The "how" behind the philosophy.
Integration of security practices throughout the software development lifecycle, automating security controls in the CI/CD pipeline.
Observability technique tracking requests across multiple services in distributed systems, enabling bottleneck identification and failure diagnosis.
Container platform that packages applications with all dependencies into portable, consistent units that run identically in any environment.
Tool for defining and running multi-container applications with a YAML file, simplifying local development of systems with multiple services.
Practice of treating documentation with the same tools and processes as code: versioned in Git, reviewed in PRs, and automatically generated when possible.
Software design approach centering development on the business domain, using a ubiquitous language shared between developers and domain experts.
Dense vector representations that capture the semantic meaning of text, images, or other data in a numerical space where proximity reflects conceptual similarity.
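The "proximity reflects similarity" idea is usually measured with cosine similarity; a toy sketch with hand-made 3-dimensional vectors (real embedding models emit hundreds or thousands of dimensions, but the comparison works the same way):

```python
import math


def cosine_similarity(a: list, b: list) -> float:
    """Proximity in embedding space: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Hand-made toy vectors standing in for real model output.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]
```

Here `cat` scores much closer to `kitten` than to `invoice`, which is exactly the property semantic search and RAG systems exploit.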
Pattern where application state is derived from an immutable sequence of events, providing complete audit trail and the ability to reconstruct state at any point in time.
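The reconstruct-at-any-point property comes from folding a pure transition function over the event log; a minimal sketch with an assumed bank-balance domain:

```python
def apply(balance: int, event: dict) -> int:
    """Pure transition function: current state + one event -> next state."""
    if event["type"] == "deposited":
        return balance + event["amount"]
    if event["type"] == "withdrawn":
        return balance - event["amount"]
    return balance


def replay(events: list) -> int:
    """Reconstruct state by folding over the immutable event log;
    replaying a prefix of the log gives the state at that point in time."""
    balance = 0
    for event in events:
        balance = apply(balance, event)
    return balance


log = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 5},
]
```

`replay(log)` yields the current balance, while `replay(log[:2])` yields the balance as it was before the last deposit, which is the audit-trail benefit in action.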
Architectural pattern where components communicate through asynchronous events, enabling decoupled, scalable, and reactive systems.
Technique enabling activation or deactivation of features in production without deploying new code, enabling progressive releases and experimentation.
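Progressive releases are often implemented by hashing the user into a stable bucket; a sketch under the assumption that flags map to rollout percentages (in production the flag table would live in a flag service or config store, not in code):

```python
import hashlib

# Flag -> rollout percentage; illustrative values only.
FLAGS = {"new-checkout": 20}


def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket each user into 0-99; a user stays in the
    same bucket across requests, so their experience is stable while the
    rollout percentage grows from 0 to 100 without any redeploy."""
    rollout = FLAGS.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout
```

Unknown flags default to off, which keeps a missing config entry from accidentally exposing an unfinished feature.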
Process of specializing a pre-trained model for a specific task or domain through additional training with curated data, adapting its behavior without starting from scratch.
LLM capability to generate structured calls to external functions based on natural language, enabling integration with APIs, databases, and real-world tools.
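A sketch of the two halves of the loop: a tool schema in the JSON shape most providers accept, and a dispatcher that executes the structured call the model emits. The tool name and the canned response are illustrative, not a real weather API:

```python
import json

# Tool schema advertised to the model (names are illustrative).
TOOLS = [{
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]


def dispatch(tool_call_json: str) -> dict:
    """Execute the structured call the model emitted as JSON."""
    call = json.loads(tool_call_json)
    if call["name"] == "get_weather":
        # A real implementation would query a weather API here.
        return {"city": call["arguments"]["city"], "temp_c": 21}
    raise ValueError(f"unknown tool: {call['name']}")
```

The result is then fed back to the model as a tool message so it can compose a natural-language answer.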
Distributed version control system created by Linus Torvalds in 2005. Foundation of every modern development workflow — from local commits to global collaboration.
Branching model for Git proposed by Vincent Driessen in 2010. Defines branches with fixed roles (main, develop, feature, release, hotfix) for managing structured releases.
Collaborative development platform built on Git. More than repository hosting — it's the central hub for code review, CI/CD, project management, and open source collaboration.
GitHub's native CI/CD platform. Declarative YAML workflows that automate build, test, deploy, and any development lifecycle task — directly from the repository.
Minimalist branching model designed for continuous deployment. Only two elements — main and feature branches — with PRs as the integration point and immediate deploy after merge.
Operational practice using Git as single source of truth for infrastructure and configuration, with automatic reconciliation between desired and actual state.
Recommended, pre-configured paths for common development tasks incorporating best practices, reducing cognitive load for teams.
Techniques to reduce LLMs generating false but plausible information, from RAG to factual verification and prompt design.
Package manager for Kubernetes that simplifies installation and management of complex applications through reusable and configurable charts.
Architectural pattern isolating business logic from the outside world through ports and adapters, facilitating testing and technology changes.
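A minimal sketch of a port and two of its roles, with hypothetical names: the core service depends only on the `NotificationPort` interface, and the test adapter can be swapped for an SMTP or SMS adapter without touching business logic.

```python
from typing import Protocol


class NotificationPort(Protocol):
    """Port: the interface the business core depends on."""
    def send(self, to: str, message: str) -> None: ...


class GreetingService:
    """Core logic knows only the port, never a concrete technology."""
    def __init__(self, notifier: NotificationPort) -> None:
        self.notifier = notifier

    def greet(self, user: str) -> None:
        self.notifier.send(user, f"Hello, {user}!")


class InMemoryNotifier:
    """Adapter used in tests; a production adapter might wrap SMTP or SMS."""
    def __init__(self) -> None:
        self.sent: list = []

    def send(self, to: str, message: str) -> None:
        self.sent.append((to, message))
```

This is exactly the "facilitating testing and technology changes" benefit: the test exercises real business logic with a fake adapter.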
Processes and practices for detecting, responding to, resolving, and learning from production incidents in a structured and effective way.
Techniques to reduce cost, latency, and resources needed to run language models in production, from quantization to distributed serving.
Practice of defining and managing infrastructure through versioned configuration files instead of manual processes. Foundation of modern operations automation.
Application of open-source development practices within an organization, allowing teams to contribute to other teams' projects with transparent processes.
Internally built platforms abstracting infrastructure and operations complexity, providing self-service to development teams.
Data structures representing knowledge as networks of entities and relationships, enabling reasoning, connection discovery, and semantic queries over complex domains.
Container orchestration platform that automates deployment, scaling, and management of containerized applications at scale, becoming the de facto standard for cloud native.
Massive neural networks based on the Transformer architecture, trained on enormous text corpora to understand and generate natural language with emergent capabilities like reasoning, translation, and code generation.
Automated tools that verify style, detect potential errors, and format code consistently, eliminating style debates and improving quality.
Proposed standard for publishing a Markdown file at a website's root that enables language models to efficiently understand and use the site's content at inference time.
Practices and tools for creating productive development environments on the developer's machine, replicating production as closely as possible.
Practices for implementing effective logging in distributed systems: structured logging, levels, correlation, and centralized aggregation.
Structured frameworks for progressively assessing and improving organizational capabilities, from CMMI to modern approaches like DORA and simplified models.
Collection and visualization of numerical system measurements over time to understand performance, detect anomalies, and make data-driven decisions.
Architectural pattern extending microservices to the frontend, allowing independent teams to develop and deploy parts of a web application autonomously.
Architectural style structuring an application as a collection of small, independent, deployable services, each with its own business logic and data.
Open protocol created by Anthropic that standardizes how AI applications connect with external tools, data, and services through a universal interface.
Code organization strategy where multiple projects coexist in a single repository, sharing dependencies, configuration, and build tooling.
Architectures where multiple specialized AI agents collaborate, compete, or coordinate to solve complex problems that exceed a single agent's capability.
Computational models inspired by brain structure that learn patterns from data, forming the foundation of modern artificial intelligence systems.
React framework for full-stack web applications with Server Components, file-based routing, SSR/SSG, and built-in performance optimizations.
Industry standards for delegated authorization (OAuth 2.0) and federated authentication (OpenID Connect), enabling third-party login and secure API access.
Ability to understand a system's internal state from its external outputs: logs, metrics, and traces, enabling problem diagnosis without direct system access.
Open source fork of Terraform maintained by the Linux Foundation. Compatible with HCL and Terraform providers, created in response to HashiCorp's license change to BSL 1.1.
Discipline designing and building internal self-service platforms so development teams can deploy and operate applications autonomously.
Practice of defining security, compliance, and governance policies as versioned, executable code, automating their verification in CI/CD pipelines.
Web applications using modern technologies to deliver native app-like experiences: installable, offline-capable, and with push notifications.
Technique that stores the internal computation of reused prompt prefixes across LLM calls, reducing costs by up to 90% and latency by up to 85% in applications with repetitive context.
The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.
JavaScript library for building user interfaces through declarative, reusable components, with an ecosystem spanning from SPAs to full-stack applications with Server Components.
Architectural pattern that combines information retrieval from external sources with LLM text generation, reducing hallucinations and keeping knowledge current without retraining the model.
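The retrieve-then-generate flow can be sketched with a toy corpus and word-overlap ranking; a production RAG system would use vector embeddings and a vector database for the retrieval step, but the prompt assembly is the same idea:

```python
# Toy knowledge base; contents are illustrative.
CORPUS = [
    "Refunds are issued within 14 days of purchase.",
    "Standard shipping takes 3 to 5 business days.",
]


def retrieve(question: str, k: int = 1) -> list:
    """Rank documents by word overlap with the question (toy retriever)."""
    words = set(question.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda doc: len(words & set(doc.lower().rstrip(".").split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(question: str) -> str:
    """Ground the model in retrieved context instead of its weights alone."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Because the answer is constrained to retrieved context, updating the corpus updates the model's effective knowledge without any retraining.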
Pattern for managing distributed transactions in microservices through a sequence of local transactions with compensating actions to handle failures.
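The compensation mechanics can be sketched as a runner over (action, compensation) pairs; the order/payment step names are hypothetical:

```python
def run_saga(steps: list) -> bool:
    """Run (action, compensation) pairs in order; if any action fails,
    run the compensations of the completed steps in reverse to undo them."""
    completed = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for comp in reversed(completed):
                comp()
            return False
        completed.append(compensate)
    return True


# Illustrative order flow where the payment step fails.
log = []


def charge_card():
    raise RuntimeError("payment declined")


ok = run_saga([
    (lambda: log.append("order-created"), lambda: log.append("order-cancelled")),
    (charge_card, lambda: log.append("charge-refunded")),
])
```

In a real microservices deployment each action would be a local transaction in a different service, coordinated either by an orchestrator or by choreographed events.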
Principles for designing development kits that are intuitive, consistent, and facilitate service integration across multiple programming languages.
Practices and tools for securely storing, distributing, and rotating credentials, API keys, and other sensitive data in applications and pipelines.
Development practices preventing security vulnerabilities from design, including input validation, error handling, and defense-in-depth principles.
Model where development teams can provision and manage infrastructure autonomously through automated interfaces, without depending on operations tickets.
Information retrieval technique that uses vector embeddings to find results by meaning, not just exact keyword matching.
React paradigm where components execute on the server, sending only the rendered output to the client, reducing the JavaScript bundle and improving performance.
Cloud computing model where the provider manages infrastructure automatically, allowing code execution without provisioning or managing servers, paying only for actual usage.
Infrastructure layer dedicated to managing communication between microservices, providing observability, security, and traffic control transparently.
Discipline applying software engineering principles to infrastructure operations, focusing on creating scalable and highly reliable systems.
Framework for defining, measuring, and communicating service reliability through service level objectives (SLOs), indicators (SLIs), and agreements (SLAs).
Development methodology where the specification is written before the code, serving as a contract between teams and as the source of truth for implementation.
Patterns and libraries for managing frontend application state predictably, from local component state to shared global state.
Open source SDK from AWS for building AI agents with a model-driven approach. Functional agents in a few lines of code, with multi-model support, custom tools, MCP, multi-agent, and built-in observability.
Incremental migration strategy that gradually replaces a legacy system with new components, progressively routing traffic until the old system can be retired.
Practices for ensuring the integrity and security of all dependencies, tools, and processes comprising the software development pipeline.
Algorithmically generated data that replicates the statistical properties of real data, used to train, evaluate, and test AI systems when real data is scarce, expensive, or sensitive.
Utility-first CSS framework enabling design building directly in markup using atomic classes, eliminating the need to write custom CSS.
HashiCorp's Infrastructure as Code tool that enables defining, provisioning, and managing multi-cloud infrastructure through declarative HCL files.
Approaches and testing levels for validating software works correctly, from unit tests to end-to-end tests and testing in production.
Process of splitting text into discrete units (tokens) that language models can process numerically, fundamental to how LLMs understand and generate text.
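A toy sketch of the splitting step: greedy longest-match over a hand-made vocabulary. Real LLM tokenizers learn subword vocabularies from data (e.g. BPE), but the output is the same kind of thing, a list of integer token IDs:

```python
def tokenize(text: str, vocab: dict) -> list:
    """Greedy longest-match tokenization over a toy vocabulary."""
    ids = []
    i = 0
    while i < len(text):
        # Try the longest piece first, shrinking until something matches.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                ids.append(vocab[piece])
                i += length
                break
        else:
            ids.append(vocab["<unk>"])  # unknown-character fallback
            i += 1
    return ids


# Hand-made vocabulary; real vocabularies hold tens of thousands of pieces.
VOCAB = {"un": 1, "token": 2, "iz": 3, "ation": 4, "<unk>": 0}
```

`tokenize("untokenization", VOCAB)` splits the word into the subwords `un`, `token`, `iz`, `ation`, illustrating why a model's context window is counted in tokens rather than characters.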
Design strategies and patterns for AI agents to select, invoke, and combine external tools effectively to complete complex tasks.
Twelve-principle methodology for building modern SaaS applications that are portable, scalable, and deployable on cloud platforms.
Typed superset of JavaScript adding optional static types, improving developer productivity, error detection, and code maintainability.
Discipline encompassing every aspect of a person's interaction with a product, system, or service, aiming for usefulness, usability, and satisfaction.
Storage systems specialized in indexing and searching high-dimensional vectors efficiently, enabling semantic search and RAG applications at scale.
Automated process of identifying known vulnerabilities in code, dependencies, containers, and infrastructure before they reach production.
Native web standards for creating reusable, encapsulated components that work in any framework or without one.
Security architecture that rigorously verifies every request regardless of origin, eliminating implicit trust in internal networks.