Field of computer science dedicated to creating systems capable of performing tasks that normally require human intelligence, from reasoning and perception to language generation.
Artificial intelligence (AI) is the field of computer science that seeks to create systems capable of performing tasks that traditionally require human intelligence: reasoning, learning, perceiving, generating language, and making decisions.
The concept is not new: the term was coined at the Dartmouth conference in 1956. What transformed the field in recent years is the convergence of three factors: massive amounts of data, accessible compute power (GPUs, TPUs), and advances in neural network architectures, particularly the Transformer, introduced in 2017.
Foundation models are large-scale neural networks trained on massive amounts of unlabeled data. They're called "foundation" models because they serve as a base for many downstream tasks without requiring complete retraining.
Examples: GPT-4, Claude, Gemini, Llama, Mistral.
Their key characteristic is emergence: capabilities that weren't explicitly programmed but arise from training at scale, such as chain-of-thought reasoning, cross-language translation, or code generation.
Large language models (LLMs) are a subset of foundation models specialized in processing and generating text. They use the Transformer architecture, whose attention mechanisms allow them to capture long-range relationships in text sequences.
Current LLMs don't just generate text — they can follow complex instructions, maintain context in long conversations, and use external tools when configured to do so.
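To make the tool-use part concrete, here is a minimal sketch of the dispatch step on the application side. The tool name and the hard-coded tool call are illustrative stand-ins; in a real system the JSON request comes from the model's response.

```python
import json

# A tool the application exposes to the model (name is illustrative).
def get_weather(city: str) -> str:
    return f"Sunny, 24 °C in {city}"  # stand-in for a real weather API call

TOOLS = {"get_weather": get_weather}

# In a real system this JSON comes back from the model; it is hard-coded
# here to show only the dispatch step.
model_tool_call = '{"tool": "get_weather", "arguments": {"city": "Madrid"}}'

call = json.loads(model_tool_call)
result = TOOLS[call["tool"]](**call["arguments"])

# The result is appended to the conversation so the model can write the final answer.
print(result)  # Sunny, 24 °C in Madrid
```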
Generative AI is the application of foundation models to create new content: text, code, images, audio, and video. It's the most visible layer for end users and the one that has driven massive adoption since 2022.
The way humans interact with AI systems has evolved rapidly:
| Paradigm | Mechanism | Main limitation | Example |
|---|---|---|---|
| Prompting | Natural language instructions | Depends on model knowledge | ChatGPT, Claude |
| RAG | Queries external sources before responding | Quality depends on retrieval | Perplexity, enterprise systems |
| Tool Use | Invokes APIs, databases, or services | Requires defining available tools | Function calling, MCP |
| Agents | Reasoning + memory + tools in multiple steps | Orchestration complexity and safety | Copilot Workspace, Devin |
Each paradigm builds on the previous one. AI agents represent the current frontier.
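To make the RAG row of the table concrete, the sketch below shows the retrieve-then-generate pattern. Word overlap stands in for real embedding-based retrieval, and the documents and scoring are illustrative only.

```python
# Toy RAG: retrieve the most relevant snippet, then build the prompt.
DOCS = [
    "The Transformer architecture was introduced in 2017.",
    "Foundation models are trained on massive amounts of unlabeled data.",
    "MCP standardizes how AI applications connect to external tools.",
]

def score(query: str, doc: str) -> int:
    # Word-overlap scoring as a stand-in for embedding similarity.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str) -> str:
    return max(DOCS, key=lambda d: score(query, d))

query = "When was the Transformer introduced?"
context = retrieve(query)

# The retrieved context is injected into the prompt before calling the model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```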
AI has moved from a research field to an engineering tool. Understanding its layers — from foundation models to agent frameworks — enables informed decisions about what to build, what to buy, and where to invest learning effort.
Autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.
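A minimal sketch of that loop, with a stub in place of the language model: the agent decides an action, calls a tool, stores the observation, and repeats until it decides to finish. The names and the canned planning logic are illustrative.

```python
# Minimal agent loop: the "model" is a stub that plans one step at a time.
def stub_model(goal: str, memory: list[str]) -> dict:
    if not memory:
        return {"action": "search", "input": goal}
    return {"action": "finish", "input": "Summary based on: " + memory[-1]}

def search(query: str) -> str:
    return f"(search results for '{query}')"  # stand-in for a real tool

TOOLS = {"search": search}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory: list[str] = []                    # observations from previous steps
    for _ in range(max_steps):
        step = stub_model(goal, memory)       # reasoning: decide the next action
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])  # tool use
        memory.append(observation)            # memory
    return "Stopped: step limit reached."

print(run_agent("compare foundation model licensing terms"))
```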
Open protocol created by Anthropic that standardizes how AI applications connect with external tools, data, and services through a universal interface.
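A minimal sketch of an MCP server, assuming the official Python SDK and its FastMCP helper (names follow its quickstart; treat the exact interface as an assumption): a tool is declared once and any MCP-capable client can discover and call it.

```python
# Sketch of an MCP server exposing one tool.
# Assumes the `mcp` Python SDK and its FastMCP helper are installed.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")  # server name shown to clients

@mcp.tool()
def count_words(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP (stdio by default)
```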
Computational models inspired by brain structure that learn patterns from data, forming the foundation of modern artificial intelligence systems.
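As a toy illustration of learning patterns from data, a single neuron fitting y = 2x by gradient descent; the data and learning rate are arbitrary.

```python
# One neuron learning y = 2*x from examples, by gradient descent on squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output) pairs
w = 0.0      # the weight the network will learn
lr = 0.05    # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x                 # forward pass
        grad = 2 * (pred - y) * x    # gradient of (pred - y)^2 w.r.t. w
        w -= lr * grad               # update step

print(round(w, 3))  # approaches 2.0: the pattern was learned from the data
```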
Key takeaways from Nate B Jones' second brain series — from the original 8 building blocks to Open Brain (Postgres + MCP), the two-door principle, and the implementation gap.
Key takeaways from Dr. Werner Vogels' final keynote at AWS re:Invent 2025, where he presents the Renaissance Developer framework and argues why AI will not replace developers who evolve.
Key takeaways from Dario Amodei's essay on civilizational risks of powerful AI and how to confront them.
Chronicle of building a second brain with a knowledge graph, bilingual pipeline, and agent endpoints — in days, not weeks, and what that teaches about the gap between theory and working systems.
Open source SDK from AWS for building AI agents with a model-driven approach. It yields functional agents in a few lines of code, with multi-model support, custom tools, MCP, multi-agent orchestration, and built-in observability.
Development methodology where the specification is written before the code, serving as a contract between teams and as the source of truth for implementation.
Information retrieval technique that uses vector embeddings to find results by meaning, not just exact keyword matching.
The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.
Structured frameworks for progressively assessing and improving organizational capabilities, from CMMI to modern approaches like DORA and simplified models.
Proposed standard for publishing a Markdown file at a website's root that enables language models to efficiently understand and use the site's content at inference time.
Massive neural networks based on the Transformer architecture, trained on enormous text corpora to understand and generate natural language with emergent capabilities like reasoning, translation, and code generation.
Data structures representing knowledge as networks of entities and relationships, enabling reasoning, connection discovery, and semantic queries over complex domains.
Process of specializing a pre-trained model for a specific task or domain through additional training with curated data, adapting its behavior without starting from scratch.
Frameworks and metrics for measuring AI system performance, quality, and safety, from standard benchmarks to domain-specific evaluations.
Field dedicated to ensuring artificial intelligence systems behave safely, predictably, and in alignment with human values, minimizing risks of harm.