jonmatum · alpha

© 2026 Jonatan Mata · alpha · v0.1.0

#llm

20 articles tagged #llm.

  • Agentic Workflows

    Design patterns where AI agents execute complex multi-step tasks autonomously, combining reasoning, tool use, and iterative decision-making.

evergreen · #agentic #workflows #ai-agents #orchestration #automation #llm
  • AI Agents

    Autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.

growing · #ai-agents #llm #autonomous-systems #tool-use #agentic-ai #react-pattern
  • AI Coding Assistants

Tools that use LLMs to help developers write, understand, debug, and refactor code, ranging from autocomplete to agents that implement complete features.

seed · #coding-assistant #copilot #ai-tools #developer-experience #llm #ide
  • AI Evaluation Metrics

    Frameworks and metrics for measuring AI system performance, quality, and safety, from standard benchmarks to domain-specific evaluations.

seed · #evaluation #benchmarks #metrics #llm #quality #testing
  • AI Observability

    Practices and tools for monitoring, tracing, and debugging AI systems in production, covering token metrics, latency, response quality, costs, and hallucination detection.

evergreen · #observability #llm #monitoring #tracing #langfuse #production #metrics
  • AI Orchestration

Patterns and frameworks for coordinating multiple AI models, tools, and data sources in production pipelines, handling the flow between components, memory, and error recovery.

evergreen · #orchestration #llm #agents #pipelines #langchain #production #workflows
  • Artificial Intelligence

    Field of computer science dedicated to creating systems capable of performing tasks that normally require human intelligence, from reasoning and perception to language generation.

growing · #ai #machine-learning #deep-learning #llm #neural-networks #foundation-models
  • AWS Bedrock

Managed AWS service that provides API access to foundation models from multiple providers (Anthropic, Meta, Mistral) without requiring you to run ML infrastructure.

seed · #aws #bedrock #llm #ai #foundation-models #serverless
  • Chain-of-Thought

Prompting technique that improves LLM reasoning by asking the model to decompose complex problems into explicit intermediate steps before reaching a conclusion.

seed · #chain-of-thought #cot #reasoning #prompting #llm #problem-solving
  • Context Windows

    The maximum number of tokens an LLM can process in a single interaction, determining how much information it can consider simultaneously to generate responses.

seed · #context-window #tokens #llm #memory #attention #scaling
  • Fine-Tuning

    Process of specializing a pre-trained model for a specific task or domain through additional training with curated data, adapting its behavior without starting from scratch.

seed · #fine-tuning #llm #transfer-learning #lora #rlhf #training
  • Function Calling

    LLM capability to generate structured calls to external functions based on natural language, enabling integration with APIs, databases, and real-world tools.

evergreen · #function-calling #tool-use #llm #api #json #structured-output
  • Hallucination Mitigation

Techniques for reducing the false but plausible information LLMs generate, from retrieval-augmented generation (RAG) to factual verification and prompt design.

seed · #hallucination #factuality #grounding #rag #verification #llm
  • Inference Optimization

    Techniques to reduce cost, latency, and resources needed to run language models in production, from quantization to distributed serving.

seed · #inference #optimization #quantization #latency #serving #llm #performance
  • Large Language Models

    Massive neural networks based on the Transformer architecture, trained on enormous text corpora to understand and generate natural language with emergent capabilities like reasoning, translation, and code generation.

evergreen · #llm #transformer #gpt #claude #foundation-models #deep-learning #nlp
  • Prompt Caching

    Technique that stores the internal computation of reused prompt prefixes across LLM calls, reducing costs by up to 90% and latency by up to 85% in applications with repetitive context.

evergreen · #prompt-caching #llm #cost-reduction #latency #anthropic #openai #optimization
  • Prompt Engineering

    The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.

growing · #prompt-engineering #llm #anthropic #openai #google #meta #best-practices #ai-tools
  • Retrieval-Augmented Generation

    Architectural pattern that combines information retrieval from external sources with LLM text generation, reducing hallucinations and keeping knowledge current without retraining the model.

evergreen · #rag #llm #embeddings #vector-search #information-retrieval #ai-architecture
  • Synthetic Data

    Algorithmically generated data that replicates the statistical properties of real data, used to train, evaluate, and test AI systems when real data is scarce, expensive, or sensitive.

seed · #synthetic-data #data-generation #privacy #training #evaluation #llm #augmentation
  • Tokenization

    Process of splitting text into discrete units (tokens) that language models can process numerically, fundamental to how LLMs understand and generate text.

seed · #tokenization #bpe #tokens #nlp #llm #preprocessing
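As a minimal illustration of the tokenization entry above: a toy byte-pair-encoding (BPE) sketch that learns merge rules from a tiny corpus and applies them to split a word into subword tokens. This is illustrative only; the corpus, merge count, and function names are made up here and do not reflect any real model's tokenizer.

```python
from collections import Counter

def learn_merges(text: str, num_merges: int) -> list[tuple[str, str]]:
    """Learn BPE merge rules: repeatedly fuse the most frequent adjacent pair."""
    words = [list(w) for w in text.split()]  # start from single characters
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across all words.
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Replace every occurrence of the best pair with the fused symbol.
        merged = []
        for w in words:
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == best:
                    out.append(w[i] + w[i + 1])
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            merged.append(out)
        words = merged
    return merges

def tokenize(word: str, merges: list[tuple[str, str]]) -> list[str]:
    """Apply learned merges in order to split a word into subword tokens."""
    symbols = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

merges = learn_merges("low lower lowest low low", 3)
print(tokenize("lowest", merges))  # → ['lowe', 's', 't']
```

Real tokenizers (e.g. the byte-level BPE variants used by GPT-style models) work on bytes rather than characters and learn tens of thousands of merges, but the core loop is the same: frequent adjacent pairs become single tokens.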