jonmatum · alpha
concepts · notes · experiments · essays

© 2026 Jonatan Mata · alpha · v0.1.0

#openai

2 articles tagged #openai.

  • Prompt Caching

A technique that stores the internal computation of reused prompt prefixes across LLM calls, cutting cost by up to 90% and latency by up to 85% in applications with repetitive context.

evergreen · #prompt-caching #llm #cost-reduction #latency #anthropic #openai #optimization
  • Prompt Engineering

    The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.

growing · #prompt-engineering #llm #anthropic #openai #google #meta #best-practices #ai-tools
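The caching idea behind the first article can be sketched with a toy prefix cache. This is illustrative only: real providers (Anthropic, OpenAI) cache the model's internal computation server-side, and `encode_prefix` and `complete` below are hypothetical stand-ins, not a provider API.

```python
import hashlib

# Toy cache mapping a hash of the shared prompt prefix to its "encoding".
_prefix_cache: dict[str, list[int]] = {}

def encode_prefix(prefix: str) -> list[int]:
    """Stand-in for the expensive forward pass over the reused prefix."""
    return [ord(c) for c in prefix]

def complete(prefix: str, user_turn: str) -> tuple[bool, int]:
    """Return (cache_hit, units_of_work) for one simulated LLM call."""
    key = hashlib.sha256(prefix.encode()).hexdigest()
    hit = key in _prefix_cache
    if not hit:
        _prefix_cache[key] = encode_prefix(prefix)  # pay the prefix cost once
    # On a hit, only the new user turn needs to be processed.
    work = (0 if hit else len(prefix)) + len(user_turn)
    return hit, work

system = "You are a helpful assistant. " * 20  # long, reused context
first = complete(system, "Hello")              # miss: full prefix processed
second = complete(system, "Another question")  # hit: only the suffix
```

The second call skips the prefix cost entirely, which is where the large cost and latency savings on repetitive context come from.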