Jonatan Mata · jonmatum.com
© 2026 Jonatan Mata. All rights reserved. v2.1.1

#prompt-caching

1 article tagged #prompt-caching.

  • Prompt Caching

    A technique that stores the model's internal computation for reused prompt prefixes across LLM calls, cutting cost by up to 90% and latency by up to 85% in applications with repetitive context.

    evergreen · #prompt-caching · #llm · #cost-reduction · #latency · #anthropic · #openai · #optimization
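
    The teaser above can be made concrete with a minimal sketch of how a cacheable prefix is typically marked. This assumes an Anthropic-style Messages API payload (the `cache_control` field and model name follow their public docs, but treat the exact shapes as illustrative, not a verified integration); the same idea applies to any provider that hashes a byte-identical prefix across calls.

    ```python
    # Sketch: flagging a large, static system prefix as cacheable so the
    # provider can reuse its computed attention state on later calls.
    LONG_CONTEXT = "reference docs / tool definitions / few-shot examples " * 200

    def build_request(user_question: str) -> dict:
        """Build a chat request whose static prefix is marked cacheable.

        The provider caches computation up to the cache_control marker;
        a later call with a byte-identical prefix reuses that work and
        only recomputes the varying user turn.
        """
        return {
            "model": "claude-sonnet-4-5",      # illustrative model id
            "max_tokens": 1024,
            "system": [
                {
                    "type": "text",
                    "text": LONG_CONTEXT,
                    # Marks everything up to this block for caching.
                    "cache_control": {"type": "ephemeral"},
                }
            ],
            # Only this part changes between calls.
            "messages": [{"role": "user", "content": user_question}],
        }

    req_a = build_request("Summarize the context.")
    req_b = build_request("List the key terms.")
    # The cacheable prefix is identical across calls -- the requirement
    # for a cache hit -- while the user turn differs.
    assert req_a["system"] == req_b["system"]
    assert req_a["messages"] != req_b["messages"]
    ```

    The cost and latency wins come from keeping that prefix stable: any edit inside the cached span, even one character, changes the hash and forces a full recomputation.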