1 article tagged #context-window.
The maximum number of tokens an LLM can process in a single request, prompt and generated output combined, which bounds how much information the model can consider at once when producing a response.
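A minimal sketch of what this bound implies in practice: input that exceeds the window must be truncated (commonly by dropping the oldest tokens first). Whitespace splitting stands in for a real tokenizer here, and `CONTEXT_WINDOW` is an arbitrary illustrative limit, not any model's actual size.

```python
# Hypothetical context-window enforcement; whitespace split is a
# stand-in for a real tokenizer, and the limit is illustrative.
CONTEXT_WINDOW = 8

def fit_to_window(prompt: str, limit: int = CONTEXT_WINDOW) -> list[str]:
    """Tokenize and keep only the most recent `limit` tokens."""
    tokens = prompt.split()
    return tokens[-limit:]  # oldest tokens are dropped first

tokens = fit_to_window("one two three four five six seven eight nine ten")
print(len(tokens))  # 8 — "one" and "two" fell outside the window
```

Real systems tokenize with the model's own vocabulary and often use smarter strategies than naive truncation (summarization, retrieval), but the hard limit itself works the same way.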