Settings
Configure your Consensus preferences and API keys.
What is the context window?
Context is the information sent to AI models with each message. It includes system prompts, conversation history, and memory. Managing your context window helps optimize response quality and avoid truncation.
Context Composition
Each deliberation message includes these token categories. Monitoring usage helps you understand what consumes your context budget.
System Prompt
Method instructions, roles, and discussion setup
Memory
Relevant memories injected from your memory bank
Conversation
Prior turns and responses in the current deliberation
Response Reserve
Buffer reserved for the model to generate its reply
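The four categories above can be sketched as a simple budget calculation. This is a hypothetical illustration, not the app's actual accounting: the function name, category sizes, and 128k capacity are all assumptions chosen for the example.

```python
# Hypothetical sketch of how a deliberation message's context budget
# might be tallied across the four categories described above.
# All token counts are illustrative placeholders, not real measurements.

def context_usage(system_prompt: int, memory: int,
                  conversation: int, response_reserve: int,
                  capacity: int) -> float:
    """Return the fraction of the model's context window consumed."""
    used = system_prompt + memory + conversation + response_reserve
    return used / capacity

# Example: a 128k-token window with illustrative category sizes.
usage = context_usage(system_prompt=2_000, memory=1_500,
                      conversation=40_000, response_reserve=4_000,
                      capacity=128_000)
print(f"{usage:.1%}")  # prints "37.1%"
```

In this sketch, conversation history dominates the budget, which is why shortening the conversation is the first lever when usage climbs.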
Context Warnings
Enable or disable warnings when context usage approaches limits.
Warning Threshold
A yellow indicator appears when any model's context usage reaches this percentage of its capacity. Treat it as an early signal to shorten the conversation or trim injected memory. Must be set lower than the critical threshold (default 90%).
Critical Threshold
A red critical alert appears when any model reaches this percentage of its capacity. At this level, older messages may be summarized or truncated. Must be set higher than the warning threshold (default 70%).
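The two-threshold scheme above amounts to a small classification check. A minimal sketch, assuming the default 70% warning and 90% critical values; the function and zone names are illustrative, not the app's API:

```python
# Minimal sketch of the two-threshold check described above,
# using the default values (70% warning, 90% critical).

WARNING_THRESHOLD = 0.70   # yellow indicator
CRITICAL_THRESHOLD = 0.90  # red alert; must exceed the warning threshold

def context_zone(used_tokens: int, capacity: int) -> str:
    """Classify context usage into one of three zones."""
    pct = used_tokens / capacity
    if pct >= CRITICAL_THRESHOLD:
        return "critical"
    if pct >= WARNING_THRESHOLD:
        return "warning"
    return "ok"

print(context_zone(60_000, 128_000))   # "ok"       (about 47%)
print(context_zone(95_000, 128_000))   # "warning"  (about 74%)
print(context_zone(120_000, 128_000))  # "critical" (about 94%)
```

Because the critical check runs first, a usage level above both thresholds is reported as critical rather than warning.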
Threshold Preview
Visual overview of your configured context zones.
Reset Context Settings
Restore context thresholds to their default values (70% warning, 90% critical).
