Domain 5 (15%)

Glossary: Projects, Context & Advanced Configuration

Quick-lookup definitions for the 15% exam domain. Each entry includes a concise definition and exam context. Follow the lesson links to dive deeper.

Terms in this domain

Project

A self-contained workspace in Claude with its own chat history, knowledge base, and instructions. Projects act like dedicated office drawers — Marketing in one, Legal in another — ensuring context from one workstream does not leak into another. Available on Pro, Team, and Enterprise plans at no per-project cost.

Exam context: Know that chats within a shared project are private by default (not visible to all members), that projects support two permission levels (Can View, Can Edit), and that project instructions do not count against your message quota.

See also: 5.1 Projects in Cowork


Project Instructions

Persistent background instructions that shape Claude's behaviour for every chat within a specific project. They function as a system prompt — write them once and they apply automatically to every new conversation. This eliminates repeating your role, preferences, and formatting rules at the start of each session.

Exam context: Know that Project Instructions consume tokens from the context window (the same window your conversation needs) but do not count against your message quota. Overly long instructions crowd out space for actual work.
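The trade-off above can be sketched in a few lines. This is a hypothetical illustration only: the 200K window figure appears elsewhere in this glossary, but the function name and token counts are invented for the example.

```python
# Hypothetical sketch: persistent context and the conversation share
# one window, so long instructions shrink the space left for work.
CONTEXT_WINDOW_TOKENS = 200_000  # assumed shared window size

def remaining_budget(instruction_tokens: int, knowledge_tokens: int = 0) -> int:
    """Tokens left for the actual conversation after persistent context loads."""
    used = instruction_tokens + knowledge_tokens
    if used >= CONTEXT_WINDOW_TOKENS:
        raise ValueError("Persistent context alone fills the window")
    return CONTEXT_WINDOW_TOKENS - used

# Lean instructions leave nearly the whole window free:
print(remaining_budget(500))             # 199500
# Bloated instructions plus a large knowledge base crowd out real work:
print(remaining_budget(5_000, 150_000))  # 45000
```

The numbers make the exam point concrete: the instructions themselves are "free" in quota terms, but they are never free in window terms.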

See also: 5.1 Projects in Cowork


Project Knowledge Base

A curated set of documents (up to 200K tokens) uploaded to a project that ground Claude's outputs in your specific internal data. Upload PDFs, text files, or other documents that Claude references when answering questions. This gives the AI access to your company's actual data rather than relying on general training knowledge.

Exam context: Know the 200K token limit, that knowledge base documents should be updated as business information evolves (not left static), and that uploading all files to a knowledge base is less efficient than Cowork's direct file access for one-time analysis tasks.

See also: 5.1 Projects in Cowork


Context Isolation

The architectural guarantee that information in one project does not leak into another. Without isolation, data from HR discussions could surface during marketing work, or legal terminology could creep into sales communications. Each project is its own self-contained environment with separate memory, instructions, and knowledge.

Exam context: Questions test whether you understand why creating separate projects is better than cramming everything into one "omni-project." The answer always involves preventing context bleed between unrelated workstreams.

See also: 5.1 Projects in Cowork


Writing Sample Calibration

Including 200 to 400 words of your own writing in Project Instructions for Claude to mirror your vocabulary, sentence structure, and rhythm. A concrete writing sample teaches the AI more about your voice than paragraphs of abstract description like "write in a direct, no-nonsense way."

Exam context: Know that writing samples are critical for professional consistency (not just creative projects), and that a 300-word sample plus explicit rules produces the highest voice fidelity.

See also: 5.2 Context Files & Persistent Knowledge


Tone Contrasts

Defining your desired voice by specifying what it is not. "Knowledgeable friend explaining over coffee, not a corporate press release. Direct, not evasive. Confident, not arrogant." Contrasts give Claude clear boundaries that prevent drift into generic AI patterns, and are more effective than positive-only descriptions.

Exam context: Questions may present two approaches to defining tone — one using abstract adjectives, one using contrasts — and ask which produces more consistent output.

See also: 5.2 Context Files & Persistent Knowledge


Memory Synthesis

An automated summary of key insights from your chat history, updated every 24 hours. It captures preferences, facts about you, and recurring patterns. The synthesis refreshes on a fixed cycle, not in real time — a critical distinction for exam questions about when new preferences take effect.
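The fixed cycle can be sketched with simple date arithmetic. The timestamps below are invented; only the 24-hour interval comes from the glossary.

```python
# Sketch of the fixed refresh cycle: a preference stated in chat today
# is not reflected in the synthesis until the next scheduled run.
from datetime import datetime, timedelta

SYNTHESIS_INTERVAL = timedelta(hours=24)  # fixed cycle, not real time

def next_refresh(last_run: datetime) -> datetime:
    """When the synthesis will next pick up new conversation content."""
    return last_run + SYNTHESIS_INTERVAL

last = datetime(2025, 3, 1, 9, 0)
print(next_refresh(last))  # 2025-03-02 09:00:00, a full day later
```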

Exam context: Know that memory synthesis runs every 24 hours (not instantly), that manually editing memory entries takes effect immediately, and that "Reset Memory" permanently deletes all synthesis data with no recovery option.

See also: 5.3 Memory in Cowork


Chat Search

A retrieval-augmented generation (RAG) feature that lets you search across previous conversation sessions using natural language. Ask "What did we discuss about the logo redesign three weeks ago?" and Claude searches your chat history for relevant passages. Chat Search only searches conversation history — never files on your computer.
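As a toy illustration of the retrieval idea (not Cowork's actual implementation), a retriever scores past chat snippets against the query and returns only relevant ones for the model to read. All titles and text below are invented.

```python
# Toy keyword-overlap retriever: rank stored chats by how many query
# terms they share, drop chats with no overlap at all.
def search_chats(query: str, history: dict[str, str]) -> list[str]:
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), title)
        for title, text in history.items()
    ]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

history = {
    "Logo redesign": "we compared three logo colour options",
    "Q3 budget": "finalised the q3 budget spreadsheet",
}
print(search_chats("logo colour options", history))  # ['Logo redesign']
```

Note what the sketch does not touch: files on disk. That boundary is the exam trap described below.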

Exam context: A trap answer confuses Chat Search with file search. Chat Search retrieves past conversations only. For file-level search, you need Cowork's direct file access within a scoped folder.

See also: 5.3 Memory in Cowork


Incognito Chats

Temporary sessions (marked by a ghost icon) that are not saved to history and not used for memory synthesis. Use them for sensitive discussions you do not want Claude to remember long-term — salary negotiations, confidential client details, or exploratory conversations. Standard chats feed into memory synthesis by default.

Exam context: Know that starting an Incognito chat prevents Claude from accessing conversation history via Chat Search. If you need to search past conversations, do not use Incognito mode.

See also: 5.3 Memory in Cowork


Profile Preferences

Universal, account-wide settings (found under Settings > Profile) that apply to every conversation across your entire account. This is the place for rules that should never change regardless of context: your name, language preference, universal formatting rules, and behavioural guardrails like "no preambles or flattery."

Exam context: Know the distinction between Profile Preferences (global, applies everywhere) and Project Instructions (scoped, applies only within that project). Putting project-specific rules in global settings is a common mistake.

See also: 5.4 Global Instructions & Behavioural Rules


Custom Styles

Formatting presets — "Concise," "Explanatory," "Formal" — that control the look and feel of Claude's responses: sentence length, use of headers, level of detail. Styles are separate from Instructions. Instructions define what the AI knows and how it behaves; Styles control how the final output is formatted and presented.

Exam context: A trap answer claims Custom Styles and Project Instructions are the same thing. They are not. Styles control formatting; Instructions control behaviour and content.

See also: 5.4 Global Instructions & Behavioural Rules


Layered Context Stack

Claude processes context in a hierarchy: Global Preferences load first as the foundation, Project Instructions add task-specific context on top, and Styles adjust the final delivery format. Understanding this hierarchy is essential for troubleshooting unexpected behaviour — check which layer contains (or is missing) the relevant rule.
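The load order can be sketched as a simple assembly function. The function name and example strings are hypothetical; only the ordering (global first, project second, style last) comes from the glossary.

```python
# Sketch of the layered stack: global preferences form the foundation,
# project instructions add scoped context, styles shape delivery last.
def build_context(global_prefs: str, project_instructions: str = "", style: str = "") -> list[str]:
    layers = [global_prefs]                  # foundation: applies everywhere
    if project_instructions:
        layers.append(project_instructions)  # scoped: this project only
    if style:
        layers.append(style)                 # last: output format
    return layers

stack = build_context(
    global_prefs="No preambles or flattery.",
    project_instructions="You are a legal-review assistant.",
    style="Concise",
)
print(stack[0])  # the global layer always loads first
```

Troubleshooting follows directly from the order: a rule that seems ignored is usually sitting in the wrong layer for its scope.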

Exam context: Questions may present a scenario where a rule is being ignored and ask where it should be configured. Match the rule's scope to the correct layer.

See also: 5.4 Global Instructions & Behavioural Rules


Haiku

The lightest and fastest model in the Claude family, optimised for instant, simple tasks like summarisation, categorisation, and straightforward extraction. Haiku 4.5 rivals the reasoning capability of previous-generation Sonnet 4.0. It consumes the least rate limit — ideal for batch processing and mechanical tasks.

Exam context: A trap claims Haiku is "too limited for professional use." It is not — it excels at speed-sensitive batch tasks and simple extraction at a fraction of the rate limit cost.

See also: 5.5 Model Selection & Usage Optimisation


Sonnet

The versatile default model for coding, writing, and multi-step workflows. It handles the vast majority of professional tasks and is the correct choice when you are unsure which model to use. Sonnet balances speed with capability — the reliable daily driver.

Exam context: Sonnet is the default correct answer for "which model should I use?" unless the task specifically requires extreme speed (Haiku) or deep reasoning (Opus).
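That decision rule can be written down as a small helper. The task labels are invented for illustration; the heuristic itself (Haiku for speed-sensitive simple tasks, Opus for genuine deep reasoning, Sonnet otherwise) is the one this domain tests.

```python
# Hypothetical model picker mirroring the exam heuristic: match the
# model tier to task complexity, defaulting to the versatile middle.
def pick_model(task: str) -> str:
    simple = {"summarise", "categorise", "extract"}
    deep = {"deep-research", "long-document-analysis", "complex-reasoning"}
    if task in simple:
        return "Haiku"   # fastest, cheapest on rate limits
    if task in deep:
        return "Opus"    # reserved for genuine deep reasoning
    return "Sonnet"      # the default daily driver

print(pick_model("categorise"))    # Haiku
print(pick_model("draft-report"))  # Sonnet
```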

See also: 5.5 Model Selection & Usage Optimisation


Opus

The most powerful model, reserved for deep research, complex reasoning, and long-document analysis where accuracy and nuance are critical. Opus consumes significantly more rate limit than Sonnet or Haiku. Using it for simple tasks wastes your allocation for no quality gain.

Exam context: Know that Opus is correct only for tasks requiring genuine deep reasoning. "Always use Opus for best quality" is a trap — it wastes rate limits on tasks that Sonnet handles equally well.

See also: 5.5 Model Selection & Usage Optimisation


Adaptive Thinking

A feature in modern model versions (Sonnet and Opus 4.6) that automatically calibrates reasoning depth based on question complexity. Simple queries do not trigger deep reasoning chains, making the model more token-efficient. You do not need to manually toggle extended thinking on and off.

Exam context: Know that Adaptive Thinking makes manual thinking-mode management unnecessary. The model intelligently allocates resources based on task complexity.

See also: 5.5 Model Selection & Usage Optimisation