Domain 5 · Task Statement 5.5

Model Selection & Usage Optimisation

TL;DR

Choose the right Claude model for each task — Haiku for speed, Sonnet for daily work, Opus for deep analysis — understand rate limit implications, and leverage Adaptive Thinking for automatic reasoning calibration.

What You Need to Know

Claude isn't one model — it's a family of models with different strengths, speeds, and cost profiles. Choosing the right model for each task is a genuine skill that directly affects both the quality of your output and how much you can accomplish within your plan's rate limits. Using Opus for every task is like driving a lorry to the corner shop: it works, but you waste fuel and block traffic for no reason.

Haiku: the sprinter

Haiku is the lightest and fastest model, optimised for instant, simple tasks: summarisation, categorisation, quick lookups, straightforward extraction, and batch processing. It is the bicycle for a quick trip to the post box — fast, efficient, and uses almost no fuel.

Don't dismiss Haiku as a "dumb" model. Haiku 4.5 rivals the reasoning capability of previous-generation Sonnet 4.0. It isn't less intelligent in absolute terms — it's optimised for speed on tasks that don't require extended deliberation. For batch processing 50 files, sorting data into categories, or extracting structured information from documents, Haiku is often the optimal choice.

Sonnet: the daily driver

Sonnet is the versatile default for coding, writing, analysis, and multi-step workflows. It is the reliable car for your daily commute — handles most roads, balances speed with capability, and is the right choice for the vast majority of professional tasks.

If you are unsure which model to use, Sonnet is almost always the correct answer. It handles complex analysis, long-form writing, multi-document synthesis, and sophisticated reasoning well enough for most professional contexts. Save Opus for the tasks where Sonnet genuinely falls short.

Opus: the specialist

Opus is reserved for deep research, complex reasoning, detailed analysis, and tasks where accuracy matters more than speed. It is the heavy-duty vehicle you bring out for the big jobs — slower and consumes significantly more rate limit, but necessary when the task demands maximum reasoning depth.

Good Opus tasks:

  • Analysing a 40-page legal contract for liability clauses
  • Complex financial modelling with multiple interdependent variables
  • Research synthesis across a dozen contradictory sources
  • Tasks where nuance, edge cases, and subtle reasoning matter

Bad Opus tasks:

  • Summarising a meeting transcript (Sonnet handles this well)
  • Categorising 100 support tickets (Haiku is optimal)
  • Answering simple factual questions (any model works; use the lightest)

The 80/10/10 Rule

Most professionals find that Sonnet handles about 80% of their daily work. Haiku handles 10% (batch processing, simple extraction). Opus handles the remaining 10% (deep analysis, complex reasoning). If you are using Opus for more than 10-15% of your tasks, you are likely wasting rate limit.

Rate limit efficiency

Models consume your plan's rate limit at different rates. Haiku is the lightest, Sonnet is moderate, and Opus is the heaviest. A simple summary that takes minimal rate limit on Haiku can consume substantially more on Opus — not because the output is longer, but because the model's reasoning overhead is greater.

This matters practically: if you burn through your rate limit on Opus for tasks that Haiku could handle, you may not have enough capacity left when a genuinely complex task arrives that needs Opus. Think of your rate limit as a daily budget. Spending it wisely means using the lightest model that produces acceptable quality for each task.
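The budget framing can be made concrete with a few lines of Python. This is an illustration only: the relative weights below are hypothetical numbers invented for this sketch, not published rate-limit figures — they simply model the idea that heavier models drain a fixed daily budget faster.

```python
# Hypothetical relative cost per request, normalised to Haiku = 1.
# These weights are illustrative, NOT official rate-limit multipliers.
MODEL_WEIGHTS = {"haiku": 1, "sonnet": 4, "opus": 15}

def remaining_budget(budget: float, requests: list[tuple[str, int]]) -> float:
    """Subtract the weighted cost of (model, count) request pairs from a daily budget."""
    spent = sum(MODEL_WEIGHTS[model] * count for model, count in requests)
    return budget - spent

# Routing 100 simple summaries to Opus vs. Haiku, out of a 2000-unit budget:
all_opus = remaining_budget(2000, [("opus", 100)])    # 500 units left
all_haiku = remaining_budget(2000, [("haiku", 100)])  # 1900 units left
```

Under these assumed weights, sending the same 100 simple summaries to Opus leaves roughly a quarter of the capacity that routing them to Haiku would — which is exactly the headroom you want to preserve for the genuinely hard task that arrives later.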

Adaptive Thinking

Modern model versions (Sonnet and Opus 4.6) feature Adaptive Thinking, which automatically calibrates reasoning depth based on question complexity. A simple factual query triggers minimal reasoning overhead. A complex analytical question triggers deeper deliberation. The same model intelligently allocates resources based on what the task actually requires.

This means you generally don't need to micromanage the extended thinking toggle. With Adaptive Thinking enabled, the model automatically uses less reasoning for simple questions and more for complex ones. Manually disabling extended thinking can actually reduce quality on unexpectedly complex queries without meaningful efficiency gains.

Exam Trap: Model Versions Are Separate Training Runs

A new model version (e.g., Sonnet 4.6 replacing Sonnet 4.0) isn't a patch or minor update. Each version is a separate training run with potentially different strengths, weaknesses, and capabilities. A task that required Opus six months ago may run perfectly on the latest Sonnet — always re-evaluate your model assumptions when new versions are released.

Practical model selection framework

When deciding which model to use, ask three questions:

  1. Is the task simple and repetitive? → Haiku (batch processing, extraction, categorisation)
  2. Does the task require competent analysis or writing? → Sonnet (reports, emails, code, multi-step workflows)
  3. Does the task require deep reasoning, nuance, or handling of contradictions? → Opus (legal analysis, complex research, financial modelling)

If you are unsure, start with Sonnet. If the output feels shallow, try Opus. If the output is fine but you are processing many similar items, switch to Haiku to conserve rate limit.
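The three-question framework above can be sketched as a small decision function. This is a minimal illustration of the cascade, not an official API; the function name and boolean parameters are invented for this sketch, and Sonnet falls out as the default exactly as the text recommends:

```python
def choose_model(simple_repetitive: bool, needs_deep_reasoning: bool) -> str:
    """Encode the three-question model-selection framework as a decision cascade."""
    if simple_repetitive:
        return "haiku"   # batch processing, extraction, categorisation
    if needs_deep_reasoning:
        return "opus"    # legal analysis, complex research, financial modelling
    return "sonnet"      # the default: reports, emails, code, multi-step workflows

choose_model(simple_repetitive=True, needs_deep_reasoning=False)   # → "haiku"
choose_model(simple_repetitive=False, needs_deep_reasoning=True)   # → "opus"
choose_model(simple_repetitive=False, needs_deep_reasoning=False)  # → "sonnet"
```

Note the ordering: repetitive simplicity is checked first, so a task that is both simple and bulky never pays the Opus premium.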


Common Mistakes

Common Mistake

Defaulting to Opus for every task — including simple summaries, quick lookups, and straightforward categorisation — because it's the 'best' model.

Instead: Match the model to the task. A summary produced by Haiku is virtually identical to one produced by Opus, but costs a fraction of the rate limit. Reserve Opus for tasks that genuinely require deep reasoning.

Common Mistake

Assuming that a task which needed Opus last year still needs Opus today — never retesting with newer Sonnet or Haiku versions.

Instead: Each model version is a separate training run with different capabilities. Sonnet 4.6 is significantly more capable than Sonnet 4.0. Periodically retest your assumptions: a task that required Opus six months ago may now run on Sonnet — faster and at lower cost.

Common Mistake

Manually toggling extended thinking off for every simple question, not realising that Adaptive Thinking already calibrates reasoning depth automatically.

Instead: Leave Adaptive Thinking enabled. The model automatically uses less reasoning for simple queries and more for complex ones. Manual toggling adds friction without meaningful efficiency gains and can reduce quality on unexpectedly complex questions.

Simple factual lookup

Before

What is the capital of France?

After

Switch to Haiku. What is the capital of France?

Batch processing

Before

Summarise these 50 files using Opus for accuracy.

After

Use Haiku for this batch: summarise each of the 50 files in this folder into a 3-sentence overview. Save each summary as [original-filename]-summary.txt.
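For readers who would script a batch like this against the Anthropic API rather than the app, the per-file request might be assembled as below. This is a hedged sketch: the model identifier is a placeholder (check the current model list before running), and the folder layout and helper name are invented for illustration. Only the request-building step is shown; the actual `messages.create` call is left commented out because it needs an API key.

```python
from pathlib import Path

# Placeholder identifier; substitute the current Haiku model name.
MODEL = "claude-haiku-4-5"

def build_summary_request(path: Path, text: str) -> dict:
    """Build keyword arguments for one client.messages.create() call."""
    return {
        "model": MODEL,
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": f"Summarise the file '{path.name}' in 3 sentences:\n\n{text}",
        }],
    }

# Usage sketch (requires the `anthropic` package and an API key):
#   client = anthropic.Anthropic()
#   for path in Path("reports").glob("*.txt"):
#       req = build_summary_request(path, path.read_text())
#       reply = client.messages.create(**req)
#       path.with_name(path.stem + "-summary.txt").write_text(reply.content[0].text)
```

Separating request construction from the network call keeps the prompt template easy to test, and swapping the batch to a heavier model later is a one-line change to `MODEL`.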

Complex analysis

Before

Analyse this contract.

After

Switch to Opus. Analyse this 40-page services agreement. Identify all clauses that create liability for our company, flag any unusual termination conditions, and compare the indemnification terms against standard UK commercial practice.


Hands-On Activity


Compare Model Performance on the Same Task

15 min

Run the same document through different models to see the quality and speed trade-offs firsthand. Build an intuitive understanding of which model fits which task.

What you will learn

  • Compare output quality between Haiku and Opus on the same input
  • Observe the speed difference between models on the same task
  • Identify which model is optimal for summarisation vs. critical analysis
  • Develop an intuitive framework for model selection
  1. Find a complex document at least 5 pages long — a research paper, a business report, or a lengthy article. Run this prompt through Haiku: 'Summarise this document in 5 bullet points.'

    Why: Haiku is optimised for summarisation. This establishes a baseline for what the fastest, lightest model can produce on a straightforward extraction task.

    Expected: A clear, accurate 5-bullet summary produced almost instantly. The quality should be surprisingly good for the smallest model.

  2. Run the same document through Opus with this prompt: 'Critique the methodology of this document. Identify three weaknesses and suggest three improvements.'

    Why: This is a task that genuinely requires Opus — critical analysis, identification of subtle weaknesses, and creative suggestions that go beyond surface-level observations.

    Expected: A significantly deeper analysis with richer reasoning, specific critiques, and substantive improvement suggestions.

  3. Note the difference in response time and reasoning depth. Ask yourself: 'For my typical daily tasks, which model would I use most often?'

    Why: Most professionals find that Sonnet handles 80-90% of their daily work, with Haiku for batch processing and Opus reserved for the occasional deep analysis. This exercise calibrates your intuition.

    Expected: A clear understanding that model selection is about matching the tool to the task — not always choosing the most powerful option.


Practice Question


A developer needs to refactor 50 simple CSS files to use a new naming convention. The task is repetitive and straightforward with no ambiguity. Which model is the best choice?


Sources