How to Ask (or Force) Claude to Use a Skill

Get Claude Code to actually run the skill you installed — from polite nudges to hard enforcement via CLAUDE.md and hooks.

You spent an afternoon writing a good skill. You put it in .claude/commands/. You open a new Claude Code session, describe a task the skill is perfect for, and… Claude does the task by hand, ignoring the skill entirely. Sound familiar?

This tutorial walks through four levels of getting Claude to use a skill — from a gentle request to hard enforcement — and when each one is the right move.


Level 0 — Confirm the skill is actually loaded

Before blaming the model, make sure the skill is visible at all.

In a Claude Code session, type / and wait for the autocomplete. Every loaded skill shows up in the list. If yours is missing:

  • Check the file is in .claude/commands/ (project) or ~/.claude/commands/ (user-global)
  • Check the extension is .md, not .txt or .markdown
  • Restart the session — new skills are picked up at session start, not mid-conversation

A skill that does not appear in the autocomplete will never be invoked, no matter how politely you ask. Fix this first.
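If you would rather check from the shell than eyeball the autocomplete, a quick sketch (assumes the standard directory layout described above; adjust paths for your setup):

```shell
# Sanity check: list what Claude Code can load, and flag files that will be
# silently skipped because of a wrong extension.
for dir in .claude/commands "$HOME/.claude/commands"; do
  [ -d "$dir" ] || continue
  echo "== $dir =="
  ls "$dir"/*.md 2>/dev/null                      # loadable skill files
  find "$dir" -maxdepth 1 -type f ! -name '*.md' \
    -exec echo "skipped (not .md): {}" \;         # invisible to the loader
done
```

Anything printed as "skipped" is exactly the `.txt`/`.markdown` mistake from the checklist above.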


Test the skill in isolation first

Before you spend time getting Claude to auto-invoke a skill, confirm the skill itself produces the output you want when it runs. The cleanest way to do that on Claude.ai is the built-in test runner:

  1. Open Customize in the sidebar, then Skills.
  2. Click the skill you want to test.
  3. On the skill page, click the three-dots menu (top-right of the skill header).
  4. Choose Test skill.

A sandboxed session opens with only that skill loaded. You can hand it a sample input and see exactly how Claude interprets the instructions without the noise of your other skills, project context, or CLAUDE.md rules interfering.

Use it to answer:

  • Does the skill do what its description says it does?
  • Are the instructions unambiguous, or does Claude fill in gaps differently each run?
  • Does it produce the output shape you actually want?

If the skill misbehaves in the isolated test, no amount of invocation coaxing later will fix it; go back and tighten the skill file first. Most of the time, "Claude is ignoring my skill" really means the skill itself is underspecified, and the test runner is the fastest way to see that.


Level 1 — Invoke it explicitly

The simplest, most reliable form:

/review the changes in src/auth.ts

The leading /skillname is an explicit instruction. Claude will execute the skill's instructions directly. If the skill accepts arguments via $ARGUMENTS, anything after the command name is substituted in.
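For instance, a minimal (hypothetical) skill file that shows where $ARGUMENTS lands:

```markdown
---
name: review
description: Review code changes for correctness, style, and missing tests.
---

Review the following target for correctness, style, and missing tests,
and report findings as a prioritized list: $ARGUMENTS
```

With this file in place, "/review the changes in src/auth.ts" runs the body with "the changes in src/auth.ts" substituted for $ARGUMENTS.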

This is the right approach when:

  • You know exactly which skill you want
  • You want deterministic behavior
  • The skill is short-named and easy to remember

Downsides: you have to remember the skill exists. With fifteen skills installed, "what was that one called?" becomes a real friction point.


Level 2 — Describe the task in the skill's own language

Claude Code reads the skill's frontmatter (especially the description field) to decide whether to invoke it automatically. A well-written description ends with trigger phrases — words someone might actually say when they want the skill.

If your skill has this frontmatter:

---
name: security-review
description: Review the current branch for security issues (injection, auth bypasses, secret leakage). Use when the user asks for a security review, to audit changes, or to check for vulnerabilities.
---

Then any of these user messages will reliably trigger it:

  • "Do a security review of this branch"
  • "Audit the changes for vulnerabilities"
  • "Check for auth bypasses in what I just wrote"

But these might not:

  • "Look this over" (too vague)
  • "What do you think?" (doesn't mention security)
  • "Make sure it's safe" (could mean a lot of things)

The lesson: if Claude ignores a skill you think should have fired, look at your own message alongside the skill's description. The closer the vocabulary match, the more reliable the auto-invocation.

Tip: add 3–5 concrete trigger phrases at the end of every skill's description. It feels redundant when you write it. It pays off every day after.


Level 3 — Force the skill via CLAUDE.md

Some skills should run unconditionally for certain tasks. Put that rule in your CLAUDE.md so Claude sees it at the start of every session:

## Required skills

- Before committing, always run `/security-review` on the changes.
- When adding a new API endpoint, run `/api-contract-check` first.
- For SQL migrations, run `/migration-safety` before writing the migration.

Claude Code loads CLAUDE.md into context at the start of every session, so the rule sits in front of the model from the first turn. A clear "always" or "before X, run Y" instruction in CLAUDE.md turns a suggestion into a rule Claude follows without being reminded.

This is stronger than describing the task well, because it does not depend on how you phrased the request — only on what kind of task it is.

Caveats:

  • Do not overdo it. If CLAUDE.md lists twelve required skills, Claude will miss some. Three to five is the sweet spot.
  • Make the trigger condition specific. "Always run /review" is too broad and will fire on every trivial edit. "Before making a commit, run /review on the staged changes" is actionable.

Level 4 — Hard enforcement with hooks

When "please run the skill" is not enough and you need "the skill will run, no exceptions" — that is what hooks are for. A hook is a shell command the Claude Code harness runs on a lifecycle event. Unlike instructions in CLAUDE.md, hooks run whether the model cooperates or not.

Example — fire a security review before every commit Claude makes:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/pre-commit-review.sh"
          }
        ]
      }
    ]
  }
}

Put this in .claude/settings.json and point the command at a script you provide (the path above is just an example). Claude Code pipes the pending tool call to the hook as JSON on stdin; if the script sees a git commit command and exits with code 2, the tool call is blocked and the commit never happens.
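For a sense of what such a hook script might contain, here is a sketch (hypothetical names throughout; assumes jq is installed and the stdin-JSON hook input format; written as a function so it can be exercised inline):

```shell
# Hypothetical body of a PreToolUse hook script. Claude Code pipes the
# pending tool call to the hook as JSON on stdin; requires jq.
check_commit() {
  cmd=$(jq -r '.tool_input.command // empty')
  case "$cmd" in
    *"git commit"*)
      # Put the real review here (a scanner, a linter, or a headless
      # Claude run). Exit code 2 tells Claude Code to block the call.
      echo "blocked: run /security-review before committing" >&2
      return 2
      ;;
  esac
  return 0
}

# A commit attempt is blocked (status 2); anything else passes through.
echo '{"tool_input":{"command":"git commit -m wip"}}' | check_commit
echo "status: $?"
```

The key design point is the exit code: 2 is the blocking signal, while 0 lets the tool call proceed untouched.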

This is heavy machinery and it is usually overkill. Reach for hooks when:

  • The skill is load-bearing (security, audit, compliance)
  • You have been burned by forgetting to run it
  • You are configuring a shared repo and want the guarantee to apply to everyone on the team

For anything else, CLAUDE.md is enough.


Troubleshooting: Claude still ignores the skill

A few things to check before escalating to the next level:

The description is vague. "Helps with code review." Helps how? When? Rewrite it with concrete trigger phrases. Claude picks up specific vocabulary, not vibes.

The skill and the user message use different words. Your skill says "audit for vulnerabilities" and you asked to "check for bugs." Those are different domains to the model. Either broaden the description or match the vocabulary.

The skill name conflicts with a built-in. If you name a skill review, it may compete with a built-in slash command of the same name. Pick something distinctive. security-review, not review.

A user-level skill is shadowing a project-level one. When both exist with the same filename, the project-level one wins. If you edited the project copy and are still seeing old behavior, check ~/.claude/commands/ for a stale duplicate.
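One quick way to spot such duplicates (a sketch; assumes bash for process substitution and the directory layout used throughout this tutorial):

```shell
# Filenames present in BOTH the project and user skill directories --
# these are the ones where shadowing can bite.
comm -12 \
  <(ls .claude/commands 2>/dev/null | sort) \
  <(ls "$HOME/.claude/commands" 2>/dev/null | sort)
```

Any filename it prints exists in both places; delete or rename the stale copy.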

Claude is in the middle of a multi-step task. Once Claude commits to an approach, it often carries it through rather than pivoting to a skill mid-flight. Ask the skill explicitly at the start of the task, not three turns in.


The progression that works in practice

Most teams land at some mix of the four levels:

  1. Well-described skills so auto-invocation works when you describe a task naturally.
  2. A short ## Required skills block in CLAUDE.md for the three or four skills that should never be skipped.
  3. One or two hooks for the highest-stakes skills (security review before commit, dependency scan before push).
  4. Manual /skillname invocation for the long tail of skills used occasionally.

Layer them in that order. You get reliable default behavior from Level 2, safety rails from Level 3, hard guarantees from Level 4 where you really need them, and explicit control from Level 1 whenever you want it. That covers every situation without turning every session into a checklist.