# Prompt Configuration

The secret weapon for seamless tool adoption by AI agents.
## Why Prompts Matter

The prompt you configure in your AI agent (via `agents.md` or `claude.md`) plays a critical role in the ecosystem. A well-crafted prompt is the difference between:

❌ **Without a proper prompt:**

- "Use semantic search to find this"
- "Search with parallel semantic search for that"
- Constantly reminding the agent to use the tools
- Friction in every interaction

✅ **With a proper prompt:**

- The agent naturally uses semantic search when appropriate
- It understands when to use parallel vs single queries
- It knows when to use refined answers vs raw results
- Seamless integration that feels native

*Agent naturally planning to use semantic search as part of its workflow - guided by the prompt, not by explicit user instruction*

In the example above, you can see the agent:

- Thinking proactively about its semantic search access
- Updating task status before performing the search
- Planning the workflow with todos that include semantic search
- Needing no manual reminders - the prompt taught it to use the tool
## The Problem: Tool Adoption

AI agents have access to many tools, but they don't automatically know when to use them. Without proper guidance in the prompt:

- Agents default to basic file reading instead of semantic search
- They overlook commit history search
- They won't leverage parallel queries for complex questions
- Tools remain underutilized despite being available
The prompt is what teaches the agent to adopt the tools as part of its natural workflow.
## What a Good Prompt Does
A well-designed prompt for this ecosystem should:
### 1. Define When to Use Semantic Search

The agent needs to understand that semantic search is not just "another search" - it's the primary way to understand your codebase:

- Use semantic search for "how does X work?" questions
- Use it to find implementation patterns
- Use it to understand architecture
- Use it before reading files manually
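
A prompt rule enforcing this preference might read as follows (wording is illustrative, not a required phrasing):

```
Code discovery rule: For any "how does X work?" question, run semantic_search
first and only open files that the results point to. Fall back to grep solely
for exact string lookups (error messages, config keys).
```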
### 2. Guide Parallel vs Single Search Strategy

The agent should know:

- Single query for specific, targeted searches
- Parallel queries for complex, multi-faceted questions
- When to use `refined_answer=True` for analysis
- When raw results are sufficient
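
The contrast, sketched in the same call style used elsewhere on this page (signatures and parameter placement are illustrative; check the MCP tool definitions for the real names):

```python
# Targeted question - one query is enough, raw results suffice:
semantic_search(query="where are JWT tokens validated?")

# Multi-faceted question - fan out related queries, then refine:
semantic_parallel_search(
    queries=[
        "error handling in API routes",
        "retry logic for external calls",
        "input validation patterns",
    ],
    refined_answer=True,  # ask for LLM analysis of the combined results
)
```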
### 3. Teach Commit History Search

Commit history is not just `git log` - it's a semantic understanding of how your codebase evolved:

- Search commits to understand why decisions were made
- Find similar past implementations
- Learn from previous refactorings
- Understand the evolution of features
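
For example, instead of scanning `git log` by hand, the agent can ask a "why" question directly (the query and call shape are illustrative):

```python
search_commit_history(query="why was the session store moved out of memory?")
```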
### 4. Integrate with the Workflow

The prompt should make tools feel like native capabilities, not external add-ons:

- Check `.codebase/state.json` to know indexing status (see the sketch below)
- Use semantic search before grep/file reading
- Leverage indexed commits for historical context
- Understand the relationship between indexer, search, and workflow
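
A minimal sketch of that status check, assuming `state.json` exposes a `status` field (the real schema may differ; inspect your own file):

```python
import json
from pathlib import Path

state_file = Path(".codebase/state.json")
if state_file.exists():
    state = json.loads(state_file.read_text())
    # "status" is an assumed field name - adapt to the actual schema.
    print(f"Index status: {state.get('status', 'unknown')}")
else:
    print("No index found - run the Codebase Index CLI first")
```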
## Prompt Structure

A good prompt for this ecosystem should include:

### Project Context Section

- What the project is about
- Tech stack and conventions
- Current goals/tasks

### Tool Usage Guidelines

- When to use semantic search (always prefer it)
- How to use parallel queries (complex questions)
- Why commit history matters (learning from the past)

### Workflow Integration

- Check indexing status first
- Use semantic search as the primary discovery method
- Read files only after semantic search narrows the scope
- Reference commits for historical context
## Where to Configure

### Claude Code

Create or edit `~/.claude/claude.md`:
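
A minimal starting point might look like this (contents are illustrative; the placeholder prompt below is a fuller template):

```markdown
# Coding Assistant Instructions

This machine uses the Codebase Index CLI + Semantic Search MCP.
- Prefer semantic_search over manual file reading
- Check .codebase/state.json before searching
```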
This file is automatically loaded by Claude Code on every session.
### Codex / Other Agents

Create or edit `agents.md` in your project root or global config:
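
The content can mirror your `claude.md`; only the filename changes (illustrative):

```markdown
# Agent Instructions

Semantic search tools are available via MCP.
Use semantic_search first, read files second.
```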
Different agents may use different filenames - check your agent's documentation.
## Placeholder Prompt

> **Placeholder:** The following is a placeholder prompt. A complete, production-ready prompt will be added here soon.

```markdown
# System Instructions
You are an expert AI coding assistant with access to powerful semantic search tools.
## Available Tools
- `semantic_search`: Search code by meaning, not keywords
- `semantic_parallel_search`: Run multiple semantic queries in parallel
- `search_commit_history`: Search git history with LLM analysis
- `visit_other_project`: Search other codebases
## Tool Usage Guidelines
### Always Prefer Semantic Search
Before reading files manually:
1. Check `.codebase/state.json` for indexing status
2. Use `semantic_search` to understand the codebase
3. Use parallel queries for complex, multi-faceted questions
### Commit History is Knowledge
Use `search_commit_history` to:
- Understand why decisions were made
- Find similar past implementations
- Learn from previous work
### Strategy
- Single query: Specific, targeted searches
- Parallel queries: Complex questions requiring multiple angles
- Refined answer: When you need LLM analysis of results
- Raw results: When you just need code references
## Workflow
1. Check indexing status
2. Semantic search to understand
3. Read specific files for details
4. Reference commits for context
---
**[Your project-specific instructions go here]**
```
## Best Practices

### 1. Be Specific About Tool Usage

Don't just say "use semantic search" - explain when and why:

❌ **Bad:** "Use semantic search when needed"

✅ **Good:** "Use `semantic_search` as the first step to understand any code question. It's faster and more accurate than reading files manually."
### 2. Provide Examples

Show the agent concrete examples of good tool usage:

```
Example: User asks "How does authentication work?"

Good approach:
1. semantic_search(query="authentication flow implementation", ...)
2. Review results to understand structure
3. Read specific files for details

Bad approach:
1. grep for "auth"
2. Read random files hoping to find it
```
### 3. Explain the Ecosystem

Help the agent understand how the tools relate:

```
The ecosystem:
- Codebase Index CLI: Indexes your code
- Semantic Search MCP: Provides search tools
- You: Use the tools naturally in your workflow

Check .codebase/state.json to know if the codebase is indexed.
```
### 4. Set Expectations

Be clear about what the agent should achieve:

```
Goals:
- Understand the codebase through semantic search
- Never read files blindly
- Use commit history to learn from the past
- Leverage parallel queries for complex questions
```
## Testing Your Prompt

After configuring your prompt, test it by asking questions that require tool usage.

Good test questions:

- "How does the authentication system work?"
  - Should trigger semantic search
  - Should search commit history for context
- "Find all error handling patterns"
  - Should use parallel queries
  - Should check multiple aspects (logging, try-catch, validation)
- "Why did we implement X this way?"
  - Should search commit history
  - Should look for related discussions in commits
What to observe:
- Does the agent use tools without being reminded?
- Does it choose the right tool for the job?
- Does it check indexing status first?
- Does it use parallel queries for complex questions?
## Common Mistakes

### 1. Too Vague
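
An example of a prompt that is too vague (illustrative):

```
You have semantic search tools. Use them when it makes sense.
```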
This doesn't teach the agent anything about when or why to use tools.
### 2. Too Prescriptive
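
An example of a prompt that is too prescriptive (illustrative):

```
ALWAYS call semantic_parallel_search with refined_answer=True before
answering ANY question, even trivial ones.
```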
This removes agent flexibility and may lead to slow, unnecessary LLM calls.
### 3. Missing Context
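
An example of a prompt missing the surrounding context (illustrative):

```
You have these tools: semantic_search, search_commit_history. Use them.
```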
Without explaining the indexer and `.codebase/state.json`, the agent won't understand the system.
### 4. No Examples
Agents learn best from concrete examples. Don't just explain - show.
## Iterating Your Prompt

Your prompt is not set in stone. Iterate based on usage:

1. **Observe** - How does the agent use (or not use) tools?
2. **Identify** - What patterns are missing or wrong?
3. **Adjust** - Update the prompt to guide better behavior
4. **Test** - Verify improvements with test questions
Common iterations:
- Agent reads files too early → Add "semantic search first" rule
- Agent doesn't use parallel queries → Add examples of complex questions
- Agent ignores commit history → Emphasize historical context value
- Agent forgets to check indexing status → Add explicit workflow step
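
For the first case, the added rule might read (wording illustrative):

```
Rule: Before opening any file, run at least one semantic_search and use its
results to decide which files are worth reading.
```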
## Ecosystem Integration

Your prompt should acknowledge the complete ecosystem:

### For Claude Code Users

```
You have access to:
- Codebase Index CLI (background indexing)
- Semantic Search MCP (search tools)
- Claude Code Chat UI (status monitoring)
- Claude Hooks (context injection)

Always check `.codebase/state.json` for indexing status.
```
### For Other Agent Users

```
You have access to:
- Codebase Index CLI (run `codebase -start .` to index)
- Semantic Search MCP (provides search tools via MCP)

Check if `.codebase/state.json` exists before using search tools.
```
## Further Reading

- **Codebase Index CLI** - How the indexer works
- **Semantic Search MCP** - Available tools and usage
- **Claude Hooks** - Enhance context with hooks
## Coming Soon
A complete, battle-tested prompt configuration will be added here based on real-world usage and feedback from the community.
What will be included:
- Complete prompt template
- Project-specific customization guide
- Examples for different agent types
- Advanced strategies for complex codebases
Stay tuned!