Reading time: 10 minutes | Skill level: Beginner | Version: v1.4.0 | Last updated: 2026-01-23
| Command | Provider | Description |
|---|---|---|
| ccd | Anthropic Direct | Best quality, production-ready |
| ccc | GitHub Copilot | Free with subscription |
| cco | Ollama Local | 100% private, offline |
| ccs | Status | Check all providers |
| Command | Model | Speed | Use Case |
|---|---|---|---|
| ccc-opus | Claude Opus 4.6 | Slow | Best quality, code review |
| ccc-sonnet | Claude Sonnet 4.6 | Medium | Balanced (default) |
| ccc-haiku | Claude Haiku 4.5 | Fast | Quick questions |
| ccc-gpt | GPT-4.1 | Fast | Alternative perspective (0x quota) |
Copilot (25+ models available):

```bash
COPILOT_MODEL=claude-opus-4-6 ccc
COPILOT_MODEL=claude-haiku-4.5 ccc
COPILOT_MODEL=gpt-4.1 ccc
COPILOT_MODEL=gemini-3-pro-preview ccc
```

Ollama (local models):

```bash
OLLAMA_MODEL=devstral-small-2 cco       # Best agentic (default)
OLLAMA_MODEL=ibm/granite4:small-h cco   # Long context, less VRAM
OLLAMA_MODEL=qwen3-coder:30b cco        # Highest accuracy
```

| Command | Output |
|---|---|
| claude-switch --help | Full usage guide |
| claude-switch --version | Version: v1.1.0 |
| ollama-check.sh --help | Diagnostic help |
| ollama-check.sh --version | Version: v1.1.0 |
| ollama-optimize.sh --help | Optimization help |
| ollama-optimize.sh --version | Version: v1.1.0 |
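The per-invocation COPILOT_MODEL override shown above can be wrapped in a small shell function. `ccm` below is a hypothetical convenience helper, not part of claude-switch; it assumes `ccc` resolves to a command or function in your shell:

```bash
# Hypothetical helper (not shipped with claude-switch): run ccc once with a given model.
ccm() {
  local model="$1"; shift           # first argument selects the Copilot model
  COPILOT_MODEL="$model" ccc "$@"   # remaining arguments pass through to ccc
}
```

Usage: `ccm gpt-4.1` behaves like `COPILOT_MODEL=gpt-4.1 ccc`.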
| Command | Purpose |
|---|---|
| ollama-check.sh | 11-point installation diagnostic |
| ollama-optimize.sh | Apply Apple Silicon optimizations |
| ccs | Check all provider status |
| Command | Description |
|---|---|
| cat ~/.claude/claude-switch.log | View all sessions |
| tail -20 ~/.claude/claude-switch.log | Recent sessions |
| grep "mode=copilot" ~/.claude/claude-switch.log | Filter by provider |
| grep "Session ended" ~/.claude/claude-switch.log | View durations |
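The grep filters above can be rolled into a single per-provider summary. The awk one-liner below is a sketch that assumes log lines contain a `mode=<provider>` field; the exact log format is not documented here, so adjust the pattern if yours differs:

```bash
# Count sessions per provider by extracting the word after "mode=" (log format assumed)
log=~/.claude/claude-switch.log
if [ -f "$log" ]; then
  awk -F'mode=' '/mode=/ { split($2, f, /[^a-z]/); n[f[1]]++ }
                 END { for (m in n) print m ": " n[m] }' "$log"
fi
```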
```bash
ccs

ccd
> Implement user authentication

ccc
> Refactor this function for readability

cco
> Analyze this proprietary algorithm

ccc-haiku
> What's the difference between map and flatMap?

ccc-opus
> Review this code for security vulnerabilities

ccc-gpt
> Generate test cases for this function
```

```bash
ollama-check.sh
```

Expected output:

```
═══════════════════════════════════════════════════════════
SUMMARY
═══════════════════════════════════════════════════════════
Ollama running:   ✅
API accessible:   ✅
32B model:        ✅
claude-switch:    ✅
```

```bash
ollama-optimize.sh
```

Result: 26-39 tok/s on M4 Pro 48GB with Qwen2.5-Coder-32B
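When the full 11-point diagnostic is overkill, you can probe Ollama's local HTTP API directly. This sketch assumes the default port 11434 and uses Ollama's standard `/api/tags` endpoint; the helper name is illustrative, not part of claude-switch:

```bash
# Hypothetical quick probe (not part of claude-switch): is the local Ollama API reachable?
ollama_api_up() {
  curl -sf http://localhost:11434/api/tags >/dev/null 2>&1
}

if ollama_api_up; then echo "API accessible: ✅"; else echo "API accessible: ❌"; fi
```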
```bash
# Morning: Explore (free)
ccc-haiku
> Help me understand this React project

# Afternoon: Implement (free)
ccc
> Add JWT authentication

# Before commit: Review (paid, best quality)
ccd
> Final security review
```

```bash
# Public code: Use Copilot
ccc
> Implement standard REST API

# Proprietary code: Use Ollama
cco
> Optimize this trade secret algorithm
```

```bash
# Quick iteration: Haiku
ccc-haiku
> 10 different approaches

# Refinement: Sonnet
ccc-sonnet
> Implement approach #3

# Polish: Opus
ccc-opus
> Final quality check
```

| Scenario | Recommended Command | Reasoning |
|---|---|---|
| Production deployment | ccd or ccc-opus | Best quality |
| PR review | ccd or ccc-opus | Critical decisions |
| Learning new framework | ccc or ccc-haiku | Fast iteration, free |
| Prototyping MVP | ccc | Cost-effective |
| Proprietary code | cco | 100% private |
| Offline work | cco | No internet required |
| Quick questions | ccc-haiku | Fastest responses |
| Comparing approaches | ccc then ccc-gpt | Different perspectives |
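The scenario table maps naturally onto a tiny dispatcher. `pick_cc` below is a hypothetical helper, not shipped with claude-switch, that prints the recommended command name for a scenario keyword (the keywords are illustrative):

```bash
# Hypothetical scenario dispatcher mirroring the table above (keywords are illustrative)
pick_cc() {
  case "$1" in
    production|pr-review) echo "ccd" ;;        # best quality, critical decisions
    proprietary|offline)  echo "cco" ;;        # private, no internet required
    quick)                echo "ccc-haiku" ;;  # fastest responses
    *)                    echo "ccc" ;;        # learning, prototyping, default
  esac
}
```

For example, `pick_cc proprietary` prints `cco`.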
```bash
ccc      # Starts new session
ccc -c   # Resume previous session
```

```bash
# Last 5 sessions
tail -5 ~/.claude/claude-switch.log

# Sessions with durations
grep "Session ended" ~/.claude/claude-switch.log

# Today's sessions
grep "$(date '+%Y-%m-%d')" ~/.claude/claude-switch.log
```

```bash
grep "mode=direct" ~/.claude/claude-switch.log    # Anthropic
grep "mode=copilot" ~/.claude/claude-switch.log   # Copilot
grep "mode=ollama" ~/.claude/claude-switch.log    # Ollama
```

Not supported within the same session. Exit and restart:
```bash
# Terminal 1: Start with Copilot
ccc
> Start implementing feature
> /exit

# Switch to Anthropic for critical review
ccd
> Review the code I just wrote
```

Alternative: Use multiple terminals in parallel:

```bash
# Terminal 1
ccc   # Development

# Terminal 2
cco   # Local testing

# Terminal 3
ccd   # Final review
```

Test the same prompt with different models:
```bash
echo "Write a binary search function" | ccc-haiku
echo "Write a binary search function" | ccc-sonnet
echo "Write a binary search function" | ccc-opus
echo "Write a binary search function" | ccc-gpt
```

Track Anthropic usage vs Copilot:

```bash
# Count Anthropic sessions
grep "mode=direct" ~/.claude/claude-switch.log | wc -l

# Count Copilot sessions (free)
grep "mode=copilot" ~/.claude/claude-switch.log | wc -l
```

Track Ollama performance:
```bash
# During session, in another terminal
watch -n 2 'ps aux | grep ollama | grep -v grep | awk "{printf \"RAM: %.2f GB\n\", \$6/1024/1024}"'
```

```bash
# Add to project-specific .envrc or Makefile
export COPILOT_MODEL="claude-opus-4-6"
alias review="ccc"   # Now uses Opus for this project
```

```bash
ccs   # Check status first
```

```bash
copilot-api start
```

```bash
ollama-check.sh   # Full diagnostic
brew services restart ollama
```

```bash
tail -1000 ~/.claude/claude-switch.log > ~/.claude/claude-switch.log.tmp
mv ~/.claude/claude-switch.log.tmp ~/.claude/claude-switch.log
```

```bash
claude-switch --help   # Full help
```

```bash
# Read documentation
cat examples/multi-provider/README.md
cat examples/multi-provider/QUICKSTART.md
cat examples/multi-provider/OPTIMISATION-M4-PRO.md
```

All commands tested and working on macOS (M4 Pro 48GB) ✅
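The tail/mv log truncation shown in Troubleshooting can be wrapped in a reusable function. `rotate_log` is a hypothetical sketch, not part of claude-switch; it keeps only the newest N lines of a file:

```bash
# Hypothetical helper: truncate a log to its newest N lines (default 1000),
# generalizing the tail/mv pair shown in Troubleshooting.
rotate_log() {
  local log="$1" keep="${2:-1000}"
  tail -n "$keep" "$log" > "$log.tmp" && mv "$log.tmp" "$log"
}
```

Usage: `rotate_log ~/.claude/claude-switch.log 500`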
- Quick Start Guide - Get started in 5 minutes
- Cheatsheet - One-page printable reference
- Decision Trees - Choose the right command
- Model Switching Guide - Dynamic model selection
- Best Practices - Strategic usage patterns
Back to: Documentation Index | Main README