A code-reasoning MCP server, a fork of sequential-thinking.
A Model Context Protocol (MCP) server that enhances Claude's ability to solve complex programming tasks through structured, step-by-step thinking.
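Under the hood, Claude advances its reasoning one MCP tool call at a time. The sketch below shows roughly what a single step might carry, assuming the fork keeps the upstream sequential-thinking parameter names (`thought`, `thoughtNumber`, `totalThoughts`, `nextThoughtNeeded`); the exact tool name and schema in this fork may differ:

```json
{
  "thought": "Step 1: reproduce the bug and narrow it to the cache-eviction path before proposing a fix.",
  "thoughtNumber": 1,
  "totalThoughts": 5,
  "nextThoughtNeeded": true
}
```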
Configure Claude Desktop by editing the config file for your platform:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "code-reasoning": {
      "command": "npx",
      "args": ["-y", "@mettamatt/code-reasoning"]
    }
  }
}
```
Configure VS Code:

```json
{
  "mcp": {
    "servers": {
      "code-reasoning": {
        "command": "npx",
        "args": ["-y", "@mettamatt/code-reasoning"]
      }
    }
  }
}
```
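If you prefer a workspace-scoped setup, newer VS Code releases can also read MCP servers from a `.vscode/mcp.json` file, where the `servers` map sits at the top level without the `mcp` wrapper. This is a sketch based on that assumption; check your VS Code version's MCP documentation if the server does not appear:

```json
{
  "servers": {
    "code-reasoning": {
      "command": "npx",
      "args": ["-y", "@mettamatt/code-reasoning"]
    }
  }
}
```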
To trigger this MCP, append this to your chat messages:
Use sequential thinking to reason about this.
Ready-to-go prompts that trigger Code-Reasoning are also available. Use `/help` to see the specific commands, and see the Prompts Guide for details on using the prompt templates.
Command-line options:

- `--debug`: Enable detailed logging (see the sketch below)
- `--help` or `-h`: Show help information
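For example, to capture detailed logs from a Claude Desktop installation, the `--debug` flag can presumably be appended to the server's `args` in the config shown earlier (a sketch, not a verified configuration):

```json
{
  "mcpServers": {
    "code-reasoning": {
      "command": "npx",
      "args": ["-y", "@mettamatt/code-reasoning", "--debug"]
    }
  }
}
```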
Detailed documentation is available in the docs directory.

Project structure:

```
├── index.ts   # Entry point
├── src/       # Implementation source files
└── test/      # Testing framework
```
The Code Reasoning MCP Server includes a prompt evaluation system that assesses Claude's ability to follow the code reasoning prompts.
To use the prompt evaluation system, run:
```bash
npm run eval
```
Significant effort went into developing the optimal prompt for the Code Reasoning server. The current implementation uses the HYBRID_DESIGN prompt, which emerged as the winner from our evaluation process.
We compared four different prompt designs:
| Prompt Design | Description |
|---|---|
| SEQUENTIAL | The original sequential thinking prompt design |
| DEFAULT | The baseline prompt previously used in the server |
| CODE_REASONING_0_30 | An experimental variant focusing on code-specific reasoning |
| HYBRID_DESIGN | A refined design incorporating the best elements of other approaches |
Our evaluation across seven diverse programming scenarios showed that HYBRID_DESIGN outperformed other prompts:
| Scenario | HYBRID_DESIGN | CODE_REASONING_0_30 | DEFAULT | SEQUENTIAL |
|---|---|---|---|---|
| Algorithm Selection | 89% | 82% | 92% | 88% |
| Bug Identification | 92% | 91% | 88% | 94% |
| Multi-Stage Implementation | 87% | 67% | 82% | 87% |
| System Design Analysis | 87% | 87% | 83% | 82% |
| Code Debugging Task | 96% | 87% | 91% | 93% |
| Compiler Optimization | 83% | 78% | 72% | 78% |
| Cache Strategy | 87% | 88% | 89% | 87% |
| Average | 89% | 83% | 85% | 87% |
The HYBRID_DESIGN prompt demonstrates the highest average solution quality (89%) and the most consistent performance across all scenarios, with no score below 80%. It also produces the largest number of thought steps. The src/server.ts file has been updated to use this prompt design.
Personally, I think the biggest improvement was adding this to the end of the prompt: "✍️ End each thought by asking: 'What am I missing or need to reconsider?'"
See Testing Framework for more details on the prompt evaluation system.
This project is licensed under the MIT License. See the LICENSE file for details.