by mclenhard · MCP Server · ★ 125
Indexed by AgentSkillsHub · Auto-synced every 8h
A Node.js package and GitHub Action for evaluating MCP (Model Context Protocol) tool implementations using LLM-based scoring. This helps ensure your MCP server's tools are working correctly and performing well.
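The idea behind LLM-based scoring is that a judge model grades each tool call against a few criteria and the grades roll up into a single score. As a minimal sketch (the shapes and names here are hypothetical, not the actual mcp-evals API; see the repository for the real interface):

```typescript
// Hypothetical result shape for one LLM-judged tool eval.
// The real mcp-evals types may differ.
interface EvalResult {
  tool: string;          // name of the MCP tool under test
  accuracy: number;      // 1-5 grade assigned by the LLM judge
  completeness: number;  // 1-5
  relevance: number;     // 1-5
}

// Average the per-criterion grades into a single 1-5 score.
function overallScore(r: EvalResult): number {
  return (r.accuracy + r.completeness + r.relevance) / 3;
}

const sample: EvalResult = {
  tool: "get_weather",
  accuracy: 5,
  completeness: 4,
  relevance: 4,
};
console.log(overallScore(sample).toFixed(2)); // prints "4.33"
```

In practice the per-criterion grades would come from prompting the judge model with the tool's input and output, rather than being hard-coded as above.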
| Field | Value |
| --- | --- |
| Stars | 125 |
| Forks | 12 |
| Language | TypeScript |
| Category | MCP Server |
| License | MIT |
| Quality Score | 50.75/100 |
| Open Issues | 5 |
| Last Updated | 2025-06-23 |
| Created | 2025-04-23 |
| Platforms | mcp, node |
| Est. Tokens | ~14k |
Looking for an mcp-evals alternative? If you're comparing mcp-evals with other MCP server tools, the three projects below are its closest alternatives on Agent Skills Hub, ranked by topic overlap, star count, and community traction.
- **AgentEval** — a comprehensive .NET toolkit for AI agent evaluation: tool-usage validation and RAG quality metrics.
- **voratiq** — run workflows, delegate to swarms, and verify outputs before you apply them.
- **awesome-pydantic-ai** — an opinionated list of awesome Pydantic-AI frameworks, libraries, software, and resources.
Explore other popular mcp server tools:
mcp-evals is a Node.js package and GitHub Action for evaluating MCP (Model Context Protocol) tool implementations using LLM-based scoring, helping ensure your MCP server's tools are working correctly and performing well. It is categorized as an MCP Server with 125 GitHub stars.
mcp-evals is primarily written in TypeScript and covers the topics ai, evals, and mcp.
You can find installation instructions and usage details in the mcp-evals GitHub repository at github.com/mclenhard/mcp-evals. The project has 125 stars and 12 forks, indicating an active community.
mcp-evals is released under the MIT license, making it free to use and modify according to the license terms.
The top alternatives to mcp-evals on Agent Skills Hub are AgentEval, voratiq, and awesome-pydantic-ai. Each takes a different approach to the same problem space; compare them side by side by stars, quality score, and community activity.