WildClawBench — Codex Skill by InternLM

by InternLM · Codex Skill · ★ 301

Last updated: 2026-04-21 · Indexed by AgentSkillsHub · Auto-synced every 8h

About WildClawBench

WildClawBench: hard, practical, end-to-end evaluation for AI agents, in the wild.

WildClawBench is an agent benchmark that tests what actually matters: can an AI agent do real work, end-to-end, without hand-holding? We drop agents into a live OpenClaw environment (the same open-source personal AI assistant that real users rely on daily) and throw 60 original tasks at them: clipping goal highlights from a football match, negotiating meeting times over multi-round emails, hunting down contradictions in search results, writing inference scripts for undocumented codebases, catching privacy leaks before they happen. Useful things. Hard things. Hard enough that every frontier model we tested scores below 0.55 (top overall: 0.52). That makes scores mean something.

Why WildClawBench? Most agent benchmarks test isolated capabilities: calling a function, parsing JSON, following a single instruction.
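
This page doesn't show the harness itself, but the description implies a simple contract: an agent is wired into a live OpenClaw environment, each of the 60 tasks runs end-to-end, and every task yields a score in [0, 1] that is averaged into the overall number (0.52 for the best frontier model). A minimal sketch of that loop follows; every class and method name here is an illustrative assumption, not the repository's actual API.

    from dataclasses import dataclass
    from statistics import mean
    from typing import Callable, Sequence

    @dataclass
    class Task:
        name: str                          # e.g. "clip-goal-highlights" (hypothetical ID)
        run: Callable[["Agent"], float]    # runs the task end-to-end, returns a score in [0, 1]

    class Agent:
        """Stand-in for a frontier model connected to a live OpenClaw
        instance; a real agent would carry session state and tools."""
        def act(self, instruction: str) -> str:
            raise NotImplementedError

    def evaluate(agent: Agent, tasks: Sequence[Task]) -> float:
        """Mean score over all tasks. Per the page, every frontier model
        tested lands below 0.55 here, with a top overall score of 0.52."""
        return mean(task.run(agent) for task in tasks)

Sixty such Task instances, one per scenario, would reproduce the benchmark's headline metric.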

Topics: agentic-ai · agentic-evaluation · agents · benchmarks · openclaw

Quick Facts

Stars: 301
Forks: 15
Language: Python
Category: Codex Skill
License: MIT
Quality Score: 58.358/100
Last Updated: 2026-04-21
Created: 2026-03-23
Platforms: python
Est. Tokens: ~679k

Compatible Skills

These tools work well together with WildClawBench for enhanced workflows (a sketch of how such match scores might be composed follows the list):

  • team-tasks (56% match): semantic 0.16, complementary, same language, similar popularity, shared platform
  • AEnvironment (56% match): semantic 0.30, complementary, same language, similar popularity, shared platform
  • get-physics-done (55% match): semantic 0.16, complementary, same language, similar popularity, shared platform
  • agent-builder (55% match): semantic 0.15, complementary, same language, similar popularity, shared platform
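
The hub doesn't publish the formula behind these percentages, and the listed numbers can't come from these five signals alone (two entries share semantic 0.16 yet land a point apart). Still, the notation suggests a weighted blend of one continuous similarity with several boolean bonuses. A purely illustrative sketch, with assumed weights:

    def match_score(semantic: float, *flags: bool,
                    semantic_weight: float = 0.4,
                    flag_bonus: float = 0.125) -> float:
        """Combine a semantic similarity in [0, 1] with boolean
        compatibility flags into a percentage. The weights are
        assumptions, not Agent Skills Hub's actual values."""
        score = semantic_weight * semantic + flag_bonus * sum(flags)
        return round(100 * min(score, 1.0), 1)

    # team-tasks: semantic 0.16 plus all four boolean signals -> 56.4,
    # close to the 56% shown above under these assumed weights
    print(match_score(0.16, True, True, True, True))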

WildClawBench alternative? Top similar tool

Looking for a WildClawBench alternative? If you're comparing WildClawBench with other Codex Skill tools, this project is the closest alternative on Agent Skills Hub, ranked by topic overlap, star count, and community traction.

  • models by arimxyer · ⭐ 415

    TUI and CLI for browsing AI models, benchmarks, coding agents, and statuses for AI providers.

More Codex Skill Tools

Explore other popular codex skill tools:

View all Codex Skill tools →


Frequently Asked Questions

What is WildClawBench?

WildClawBench is an in-the-wild benchmark for AI agents in the OpenClaw Environment. It is categorized as a Codex Skill with 301 GitHub stars.

What programming language is WildClawBench written in?

WildClawBench is primarily written in Python. It covers topics such as agentic-ai, agentic-evaluation, and agents.

How do I install or use WildClawBench?

You can find installation instructions and usage details in the WildClawBench GitHub repository at github.com/InternLM/WildClawBench. The project has 301 stars and 15 forks, indicating an active community.

What license does WildClawBench use?

WildClawBench is released under the MIT license, making it free to use and modify according to the license terms.

What are the best alternatives to WildClawBench?

The top alternative to WildClawBench on Agent Skills Hub is models. It takes a different approach to the same problem space; compare the two side-by-side by stars, quality score, and community activity.

View on GitHub → Browse Codex Skill tools