Rapid-MLX — Claude Skill by raullenchai

by raullenchai · Claude Skill · ★ 2.2k

Last updated: 2026-05-12 · Indexed by AgentSkillsHub · Auto-synced every 8h

About Rapid-MLX

Rapid-MLX — run AI on your Mac, faster than anything else. Run local AI models on your Mac with no cloud and no API costs. It works with Cursor, Claude Code, and any OpenAI-compatible app: pip install → serve Gemma 4 26B → chat + tool calling → works with PydanticAI, LangChain, Aider, and more.
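Because the server exposes an OpenAI-compatible API, any client can talk to it by POSTing a standard chat-completions body to a local endpoint. The sketch below builds such a request body; the base URL, port, and model identifier are assumptions for illustration, not documented Rapid-MLX defaults.

```python
import json

# Hypothetical local endpoint — the actual host/port Rapid-MLX serves on is
# not stated here; any OpenAI-compatible base URL works the same way.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }

# "gemma-4-26b" is a placeholder model ID matching the model named above.
payload = build_chat_request("gemma-4-26b", "Hello from my Mac!")
print(json.dumps(payload, indent=2))
```

The same payload works against any OpenAI-compatible server, which is what lets Cursor, Claude Code, PydanticAI, LangChain, and Aider point at a local model without code changes.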

apple-silicon, claude-code, cursor, deepseek, fastapi, hacktoberfest, inference, llm, local-llm, m1

Quick Facts

Stars: 2,220
Forks: 272
Language: Python
Category: Claude Skill
License: Apache-2.0
Quality Score: 53.13/100
Open Issues: 26
Last Updated: 2026-05-12
Created: 2026-02-25
Platforms: claude-code, python
Est. Tokens: ~1324k

Compatible Skills

These tools work well together with Rapid-MLX for enhanced workflows:

  • vllm-mlx — semantic similarity 0.38, complementary, shared framework (openai), rare topics, same language, similar popularity, shared platform (85% match)
  • ovo-local-llm — semantic similarity 0.48, complementary, shared frameworks (ollama, openai), rare topics, shared platform (71% match)
  • mlx-omni-server — semantic similarity 0.37, complementary, shared framework (openai), rare topics, same language, similar popularity, shared platform (71% match)
  • openyak — semantic similarity 0.20, complementary, shared frameworks (ollama, openai), same language, similar popularity, shared platform (68% match)

Rapid-MLX alternative? Top 6 similar tools

Looking for a Rapid-MLX alternative? If you're comparing Rapid-MLX with other Claude Skill tools, these 6 projects are the closest alternatives on Agent Skills Hub — ranked by topic overlap, star count, and community traction.

  • claude-code-local by nicedreamzapp · ⭐ 2.6k

    Run Claude Code 100% on-device with local AI on Apple Silicon. MLX-native Anthropic-API server, 65 tok/s Qwen…

  • vllm-mlx by waybarrios · ⭐ 1.1k

    OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL,…

  • 9router by decolua · ⭐ 9.2k

    Unlimited FREE AI coding. Connect Claude Code, Codex, Cursor, Cline, Copilot, Antigravity to FREE Claude/GPT/G…

  • agency-agents-zh by jnMetaCode · ⭐ 10.6k

    🎭 211 plug-and-play AI expert roles — supports 16 tools including Hermes Agent/Claude Code/Cursor/Copilot, covering 18 departments across engineering/design/marketing/finance. Includes 46 China-market original…

  • tuui by AI-QL · ⭐ 1.1k

    A desktop MCP client designed as a tool unitary utility integration, accelerating AI adoption through the Mode…

  • TokenTracker by mm7894215 · ⭐ 444

    Track token usage across 15 AI agent CLIs — Claude Code, Codex, Cursor, Gemini, Kiro, OpenCode, OpenClaw, Ever…


Frequently Asked Questions

What is Rapid-MLX?

Rapid-MLX is the fastest local AI engine for Apple Silicon: 4.2x faster than Ollama, 0.08 s cached TTFT, and 100% tool-calling success. It ships 17 tool parsers, a prompt cache, reasoning separation, cloud routing, and a drop-in OpenAI replacement. It is categorized as a Claude Skill with 2.2k GitHub stars.
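Since the engine advertises 100% tool calling through an OpenAI-compatible API, clients consume its tool calls in the standard OpenAI response shape. A minimal sketch, assuming the server emits that standard format (the sample response and the `get_weather` function are illustrative, not from Rapid-MLX's docs):

```python
import json

# Sample response in the standard OpenAI tool-calling format, which an
# OpenAI-compatible server is expected to emit when the model calls a tool.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_0",
                "type": "function",
                "function": {
                    "name": "get_weather",          # hypothetical tool
                    "arguments": '{"city": "Cupertino"}',
                },
            }],
        },
        "finish_reason": "tool_calls",
    }],
}

def extract_tool_calls(resp: dict) -> list:
    """Return (function_name, parsed_arguments) pairs from a chat response."""
    calls = resp["choices"][0]["message"].get("tool_calls") or []
    return [(c["function"]["name"], json.loads(c["function"]["arguments"]))
            for c in calls]

print(extract_tool_calls(response))  # [('get_weather', {'city': 'Cupertino'})]
```

Because the `arguments` field is a JSON-encoded string in the OpenAI schema, it must be parsed with `json.loads` before dispatching to the actual function.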

What programming language is Rapid-MLX written in?

Rapid-MLX is primarily written in Python. It covers topics such as apple-silicon, claude-code, and cursor.

How do I install or use Rapid-MLX?

You can find installation instructions and usage details in the Rapid-MLX GitHub repository at github.com/raullenchai/Rapid-MLX. The project has 2.2k stars and 272 forks, indicating an active community.

What license does Rapid-MLX use?

Rapid-MLX is released under the Apache-2.0 license, making it free to use and modify according to the license terms.

What are the best alternatives to Rapid-MLX?

The top alternatives to Rapid-MLX on Agent Skills Hub include claude-code-local, vllm-mlx, and 9router. Each offers a different approach to the same problem space — compare them side-by-side by stars, quality score, and community activity.
