by yoloshii · MCP Server · ★ 152
Indexed by AgentSkillsHub · Auto-synced every 8h
On-device memory layer for AI agents, built for Claude Code, Hermes, and OpenClaw. Hooks + MCP server + hybrid RAG search.
| Field | Value |
| --- | --- |
| Stars | 152 |
| Forks | 24 |
| Language | TypeScript |
| Category | MCP Server |
| License | MIT |
| Quality Score | 35.75/100 |
| Open Issues | 4 |
| Last Updated | 2026-05-04 |
| Created | 2026-02-06 |
| Platforms | claude-code, mcp, node |
| Est. Tokens | ~630k |
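Since ClawMem exposes an MCP server, wiring it into Claude Code presumably follows the standard `.mcp.json` stdio pattern. A minimal sketch, assuming a hypothetical `clawmem` launch command (the actual package name and arguments are an assumption; check the repository's README for the real invocation):

```json
{
  "mcpServers": {
    "clawmem": {
      "command": "npx",
      "args": ["clawmem", "serve"]
    }
  }
}
```

With an entry like this in the project's `.mcp.json`, Claude Code spawns the server over stdio and the agent's memory tools become available in-session.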
Looking for a ClawMem alternative? If you're comparing ClawMem with other MCP server tools, these 6 projects are the closest alternatives on Agent Skills Hub — ranked by topic overlap, star count, and community traction.
Local-first persistent agentic memory powered by Recursive Memory Harness (RMH). Open source must win.
Local-first identity, memory, and secrets for AI agents. Portable state across models and harnesses.
Single-file memory layer for AI agents; sub-millisecond RAG on Apple Silicon, Metal-optimized and on-device. No Se…
The local-first LLM Wiki: open-source knowledge graph builder, RAG knowledge base, and agent memory store. Bui…
🏛 [UNDER CONSTRUCTION] A (roman) claude plugin marketplace
Local-first memory layer for OpenClaw, Codex App, and Codex CLI: capture, recall, dedupe, and native sync.
Explore other popular MCP server tools:
ClawMem is an on-device memory layer for AI agents (Claude Code, Hermes, and OpenClaw) combining hooks, an MCP server, and hybrid RAG search. It is categorized as an MCP Server with 152 GitHub stars.
ClawMem is primarily written in TypeScript. It covers topics such as ai-agent-memory, ai-agents, and bun.
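The "hybrid RAG search" the listing mentions typically means blending lexical matching with vector similarity. A minimal sketch in TypeScript of that idea, using toy embeddings and a simple keyword-overlap score (the function names, weights, and scoring are illustrative, not ClawMem's actual API):

```typescript
// One stored memory: its text plus a (toy) embedding vector.
type Doc = { id: string; text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard divide-by-zero
}

// Fraction of query words that appear in the document text.
function keywordScore(query: string, text: string): number {
  const q = new Set(query.toLowerCase().split(/\s+/));
  const t = new Set(text.toLowerCase().split(/\s+/));
  let hits = 0;
  for (const w of q) if (t.has(w)) hits++;
  return q.size ? hits / q.size : 0;
}

// Hybrid retrieval: rank docs by a weighted blend of vector and keyword scores.
function hybridSearch(
  query: string,
  queryEmbedding: number[],
  docs: Doc[],
  alpha = 0.5 // weight of the vector component
): Doc[] {
  const score = (d: Doc) =>
    alpha * cosine(queryEmbedding, d.embedding) +
    (1 - alpha) * keywordScore(query, d.text);
  return [...docs].sort((a, b) => score(b) - score(a));
}
```

The blend weight `alpha` trades off semantic recall (embeddings catch paraphrases) against lexical precision (keywords catch exact identifiers), which is why memory layers tend to combine both rather than rely on either alone.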
You can find installation instructions and usage details in the ClawMem GitHub repository at github.com/yoloshii/ClawMem. The project has 152 stars and 24 forks, indicating an active community.
ClawMem is released under the MIT license, making it free to use and modify according to the license terms.
The top alternatives to ClawMem on Agent Skills Hub include Ori-Mnemos, signetai, and Wax. Each offers a different approach to the same problem space — compare them side-by-side by stars, quality score, and community activity.