by ivanfioravanti · LLM Plugin · ★ 62
Indexed by AgentSkillsHub · Auto-synced every 8h
📊 LLM Context Benchmarks - A comprehensive benchmarking tool for testing LLMs with varying context sizes using Ollama. Features dual benchmark modes (API/CLI), automatic hardware detection (optimized for Apple Silicon), visual performance charts.
| Field | Value |
| --- | --- |
| Stars | 62 |
| Forks | 8 |
| Language | Python |
| Category | LLM Plugin |
| License | Apache-2.0 |
| Quality Score | 42.3/100 |
| Open Issues | 4 |
| Last Updated | 2026-05-05 |
| Created | 2025-08-06 |
| Platforms | cli, python |
| Est. Tokens | ~176k |
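To illustrate the idea the tool implements (timing Ollama generations at increasing context sizes), here is a minimal, hypothetical sketch — not code from the repository. The model name, prompt sizes, and growth schedule below are assumptions; only the `/api/generate` endpoint and the `eval_count`/`eval_duration` response fields come from Ollama's documented API.

```python
"""Sketch: measure Ollama generation throughput as the prompt grows.
This is an illustrative stand-in for llm_context_benchmarks, not its code."""
import json
import urllib.request


def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Throughput from Ollama's response fields: eval_count is tokens
    generated, eval_duration is nanoseconds spent generating them."""
    return eval_count / (eval_duration_ns / 1e9)


def bench_once(model: str, prompt: str,
               url: str = "http://localhost:11434/api/generate") -> float:
    """Run one non-streaming generation and return tokens/sec."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return tokens_per_second(data["eval_count"], data["eval_duration"])


if __name__ == "__main__":
    # Repeat a filler word to approximate larger context sizes
    # (model name "llama3.2" is a hypothetical example).
    for n_words in (100, 1_000, 10_000):
        prompt = "benchmark " * n_words
        print(n_words, round(bench_once("llama3.2", prompt), 1), "tok/s")
```

The throughput typically drops as the prompt grows, which is the curve the repository's charts visualize.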
Looking for an llm_context_benchmarks alternative? If you're comparing llm_context_benchmarks with other llm plugin tools, these 4 projects are the closest alternatives on Agent Skills Hub — ranked by topic overlap, star count, and community traction.
Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times…
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
🔬 A Researcher&Agent-Friendly Framework for Time Series Analysis. Train Any Model on Any Dataset!
Benchmarking the gap between AI agent hype and architecture. Three agent archetypes, 73-point performance spread.
Explore other popular llm plugin tools:
llm_context_benchmarks is 📊 LLM Context Benchmarks - a comprehensive benchmarking tool for testing LLMs with varying context sizes using Ollama. It features dual benchmark modes (API/CLI), automatic hardware detection (optimized for Apple Silicon), and visual performance charts. It is categorized as an LLM Plugin with 62 GitHub stars.
llm_context_benchmarks is primarily written in Python. It covers topics such as ai, benchmarking, llms.
You can find installation instructions and usage details in the llm_context_benchmarks GitHub repository at github.com/ivanfioravanti/llm_context_benchmarks. The project has 62 stars and 8 forks.
llm_context_benchmarks is released under the Apache-2.0 license, making it free to use and modify according to the license terms.
The top alternatives to llm_context_benchmarks on Agent Skills Hub include bocoel, Hegelion, PyOmniTS. Each offers a different approach to the same problem space — compare them side-by-side by stars, quality score, and community activity.