llm_context_benchmarks — LLM Plugin by ivanfioravanti

by ivanfioravanti · LLM Plugin · ★ 62

Indexed by AgentSkillsHub · Auto-synced every 8h

About llm_context_benchmarks

📊 LLM Context Benchmarks - A comprehensive benchmarking tool for testing LLMs with varying context sizes using Ollama. Features dual benchmark modes (API/CLI), automatic hardware detection (optimized for Apple Silicon), and visual performance charts.
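To illustrate the kind of measurement such a tool performs, here is a minimal sketch that queries Ollama's REST API (`POST /api/generate`) with a configurable context size (`options.num_ctx`) and computes generation throughput from the `eval_count` and `eval_duration` fields Ollama returns. This is an illustrative sketch based on Ollama's public API, not code from the llm_context_benchmarks repository itself; the function names are hypothetical.

```python
import json
from urllib import request


def tokens_per_second(resp: dict) -> float:
    # Ollama reports eval_count (tokens generated) and
    # eval_duration (nanoseconds) in its generate response.
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)


def benchmark_once(model: str, prompt: str, num_ctx: int,
                   host: str = "http://localhost:11434") -> dict:
    # num_ctx sets the context window size for this request,
    # which is the variable a context-size benchmark sweeps over.
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": num_ctx},
    }).encode()
    req = request.Request(f"{host}/api/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as r:
        return json.load(r)
```

A sweep would call `benchmark_once` for each context size (e.g. 2048, 8192, 32768) against a running Ollama server and plot `tokens_per_second` per size.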

ai · benchmarking · llms

Quick Facts

Stars: 62
Forks: 8
Language: Python
Category: LLM Plugin
License: Apache-2.0
Quality Score: 42.3/100
Open Issues: 4
Last Updated: 2026-05-05
Created: 2025-08-06
Platforms: cli, python
Est. Tokens: ~176k

llm_context_benchmarks alternative? Top 4 similar tools

Looking for an llm_context_benchmarks alternative? If you're comparing llm_context_benchmarks with other LLM plugin tools, these 4 projects are the closest alternatives on Agent Skills Hub — ranked by topic overlap, star count, and community traction.

  • bocoel by rentruewang · ⭐ 289

    Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 tim…

  • Hegelion by Hmbown · ⭐ 137

    Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)

  • PyOmniTS by Ladbaby · ⭐ 67

    🔬 A Researcher&Agent-Friendly Framework for Time Series Analysis. Train Any Model on Any Dataset!

  • ai-agents-reality-check by Cre4T3Tiv3 · ⭐ 56

    Benchmarking the gap between AI agent hype and architecture. Three agent archetypes, 73-point performance spre…

More LLM Plugin Tools

Explore other popular llm plugin tools:

View all LLM Plugin tools →


Frequently Asked Questions

What is llm_context_benchmarks?

llm_context_benchmarks is 📊 LLM Context Benchmarks - a comprehensive benchmarking tool for testing LLMs with varying context sizes using Ollama, featuring dual benchmark modes (API/CLI), automatic hardware detection (optimized for Apple Silicon), and visual performance charts. It is categorized as an LLM Plugin with 62 GitHub stars.

What programming language is llm_context_benchmarks written in?

llm_context_benchmarks is primarily written in Python. It covers topics such as ai, benchmarking, and llms.

How do I install or use llm_context_benchmarks?

You can find installation instructions and usage details in the llm_context_benchmarks GitHub repository at github.com/ivanfioravanti/llm_context_benchmarks. The project has 62 stars and 8 forks, indicating an active community.

What license does llm_context_benchmarks use?

llm_context_benchmarks is released under the Apache-2.0 license, making it free to use and modify according to the license terms.

What are the best alternatives to llm_context_benchmarks?

The top alternatives to llm_context_benchmarks on Agent Skills Hub include bocoel, Hegelion, PyOmniTS. Each offers a different approach to the same problem space — compare them side-by-side by stars, quality score, and community activity.

View on GitHub → Browse LLM Plugin tools