by THUDM · Agent Tool · ★ 256
Towards Large Multimodal Models as Visual Foundation Agents
| Field | Value |
| --- | --- |
| Stars | 256 |
| Forks | 9 |
| Language | Python |
| Category | Agent Tool |
| License | Apache-2.0 |
| Quality Score | 39.2/100 |
| Open Issues | 16 |
| Last Updated | 2025-04-24 |
| Created | 2024-08-08 |
| Platforms | python |
| Est. Tokens | ~378k |
Looking for a VisualAgentBench alternative? If you're comparing VisualAgentBench with other agent tools, these six projects are the closest alternatives on Agent Skills Hub, ranked by topic overlap, star count, and community traction:
- Low-code tool to rapidly build and coordinate multi-agent teams
- Langtrace 🔍 is an open-source, OpenTelemetry-based end-to-end observability tool for LLM applications…
- 💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents
- A research agent system deeply rooted in your own Zotero library
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating…
- ChatGPT CLI is a powerful, multi-provider command-line interface for working with modern LLMs…
VisualAgentBench, described as "Towards Large Multimodal Models as Visual Foundation Agents," is categorized as an Agent Tool and has 256 GitHub stars.
VisualAgentBench is primarily written in Python and is tagged with topics such as gpt, llm-agent, and multimodal-large-language-models.
You can find installation instructions and usage details in the VisualAgentBench GitHub repository at github.com/THUDM/VisualAgentBench. The project has 256 stars and 9 forks.
VisualAgentBench is released under the Apache-2.0 license, making it free to use and modify according to the license terms.
The top alternatives to VisualAgentBench on Agent Skills Hub include tribe, langtrace, and Awesome-GUI-Agent. Each offers a different approach to the same problem space; compare them side by side by stars, quality score, and community activity.