- First control plane for AMD GPU-powered neoclouds helps providers monetise LLM workloads with speed, governance, and ROI clarity.
- New platform accelerates AI deployment and revenue for GPU-based neoclouds, co-launched at Advancing AI 2025.
SINGAPORE, 7 July 2025 – Embedded LLM announced the launch of TokenVisor, its Graphics Processing Unit (GPU) monetisation and administration control plane, at the Advancing AI 2025 event held on 12 June in Santa Clara, California, United States.

TokenVisor was co-launched by Embedded LLM and AMD at Advancing AI 2025. The platform enables monetisation and control of AMD GPU clusters for LLM workloads in neocloud environments.
A Pioneering Solution for the AMD Ecosystem
TokenVisor is a pioneering solution, the first of its kind for the AMD GPU-powered neocloud ecosystem. It offers neocloud providers and enterprises an ROI-assured, streamlined path to manage, control, and monetise AMD GPU clusters for Large Language Model (LLM) workloads, while fostering community growth and innovation.
Engineered to simplify model deployment, capacity management, and billing, TokenVisor addresses key user needs. Its intuitive control plane — designed with insights from the AMD GPU neocloud community and in the spirit of open collaboration championed at events like Advancing AI — enables GPU owners to easily:
- Set custom pricing
- Monitor usage
- Automate resource allocation
- Implement rate-limiting policies
These features help neoclouds quickly commercialise services and equip enterprises with robust internal cost allocation and governance.
“TokenVisor is the hypervisor for the AI Token era – unlocking decentralised GPU computing’s potential requires tools as powerful and flexible as the hardware. Its co-launch at Advancing AI 2025, an event that celebrates AI innovation and open-source collaboration, marks an important milestone for the AMD GPU neocloud community,” said Ooi Ghee Leng, CEO of Embedded LLM.
“TokenVisor brings powerful new capabilities to the AMD GPU neocloud ecosystem, helping providers efficiently manage and monetise LLM workloads,” said Mahesh Balasubramanian, Senior Director of Product Marketing, Data Center GPU Business, AMD.
Early Adoption Success
Early adopters in the AMD GPU neocloud space report significant positive impact, including accelerated time-to-revenue and the ability to “open up shop” quickly after installing their GPU hardware and TokenVisor. The platform’s comprehensive support for popular LLM and multi-modal models, together with responsive technical support, is a key differentiator, generating enthusiasm among users seeking rapid Return on Investment (ROI) on their AI infrastructure.
Embedded LLM is committed to meeting high industry standards, enabling enterprise and cloud customers to own, operate, and monetise deeply integrated, optimised AMD stacks for AI inferencing.
For media enquiries, please contact:
Jiaqi Lim
Head of PR & Marketing
pr@embeddedLLM.com
https://embeddedllm.com
About Embedded LLM
Embedded LLM PTE. LTD. creates innovative Large Language Model (LLM) platforms, empowering organisations with generative AI. Inspired by its mission to build essential AI infrastructure for the knowledge economy, the company delivers robust and secure solutions. A significant open-source contributor, notably enhancing vLLM for AMD ROCm, Embedded LLM also offers open-source tools such as JamAI Base, its no-/low-code LLM orchestration platform. The company is committed to making LLM technology accessible and fostering innovation within the open ecosystem.
About AMD
For more than 55 years, AMD has driven innovation in high-performance computing, graphics, and visualization technologies. Hundreds of millions of consumers, Fortune 500 businesses, and leading scientific research facilities around the world rely on AMD technology to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit amd.com.