AI Application Development Accelerated Through Seamless Integration of Hugging Face Models with Hyperstack AI Studio
LONDON, November 25, 2025 – NexGen Cloud, the leading AI infrastructure-as-a-service provider, today announces the integration of Hugging Face with its Hyperstack AI Studio. The collaboration enables developers building applications with AI Studio to train, tune and deploy models from the Hugging Face Transformers library seamlessly within the same environment.
Hyperstack AI Studio, an end-to-end platform for AI application development for enterprises and research organisations, will now offer Hugging Face models as a core capability. Developers can import supported LoRA (Low-Rank Adaptation) adapters into Hyperstack AI Studio from Hugging Face to take advantage of pre-trained adapters developed by the open-source community. Once deployed, an adapter can be used in the AI Studio Playground to test, compare and prototype models in a live, interactive environment. The integration of Hugging Face models into AI Studio will accelerate the development of AI applications from conceptualisation and evaluation through to final testing and production deployment.
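For readers unfamiliar with the technique, LoRA fine-tunes a model by learning a small low-rank update to frozen weight matrices rather than retraining them in full, which is why adapters are cheap to share and swap. The following plain-Python sketch illustrates only the underlying idea; it is not Hyperstack's or Hugging Face's API, and the matrices shown are hypothetical toy values.

```python
# Minimal illustration of the LoRA idea: the base weight matrix W stays
# frozen, and fine-tuning learns a low-rank update B @ A (rank r much
# smaller than the matrix dimensions), so only r * (d + k) parameters
# are trained instead of d * k. Toy example only -- not a real API.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A), the weight actually used at inference."""
    BA = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# 2x2 frozen base weight; rank-1 adapter (B is 2x1, A is 1x2),
# so the adapter adds only 4 trainable numbers instead of 4 full weights.
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],
     [2.0]]
A = [[0.5, 0.5]]

print(lora_effective_weight(W, A, B))  # [[1.5, 0.5], [1.0, 2.0]]
```

Because the base weights never change, many adapters trained for different tasks can be layered onto the same base model, which is what makes community-shared adapters practical.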
Developers will gain access to Hugging Face’s extensive model ecosystem, which includes hundreds of thousands of pre-trained, state-of-the-art models covering diverse tasks in Natural Language Processing (NLP), computer vision, audio, and multimodal processing. The integration will also ensure access to the latest and trending open-source models, as well as domain-specific models fine-tuned for applications such as medical diagnostics and financial risk analysis.
“We are committed to supporting AI developers in every way that we can,” explains Youlian Tzanev, NexGen Cloud’s Chief Strategy Officer & Co-Founder. “We have delivered state-of-the-art infrastructure, on-demand access and a comprehensive development environment with AI Studio. We are now opening access to a vast array of pre-built and tested models that will further accelerate the work of developers and speed the deployment of their applications.”
The development community will further benefit from Hyperstack’s pre-configured environments and libraries, which now include the Hugging Face models, to reduce set-up time. Supported LoRA adapters include Llama 3.1 8B Instruct, Mistral Small 24B Instruct 2501, Llama 3.3 70B Instruct and OpenAI gpt-oss-120b.
AI Studio is built on NexGen Cloud's Hyperstack platform and operates on a usage-based pricing model, with customers paying only for the AI resources they use. The on-demand infrastructure includes the latest GPU models, performance optimisation and enterprise-grade security, while enabling community collaboration.
About NexGen Cloud
NexGen Cloud is Europe’s foremost provider of enterprise AI cloud solutions for the global market. By building on cutting-edge infrastructure, we deliver scalable, production-ready platforms tailored to the demands of modern Gen AI workloads.
NexGen Cloud’s solutions are tailored to meet the diverse needs of AI enterprises and practitioners through a suite of specialised products and services, including Private Clouds for large-scale secure AI deployments, and Hyperstack, an on-demand service for enterprise GPU access supporting AI startups and developers.
Media Contact
NexGen Cloud
[email protected]
Tel: +44 2031 375256