OpenAI is projected to generate over $10 billion in revenue next year, a clear sign that the adoption of generative AI is accelerating.
Yet, most companies struggle to deploy large AI models in production. With the steep costs and complexities involved, nearly 90% of machine learning projects are estimated never to make it to production.
Addressing this pressing issue, Simplismart is today announcing a $7m funding round for its infrastructure that enables organisations to deploy AI models seamlessly.
Just as the shift to cloud computing relied on tools like Terraform, and mobile app development was fueled by Android, Simplismart is positioning itself as the critical enabler of AI's transition into mainstream enterprise operations.
The Series A funding round was led by Accel with participation from Shastra VC, Titan Capital, and high-profile angels, including Akshay Kothari, co-founder of Notion. This tranche, more than ten times the size of the company's previous round, will fuel R&D and growth for its enterprise-focused MLOps orchestration platform.
The company was co-founded in 2022 by Amritanshu Jain, who tackled cloud infrastructure challenges at Oracle Cloud, and Devansh Ghatak, who honed his expertise on search algorithms at Google Search.
In just two years, and with under $1m in initial funding, Simplismart has topped public benchmarks by building what it describes as the world's fastest inference engine. This engine allows organisations to run machine learning models at lightning speed, significantly boosting performance while driving down costs.
Simplismart's fast inference engine allows users to leverage optimised performance across all their model deployments. For example, its software-level optimisation helps run Llama 3.1 (8B) at a throughput of more than 440 tokens per second.
While most competitors focus on hardware optimisations or cloud computing, Simplismart has engineered this breakthrough in speed within a comprehensive MLOps platform tailored for on-prem enterprise deployments that is agnostic to both model and cloud platform.
“Building generative AI applications is a core need for enterprises today. However, the adoption of generative AI is far behind the rate of new developments. It’s because enterprises struggle with four bottlenecks: lack of standardised workflows, high costs leading to poor ROI, data privacy, and the need to control and customise the system to avoid downtime and limits from other services,” said Amritanshu Jain, co-founder and CEO at Simplismart.
Simplismart’s platform offers organisations a declarative language (similar to Terraform) that simplifies fine-tuning, deploying, and monitoring genAI models at scale.
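To illustrate the idea, a Terraform-style declarative spec for deploying and monitoring a model might look like the following. This is a hypothetical sketch of the general pattern; the field names and values are assumptions for illustration, not Simplismart's actual configuration language.

```yaml
# Hypothetical declarative deployment spec — illustrative only.
# Field names are assumptions, not Simplismart's actual syntax.
deployment:
  name: llama31-8b-prod
  model:
    source: meta-llama/Llama-3.1-8B   # model-agnostic: any open-source checkpoint
    quantization: int8                # optimisation knobs exposed declaratively
  infrastructure:
    cloud: any                        # cloud-agnostic: public cloud or on-prem
    gpu: A100
    autoscaling:
      min_replicas: 1
      max_replicas: 8
      target_qps: 100
  monitoring:
    latency_slo_ms: 200
    alert_channel: pagerduty
```

The appeal of a declarative spec like this is that deployments can be versioned, reviewed, and reproduced the same way Terraform manages cloud resources, rather than being assembled by hand.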
Third-party APIs often bring concerns around data security, rate limits, and a lack of flexibility, while deploying AI in-house comes with its own set of hurdles: access to computing power, model optimisation, scaling infrastructure, CI/CD pipelines, and cost efficiency, all of which require highly skilled machine learning engineers.
Simplismart's end-to-end MLOps platform standardises these orchestration workflows, allowing teams to focus on their core product needs rather than spending countless hours building this infrastructure themselves.
Amritanshu Jain added: “Until now, enterprises could leverage off-the-shelf capabilities to orchestrate their MLOps workloads since the quantum of workloads, be it the size of data, model or compute required, was small. As the models get larger and the workload increases, it will be imperative to have command over the orchestration workflows. Every new technology goes through the same cycle: exactly what Terraform did for cloud, Android Studio did for mobile, and Databricks/Snowflake did for data.”
“As GenAI undergoes its Cambrian explosion moment, developers are starting to realise that customising & deploying open-source models on their infrastructure carries significant merit; it unlocks control over performance, costs, customisability over proprietary data, flexibility in the backend stack, and high levels of privacy/security”, said Anand Daniel, partner at Accel.
“We were happy to see that Simplismart’s team saw this opportunity quite early, but what blew us away was how their tiny team had already begun serving some of the fastest-growing GenAI companies in production. It furthered our belief that Simplismart has a shot at winning in the massive but fiercely competitive global AI infrastructure market.”
Streamlining MLOps workflows will allow more enterprises to deploy genAI applications with greater control, managing the trade-off between performance and cost to suit their needs.
Simplismart believes that providing enterprises with granular Lego blocks to assemble their inference engine and deployment environments is key to driving adoption.