Inductiva API v0.12: Benchmarking and Beyond

The Inductiva Team

December 25, 2024

Tags: Inductiva API, API Release, Inductiva API v0.12 release features, How to choose the best cloud machine for simulations, Cloud cost-saving tools for scientists and engineers, API for Scientific Simulations
We’re excited to announce the release of v0.12 of the Inductiva API!

As we wrap up 2024, it’s hard not to feel a sense of pride when we look back at what our team has accomplished this year. Our first release of the year, v0.4, launched back in February. Since then, we’ve rolled out 16 updates across 9 versions, each packed with meaningful improvements and new features designed to support our awesome users: scientists and engineers tackling large-scale simulations.

Today, we’re thrilled to announce a feature that truly sets us apart in the market: Inductiva’s Benchmarking functionality is now part of the API! This powerful tool, originally developed for our internal use, is now available to all our users, adding even more value to your workflows.

Find the Best Machines for Your Workload

So, what is the Benchmarking functionality?

In short, it allows you to test a short sample simulation across dozens of available machine configurations, helping you make an informed decision about which option best suits your needs in terms of performance and cost.

As you’ve likely experienced, finding the “best machine for you” isn’t always straightforward. Cloud providers like Google Cloud offer a wide range of machine families, each with countless vCPU and RAM configuration options. Understanding what they all mean can quickly feel overwhelming.

Even if you manage to get a handle on the differences between the various hardware options, how do you figure out which configuration is actually the best for your specific simulation needs? With your own simulation software, time constraints, and budget to consider, navigating this sea of cloud machines can feel like an impossible task.

This is what we call the Allocation Problem, and it’s a tough one. It’s not just like finding a needle in a haystack (at least with a needle, you know what you’re looking for). It’s more like solving a jigsaw puzzle where the pieces keep changing shape as you try to fit them together.

But why should you even care? Why not just pick a machine that’s “good enough”? After all, don’t Google and AWS already offer great machines at reasonable prices?

The answer is: yes, they do. But here’s the catch: you could easily end up spending 10 times more than you need to. The same simulation might run just as well, and just as fast, on a machine that costs a fraction of the price.
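To make that arithmetic concrete, here is a minimal sketch of how per-simulation cost falls out of hourly price and wall-clock runtime. The prices and runtimes below are made up for illustration, not real cloud quotes:

```python
def cost_per_simulation(price_per_hour: float, runtime_hours: float) -> float:
    """Cost of one simulation run: hourly price times wall-clock runtime."""
    return price_per_hour * runtime_hours

# Hypothetical numbers for illustration only.
big_machine = cost_per_simulation(price_per_hour=12.00, runtime_hours=2.0)
small_machine = cost_per_simulation(price_per_hour=1.00, runtime_hours=2.4)

# The big machine is only ~20% faster here, yet costs 10x more per run.
print(f"big: ${big_machine:.2f}, small: ${small_machine:.2f}")
```

The point is that raw speed alone doesn't decide the question: what matters is cost per completed simulation, which you can only know by measuring runtime on each machine.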

You can’t truly know which machine is best for your needs upfront. The actual performance depends on a mix of factors, including the machine itself, the software you’re using, and how your simulations are configured. In other words, the only way to figure out if a machine is the right fit for you is to test it with your own simulation.

This is exactly what our new Benchmarking functionality is designed to do. It offers an easy, systematic way to run a sample simulation on multiple cloud machines, gather performance and cost metrics, and determine which option works best for your needs.
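Conceptually, a benchmark boils down to the loop below: run the same sample simulation on each candidate machine, record its runtime and cost, and rank the results. This is a self-contained sketch with fabricated measurements, not the Inductiva API itself; the machine-type names are hypothetical examples, and the tutorial linked below shows the real calls:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    machine_type: str
    runtime_hours: float
    cost: float  # runtime multiplied by hourly price

def rank_machines(measurements: dict[str, tuple[float, float]]) -> list[BenchmarkResult]:
    """Rank machines by per-run cost, cheapest first.

    `measurements` maps machine_type -> (runtime_hours, price_per_hour),
    as gathered by running the same sample simulation on each machine.
    """
    results = [
        BenchmarkResult(machine, runtime, runtime * price)
        for machine, (runtime, price) in measurements.items()
    ]
    return sorted(results, key=lambda r: r.cost)

# Fabricated measurements from a hypothetical sample simulation.
ranking = rank_machines({
    "c2-standard-16": (1.8, 0.90),
    "c3-highcpu-44":  (0.9, 2.20),
    "n2-standard-32": (1.2, 1.60),
})
best = ranking[0]
print(f"cheapest per run: {best.machine_type} at ${best.cost:.2f}")
```

Note that in this made-up example the fastest machine is not the cheapest per run, which is exactly the kind of trade-off a benchmark surfaces.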

With Inductiva’s Benchmarking functionality, you no longer need to guess; you’ll measure. Over time, you’ll develop a clear intuition about which machines are the best fit for your simulations and which ones to avoid altogether.

And you’ll save a lot of money on cloud resources—big time!

To learn more about Benchmarking and how to run your own, check out our new tutorial: Quick Recipe to Run a Benchmark.

What Else is New in v0.12?

In addition to Benchmarking, v0.12 introduces several smaller but important features designed to enhance your experience.

First, our console—the web UI for managing simulation tasks, cloud resources, and output files—has received a series of usability improvements, making it even more intuitive and efficient.

Another exciting addition is the ability to directly use the outputs of previous simulations as inputs for new ones. This means you can start new simulations based on previous results without needing to download and re-upload large files. It significantly speeds up chained simulations and eliminates the hassle of relying on your local machine as an intermediate step.

What’s Next?

Plenty! We have exciting plans for 2025, starting with another feature that sets us apart. Soon, we’ll be enabling users to run Inductiva directly on their local resources. That’s right: you’ll be able to run simulations without incurring any cloud costs.

Zero costs?

Yes, you heard that right. Zero cloud costs.

Starting in 2025, users will be able to plug in their own resources to Inductiva and use the API to run simulations on those resources just as easily as they do on the cloud. No complicated setup, no extra hassle.

This feature lets users who have invested in their own infrastructure strike the perfect balance: run simulations locally at zero cost, and scale up to the cloud when needed. It’s a game-changer, and no other provider offers this level of flexibility.

Very soon, you’ll be able to do it all with Inductiva.

And that’s not all: there’s so much more to come in 2025.

Stay tuned!
