Left to right: Tim Ellis (CTO), Krish Wadhwani (Co-Founder and CEO), and David Hyde (Chief Science Officer). Courtesy of Adviser Labs.
Cloud GPUs are no longer uniformly scarce, though high-end models like Nvidia’s H100 and H200 can still be tight in some regions and at certain providers. A more consistent bottleneck is operational: teams with working models or simulations lose hours to identity management, schedulers, machine images, and cost guardrails. For labs and R&D groups, that means fewer experiment iterations even as cloud use for HPC/AI accelerates.
Adviser Labs, an Atlanta startup, is among the latest outfits focused on the problem. The company raised about $1 million in pre-seed funding to build a CLI-first platform that runs heavy workloads via a single adviser run command. It auto-provisions GPU and CPU clusters across the major clouds (AWS, Azure, and Google Cloud) and handles executables in languages such as Python, C, or Fortran.
The mention of Fortran, a programming language first released commercially in 1957 and still in use, was “mostly a light joke,” explained Krish Wadhwani, CEO and co-founder of the firm, in an email. “But it was really to emphasize that Adviser can handle virtually anything. Any language, any workload, no limitations.”
The startup spun out of Vanderbilt University research (the ADVISER project) and was co-founded by Wadhwani (University of Oxford, Reuben College), David Hyde (Assistant Professor of Computer Science at Vanderbilt; PhD, Stanford), and CTO Tim Ellis (previously at Apple, Confluent, and Stability AI).
Investors commit $1M to fuel growth
Investors include Drive Capital’s seed program (which operates in Atlanta), Chicago-based Simplex Ventures, and Menlo Park-based Unusual Ventures, plus angels from DoorDash. The cash will support product development and early-adopter outreach in high-performance computing.
Early targets include quant finance, where the tool handles Monte Carlo backtesting and options pricing. It also serves AI/ML teams training and fine-tuning models on H100-class instances, plus science and engineering jobs such as computational fluid dynamics, molecular dynamics, and density functional theory. These workloads often bog down in DevOps setup and cost tuning.
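For a sense of the quant workload, a minimal Monte Carlo pricer for a European call looks like the sketch below. It is illustrative only, not Adviser code, but it is exactly the kind of embarrassingly parallel script that trading teams fan out across thousands of instruments and scenarios.

```python
# monte_carlo_call.py -- illustrative sketch, not part of Adviser's codebase.
# Prices a European call by simulating terminal prices under risk-neutral GBM.
import numpy as np

def mc_european_call(s0, strike, rate, sigma, maturity, n_paths=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # S_T = S_0 * exp((r - 0.5*sigma^2)*T + sigma*sqrt(T)*Z)
    s_t = s0 * np.exp((rate - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    discounted = np.exp(-rate * maturity) * payoff
    # Return the price estimate and its Monte Carlo standard error.
    return discounted.mean(), discounted.std(ddof=1) / np.sqrt(n_paths)

if __name__ == "__main__":
    price, stderr = mc_european_call(s0=100.0, strike=105.0, rate=0.03, sigma=0.2, maturity=1.0)
    print(f"MC price: {price:.4f} +/- {stderr:.4f}")
```

Each run is independent, so adding cores or GPUs scales the work almost linearly; the sticking point tends to be provisioning and cost control rather than the math itself.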
Adviser markets a shift from hours-long configuration to seconds, with usage-based billing tied to compute consumption. The company says it selects cost-effective capacity, tapping options like cloud Spot/Preemptible instances—AWS, for instance, advertises up to 90% discounts on Spot versus On-Demand. The platform also factors in recent price changes, such as AWS’s June 2025 cuts of up to 45% on P5 (H100) On-Demand rates.
Born from real research headaches
Adviser’s origins trace back to the pain researchers and engineers face when running compute-intensive R&D in the cloud at institutions like Oxford, Vanderbilt, and Stanford. “Our team came from backgrounds in academia, scientific computing, and machine learning, and we consistently saw the same bottleneck: running large-scale simulations and data-heavy experiments required deep DevOps expertise, complex infrastructure setup, and constant cost management,” Wadhwani said.
The founders built Adviser initially to solve their own research workflow challenges, then validated broader demand by talking to hundreds of domain experts and engineers across quant finance, biotech, and energy. “Those conversations shaped Adviser into a patent-pending platform designed to make HPC-powered R&D accessible without requiring cloud expertise, while also optimizing speed, scalability, and cost efficiency,” Wadhwani added.
A core use case is scaling Python scripts. Running one in the cloud has traditionally meant orchestrating the hardware around it: setting up servers, managing access, and tuning costs. On a workstation, you might type python ./my_simulation.py in a terminal; that works for small tests but chokes on massive datasets or GPU-heavy math.
Adviser replaces that with a single command: adviser run python ./my_simulation.py. That command packages the environment, provisions the requested GPUs/CPUs, executes the job, streams logs and checkpoints, and tears down resources when finished. In other words, it aims to remove the one-off steps (IAM roles and policies, queue/scheduler setup, custom image and driver pinning, regional capacity hunting, and cost guardrails) that stall first runs and slow iteration.
The company’s broader pitch is the same for any job: scientists, engineers, and R&D teams run a single command (e.g., adviser run “python mysim.py”) while the platform handles cluster provisioning, environment orchestration, cost optimization, and streaming results back to the user’s IDE or CLI.
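To make that concrete, here is a hypothetical my_simulation.py of the sort such a command wraps: a small NumPy heat-diffusion loop that runs fine at toy resolution but becomes a cluster job once the grid, step count, or parameter sweep grows. The script is a stand-in for illustration, not Adviser’s code.

```python
# my_simulation.py -- hypothetical research workload used here for illustration.
# Explicit finite-difference time-stepping of the 2D heat equation on a square grid.
import numpy as np

def run(grid=512, steps=2000, alpha=0.1):
    u = np.zeros((grid, grid))
    # Start with a hot square in the middle; edges stay at zero (Dirichlet boundary).
    u[grid // 4 : 3 * grid // 4, grid // 4 : 3 * grid // 4] = 100.0
    for _ in range(steps):
        # Five-point Laplacian stencil on the interior points.
        lap = (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:] - 4.0 * u[1:-1, 1:-1]
        )
        u[1:-1, 1:-1] += alpha * lap
    return u

if __name__ == "__main__":
    field = run()
    print("mean temperature:", field.mean())
```

In Adviser’s model, the same file is launched unchanged with adviser run python ./my_simulation.py instead of python ./my_simulation.py, with provisioning and teardown handled by the platform.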
Diverse applications across industries
Adviser’s early deployments cluster around domains where orchestration and cost control—not raw GPU supply—tend to be the limiting factors.
“On the R&D side, our early pilots and signed enterprise customer span quantitative finance, where proprietary trading firms use Adviser to run thousands of Monte Carlo simulations, backtesting pipelines, and risk models daily, reducing both infrastructure overhead and cloud costs,” Wadhwani said.
In biotech, research teams use the platform for genomics pipelines, protein-structure workflows, and molecular dynamics—often with multi-terabyte datasets. Adviser distributes docking experiments and model training across GPUs to reduce wall-clock time and operator overhead.
In materials science and climate/energy research, workloads include density functional theory (DFT), high-throughput materials screening for batteries and semiconductors, and geospatial models used in carbon-capture studies and wind-farm layout optimization.
“Drug discovery researchers leverage Adviser to parallelize compound docking experiments and machine learning models for hit identification, cutting compute runtimes from days to hours,” Wadhwani said. “In materials science, we support teams running DFT simulations and high-throughput screening of novel compounds … and in climate and energy research, Adviser enables large-scale geospatial modeling.”