Google on Tuesday announced the general availability of Vertex AI, a managed platform designed to help data scientists and ML engineers build, deploy and manage ML projects. The announcement came during Google’s I/O conference, held virtually this year.
While Google has a bevy of machine learning products and services — which compete with other platforms such as AWS’s SageMaker — Google contends that the tools on the market are often incomplete.
“The clouds and the other platform providers had really done our customers a tremendous disservice,” Craig Wiley, director of product management for Google Cloud AI, told ZDNet. “Three, four or five years ago, we all launched these platforms with notebook training and prediction and said, ‘Hey, you develop your model in your notebook, you train it in our training system, and you put it into production in our prediction service, and you’re done.’ Guess what — it’s not the case.”
At the same time, the nature of the tools makes it challenging to work at scale.
With Google Cloud, “you went to go train a model in AutoML Vision, and you couldn’t use that same data set to do anything else on our stack. And that was a huge issue for us,” Wiley said. “We had customers coming to us saying, ‘Hey, I really like X, Y and Z, but I want to be able to open it up and do something else.’”
Vertex AI aims to fix these issues by enabling highly scalable workflows, as well as access to MLOps tools for maintaining and managing models in production. It also promises to speed up the time it takes to build and train models. The platform brings together Google Cloud’s services for building ML under one unified UI and API. Working in a single environment should make it easier to move models from experimentation into production, discover trends and make predictions.
Vertex AI gives teams access to the AI tools Google uses internally for computer vision, language, conversation and structured data. The toolkit will be regularly improved by Google Research.
It also includes new MLOps features like Vertex Vizier, an optimization service. Customers give Vertex Vizier a set of variables, as well as the function or metric they’re trying to optimize, and the service suggests trial values to ensure the model is well tuned.
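That suggest-and-evaluate loop can be illustrated in miniature. The sketch below is plain Python, with simple random search standing in for Vizier’s optimization algorithms; all names and structure here are illustrative, not the Vertex AI SDK:

```python
import random

# Toy stand-in for a Vizier-style study: the "service" suggests trials,
# the client evaluates the objective and reports the result back.
def suggest(space):
    """Pick a candidate value for each variable from its (low, high) range."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in space.items()}

def optimize(space, objective, n_trials=50):
    """Run the suggest -> evaluate -> record loop and keep the best trial."""
    best_params, best_value = None, float("-inf")
    for _ in range(n_trials):
        params = suggest(space)       # service proposes a trial
        value = objective(**params)   # client measures the metric
        if value > best_value:        # outcome recorded against the study
            best_params, best_value = params, value
    return best_params, best_value

# Example: maximize a metric whose true optimum sits at x=2, y=-1.
space = {"x": (-5.0, 5.0), "y": (-5.0, 5.0)}
params, value = optimize(space, lambda x, y: -((x - 2) ** 2 + (y + 1) ** 2))
```

The real service does this search far more intelligently than random sampling, but the contract is the same: variables in, metric evaluations back, best trial out.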
The fully managed Vertex Feature Store helps users share and reuse ML features. By connecting features to tools like ML pipelines, users can set up workflows, Wiley explained.
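The core idea is a shared, consistently keyed lookup of entity features that both training pipelines and serving code can read. A minimal sketch in plain Python (illustrative only, not the Vertex Feature Store API):

```python
from datetime import datetime, timezone

# Conceptual sketch of what a feature store provides: a shared lookup
# from entity -> named feature values, usable by both training pipelines
# and online serving. (Illustrative; not the actual Vertex API.)
class FeatureStore:
    def __init__(self):
        self._rows = {}  # (entity_type, entity_id) -> {feature: (value, ts)}

    def write(self, entity_type, entity_id, features):
        ts = datetime.now(timezone.utc)
        row = self._rows.setdefault((entity_type, entity_id), {})
        for name, value in features.items():
            row[name] = (value, ts)  # timestamp supports point-in-time reads

    def read(self, entity_type, entity_id, feature_names):
        row = self._rows.get((entity_type, entity_id), {})
        return {n: row[n][0] for n in feature_names if n in row}

store = FeatureStore()
store.write("user", "u123", {"age": 34, "lifetime_value": 812.5})
features = store.read("user", "u123", ["age", "lifetime_value"])
```

Because every team reads from the same store, a feature computed once for one model can be reused by the next, which is the sharing Wiley describes.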
Meanwhile, Vertex Experiments is effectively an enterprise version of TensorBoard, a tool for measuring and visualizing machine learning workflows. Vertex Experiments makes it easy to share those measurements within an organization and find previously trained models for comparison.
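The payoff of that kind of tracking is concrete: if every run logs its parameters and metrics to a shared store, finding the best previously trained model becomes a simple lookup. A toy sketch (illustrative names, not the Vertex Experiments API):

```python
# Conceptual sketch of experiment tracking: each run records its
# parameters and metrics so colleagues can compare models later.
# (Illustrative; not the actual Vertex Experiments API.)
runs = []

def log_run(name, params, metrics):
    """Record one training run's configuration and results."""
    runs.append({"name": name, "params": params, "metrics": metrics})

def best_run(metric, higher_is_better=True):
    """Return the logged run with the best value for the given metric."""
    key = lambda r: r["metrics"][metric]
    return max(runs, key=key) if higher_is_better else min(runs, key=key)

log_run("baseline", {"lr": 0.1}, {"accuracy": 0.81})
log_run("tuned", {"lr": 0.03}, {"accuracy": 0.87})
winner = best_run("accuracy")  # -> the "tuned" run
```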
“We’re trying to reduce the cycle time and increase the effectiveness of the data scientists as they seek to do this work right,” Wiley said, “whether that’s a velocity improvement from the adoption of pipelines or maybe an accuracy improvement by better understanding the models and their behaviors using experiments.”
The platform’s MLOps tools, including Vertex Continuous Monitoring and Vertex Pipelines, eliminate the do-it-yourself maintenance often required for models in production.
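As an example of the do-it-yourself maintenance being replaced, teams often hand-roll checks that compare live feature values against the training baseline and flag drift. A minimal sketch of such a check in plain Python (illustrative, not the Vertex Continuous Monitoring API):

```python
from statistics import mean, stdev

# A hand-rolled drift check of the kind managed monitoring replaces:
# measure how far the live mean of a feature has shifted from its
# training baseline, in baseline standard deviations.
def drift_score(baseline, live):
    """Absolute shift of the live mean, in baseline standard deviations."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

baseline = [10.0, 11.0, 9.5, 10.5, 10.2]   # feature values at training time
live = [13.9, 14.2, 13.5, 14.1, 13.8]      # feature values in serving traffic
alert = drift_score(baseline, live) > 3.0  # flag the model for retraining
```

A managed monitoring service runs checks like this continuously across all features, so teams don’t have to build and babysit the scaffolding themselves.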
Google launched Vertex AI in preview in November and quietly made the unified platform generally available a couple of months ago. Since then, a wide range of customers have been using it, Wiley said, including L’Oreal, Iron Mountain and Deutsche Bank. Wiley said the security capabilities needed for regulated industries — features like VPC controls and customer-managed encryption keys — are built in.
The platform also lends itself to a broad range of skill levels, Wiley said, from business analysts using AutoML capabilities to sophisticated data scientists.