Google announced last week that Nvidia's Tesla T4 GPUs are now available in beta on the Google Cloud Platform. In November last year, Google Cloud Platform became the first cloud provider to offer the T4 GPUs, through a private alpha. The Tesla T4 is built for machine learning training and inference, and it is the first GPU in Google's portfolio with dedicated ray-tracing processors. In addition to the United States, the T4 is now available in regions in Brazil, India, the Netherlands, Singapore, and Tokyo.
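As a rough sketch of how the new GPUs are consumed, a T4 can be attached to a Compute Engine VM via the gcloud CLI's accelerator flag. The instance name, zone, machine type, and boot image below are illustrative assumptions, not values from the announcement:

```shell
# Sketch: create a VM with one Tesla T4 attached.
# Name, zone, machine type, and image are hypothetical examples;
# GPU instances must use a TERMINATE maintenance policy.
gcloud compute instances create t4-demo \
  --zone=us-central1-b \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=debian-9 \
  --image-project=debian-cloud
```

The same accelerator flag accepts the other GPU types in Google's lineup (for example `nvidia-tesla-v100`), so switching between the T4 and V100 for a given workload is a one-line change.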
Other GPUs in Google’s lineup include the Nvidia K80, P4, P100, and V100. The T4 is the best GPU in Google’s portfolio for driving inference workloads, Google notes, but it is also well-suited to machine learning training workloads, and it is the first data-center GPU with dedicated ray-tracing processors. The Tesla T4’s 16 GB of memory supports large machine learning models, or running inference on multiple smaller models at the same time. The V100 GPU remains the go-to choice for machine learning training workloads, but the T4 offers a lower price point.
The T4 is based on the Turing architecture, which combines ray-tracing and AI inference for a hybrid approach to computer graphics rendering. Ray-tracing is a rendering technique that produces realistic lighting effects. Google is also supporting virtual workstations on T4 instances, so designers can run rendering applications from anywhere.