E-commerce giant Amazon’s cloud subsidiary Amazon Web Services (AWS) has introduced EC2 G4 instances featuring NVIDIA Tesla T4 GPUs, which will be available to users in the coming weeks. Users will also be able to access T4 GPUs through Amazon Elastic Container Service for Kubernetes (EKS). According to the reports, a single G4 instance will be capable of harnessing up to eight T4 GPUs at once in the cloud.
During a keynote address at San Jose State University, AWS VP of Compute Matt Garman said the G4 instances will feature NVIDIA T4 processors and are designed for machine learning, helping customers shrink the time it takes to do inference at the edge, where response time really matters, while also reducing cost. The T4 processor made its data-center debut in September last year; it is built on the Turing architecture and packs 2,560 CUDA cores and 320 Tensor cores, with the power to process queries nearly 40 times faster than a CPU. Since its debut, the GPU has been integrated into several data centers run by companies such as Cisco, Dell EMC, and Hewlett Packard Enterprise (HPE).
As part of the announcement, NVIDIA also revealed general availability of its Constellation self-driving simulation platform, the debut of the Safety Force Field for driverless vehicles, the Jetson Nano computer for embedded devices, and the reorganization of over 40 NVIDIA deep learning acceleration libraries under a new umbrella named CUDA-X AI. CUDA-X AI libraries work with popular frameworks such as MXNet, PyTorch, and TensorFlow. Moreover, NVIDIA researchers released GauGAN, an artificial intelligence system trained on 1 million Flickr photos that can generate realistic landscape images.