Researchers at Nvidia, in a demo at the GPU Technology Conference (GTC) in San Jose, California, have introduced a generative adversarial network (GAN) system named GauGAN, which lets users generate realistic landscape images of scenes that never existed. Bryan Catanzaro, Nvidia's VP of applied deep learning research, said GauGAN builds on lessons from the Pix2Pix system launched last year, which can render virtual worlds but cannot paint landscapes without leaving artifacts in the resulting image.
The new system's neural network was trained on one million open-source Flickr images and has learned the relationships between more than 180 object categories, such as snow, water, trees, flowers, bushes, hills, and mountains. That understanding is grounded in how objects relate to one another: a tree next to water produces a reflection, and when the season changes and snow covers the ground, trees are depicted without leaves. Style transfer is also supported, so an image can take on a warm sunset glow or the cooler light of a city skyline. The GauGAN application works from a segmentation map, which acts like a coloring book: it specifies where the objects are but provides no detail.
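To make the "coloring book" idea concrete, here is a minimal sketch of what a segmentation map looks like as data: a 2D grid of integer class labels, typically one-hot encoded before being fed to the generator. The class names and IDs below are illustrative, not GauGAN's actual label vocabulary (which spans 180+ categories).

```python
import numpy as np

# Illustrative class IDs; the real GauGAN vocabulary has 180+ labels.
SKY, WATER, TREE = 0, 1, 2

# A tiny 4x6 segmentation map: sky above, a tree at right, water below.
seg_map = np.array([
    [SKY,   SKY,   SKY,   SKY,   SKY,   TREE],
    [SKY,   SKY,   SKY,   SKY,   TREE,  TREE],
    [WATER, WATER, WATER, WATER, WATER, TREE],
    [WATER, WATER, WATER, WATER, WATER, WATER],
])

# The map says WHERE each object is, but carries no texture or lighting;
# the generator fills that in (e.g. reflecting the tree in adjacent water).
num_classes = 3
one_hot = np.eye(num_classes)[seg_map]  # shape (4, 6, 3), one channel per class
```

Each pixel activates exactly one class channel, which is how the network "reads" the user's rough drawing.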
In an accompanying paper, Nvidia principal research scientist Ming-Yu Liu and colleagues describe the creation of GauGAN and its spatially-adaptive denormalization (SPADE) method for photo manipulation. The company also debuted the Nvidia AI Playground, a website where people can tinker with a range of trained neural networks, GauGAN among them, that use AI to distort visuals or create realistic images. GauGAN is the latest reality-bending AI system from Nvidia, the creator of deepfake technology like StyleGAN, which can generate realistic images of people who never existed.
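The core idea of spatially-adaptive denormalization is that, after normalizing an activation map, the scale and shift applied to each pixel are computed from the segmentation map rather than being fixed per channel. The NumPy sketch below is a heavily simplified illustration, not Nvidia's implementation: it looks up per-class gamma/beta parameters directly, whereas the real SPADE layers produce them with small convolutional networks over the segmentation map.

```python
import numpy as np

def spade_normalize(features, seg_map, gamma_per_class, beta_per_class, eps=1e-5):
    """Simplified spatially-adaptive (de)normalization sketch.

    features:        (C, H, W) activation map
    seg_map:         (H, W) integer class labels
    gamma_per_class: (num_classes, C) per-class scale parameters
    beta_per_class:  (num_classes, C) per-class shift parameters
    (In the real model, gamma and beta come from convolutions over
    the segmentation map, so they vary smoothly across space.)
    """
    # Normalize each channel to zero mean, unit variance (instance-norm style).
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)

    # Per-pixel modulation: each pixel's gamma/beta depend on its
    # semantic class, so "water" pixels are styled differently from "sky".
    gamma = gamma_per_class[seg_map].transpose(2, 0, 1)  # (C, H, W)
    beta = beta_per_class[seg_map].transpose(2, 0, 1)    # (C, H, W)
    return gamma * normalized + beta

# Usage on toy data: 3 channels, 4x4 spatial grid, 2 classes.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4, 4))
seg = np.array([[0] * 4] * 2 + [[1] * 4] * 2)  # top half class 0, bottom class 1
gamma = rng.standard_normal((2, 3))
beta = rng.standard_normal((2, 3))
out = spade_normalize(feats, seg, gamma, beta)
```

Because the semantic layout survives the normalization step via gamma and beta, the generator does not "wash out" the user's input map, which the paper identifies as a key weakness of ordinary normalization layers.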