
Google Releases A Scalable TensorFlow Framework Lingvo For Deep Learning And Sequence-To-Sequence Modeling

Artificial Intelligence News

The team of researchers at Google has introduced Lingvo, a new TensorFlow framework that provides a complete solution for collaborative deep learning research, with a focus on sequence modeling tasks such as machine translation, speech recognition, and speech synthesis.

The TensorFlow team has open-sourced Lingvo, which was designed from the start for collaboration: its code has a consistent interface and style that is easy to read and understand, along with a flexible modular layering system that encourages code reuse. Because many researchers share the same codebase, it is easier to try out others' ideas within your own models and to adapt existing models to new datasets. Lingvo also makes research results easier to reproduce and compare, because all of a model's hyperparameters are configured in their own dedicated sub-directory, separate from the model logic. Every hyperparameter is explicitly declared, and its value is logged at runtime. In addition, all models in the framework are built from the same common layers, which makes them easy to compare with one another. Lingvo models can also be trained at scale on production datasets, with support for both synchronous and asynchronous distributed training.
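To make that configuration pattern concrete, here is a minimal Python sketch of the idea the article describes: hyperparameters declared explicitly in a params object that lives apart from the model logic and is logged at runtime. The Params class and its methods below are hypothetical illustrations of the pattern, not Lingvo's actual API.

# Illustrative sketch only -- not Lingvo's real API. It mimics the pattern
# described above: hyperparameters are declared explicitly, kept separate
# from the model logic, and logged at runtime for reproducibility.

class Params:
    """A tiny registry of explicitly declared hyperparameters."""

    def __init__(self):
        self._defined = {}

    def define(self, name, default, description):
        # Every hyperparameter must be declared before it can be set or read.
        self._defined[name] = {"value": default, "doc": description}

    def set(self, name, value):
        if name not in self._defined:
            raise KeyError(f"Undeclared hyperparameter: {name}")
        self._defined[name]["value"] = value

    def get(self, name):
        return self._defined[name]["value"]

    def log(self):
        # Values are logged at runtime so every run can be reproduced.
        for name, entry in sorted(self._defined.items()):
            print(f"{name} = {entry['value']!r}  # {entry['doc']}")


def build_model_params():
    # The model's configuration lives here, away from the model logic itself.
    p = Params()
    p.define("learning_rate", 1e-3, "Optimizer learning rate.")
    p.define("hidden_dim", 512, "Width of each hidden layer.")
    p.define("num_layers", 4, "Number of stacked layers.")
    return p


if __name__ == "__main__":
    p = build_model_params()
    p.set("learning_rate", 3e-4)  # Experiment-specific override.
    p.log()

Because every run logs the full, explicitly declared configuration, two experiments can be compared simply by diffing their logged values, which is the reproducibility benefit the Lingvo design aims for.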

Lingvo began as a framework for natural language processing (NLP) tasks, but it has become flexible enough to suit models for tasks such as image segmentation and point-cloud classification. Lingvo also supports GANs, distillation, and multi-task models, and the framework does not compromise on speed: it ships with an optimized input pipeline and fast distributed training.