San Francisco-based transport company Uber has launched Ludwig, a new open-source Deep Learning toolbox designed to make training and testing Deep Learning models easier for non-experts. According to the company, Ludwig also lets experts and researchers simplify the prototyping process and streamline data processing, so that they can concentrate on developing Deep Learning architectures rather than on data wrangling.
For the past two years, Uber has been developing Ludwig to simplify the deployment of Deep Learning models in its projects. The company has already used the toolkit internally for a number of projects, such as its Customer Obsession Ticket Assistant (COTA), information extraction from driver licenses, and food delivery time prediction. Ludwig comes with a set of model architectures that can be combined to build an end-to-end model for a given use case. Its main features include:

- No need to write code: users don't require any coding skills to train a model and deploy it for obtaining predictions.
- Generality: Ludwig uses a new data-type-based approach to Deep Learning model design, making the tool applicable to many different use cases.
- Extensibility: it is easy to add new model architectures and new feature data types.
- Flexibility: Ludwig gives experienced users extensive control over model building and training, while remaining easy to use for beginners.
- Understandability: standard visualizations help users understand the performance of their Deep Learning models and evaluate their predictions.
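To illustrate the data-type-based, no-code approach, a minimal Ludwig model definition might look like the following. This is a hedged sketch: the feature names `review` and `sentiment` are illustrative, not taken from the article, and the exact schema may vary between Ludwig versions.

```yaml
# Hypothetical model definition (model_definition.yaml):
# declares one text input feature and one category output feature.
# Ludwig chooses suitable encoders/decoders based on the declared types.
input_features:
  - name: review
    type: text
output_features:
  - name: sentiment
    type: category
```

With a definition like this, a model could then be trained from the command line without writing any code (for instance with Ludwig's `train` command pointed at a CSV file and this YAML file; the exact flag names depend on the Ludwig release, so the official documentation should be consulted).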
Besides being flexible and accessible, Ludwig offers further advantages for non-programmers, including a set of command-line utilities for training models, running experiments, and obtaining predictions. It also provides a programmatic API that enables users to train and deploy a model with only a few lines of code. Additionally, it includes tools for assessing models, comparing their performance and predictions through visualizations, and extracting model weights and activations from them.
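The "few lines of code" claim for the programmatic API can be sketched as follows. This is a hedged illustration based on Ludwig's early Python API, not code from the article: the model definition, the DataFrame contents, and the exact argument names (`data_df`, etc.) are assumptions that may differ across Ludwig versions, and running it requires the `ludwig` package to be installed.

```python
import pandas as pd
from ludwig.api import LudwigModel  # requires the ludwig package

# Illustrative model definition: one text input, one category output.
model_definition = {
    "input_features": [{"name": "review", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

# Hypothetical training data for the sketch.
train_df = pd.DataFrame({
    "review": ["great ride", "late and rude driver"],
    "sentiment": ["positive", "negative"],
})

model = LudwigModel(model_definition)        # build the model from the definition
train_stats = model.train(data_df=train_df)  # train and collect training statistics
predictions = model.predict(data_df=train_df)  # obtain predictions as a DataFrame
```

The point of the example is the shape of the workflow: a declarative definition, a `train` call, and a `predict` call, with Ludwig handling preprocessing and architecture selection in between.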