E-commerce giant Amazon has announced Amazon SageMaker Neo, a new machine learning feature in Amazon SageMaker that lets developers train a model once and run it anywhere, in the cloud or at the edge.
Amazon has released the code as the Neo-AI project under the Apache Software License. The open-source release is intended to help processor vendors, device manufacturers, and AI developers bring the latest machine learning advances to a broad array of hardware platforms. Amazon SageMaker Neo was announced at AWS re:Invent 2018 as a new capability of Amazon SageMaker, the company's managed machine learning Platform-as-a-Service (PaaS). Neo-AI deploys machine learning models on a variety of platforms by automatically optimizing TensorFlow, PyTorch, MXNet, ONNX, and XGBoost models. It also converts models into a common format, which avoids software compatibility difficulties. It presently supports platforms from Intel, NVIDIA, and ARM, with support for Cadence, Xilinx, and Qualcomm planned for the near future.
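On the managed side, a Neo compilation is requested through SageMaker's CreateCompilationJob API: you point it at a trained model artifact in S3, name the source framework, and pick a target device, and the service returns an optimized artifact. The sketch below assembles such a request; the helper function, bucket names, role ARN, and job name are all illustrative placeholders, not part of the Neo-AI release itself.

```python
def build_neo_compilation_job(job_name, role_arn, model_s3_uri, framework,
                              data_shape, target_device, output_s3_uri):
    """Assemble the request body for SageMaker's CreateCompilationJob API.

    The top-level parameter names follow the AWS API; every concrete
    value passed in (ARNs, S3 URIs, job name) is a placeholder.
    """
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_s3_uri,            # trained model artifact in S3
            "DataInputConfig": data_shape,    # e.g. '{"input": [1, 3, 224, 224]}'
            "Framework": framework,           # e.g. TENSORFLOW, PYTORCH, MXNET, ONNX, XGBOOST
        },
        "OutputConfig": {
            "S3OutputLocation": output_s3_uri,
            "TargetDevice": target_device,    # e.g. "jetson_nano", "rasp3b", "ml_c5"
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }


# Submitting the job requires AWS credentials and is shown for illustration only:
# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_compilation_job(**build_neo_compilation_job(
#     "demo-job", "arn:aws:iam::123456789012:role/NeoRole",
#     "s3://my-bucket/model.tar.gz", "ONNX",
#     '{"input": [1, 3, 224, 224]}', "jetson_nano", "s3://my-bucket/compiled/"))
```

The compiled artifact that lands in the output location can then be loaded by the Neo runtime on the target device.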
At its core, Neo-AI is a machine learning compiler and runtime built on established compiler technologies such as LLVM and Halide. It uses TVM to compile deep learning models and Treelite to compile decision-tree models; both began as open-source research projects at the University of Washington. According to the company, it also incorporates platform-specific optimizations from diverse contributors. The Neo-AI project will receive contributions from various companies, including AWS, Intel, ARM, Qualcomm, Cadence, and Xilinx. The Neo-AI runtime is currently deployed on devices from manufacturers such as Lenovo, ADLINK, Leopard Imaging, and Panasonic.