Google’s New Handwriting Recognition AI System In Gboard Makes 40 Percent Fewer Mistakes

Artificial Intelligence News

Google continues to improve handwriting recognition in Gboard, its virtual keyboard for iOS and Android devices. According to researchers at Google AI, its new handwriting recognition Artificial Intelligence system makes 20 to 40 percent fewer mistakes than the Machine Learning models it replaces.

The researchers noted that advances in Machine Learning have enabled new model architectures and training methodologies, allowing the company to revise its initial approach and instead build a single model that operates on the entire input. The company launched these new models for all Latin-script languages in Gboard at the beginning of the year, they added. Like most handwriting recognizers, Gboard works from touch points: drawn inputs are represented as a series of strokes, and these strokes, in turn, are sequences of time-stamped points. Gboard first normalizes the touch-point coordinates to make sure they remain consistent across devices with different sampling rates and accuracies, and then converts them into a sequence of cubic Bézier curves, parametric curves commonly used in computer graphics.
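
The two preprocessing steps above can be sketched in a few lines of Python. This is an illustrative sketch, not Gboard's actual code: `normalize_points` is a hypothetical helper that rescales time-stamped `(x, y, t)` touch points into a device-independent range, and `cubic_bezier` evaluates one coordinate of the standard cubic Bézier polynomial from its four defining points.

```python
def normalize_points(points):
    """Rescale time-stamped (x, y, t) touch points into a
    device-independent range (a hypothetical helper, not Gboard's
    actual preprocessing)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    # Uniform scale so the stroke's shape is preserved; guard against
    # a degenerate single-point stroke.
    scale = max(max(xs) - min_x, max(ys) - min_y) or 1.0
    t0 = points[0][2]  # make timestamps relative to the stroke start
    return [((x - min_x) / scale, (y - min_y) / scale, t - t0)
            for x, y, t in points]

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one coordinate of a cubic Bézier curve at t in [0, 1],
    where p0/p3 are the end points and p1/p2 the control points."""
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)
```

At `t = 0` and `t = 1` the curve passes exactly through its end points, which is why a handful of curve parameters can stand in for a long run of raw touch points.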

The major benefit of these curve sequences, as senior software engineers Sandro Feuz and Pedro Gonnet pointed out, is that they are more compact than the underlying sequence of input points. Each curve is represented by a polynomial, an expression of variables and coefficients, defined by its start point, end point, and control points. These sequences are then fed into a Recurrent Neural Network (RNN) trained to classify the character being written, specifically a bidirectional version of quasi-recurrent neural networks (QRNNs), an architecture that combines efficient parallelization with good predictive performance. Significantly, QRNNs also keep the number of weights, the strengths of the connections between the mathematical functions, or nodes, that make up the network, relatively small, which shrinks file size.
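
To make the QRNN idea concrete, here is a minimal sketch of a bidirectional QRNN layer with fo-pooling over a sequence of scalar features. It is a toy with hidden size 1 and hypothetical weights, not the trained Gboard model, but it shows the structural point: the gate values depend only on the input, so they can all be computed in parallel up front, leaving only a lightweight, weight-free pooling recurrence.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def qrnn_layer(seq, wz, wf, wo):
    """One direction of a QRNN with fo-pooling (illustrative sketch).

    All gate values are computed from the inputs alone, so this part
    parallelizes; the loop below is sequential but contains no weights,
    which is what keeps QRNNs fast and small.
    """
    z = [math.tanh(wz * x) for x in seq]  # candidate values
    f = [_sigmoid(wf * x) for x in seq]   # forget gates
    o = [_sigmoid(wo * x) for x in seq]   # output gates
    c, hidden = 0.0, []
    for zt, ft, ot in zip(z, f, o):       # fo-pooling recurrence
        c = ft * c + (1 - ft) * zt        # mix old state and candidate
        hidden.append(ot * c)             # gated output per timestep
    return hidden

def bidirectional_qrnn(seq, wz, wf, wo):
    """Pair each timestep's forward output with its backward output,
    as in a bidirectional QRNN."""
    fwd = qrnn_layer(seq, wz, wf, wo)
    bwd = qrnn_layer(seq[::-1], wz, wf, wo)[::-1]
    return list(zip(fwd, bwd))
```

Running both directions lets each output position see curve features from both earlier and later in the stroke sequence, which matters when the identity of a character depends on strokes drawn after it.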