
Researchers Find Text-Based Artificial Intelligence Algorithms Are Vulnerable To Paraphrasing Attacks


Today, companies and organizations are increasingly adopting Artificial Intelligence algorithms, thanks to rapid advances in Natural Language Processing (NLP). They are using these algorithms to carry out text-related tasks such as filtering spam emails, gauging the sentiment of social media posts and online reviews, screening resumes, and spotting fake news. However, recent research by IBM, Amazon, and the University of Texas claims that, with the right tools, malicious actors can attack text-classification algorithms and manipulate their behavior in potentially malicious ways.

The research, presented at the SysML AI conference at Stanford, examines paraphrasing attacks: modifying input text so that an AI algorithm classifies it differently, without changing its actual meaning. As reported, a paraphrasing attack could alter the content of a spam message so that the AI classifies it as ‘not spam’, while for a human reader the tampered message would carry the same meaning as the original. Over the past few years, many researchers and research groups have explored adversarial attacks, input modifications designed to cause AI algorithms to misclassify images and audio samples while preserving their original appearance and sound to human eyes and ears. Paraphrasing attacks, according to the report, are the text equivalent. Attacking text models, however, is much more complicated than tampering with computer vision or audio recognition algorithms.
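To make the idea concrete, here is a minimal sketch, not taken from the paper, of what a paraphrasing attack exploits: a text classifier scores surface wording, so a paraphrase a human reads as equivalent can receive a very different score. The toy training data, the messages, and the scikit-learn model below are all hypothetical, chosen for illustration; a real attacker would search systematically for a paraphrase that crosses the trained model's decision boundary.

```python
# A minimal sketch of the paraphrasing-attack idea: score an original spam
# message and a human-equivalent paraphrase against the same classifier.
# Everything here (training set, messages, model choice) is a hypothetical
# stand-in, not the setup used in the research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data: 1 = spam, 0 = not spam.
texts = [
    "win free money now click here",    # spam
    "claim your free prize today",      # spam
    "meeting moved to 3pm see agenda",  # not spam
    "lunch tomorrow? let me know",      # not spam
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

original = "win free money now"
# A paraphrase a human would read as the same offer, but with different
# surface wording that a bag-of-words model may score very differently.
paraphrase = "you could earn some cash at no cost"

for msg in (original, paraphrase):
    p_spam = clf.predict_proba([msg])[0][1]  # probability of class 1 (spam)
    print(f"{msg!r}: P(spam) = {p_spam:.2f}")
```

Because the paraphrase shares almost no vocabulary with the training spam, the classifier's score can drop sharply even though a person would recognize the same message.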

Pin-Yu Chen, an IBM researcher and co-author of the paper, says that adversarial perturbations make sense for images and audio; for text, even small changes to an excerpt, a word or two, may not read smoothly to humans. Previous work on adversarial attacks against text models involved changing single words in sentences. While this approach sometimes succeeded in changing the AI algorithm's output, it often produced modified sentences that sounded artificial. Chen and his colleagues therefore focused not only on substituting individual words but also on rephrasing sentences and changing longer sequences in ways that still make sense.
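For contrast, the sketch below illustrates the earlier single-word-substitution style of attack the article describes, not the authors' actual method. It greedily swaps words for entries in a hand-written synonym table, keeping any swap that lowers the spam probability. The `SYNONYMS` table is an assumption, and `clf` can be any fitted model exposing `predict_proba`, such as the pipeline from the previous sketch; real attacks use embedding-based synonym search and fluency checks, and skipping those safeguards is exactly why such edits often sound artificial.

```python
# A minimal sketch of a greedy single-word-substitution attack. Assumes a
# fitted classifier `clf` with predict_proba (e.g. the earlier pipeline).
SYNONYMS = {  # hypothetical, hand-written substitution table
    "win": ["earn", "gain"],
    "free": ["complimentary"],
    "money": ["cash", "funds"],
}

def greedy_word_swap(clf, sentence: str) -> str:
    """Swap words one position at a time, keeping any substitution that
    lowers the classifier's spam score."""
    words = sentence.split()
    for i in range(len(words)):
        best_score = clf.predict_proba([" ".join(words)])[0][1]
        for candidate in SYNONYMS.get(words[i], []):
            trial = words[:i] + [candidate] + words[i + 1:]
            score = clf.predict_proba([" ".join(trial)])[0][1]
            if score < best_score:  # this swap helps evade the spam label
                best_score = score
                words = trial
    return " ".join(words)

print(greedy_word_swap(clf, "win free money now"))
```

Sentence-level rephrasing, the focus of the new work, searches over much longer edits than this word-by-word loop, which is what lets it preserve fluency while still flipping the classifier's decision.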