Three researchers from New York University (NYU) have published a paper this week describing a method that an attacker could use to poison deep learning-based artificial intelligence (AI) algorithms. […]

from https://www.bleepingcomputer.com/news/security/ai-training-algorithms-susceptible-to-backdoors-manipulation/
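
The attack described in the article works by tampering with the training data before the model is ever trained. Below is a minimal, hypothetical sketch of that general idea (training-set poisoning with a backdoor trigger): a small fraction of images gets a fixed "trigger" patch stamped in a corner and is relabeled to an attacker-chosen class, so a model trained on the poisoned set behaves normally on clean inputs but can be steered whenever the trigger appears. All names, parameters, and the patch/label scheme here are illustrative assumptions, not taken from the NYU paper.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.05,
                   patch_size=3, seed=0):
    """Return poisoned copies of (images, labels).

    images: (N, H, W) array of pixel values in [0, 1]
    labels: (N,) array of integer class labels
    A randomly chosen fraction of images receives a bright square trigger
    in the bottom-right corner and is relabeled to target_label.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp the trigger patch onto the chosen images.
    images[idx, -patch_size:, -patch_size:] = 1.0
    # Relabel them so the model learns the association trigger -> target_label.
    labels[idx] = target_label
    return images, labels

if __name__ == "__main__":
    # Toy data standing in for a real training set (e.g. 28x28 grayscale images).
    X = np.random.rand(1000, 28, 28)
    y = np.random.randint(0, 10, size=1000)

    Xp, yp = poison_dataset(X, y, target_label=7)
    changed = np.any(Xp != X, axis=(1, 2))
    print(f"{changed.sum()} of {len(X)} training images carry the trigger")
```

Because only a few percent of the data is touched, ordinary accuracy metrics on a clean test set would look unremarkable, which is what makes this kind of supply-chain manipulation hard to spot after the fact.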