
‘Data poisoning’: Google AI expert says cyber attackers could critically harm, disable AI systems

Google Brain research scientist Nicholas Carlini has said that cyber attackers could disable AI systems by ‘poisoning’ their data sets. According to a report by the South China Morning Post, Carlini said that by manipulating just a tiny fraction of an AI system’s training data, attackers could critically compromise its functionality.

‘Some security threats, once confined to academic experimentation, have become tangible threats in real-world contexts’, Carlini said during the Artificial Intelligence Risk and Security Sub-forum at the World Artificial Intelligence Conference, according to financial news outlet Caixin. In one prevalent attack method known as ‘data poisoning’, an attacker introduces a small number of biased samples into the AI model’s training data set. This deceptive practice ‘poisons’ the model during training, undermining its usefulness and integrity.

According to the International Security Journal, data poisoning involves tampering with machine-learning training data to produce undesirable outcomes. An attacker infiltrates a machine-learning database and inserts incorrect or misleading information; as the algorithm learns from this corrupted data, it draws unintended and potentially harmful conclusions. ‘By contaminating just 0.1 percent of the data set, the entire algorithm can be compromised. We used to perceive these attacks as academic games, but it’s time for the community to acknowledge these security threats and understand the potential for real-world implications’, Carlini said.
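
To make the mechanism concrete, here is a minimal, hypothetical sketch of one well-known poisoning variant, a ‘backdoor’ attack, written in Python with scikit-learn on synthetic data. The dataset, model, trigger and 0.1 percent poison rate are illustrative assumptions for demonstration, not a reconstruction of Carlini’s actual experiments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification task standing in for a real training corpus.
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def add_trigger(samples):
    """Stamp a backdoor 'trigger' onto samples: an out-of-range value in feature 0."""
    samples = samples.copy()
    samples[:, 0] = 8.0  # far outside the feature's normal range
    return samples

# Poison ~0.1% of the training set: triggered copies of real samples,
# all relabeled to the attacker's chosen class (1).
n_poison = int(0.001 * len(X_train))  # roughly 15 of 15,000 samples
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_dirty = np.vstack([X_train, add_trigger(X_train[idx])])
y_dirty = np.concatenate([y_train, np.ones(n_poison, dtype=y_train.dtype)])

model = LogisticRegression(max_iter=1000).fit(X_dirty, y_dirty)

# Accuracy on clean inputs stays high, so the poisoning is easy to miss...
print("clean test accuracy:", model.score(X_test, y_test))

# ...but inputs carrying the trigger are steered toward the attacker's label.
triggered = add_trigger(X_test[y_test == 0])
print("triggered class-0 inputs flipped to 1:", (model.predict(triggered) == 1).mean())
```

The exact numbers vary with the data and model, but the pattern matches the claim above: the poisoned model looks healthy on clean inputs while an attacker who knows the trigger can reliably force the output they want.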
