In the ever-evolving landscape of cybersecurity, the challenges presented are as diverse as they are daunting. From traditional cyber threats to the integration of artificial intelligence (AI) and the impending revolution of quantum computing, the arsenal of potential risks continues to expand. Addressing these threats demands a comprehensive understanding of emerging dangers and proactive strategies to safeguard digital infrastructures. Solutions and protective measures must therefore be developed and implemented to reduce these dangers so that artificial intelligence can be used productively to enhance cybersecurity.
Among the array of cyber-attack methods, data poisoning stands out as a significant threat to artificial intelligence and machine learning systems. Data poisoning is the deliberate alteration of the training data supplied to a machine learning system so that the system either behaves in a way that opposes its intended function or disregards its duties entirely (Simms, 2023). Data poisoning can also involve adding corrupt data specifically crafted to train the system to carry out malicious activities (Taddeo, 2019). Manipulating the ability of autonomous cars to interpret road signs is one of several ways data poisoning has been used to alter the behavior of a machine learning system. An example of this is the research conducted by Tencent, the Chinese technology giant, on Tesla's autonomous cars and their artificial intelligence algorithms. The researchers only had to implement a minute change to the lane markings on the road....
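To make the mechanism concrete, the sketch below illustrates one simple form of data poisoning, label flipping, in which an attacker corrupts a fraction of the training labels and thereby degrades the trained model's accuracy. The dataset, model, and flip fractions are illustrative assumptions for demonstration only; they are not drawn from the studies cited above.

```python
# Minimal sketch of a label-flipping data-poisoning attack (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for "clean" training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a chosen fraction of training examples (label-flipping poisoning)."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # invert the binary label on the selected examples
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")
```

Running the sketch typically shows test accuracy falling as the poisoned fraction grows, which mirrors the core idea described above: an attacker who can tamper with training data can push a model away from its intended behavior without ever touching the deployed system directly.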