Deep learning techniques have recently been shown to be very promising for detecting cyberattacks and determining their nature. At the same time, many cybercriminals have devised new attacks aimed at disrupting the functioning of various deep learning tools, including those for image classification and natural language processing.
Perhaps the most common among these attacks are adversarial attacks, which are designed to "trick" deep learning algorithms using subtly altered data, causing them to classify it incorrectly. This can lead to malfunctions in many applications, biometric systems and other technologies that rely on deep learning algorithms.
Several previous studies have shown the effectiveness of various adversarial attacks at getting deep neural networks (DNNs) to make unreliable and false predictions. These attacks include the Carlini & Wagner attack, the DeepFool attack, the Fast Gradient Sign Method (FGSM) and the Elastic-Net Attack (ENA).
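Of the attacks listed above, FGSM is the simplest to illustrate: it perturbs each input feature by a small step in the direction that increases the model's loss. The sketch below shows the idea on a toy linear model with NumPy; the model, weights and step size are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """Fast Gradient Sign Method: shift each feature of x by
    epsilon in the sign direction of the loss gradient."""
    return x + epsilon * np.sign(grad)

# Toy linear model (hypothetical): loss = -w . x for the true class,
# so the gradient of the loss with respect to the input is -w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
grad = -w

x_adv = fgsm_perturb(x, grad, epsilon=0.1)  # -> [0.9, 1.1, 0.9]
```

The perturbation is bounded by epsilon per feature, which is why FGSM examples often look nearly identical to the originals while still flipping the classifier's prediction.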
Researchers at The Citadel have recently developed a DNN that can detect a type of cyberattack known as a DNS amplification distributed denial of service (DDoS) attack, and then used two different algorithms to generate adversarial examples that could fool their DNN. Their findings, published in a paper pre-published on arXiv, further confirm the unreliability of deep learning methods for DNS attack detection and their vulnerability to adversarial attacks.
DNS amplification DDoS attacks exploit vulnerabilities in domain name system (DNS) servers to amplify the queries made to them, ultimately flooding targets with traffic and bringing servers down. These attacks can cause significant disruption to online services, including those run by both small businesses and large multinationals.
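The "amplification" in these attacks comes from the size asymmetry between a DNS query and its response: an attacker sends a small query with a spoofed source address, and the server's much larger reply is delivered to the victim. The figures below are illustrative assumptions of typical packet sizes, not measurements from the paper.

```python
# Illustrative packet sizes (assumptions): a small UDP DNS query can
# trigger a much larger response, e.g. an "ANY" lookup on a zone with
# DNSSEC records. The response goes to the spoofed (victim) address.
query_bytes = 60       # small spoofed DNS query sent by the attacker
response_bytes = 3000  # large response delivered to the victim

amplification_factor = response_bytes / query_bytes
print(f"amplification factor: {amplification_factor:.0f}x")  # prints "amplification factor: 50x"
```

With amplification factors like this, even a modest amount of attacker bandwidth can be turned into an overwhelming flood at the victim, which is why detecting this traffic quickly matters.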
Over the past few years, computer scientists have developed several deep learning techniques that can detect DNS amplification DDoS attacks. Nevertheless, the team at The Citadel showed that these techniques could be circumvented using adversarial attacks.
"Much of the current work in adversarial learning has been done in image processing and natural language processing using a wide range of algorithms," Jared Mathews and his colleagues wrote in their paper. "Two algorithms of interest are Elastic-Net Attack on Deep Neural Networks (EAD) and TextAttack."
EAD and TextAttack are two algorithms that have proven particularly good at creating perturbed data that DNNs misclassify. Mathews and his colleagues thus developed a technique to detect DNS amplification DDoS attacks and then tried to fool it using adversarial data generated by the EAD and TextAttack algorithms.
"In our experiment, the EAD and TextAttack algorithms are used against a Domain Name System amplification classifier," the researchers wrote in their paper. "The algorithms are used to generate malicious DDoS adversarial examples, which are then fed as input to the intrusion detection system's neural network to be classified as valid traffic."
In their tests, Mathews and his colleagues found that the adversarial data generated by EAD and TextAttack could fool their DNN for DNS amplification DDoS attack detection 100% and 67.63% of the time, respectively. These findings highlight significant flaws and vulnerabilities in existing deep learning-based methods for detecting these attacks.
"We show that both image processing and natural language processing adversarial learning algorithms can be applied against a neural network for network intrusion detection," the researchers wrote in their paper.
In the future, the work of this team of researchers at The Citadel may inspire the development of more effective tools for detecting DNS amplification DDoS attacks, tools that can recognize adversarial data and classify it correctly. In their next studies, the researchers plan to test the effectiveness of adversarial attacks against a particular class of DNS amplification attack detectors, namely those targeting the so-called Constrained Application Protocol (CoAP) used by many IoT devices.
Jared Mathews, Prosenjit Chatterjee, Shankar Banik and Cory Nance, A deep learning approach to create DNS amplification attacks. arXiv:2206.14346v1 [cs.CR], arxiv.org/abs/2206.14346
© 2022 Science X Network
Citation: A deep learning technique to generate DNS amplification attacks (2022, July 14). Retrieved July 15, 2022 from https://techxplore.com/news/2022-07-deep-technique-dsn-amplification.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.