Statistical machine-learning techniques have been used in security applications for over 20 years, starting with spam filtering, fraud engines and intrusion detection. In the process we have become familiar with attacks from poisoning to polymorphism, and with issues from redlining to snake oil. The neural network revolution has recently brought many people into ML research who are unfamiliar with this history, so it should surprise nobody that many new products are insecure. In this talk I will describe some recent research projects in which we examine whether we should try to make machine-vision systems robust against adversarial samples, or fragile enough to detect them when they appear; whether adversarial samples have constructive uses; how we can do service-denial attacks on neural-network models; the need to sanity-check outputs; and the need to sanitise inputs. We need to shift the emphasis from the design of "secure" ML classifiers to the design of secure systems that use ML classifiers as components.
Ross Anderson is Professor of Security Engineering at Cambridge University. He is one of the founders of a vigorously growing new academic discipline, the economics of information security. Ross was also a seminal contributor to the idea of peer-to-peer systems and an inventor of the AES finalist encryption algorithm "Serpent". He also has well-known publications on many other technical security topics, including hardware tamper-resistance, emission security, copyright marking, and the robustness of application programming interfaces (APIs). He is a Fellow of the Royal Society, the Royal Academy of Engineering, the IET and the IMA. He also wrote the standard textbook "Security Engineering: A Guide to Building Dependable Distributed Systems".
Backdoor attacks insert hidden associations, or triggers, into deep learning models to override correct inference such as classification: the system behaves maliciously according to the attacker-chosen target when the trigger is present, while behaving normally in its absence. As a new, rapidly evolving and realistic class of attack, it could result in dire consequences, especially considering that the backdoor attack surfaces are broad. This talk first provides a brief overview of backdoor attacks, and then presents countermeasures towards building trustworthy deep neural networks.
Dr Surya Nepal is a Senior Principal Research Scientist at CSIRO Data61. He currently leads the distributed systems security group, comprising 30+ research staff and 50+ postgraduate students. His main research focus is on the development and implementation of technologies in the areas of cybersecurity and privacy, and AI and cybersecurity. He has more than 250 peer-reviewed publications to his credit. He is a member of the editorial boards of IEEE Transactions on Services Computing, ACM Transactions on Internet Technology, IEEE Transactions on Dependable and Secure Computing, and Frontiers in Big Data (Security, Privacy, and Trust). He is currently a deputy research director of the Cybersecurity Cooperative Research Centre (CRC), a national initiative in Australia.