

Special Issue

Call for Papers

Artificial Intelligence (AI) is changing our lives and has been applied to broad areas such as video surveillance, autonomous driving, and Internet recommendation systems. While AI algorithms bring great convenience to society, they are also vulnerable to attacks. A compromised AI system may leak user information, misclassify inputs, cause property loss, or lead to wrong decisions. In the most common attacks, adversaries craft particular inputs, known as adversarial examples, that lead a model to produce an output behavior of their choice, such as a misclassification. Given the increasing use of AI in safety-critical and security applications, e.g., autonomous vehicles, intrusion detection, malicious behavior detection, and facial recognition, it is important to ensure that such algorithms are robust to malicious adversaries. Some research has been conducted on hardening neural networks against these attacks and on mitigating the harm caused by adversarial samples. However, current work does not yet meet the security requirements of real-world applications.
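To make the threat concrete, the sketch below illustrates one classic way such inputs are crafted, the fast gradient sign method (FGSM); it assumes a differentiable PyTorch classifier, and the model and data shown are hypothetical stand-ins, not any particular system discussed in this issue.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Craft an adversarial example with one signed-gradient step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Shift each input feature by +/- epsilon in the direction that raises the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a hypothetical toy classifier standing in for a real model:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # dummy "image" with pixels in [0, 1]
y = torch.tensor([3])          # its assumed true label
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
```

The perturbation is bounded by epsilon per pixel, so the adversarial image is visually near-identical to the original even when it changes the model's prediction.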

Privacy issues in AI have also received growing attention. Smart voice bots, probe boxes, and other AI technologies can place harassing calls to users or harvest the private information of people nearby in public spaces. A recommendation system can easily mine users' browsing and purchase records. Accurate AI models cannot be developed without large amounts of real data; deployed AI systems, in turn, make data acquisition easy. Such data can reveal private information: consumption habits, medical records, online transactions, communications, and personal details such as exercise and diet. As AI technology matures, more and more applications, some legitimate and some not, are used to mine private information. The importance of privacy protection is now widely recognized, and striking a balance between AI's use of data and privacy is one of the most important research topics in AI privacy protection.
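Differential privacy, one of the topics listed below, is a common way to make this balance precise. The following minimal sketch shows the Laplace mechanism for releasing a numeric query result; the function name and parameters are illustrative, and it assumes the query's L1 sensitivity is known.

```python
import numpy as np

def laplace_mechanism(query_result, sensitivity, epsilon, rng=None):
    """Release a numeric query result with epsilon-differential privacy
    by adding Laplace noise of scale sensitivity / epsilon."""
    rng = rng if rng is not None else np.random.default_rng()
    return query_result + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count. A count has L1 sensitivity 1,
# since adding or removing one person changes it by at most 1.
private_count = laplace_mechanism(query_result=42, sensitivity=1.0, epsilon=0.5)
```

Smaller values of epsilon give stronger privacy but noisier answers, which is exactly the utility-privacy trade-off described above.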

This special issue focuses on the security and privacy of AI models and related applications. The ultimate goal is to protect AI systems from interference so that they can complete their tasks while preserving privacy. Building a robust AI system requires not only designing algorithms that accurately and quickly detect the subtle perturbations in adversarial examples, but also imitating or constructing more realistic adversarial examples. Such examples aim to expose the weaknesses of an AI system with or without knowledge of its internals (white-box or black-box settings). A robust AI system should continuously improve and optimize itself in the face of diverse real-world attack scenarios. On the privacy side, we need to design more intelligent and humane AI algorithms that can not only extract useful information but also protect privacy-related information. This special issue will bring together academic and industrial researchers to identify and discuss technical challenges and recent results related to security and privacy for AI.

The topics of interest for this special issue include, but are not limited to:

  • Novel methods to defend against adversarial examples, such as image compression, distillation, and high-level representations
  • Intelligent and humane AI systems that can extract useful information while also protecting privacy-related information
  • Security and privacy models for AI, or their improvement through AI (machine learning, data mining, and knowledge discovery), e.g., protecting AI models from attack, privacy-preserving and federated learning, or balancing AI utility against privacy
  • Metrics for privacy computing, such as differential privacy
  • Security and privacy operations and modelling, such as generative adversarial networks for adversarial sample generation, privacy coding, etc.
  • Applications of AI technologies in cyber security and privacy
  • Security and privacy issues of AI in blockchains
  • Simultaneous attacks across multiple AI tasks, such as target tracking, image classification, and emotion recognition
  • Collection, storage, aggregation, and retrieval of private information by AI systems
  • Security and privacy protection for machine learning on mobile devices, such as federated learning (a minimal aggregation sketch follows this list)
  • Security and privacy of distributed AI systems
  • AI-enabled network paradigms and architectures supporting self-healing
  • Evaluating the robustness of AI systems
  • Security and privacy of AI related applications (e.g., self-driving cars, identity authentication)
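
As a minimal illustration of the federated-learning topics above, the sketch below shows the aggregation step of federated averaging (FedAvg), in which clients share only model parameters, never raw data; all names and values are illustrative.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """FedAvg aggregation: average client model parameters weighted by
    local dataset size; raw training data never leaves the clients."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Example: three clients with different amounts of local data.
params = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 0.9])]
sizes = [100, 300, 600]
global_params = federated_average(params, sizes)  # skewed toward larger clients
```

Even though raw data stays local, shared parameters can still leak information, which is why this setting appears under both the security and the privacy topics above.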

Papers will be collected through the Manuscript Central system for IEEE Transactions on Network Science and Engineering and will undergo a rigorous and well-coordinated peer-review process.

Submission Guidelines

Prospective authors are invited to submit their manuscripts electronically, adhering to the IEEE Transactions on Network Science and Engineering guidelines. Note that the page limit is the same as that of regular papers.

Please submit your papers through the online system and be sure to select the special issue or special section name. Manuscripts must not have been published and must not be under review elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by e-mail directly to the Guest Editors.

Important Dates

Manuscripts Due: 1 September 2020
Peer Reviews to Authors: 1 December 2020
Revised Manuscripts Due: 1 February 2021
Second-round Reviews to Authors: 1 April 2021
Final Accepted Manuscript Due: 1 June 2021

Guest Editors

Bin Xiao (Lead)
Hong Kong Polytechnic University, China

Fan Wu
Shanghai Jiao Tong University, China

Francesco Chiti
University of Florence, Italy

Mohammad Hossein Manshaei
Florida International University, USA

Giuseppe Ateniese
Stevens Institute of Technology, USA