Scope and Topics

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: such data sets often reveal sensitive personal information that can be exploited, without the knowledge or consent of the individuals involved, for various purposes including monitoring, discrimination, and illegal activities.
The second AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-21) held at the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) builds on the success of last year’s AAAI PPAI to provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges related to the design of privacy-preserving AI systems and algorithms and will have strong multidisciplinary components, including soliciting contributions about policy, legal issues, and societal impact of privacy in AI.

PPAI-21 will place particular emphasis on:
  1. Algorithmic approaches to protect data privacy in the context of learning, optimization, and decision making that raise fundamental challenges for existing technologies.
  2. Privacy challenges created by government and tech-industry responses to the Covid-19 outbreak.
  3. Social issues related to tracking, tracing, and surveillance programs.
  4. Algorithms and frameworks to release privacy-preserving benchmarks and data sets.


The workshop organizers invite paper submissions on the following (and related) topics:
  • Applications of privacy-preserving AI systems
  • Attacks on data privacy
  • Differential privacy: theory and applications
  • Distributed privacy-preserving algorithms
  • Human rights and privacy
  • Privacy issues related to the Covid-19 outbreak
  • Privacy policies and legal issues
  • Privacy-preserving optimization and machine learning
  • Privacy-preserving test cases and benchmarks
  • Surveillance and societal issues

Finally, the workshop welcomes papers that describe the release of privacy-preserving benchmarks and data sets that the community can use to solve fundamental problems of interest, including machine learning and optimization for health systems and urban networks, to mention but a few examples.


The workshop will be a one-and-a-half-day meeting. The first session (half a day) will be dedicated to privacy challenges, particularly those raised by the Covid-19 pandemic tracing and tracking policy programs. The second, day-long session will be dedicated to the workshop's technical content on privacy-preserving AI. The workshop will include a number of (possibly parallel) technical sessions; a virtual poster session where presenters can discuss their work, with the aim of further fostering collaborations; multiple invited speakers covering crucial challenges for the field of privacy-preserving AI applications, including policy and societal impacts; and a number of tutorial talks. It will conclude with a panel discussion.


Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Important Dates

  • November 16, 2020 – Submission Deadline [Extended]
  • December 7, 2020 – AAAI Fast Track Submission Deadline [New]
  • January 7, 2021 – Acceptance Notification [Updated]
  • February 8 and 9, 2021 – Workshop Date

Submission Information

Submission URL:

Submission Types

  • Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
  • Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.

Submission Tracks

  • Technical Track: This track is dedicated to the privacy-preserving AI technical content. It welcomes research contributions centered around the topics described above.
  • Privacy Challenges and Social Issues Track: This track is dedicated to the discussion of privacy challenges, particularly those raised by the Covid-19 pandemic tracing and tracking policy programs. It welcomes both technical contributions and position papers.

[New] AAAI Fast Track (Rejected AAAI papers)

Rejected AAAI papers with *average* scores of at least 4.5 may be submitted directly to PPAI along with their previous reviews. These submissions may go through a light review process or be accepted directly if the provided reviews are judged to meet the workshop's standards.

All papers must be submitted in PDF format, using the AAAI-21 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the AAAI 2021 technical program are welcomed.

For questions about the submission process, contact the workshop chairs.


All times are in Eastern Standard Time (UTC-5)
Join: Zoom link TBD

PPAI Day 1 - February 8, 2021

Time Title Link to video
08:50–09:00 Introductory remarks
09:00–09:45 Invited Talk by John M. Abowd [join]
09:45–10:00 Spotlight 1: On the Privacy-Utility Tradeoff in Peer-Review Data Analysis [pre-recording available]
10:00–10:15 Spotlight 2: Leveraging Public Data in Practical Private Query Release: A Case Study with ACS Data [pre-recording available]
10:30–11:15 Invited Talk by Ashwin Machanavajjhala [join]
11:20–12:50 Tutorial 1: Intro to DP by Audra McMillan [join]
13:30–13:45 Spotlight 3: Efficient CNN Building Blocks for Encrypted Data [pre-recording available]
13:45–14:00 Spotlight 4: Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [pre-recording available]
14:00–14:15 Spotlight 5: A variational approach to privacy and fairness [pre-recording available]
14:15–15:00 Invited Talk by Steven Wu [join]
15:00–17:00 Poster Session 1 [link to Discord channel]

PPAI Day 2 - February 9, 2021

Time Title Link to video
09:00–09:45 Invited Talk by Reza Shokri [join]
09:45–10:00 Spotlight 7: Coded Machine Unlearning [pre-recording available]
10:00–10:15 Spotlight 8: DART: Data Addition and Removal Trees [pre-recording available]
10:30–11:15 Invited Talk by Ashwin Machanavajjhala [join]
11:20–12:50 Tutorial 2: Federated Learning [join]
13:30–13:45 Spotlight 9: Reducing ReLU Count for Privacy-Preserving CNNs [pre-recording available]
13:45–14:00 Spotlight 10: Output Perturbation for General Differentially Private Convex Optimization with Improved Population Loss Bounds, Runtimes and Applications to Private Adversarial Training [pre-recording available]
14:15–15:00 Panel: “Differential Privacy: Implementation, deployment, and receptivity. Where are we and what are we missing?” [join]
15:00–17:00 Poster Session 2 [link to Discord channel]

Accepted Papers

Spotlight Presentations
  • Reducing ReLU Count for Privacy-Preserving CNNs
    Inbar Helbitz (Tel Aviv University); Shai Avidan (Tel Aviv University)
  • Output Perturbation for General Differentially Private Convex Optimization with Improved Population Loss Bounds, Runtimes and Applications to Private Adversarial Training
    Andrew Lowy (USC); Meisam Razaviyayn (USC)
  • Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach
    Cuong Tran (Syracuse University)
  • Coded Machine Unlearning
    Nasser Aldaghri (University of Michigan); Hessam Mahdavifar (University of Michigan); Ahmad Beirami (Facebook, USA)
  • Leveraging Public Data in Practical Private Query Release: A Case Study with ACS Data
    Terrance Liu (Carnegie Mellon University); Giuseppe Vietri (University of Minnesota); Thomas Steinke; Jonathan Ullman (Northeastern University); Steven Wu (Carnegie Mellon University)
  • Efficient CNN Building Blocks for Encrypted Data
    Nayna Jain (IIIT Bangalore); Karthik Nandakumar (Mohamed Bin Zayed University of Artificial Intelligence, UAE); Nalini Ratha (SUNY Buffalo); Sharath Pankanti (Microsoft); Uttam Kumar (IIIT Bangalore)
  • An In-depth Review of Privacy Concerns Raised by the COVID-19 Pandemic
    Jiaqi Wang (Penn State University)
  • A variational approach to privacy and fairness
    Borja Rodríguez Gálvez (KTH Royal Institute of Technology); Ragnar Thobaben (KTH Royal Institute of Technology); Mikael Skoglund (KTH Royal Institute of Technology)
  • DART: Data Addition and Removal Trees
    Jonathan Brophy (University of Oregon)
  • On the Privacy-Utility Tradeoff in Peer-Review Data Analysis
    Wenxin Ding (Carnegie Mellon University); Nihar Shah (Carnegie Mellon University); Weina Wang (Carnegie Mellon University)
Poster Presentations
  • Differentially Private Random Forests for Regression and Classification
    Shorya Consul (University of Texas at Austin); Sinead Williamson (UT Austin/CognitiveScale)
  • An Analysis Of Protected Health Information Leakage In Deep-Learning Based De-Identification Algorithms
    Salman Seyedi (Emory University)
  • Dopamine: Differentially Private Secure Federated Learning on Medical Data
    Mohammad Malekzadeh (Imperial College London); Burak Hasircioglu (Imperial College London); Nitish Mital (Imperial College London); Kunal Katarya (Imperial College London); Mehmet Emre Ozfatura (Imperial College London); Deniz Gunduz (Imperial College London)
  • Differential Privacy Meets Maximum-weight Matching
    Panayiotis Danassis (École Polytechnique Fédérale de Lausanne); Aleksei Triastcyn (EPFL); Boi Faltings (EPFL)
  • Intelligent Frame Selection as a Privacy-Friendlier Alternative to Face Recognition
    Mattijs Baert (Ghent University - imec); Sam Leroux (Ghent University - imec); Pieter Simoens (Ghent University - imec)
  • Accuracy and Privacy Evaluations of Collaborative Data Analysis
    Akira Imakura (University of Tsukuba); Anna Bogdanova (University of Tsukuba); Takaya Yamazoe (University of Tsukuba); Kazumasa Omote (University of Tsukuba); Tetsuya Sakurai (University of Tsukuba)
  • Maintaining the Utility of Privacy-Aware Schedules
    Arik Senderovich (University of Toronto); Ali Kaan Tutak (Humboldt University of Berlin); Christopher Beck (University of Toronto); Stephan Fahrenkrog-Petersen (Humboldt University of Berlin); Matthias Weidlich (Humboldt-Universität zu Berlin)
  • A Study of F0 Modification for X-Vector Based Speech Pseudo-Anonymization Across Gender
    Pierre Champion (INRIA); Denis Jouvet (INRIA); Anthony Larcher (Le Mans Université - LIUM)
  • Private Emotion Recognition with Secure Multiparty Computation
    Kyle J Bittner (University of Washington Tacoma); Rafael Dowsley (Monash University); Martine De Cock (University of Washington Tacoma)
  • Optimized Data Sharing with Differential Privacy: A Game-theoretic Approach
    Nan Wu (Macquarie University and CSIRO's Data61); Farhad Farokhi (The University of Melbourne); David Smith (CSIRO's Data61); Mohamed Ali Kaafar (Macquarie University and CSIRO's Data61)
  • Personalized privacy protection in social networks through adversarial modeling
    Sachin G Biradar (Amazon, Inc.); Elena Zheleva (University of Illinois at Chicago)
  • Hybrid Privacy Scheme
    Yavor Litchev (Lexington High School); Abigail Thomas (Nashua High School South)
  • Compressive Differentially-Private Federated Learning Through Universal Vector Quantization
    Saba Amiri (University of Amsterdam); Adam Belloum (Multiscale Networked Systems (MNS) Research Group, University of Amsterdam); Leon Gommans (Air France KLM); Sander Klous (Vrije Universiteit Amsterdam)
  • S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training
    Prashanthi Ramachandran (Ashoka University); Shivam Agarwal (Ashoka University); Aastha Shah (Ashoka University); Arup Mondal (Ashoka University); Debayan Gupta (Ashoka University)
  • Differentially Private Multi-Agent Constraint Optimization
    Sankarshan Damle (Machine Learning Lab, International Institute of Information Technology, Hyderabad); Aleksei Triastcyn (EPFL); Boi Faltings (EPFL); Sujit P. Gujar (Machine Learning Laboratory, International Institute of Information Technology, Hyderabad)


Tutorial on Privacy-Preserving Federated Learning

Brendan McMahan (Google), Kallista Bonawitz (Google), Peter Kairouz (Google)
(Title and Details TBA)

Tutorial on Recent advances in Differential Privacy

Audra McMillan
(Title and Details TBA)

Invited Speakers

John M. Abowd

U.S. Census Bureau

Talk details TBA

Ashwin Machanavajjhala

Duke University

Talk details TBA

Nicolas Papernot

University of Toronto

Talk details TBA

Reza Shokri

National University of Singapore

Talk details TBA

Steven Wu

Carnegie Mellon University

Talk details TBA

Program Committee

  • Aws Albarghouthi - University of Wisconsin-Madison
  • Carsten Baum - Aarhus University
  • Aurélien Bellet - INRIA
  • Mark Bun - Boston University
  • Albert Cheu - Northeastern University
  • Graham Cormode - University of Warwick
  • Rachel Cummings - Georgia Tech
  • Xi He - University of Waterloo
  • Antti Honkela - University of Helsinki
  • Mohamed Ali Kaafar - Macquarie University and CSIRO-Data61
  • Kim Laine - Microsoft Research
  • Yuliia Lut - Georgia Institute of Technology
  • Terrence W.K. Mak - Georgia Institute of Technology
  • Olga Ohrimenko - The University of Melbourne
  • Catuscia Palamidessi - Laboratoire d'informatique de l'École polytechnique
  • Paritosh Ramanan - Georgia Institute of Technology
  • Marco Romanelli - INRIA
  • Reza Shokri - National University of Singapore
  • Sahib Singh - Ford and OpenMined
  • Vikrant Singhal - Northeastern University
  • Keyu Zhu - Georgia Institute of Technology

Workshop Chairs

Ferdinando Fioretto

Syracuse University

Pascal Van Hentenryck

Georgia Institute of Technology

Richard W. Evans

Rice University