In October 2020, I introduced a new elective course at IIMA, Artificial Intelligence and Marketing (AIM). As a first-time teacher, I was both excited and nervous about offering a new course in an emerging field. Interestingly, midway through teaching it, the idea for a second course, on privacy, occurred to me. Although AIM focused mainly on the role of data and machine learning in marketing, fairness and privacy concerns surrounding users' data were a persistent theme across the case studies we discussed in class. That is how I conceived a new course, titled "Privacy Paradox: Artificial Intelligence and Digital Platforms".
With the advent of data science and machine learning, companies are awash in customer data and are leveraging powerful, state-of-the-art methods to extract insights from it. However, in the absence of clear regulatory norms, firms find it challenging to decide what qualifies as reasonable use and to act in their customers' best interest. This course equipped future managers with tools to understand privacy issues arising from artificial intelligence, cybersecurity, misinformation and platform design.
In the first session, students discussed the Apple-versus-FBI case and the security-versus-privacy conflict it exemplifies. They also examined privacy as a responsibility to society, i.e., identifying harms to third parties and determining responsibilities towards them. In the second session, students learnt about Sidewalk Labs, which set out to build a city "from the Internet up". They debated the key privacy decisions that need to be made (collection, ownership, dissemination, etc.), who should make those decisions (consumers, companies or governments) and which privacy frameworks are most fruitful for enforcing them (rule-based or principle-based). These issues apply broadly to companies wrestling with the privacy consequences of their products and business models, and with how to engage their customers, regulators and other stakeholders on these questions.
In the third session, students assessed the privacy issues associated with emerging AI-based technologies in lightly regulated environments, e.g., facial recognition algorithms and the attendant concerns about privacy breaches, infringement of civil rights and racial bias. The Ring Inc. case also let students examine the responsibilities of a company deploying disruptive technology in an unregulated industry, and the role of companies that offer public services when a gap appears in the government's provision of those services. Next, we dwelt on privacy intrusion and fake news circulating on social media platforms like Facebook. We then turned to PatientsLikeMe and a privacy framework for businesses built on patient-generated health data, along with frameworks for gathering and processing consumer data and the privacy-and-data dilemma raised by new data-privacy regulations such as the GDPR.
In the penultimate session, we discussed differential privacy. We also examined use cases of federated and decentralised privacy-preserving algorithms and the interplay between privacy and adversarial robustness in machine learning. Finally, we discussed the relationships between privacy, fairness and transparency. We concluded the course with a small project in which students designed privacy-conscious products that leveraged Aadhaar services.
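To give a flavour of what differential privacy looks like in practice, here is a minimal sketch of the classic Laplace mechanism for a counting query. This is an illustrative example, not course material: the function name, dataset and epsilon value are all invented for the demonstration. The key fact it relies on is that a counting query has sensitivity 1, so Laplace noise with scale 1/epsilon yields epsilon-differential privacy.

```python
import numpy as np

def dp_count(data, predicate, epsilon):
    """Return a differentially private count of records matching predicate.

    Adding or removing one record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon
    provides epsilon-differential privacy for this query.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: release how many users are over 40, with epsilon = 0.5.
ages = [23, 45, 31, 52, 38, 61, 29, 47]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; repeated releases consume the privacy budget additively, which is why deployed systems track cumulative epsilon across queries.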
Eminent guests from academia and industry also interacted with our students. Aaron Roth (University of Pennsylvania and Amazon) discussed principled solutions grounded in the science of socially aware and privacy-preserving algorithm design. He offered mathematical definitions of fairness and privacy and showed students how algorithms can be coded to directly correct for existing biases. Next, Vaikkunth Mugunthan (MIT CSAIL) led a session on methods of building privacy into data and products: from de-anonymisation attacks that combine multiple datasets to adding random noise to conceal individual records, also known as differential privacy, he walked the students through the concepts along with their real-life applications. From industry, students interacted with Sourabh Gupta (CEO, vdo.ai), who discussed how his company ensures privacy in its data collection, storage and usage. Students also virtually met Prem Ramaswami (Product Head, Sidewalk Labs), who patiently answered many queries on the privacy aspects of smart cities. Overall, it was a fulfilling experience in which the students and I jointly discovered the importance of building privacy-preserving products.