Today’s society comprises humans living in a complex, interconnected world that is intertwined with a variety of computing, sensing, and communicating devices. Human-generated data is being recorded at unprecedented rates and scales. Knowledge can no longer be derived from such data through simple manual procedures; it requires sophisticated AI algorithms. Such powerful algorithms, capable of learning from large-scale human-generated data, increasingly control various aspects of modern society: social interactions (social media and search platforms, newsfeeds), economics (sharing platforms, blockchains, banking), learning, and governance (judgments, policing, voting). While these algorithms have tremendous potential to change our lives for the better, their ability to mimic and nudge human behavior also gives them the potential to discriminate, reinforce societal prejudices, violate privacy, polarize opinions, and influence political processes. As a consequence, trust in such systems, from both a societal and a legal standpoint, is plummeting. Further, the lack of understanding of how humans generate data, how these algorithms extract knowledge from it, and how they influence human behavior has become a major bottleneck in designing mechanisms to regulate modern society.
Nisheeth Vishnoi and his collaborators work on the foundations and design of context-aware decision-making methods to mitigate explicit and implicit biases, control polarization, improve diversity, and ensure privacy.