Research overview

My research so far has focused on how decision-making is affected by adversarial behaviour in a wide variety of settings arising in engineering as well as socio-economic scenarios. Every decision is based on acquired information or data, and hence the effectiveness of an action crucially depends on the accuracy of the information received. The following research projects study precisely how decision-making is affected by imperfect information exchange arising from the inherently strategic or adversarial behaviour of the interacting entities.

For a more up-to-date list of my publications, visit my Publications page.

A game-theoretic interpretation to fraud detection in electoral process

Numerous countries conduct elections in which eligible citizens record their votes on electronic voting machines. These machines are not without their share of vulnerabilities and, being digital devices, carry a persistent threat of adversarial manipulation. Can a detector detect fraud in the electoral process just by looking at the voter data? Can it deter fraudulent manipulation by being less conservative about calling fraud while, at the same time, not losing its credibility?

We study the above questions in a hypothesis-testing-like framework where a detector wishes to detect fraud by observing electoral data that may be manipulated by an adversary. We model this setting as a game between the detector and the adversary and show that if the detector is too cautious about calling fraud, then the adversary can manipulate without being detected and can achieve a higher posterior probability of 'winning' than its prior winning probability. This level of cautiousness has to be dropped in order to make manipulation futile for the adversary, thereby deterring it.
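The interplay between the detector's cautiousness and the adversary's room to manipulate can be illustrated with a toy sketch. All numbers, function names, and the margin-threshold detection rule below are hypothetical simplifications for illustration, not the actual game-theoretic model used in the work.

```python
# Toy sketch (hypothetical rule): the detector flags fraud whenever the
# reported winning margin exceeds a plausibility threshold; the adversary
# flips just enough votes to win while staying under that threshold.

def detector_flags(reported_margin, threshold):
    """Detector calls fraud iff the reported margin looks implausible."""
    return abs(reported_margin) > threshold

def adversary_manipulates(true_margin, threshold):
    """Adversary's best response: report the smallest winning margin that
    evades detection; returns the reported margin, or None if every
    winning report would be flagged (manipulation is futile)."""
    if true_margin > 0:
        return true_margin      # adversary's candidate already wins
    target = 1                  # smallest margin that flips the outcome
    if target <= threshold:
        return target           # win without triggering the detector
    return None                 # deterred: any winning report is flagged

# A cautious detector (large threshold) lets manipulation succeed:
print(adversary_manipulates(true_margin=-50, threshold=100))  # 1
# An aggressive detector (threshold 0) makes manipulation futile:
print(adversary_manipulates(true_margin=-50, threshold=0))    # None
```

The sketch captures the qualitative message: a detector that is too reluctant to call fraud leaves a safe manipulation region for the adversary, and shrinking that region is exactly what deters it.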

Related publications

Extracting information from strategic agents

How should a health inspector at an airport/train station design questionnaires to recover the maximum number of travel histories of passengers? How should a controller 'talk' to a sensor that is compromised by an adversary in order to acquire maximum information? In all of the above situations, a decision-making entity (health inspector, controller) makes a decision (screen or not screen, actuation commands) based on information provided by an external agent (passengers, compromised sensors). Evidently, the agent may wish to influence the decision of the entity in its favour and may therefore misreport its information. How should the entity then communicate with the agent? Is there a limit to the amount of information that can be acquired?

We explore these questions in an abstract setting where an uninformed receiver wishes to extract information from an informed but strategic sender. We pose this problem as a game from the perspective of the receiver. The primary contribution of this work is to show that there are fundamental limits on the amount of information that can be extracted from such a misreporting sender. These limits arise solely from the strategic nature of the sender and persist even over a noise-free communication medium. Our analysis blends tools from two distinct fields: mechanism design (for modelling and the interpretation of results) and information theory (for characterizing and quantifying the fundamental limits).
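A toy example conveys why such limits exist even over a perfect channel. The sender types, the inflation bias, and the best-response rule below are hypothetical choices made purely for illustration; the point is that types inducing the same report are indistinguishable to the receiver, so the extractable information is capped below the full entropy of the type.

```python
import math
from collections import defaultdict

# Toy sketch (hypothetical utilities): a sender of type t in {0,...,9}
# reports so as to appear BIAS units larger than it is. The receiver
# can only distinguish types that induce different reports, so the
# information it extracts is bounded by the entropy of the report,
# even though the channel itself is noise-free.

TYPES = list(range(10))
BIAS = 3  # sender's preferred inflation of its type

def best_report(t):
    """Sender's self-interested report: inflate by BIAS, capped at max type."""
    return min(t + BIAS, max(TYPES))

# Types inducing the same report form one indistinguishable group.
groups = defaultdict(list)
for t in TYPES:
    groups[best_report(t)].append(t)

# With a uniform prior over types, the extracted information equals the
# entropy of the report distribution.
n = len(TYPES)
extracted = -sum((len(g) / n) * math.log2(len(g) / n)
                 for g in groups.values())
full = math.log2(n)
print(f"extractable ~ {extracted:.2f} bits < log2(10) ~ {full:.2f} bits")
```

Here the top types all pool on the maximum report, so several bits of the sender's private information are irrecoverable no matter how the receiver decodes: a loss caused entirely by strategic misreporting, not by channel noise.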

Related publications

Adversarial communication as a zero-sum game

Related publications