Nirav Ajmeri · AAMAS
Elessar: Ethics in Norm Aware Agents
We address the problem of designing agents that navigate social norms by selecting ethically appropriate actions. We present Elessar, a framework in which agents aggregate the value preferences of their users and select ethically appropriate actions through multicriteria decision making in different social contexts. Via simulations, seeded with a survey of user values and attitudes, we find that Elessar agents act ethically and are more effective than baseline agents, in terms of (1) exhibiting the Rawlsian property of fairness, and (2) yielding a satisfactory social experience to users. Our results are stable across agent societies of different sizes and connectedness.
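The abstract does not spell out the aggregation or decision rule, but the idea of combining users' value preferences and choosing an action by multicriteria scoring can be sketched as follows. This is a minimal illustration under assumed specifics, not Elessar's actual method: the value names, the averaging aggregation, and the weighted-sum rule are all hypothetical choices.

```python
# Hypothetical sketch: aggregate user value preferences, then pick the
# action scoring highest under a weighted-sum multicriteria rule.
# All names and numbers below are illustrative assumptions.

def aggregate_preferences(user_prefs):
    """Average each value's weight across users (one simple aggregation)."""
    values = {v for prefs in user_prefs for v in prefs}
    n = len(user_prefs)
    return {v: sum(p.get(v, 0.0) for p in user_prefs) / n for v in values}

def select_action(actions, weights):
    """Choose the action whose value profile maximizes the weighted sum."""
    def score(profile):
        return sum(weights.get(v, 0.0) * s for v, s in profile.items())
    return max(actions, key=lambda a: score(actions[a]))

# Two users weight privacy vs. sociability differently.
users = [{"privacy": 0.8, "sociability": 0.2},
         {"privacy": 0.4, "sociability": 0.6}]
# Each candidate action promotes each value to some degree.
actions = {"share_location": {"privacy": 0.1, "sociability": 0.9},
           "stay_silent":    {"privacy": 0.9, "sociability": 0.1}}

weights = aggregate_preferences(users)   # privacy: 0.6, sociability: 0.4
print(select_action(actions, weights))   # → stay_silent (0.58 vs. 0.42)
```

A fairness-oriented variant in the Rawlsian spirit the abstract mentions could instead maximize the worst-off user's score rather than the average.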
Multiple speakers · AAMAS
Ethics in Sociotechnical Systems
The surprising capabilities demonstrated by AI technologies, overlaid on detailed data and fine-grained control, give cause for concern that agents can wield enormous power over human welfare, drawing increasing attention to ethics in AI. Ethics is inherently a multiagent concern: an amalgam of (1) one party's concern for another and (2) a notion of justice. To capture this multiagent conception, this tutorial introduces ethics as a sociotechnical construct. Specifically, we demonstrate how ethics can be modeled and analyzed, and how requirements on ethics (value preferences) can be elicited, in a sociotechnical system (STS). An STS comprises autonomous social entities (principals, i.e., people and organizations), technical entities (agents, who help principals), and resources (e.g., data, services, sensors, and actuators). This tutorial includes three key elements. (1) Specifying a decentralized STS, representing the ethical postures of individual agents as well as the systemic (STS-level) ethical posture. (2) Reasoning about ethics, including how individual agents can select actions that align with the ethical postures of all concerned principals. (3) Eliciting value preferences (which capture ethical requirements) of stakeholders using a value-based negotiation technique. We build upon our earlier tutorials (e.g., at AAMAS 2015 and IJCAI 2016) on engineering decentralized MAS, which were well attended. However, we extend the previous tutorials substantially, incorporating ideas on ethics and values. Attendees will learn the theoretical foundations as well as how to apply those foundations to systematically engineer an ethical STS.
Pradeep Murukannaiah · AAMAS
New Foundations of Ethical Multiagent Systems
Ethics is inherently a multiagent concern. However, research on AI ethics today is dominated by work on individual agents: (1) how an autonomous robot or car may harm or (differentially) benefit people in hypothetical situations (the so-called trolley problems) and (2) how a machine learning algorithm may produce biased decisions or recommendations. The societal framework is largely omitted.
To develop new foundations for ethics in AI, we adopt a sociotechnical stance in which agents (as technical entities) help autonomous social entities, or principals (people and organizations). This multiagent conception of a sociotechnical system (STS) captures how ethical concerns arise in the mutual interactions of multiple stakeholders. These foundations would enable us to realize ethical STSs that incorporate social and technical controls to respect the stated ethical postures of the agents in those STSs. The envisioned foundations require new thinking, along two broad themes, on how to realize (1) an STS that reflects its stakeholders' values and (2) individual agents that function effectively in such an STS.