The Normative Imperative of Social Value Alignment in AI Systems: From Individual Preferences to Democratic Technology Design
Full Citation: Gabriel, I., & Ghazavi, V. (Forthcoming). The Challenge of Value Alignment: From Fairer Algorithms to AI Safety. In The Oxford Handbook of Digital Ethics. Oxford University Press.
Published: 30/01/2026
Quote
Ghazouani, M. (2026) "The Normative Imperative of Social Value Alignment in AI Systems: From Individual Preferences to Democratic Technology Design," The Atlaris Journal.
Abstract
This editorial analysis examines the normative framework proposed by Gabriel and Ghazavi for the "social value alignment" of artificial intelligence systems deployed in pluralistic societies. Critiquing the insufficiency of predominant "one-to-one" technical alignment methodologies and preference-aggregation models, the text argues that current approaches fail to address the legitimacy deficits inherent in multi-stakeholder deployment contexts. Drawing on deliberative democratic theory, the work posits that consequential AI systems must embody normative principles that receive wide endorsement from those whose lives are powerfully affected by their operation. This constitutive claim shifts the alignment discourse from technical safety optimization toward a strict requirement of political legitimacy, distinguishing individual user satisfaction from the ethical obligations owed to diverse communities. Ultimately, this foundational critique offers a rigorous benchmark for evaluating the sociotechnical validity of emerging AI governance structures and democratic technology design.
Venue: Science