Tuva Falk


The design of AI systems has been largely technology-driven, prioritising efficiency, accuracy, and optimisation over usability, fairness, and transparency. This has led to critical failures, including biased hiring tools, discriminatory facial recognition, and opaque automated decision-making, in which AI reinforces social inequalities rather than addressing them.

Our study (N=66) explores how technical and non-technical stakeholders perceive AI evaluation differently. It identified key correlations between AI design choices and user priorities, highlighting the need for participatory UX evaluation metrics from the start of AI design.

To bridge this gap, we propose a three-pillar framework that embeds user perspectives into AI governance and development.

01 – Early User Involvement

AI systems are often developed using historical data, which reflect existing biases and disproportionately impact marginalised communities. A lack of early stakeholder involvement in AI means technical perspectives often take precedence over human values.

Policy Recommendation:

  • Require participatory design in high-risk AI applications.
  • Prioritise user experience and social impact, moving beyond a purely technical approach.

02 – User-Defined UX Evaluation

The lack of usability metrics in AI assessment creates significant risks, as opaque decision-making prevents users from understanding, questioning, or challenging potentially unjust or discriminatory outcomes, reinforcing power imbalances.

Policy Recommendation:

  • Mandate the inclusion of user-centered evaluation metrics defined together with impacted stakeholders.
  • Require qualitative usability and user testing alongside technical fairness audits.

03 – Value-Sensitive AI Design

When AI development assumes algorithmic solutions are universally applicable, it overlooks the complexity of human values and community norms, increasing the risk of misaligned or even harmful systems.

Policy Recommendation:

  • Require AI projects to conduct value-sensitive impact assessments.
  • Shift from a tech-first to a user-first model, ensuring AI reflects diverse cultural, ethical, and social priorities.

Ensuring meaningful user involvement in AI development is not just an ethical imperative; it is a strategic necessity for creating transparent, accountable, and widely accepted AI systems. This brief has outlined the risks of a purely technical approach to AI design, highlighting how user-driven evaluation metrics can enhance fairness, usability, and trust.

However, translating these insights into actionable policies requires overcoming practical challenges, such as the feasibility of integrating diverse user perspectives and resolving potential conflicts between stakeholders.

This project, conducted as part of my internship at the AI Policy Lab @Umeå University, explores the role of participatory user involvement in AI design and the need for UX evaluation metrics. I extend my gratitude to everyone at the Lab for fostering an inspiring research environment, to Prof. Virginia Dignum for this opportunity, and to Prof. Henry Lopez Vega for his invaluable mentorship. To delve deeper into the findings, read the full Policy Brief and Research Paper through this link. Or connect on LinkedIn: www.linkedin.com/in/tuvafalk
