Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems

Authors:

  • Joana Varon
  • Paz Peña

Description

Amid the hype around A.I., we are observing a world in which States increasingly adopt algorithmic decision-making systems, along with narratives that portray them as a magic wand to “solve” social, economic, environmental, and political problems. In practice, instead of fulfilling that promise, the so-called Digital Welfare States are likely to be deploying oppressive algorithms that expand surveillance of the poor and vulnerable; automate inequalities; are racist and patriarchal by design; further practices of digital colonialism, in which data and mineral extractivism feed Big Tech businesses from the Global North; and reinforce neoliberal practices that progressively erode social security. While much has been discussed about “ethical,” “fair,” or “human-centered” A.I., particularly with a focus on transparency, accountability, and data protection, these approaches fail to address the overall picture.

To deepen critical thinking and question such trends, this article summarizes some findings of the notmy.ai project through case-based analysis of A.I. projects from Latin America that are likely to harm gender equality and its intersections with race, class, sexuality, territoriality, and more. It seeks to contribute to the development of feminist frameworks for questioning algorithmic decision-making systems deployed by the public sector. The universalistic approach of human rights frameworks provides important goals for humanity to pursue, but when we look at the present, we cannot ignore existing power relations that maintain historical patterns of oppression and domination. Rights are not universally accessed.

Feminist theories and practices are important tools for acknowledging the political structures behind the deployment of technologies and are therefore an important framework for questioning them. For this reason, they can serve as a powerful instrument for imagining other technologies and other worlds, grounded in collective and more democratic responses to core societal challenges and focused on equity and social-environmental justice.

Citations

Joana Varon and Paz Peña. October 17, 2022. “Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems.” Cambridge, MA: Harvard Kennedy School.