On January 29 and 30, 2024, a small but international group of scholars, civil society representatives, and practitioners came together at the Berlin Social Science Center for “Shifting AI Controversies – Prompts, Provocations, and Problematisations for Society-Centered AI,” a conference on the controversies, imaginaries, developments, and policy solutions surrounding AI. With controversies as the starting point, the recurring call to action was for more nuance and a greater diversity of voices in media coverage.
Shaping 21st Century AI – or simply Shaping AI – is a multi-disciplinary, multi-national collaboration in the social sciences. Researchers from Germany, the UK, Canada, and France worked together for three years (2021–2024) to trace how global discourse trajectories and controversies around artificial intelligence played out in their respective countries between 2012 and 2021. Funded by the European Open Research Area, the Shaping AI team aimed to understand how AI is constructed across country borders and – through its focus on media, public policy, research, and engagement / participation – across different spheres.
At the two-day conference in Berlin, it was time for these efforts to find willing listeners in the academic community. Besides the Shaping AI team, researchers from the Netherlands, Finland, and Chile, amongst others, presented their findings on the implications of AI implementations in various domains. In a mix of presentations, panels, and keynotes, the topic of artificial intelligence was illuminated from different angles, such as labour, education, public participation, art, the economy, and regulation.
To close off the first round of presentations, the national teams presented insights from their respective spheres of interest. Canada’s representatives from the NENIC Lab at INRS Montreal and the Algorithmic Media Observatory at Concordia University first presented their findings on how news about AI is made, concluding that AI and tech news tend to be techno-optimistic, delivered from a business perspective, and reliant on a small number of industry representatives and the computer scientists they employ. Focusing on the controversies forming within research communities, the UK team from the Centre for Interdisciplinary Methodologies at the University of Warwick used expert consultation and the reintroduction of standpoints into controversy mapping to identify concerns and disagreements amongst AI experts (those with a stake in the issue who are committed to genuine debate) regarding the socio-technical underpinnings and techno-economic structures surrounding AI, concerns that contrast with the focus of regulatory agendas on domain-specific AI. The German team at the Humboldt Institute for Internet and Society, which focused on policy and regulation controversies, found in their qualitative and quantitative analysis of policy documents and interviews that, most interestingly, banning AI technologies did not appear to be under consideration, while AI implementation is treated as a given. Regulation is instead framed as a competitive advantage, with recurring patterns across policy documents and discussions: the inevitability of implementation, national distinctiveness, multi-level governance, the EU as benchmark, and a lack of civil society actors. Lastly, the French team from the Medialab at SciencesPo presented their findings on the engagement and participatory sphere of AI controversies. By engaging with AI practitioners and widening the circle of involved parties, the researchers identified the values and concerns of 29 AI practitioners, highlighting concerns about the conditions for good experimentation practices as well as democratic concerns centred on power asymmetries. Overall, the results presented by the Shaping AI team highlight the myriad perspectives and discussions that have surrounded AI for over ten years now. By widening the scope of inquiry and adding the “quieter” perspectives of different types of AI experts and practitioners, the project identifies discourses and patterns beyond news media and policy reports.
The first conference day culminated in an evening panel at the Museum of Communication Berlin titled “Not my Existential Risk! The Politics of Controversy in an Age of AI.” Leaving the academic sphere behind, this open event invited members of civil society organisations and journalists to discuss and debate prompts that had emerged from the results of the Shaping AI project. Under the moderation of Noortje Marres (University of Warwick), Matthias Spielkamp (AlgorithmWatch), Brenda McPhail (Canadian Civil Liberties Association), Mark Scott (Politico), and Gloria González Fuster (Vrije Universiteit Brussel) discussed issues such as whether media reporting on AI adequately reflects its controversiality, and whether public policy puts economic prosperity over addressing AI’s harms and risks. As the discussion went on, it became apparent that the need for nuance in our conversations about AI extends beyond the halls of academia, not least in light of the power distributions surrounding the technology. The representatives of civil society advocated for nuance, for understanding AI through stories that are multiple, deep, broad, and plural, since AI affects many areas of our lives, and urged engagement and collaboration between academia and civil society. The journalistic perspective offered a different view: a lack of tech literacy in journalism leads journalists to chase headlines and AI controversies along a tech hype cycle, when the focus should instead be on the power relations surrounding AI and on who has access to media outlets, in most cases AI and tech entrepreneurs.
In the end, the Shaping AI team said it themselves: while they investigated the controversies surrounding AI, and their closures, over a ten-year period and across country borders, the project itself has no closure, as new discussions keep arising around the continuous controversy called AI. The conference presented a multitude of views and discussions on AI from different groups, within different contexts, and from different countries. Now that these have been identified and are continuously being investigated, it is time to organise ourselves: to follow the call for collaboration from civil society representatives, and to create spaces for exchange between journalists, AI practitioners, and researchers in order to break hype cycles and address the lack of tech literacy. As the study of and engagement with news media, policy makers, computer scientists, and AI practitioners has shown, to make sense of the controversies and concerns surrounding AI, we need to break up disciplinary silos and open up to other domains.