In Development: Mapping the Humanities and AI

Humanities scholars across the UK are shaping critical thinking and practical action around AI. But the work is scattered across institutions and websites, making connections harder to spot. Mapping the Humanities and AI is a map in development that will visualise this activity, highlight networks and gaps, and support collaboration across research, policy, and practice. The project is co-led by Dr Anna-Maria Sichani and Dr Shani Evenstein Sigalov.
Recent work in the humanities emphasises that the confrontation between AI and “the humanistic disciplines emerges as a fertile ground for both conceptual exploration and critical inquiry.” Thinking with AI: Machine Learning the Humanities (2025) positions Critical AI Studies as a field that “aims to dissect the intricate web of relationships between AI and the socio-cultural milieu it inhabits,” recognising that AI is “not a neutral tool but rather a socio-technical system deeply embedded in and reflective of the values, biases, and power structures of the society that uses it.” To better understand AI, then, one needs to look at the social, economic, historical, and institutional contexts in which it is developed and deployed.
Humanities scholarship plays a crucial role in examining AI, acting as a critical corrective and providing the necessary scaffolding for ethical governance. As Thinking with AI argues, Critical AI Studies can help dissect “the conceptual and philosophical assumptions that underlie the design and use of machine learning systems,” demystifying the notion that AI is built on datasets that are “objective and neutral representations of the world.” Within the arts and humanities, questions of ethics, interpretation, responsibility, and meaning become central to AI research.
As current case studies have shown, these research questions have generated actionable insights for government bodies, cultural institutions, and industry leaders. We are currently seeing a proliferation of humanities-led interventions across the UK, shaping policy, regulation, copyright reform, and technical development. To capture this rapidly evolving terrain, we are working on the Mapping the Humanities and AI project. This new initiative will visualise the breadth of humanities-led infrastructures that are already steering the future of AI.
The Humanities engage AI not only as a powerful analytical tool – enabling work at scale and with unprecedented depth – but also as a sociocultural and political system requiring critical interpretation. By working with AI to unlock complex sources and on AI to interrogate its underlying datasets, biases and architectures, the Humanities help shape more ethical, inclusive, fair and human-centred technological futures.
Anna-Maria Sichani
This dual engagement, both critical and infrastructural, is increasingly visible across the field. As Shani Evenstein Sigalov observes:
Across academia and beyond, Humanities scholars are actively shaping the infrastructures, governance models, and knowledge assumptions that underpin contemporary AI. This work spans research hubs, cultural institutions, policy contexts, and community-led initiatives, yet it often remains dispersed. What we lack is not activity, but connective tissue. Mapping this landscape helps us see how these efforts relate, where they cluster, and where collaboration can be strengthened.
From Theory to Practice
As we will show below, there is already significant work underway that bridges the gap between technical capability and social responsibility, transforming theory into strategy for equitable AI technologies. But without a map, a unified view of how these interventions connect locally and nationally, we risk duplicating efforts or missing vital collaborations. We need a clear picture of our existing infrastructure to determine what is required for the future.
The urgency of such a mapping exercise is underscored by the breadth of humanities-led interventions already shaping the field. Some examples include the “Doing AI Differently” white paper, which argues that closing the gap between technical metrics and real-world success requires “humanities upstream,” defined in the policy brief as “integrating interpretive expertise in core AI development.” Shifting towards a humanities-based approach is essential for achieving “interpretive depth,” which is explained as the capacity to “understand context, cultural nuance, and multiple valid perspectives – rather than producing single ‘correct’ answers.”
Another key intervention is the Fairwork “Policy Brief: Work, Regulation, and AI Governance in the UK” (2022). Cant et al. critique the UK’s “patchwork” approach to regulation, discussing the dangers of allowing the tech sector to self-regulate. The policy brief contends that the current reliance on voluntary ethics washing is insufficient to protect workers from the risks of algorithmic management. Instead, the authors call for AI-specific legislation grounded in human rights and civil liberties, contending that fair governance requires social dialogue. The brief demonstrates how humanities-led research can translate ethical principles into real-world action, such as empowering trade unions and increasing resources for bodies like the Information Commissioner’s Office.
The “BRAID Researchers’ Response to the UK AI & Copyright Consultation” (2025) exemplifies how humanities scholarship is directly intervening in legislative debates. The response rejects the government’s proposed “opt-out” model for text and data mining, asserting that it places an unfair administrative burden on creators and risks damaging the UK’s “national identity and soft power.” The researchers further maintain that an “opt-out” system compromises the economic viability of the creative industries and fails to address the specific needs of Galleries, Libraries, Archives, and Museums (GLAM), where open access policies are being exploited by commercial entities without consent. The response advocates for an “opt-in” approach underpinned by mandatory transparency. The report is a case in point as to how arts- and humanities-led research can challenge technical standards (such as “rights reservation” protocols) to prioritise the agency and remuneration of human creators.
Meanwhile, the University of London’s report “AI and/for Humanities Research” (2025) emphasises the importance of “informed decision-making” regarding the rapidly evolving landscape, aiming to equip researchers with practical “guardrails” to engage actively and responsively with AI. Moving beyond the view that AI is primarily a productivity tool, the report advocates for “AI-as-method,” where technology is directly integrated into the research process, and “AI-as-object,” where the models become subjects of critical enquiry. Finally, the report highlights the ethical and environmental limitations of closed, proprietary systems and argues for a pivot toward open-source, bespoke models that allow scholars to inspect training data and retain ownership of their outputs. By framing adoption around core questions of transparency, bias, and reliability, the report encourages the humanities to lead the way in developing ethical, research-driven AI ecosystems. Alongside this report, another valuable tool is the University’s interactive resource, written by Anna-Maria Sichani and Kaspar Beelen, AI and Humanities Research, which provides a practical primer for scholars looking to navigate AI technologies responsibly.
Similarly, the edited collection Navigating Artificial Intelligence for Cultural Heritage (2025) offers a roadmap for incorporating humanities values into technical infrastructure, such as the automated systems used to classify and search digital heritage collections. The volume posits that the “sheer quantity” of born-digital records requires a revolution in archival practice, one that automation alone cannot deliver without “expert knowledge” to manage risk and bias. Case studies such as the “Legacy of Slavery” project illustrate how AI can be repurposed to “reassert erased memory.” The book reinforces the call for a “cyclical rather than linear” view of technological progress, whereby AI can “support core activities that extend our imperfect, eminently human judgement.”
Taken together, these examples demonstrate how humanities-led work moves from critique to infrastructure, from theory to practice. In this context, the AI-BRIDGES initiative, hosted at the Digital Humanities Research Hub, School of Advanced Study, University of London, brings together cultural and academic institutions, open knowledge infrastructures, technologists and funders. It explores how institutional data and Generative AI can be aligned in ways that are responsible, sustainable, and publicly beneficial. Operating as both a research project and a convening space, it examines how Linked Open Data infrastructures and Generative AI systems can be more meaningfully connected, while remaining attentive to governance, sustainability, and public value. Rather than advancing a single technical solution, the initiative combines empirical research, collaborative experimentation, and open dialogue across institutions and communities, exemplifying the kind of cross-sector, infrastructural work that this mapping project seeks to make visible.
Visualising Humanities and AI Infrastructure
Based on the above and a preliminary mapping exercise, the map will show the research collectives that are working collaboratively to address urgent questions surrounding AI development, governance, and use. This work spans a wide range of research interests, including AI and literature, history, ethics, law, language, and creativity, and often operates across institutional and disciplinary boundaries.
Mapping the Humanities and AI endeavours to make such activity visible. The project will map the breadth of humanities-led infrastructures that are already shaping the development, governance, legislation, and application of AI. It does so to support researchers, policymakers, funders, early-career scholars, and wider publics seeking to understand how humanities expertise is engaging with AI today. The map also captures the varying degrees to which infrastructures engage with AI, from more peripheral connections to forms of work in which AI is central to the organisation.
Mapping this evolving ecosystem is not only about visibility, but about coordination. When we can see how academic research, cultural institutions, policy initiatives, and community-led infrastructures intersect, we can move from parallel innovation toward shared capacity-building. A collective view strengthens collaboration, reduces duplication, and reinforces the role of academia in shaping responsible, public-interest AI.
Shani Evenstein Sigalov
Moreover, we are building the resource so that it is able to capture the full spectrum of humanities-oriented infrastructures working with AI, including those that may not primarily define themselves as part of the humanities but nonetheless draw on humanities perspectives through their themes, memberships, collaborations, and project activity. The map will highlight university-based centres, research hubs, learned societies, associations, and independent research organisations, to mention a few, that approach AI as a methodological, ethical, social, or creative question.
Mapping the current landscape of Humanities and AI is essential if we are to understand not only where innovation is emerging, but how critical, sociocultural, and ethical perspectives around our scholarship and our lives in general – from funding priorities to evaluation frameworks – are shaping the development of these intelligent systems. Without mapping, our understanding and all these diverse contributions risk remaining invisible; with it, strategic coordination and investment, meaningful collaboration and responsible innovation become possible.
Anna-Maria Sichani
Finally, the map will support strategic reflection and capacity building by showing where AI-Humanities work is taking place, how themes cluster, who is working with whom in collaborative networks, and where opportunities for early-career researchers are emerging. It will also draw attention to gaps in the landscape, indicating where AI-Humanities activity remains limited or underrepresented.
