Mapping the Arts and Humanities Blog

Mapping Openness and AI: Insights from the AI and Openness Workshop for the Arts and Humanities (Part 1)

Feb 4, 2026 | Case Studies, Education & Pedagogy

Following the “AI and Openness” workshop, a collaboration between BRAID Fellows Anna-Maria Sichani, Paula Westenberger and Nick Bryan-Kinns hosted at the Ada Lovelace Institute, this post examines the contested definitions of “openness” in the age of AI. From the risks of commercial “open washing” to the structural inequalities of compute power, we explore how humanities-led governance is essential for moving beyond vague terminology toward genuine reciprocity, transparency, and community control.

At the Mapping the Arts and Humanities Project (MAHP), we usually work with things you can put on a map: research centres, hubs, labs, networks, learned societies, and associations, to name a few. They appear in our dataset as records with names, tags, locations, and infrastructural relationships, giving a structured, quantitative view of the UK’s arts and humanities research landscape. 

One area of growing interest within MAHP is the intersection of AI and the humanities, where universities, galleries, museums, archives, government bodies, and independent collectives are confronting practical questions about how AI could support their work and streamline processes, and about the ethical and governance responsibilities such adoption creates. Mapping can show where activity happens and, to a point, how it is organised, but it cannot speak to how researchers and practitioners experience emerging technologies or the pressures and possibilities they introduce. To better understand AI and the humanities, it is equally important to examine the qualitative side of infrastructures (the working practices, governance norms, conceptual vocabularies, and tools), which brings into view the concerns and emerging practices that shape how practitioners navigate AI in their day-to-day work.

The recent “AI and Openness” workshop, organised through a collaboration between BRAID (Bridging Responsible AI Divides) Fellows Anna-Maria Sichani, Paula Westenberger and Nick Bryan-Kinns, and hosted at the Ada Lovelace Institute, brought together researchers, cultural organisations, artists, technologists, and policy-adjacent practitioners to ask what “open” currently means in AI, who benefits from claims of openness, and what kinds of data governance are needed for communities to use AI systems safely and on their own terms. For MAHP, the workshop functioned as a field note, giving us the chance to listen to how colleagues are wrestling with the tension between the extractive realities of commercial “open washing” and the desire for genuine reciprocity and attribution. The workshop traversed legal and technical barriers that currently hinder public institutions and artists from engaging in safe experimentation and collaborative research, and surfaced proposals for new protocols and interventions that prioritise consent, inclusivity, and community control over unrestricted access.

In this blog post, we focus on one strand of those conversations: how participants defined and contested “openness” in the context of AI, and what a more meaningful, situated notion of openness might look like for arts and humanities infrastructures. Future posts will explore other themes that cut across the day’s discussions: consent and control over data; the role of small, local, specialised models in supporting context-specific research and accessible, safe experimentation; economic justice and reciprocity built into tools and platforms; provenance and archival practice; making AI’s environmental footprint visible; and the forms of critical AI literacy and community governance needed to support more equitable AI futures.

“Openness” and the problem of terminology

Dr. Anna-Maria Sichani (School of Advanced Study, University of London) opened the workshop with the following provocation: in the context of AI, there is no agreed definition of “open.” The term circulates freely – sometimes as a technical or ethical descriptor, other times as a methodology, though more often than not as a marketing device. Both Sichani and roundtable participants agreed that without shared definitions, openness becomes elastic, easily stretched and even strategically misused.

From a humanities perspective, the instability of terminology is familiar territory. Sichani therefore argued for applying a specific interpretive vigilance to AI, inspired by Harold Lasswell’s communication theory: asking who is saying what, how, and why as a first port of call. This gives practitioners a way to orient themselves in a “muddy” landscape where the same terms (open/openness) are used across contexts that differ radically in their methods and intentions, often masking practices that are incompatible with the ethos originally ascribed to “open.”

The confusion over terminology becomes especially visible in the “AI stack,” where terms like “open source,” “open weights,” and “open data” are often conflated despite referring to different practices. As Sichani stressed, these distinctions matter. “Open source” implies a fully transparent system in which the model’s weights, training code, data, and documentation are all accessible. “Open weights” describes a more limited form of openness, where the parameters of a model are released but the training data and code remain closed. At the other end of the spectrum sit “closed models,” where every component (including data provenance and documentation) is restricted. This lack of clarity, according to Sichani, is what allows companies to engage in “open washing”: invoking the language of openness to gain trust or “cultural capital” while keeping crucial components of the AI pipeline closed and proprietary. The result is that practitioners struggle to assess what “open” actually grants, and to whom.
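
To make these distinctions concrete, here is a minimal sketch (purely illustrative, not drawn from any standard or from the workshop materials) of how the components of a model release might map onto the labels Sichani distinguished; the field and function names are our own assumptions.

```python
# Illustrative sketch only: which components of a model release are accessible,
# and what label that combination plausibly earns. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ModelRelease:
    weights_released: bool
    training_code_released: bool
    training_data_documented: bool
    documentation_released: bool

def openness_label(release: ModelRelease) -> str:
    """Map a release to the rough categories discussed above."""
    if all([release.weights_released, release.training_code_released,
            release.training_data_documented, release.documentation_released]):
        return "open source"   # every component of the pipeline is accessible
    if release.weights_released:
        return "open weights"  # parameters released, but data and code withheld
    return "closed"            # nothing substantive is accessible

# A typical "open weights" release: parameters only.
print(openness_label(ModelRelease(True, False, False, False)))  # -> "open weights"
```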

This is why terminology matters at the outset. Before we talk about governance, consent, or creative labour, Sichani foregrounded the importance of understanding how the language of openness is already shaping decisions and expectations. And here the humanities offer the ability to sit with conceptual plurality, to identify contradictions, and to question the authority of any single definition by showing how meaning is actively negotiated across communities and disciplines.

Openness in practice: structural inequalities and power asymmetries

The difficulty, as several contributors emphasised, is that the problem of terminology is much larger, tied to practices across sectors. Once applied to AI systems, the question of “open” becomes entangled with economics, governance, compute resources, and geopolitical interests, among other factors.

The need for a more rigorous definition of openness became a theme throughout the day, encompassing principles such as transparency, reusability, extensibility, and inclusivity, to allow for what one contributor called “pluralist understanding.” Responsible and genuine openness requires regulation and governance to counteract existing power imbalances, specifically addressing the material privilege that openness presupposes, from having a laptop to understanding computational systems, as Dr. Phoenix Perry (Creative Computing Institute, University of the Arts London) put it.

Participants further elaborated that the ability to fully contribute to or benefit from open-source initiatives (such as taking advantage of big open-source datasets) is currently restricted to those with access to vast computational resources (compute) and the legal safety net to navigate copyright regulations. This includes the capacity to absorb risk in a way that public bodies cannot; unlike commercial tech firms, public institutions are often bound by restrictive exceptions such as Section 29A of the Copyright, Designs and Patents Act (CDPA), which legally prevents them from sharing their datasets outside their own networks. As Stephen McConnachie (British Film Institute) related, this restriction can hinder and even block collaboration with academic partners and forces institutions to rely on expensive on-premise hardware, as they cannot legally transfer copyright-protected data to external researchers or cloud environments for analysis.

At the same time, without support for federated infrastructures (as posited by Perry) that allow for local control, open datasets risk paradoxically consolidating power in the hands of the few “hyperscalers” capable of processing them. Currently, the “open” model requires users to upload their data to a central place (usually owned by Big Tech) to be useful, which creates a power imbalance. For Perry, keeping the data “local” (or controlled) is an act of resistance against the current “openness” of the web, which has been weaponised by AI companies to harvest creative work without permission. Federated systems allow models to “visit” the data where it sits, meaning the computation occurs locally on the data holder’s infrastructure and only the learned patterns are transmitted. This would enable institutions like McConnachie’s to collaborate without legally transferring data, and artistic communities to contribute without surrendering ownership to Big Tech.
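
As a rough illustration of the federated pattern Perry described, the sketch below shows a toy federated averaging loop in which each data holder trains on data that never leaves its own infrastructure, and only the updated parameters are shared and averaged; the functions and data are invented for the example and do not correspond to any system discussed at the workshop.

```python
# Toy sketch of federated averaging: the model "visits" each data holder,
# training happens locally, and only learned parameters (never the data)
# are transmitted back and averaged. All names and data are illustrative.
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """Train on data that never leaves the holder's infrastructure."""
    X, y = local_data
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)   # gradient of a simple squared-error loss
    return weights - lr * grad          # only these updated weights are shared

def federated_round(global_weights, data_holders):
    """One round: each holder computes an update locally; the coordinator averages them."""
    local_weights = [local_update(global_weights.copy(), d) for d in data_holders]
    return np.mean(local_weights, axis=0)

# Example: two institutions, each with a private dataset of 3 features.
rng = np.random.default_rng(0)
holders = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
weights = np.zeros(3)
for _ in range(50):
    weights = federated_round(weights, holders)
```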

The critique of centralised, high-resource systems extended to their physical footprint. Speakers such as Perry and Dr. Kaspar Beelen (School of Advanced Study, University of London) argued that the industry’s obsession with massive models overlooks the value of “small” approaches, which are often more efficient and sustainable. Consequently, as other contributors highlighted, true openness must extend to making the environmental impact of AI visible, addressing energy consumption beyond the data centres themselves as well as the global north/south divide in resource extraction. Questions were raised about whether developers are transparent about the minerals required for their hardware, and who bears the brunt of this extraction. A participant suggested that openness could take the form of “labelling,” where every AI inference call includes metadata revealing its energy cost, similar to sustainability labels in other industries.
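
To illustrate what such labelling might look like in practice, here is a hypothetical sketch of an inference wrapper that attaches an estimated energy figure to every response; the estimate_energy_wh function, the assumed power draw, and the field names are placeholders rather than a real measurement scheme.

```python
# Hypothetical sketch of energy "labelling" for inference calls.
# The figures and helper names are placeholders, not real measurements.
from dataclasses import dataclass
import time

@dataclass
class LabelledResponse:
    output: str
    energy_wh: float    # estimated energy for this call, in watt-hours
    duration_s: float
    model_id: str

def estimate_energy_wh(duration_s: float, avg_power_w: float = 300.0) -> float:
    # Placeholder estimate: duration x assumed average hardware power draw.
    return duration_s * avg_power_w / 3600.0

def labelled_inference(model_id: str, prompt: str, run_model) -> LabelledResponse:
    start = time.perf_counter()
    output = run_model(prompt)          # run_model is whatever backend is in use
    duration = time.perf_counter() - start
    return LabelledResponse(output, estimate_energy_wh(duration), duration, model_id)

# Usage with any callable backend, e.g.:
# resp = labelled_inference("demo-model", "hello", run_model=lambda p: p.upper())
```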

The realities behind AI’s rhetoric of openness

The tension between the rhetoric of openness and the realities of AI development blurs the boundaries between public value and private profit, technical transparency and opaque decision-making, and local control and planetary-scale datasets. One contributor observed that the rapid uptake of “open” language in AI often has little to do with public benefit and everything to do with positioning: “Open brings money. It brings credibility. But what’s actually open?” Another reminded the room that openness is never neutral, pointing to the geopolitical asymmetries that shape who can actually benefit from “open” models. “There’s a reason Meta and China are pushing openness right now. Europe is outgunned by an order of magnitude in capital. Openness becomes a strategy of power.”

For cultural heritage institutions, unclear provenance and unclear model design create significant operational risks. A participant drew attention to the fact that for public bodies (such as a national archive), openness is a practical requirement, noting that “it’s very difficult for [a cultural organisation] to use models where there’s real uncertainty about training data.” He further argued that archives representing filmmakers and creators cannot ethically use systems “where rights have potentially been violated or treated unfairly,” making transparency around training data essential for maintaining the trust of the communities they serve. Without disclosures and transparency, institutions risk appearing complicit in the extractive practices they are meant to safeguard against.

Participants also pointed to an “asymmetry of openness” between the creative sector and the technology companies that develop AI systems. As one online contributor remarked, there is a power dynamic where “openness is something that we demand from creative practitioners,” yet when dealing with AI companies, any request for transparency is treated as “asking for something which is not the default.” This double standard (expecting creators to have their work folded into training datasets in the name of openness while allowing commercial actors to withhold the underlying data and its sourcing, code, and model architecture) leaves the arts and humanities sector vulnerable, limiting its ability to identify ethical and legal risks, protect the communities it represents, participate on equal terms in the development and deployment of AI, and scrutinise systems whose internal processes are otherwise invisible to researchers and practitioners.

Humanities-centred governance in AI

What became clear across the workshop discussions is that navigating AI has largely been framed as a matter of technical capacity, when the deeper issues concern governance and legislation – questions about the rights of communities and whose data is being drawn into these systems, often without our knowledge. The most difficult conversations (concerning extractivism, environmental impact, consent, attribution, ownership, legitimacy, and cultural value) sit squarely in the humanities. Participants returned repeatedly to critical AI literacy and situated knowledge, arguing that these are necessary prerequisites for resisting systems where “openness” is used to obscure liability.

Humanities expertise, therefore, helps us render visible what technical descriptions often conceal: who is affected, what kinds of labour are being drawn upon, whose heritage is being commodified, and where power sits in decisions about data and access. The humanities supply the critical vocabulary needed to notice the asymmetries that quantitative models flatten, and to insist that creators, archivists, researchers, and heritage communities retain agency in how their work circulates. Aligning with the “data justice” framework proposed by Picasso et al., which advocates for analysing data systems by paying “particular attention to structural inequality” and the power dynamics that shape them, humanities approaches make legible the structures through which data practices become inequitable or extractive. What’s more, AI governance is often grounded in technical safeguards (audits, model documentation, alignment protocols, compute limits, model cards). Humanities-led thinking needs to precede those interventions, shaping model design and data structures (including decisions about what data is collected and how) and setting the parameters and ethical baselines against which technical systems are built. Humanities thinking also helps articulate the practical conditions under which openness is, in Dr. Marina Markellou’s (University of Groningen) words, “meaningful”: when creators are credited, when institutions can verify provenance, and when communities can say no without losing the ability to participate.
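
As a thought experiment rather than a proposal, the sketch below imagines a model card in which the workshop’s humanities-led questions (provenance, consent, credit, environmental cost, community governance) appear as first-class fields; the structure and values are illustrative assumptions, not an existing standard.

```python
# Hypothetical model card foregrounding the questions raised in the workshop.
# Field names and values are illustrative, not drawn from any real system.
model_card = {
    "model_id": "example-small-heritage-model",
    "training_data": {
        "sources": ["institutional archive (licensed)", "public-domain texts"],
        "provenance_documented": True,
        "consent_basis": "explicit opt-in agreements with rights holders",
    },
    "credit": {
        "creators_attributed": True,
        "attribution_mechanism": "per-item metadata retained through training",
    },
    "environment": {
        "estimated_training_energy_kwh": None,   # to be filled from measured logs
        "hardware_provenance_disclosed": False,
    },
    "governance": {
        "community_review": "annual review with contributing communities",
        "withdrawal_process": "contributors can request removal of their material",
    },
}
```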

For MAHP, the conversation reinforced that mapping the arts and humanities requires attention to the qualitative (the norms, practices, ethics, and frictions) as much as to the quantitative structures that underpin research. The next post in this series moves from terminology to the stakes of consent and control, tracing how creators and open-licensing communities are attempting to establish fairer terms of engagement in the face of rapid AI development.

BRAID is a UK-wide, multi-year research programme to integrate Arts, Humanities and Social Sciences research more fully into the ecosystem of responsible artificial intelligence. It is funded by the Arts and Humanities Research Council (AHRC), part of UK Research and Innovation (UKRI). The programme’s core delivery partners are the University of Edinburgh, the Ada Lovelace Institute and the BBC.

braid.org / @braid__UK
