WRITING TODAY: AI, Writing and Trust
Published: 23 January 2026
Benjamin Field is the Co-Founder and CEO of Deep Fusion Films and a leading voice in AI ethics across the creative industries. He advises unions, industry bodies and policymakers on how emerging technologies intersect with creative rights, and has contributed to guidance with organisations such as PACT, Equity and government working groups.
We asked him to explore the rapidly evolving landscape of AI, and what its growing presence means for writers, production teams and anyone working in content production.
This post is part of our Writing Today strand, where we invite guest contributors to share the issues shaping the industry from their perspective. The views expressed here are the author’s own and do not represent those of the BBC.
For many writers I’ve spoken to, AI does not feel like a neutral technological shift. It feels personal. Writing is not just ‘output’; it is voice, labour, identity and authorship. When AI enters that space without rules, it threatens all four at once. The unease writers feel is rational, and it deserves more than vague reassurance or corporate optimism. Right now it can feel like the Wild West out there, with no clarity and no support.
My perspective on AI and creativity has been shaped less by pure enthusiasm for shiny new AI tools and more by the responsibility of helping others draw workable boundaries in our sector whilst navigating an industry in constant flux. Alongside producing television and documentary content, I spend a great deal of my time on the ethics of AI, advising industry bodies, unions and policymakers on how it intersects with creative rights. That includes contributing to guidance developed with PACT, working in dialogue with Equity, and feeding into government discussions around protections and opportunities for creatives as AI becomes embedded in both traditional and emerging production workflows.
These conversations are not purely theoretical. They are driven by practical questions creatives across the sector are asking. Who owns work generated with AI assistance? What happens if training data is scraped without consent? How transparent should producers be with contributors and audiences? And where does accountability sit when things go wrong?
One thing we should be clear about, though, is what we are trying to govern and who we can realistically influence. Setting those parameters for ourselves is just as important as setting anything else. In my view, we have to focus on those who work within the commercial media world; we cannot police what people making content for social media do. I focus on engagements with clear commercial contracts, whether that is work created ‘pre-contract’, on spec and intended for commercial release, or under a traditional agreement. Anything that lives outside of that is tricky to value, and trickier still to govern. The PACT principles were born with those parameters in mind, and the principles that emerged from the work I did with them are deliberately clear-eyed.
In no particular order:
- Copyright must be respected.
- Human creativity must be valued and cannot be replaced.
- Responsibility for content remains with producers.
- Transparency is essential.
- Bias and data misuse must be actively mitigated.
These principles exist because without them, trust between creatives and the industry erodes very quickly.
At Deep Fusion Films, our publicly available Generative AI policy is an attempt to apply those shared principles in real production environments. It is not a manifesto and it is not a claim to moral leadership. It is a working document shaped by legal reality, union dialogue and the lived concerns of writers, contributors and commissioners.
One of the most important distinctions we make is between assistance and authorship. AI can support thinking. It can help organise research, analyse patterns, or stress-test narrative structures. What it cannot do is replace a writer’s voice or intent. If a writer cannot reasonably say “this is my work”, then the tool has moved beyond assistance and into substitution, and that is a line we do not cross.
Consent is another area where clarity matters. AI systems are trained on data, and data comes from people. If a writer’s work, style or archive material is involved in training or generation, permission must be explicit. No one should discover after the fact that their labour has been absorbed into a system they did not agree to participate in. This applies just as much to contributors and performers as it does to writers.
Transparency is often framed as a burden, but in practice it is protective. If AI plays a meaningful role in development or production, that fact should be disclosed plainly. Not buried in contracts or legal language, but stated in a way collaborators can understand. Writers do not need spectacle or slogans. They need honesty about how work is made and who is accountable for it.
Provenance is another concern that writers are right to raise. Many generative models have been trained on scraped material with unclear rights histories. Our policy restricts us to tools and datasets where training sources, licensing and ownership are defensible. This is not about technological purity. It is about professional responsibility and respect for creative labour.
There is also a misconception that ethical frameworks slow innovation. In practice, the opposite is often true. Clear rules reduce legal risk, prevent reputational damage, and stop bad ideas progressing too far before someone asks the obvious question. More importantly, they create conditions where writers can engage with new tools without feeling that their role is being quietly undermined.
AI will continue to evolve, whether the industry is ready or not. Writers will continue to define tone, meaning and cultural value. The real choice is not between technology and creativity, but between systems built with writers in mind and systems built around them. And those systems are being built right now. Believe it or not, the government and large-scale institutions like the CoSTAR National Lab are workshopping real-world responses to the main challenges the creative sector is facing, and the results are creeping through into the real world. C2PA (the Coalition for Content Provenance and Authenticity), whose provenance standard helps safeguard works against being scraped by AI ‘bots’, is being adopted by Adobe and others; digital watermarking is coming in, and we hope digital fingerprinting is not too far away either. Legal systems are being challenged and clarity is coming, one judgement at a time.
The future is not dark and gloomy. Yes, we are in a muddle and it can seem scary, but there is a lot of work being done to protect our fellow creatives.
To find out more about Deep Fusion Films and to download their AI white paper, visit their website.
