Our approach to artificial intelligence
There are areas of publishing that require original creative thought, human judgement, taste, emotional intelligence and lived experience. In these areas, we will not use AI.
Under our publishing contracts, authors confirm that all creative text is their own original work. AI tools may be used for research or factual context, but not to draft, rewrite or generate the prose itself.
Stories are created by people.
Books are shaped by people.
Voices are performed by people.
We believe that creative originality, artistic risk, and human collaboration are at the heart of publishing. That is not something we are willing to outsource to a machine.
In other areas of our work, we may use AI tools to support – but not replace – human decision-making.
In short: we may use AI to assist with organisation and efficiency.
We do not use AI to replace creative judgement, decision-making or original creative work.
All outputs generated with AI assistance are reviewed and approved by a member of the Pellerin team.
We opt out of machine-learning training on the platforms and tools we use, to protect our authors’ intellectual property and our unpublished materials.
We do not knowingly upload manuscripts or creative work into systems that retain, repurpose or use submitted content for model training.
Unpublished manuscripts are never entered into public AI tools.
Our team does not use free, public AI platforms for company work. Where AI tools are used, they are secure, paid-for systems with appropriate data protections in place.
As a publisher, we have a responsibility to safeguard the originality, copyright and commercial value of the books entrusted to us. That includes being cautious about how new technologies interact with creative work.
We support the principle that authors should control how their work is used – including in relation to AI training and data scraping.
We also recognise that AI tools are developing rapidly. Where appropriate, we will review new tools carefully before adopting them, considering their ethical, legal and environmental implications.
Our responsibility is to stay informed, ask questions, and consider not only efficiency but ethics, authorship, copyright, consent and environmental impact.
Convenience alone will never be enough reason to cross a line we believe matters.
This AI Policy will be reviewed regularly and updated where necessary. As the technology and the industry change, our commitment remains the same:
We believe publishing is, at its heart, a human endeavour.
And we intend to keep it that way.
If you have questions about accessibility, or need help accessing one of our titles, we’re always happy to hear from you.
Please e-mail us.
If there’s something we can do better, we want to hear about it.
