Generative AI tools such as ChatGPT and Claude have been a hot topic in writing and publishing over the past few years. As a (retired) software professional, life-long science-fiction fan, and nascent science-fiction writer, I find the technology kind of intriguing: recent advances look like they could finally make headway on passing the Turing Test (although exactly how to test artificial intelligence for “intelligence” remains a matter of debate (Jones, 2025b)). Writers and publishers, however, are bumping into issues with using these tools for their creative work, and that is my focus here.
Keep reading for the long version, or head to the TL;DR.
The Downsides
You can skip ahead to my hot take on the downsides or keep reading for the details.
Publishers Getting Swamped
I first became aware of generative AI as a problem at some science-fiction convention panels, in particular from Neil Clarke’s account of contending with the explosion of AI-generated story submissions at his magazine. “Generated submissions were increasing daily and on-course to equal or outnumber human-written works by the end of the month. That workload was simply unsustainable” (Clarke, 2023). “Make money with ChatGPT” scams were fueling the explosion. “Quantity is easy,” Clarke says, but the quality was horrible.
In a sense, [these models] are trained to be supercharged autocomplete machines.
(Heaven, 2024)
Copyright Questions
A still-unresolved legal question is whether AI training data was acquired in violation of copyright (Panettieri, 2025). Whether the data was acquired legally and ethically or not, these “supercharged autocomplete machines” (Heaven, 2024) are leveraging that mass of previously published material. Many (such as Clarke) therefore view anything the tools produce as essentially plagiarized, and so not publishable as the creative work of the person using them.
Reliability
AI tools have historically shown significant bias of one kind or another. While this might be put down to flaws in poorly curated training data, Dastin (2018) reports a case (Amazon’s experimental recruiting tool, which showed bias against women) where the unintentional bias proved difficult to eradicate and ultimately caused the tool to be abandoned.
Some have suggested using AI for fact-checking or research (Lujan, 2024; Weiland, 2025). But AI is not guaranteed to produce truthful results, famously going so far as to invent non-existent case-law citations in legal briefs (Strom, 2023). Generative AI tools do not “understand” anything; counter-factual generated answers are often called “hallucinations,” and everything AI produces must still be fact-checked (Jones, 2025a).
[L]arge language models are still struggling to tell the truth, the whole truth and nothing but the truth.
(Jones, 2025a)
In fact, it seems that everything generative AI produces could qualify as a hallucination because it is derived solely from the training data, not from any genuine understanding of truth or the real world. It’s just that some (perhaps most) of these hallucinations are close enough to reality that we are misled into believing the tools produce true answers. Thus, we become upset when our misunderstanding trips us up, and then we blame the tool for “hallucinating” when in fact that’s all it ever does.
Resource Consumption
Invoking AI has huge water demands (Danelski, 2023) and energy costs (Hao, 2019) that are not obvious to the end user, with consequent damage to the environment. The Chinese-built DeepSeek model was claimed to have vastly lower energy costs, but the savings appear to be less dramatic than originally thought (O’Donnell, 2025).
Cognitive Issues
While AI tools appear to be helpful creatively (Heaven, 2025), especially to less-creative writers (Williams, 2024), the results still show a degree of sameness that makes them less interesting (Doshi & Hauser, 2024). Finally, at least one study shows clear negative cognitive effects: people come to rely on the tools and fail to develop (and may even impair) their own creativity (Kosmyna et al., 2025).
Downsides: Summary
I view the creativity-boost and interestingness research results as fairly natural consequences of the autocompletion nature of generative AI. Autocompletion predicts what is likely to come next, based on what has gone before, so it is unlikely to produce anything surprising or genuinely novel and interesting; it will produce something average. That can help less-creative users, but will be unhelpful to those with average or higher creativity.
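To make the “average output” point concrete, here is a toy sketch of autocompletion (my own illustration, not how any production model works; real LLMs predict tokens with neural networks, but the likeliest-next-continuation principle is the same). A greedy bigram model can only ever replay the most common patterns in its training text:

```python
# A toy "autocomplete": a greedy bigram model that always emits the most
# frequent next word seen in its (tiny, made-up) training text.
from collections import Counter, defaultdict

training_text = (
    "the ship sailed into the night . "
    "the ship sailed into the storm . "
    "the ship sailed into the night ."
).split()

# Tally which word follows which.
next_words = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    next_words[prev][nxt] += 1

def autocomplete(word: str, length: int = 6) -> str:
    out = [word]
    for _ in range(length):
        followers = next_words.get(out[-1])
        if not followers:
            break
        # Greedily pick the single most likely continuation.
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
# -> "the ship sailed into the ship sailed"
# The rarer continuations ("night", "storm") never appear: the model
# just loops on its most common pattern.
```

Real models sample rather than always taking the top choice, which adds variety, but the output is still drawn from what was statistically typical in the training data.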
Beyond that, the autocompletion nature of these AI tools looks only at the form of the output; it is not designed to assess the truthfulness or reliability of what it produces.
In sum, the data sources are legally questionable, the output quality is mediocre and often contrary to fact, the environmental cost is serious, and using AI instead of your own mind for creative work will numb your creativity.
The Upsides
The debate, then, becomes whether writers can use AI ethically at all. The answer, from some people, is a limited yes.
Things AI Can Actually Be Good For
Weiland (2025) suggests a list of AI-amenable tasks:
- Relieving the mental stress of busy to-do lists.
- Helping with administrative duties, copywriting, ads, social media, etc.
- Helping with research.
- Helping with search engine optimization.
I would take issue with the ethics of using AI for “copywriting, ads, social media,” as these are not obviously different from other creative work; Weiland does not explain why she views them as reasonable and ethical uses.
Lujan (2024) finds that using AI to organize one’s own notes, and to summarize possibly disjointed feedback or critiques, can be quite beneficial and improve an author’s efficiency. As these tasks manipulate private writings for private consumption, there are no ethical difficulties with respect to plagiarism, or with taking editing work away from another person and substituting an AI tool.
AI appears to have valid and ethical uses in contexts where the output is for the author’s information, and not used directly in creative work. Using AI as a kind of research assistant or for other factual information is risky and requires verification.
Pushing the Boundaries
DeLay & Vendera (2024) discuss at length some potential uses of AI for writing.
Vendera has used ChatGPT to turn prompts into (non-fiction) articles, starting with a paragraph or so and iterating until the result reaches a form he likes. Clearly he has no ethical problem with this, viewing ChatGPT as something of a collaborator rather than a plagiarizer. Both have used AI to summarize documents. DeLay has also used Grammarly and similar tools for text cleanup and for making sure the tone and style of a piece are consistent throughout, although she says this works better for nonfiction than fiction. Vendera has likewise used AI to “punch up” a paragraph (to make it funnier, for example), although getting it to do so without entirely rewriting the paragraph takes some effort. Both find that the tools enhance their writing productivity.
They admit the tools can plagiarize without telling you, which contrasts with the view that everything generative AI produces is plagiarized (Clarke, 2023). Some publishers ask whether you have used AI and run detection tools to check; failing that check will get your work rejected.
DeLay and Vendera agree that what generative AI produces on its own tends to be bland; their intuitive assessment aligns with what Doshi & Hauser (2024) found. They also remark that overdependence on AI will make you “lazy,” which would appear to align with the results from Kosmyna et al. (2025).
These writers are not taking AI output verbatim and publishing it as their own; they have learned to iterate on and polish the results. It is nevertheless pushing the boundary of what others would consider outright plagiarism.
The TL;DR
Using generative AI tools to write fiction (or produce any other creative work) is wildly problematic. However, these tools do have reasonable use cases for other tasks that do not directly feed into creative output, and have their place in the ethical writer’s toolbox.
References
Clarke, N. (2023, April). Written by a human. Clarkesworld Science Fiction & Fantasy Magazine. https://clarkesworldmagazine.com/clarke_04_23/
Danelski, D. (2023, April 28). AI programs consume large volumes of scarce water. UC Riverside News. https://news.ucr.edu/articles/2023/04/28/ai-programs-consume-large-volumes-scarce-water
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG
DeLay, K. & Vendera, J. (2024, October 3). Episode 58: Will AI replace writers? We revisit the topic of using AI in writing. All Ways Write Podcast. https://www.youtube.com/watch?v=bo44Gshu17A
Doshi, A.R. & Hauser, O.P. (2024, July 12). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances 10(28). https://doi.org/10.1126/sciadv.adn5290
Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
Heaven, W.D. (2024, July 10). What is AI? MIT Technology Review. https://www.technologyreview.com/2024/07/10/1094475/what-is-artificial-intelligence-ai-definitive-guide/
Heaven, W.D. (2025, May/June). How AI can help supercharge creativity. MIT Technology Review 128(3), 24-31.
Jones, N. (2025a, January 21). AI hallucinations can’t be stopped—but these techniques can limit their damage. Nature. https://www.nature.com/articles/d41586-025-00068-5
Jones, N. (2025b, January 23). How should we test AI for human-level intelligence? Nature 637, 774-775.
Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X., Beresnitzky, A.V., Braunstein, I. & Maes, P. (2025, June 10). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://doi.org/10.48550/arXiv.2506.08872
Lujan, C. (2024, July). Mini-series: Ethical ways authors can use ChatGPT when editing. She Writes. https://shewrites.com/mini-series-ethical-ways-authors-can-use-chatgpt-when-editing/
O’Donnell, J. (2025, January 31). DeepSeek might not be such good news for energy after all. MIT Technology Review. https://www.technologyreview.com/2025/01/31/1110776/deepseek-might-not-be-such-good-news-for-energy-after-all/
Panettieri, J. (2025, June 16). Generative AI lawsuits timeline: Legal cases vs. OpenAI, Microsoft, Anthropic, Nvidia, Perplexity, Intel and more. Sustainable Tech Partner. https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/
Strom, R. (2023, June 22). Fake ChatGPT cases cost lawyers $5,000 plus embarrassment. Bloomberg Law. https://news.bloomberglaw.com/business-and-practice/fake-chatgpt-cases-costs-lawyers-5-000-plus-embarrassment
Weiland, K.M. (2025, February 10). Exploring the impact of AI on fiction writing: Opportunities and challenges. Helping Writers Become Authors. https://www.helpingwritersbecomeauthors.com/impact-of-ai-on-fiction-writing/
Williams, R. (2024, July 12). AI can make you more creative—but it has limits. MIT Technology Review. https://www.technologyreview.com/2024/07/12/1094892/ai-can-make-you-more-creative-but-it-has-limits/