Neither Artificial, nor Intelligent: Critical Responses to Text Generation by Means of the Artificial Intelligence App ChatGPT / Bromwich, William John. - 1:(2025), pp. 88-107.
Neither Artificial, nor Intelligent: Critical Responses to Text Generation by Means of the Artificial Intelligence App ChatGPT
WILLIAM JOHN BROMWICH
2025
Abstract
This study aims to analyse critical responses to the generation of texts, including scientific and medical articles, by means of Artificial Intelligence (AI). It is argued that the output of AI text-generation models is increasingly plausible, as the programs reproduce the format of memorized texts in order to produce computer-generated content, mimicking but not replicating human intelligence. The most recent incarnation of such programs, ChatGPT, a Large Language Model (LLM), has been widely adopted in the academic community, and not just by novice writers, since its launch in November 2022. Its evident ability to produce plausible essays and research articles has given rise to considerable alarm among the editors of scientific journals, including medical journals, dismayed by the large-scale submission of articles drafted not by their purported authors but by means of ChatGPT. Journal editors are seeking to adopt defence mechanisms to identify and weed out manuscripts of dubious authorship, but preliminary studies show that such manuscripts are not readily detectable, even by experienced journal editors, without the use of AI output detector software. Within a critical discourse analysis framework (Bhatia 1993, 2014, 2017), this study examines critical comments from Nature Briefing, Science, the New York Times and the Financial Times, in addition to the scientific publications listed in the Bibliography, in which scholars across a range of disciplines express concern about the impact of AI-generated text not just on undergraduate essay writing but also on scientific and medical writing. A particular concern is plausible-sounding text that presents fabricated results as if they were authentic, a phenomenon known as hallucination. The critical responses are classified according to author stance, with the authors identified as ALARMISTS, ENTHUSIASTS, SCEPTICS, PROHIBITIONISTS, PHILOSOPHERS, ETHICISTS and PRAGMATISTS. The article concludes by arguing that ChatGPT should not be viewed as a matter of concern solely for IT specialists, as this disruptive technology raises questions that need to be addressed by language specialists, discourse analysts and scholars concerned with academic integrity and the quality and reliability of scientific and medical discourse.

BIBLIOGRAPHY

Agomuoh F. ChatGPT: how to use the viral AI chatbot that took the world by storm. Digital Trends. Published 13 December 2022. Accessed 6 January 2023. https://www.digitaltrends.com/computing/how-to-use-openai-chatgpt-text-generation-chatbot/

Bhatia VK. Analysing Genre: Language Use in Professional Settings. London: Longman; 1993.

Bhatia VK. Worlds of Written Discourse: A Genre-Based View. London: Continuum; 2014.

Bhatia VK. Critical Genre Analysis: Investigating Interdiscursive Performance in Professional Contexts. London: Routledge; 2017.

Bishop JM. Artificial intelligence is stupid and causal reasoning will not fix it. Front Psychol. 2020;11:513474. Accessed 6 January 2023. https://pubmed.ncbi.nlm.nih.gov/33584394/#article-details

Briganti G, Le Moine O. Artificial Intelligence in Medicine: Today and Tomorrow. Front Med (Lausanne). 2020;7:27. Accessed 8 February 2023. https://www.frontiersin.org/articles/10.3389/fmed.2020.00027/full

Clark E, August T, Serrano S, Haduong N, Gururangan S, Smith NA. All that’s ‘human’ is not gold: Evaluating human evaluation of generated text. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol. 1: Long Papers). Association for Computational Linguistics; 2021:7282–7296. Accessed 8 February 2023. https://arxiv.org/abs/2107.00061

Else H. Researchers cannot always differentiate between AI-generated and original abstracts. Nature Briefing. Published 12 January 2023. Accessed 12 February 2023.

Haque MU, Dharmadasa I, Sworna ZT, Rajapakse RN, Ahmad H. “I think this is the most disruptive technology”: Exploring sentiments of ChatGPT early adopters using Twitter data. arXiv [cs.CL]. Published online 12 December 2022. Accessed 6 February 2023. http://arxiv.org/abs/2212.05856

Hern A. AI bot ChatGPT stuns academics with essay-writing skills and usability. The Guardian. Published 4 December 2022. Accessed 6 February 2023.

Korngiebel DM, Mooney SD. Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. NPJ Digit Med. 2021;4(1):93. Accessed 8 February 2023. https://pubmed.ncbi.nlm.nih.gov/34083689/#article-details

Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. bioRxiv. Published online 20 December 2022. Accessed 7 February 2023. doi:10.1101/2022.12.19.22283643

Masa. SciNote Manuscript Writer - using Artificial Intelligence. SciNote. Published 18 November 2020. Accessed 2 February 2023. https://www.scinote.net/blog/scinote-can-write-draft-scientific-manuscript-using-artificial-intelligence/

Nature. Much to discuss in AI ethics. Nature Machine Intelligence. 2022;4(12):1055–1056. Accessed 8 February 2023.

OpenAI. ChatGPT: Optimizing language models for dialogue. OpenAI. Published 30 November 2022. Accessed 6 January 2023. https://openai.com/blog/chatgpt/

Shankland S. ChatGPT: Why everyone is obsessed with this mind-blowing AI chatbot. CNET. Published 14 December 2022. Accessed 6 January 2023.

Stokel-Walker C. AI bot ChatGPT writes smart essays - should professors worry? Nature. Published online 9 December 2022. Accessed 6 January 2023. doi:10.1038/d41586-022-04397-7

Susnjak T. ChatGPT: The end of online exam integrity? arXiv [cs.AI]. Published online 19 December 2022. Accessed 6 January 2023. http://arxiv.org/abs/2212.09292

Whitford E. Here’s how Forbes got the ChatGPT AI to write 2 college essays in 20 minutes. Forbes. Published 9 December 2022. Accessed 8 February 2023.

Yeadon W, Inyang OO, Mizouri A, Peach A, Testrow C. The death of the short-form Physics essay in the coming AI revolution. arXiv [physics.ed-ph]. Published online 22 December 2022. Accessed 6 January 2023. doi:10.48550/ARXIV.2212.11661
| File | Type | Size | Format | Access |
|---|---|---|---|---|
| ABSTRACT BROMWICH 27-06-2025.docx | Abstract | 21.79 kB | Microsoft Word XML | Open access |