Authorship – The Publication Plan for everyone interested in medical writing, the development of medical publications, and publication planning
https://thepublicationplan.com
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.

Restoring trust in science: a proposed framework for verifying researcher identity
https://thepublicationplan.com/2025/11/12/restoring-trust-in-science-a-proposed-framework-for-verifying-researcher-identity/
Wed, 12 Nov 2025 14:46:39 +0000

KEY TAKEAWAYS 

  • The International Association of Scientific, Technical & Medical Publishers’ Research Identity Verification Framework aims to tackle fraudulent submissions, including from paper mills.
  • The framework of layered identity checks for researchers, peer reviewers, and editors aims to raise obstacles to misconduct and enhance transparency, while maintaining inclusivity for all authentic researchers.

Research is facing an unprecedented integrity challenge, with sophisticated paper mills publishing poor-quality and fraudulent papers by unverifiable researchers and fake personas. To combat this issue, the International Association of Scientific, Technical & Medical Publishers (STM) has developed a Research Identity Verification Framework, released for community review. In an interview with Retraction Watch, Hylke Koers, Chief Information Officer at STM, shared how the framework could be used by journals and institutions to verify the identity of researchers.

Why is the framework needed?

Currently, publishers rely on time-consuming manual checks to validate the identity of contributors such as authors, peer reviewers, or guest editors. These processes do not match the speed and organisation of fraudulent networks. Part of the problem lies in the ease with which untraceable digital identities can be created and used to manipulate key parts of the publishing pipeline, for example, suggesting a fake reviewer. New approaches are needed to tackle this growing issue.

How will the framework be used?

The framework introduces a layered, systemic method of identity verification. Suggested methods include asking individuals to:

  • validate an institutional email address
  • sign in via ORCID or use ORCID Trust Markers
  • provide a government document, such as a passport or driving licence.
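The layered approach can be pictured as a risk-based scoring exercise. The sketch below is purely hypothetical: the STM framework does not prescribe weights, thresholds, or any scoring algorithm, and the check names and role thresholds here are invented for illustration. The idea is simply that each completed verification method raises confidence, and higher-risk roles demand more of it.

```python
# Hypothetical illustration only: the STM framework does not define this scoring.
# Assumed weights per verification method (invented for this sketch).
CHECK_WEIGHTS = {
    "institutional_email": 1,
    "orcid_sign_in": 1,
    "orcid_trust_markers": 2,
    "government_id": 3,
}

# Assumed minimum confidence per role (invented for this sketch).
ROLE_THRESHOLDS = {
    "author": 2,
    "peer_reviewer": 3,
    "guest_editor": 4,
}

def identity_confidence(completed_checks):
    """Sum the weights of the checks an individual has completed."""
    return sum(CHECK_WEIGHTS.get(check, 0) for check in completed_checks)

def is_verified(role, completed_checks):
    """A contributor passes if their layered checks meet the role's risk threshold."""
    return identity_confidence(completed_checks) >= ROLE_THRESHOLDS[role]
```

Under these invented numbers, an author verifying only an institutional email would fall short, while adding an ORCID sign-in would be enough; a guest editor, as a higher-risk role, would need stronger evidence such as a government document.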

Koers notes that implementing these checks would make impersonation or identity theft more difficult and improve accountability, while multiple options for verification retain accessibility. Publishers are advised to assess the level of risk, asking “how confident can we be that this person is who they claim to be, and that the information they’ve provided is genuine?”.

“Implementing these checks would make impersonation or identity theft more difficult and improve accountability”

What are the next steps?

The success of the Research Identity Verification Framework will rely on widespread adoption. STM plans to collaborate with early adopters to develop practical implementation pathways and refine future recommendations.

Koers notes that ultimately, no framework can eliminate all fraud, but making it more difficult to act fraudulently and easier to trace and respond to publishing misconduct should have a positive impact.

—————————————————

Do you believe STM’s Research Identity Verification Framework will reduce academic fraud?

Wiley develops AI guidelines in response to demand from researchers
https://thepublicationplan.com/2025/10/01/wiley-develops-ai-guidelines-in-response-to-demand-from-researchers/
Wed, 01 Oct 2025 08:25:23 +0000

KEY TAKEAWAYS

  • Wiley embraces a forward-looking AI policy with guidelines on responsible and ethical use, with human oversight, to ensure the integrity of publications.
  • The guidelines also provide tips on how AI can be used, effective prompt engineering, and choosing the best AI tools for the project.

Artificial intelligence (AI) is becoming more widely adopted within scientific publishing, yet many authors remain unsure how to use it effectively while maintaining the integrity of their research. As highlighted in an article in Research Information, Wiley has released AI guidelines for book authors in response to findings that ~70% of researchers want publisher guidance on using AI.

The guidelines include:

  • Reviewing terms and conditions: authors should regularly review terms and conditions to ensure that their chosen AI technology does not claim ownership over the content or limit its use.
  • Maintaining human oversight: AI should assist but not replace authors. Authors must take full responsibility for their work and review any AI-generated content before submission.
  • Disclosing AI use: authors should document all AI use, including its purpose and impact on findings, and describe how AI-generated content was verified.
  • Ensuring protection of rights: authors must ensure that the AI used (or its provider) does not gain rights over the authors’ material, including for the purposes of training the AI.
  • Using AI responsibly and ethically: authors must comply with data protection laws, avoid using AI to copy the style or voice of others, fact-check the accuracy of AI-generated content, and be mindful of potential biases.

The guidance also provides recommendations on how to write prompts and select AI tools, as well as suggestions on use cases for authors newer to AI:

  • analysing research and recognising themes across sources
  • exploring ways to simplify complicated topics
  • adapting work so it is relatable for different audiences
  • polishing work by refining language and checking for consistency.

The guidelines complement Wiley’s existing generative AI framework for journal publications. As stated by Jay Flynn (Wiley EVP & General Manager, Research & Learning), “writers and researchers are already using AI tools, whether publishers like it or not. At Wiley, we’d rather embrace this shift than fight it”.

“Writers and researchers are already using AI tools, whether publishers like it or not. At Wiley, we’d rather embrace this shift than fight it”
– Jay Flynn, Wiley EVP & General Manager, Research & Learning

—————————————————

What do you think – should publishers give authors more guidance on how to use AI appropriately?

Is high-volume publishing threatening research integrity?
https://thepublicationplan.com/2025/07/01/is-high-volume-publishing-threatening-research-integrity/
Tue, 01 Jul 2025 11:39:04 +0000

KEY TAKEAWAYS

  • A recent analysis revealed ~20,000 scientific authors publishing impossibly high numbers of articles.
  • High-volume publishing in the pursuit of inflated metrics represents a threat to research integrity.

We have reported previously on the rising numbers of highly prolific scientific authors. Dalmeet Singh Chawla recently highlighted this issue in Chemical & Engineering News, discussing findings that ~20,000 scientists from Stanford’s top 2% list publish an “implausibly high” number of papers. Singh Chawla explored the implications of high-volume publishing on research integrity, as well as potential solutions.

Study findings

The study, published in Accountability in Research, examined the publication patterns of ~200,000 researchers spanning 22 distinct disciplines, from Stanford University’s list of top 2% scientists (based on citation metrics). It found that:

  • around 10% (20,000 scientists) produced an impossibly high volume of publications
  • some scientists published hundreds of studies per year, with hundreds or even thousands of new co-authors
  • approximately 1,000 were early-career scientists with ≤10 years’ academic experience.

Impact on research integrity

The authors of the analysis, Simone Pilia and Peter Mora, attribute the surprising number of hyperprolific authors to a culture that rewards publication quantity with high metric scores. They suggest that this not only compromises research quality but leads to some scientists, “particularly the younger ones”, feeling pressured. Pilia and Mora linked the incentive to churn out large quantities of publications with “unethical practices” such as the inclusion of co-authors who have not made adequate contributions to the research. Based on their findings, Pilia and Mora warn that normalising high-volume publishing poses a significant threat to the fundamental academic process.

“Normalising high-volume publishing poses a significant threat to the fundamental academic process.”

A divisive solution?

Pilia and Mora propose adjusting metrics for scientists exceeding publication and co-authorship thresholds. However, according to Singh Chawla, information scientist Ludo Waltman fears that such adjustments would make research evaluation too complex and confusing. He proposes that research assessment should focus less on metrics and more on a wider range of research activities.

The reliability of metrics for research evaluation is an ongoing topic of discussion within the scientific community, and this latest research serves as a reminder for authors to keep research integrity at the heart of their publication decisions.

————————————————–

Do you think high-volume publishing undermines research integrity?

Unlocking the potential of AI in global healthcare: is international research collaboration the key?
https://thepublicationplan.com/2025/04/24/unlocking-the-potential-of-ai-in-global-healthcare-is-international-research-collaboration-the-key/
Thu, 24 Apr 2025 15:32:12 +0000

KEY TAKEAWAYS

  • North America, Europe, and Oceania are the global leaders in the output of high-quality AI-powered life science research.
  • International collaboration may be key to unlocking AI’s full potential.

The use of artificial intelligence (AI) in life science research is rising exponentially, from aiding drug development to assisting in the publication process. However, geographical imbalances in AI use could lead to biased models and implications for medical care.

Geographical variation

In an article for Nature Communications, Dr Leo Schmallenbach and colleagues evaluated the geographical spread of AI-related life science research. Their analysis revealed geographical differences in the quantity, quality, and relevance of AI-related life science research. 

  • Quantity: The USA and China published the largest share of research, while countries in Africa and Latin America lagged behind. In 2020, China surpassed the USA to lead the world in the number of AI-related life science publications per year, making Asia the continent with the largest cumulative output.
  • Quality: Northern America, Europe, and Oceania had a greater proportion of research published in high-ranking journals than Asia, Latin America, and Africa.
  • Relevance: Publications from Oceania, Europe, and Northern America were more frequently cited in life science and clinical research articles than those from Asia.

“Analysis revealed geographical differences in the quantity, quality, and relevance of AI-related life science research.”

International collaboration is key to success

The authors also compared research stemming from national versus international collaborations, with international collaborations defined as articles with authorship across 2 or more countries. International research collaborations were 35% more likely to be published in high-ranking journals and received 21% more citations in life science articles.

Speaking to Global Health Otherwise, Dr Schmallenbach concluded that “international collaboration is critical to unlocking the full potential of AI in healthcare” and called for policies encouraging more international partnerships.

————————————————–

What do you think – is international collaboration the key to unlocking AI’s full potential in global healthcare?

ICMJE recommendations update 2024: what’s new and what’s next?
https://thepublicationplan.com/2024/04/02/icmje-recommendations-update-2024-whats-new-and-whats-next/
Tue, 02 Apr 2024 13:02:35 +0000

KEY TAKEAWAYS

  • Key updates to the ICMJE Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals include guidance on the use of AI by authors, editors, and reviewers.
  • Other important updates include statements on fair authorship assignment, sustainability goals, funding support declarations, and protection of research participants.

The International Committee of Medical Journal Editors (ICMJE) recently updated its Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Key updates provide guidance on appropriate authorship of research carried out in low- and middle-income countries (LMICs) and the use of artificial intelligence (AI) in generating and reporting data. The latest recommendations and an annotated version of the previous recommendations are both freely available on the committee’s website, and a summary of all updates is provided below.

  • Authorship: local investigators should be included as authors on publications reporting data from LMICs. As well as ensuring fairness, local author contributions provide additional context on the implications of research.
  • Use of AI (authors): if AI is used to provide writing assistance, this should be clearly stated in the article acknowledgements. The use of AI by researchers to help collect data or generate figures should be noted in the methods.
  • Use of AI (editors and reviewers): journal editors should be aware of potential confidentiality concerns if AI is used in the review process. Reviewers must request permission from the journal before using AI assistance.
  • Carbon emissions: all stakeholders in medical publishing should collaborate to work towards net zero carbon emissions.
  • Acknowledging funding support: funding statements should relate directly to the work being reported, for example: “This study was funded by A; Dr. F’s time on the work was supported by B.” Other potential conflicts of interest and general funding support should be included in the disclosures section.
  • Protection of research participants: authors should be prepared to provide approval documentation for their study if requested by editors.
  • Citations: wherever possible, cited references should be published articles rather than abstracts.

In an editorial published in Cureus, Sankalp Yadav takes a detailed look at the evolution of the recommendations and their impact on medical publishing, describing the latest updates as a “beacon of ethical guidance in the ever-evolving domain of biomedical research and publishing”. Yadav also discusses some of the ongoing challenges in implementing the ICMJE guidance, such as the promotion of fair and ethical authorship practices and keeping pace with new developments – something that may be particularly true for AI and its increasing impact across all areas of medical research and publishing.

If AI is used to provide writing assistance, this should be clearly stated in the article acknowledgements.

————————————————

Which aspect of the updated ICMJE recommendations do you believe will have the most positive impact on the quality and integrity of medical publications?

Rise in “extremely productive” authors sparks concern
https://thepublicationplan.com/2024/03/14/rise-in-extremely-productive-authors-sparks-concern/
Thu, 14 Mar 2024 13:36:45 +0000

KEY TAKEAWAYS

  • The number of highly prolific scientific authors is continuing to rise.
  • Publishing behaviours could be monitored to detect unusual authorship patterns.

The number of extremely productive scientific authors is on the rise and may reflect an increase in “questionable research practices and fraud” – according to John Ioannidis, coauthor of a recent study posted on BioRxiv.

As reported in a Nature News article by Gemma Conroy, the study found that the number of extremely productive authors – defined as those who publish the equivalent of more than 60 papers a year – has almost quadrupled since a previous analysis carried out in 2018. This increase was surprising given that such high productivity levels had started to level off in 2014, said Ioannidis. Based on raw citation counts, extremely productive authors now account for 44% of the 10,000 most-cited authors across all areas of science.

To assess productivity levels in their new study, Ioannidis et al. counted all articles, reviews, and conference papers published between 2000 and 2022 and indexed in Scopus. They identified 12,624 extremely productive physicists (analysed separately due to their unique authorship practices) and 3,191 extremely productive scientists working in other areas. Topping this list was clinical medicine – perhaps unsurprising given that one in three scientists work in this field – which had 678 authors who published the equivalent of a paper at least once every 6 days during 2022.

678 authors working in clinical medicine published the equivalent of a paper at least once every 6 days during 2022.

The preprint authors speculate that a range of possible factors may explain the recent rise in extreme productivity across all research areas, including lax authorship practices, financial incentives, and paper mills. And while acknowledging that some highly prolific authors may be very talented, they caution that “spurious and unethical behaviours may also abound”. They call for unusual authorship patterns of individual scientists, teams, institutions, and countries to be monitored using centralised, standardised databases.

————————————————

Should unusual authorship patterns of individual authors, teams, institutions, and countries be centrally monitored?

Are we coming close to accurate AI detection?
https://thepublicationplan.com/2024/02/20/are-we-coming-close-to-accurate-ai-detection/
Tue, 20 Feb 2024 12:19:04 +0000

KEY TAKEAWAYS

  • Findings of a recent study suggest that accurate detection of AI-generated text can be achieved.
  • Researchers propose that accuracy is dependent on tailoring detectors to specific fields and writing types.

The meteoric rise of large language models, such as ChatGPT, is likely to result in a rapid increase in the use of generative artificial intelligence (AI) in academic publishing. This presents a quandary for journal publishers and editorial teams as they strive to develop guidance and ‘stay ahead’ of the technology. Currently, attitudes vary somewhat between journals, ranging from The Lancet limiting AI use to improving readability, to Nature adopting a firm stance against the use of generative AI to create images. Regardless of the detail in individual guidelines, enforcement is reliant on accurate detection of AI-generated content; technology which, to date, has been viewed as flawed. A recent Nature News article by McKenzie Prillaman spotlights research on a potential solution, namely, the development of more specialist detectors.

Developing a specialist AI detector

As Prillaman reports, a recent study published in Cell Reports Physical Science suggests that tailoring AI detectors so that they are trained to check specific types of writing may result in more reliable detection methods.

Tailoring AI detectors so that they are trained to check specific types of writing may result in more reliable detection methods.

The research group, Desaire et al., took 100 published (ie, human-created) introductions from articles in various chemistry journals and prompted ChatGPT-3.5 to generate 200 introductions in similar styles. These documents were used to train their machine learning algorithm, which was then applied to further articles, checking for AI- vs human-generated content via 20 different features of writing style. The group found that:

  • the detector identified AI-generated documents with 98–100% accuracy
  • human-written documents were detected with 96% accuracy
  • the model outperformed other more general detectors, such as OpenAI’s AI classifier and ZeroGPT, in detecting AI-generated documents
  • the model performed similarly when tested on writing from chemistry journals beyond those it was trained on, but not when tested on more general science magazine writing.
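The core idea of field-specific training can be sketched very loosely. The toy below is not the Desaire et al. model (which used 20 writing-style features and a trained classifier); instead, two crude style features and a nearest-centroid rule, with invented training snippets, stand in to illustrate why a detector trained on one kind of writing fits that kind best.

```python
# Toy illustration of a field-specific AI-text detector (NOT the published model).

def style_features(text):
    """Two crude style features: mean sentence length (words) and mean word length (chars)."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    words = text.split()
    mean_sentence_len = len(words) / max(len(sentences), 1)
    mean_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return (mean_sentence_len, mean_word_len)

def centroid(feature_vectors):
    """Component-wise mean of a list of feature tuples."""
    n = len(feature_vectors)
    return tuple(sum(v[i] for v in feature_vectors) / n
                 for i in range(len(feature_vectors[0])))

def classify(text, human_centroid, ai_centroid):
    """Label text by whichever training centroid its features sit closer to."""
    f = style_features(text)
    dist_human = sum((a - b) ** 2 for a, b in zip(f, human_centroid))
    dist_ai = sum((a - b) ** 2 for a, b in zip(f, ai_centroid))
    return "human" if dist_human <= dist_ai else "ai"

# Invented training snippets, mimicking terse human prose vs verbose AI prose.
HUMAN_TEXTS = [
    "We ran the assay. Results were clear. Data are shown here.",
    "The gel was run twice. Bands were sharp. All lanes matched.",
]
AI_TEXTS = [
    "In this comprehensive investigation we systematically evaluate numerous "
    "experimental parameters across multiple heterogeneous conditions.",
    "This detailed analysis thoroughly examines several interrelated variables "
    "spanning diverse methodological frameworks and extensive datasets.",
]

HUMAN_CENTROID = centroid([style_features(t) for t in HUMAN_TEXTS])
AI_CENTROID = centroid([style_features(t) for t in AI_TEXTS])
```

Because the centroids are fitted to one narrow style of writing, text from outside that domain would sit far from both, which mirrors the study's finding that the detector generalised to other chemistry journals but not to general science magazine writing.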

Implications for scientific publishers

The group concluded that their detector outperformed its contemporaries because it was trained specifically on academic publications. They propose that this tailored approach is vital for the development of accurate AI detectors suitable for use by academic publishers.

————————————————

What do you think – can AI detectors be used successfully in academic publishing?

Beyond the impact factor: a new way to assess journal quality
https://thepublicationplan.com/2024/02/15/beyond-the-impact-factor-a-new-way-to-assess-journal-quality/
Thu, 15 Feb 2024 15:53:47 +0000

KEY TAKEAWAYS

  • The ‘diversity factor’ has been proposed as a new, more equitable metric for assessing journal quality and the impact of health research.
  • The index takes into account the diversity of the authors, study participants, and departmental affiliations to promote a wider range of perspectives in research.

The impact factor remains the dominant metric among researchers for assessing journal and (indirectly) research paper quality, despite multiple calls for it to be superseded by alternative measures. Recently, a novel metric claimed the spotlight in an MIT News article. The article describes a study by Dr Jack Gallifant et al., published in PLOS Global Public Health, which suggests that the impact factor misses the mark in capturing a paper’s impact on health. The researchers argue that, for a more accurate understanding of impact, journal metrics should take into account the diversity of the authors and of the study participants. They propose a novel metric, termed the ‘diversity factor’.

The index comprises three key components:

  • author demographics: the gender and geographic location of the authors
  • participant demographics: the gender, ethnicity, race, language, geographic location, and age of the individuals enrolled in the study
  • departmental affiliation: papers with authors from different disciplines (eg, doctors, nurses, and engineers) score more highly than papers with authors from a single field.
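One standard way to turn component categories into a single number is normalised Shannon entropy, which scores 0 for a homogeneous group and 1 for an evenly mixed one. The sketch below is purely hypothetical: the published paper describes the components of the diversity factor, but this particular scoring formula is invented here for illustration.

```python
# Hypothetical scoring sketch; this formula is NOT taken from the published study.
import math

def normalised_entropy(category_counts):
    """Shannon entropy of a {category: count} mapping, scaled to [0, 1]."""
    total = sum(category_counts.values())
    k = len(category_counts)
    if total == 0 or k < 2:
        return 0.0  # a single category (or no data) means no diversity
    h = -sum((c / total) * math.log(c / total)
             for c in category_counts.values() if c)
    return h / math.log(k)

def diversity_factor(author_geography, participant_demographics, departments):
    """Average the three component scores into one illustrative index."""
    components = (author_geography, participant_demographics, departments)
    return sum(normalised_entropy(c) for c in components) / len(components)
```

Under this toy formula, a paper whose authors all share one country, whose participants are all one gender, and whose authors come from one department scores 0, while an evenly mixed paper approaches 1.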

After settling on the metric’s components, the group used the database OpenAlex to extract metadata relating to the authors of over 100,000 medical papers, from around 7,500 journals, published in the last 20 years. A considerable number of the papers retrieved were not open access, meaning that participant demographics could not be included in the final analysis. However, as the researchers predicted, most papers did not perform well against the new metric, even when considering author information alone. Specifically, there was significant underrepresentation of female authors and of authors from low- or middle-income countries. The group hopes that quantifying and tracking diversity in this way will, over time, prompt those working in health research to drive progress against these measures.

So, why exactly is a lack of diversity a problem for global health outcomes? Ultimately, it boils down to ‘blind spots’ in medical knowledge, explains Dr Leo Anthony Celi, senior author of the paper:

“What happens when all of the authors involved in a project are alike is that they’re going to have the same blind spots. They’re all going to see the problem from the same angle. What we need is cognitive diversity, which is predicated on lived experiences.”

Dr Celi believes that stakeholders within medical publishing — including journals, academic institutions, funding bodies, and even the media — are accountable for the inequity seen in health research. As such, each must play their part in diversifying medical research publications. To this end, Dr Celi calls for the diversity factor to prompt discussions within the medical research community and provide a first step towards a more equitable evaluation of the true impact of research.

————————————————

What do you think – should journal metrics take into account the diversity of authors and study participants?

Is it time to change our approach to reporting author contributions?
https://thepublicationplan.com/2023/12/07/is-it-time-to-change-our-approach-to-reporting-author-contributions/
Thu, 07 Dec 2023 14:37:30 +0000

KEY TAKEAWAY

  • Researchers propose novel methods for ascribing authorship contributions, which involve assigning authorship to each result in a manuscript.

The last few years have seen concerted efforts to bring more consistency and quantification to the way that authorship and author contributions are assigned. In addition to existing tools such as the Contributor Roles Taxonomy (CRediT), various bodies have suggested new methods to facilitate transparency and ensure authorship and author contributions are easily and appropriately assigned. These include the International Society for Medical Publications Professionals (ISMPP) authorship algorithm tool and initiatives such as the quantitative authorship decision support tool and Author Contribution Index. Now, Oded Rechavi and Pavel Tomancak provide an alternative method in a recent commentary published in Nature Reviews.

Rechavi and Tomancak’s approach involves assigning credit to each result in a manuscript. They “argue that it should be known who thought of each idea, who ran each experiment, and who analysed the data.” But how exactly would this be achieved? The authors propose two ways. Rechavi suggests substituting the word “we” for the names of specific, responsible authors. For instance, “we sequenced RNA” would become “Rechavi sequenced RNA”. Alternatively, Tomancak proposes assigning a number to each author in the author list and citing these for each contribution. For example, “we sequenced RNA1” would credit the first author in the author list.

“It should be known who thought of each idea, who ran each experiment, and who analysed the data.”

The authors list multiple advantages of ascribing authorship to each result, irrespective of how it is achieved. These include:

  • vague author contribution statements become redundant
  • unexpected contributions are recognised (eg, theorists performing experimental work)
  • the semi-quantitative data provided could help to justify or assign author order.

Nevertheless, they acknowledge several concerns raised by their peers, including:

  • extra work will be needed to recall ‘who did what’ for each sentence
  • reading the names of authors throughout a manuscript may be cumbersome
  • disputes may arise when discussing who contributed to a specific study.

Rechavi and Tomancak counter these concerns by calling on researchers to experiment with these alternative methods in their own papers and suggest that bioRxiv, the preprint server, is an ideal place to try them out. They end with a clear call to action: ‘bottom-up’ adoption by the scientific community is needed to implement meaningful, lasting changes to the way in which author contributions are assigned.

————————————————

ChatGPT: the newest author of scientific research?
https://thepublicationplan.com/2023/11/16/chatgpt-the-newest-author-of-scientific-research/
Thu, 16 Nov 2023 13:44:00 +0000

KEY TAKEAWAY

  • A ‘ChatGPT-authored’ scientific paper highlights the promise and pitfalls of using AI in research and publications.

Use of artificial intelligence (AI) in scientific publishing seems inevitable. While the full capabilities of this fast-changing technology are yet to be determined, some in medical publishing have begun to explore ways to harness the potential of generative AI, while others urge caution and lament a lack of structured guidance. Recently, as reported by Gemma Conroy in Nature News, Professor Roy Kishony and his student, Tal Ifargan, provided new fuel for the debate, by asking ChatGPT to conduct research and write a paper from scratch.

Kishony and Ifargan used a ‘data to paper’ system, in which software acted as an intermediary between humans and generative AI. This system automatically prompted ChatGPT to follow the steps of scientific research, from hypothesis generation to development of a scientific manuscript. In less than an hour, ChatGPT developed a study objective; wrote code to analyse a large, publicly available dataset; and drew conclusions based on its findings and existing literature, which it reported in a 19-page research article.

The study highlighted some promising aspects of incorporating AI into research and publication pathways, namely reduced timelines and the potential to quickly generate written summaries. However, it also shone a light on a number of limitations and risks:

  • False narratives: In this case, ChatGPT claimed to ‘address a gap in the literature’, although the subject (a link between diabetes risk and diet and exercise) was already well investigated.
  • Decrease in research quality: Kishony flagged the risks of generative AI leading to ‘p hacking’ or a flood of low-quality research papers.
  • Incapable of self-correction: Stephen Heard of Scientist Sees Squirrel also provided commentary and analysis on the limitations thrown up by the study, including generative AI’s lack of accuracy. Expert human intervention was required throughout, to spot and correct errors.
  • Regurgitating existing ideas: Heard also emphasised that generative AI creates content based on existing source material, thus perpetuating biases and reducing innovation and creativity.
  • Hallucinations: As explained by Jie Yee Ong in The Chainsaw, ‘hallucinations’ are a well-known problem with generative AI. This study was no exception, with ChatGPT generating fake citations despite access to the published literature. As Ong puts it, “for now, it is best not to treat everything ChatGPT spits out as gospel”.

Kishony and Ifargan’s carefully planned study allowed generative AI’s work to be checked for accuracy by human experts. Researchers agree that these human checks and balances remain essential to ensuring the credibility of scientific research and publications in which AI plays a role.

Researchers agree that human checks and balances remain essential to ensuring the credibility of scientific research and publications in which AI plays a role.

————————————————–

What do you think will be the biggest impact of using AI in the publication of scientific research?
