Nature – The Publication Plan for everyone interested in medical writing, the development of medical publications, and publication planning
https://thepublicationplan.com
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.

Japan initiates a nationwide plan towards open science
https://thepublicationplan.com/2024/07/11/japan-initiates-a-nationwide-plan-towards-open-science/
Thu, 11 Jul 2024

KEY TAKEAWAYS

  • Japan’s government has begun implementing its nationwide plan to make publicly funded research free to read, investing ¥10 billion (around £50 million).
  • This investment will be used to standardise institutional data/publication repositories, making all research available on the same national server.

In June, the Japanese government took another step towards its goal of making publicly funded research papers free to access from April 2025. As reported by Dalmeet Singh Chawla for Nature News, this makes Japan one of the first countries in the world to launch a plan for open access (OA) on a national scale.

Investment in infrastructure

To make the transition to OA, the Japanese government will invest ¥10 billion (around £50 million) to standardise university data and publication repositories. While each institution will host research by its own academics, these repositories will all be hosted on the same national server. The result: Japan will have “a unified record of all research produced by its academics” that, importantly, does not overlook articles published in Japanese.

A green OA strategy

Japan’s transition to open science is based on green OA, a strategy the government considers more feasible for universities than a gold OA model. As reported by Singh Chawla, experts in open science and OA have praised the Japanese government’s plans. Johan Rooryck, Executive Director of cOAlition S, supported the use of green OA “especially for all the content that is still behind the paywall”. Meanwhile, Kathleen Shearer, Executive Director of the Confederation of Open Access Repositories, highlighted the equitable nature of the plans.

Shearer notes that, although slow to embrace open science, Japan is now leading the way in OA publishing.

Japan is now leading the way in OA publishing.

————————————————–

What do you think – how important is a unified, national approach to ensuring the success of open access publishing?

Could advances in AI accelerate drug development?
https://thepublicationplan.com/2024/06/25/could-advances-in-ai-accelerate-drug-development/
Tue, 25 Jun 2024

KEY TAKEAWAYS

  • AI tools have the potential to streamline, optimise, or speed up various stages of the clinical trial process.
  • This could accelerate drug development and potentially reduce the number of participants needed in clinical trials.

While advances in the field of artificial intelligence (AI) have been making headlines recently, drug development has been slowing down. With increasingly complex clinical trials taking more than a billion dollars and a decade to complete, the majority of investigational drugs never reach the market. Could AI help reverse this trend?

How can AI aid clinical research?

In a recent article for Nature, Matthew Hutson discusses how researchers have already begun to investigate the potential for AI to optimise clinical trial processes:

  • Clinidigest claims to simultaneously access dozens of clinical trial records to create summaries, allowing researchers to gain a quick overview of existing trial data.
  • HINT aims to predict a drug’s success during the trial design stage to ensure resources are not wasted.
  • Trialpathfinder seeks to optimise eligibility criteria by testing whether broadening criteria would have any effect on risk to patients; its developers believe this would also allow for more inclusive trials.
  • DQueST seeks to match patients looking to participate in research with suitable clinical trials.
  • SDQ is used to extract, analyse, and clean datasets.
  • Some AI tools aim to predict missing data points and identify relevant clinical subgroups.
  • Others aim to monitor adherence to medication, so that investigators will not have to.

Pharmaceutical companies are now experimenting with software that completes in a couple of days tasks that previously took 2 months.

Can AI help patients too?

As well as facilitating recruitment and including more populations in trials, AI tools could decrease the number of patients required for successful research. Hutson highlights examples such as Unlearn.AI, which aims to achieve this by creating ‘digital twins’ that predict a patient’s results had they been given a placebo. The makers claim this reduces the required sample size and allows a greater proportion of patients to take the investigational drug rather than placebo.

Technologies such as ChatDoctor answer patients’ questions, which could help retain participants in clinical trials, as well as being useful in clinical practice.

Should we be worried?

As Hutson points out, there are concerns about AI and research integrity. Xiaoyan Wang, co-developer of AutoCriteria, highlights the risk of biased data and confidentiality issues when providing tools with a huge training data set. The WHO has published guidelines on ensuring AI is used ethically. As is now widely acknowledged, AI outputs still need to be checked by a human expert.

While AI has the potential to optimise clinical trials, researchers and clinicians need to be mindful of its limitations.

————————————————–

What do you think – will using AI cause or solve problems in the running of clinical trials?

Opening the door to open science: progress and challenges
https://thepublicationplan.com/2024/05/17/opening-the-door-to-open-science-progress-and-challenges/
Fri, 17 May 2024

KEY TAKEAWAY

  • The first global study on trends and standards in open science highlights some good practices, but also warns of inequities.

Recent data from UNESCO show a mixed outlook for the adoption of open science practices around the world. While some progress has been made in recent years, more still needs to be done to ensure specific initiatives, such as open access publishing, translate into truly equitable access to science.

In 2021, UNESCO published an international framework for the advancement of open science. Adopted by 193 countries, the Recommendation on Open Science outlined common values, principles, and guidelines for achieving open science globally. At the end of last year, the organisation shared its first global comprehensive assessment of trends and standards in open science. A recent editorial published in Nature commented on the key findings. Among its positive insights were:

  • an increase in spending on ‘societal engagement’ projects by the European Commission from 2002 to 2020
  • mandated open access publishing for research data arising from the EU Horizon 2020 programme
  • the establishment of a national infrastructure sharing scheme for scientific research in Brazil
  • progress towards building a national open science policy to improve the scrutiny, transparency, and reproducibility of research in South Africa.

While the report acknowledged a clear increase in open access publishing, it warned that focusing on scientific outputs is only part of the picture. As UNESCO emphasises:

“Open science is about making sure not only that scientific knowledge is accessible but also that the production of that knowledge itself is inclusive, equitable and sustainable.”

Indeed, Ismael Rafols (UNESCO Chair on Diversity and Inclusion in Global Science) highlights in his recent blog post at Leiden Madtrics that there is a danger of creating a ‘streetlight effect’, whereby the focus of policy on measurable outputs causes the underlying open science principles to be neglected.

Another issue with current open science practices highlighted by Rafols is the high costs associated with some models of open access publishing, which can put scientists in lower-income countries at a disadvantage. The open access publisher eLife has recognised this territorial inequity and recently established the Global South Committee for Open Science. The initiative unites researchers who are minoritised on the basis of their nation’s socioeconomic or political status, to increase their representation in the global scientific community.

Of course, all scientific stakeholders should support the principles of open science. Now is a good time for us to reflect on how we, as individuals and within our own organisations, can help further the true spirit of the movement.

————————————————–

What do you think – is open access publishing a force for good in the pursuit of truly equitable open science?

Reviewing retractions and research misconduct: a national solution?
https://thepublicationplan.com/2024/04/30/reviewing-retractions-and-research-misconduct-a-national-solution/
Tue, 30 Apr 2024

KEY TAKEAWAYS

  • A recent government-initiated national review required all university researchers in China to declare research retractions.
  • Results are awaited, but outputs of national monitoring schemes of this nature could help to reduce research misconduct.

Numerous papers are retracted from academic journals each year, owing to honest mistakes or research misconduct. In 2023, most retractions were from Hindawi, a subsidiary of the publisher Wiley. A recent analysis performed by Nature revealed that a high proportion of those retracted articles involved Chinese co-authors. In response, the Chinese government issued a national notice to universities to investigate retracted research papers and misconduct. Now, a recent Nature News article by Smriti Mallapaty summarises the key details of the review and discusses the wider impact it could have on academia.

Notice calling for disclosure of retractions

The notice, issued by the Ministry of Education’s Department of Science, Technology and Informatization, called for:

  • a record of listed and unlisted retractions from English- and Chinese-language journals from the past 3 years
  • reasons for retractions, such as misconduct (eg, image manipulation), or an honest mistake
  • penalties for misconduct or failure to declare retracted articles (eg, salary cuts, bonus withdrawals, demotions or suspensions from grant applications).

As reported by Mallapaty, this is considered to be the first national review on this scale, with a clearer target and broader scope than earlier efforts.

Short timeframe to complete review

Mallapaty also flagged that universities were required to complete their reviews within a strict timeframe, and that views on this approach were somewhat mixed. While some felt that the tight deadline might have ensured that universities worked hard to complete their reviews on time, others suggested that universities may only have submitted preliminary reports.

Impact of national review

Although the next actions from the Ministry are unclear, it is suggested that publicising the reasons for retractions could be useful alongside existing online retraction notices. A yearly review could also ensure universities monitor research integrity.

Science- and innovation-policy researcher Li Tang says “cultivating research integrity takes time, but China is on the right track”.

“Cultivating research integrity takes time, but China is on the right track”.

With reports submitted in mid-February, it will be interesting to see the ultimate impact of this national review and whether other countries undertake similar initiatives to investigate research retraction and misconduct.

————————————————–

What do you think – could national reviews that monitor research retractions and misconduct help to prevent such cases occurring?

Is it time to change our approach to reporting author contributions?
https://thepublicationplan.com/2023/12/07/is-it-time-to-change-our-approach-to-reporting-author-contributions/
Thu, 07 Dec 2023

KEY TAKEAWAY

  • Researchers propose novel methods for ascribing authorship contributions, which involve assigning authorship to each result in a manuscript.

The last few years have seen concerted efforts to bring more consistency and quantification to the way that authorship and author contributions are assigned. In addition to existing tools such as the Contributor Roles Taxonomy (CRediT), various bodies have suggested new methods to facilitate transparency and ensure authorship and author contributions are easily and appropriately assigned. These include the International Society for Medical Publication Professionals (ISMPP) authorship algorithm tool and initiatives such as the quantitative authorship decision support tool and Author Contribution Index. Now, Oded Rechavi and Pavel Tomancak provide an alternative method in a recent commentary published in Nature Reviews.

Rechavi and Tomancak’s approach involves assigning credit to each result in a manuscript. They “argue that it should be known who thought of each idea, who ran each experiment, and who analysed the data.” But how exactly would this be achieved? The authors propose two ways. Rechavi suggests replacing the word “we” with the names of the specific, responsible authors. For instance, “we sequenced RNA” would become “Rechavi sequenced RNA”. Alternatively, Tomancak proposes assigning a number to each author in the author list and citing these numbers for each contribution. For example, “we sequenced RNA¹” would credit the first author in the author list.

“It should be known who thought of each idea, who ran each experiment, and who analysed the data.”

The authors list multiple advantages of ascribing authorship to each result, irrespective of how it is achieved. These include:

  • vague author contribution statements become redundant
  • unexpected contributions are recognised (eg, theorists performing experimental work)
  • the semi-quantitative data provided could help to justify or assign author order.

Nevertheless, they acknowledge several concerns raised by their peers, including:

  • extra work will be needed to recall ‘who did what’ for each sentence
  • reading the names of authors throughout a manuscript may be cumbersome
  • disputes may arise when discussing who contributed to a specific study.

Rechavi and Tomancak counter these concerns by calling on researchers to experiment with this alternative method in their own papers and suggest that bioRxiv, the preprint server, is an ideal place to try it out. They end with a clear call to action: ‘bottom-up’ adoption by the scientific community is needed to implement meaningful, lasting changes to the way in which author contributions are assigned.

————————————————

ChatGPT – key priorities for research and publishing
https://thepublicationplan.com/2023/07/04/chatgpt-key-priorities-for-research-and-publishing/
Tue, 04 Jul 2023

KEY TAKEAWAYS

  • ChatGPT and AI technology may revolutionise research and publishing, creating both opportunities and concerns.
  • Policies and recommendations are needed to ensure ethical and transparent use of AI technologies in science.

ChatGPT is a machine-learning system with the ability to autonomously learn from huge data sets to produce what appears to be intelligent writing. Consequently, since its release in November 2022, ChatGPT has been the focus of many discussions within the MedComms community, due to its potential impact on medical research and publication processes.

An example of how ChatGPT can be used was recently reported by Curtis Kendrick in a Scholarly Kitchen article. Kendrick described using ChatGPT to prepare a presentation about racism in academic libraries, by asking the system queries and requesting citations on the subject. The author concluded that while responses were credible and clearly written, the generated citations were either incomplete or used non-existent references.

In an article published in Nature, Eva A M van Dis and colleagues discuss ChatGPT and other AI technologies in the context of publishing and research. They note that whilst ChatGPT offers many opportunities, it also raises concerns:

ChatGPT “might accelerate the innovation process, shorten time-to-publication and, by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives. However, it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers”.

van Dis et al. highlight 5 key recommendations for the use of systems like ChatGPT:

1. Retain human verification steps

  • Expert-driven verification processes should be used to prevent inaccuracies, bias, and plagiarism.
  • These issues may arise if relevant articles are missing in the ChatGPT training set, relevant information is not extracted, or credible sources are not distinguished from less credible sources.

2. Develop transparency and accountability rules

  • The use of AI technologies should be stated by authors (including the extent of its use in the preparation of manuscripts and analyses) and by scientific journals (eg, in the selection of manuscripts for publication).

3. Invest in open-source AI technologies

  • The authors encourage investments in non-profit projects to develop open-source, transparent AI technologies that are under democratic control.
  • The training sets used for the development of AI technology should be publicly available, in line with moves towards increased transparency and open science, and academic publishers should allow machine-learning systems access to their archives to ensure AI outputs are accurate and comprehensive.

4. Embrace opportunities

  • ChatGPT can accelerate certain tasks, such as performing a literature search. However, this advantage needs to be carefully balanced with the potential loss of skills and autonomy in the research process.

5. Debate on the ethics, integrity, and transparency of ChatGPT use in science

  • van Dis et al. call for an ongoing international forum on the development and responsible use of AI technologies for research.
  • As a first step, they suggest a summit for scientists, technology companies, research funders, science academies, publishers, non-governmental organisations, and privacy and legal specialists to discuss and make recommendations and policies.

The authors conclude: “The focus should be on embracing the opportunity and managing the risks. We are confident that science will find a way to benefit from conversational AI without losing the many important aspects that render scientific work one of the most profound and gratifying enterprises: curiosity, imagination and discovery”.

—————————————————–

In your opinion, should the use of ChatGPT and AI technology in research and publishing be regulated?

Is it time to redesign peer review?
https://thepublicationplan.com/2023/04/27/is-it-time-to-redesign-peer-review/
Thu, 27 Apr 2023

KEY TAKEAWAY

  • Breaking peer review into stages could decrease the burden on expert reviewers and improve the quality of published research.

Peer review is a key part of scholarly publishing; however, there have been increasing calls to shift away from the traditional peer review model to make the process more efficient and sustainable. In a Nature World View article, Professor Olavo B. Amaral describes an alternative approach to peer review that could improve data quality and transparency, and lessen the burden on peer reviewers.

Conventional peer review relies on expert referees to evaluate an article’s claims and its suitability for publication in the target journal. Due to time constraints, the underlying data are rarely scrutinised, potentially allowing errors and fraudulent results to go undetected.

Prof. Amaral believes that every manuscript should undergo basic checks to ensure that the data are complete and consistent, calculations are correct, and analyses are reproducible, but that only select articles, such as those of special interest, should be sent out for expert review. Such an approach would allow peer reviewers to use their time more effectively, on papers for which the data have been validated.

“Not all research needs to be reviewed by an expert. Much of the low hanging fruit of quality control doesn’t need a specialist — or even a human.”

Although certain aspects of manuscript quality control could be automated, algorithms work best on structured text, and most scientific fields do not have standardised formats for presenting results. A more fundamental problem is that data checks cannot verify that the data were collected as reported and have not been ‘cherry-picked’. To address this issue systematically, Prof. Amaral suggests that the focus should switch from scrutinising manuscripts to quality control of research practices, as proposed by frameworks such as Enhancing Quality in Preclinical Data (EQIPD). Implementing this change could not only make peer review more viable but could also improve data reproducibility and increase trust in published research.

Prof. Amaral calls on field experts to develop guidelines for data standardisation and urges funding agencies to facilitate the efforts to improve data collection and reporting by, for example, rewarding researchers for having specific aspects of their results certified.

—————————————————–

In your opinion, would breaking peer review into stages and employing algorithms for basic quality checks improve the sustainability of the current peer review system?

Is enough being done to account for the role of sex in medical research?
https://thepublicationplan.com/2023/03/28/is-enough-being-done-to-account-for-the-role-of-sex-in-medical-research/
Tue, 28 Mar 2023

KEY TAKEAWAYS

  • Reporting guidelines recommend that researchers factor the role of sex into animal and clinical studies, but progress in adherence to these guidelines has been slow.
  • Sex-based analyses have led to some key medical discoveries, and researchers are encouraged to examine data for sex differences to enhance study reproducibility and open up questions for scientific pursuit.

Medical research funders and publishers are increasingly calling for the role of sex to be considered in preclinical and clinical studies. In a recent Nature News Feature article, Dr Emily Willingham highlights the importance of reporting sex differences in medical research and examines why progress in this area has been slow.

Sex as a variable has important health implications. A recent example is COVID-19, which has higher mortality in men but affects more women in the form of long COVID. Accounting for sex can enhance the scientific rigour and reproducibility of a study, and even if there are no sex-based differences to report, negative findings are still informative.

Accounting for sex can enhance the scientific rigour and reproducibility of a study, and even if there are no sex-based differences to report, negative findings are still informative.

Yet, since the thalidomide tragedy in the late 1950s, women of childbearing age have been under-represented in clinical trials. Progress was made in the early 1990s, when the US National Institutes of Health (NIH) began requiring that women are included in clinical research. Both the NIH and EU now call for both sexes to be included in cell and animal studies.

In 2016, Dr Shirin Heidari led the publication of the Sex and Gender Equity in Research (SAGER) reporting guidelines, with the aim of encouraging authors to consider sex and gender differences in scientific publications. However, progress in adherence to these guidelines has been slow. An analysis of 720 papers published in 34 biology journals in 2009 and 2019 found that although the proportion of sex-inclusive studies had risen, the proportion incorporating sex-based analyses had decreased from 50% to 42%. Another study reported that even when sex is considered as a variable, treatment effects are often not compared properly between sexes, leading to misinterpretation of data.

The reasons for the relatively slow uptake of sex inclusion and reporting policies include:

  • general resistance to change – some journals assert that the SAGER guidelines are not applicable to their fields
  • cost – mouse studies that include both sexes require more animals, which adds expense
  • complexity of sex – some researchers argue that a binary definition based on specific anatomy or chromosome numbers is too limiting.

Encouragingly though, since sex inclusion guidelines were put in place, important medical discoveries have been made. One key finding is that risk of cardiovascular disease begins to rise at a lower blood pressure in women than in men – a revelation that came about from a call for studies looking specifically at sex differences in health outcomes. Considering the potential implications for medicine, we hope to see more researchers incorporate sex-specific analyses in their studies.

—————————————————–

Do you follow sex and gender reporting guidelines when writing your research manuscripts?

Transparent peer review: are authors willing to publish referee reports?
https://thepublicationplan.com/2022/12/08/transparent-peer-review-are-authors-willing-to-publish-referee-reports/
Thu, 08 Dec 2022

KEY TAKEAWAYS

  • Nature is giving authors the option to publish anonymised peer review reports alongside their article.
  • Opening up the peer review process promotes transparency and could benefit the research community and general public.

Although an important tool for scientific progress and upholding rigorous standards of proof, peer review reports have typically remained confidential from the wider research community. In a bid to promote transparency in publishing, Nature is piloting a trial to give authors the opportunity to publish their discussions with referees.

In 2016, Nature Communications started offering authors the option to publish peer reviewers’ comments and rebuttal letters alongside their articles. Following a positive uptake in 2021 (approximately 70%), the journal has now gone a step further, announcing that peer review files will be published for all accepted research articles submitted after 1 November 2022. For Nature, which began piloting this option in February 2020, almost half (46%) of authors agreed to publish anonymised peer review reports in 2021, and indications in early 2022 suggest that this number is rising.

Almost half (46%) of authors agreed to publish anonymised peer review reports in Nature in 2021.

Nature strongly encourages researchers to consider publishing their exchanges with reviewers, citing the following benefits to the scientific research community and general public:

  • promoting transparency
  • preserving valuable scholarship
  • providing insight into the peer review process to both early-career researchers and those who study peer review systems
  • recognising the contributions of peer reviewers
  • highlighting discussion on valid potential caveats or limitations of studies
  • allowing readers to better critically assess the robustness of the study conclusions
  • enabling authors to raise supportive arguments not suitable for integration within the article itself.

Given the promising results from Nature’s pilot, the journal hopes that more authors will opt to publish their referee reports, with a view to improving how science is represented.

—————————————————–

Do you think more publishers should adopt transparent peer review?

Language-generating AI in science: transformational or deformational?
https://thepublicationplan.com/2022/10/13/language-generating-ai-in-science-transformational-or-deformational/
Thu, 13 Oct 2022

KEY TAKEAWAYS

  • Language-generating artificial intelligence could have an empowering impact in science, but non-transparency and oversimplification of complex data could threaten scientific professionalism.
  • Authors call on government bodies to enforce systematic regulation to help realise the potential of large language models in science.

Large language models (LLMs) are artificial intelligence algorithms that recognise, summarise, and generate human language from very large text-based datasets. LLMs could well empower scientists to draw information from big data; however, researchers from the University of Michigan are concerned that without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

A recent report examined the potential social change brought about by LLMs. In a subsequent Nature Q&A, the report’s co-author, Professor Shobita Parthasarathy, described the impact of LLMs in the scientific disciplines. She highlighted the potential for LLMs to help large scientific publishers to automate aspects of peer review, generate scientific queries, and even evaluate results, but cautioned that without systematic regulation, LLMs could exacerbate existing inequalities and oversimplify complex data.

Without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

Developers are not required to disclose the accuracy of an LLM, and the models’ processes are not transparent, meaning that users could be unaware that LLMs can make errors, include outdated information, and remove important nuances. Furthermore, readers are unable to distinguish LLM-generated text from human-generated text, meaning the technology could be employed to spread misinformation and generate fake scientific articles.

For the potential of LLMs to be realised in science, Prof Parthasarathy calls on government bodies to enforce transparency in their use, stipulating that those who develop LLMs should disclose the models’ processes and make clear where LLMs have been used to generate an output.

—————————————————–

Do you think large language models could benefit science if appropriately regulated?
