Machine learning – The Publication Plan
https://thepublicationplan.com
A central online news resource for professionals involved in the development of medical publications, publication planning, and medical writing.

Transforming medical translation: the benefits and risks of AI
https://thepublicationplan.com/2024/08/20/transforming-medical-translation-the-benefits-and-risks-of-ai/
Tue, 20 Aug 2024

KEY TAKEAWAYS

  • Translation tools such as computer-assisted translation (CAT), neural machine translation (NMT), and, more recently, generative AI have revolutionised the field of medical translation, increasing efficiency and aiding consistency.
  • While the widespread use of these translation tools now appears inevitable, it is essential to fully understand their limitations and the continued, critical role of the human translator.

Artificial intelligence (AI) looks set to transform the fields of medical research, publications, and translation, sparking considerable discussion to date around potential opportunities and pitfalls. Medical translation in particular has already undergone significant changes, with the introduction of digital tools such as computer-assisted translation (CAT) and neural machine translation (NMT).

In a recent article published by the European Medical Writers Association (EMWA), medical translator Ann Marie Boulanger reviews available technologies and outlines the benefits and limitations of NMT and generative AI tools in medical translation.

Pros of generative AI and machine translation

  • Increased efficiency: NMT translation tools can improve productivity by around 20%.
  • Standardisation: Consistency is key when it comes to medical terminology. Translation tools have been invaluable for ensuring terms are standardised and up to date.
  • Multilingual support: NMT/generative AI tools have been useful for rapidly processing and translating large amounts of information, increasing the speed at which it can be disseminated to scientists, doctors, and patients.

Limitations remain

  • Overconfidence: While machine/AI translations appear increasingly sophisticated, it would be dangerous to assume that this makes them accurate.
  • Limited contextual understanding: Medicine is a complex field, filled with rapidly changing terminology, acronyms, and confusing shorthand notation. These factors render it challenging for machine translation to reliably translate texts.
  • Confidentiality concerns: Free tools are often not secure, posing confidentiality concerns.
  • The human impact: NMT/generative AI tools work best in the hands of experienced translators, which presents a quandary for the industry. A shift to using machine translation with human post-editing, often by less experienced translators and for poorer pay, has caused compensation for medical translation to plummet despite the high level of expertise required.

The essential role of the human translator

While the use of NMT/AI tools may have become inevitable, Boulanger argues that medical translators must “view machine translation and AI as nothing more than aids, tools in a toolbox, as opposed to solutions designed to do the work for them”.

Medical translators must “view machine translation and AI as nothing more than aids, tools in a toolbox, as opposed to solutions designed to do the work for them”.

To this end, Boulanger urges the industry to remember that human translators will always be needed, at the very least for final validation, especially in a field as complex and critical as medical translation.

————————————————–

Will AI translation tools ever entirely replace human translators?

Scopus AI: supercharging research insights with artificial intelligence
https://thepublicationplan.com/2024/03/07/scopus-ai-supercharging-research-insights-with-artificial-intelligence/
Thu, 07 Mar 2024

KEY TAKEAWAYS

  • Scopus AI is a new generative AI tool integrated into the Scopus database.
  • It promises to deliver fast and accurate summaries and insights for researchers while maintaining reliability and transparency.

On 16 January 2024, Elsevier launched Scopus AI, an innovative tool that merges generative artificial intelligence (AI) with the Scopus database. The move reflects a broader trend in the publishing industry to leverage AI to improve systems and services.

The decision to introduce Scopus AI stems from the recognition of the challenges faced by researchers, including information overload, misinformation, mounting workloads, and time pressures. The tool aims to address these challenges by providing deeper insights faster while ensuring the summaries it generates are trustworthy and transparent. Its key features and capabilities include:

  • a natural-language search bar that allows users to pose queries in a conversational manner
  • generation of concise, fully referenced summaries based on Scopus abstracts, helping users to quickly grasp the essence of a research topic
  • suggestions for follow-up questions to assist in uncovering additional research insights
  • a visual concept map for each query to illustrate how different research themes are interconnected
  • a summary of the top researchers linked to each query along with an explanation of why each researcher was selected, drawing on more than 19 million Scopus author profiles.

Elsevier emphasises Scopus AI’s credibility, noting that the scholarly content it uses to generate summaries draws from 27,000 academic journals worldwide and is rigorously reviewed by an independent board of renowned scientists, researchers, and librarians.

Scopus AI aims to address [challenges faced by researchers] by providing deeper insights faster while ensuring the summaries it generates are trustworthy and transparent.

Since the launch of the pilot version in August 2023, Scopus AI has been tested by thousands of researchers, whose feedback has led to the addition of a number of features, including the ability to identify leading academic experts in their fields.

Maxim Khan, Senior Vice President of Analytics Products and Data Platform at Elsevier, underlined the importance of supporting researchers in navigating complex topics and fostering interdisciplinary collaboration to maximise research impact. We look forward to seeing how Scopus AI might redefine the boundaries of scholarly exploration and contribute to the dynamic evolution of research practices.

————————————————

What challenges do you face in your research activities that you believe tools like Scopus AI could address?

Are we coming close to accurate AI detection?
https://thepublicationplan.com/2024/02/20/are-we-coming-close-to-accurate-ai-detection/
Tue, 20 Feb 2024

KEY TAKEAWAYS

  • Findings of a recent study suggest that accurate detection of AI-generated text can be achieved.
  • Researchers propose that accuracy is dependent on tailoring detectors to specific fields and writing types.

The meteoric rise of large language models, such as ChatGPT, is likely to result in a rapid increase in the use of generative artificial intelligence (AI) in academic publishing. This presents a quandary for journal publishers and editorial teams as they strive to develop guidance and ‘stay ahead’ of the technology. Currently, attitudes vary somewhat between journals, ranging from The Lancet limiting AI use to improving readability, to Nature adopting a firm stance against the use of generative AI to create images. Regardless of the detail in individual guidelines, enforcement relies on accurate detection of AI-generated content, a technology that has, to date, been viewed as flawed. A recent Nature News article by McKenzie Prillaman spotlights research on a potential solution: the development of more specialist detectors.

Developing a specialist AI detector

As Prillaman reports, a recent study published in Cell Reports Physical Science suggests that tailoring AI detectors so that they are trained to check specific types of writing may result in more reliable detection methods.

Tailoring AI detectors so that they are trained to check specific types of writing may result in more reliable detection methods.

The research group, Desaire et al., used 100 published (ie, human-created) introductions from articles in various chemistry journals and prompted ChatGPT 3.5 to generate 200 introductions in similar styles. These documents were then used to train their machine learning algorithm. The resulting model was used to test further articles, checking for AI- versus human-generated content via 20 different features of writing style. The group found that:

  • the detector identified AI-generated documents with 98–100% accuracy
  • human-written documents were detected with 96% accuracy
  • the model outperformed other more general detectors, such as OpenAI’s AI classifier and ZeroGPT, in detecting AI-generated documents
  • the model performed similarly when tested on writing from chemistry journals beyond those it was trained on, but not when tested on more general science magazine writing.
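The approach described above can be sketched in miniature. The code below is an illustrative reconstruction, not the study's actual detector: the four features (sentence-length statistics, digit density, comma rate) and the nearest-centroid rule are simplified stand-ins for the 20 writing-style features and the model used by Desaire et al.

```python
import re
from statistics import mean, pstdev

def style_features(text):
    """A handful of simple stylometric features (illustrative stand-ins
    for the 20 writing-style features used in the study)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    return (
        mean(lengths),                                 # mean sentence length
        pstdev(lengths) if len(lengths) > 1 else 0.0,  # sentence-length variability
        sum(ch.isdigit() for ch in text) / len(text),  # digit density
        text.count(",") / len(sentences),              # commas per sentence
    )

def centroid(rows):
    """Mean feature vector of a set of documents."""
    return [mean(col) for col in zip(*rows)]

def classify(text, human_centroid, ai_centroid):
    """Label a document by whichever class centroid is nearer."""
    f = style_features(text)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "human" if dist(human_centroid) < dist(ai_centroid) else "ai"

# Toy "training" corpora: human text with varied sentence lengths and
# punctuation, machine text with short, uniform sentences.
human_train = [
    "We tried it. The measurements, collected over several weeks at 3 sites, varied enormously.",
    "It failed. After 12 further attempts, spanning two long months, we finally observed a reproducible effect.",
]
ai_train = [
    "The method works well. The results are very clear. The data show a trend.",
    "The model performs well. The accuracy is high. The findings are robust.",
]
human_c = centroid([style_features(t) for t in human_train])
ai_c = centroid([style_features(t) for t in ai_train])
```

With these toy centroids, `classify("The approach is sound. The output is good. The errors are few.", human_c, ai_c)` returns `"ai"`: uniform, unpunctuated sentences sit closer to the machine centroid.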

Implications for scientific publishers

The group concluded that their detector outperformed its contemporaries because it was trained specifically on academic publications. They propose that this tailored approach is vital for the development of accurate AI detectors suitable for use by academic publishers.

————————————————

What do you think – can AI detectors be used successfully in academic publishing?

Could you spot AI-generated medical writing?
https://thepublicationplan.com/2024/01/11/could-you-spot-ai-generated-medical-writing/
Thu, 11 Jan 2024

KEY TAKEAWAYS

  • Readers cannot readily distinguish between healthcare text written by a human or by AI.
  • As the use of generative AI in medical writing increases, clear guidance and regulations are needed.

The use of generative artificial intelligence (AI) continues to grow apace, and recent research suggests that the fields of medical writing and medical publishing are no exception. While these tools can be a powerful means of increasing efficiencies, concerns remain about their use in a sphere that requires “accurate and complete information”. AI-generated content can contain ‘hallucinations’ and inaccurate or biased details, which can be difficult for even experts to identify. This highlights the importance, for professionals and patients alike, of being able to distinguish material created by AI from that developed by humans. Unfortunately, this task can be harder than you think.

Even healthcare professionals struggle to identify AI-generated content

In a recent edition of Medical Writing from the European Medical Writers Association, Natalie Bourré explained how her research sheds light on this complex issue. Bourré asked survey respondents to tease apart healthcare-related content written by humans from that created by AI. She found that:

  • overall, respondents were correct only 54% of the time
  • surprisingly, healthcare professionals did not fare better than others in differentiating AI- versus human-created content
  • medical writers were slightly better at spotting AI-generated text than other respondents.

Other groups have reported similar results and found that AI can create “compelling disinformation”. These issues are even more concerning when considered alongside earlier research by Bourré, which found that readers can be overconfident in their ability to spot AI-generated content.

What’s next? Shaping AI usage in medical writing

Bourré calls for clear guidance and regulations on the use of AI in medical writing. These safeguards would help the medical publishing community to harness the efficiencies and improved accessibility that AI may bring, while maintaining the accuracy and credibility of medical research.

[Clear guidance and regulations on the use of AI] would help the medical publishing community to harness the efficiencies and improved accessibility that AI may bring, while maintaining the accuracy and credibility of medical research.

————————————————

How frequently do you consider if the healthcare-related content that you read could have been generated by AI?

AI in publishing: an underutilised tool?
https://thepublicationplan.com/2023/08/24/ai-in-publishing-an-underutilised-tool/
Thu, 24 Aug 2023

KEY TAKEAWAY

  • AI technology has the potential to enhance medical publishing by increasing efficiency and predicting which papers will have the most impact.

As generative artificial intelligence (AI) models like ChatGPT and BioGPT become more widely used, some in medical publishing have raised concerns about shortfalls, while others are beginning to champion the potential utility of AI. In a recent article for The Scholarly Kitchen, Emma Watkins looked at how the medical publishing community could begin to use AI to its advantage.

Increasing efficiency

Watkins identifies various aspects of the article review process that could be made more efficient through the addition of AI, helping to decrease publication lead times while ensuring research integrity is maintained. She suggests that AI could be used to:

  • select appropriate peer reviewers
  • assess submissions to journals based on relevance
  • suggest alternative journals when a submission is not accepted
  • even detect fraudulent AI-generated content.

Additionally, Watkins proposes a role for AI in automatic generation of certain types of content, for example lay summaries of otherwise complex research papers.

Making intelligent predictions

Watkins also explores the ways in which machine learning could be used to predict trends in medical publishing. As discussed in a recent paper in Nature Biotechnology by researchers at MIT, AI can be trained on previous scientific publications and used to predict which new papers will be the most impactful. Being able to identify high-impact papers early would allow publishers to direct resources to publicising these articles, although Watkins cautions against a cycle that could then miss truly innovative papers.

The future

Currently, it seems unlikely that publishing decisions will ever rely solely on AI without human input. However, if harnessed correctly, AI could be used to improve certain aspects of medical publishing. The challenge for publishers, Watkins highlights, “is to ensure they are the creative adopters leading the charge”.

If harnessed correctly, AI could be used to improve certain aspects of medical publishing. The challenge for publishers, Watkins highlights, “is to ensure they are the creative adopters leading the charge”.

————————————————–

Where do you think AI has the most potential to impact medical publishing?

BioGPT: a useful tool or cause for concern?
https://thepublicationplan.com/2023/07/11/biogpt-a-useful-tool-or-cause-for-concern/
Tue, 11 Jul 2023

KEY TAKEAWAYS

  • BioGPT, a biomedical-specific generative AI tool, is pre-trained on millions of research articles and shows human parity in its generated answers.
  • The rapid development of AI and its potential applications within medical publishing hold huge promise, but a lack of regulation and guidance is causing some concern across the community.

As the development of generative artificial intelligence (AI) models, such as ChatGPT, continues apace, conversations are ongoing across the medical publishing community regarding the possible benefits and pitfalls of this technology. Microsoft’s biomedical-specific BioGPT, which generates text based on millions of published research articles, has huge potential, but many medical publications professionals remain cautious about its use and call for appropriate guidance to be established. In a recent article for Clinical Trials Arena, William Newton outlines the promise of BioGPT, along with the challenges that must first be overcome.

The promise of BioGPT

As reported by Luo et al in a recent preprint, pre-trained language models such as BioBERT have already displayed powerful abilities in discriminative downstream biomedical tasks, with text mining from existing literature playing essential roles in areas such as drug discovery and clinical therapy. However, these models are not generative, and pre-trained GPT has the potential to vastly extend the utility of AI within the biomedical field. When evaluated against 6 biomedical natural language processing tasks, including PubMedQA, BioGPT outperforms other AI tools and exhibits human parity when answering biomedical questions.

As the take-up of GPT tools among authors, publishers, and even peer reviewers, continues to increase, the medical publishing industry must move quickly if it is to provide timely advice and regulation.

The challenges of using generative AI in medical publishing

The sophistication of BioGPT opens many interesting avenues within biomedical research, for instance, in drug development, digital biomarkers, and patient selection in clinical trials. However, like ChatGPT, BioGPT has several limitations. These include tendencies to generate inaccurate or misleading text and even perpetuate existing biases within scientific research.

The uncertain future

Inaccuracies in AI tools such as BioGPT are a growing concern among medical communications professionals, with the issue featuring on the agenda of this year’s International Society for Medical Publication Professionals (ISMPP) Meeting. Many attendees expressed optimism over AI’s potential within science communication; however, significant concerns also arose around potential user overreliance on AI tools and the disclosure of AI use within manuscripts. To ensure appropriate usage of AI within medical publishing, speakers called for guidance, including on the disclosure of AI tools and prompts used. As the take-up of GPT tools among authors, publishers, and even peer reviewers continues to increase, the medical publishing industry must move quickly if it is to provide timely advice and regulation.

—————————————————–

Are you using generative AI in medical publications?

Are your AI-generated data reproducible?
https://thepublicationplan.com/2023/02/16/are-your-ai-generated-data-reproducible/
Thu, 16 Feb 2023

KEY TAKEAWAYS

  • Researchers have raised concerns about the reproducibility of AI-based studies across many scientific fields including medicine.
  • Reporting checklists and guidelines could help to avoid common pitfalls in studies using AI but more needs to be done to ensure scientific credibility.

The results of many studies that use machine learning or artificial intelligence (AI) methodologies may be overstated. This warning of a reproducibility crisis in machine learning was recently reported by Elizabeth Gibney in Nature, based in part on the findings of a preprint co-authored by Sayash Kapoor and Arvind Narayanan, which identified 329 studies across multiple scientific disciplines with shortcomings in the reproducibility of their findings.

Machine learning and AI have become powerful tools at the disposal of biomedical researchers, but the reproducibility of these methodologies and their associated outcomes is paramount for their credibility. A methodological pitfall frequently encountered by Kapoor and Narayanan in their analysis was so-called ‘data leakage’, in which data used to train the AI model also appear in the test data set, potentially exaggerating the model’s apparent ability to make accurate predictions. To counter this, Kapoor and Narayanan propose that researchers use ‘model info sheets’ to transparently report the details of their AI models.
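The inflation caused by this kind of leakage is easy to demonstrate. The sketch below is illustrative only (it is not from the preprint): a memorising 1-nearest-neighbour "model" is scored on pure noise, once on rows that also sit in its training set and once on genuinely held-out rows.

```python
import random

def nearest_neighbour_label(x, train):
    """A memorising 1-nearest-neighbour classifier: return the label of
    the training point whose feature value is closest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(test_set, train):
    """Fraction of test rows whose label the classifier reproduces."""
    return sum(nearest_neighbour_label(x, train) == y for x, y in test_set) / len(test_set)

random.seed(42)
# Pure noise: random features paired with random labels, so there is
# nothing genuine to learn and honest accuracy should sit near 50%.
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]

train = data[:150]
disjoint_test = data[150:]   # correct evaluation: rows the model never saw
leaky_test = data[100:150]   # leakage: these rows are also in the training set

print(accuracy(leaky_test, train))     # 1.0: the model simply recalls memorised rows
print(accuracy(disjoint_test, train))  # roughly 0.5: chance level, the honest estimate
```

The leaky evaluation reports perfect accuracy on data with no signal at all, which is exactly the kind of overstated result the preprint warns about.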

“Unless we do something like this, each field will continue to find these [reproducibility] problems over and over again.”

Reporting checklists are not unfamiliar to AI researchers in the biomedical field, as noted by Gibney, who referred to initiatives like the EQUATOR Network’s CONSORT-AI and SPIRIT-AI reporting guidelines developed by Dr Xiao Liu and colleagues. While checklists are an important and useful tool, greater collaboration between researchers and specialists in machine learning could also help. It is encouraging, then, that 1,200 people registered to attend a workshop on reproducibility co-organised by Kapoor, with the mission of resolving the reproducibility crisis in AI-based science.

—————————————————–

What do you think – do AI and machine learning models need greater scientific scrutiny?

DECIDE-AI: will it make the development of artificial intelligence-based clinical decision support systems more robust?
https://thepublicationplan.com/2022/12/06/decide-ai-will-it-make-the-development-of-artificial-intelligence-based-clinical-decision-support-systems-more-robust/
Tue, 06 Dec 2022

KEY TAKEAWAYS

  • New DECIDE-AI guidelines aim to improve reporting of early-stage clinical trials of AI-based clinical decision support systems.
  • The authors hope that addressing this unmet need will facilitate more robust research, and ultimately translate to increased uptake of these technologies.

With the advent of artificial intelligence (AI), there has been much discussion, debate, and research on its applicability to clinical decision support systems (CDSS). Although some regulatory guidance is available, and AI-based CDSS have been approved, a recent article by Vasey et al in The BMJ and Nature Medicine argues that a need remains for a more robust approach in early clinical development. The authors present new guidelines aimed at improving reporting at this stage which, they argue, would create a stronger foundation for larger clinical trials and ultimately increase uptake.

The Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI) guidelines provide minimum reporting standards for studies of AI-based CDSS, whether for detection, diagnosis, prognosis, or therapy. They complement other guidance, such as CONSORT-AI (for randomised controlled trials) and SPIRIT-AI (for protocol reporting).

DECIDE-AI incorporates 27 recommended reporting items, 17 of which are AI specific. These include:

  • methods for integrating the system into the clinical care pathway
  • user familiarisation with the system
  • description of the algorithm, inputs, and outputs
  • human factors (ie, interactions between humans and the system)
  • characteristics of both patients and users
  • usability
  • risks/harms and means for mitigating these.

The authors note that the complexity of the field is reflected in the breadth and depth of reporting detail that is required by the guidance. They suggest that, to ensure adequate and complete reporting, “thorough evaluation of AI systems should not be limited by word count…publications reporting on such systems might benefit from special formatting requirements.”

“Thorough evaluation of AI systems should not be limited by word count…publications reporting on such systems might benefit from special formatting requirements.”

As highlighted by the authors, this field continues to evolve. Future iterations of the guidance may be expanded further as more research is carried out, to cover currently contentious topics such as interpretability and user trust levels.

—————————————————–

What do you think – will the DECIDE-AI guidelines improve the development of AI-based clinical decision support systems?

Language-generating AI in science: transformational or deformational?
https://thepublicationplan.com/2022/10/13/language-generating-ai-in-science-transformational-or-deformational/
Thu, 13 Oct 2022

KEY TAKEAWAYS

  • Language-generating artificial intelligence could have an empowering impact in science, but non-transparency and oversimplification of complex data could threaten scientific professionalism.
  • Authors call on government bodies to enforce systematic regulation to help realise the potential of large language models in science.

Large language models (LLMs) are artificial intelligence algorithms that recognise, summarise, and generate human language from very large text-based datasets. LLMs could well empower scientists to draw information from big data; however, researchers from the University of Michigan are concerned that without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

A recent report examined the potential social change brought about by LLMs. In a subsequent Nature Q&A, the report’s co-author, Professor Shobita Parthasarathy, described the impact of LLMs in the scientific disciplines. She highlighted the potential for LLMs to help large scientific publishers to automate aspects of peer review, generate scientific queries, and even evaluate results, but cautioned that without systematic regulation, LLMs could exacerbate existing inequalities and oversimplify complex data.

Without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

Developers are not required to disclose the accuracy of an LLM, and the models’ processes are not transparent, meaning that users could be unaware that LLMs can make errors, include outdated information, and remove important nuances. Furthermore, readers are unable to distinguish LLM-generated text from human-generated text, so the technology could be employed to distribute misinformation and generate fake scientific articles.

For the potential of LLMs to be realised in science, Prof Parthasarathy calls on government bodies to enforce transparency in their use, stipulating that those who develop LLMs should disclose the models’ processes and make clear where LLMs have been used to generate an output.

—————————————————–

Do you think large language models could benefit science if appropriately regulated?

How artificial intelligence is changing the landscape of scientific communication
https://thepublicationplan.com/2021/11/30/how-artificial-intelligence-is-changing-the-landscape-of-scientific-communication/
Tue, 30 Nov 2021

KEY TAKEAWAYS

  • The use of artificial intelligence in scientific communication is rapidly expanding, with multiple applications in manuscript preparation and editorial workflows.
  • The scholarly publishing community must adapt to and embrace the use of advanced artificial intelligence.

Artificial intelligence (AI), natural language processing (NLP), and machine learning are widely employed across scholarly publishing, with a reduction in human workload a key driver of their adoption. A recent article by Dr Habeeb Razack and colleagues, published in Science Editing, examined the current and prospective impact of these technologies across the scientific publications arena. The authors concluded that greater adoption of AI in the future could increase the quality of published content as well as retrospectively improve the use of content already in the public domain.

AI is expected to play an increasingly important role in complex editorial processes, and improving AI literacy among scholarly publishing stakeholders will be important for future adoption.

The article examined the use of AI across 7 areas of scholarly publishing.

  1. Literature searching and information retrieval: In the current infodemic era, data handling is an increasing drain on time and resources. With more than 127,000 research papers published on COVID-19 alone, the ability of AI tools to extract data from large and noisy datasets is becoming increasingly important. AI tools can generate citation metrics, authenticate hypotheses, position results based on relevance, connect data from various domains and concept areas, access supplementary information, and automate systematic reviews.
  2. Manuscript preparation: Recent improvements in NLP have further enhanced the quality of AI outputs and a number of AI-backed writing tools have entered the market. High-profile examples include Grammarly and PerfectIt™.
  3. Bibliography and citation management: In addition to established referencing software features, AI elements such as citation recommendations (wizdom.ai), analysis of citation quality (including identifying retractions; scite.ai), ‘SmartSearch’ algorithms (SciWheel), and tools to identify related publications (Connected Papers) can greatly reduce the time spent on referencing.
  4. Target journal selection: Several web-based platforms are available to assist with journal selection. Notable examples include EndNote’s Manuscript Matcher, which uses an algorithm for determining a ‘match score’, and Elsevier’s JournalFinder, which uses a ‘fingerprint engine’ and subject-specific vocabularies.
  5. Plagiarism prevention: Plagiarism has long plagued scholarly publishing, but AI tools can help by identifying content similarity. This now includes novel tools that can detect plagiarism across different languages (CopyLeaks) and identify similarity in bar charts using optical character recognition. The use of AI-supported stylometry has also been suggested as a way of identifying an individual author’s writing style.
  6. Peer review and quality assessment: NLP-driven AI approaches can help identify peer reviewers in a non-biased manner. Tools have also been developed to assess statistical errors (StatCheck) and quality (StatReviewer) in submitted manuscripts.
  7. Editorial workflow and publication production: AI has the potential to simplify editorial tasks, including technical checks (UNSILO Evaluate) and journal-specific manuscript formatting. It can also help editors triage submissions by predicting future citation counts (Meta), and improve post-publication click and retention rates (UNSILO Recommend).

As AI use continues to expand in scholarly communication, Dr Razack and colleagues believe that advanced preparation will enhance AI utilisation and support the workforce by promoting human–machine collaboration. Although some professionals may be concerned that introduction of automated systems will lead to job losses, results of a 2019 survey suggest this is unlikely to occur.

—————————————————–

Will the increased adoption of AI benefit scholarly publishing?
