Plagiarism – The Publication Plan
https://thepublicationplan.com
A central online news resource for everyone interested in medical writing, the development of medical publications, and publication planning.

Why are retraction rates rising?
https://thepublicationplan.com/2024/07/24/why-are-retraction-rates-rising/
Wed, 24 Jul 2024 10:57:46 +0000

KEY TAKEAWAYS

  • The retraction rate for biomedical science papers with corresponding authors based at European institutions quadrupled between 2000 and 2020.
  • Unreliable data has emerged as a leading reason for retraction, while duplication remains a key factor.

Research misconduct remains a major concern, with increasing efforts dedicated to monitoring retraction rates – and the underlying reasons. An analysis recently published in Scientometrics and discussed in Nature news uncovered a quadrupling of retraction rates since 2000 among biomedical science articles with corresponding authors based at European institutions, from about 11 per 100,000 articles to almost 45 per 100,000 in 2020.

Why are articles retracted?

Fabián Freijedo-Farinas and colleagues reviewed over 2,000 retracted English-, Spanish-, and Portuguese-language articles collated by Retraction Watch to identify the underlying reasons. Research misconduct was the most prevalent factor, accounting for 67% of cases, while 16% of retractions were due to honest errors; no reason was provided for the remainder. The analysis also broke misconduct-related retractions down by specific cause.

Reasons have shifted over time, with authorship and affiliation issues falling from one of the top reasons to joint 5th of 7. Duplication has remained steady as a cause, while retractions due to unreliable data – including bias and lack of original data availability – have skyrocketed. The authors suggest paper mills have a major role to play.

However, it’s not the same story across Europe: among the 4 countries with the most retractions, the proportion of duplication-related retractions has fallen in the UK but risen substantially in Italy and Spain.

Why are retraction rates increasing?

Arturo Casadevall, who identified similar rates of research misconduct-related retractions in a 2012 analysis, commented that the overall hike in retraction rates could be due to authors, institutions, and journals increasingly viewing retraction as the best route to correct the scientific record.

The overall hike in retraction rates could be due to authors, institutions, and journals increasingly viewing retraction as the best route to correct the scientific record.

In addition, publications have increasingly drawn the attention of online sleuths, who may raise concerns with journals, according to research integrity specialist Sholto David. New digital technologies are also making it easier to screen publications for suspicious text or data. Retraction Watch co-founder Ivan Oransky believes use of plagiarism-detection software could be partially responsible for the increase; looking to the future, tools like image manipulation detectors could mean retraction rates rise further.


————————————————–

How much do you think increasing use of image manipulation detectors will impact retraction rates?

Image manipulation: how AI tools are helping journals fight back
https://thepublicationplan.com/2024/04/09/image-manipulation-how-ai-tools-are-helping-journals-fight-back/
Tue, 09 Apr 2024 12:34:13 +0000

KEY TAKEAWAYS

  • Image manipulation is a prevalent issue in academic publishing and a potential sign of research misconduct.
  • Many journals are now using AI tools to identify problematic images prior to publication; however, these will need to evolve as image manipulation becomes increasingly sophisticated.

Image manipulation in research articles is a growing concern. In a recent article for Nature News, Nicola Jones outlines how academic journals are embracing the use of artificial intelligence (AI) tools to identify manipulated images pre-publication.

How prevalent is image manipulation?

While often unintentional, image manipulation is prevalent and a potential sign of research misconduct. As reported by Jones, a 2016 study by science integrity consultant Dr Elisabeth Bik and colleagues found that nearly 4% of published biomedical science papers contained problematic figures. Similarly, around 4% of the 51,000 documented retractions in the Retraction Watch database flag a concern relating to published images. A more recent study by Dr Sholto David, which used AI to help identify suspect images, puts this figure at up to 16%.

What action is being taken by journals?

Jones highlights that a number of journals are taking steps to identify problematic images prior to publication. Some, including Journal of Cell Science, PLOS Biology, and PLOS One, either ask for or require the submission of raw images used in figures. In addition, many journals now use AI tools such as ImageTwin, ImaChek, and Proofig to screen images for signs of manipulation prior to publication. In January 2024, the Science family of journals revealed it will be using Proofig across all submissions, while other publishers are developing their own AI image integrity software.

Will AI put an end to this issue?

Jones reports that while AI tools make it faster and easier to detect problematic images, experts warn that they have limited capabilities to detect more complex manipulations, such as those made using AI. Bernd Pulverer, chief editor of EMBO Reports, cautions that as image manipulation becomes increasingly sophisticated it will become ever harder to detect, with existing screening tools soon becoming largely obsolete.

While AI tools make it faster and easier to detect problematic images, experts warn that they have limited capabilities to detect more complex manipulations such as those made using AI.

To stamp out image manipulation in the long run, we need to change how science is done, Dr Bik proposes. She calls for a greater focus on rigour and reproducibility and a crackdown on bullying and high pressure environments in research labs, which she believes create a culture where cheating is acceptable. We look forward to seeing how the development of increasingly advanced AI tools will help in the continuing fight against research misconduct.

————————————————

What do you think – are AI screening tools the answer to stopping image manipulation?

Image duplication in scientific papers: how AI outperforms humans at detecting research misconduct
https://thepublicationplan.com/2024/01/26/image-duplication-in-scientific-papers-how-ai-outperforms-humans-at-detecting-research-misconduct/
Fri, 26 Jan 2024 09:49:32 +0000

KEY TAKEAWAYS

  • AI outperforms humans in detecting duplicated images in scientific papers, offering a faster and more comprehensive means of identifying potential research misconduct.
  • Experts argue that, despite the huge potential of AI, human oversight remains important.

As the academic community grapples with image manipulation in research papers, artificial intelligence (AI) tools are emerging as powerful allies. As reported by Anil Oza in Nature News, biologist and image sleuth Dr Sholto David recently showcased just how effective AI tools can be in identifying inappropriately duplicated images in research papers.

After spending several months manually scrutinising hundreds of papers for image duplication in Toxicology Reports, Dr David put an AI tool to the test with remarkable results. Working up to 3 times faster, the AI tool successfully identified nearly all of the suspicious images that Dr David had marked. It also identified an additional 41 instances of image duplication that had escaped his careful scrutiny.

Image duplication is a potential sign of research misconduct and is a growing concern for publishers and researchers alike. In 2016, prominent image forensic specialist Dr Elisabeth Bik identified – through visual inspection – that approximately 4% of articles published in biomedical science journals contained inappropriately duplicated images. Dr David’s AI-powered study, currently published as a preprint and not yet peer reviewed, dwarfs earlier estimates: 16% of the papers he inspected contained duplicated images. As Oza explains, Dr Bik is not surprised by the figure, and neither is expert image integrity analyst Jana Christopher, who described it as “entirely plausible” that 16% of a journal’s images could be duplicated.

Enormous potential, but human oversight is essential

According to its developers, the tool used in Dr David’s study, Imagetwin, works by generating “something like a fingerprint” for each image, scanning the entire paper for duplications. Within seconds, it also cross‑references these fingerprints with a database of over 25 million images.
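The “fingerprint” idea can be sketched in a few lines. Imagetwin’s actual algorithm is proprietary and not described beyond the quote above, so the following toy Python example – an “average hash” compared by Hamming distance – is only an illustrative stand-in for how fingerprint-based matching of near-duplicate images can work in principle.

```python
# Toy fingerprint-based image matching: hash each image, then compare hashes.
# A hypothetical stand-in for proprietary tools, not their real method.

def average_hash(pixels):
    """Fingerprint a grayscale image (rows of 0-255 ints) as a bit string:
    one bit per pixel, set when the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Count differing bits between two equal-length fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

def find_matches(query, database, max_distance=2):
    """Names of database images whose fingerprint sits near the query's,
    so near-duplicates (re-saved, slightly adjusted copies) still match."""
    qh = average_hash(query)
    return [name for name, px in database.items()
            if hamming(qh, average_hash(px)) <= max_distance]

# A duplicated blot panel, lightly re-compressed, still matches:
blot = [[10, 200], [30, 220]]
resaved_copy = [[12, 198], [28, 219]]
database = {"paper_A_fig2": resaved_copy, "paper_B_fig1": [[200, 10], [220, 30]]}
print(find_matches(blot, database))  # → ['paper_A_fig2']
```

Because the comparison tolerates a few differing bits, the lookup catches copies that are not byte-identical – which is why a fingerprint database scales to millions of images where pixel-by-pixel comparison would not.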

While the value of AI tools in publishing is undeniable, experts stress the importance of utilising these in combination with human oversight. In our 2022 interview, Dr Bik acknowledged that such tools have limitations and stressed the dangers of blindly relying on their verdict. Not all instances of image duplication or manipulation are detected by AI tools, with some that human experts detect missed by the technology. Overall though, the experts agree that AI tools that detect image duplication will become an integral part of journals’ article review processes.

While the value of AI tools in publishing is undeniable, experts stress the importance of utilising these in combination with human oversight.

————————————————

Do you trust AI tools to play an integral role in the review process for image manipulation?

Raise the Papermill Alarm! A new tool for identifying potential fake articles
https://thepublicationplan.com/2023/01/17/raise-the-papermill-alarm-a-new-tool-for-identifying-potential-fake-articles/
Tue, 17 Jan 2023 16:39:38 +0000

KEY TAKEAWAYS

  • The production of fraudulent articles by paper mills is on the increase.
  • Papermill Alarm is a new software tool that can screen submitted manuscripts for similarities to known bogus articles.

The submission of journal articles produced by illegal paper mills is a common problem in scientific publishing, and such articles can be difficult to identify. In a recent Nature News article, Holly Else highlights a new tool, ‘Papermill Alarm’, that could be adopted in the fight against bogus content.

Paper mills are paid to produce fake manuscripts that appear similar to legitimate research papers. Developed by Adam Day, Papermill Alarm is a software tool that can analyse the titles and abstracts of scientific papers to assess their similarity to previously identified fraudulent articles. Although not providing definitive proof that an article has been produced by a paper mill, the tool does flag those that may warrant further investigation.
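The screening step described above – scoring a title or abstract against known fraudulent texts – can be illustrated with a toy example. Papermill Alarm’s real model is not public, so this Python sketch uses simple word-overlap (Jaccard) similarity as a hypothetical stand-in; the names and threshold are made up for illustration.

```python
# Toy text-similarity screen: flag submissions whose wording closely
# overlaps a corpus of known paper-mill texts. Illustrative only.

def jaccard(text_a, text_b):
    """Word-set overlap between two texts: 0 (disjoint) to 1 (identical)."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b)

def flag_submission(abstract, known_mill_texts, threshold=0.5):
    """True if the abstract closely resembles any known paper-mill text."""
    return any(jaccard(abstract, t) >= threshold for t in known_mill_texts)

# Paper mills often reuse a template, swapping only a gene or disease name:
known = ["mir-155 promotes proliferation and invasion of cancer cells"]
suspect = "mir-21 promotes proliferation and invasion of cancer cells"
genuine = "cohort study of statin adherence in elderly patients"
print(flag_submission(suspect, known))  # → True
print(flag_submission(genuine, known))  # → False
```

As with the real tool, a flag here is not proof of fraud – templated-but-legitimate writing can also score highly – which is why such output only marks papers for further human investigation.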

Using Papermill Alarm, Day determined that 1% of PubMed articles contain text similar to those produced by paper mills.

Using Papermill Alarm, Day determined that 1% of PubMed articles contain text similar to that produced by paper mills, with a prior report from the UK Committee on Publication Ethics (COPE) suggesting the figure may be at least 2%, and up to 46% in certain journals.

Several publishers are reportedly interested in adopting Papermill Alarm as a screening tool for submitted manuscripts. Whilst the scientific impact of fraudulent articles produced by paper mills may be limited, given their low citation counts, they nevertheless retain the potential to damage the trust in, and reputation of, scientific research. As such, there is an urgent need for joint action by scholarly research stakeholders to address the thriving paper mill industry.

—————————————————–

Do you believe journals are doing enough to combat the rise in bogus content?

Spotting fake images in scientific research: insights from science integrity consultant Elisabeth Bik
https://thepublicationplan.com/2022/11/29/spotting-fake-images-in-scientific-research-insights-from-science-integrity-consultant-elisabeth-bik/
Tue, 29 Nov 2022 10:04:33 +0000

Many of us will be familiar with the concept of plagiarised text as a form of misconduct within scientific literature, but perhaps a lesser-known problem, and one which most of us would find much harder to spot, is the publication of manipulated images. Elisabeth Bik is a science integrity consultant who has been described as a super-spotter or image sleuth due to her unique talent for identifying scientific photos that have been tampered with. Elisabeth strives to tackle the issue of scientific misconduct and has a blog dedicated to the topic of science integrity. To date, her scientific detective skills have led to 951 retractions, 122 expressions of concern, and 956 corrections. The Publication Plan spoke to Elisabeth to find out more about her work.

Could you tell us how and why you became involved in investigating fraudulent scientific work and how you discovered your talent for spotting duplicated/manipulated images?

“In 2013 I heard about plagiarism so I took a sentence that I had written and put it into Google Scholar to see if anybody had used my text. I had not expected any results, but by chance the sentence that I had picked randomly had been stolen by somebody else, so I found a paper that had plagiarised my text, and that of many others. I subsequently kept on finding more and more papers that had plagiarised other people’s work. I worked on that for about a year whilst I was working full-time at Stanford, so it was a kind of weekend project. Then in around 2014 I came across a PhD thesis, not one that had stolen my work but one that had plagiarised text, and one that also contained images – western blots. A couple of the figures had panels that had been reused, so the same panel had been used to represent different experiments. The panel had a very distinctive shape and so I realised that I had some talent for spotting these things, and started searching for other papers with similar image issues.”

What do you look for when analysing images, and what are the most common issues you encounter?

“I look for photos specifically because they contain a lot of information, much more than a line graph.”

“I look for photos specifically because they contain a lot of information, much more than a line graph. A line graph could be duplicated but it is very hard to remember, as it’s just a line. Whereas there are features in photos that you can remember at least for a short period, so I compare photos within scientific papers. Because I mainly focus on photos of blots or gels, or microscopy photos of tissues and cells, those are typically the types of images where I find issues, but sometimes I work on photos of plants or mice, visible objects that don’t require a microscope. Occasionally I will find a plot that has been duplicated but as I said plots are hard to find so I don’t focus on those. I look for duplications. There are three main duplication problems: two panels that have been duplicated; two panels that have been duplicated and shifted so that they sort of overlap; and duplication of elements within a photo, for example a group of cells might be visible multiple times. Occasionally I will also find evidence suggestive of tampering with a photo, for example you might see a different background around one particular band in a gel, which indicates that it did not originate from that photo. This example is not a duplication but a sign of potential tampering – that parts of the photo came from somewhere else.”
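The third duplication problem Bik lists – the same element appearing more than once within one photo – lends itself to a simple illustration. Real detectors must cope with rotation, scaling, and noise; this toy Python sketch (names and data invented for the example) only finds exact repeats among non-overlapping tiles of a grayscale image.

```python
# Toy within-photo duplication check: hash non-overlapping tiles and report
# any tile whose pixel contents recur elsewhere in the same image.

def find_repeated_tiles(pixels, tile=2):
    """Return pairs of (row, col) origins of identical tile x tile patches."""
    height, width = len(pixels), len(pixels[0])
    seen, duplicates = {}, []
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            patch = tuple(tuple(pixels[y + dy][x + dx] for dx in range(tile))
                          for dy in range(tile))
            if patch in seen:
                duplicates.append((seen[patch], (y, x)))
            else:
                seen[patch] = (y, x)
    return duplicates

# The top-left 2x2 "cell cluster" reappears at the bottom-right:
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 0, 1, 2],
         [8, 7, 5, 6]]
print(find_repeated_tiles(image))  # → [((0, 0), (2, 2))]
```

An exact repeat inside a single exposure is exactly the kind of signal Bik describes as “very suggestive of an intention to mislead”, since identical pixel regions rarely occur by chance in a genuine photograph.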

How common and widespread is the problem of duplicated/manipulated images within the scientific literature and what are the potential consequences of such images going unidentified?

“Duplications are found in around 4% of papers that contain at least one photo. This finding is based on a systematic search I performed for papers that contain the term ‘western blot’ to enrich for papers with molecular biology photos or other figures. In the resulting set of papers, I scanned 20,000, and I found around 800 to contain duplications, so that’s 4% of papers. Those contained one of the three types of duplication I listed, which could result from an honest error or could have been intentionally duplicated with an intention to mislead the reader. The first case, an honest error in a photo, is usually not a big problem. In my opinion it should be corrected, but we all make errors in papers, and so that’s the least concerning. But when images are duplicated with overlaps, or are rotated or stretched, or contain duplicated elements within the same photo, that’s clearly a manipulation of the data. To me those are visible signs of manipulation which cast doubt over all the data in that paper, because if one image has been potentially tampered with or manipulated then so might have other types of data, which are much harder to catch. For example, you cannot really see if values in a table have been fabricated or manipulated so it makes the whole paper less reliable and maybe also other works by those same authors. In some cases, images are manipulated to make the data look better. If a photo contains duplicated elements, then you can’t even be sure that the experiment happened and what the results were. Duplications within the same photo are very suggestive of an intention to mislead and that the results were not obtained as they have been presented. Such fraud in my opinion goes against everything that science should be – science should be about finding the truth and fraud is the opposite of that.”

“Fraud in my opinion goes against everything that science should be – science should be about finding the truth and fraud is the opposite of that.”

What proportion of questionable images do you think could result from honest error and how many are likely to be deliberate acts of misconduct?

“In the study I referred to previously, where I found 800 of 20,000 papers to contain duplicated figures, we estimated that about half of the duplications were deliberate. It is sometimes difficult to know whether a duplication is deliberate in an individual paper, but because we had 800, that was our best guess. It was based on there being roughly an equal distribution of papers over the three duplication categories, so 30% in each category. Since overlapping images could result from honest error, we estimated that about half of the 800 papers had deliberately duplicated or manipulated photos, so 2% of papers overall. Of course the real percentage of manipulation might be much higher because at least photos leave traces if you manipulate them, but as I said, manipulation in other types of data, such as tables or line graphs is much harder to detect so the real percentage of papers with misconduct might be much higher than 2%.”

What systems do journals have in place, if any, to identify problematic images before publication and what are the limitations of these systems?

“Some journals scan all incoming papers for image duplications and others have traditionally hired people like me who can spot these duplications, to scan all their accepted papers for image problems. This might only take a couple of minutes per paper so it’s really not a huge time investment if you know what to look for. After I raised my concerns about 4% of papers having image problems, some other journals upped their game and have hired people to look for these things. This is still mainly being done I believe by humans, but there is now software on the market that is being tested by some publishers to screen all incoming manuscripts. The software will search for duplications but can also search for duplicated elements of photos against a database of many papers, so it’s not just screening within a paper or across two papers or so, but it is working with a database to potentially find many more examples of duplications. I believe one of the software packages that is being tested is Proofig. I have never worked with this software so I don’t know exactly what it does or how good it is, but I would love to test it. Although there have been situations where an editor has informed me that Proofig didn’t find any evidence of a duplication or any evidence of tampering with an image in which I can clearly see a problem. So I think there is a danger if an editor doesn’t really know how to use the software or just blindly relies on the software’s verdict.”

What kind of response do you tend to get from journal editors when you report a potential issue in one of the papers they have published? Your work has resulted in numerous retractions and corrections – is that a common result when you notify a journal of an issue?

“In the past no response was common – I would just not hear anything. Nowadays I specifically write in my email that I keep track of which journals respond to my message, so I usually receive a notification or acknowledgement of receipt or something like that, but then very often I still hear nothing. I reported that initial set of 800 papers in which I found problems to the journals in roughly 2015, and kept track of what happened – two-thirds of those papers have not been retracted after 5 years, some are still being retracted so the number is steadily going down, but around 60% of papers have not been addressed. For the more current papers that I’ve reported, that number is slightly better with half not being addressed after waiting a year or two, but the majority are still not addressed. I get an acknowledgement of receipt but then it seems that nothing happens. When an issue is addressed, the two most common outcomes are a correction or a retraction, which each account for roughly half of cases. There is also a tool called expression of concern, which is very rarely used but I feel should be used more because it provides a very fast way for an editor to flag that they have been alerted to a big problem with the paper and are investigating it, so readers know to proceed with caution if they read that paper. As mentioned, corrections and retractions are the most common outcomes but they are only used in about 40 to 50% of cases – for the majority there is still no outcome after waiting a couple of years.

“Corrections and retractions are the most common outcomes but they are only used in about 40 to 50% of cases – for the majority there is still no outcome after waiting a couple of years.”

But I do feel that the situation is improving, maybe my work has finally earnt some acknowledgement that I’m signalling for positive reasons, not out of malice. In the past I have felt I’ve been ignored a little bit more and I go to social media sometimes too to vent about the lack of response from journals, which I feel has helped so the numbers are getting better but I feel that journals can still do a much better job.”

How important do you think websites such as PubPeer, Retraction Watch and your own blog, Science Integrity Digest, are in creating transparency and raising awareness of possible flawed research? Does the creation of such sites indicate an increasing problem or a greater awareness of the need to check the integrity of science?

“I don’t want to talk about my own blog too much, but I do feel that PubPeer and Retraction Watch have played a huge role in openness about problems in papers. There is no other good website where you can report problems. You may try writing privately to a journal, or sometimes there are comments sections in journals, but very often these comments disappear after a while or they never come out of moderation. I feel PubPeer does a really good job in alerting people that there might be a problem with a paper and it’s the only platform that I know of that we can use. Retraction Watch offers a glimpse of what happens once a paper gets retracted because they provide the background to a retraction. In many cases a retraction notice is very vague, simply stating that the authors or editors decided to retract the paper because of a problem without indicating what the problem was, which is not fair for the reader because parts of the paper may still be good. We want to know why the paper was retracted and what the specific problem was. Retraction Watch go into a little bit more detail, they interview people – the scientists, the authors, the editors – and ask them for their side of the story. Sometimes you learn that a retraction was actually a very good thing because an author found, for example a big problem with their paper due to a mistake in a formula, so they did the right thing in retracting their own paper. To hear people talk about why they retracted a paper is very useful and gives you a lot more information. I feel both Retraction Watch and PubPeer create transparency as a lot of these cases are otherwise hidden by the journals or institutions.

As to whether it is an increasing problem, I do believe it is for several reasons. First, papers are getting more and more complex, which provides more opportunities to fake data. Digital photography also means it is much easier to digitally alter a photo than it used to be – when I did my PhD you would still bring your gel to the photographer, there was no digital photography and subsequent Photoshopping. Another reason is the increasing pressure to publish. Certain countries have really increased their pressure to publish and made it mandatory to publish, for example, a paper when you finish your Master’s degree or to publish multiple papers when you finish your PhD, or in medical school you need to publish a paper to get a promotion. China in particular has issued a lot of these mandatory publication demands. In some cases they are impossible to fulfil as people do not have the time to do the research, but of course they still want to get a promotion or a position at a hospital so they might just buy a paper. Therefore, there is this whole growing market of paper mills, which are companies that mass produce papers. There are different models but they basically sell fake papers to authors who need them, which was not a problem that existed 20 years ago. If you look at papers from 30 years ago I’m sure there was fraud but those papers usually only contained one figure and one table, so there were fewer opportunities to commit fraud compared with papers today that have 6 to 8 figures and additional supplementary figures. Although I feel that this is an increasing problem, I believe that there is also a greater awareness of the issue.”

What more could be done to improve research integrity within the scientific literature? How do you think the research integrity landscape will have changed in 5 years?
“I hope there is more emphasis on reproducibility in the future because I feel reproducibility is the only way for us to know that an experiment has really been performed and yielded the reported results.”

“I hope there is more emphasis on reproducibility in the future because I feel reproducibility is the only way for us to know that an experiment has really been performed and yielded the reported results. I hope we have less emphasis on output – measuring a scientist’s output by measuring numbers of papers or impact factor – to remove some of that pressure and instead reward reproducibility. Reproducing a study may not be novel and of course there is not a lot of funding for it, but I feel it gives so much more validity to a study than trying to do something new. Pre-registration of clinical trials is a wonderful thing as it requires people to publish their results even if they are negative, which I feel might result in less cheating. I’m also very worried about artificial intelligence (AI) and its potential to create fake papers and images. We’ve seen several examples of what technology can do right now, if you think about dinosaurs in movies, they look more and more real every year, so I think in the next 5 years AI is going to be a huge problem for scientific publishing, because it might generate fake photos, data and text. Distinguishing what is real and what is fake, which may be impossible in 5 years from now, will be a problem for journalists too. We need to think about how we can prove that images, photos or other data are real. The obvious errors that we currently use to determine that a paper is probably faked can be overcome by a very smart fraudster – they can make their images look very realistic and AI is going to help them tremendously, so I’m very worried about that. I’m not quite sure if we can safeguard the integrity of science with the ever-increasing amount of pressure that we put on scientists and the advantages that digital photography and AI can offer fraudsters and so I’m a bit pessimistic there, but I hope we have more funding to look into solutions, technical solutions for that. 
Some of that is solvable – we can maybe look at original images, and ways of proving that they really came from a microscope for example, and were not generated by AI. I’m not quite sure how, that goes beyond my technical comprehension of the issue, but there are hopefully ways to solve that.”

Elisabeth Bik is a science integrity consultant. You can contact Elisabeth via LinkedIn.

—————————————————–

What do you think should be done to combat the issue of fraudulent images?

Is health research no longer to be trusted?
https://thepublicationplan.com/2022/01/28/is-health-research-no-longer-to-be-trusted/
Fri, 28 Jan 2022 15:16:34 +0000

KEY TAKEAWAYS

  • Approximately 20% of clinical trials are thought to be false.
  • Cochrane published guidance on how to manage potentially problematic studies in their systematic literature reviews.
  • Dr Richard Smith suggests it’s time to assume all trials are untrustworthy unless proven otherwise.

With increasing evidence that scientific fraud is widespread, Cochrane has published a policy for managing untrustworthy clinical trials in the context of systematic literature reviews. However, Dr Richard Smith, cofounder of the Committee on Publication Ethics (COPE) and a member of the board of the UK Research Integrity Office, suggests that it is time to go a step further and assume that all research is fraudulent until proven otherwise.

In a BMJ opinion piece, Smith outlines evidence from research leaders who, in their own investigations, found that many studies underlying systematic reviews were fatally flawed or contained false data. Professor Ben Mol, leader of the Evidence-Based Women’s Health Care Research Group at Monash University, estimates that 20% of trials are false. Availability of individual patient data increases the likelihood of detecting fraud, with one study showing that up to 44% of examined trials were untrustworthy.

Cochrane’s policy provides guidance for dealing with these ‘potentially problematic’ trials, including:

  • retracted studies
  • studies with a published Expression of Concern
  • studies where there are serious questions about trustworthiness of data or findings but no formal post-publication amendment.

However, as noted in an editorial accompanying the policy, the scope of problematic studies is wide-ranging and there is no validated method to identify them (although tools such as the REAPPRAISED checklist can be useful). As more evidence becomes available and consensus emerges in this area, the guidance will need to be updated.

The scope of problematic studies is wide-ranging and there is no validated method to identify them.

With the risk of medical research fraud ultimately leading to patients being given inappropriate treatment, Smith concluded that it may be time to move away from trusting research is honest and reliable to assuming it is untrustworthy until there is evidence to the contrary.

—————————————————–

What do you think – should medical research be assumed untrustworthy until proven otherwise?

How artificial intelligence is changing the landscape of scientific communication
https://thepublicationplan.com/2021/11/30/how-artificial-intelligence-is-changing-the-landscape-of-scientific-communication/
Tue, 30 Nov 2021

KEY TAKEAWAYS

  • The use of artificial intelligence in scientific communication is rapidly expanding, with multiple applications in manuscript preparation and editorial workflows.
  • The scholarly publishing community must adapt to and embrace the use of advanced artificial intelligence.

Artificial intelligence (AI), natural language processing (NLP), and machine learning are widely employed across scholarly publishing, with a reduction in human workload a key driver of their adoption. A recent article by Dr Habeeb Razack and colleagues, published in Science Editing, examined the current and prospective impact of these technologies across the scientific publications arena. The authors concluded that greater adoption of AI in the future could increase the quality of published content as well as retrospectively improve the use of content already in the public domain.

AI is expected to play an increasingly important role in complex editorial processes, and improving AI literacy among scholarly publishing stakeholders will be important for future adoption.

The article examined the use of AI across 7 areas of scholarly publishing.

  1. Literature searching and information retrieval: In the current infodemic era, data handling is an increasing drain on time and resources. With more than 127,000 research papers published on COVID-19 alone, the ability of AI tools to extract data from large and noisy datasets is becoming increasingly important. AI tools can generate citation metrics, authenticate hypotheses, position results based on relevance, connect data from various domains and concept areas, access supplementary information, and automate systematic reviews.
  2. Manuscript preparation: Recent improvements in NLP have further enhanced the quality of AI outputs, and a number of AI-backed writing tools have entered the market. High-profile examples include Grammarly and PerfectIt™.
  3. Bibliography and citation management: In addition to established referencing software features, AI elements such as citation recommendations (wizdom.ai), analysis of citation quality (including identifying retractions; scite.ai), ‘SmartSearch’ algorithms (SciWheel), and tools to identify related publications (Connected Papers) can greatly reduce the time spent on referencing.
  4. Target journal selection: Several web-based platforms are available to assist with journal selection. Notable examples include EndNote’s Manuscript Matcher, which uses an algorithm for determining a ‘match score’, and Elsevier’s JournalFinder, which uses a ‘fingerprint engine’ and subject-specific vocabularies.
  5. Plagiarism prevention: Plagiarism has long plagued scholarly publishing, but AI tools can help by identifying content similarity. This now includes novel tools that can detect plagiarism across different languages (CopyLeaks) and identify similarity in bar charts using optical character recognition. The use of AI-supported stylometry has also been suggested as a way of identifying an individual author’s writing style.
  6. Peer review and quality assessment: NLP-driven AI approaches can help identify peer reviewers in a non-biased manner. Tools have also been developed to assess statistical errors (StatCheck) and quality (StatReviewer) in submitted manuscripts.
  7. Editorial workflow and publication production: AI has the potential to simplify editorial tasks, including technical checks (UNSILO Evaluate) and journal-specific manuscript formatting. It can also help editors triage submissions by predicting future citation counts (Meta), and improve post-publication click and retention rates (UNSILO Recommend).
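The statistical screening mentioned above rests on a simple idea: recompute a p-value from the reported test statistic and check that it matches the p-value the authors reported, allowing for rounding. The sketch below is illustrative only – it is not StatCheck’s actual implementation, and it handles just a two-tailed z test:

```python
import math

def two_tailed_p_from_z(z: float) -> float:
    """Two-tailed p-value for a z statistic, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

def consistent(z: float, reported_p: float, decimals: int = 3) -> bool:
    """True if the reported p-value matches the recomputed one after rounding."""
    return round(two_tailed_p_from_z(z), decimals) == round(reported_p, decimals)

# A reported "z = 2.50, p = .012" checks out; "z = 2.50, p = .030" does not.
print(consistent(2.50, 0.012))  # True
print(consistent(2.50, 0.030))  # False
```

A mismatch after rounding does not prove misconduct, of course – only that the reported numbers are internally inconsistent and worth a closer look.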

As AI use continues to expand in scholarly communication, Dr Razack and colleagues believe that advanced preparation will enhance AI utilisation and support the workforce by promoting human–machine collaboration. Although some professionals may be concerned that introduction of automated systems will lead to job losses, results of a 2019 survey suggest this is unlikely to occur.

—————————————————–

Will the increased adoption of AI benefit scholarly publishing?

Research integrity in the COVID-19 era: insights from Retraction Watch co-founder Ivan Oransky
https://thepublicationplan.com/2021/03/17/research-integrity-in-the-covid-19-era-insights-from-retraction-watch-co-founder-ivan-oransky/
Wed, 17 Mar 2021

Ivan Oransky has been at the forefront of efforts to highlight research integrity issues for over a decade, co-founding Retraction Watch in 2010 to track and publicise retractions in the scientific literature. Following his presentation at the 2020 European Medical Writers Association (EMWA) symposium, we spoke to him about retractions during the COVID-19 pandemic and steps he believes should be taken to tackle research integrity challenges in the future.

First of all, COVID-19 is having a huge, ongoing impact on our daily lives and on scientific research – reflected in the huge number of COVID-19-related publications. At the same time, Retraction Watch’s list of retracted COVID-19 papers continues to grow. Which of the COVID-19-related retractions to date do you think have been the most notable, and what do these cases tell us about current practice in scientific publishing?

“I don’t know that I would choose any particular COVID-19-related retraction as most notable – I suppose that’s like asking which of your children is your favourite. There are certainly the ones that gained the most attention – if I had to pick one, it would be the Lancet paper about hydroxychloroquine that was based on a very questionable (at best) dataset from a company called Surgisphere. I think that paper captured the most attention, and close behind it was a New England Journal of Medicine (NEJM) paper that was also based on those alleged data, but wasn’t about hydroxychloroquine so didn’t capture quite so many eyeballs. Those are the retractions where I think a lot of people had a Casablanca “shocked, shocked!” moment, with the idea that, somehow, this was completely different from anything that’s ever happened in science before. And that’s just nonsense – complete revisionist history.

I think it’s more important, or useful in a way, to look at the whole pattern. I wouldn’t call these data so much, but there have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.

There have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.

However, 10 of these retractions were because Elsevier published manuscripts twice when authors had submitted them only once. What that speaks to is the rush, or the fast pace, of publishing in the COVID-19 era. The fast pace isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years – although I would argue that there have been some strides in that direction.

The fast pace of publishing in the COVID-19 era…isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years.

To me, it’s not a particular retraction that’s important – rather the phenomenon that everyone’s rushing and there’s a lot of sloppiness. If anything, I’d say that the proportion of retractions due to misconduct is much lower than you might see in a typical dataset of retractions. I don’t know what to make of that yet, and it could be that people just haven’t found the cases of misconduct so far, but I think that that’s worth paying attention to. It really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.”

The proportion of retractions due to misconduct is much lower than you might see in a typical dataset…it really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.

While journals have acted quickly to retract some COVID-19-related publications, in general, the pace of investigation and retraction is very slow. However, you’ve recently highlighted a “double-standard” involving rapid retraction when papers draw negative attention on social media. How should journals prioritise their investigations to address allegations in a timely way?

“Well, I think that what journals and publishers should do is actually prioritise investigations. Although some argue that the problem is certain papers being retracted before other papers, the problem is that not enough papers are being retracted, full-stop. There are countless papers being flagged – whether that’s on PubPeer, through correspondence with journals or by scientific sleuths like Elisabeth Bik – where journals are doing nothing. Maybe they’re investigating the cases and it’s just taking them a long time – but why is it taking them so long?

One positive development over the past few years is that some journals are actually hiring entire staffs to look at allegations and to try to catch issues that might lead to retraction before articles are published. Those are the journals and publishers that I think everyone should emulate, such as the Journal of Biological Chemistry, PLOS ONE and FEBS Press.

Some journals are actually hiring entire staffs to look at allegations and to try to catch issues…before articles are published. Those are the journals and publishers that I think everyone should emulate.

So, to me, the issue is not so much whether we should retract some papers before others. The more important question is ‘why are journals not prioritising investigations, full-stop?’ If there has to be some prioritisation, then we should retract papers with fatal flaws that seem to be doing harm, or have the potential for doing harm, first. The problem is that then nobody will do anything about all of the other papers. I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade. If you give journals and publishers an excuse, or a rationalisation for why they’re not getting to something they should be getting to, you’re creating more of an issue, and journals know that.”

I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade.

Recently, Retraction Watch discussed a Scientific Reports article retracted following a post-publication peer review round requested by the Editor. Are changes to peer review processes needed to avoid this kind of retraction? Do you think increasing adoption of post-publication and open peer review processes will impact retraction rates?

“I think whether changes are needed to peer review processes depends on what your goal is. Is your goal to prevent retractions, or is it to actually have a transparent publication process that reflects how science works instead of having papers be the be all and end all in terms of promotions, tenure, and so on? I think you have to decide what your goals are, and once you’ve decided this, you can create a system that makes sense.

Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t.

Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t. In my opinion, journals have never done a good job of answering this. I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the editors were really forced to admit that their peer review systems were not well-equipped for those papers, although the journals approached this in different ways. These lessons are a good thing, but it’s not as if these issues with peer review only happen when there’s a retraction that catches everyone’s attention.

I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the Editors were really forced to admit that their peer review systems were not well-equipped for those papers.

The paper in Scientific Reports caught everyone’s attention because of what it’s about and the conclusions [the paper made links between obesity and dishonesty], but papers are slipping through like this all the time. Journals need to acknowledge this and provide their peer review reports. I do think that, even if it’s anonymised, publishing peer review comments is a good idea so you can have some faith in the process, see what happened, and believe what happened. I’m not sure that there’s an alternative to journals acknowledging the limitations of peer review processes – I think that they just have to be honest. At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true, and these cases are not unique.”

At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true.

Retractions can occur for any number of reasons, but retraction notices (if they appear at all) can be vague about the underlying cause. How should a retraction ‘ideally’ be conveyed? Is a nomenclature needed, particularly to help protect authors when the retraction is due to honest error?

“Over the years, I’ve actually grown to be increasingly opposed to a nomenclature for various ‘types’ of retraction. I think that in every case I’ve seen where nomenclature is involved, either journals make category errors or they use nomenclature as weasel words. Elsevier have used ‘withdrawn’ in certain cases (and other publishers have followed suit in some ways), and really this is an excuse or rationale not to include any information about why the paper was withdrawn or retracted. That’s a step way backwards. We all make category errors – I make category errors probably every day, but I hope I correct them. For whatever reason, the notion that what we really need is a better taxonomy has persisted – but how that is going to solve the problem of lawyers getting involved in the process and obfuscating reality, or journals not including reliable information in retraction notices, I don’t understand. It won’t help anyone if you still don’t know what actually happened.

What should actually happen – and this is borne out in the economics literature – is that retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear, as sometimes people have muddied the waters. If that’s the case, then say so: ‘we don’t know what’s happened here because lawyers on either side have been bickering for a year about this – but we feel we should tell readers anyway’. That’s a pretty honest way to go, unlike the approach of not saying anything.

Retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear.

For individual researchers, it’s very clear that if you retract a paper for fraud, dishonesty or misconduct, you have a retraction penalty, and your citations decline. Maybe your whole subfield’s citations decline as you bring everyone down with you. When you retract a paper due to honest error and the retraction notice very clearly explains this, you don’t see that decline. One study says you might even see a bump, although that hasn’t been replicated.

So, clarity in retraction notices is what’s needed. I think the notion that we can classify everything with a set of words – that will be argued about forever anyway – is the wrong way to go.”

Even after retraction, papers continue to be cited. Do journals need to do more to publicise retractions, and how can authors make sure they don’t fall into this trap?

“Again, it depends what journals want. Do they want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted? I go by the old adage ‘never ascribe to malice that which is adequately explained by incompetence’, so I’m willing to acknowledge that the lack of action from journals may be due to incompetence rather than being intentional.

Do they [journals] want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted?

There are now countless studies, conducted by librarians and bibliometrics and scientometrics scholars, showing that it can be very difficult to find that an article has been retracted. Journals and publishers are not transmitting the metadata to where they should (whether this is PubMed, Web of Science, etc) and sometimes they transmit the wrong metadata (eg they call something a correction when it’s a retraction). Even on the journal’s own pages or on the PDFs, articles often don’t show up as retracted. Journals should do more, as they’re the ones who end up publishing papers citing retracted work.

Journals should do more, as they’re the ones who end up publishing papers citing retracted work.

So, how can authors make sure they don’t fall into this trap? We created a database primarily for tracking retractions, and it’s more comprehensive than any other database of, or containing, retractions. At the moment, there are close to 25,000 retractions in our database – that’s almost twice as many as you’ll find in any other similar database. Authors can search for articles one-by-one using our database, if they want, or they can sign up for software suites and bibliographic management packages that work with Retraction Watch’s database. If you use Zotero, for example, you’ll get an automatic flag every time a paper in your library is retracted. We get notes about this on Twitter all the time from people who didn’t know it existed and find it really helpful – we’re thrilled with that. We’d love the Retraction Watch database to be incorporated into more software packages too. Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work – but these automated processes have become pretty easy to do.”

Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work.
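At its core, the automated flagging described here can be very simple. The sketch below is a rough illustration – not Zotero’s or Retraction Watch’s actual code, and the DOIs are invented – of how a reference-manager plugin might check a bibliography against a locally cached set of retracted DOIs synced from a retraction database:

```python
# Minimal sketch of reference-manager-style retraction flagging: compare the
# DOIs in a bibliography against a locally cached set of retracted DOIs.
# In practice the cache would be synced from a service such as the Retraction
# Watch database; the DOIs below are hypothetical, for illustration only.

retracted_dois = {
    "10.1234/retracted.trial",  # hypothetical retracted paper
}

bibliography = [
    {"title": "A fine study", "doi": "10.1234/good.paper"},
    {"title": "A withdrawn study", "doi": "10.1234/retracted.trial"},
]

flags = [ref for ref in bibliography if ref["doi"].lower() in retracted_dois]
for ref in flags:
    print(f"WARNING: cited work appears to be retracted: {ref['title']}")
```

The hard part is not the lookup but keeping the cache complete and current – which is exactly the metadata problem Oransky describes.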

The extent and sophistication of journal targeting by paper mills and scams is ever-increasing. From your perspective, what can be done to tackle this problem and future-proof publishing processes against these attacks?  

“To me, this really takes a two-pronged approach. One prong is to tackle what we know is out there that no-one has seen fit to tackle yet. iThenticate and other software that looks for plagiarism and duplication follow this model: journals and publishers realised there was a lot of plagiarism, someone developed some software, and now everyone uses it. The same could be done with our database of retractions. Right now, we don’t have a good set of software tools that can detect image manipulation or image duplication, for example. We have individuals including Elisabeth Bik who are doing amazing work, but that’s not really scalable and we need a scalable solution. However, these solutions are only looking to fight yesterday’s battles. Meanwhile, the people who came up with these bad practices are coming up with more ‘clever’ approaches and we won’t know what those are until they explode. So, all of this fits into one prong – rooting out problems once we know they exist.
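For text, at least, the core of similarity screening is well understood. The sketch below shows the classic word-shingling approach scored with a Jaccard index – illustrative only, not the algorithm any particular vendor uses:

```python
# Illustrative core of text-similarity screening: split each document into
# overlapping word n-grams ("shingles") and compare the sets with the
# Jaccard index (size of intersection over size of union).

def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 3) -> float:
    sa, sb = shingles(a, n), shingles(b, n)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the trial was randomised and double blind with two arms"
suspect  = "the trial was randomised and double blind with three arms"
print(f"similarity: {jaccard(original, suspect):.2f}")  # similarity: 0.60
```

Production tools add normalisation, indexing at scale, and paraphrase handling on top of this idea, but the principle is the same – which is also why, as noted above, detecting image duplication is a much harder, still-unsolved version of the problem.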

We also need to take a step back and move upstream to what the real issue is, which is the incentive structure. If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals. If you remove that incentive, then nobody’s going to feel a particular need to fake papers, go to a paper mill, or anything else.

If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals.

It’s probably no accident that paper mills tend to be concentrated in places, particularly China, where the incentive structure has been completely warped towards papers for so many years. If we don’t look at these incentive structures, every year or so, another scam will come out.

If we don’t look at these incentive structures, every year or so, another scam will come out.

We wrote about fake peer review back in 2012 – it turns out this hasn’t been eradicated, although it is now easier to detect and has been cut down. We broke a story about selling authorship in Russia, we’ve reported on paper mills – there’s just always something, and there’s always going to be something else. I don’t have the kind of mind to think up what will be next, although I can often find it once it happens thanks to sources like the scientific sleuths. None, or very little, of this will happen if we remove the very pervasive and poisonous incentive structures we have at the moment.”

As noted in the 10 takeaways from 10 years at Retraction Watch, pharma-funded publications account for a low proportion of retractions. You’ve noted that this is unsurprising given the increased scrutiny in pharma versus academia – what changes should academia make to reduce retraction rates? 

“Maybe this is controversial, but I don’t know that we should (certainly in the short or medium term) push to reduce retraction rates. If we mean reduce retraction rates as a proxy for reducing ‘bad behaviour’ – sloppiness or even misconduct – then yes, we should take measures to try to prevent that or to detect it better. There are still a lot of papers that should be retracted but haven’t been, so I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse. I do think that there are lots of steps that academia can take to try to cut down on these bad behaviours – this goes back to incentives, in a large part.

I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse.

On the flipside, I don’t think that we should absolve pharma-funded publications of bad behaviour or misconduct. For those sorts of papers, studies can be set up in such a way as to get the desired results, but this is not something that would be considered misconduct or would be a ‘retractable offence’. There are gatekeepers and hoops that studies need to jump through (like Institutional Review Boards), but we shouldn’t assume that those systems are perfect.

Both settings have a lot of work to do – in academia you see behaviours that are ‘retractable offences’ while in pharma, that’s not the case, but research practices can have other negative effects. If universities are interested in lowering the rates of misconduct in their ranks, they need to look inwardly and examine whether they’ve created incentive structures that reward good or bad behaviour.”

Finally, in your opinion, what is the biggest challenge to research integrity right now, and how can this be overcome?

“I’m going to sound like a broken record, but I do think that incentives are my main concern and the thing that needs the most attention. That being said, one of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.

One of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.

You want constructive criticisms and critiques in science – you don’t want them to be ad hominem attacks. The critiques should help move the science and the evidence to a better place. Often, the most critical peer reviews are not necessarily of the papers that are most problematic (or frankly those that shouldn’t have been considered for publication in the first place), but are of papers that disagree with your point of view. I guess there’s a tribalism that cuts in every which way, whether it’s scientific, political, or due to the family tree of where and who you trained with. You end up with a lot of people shouting at each other and ‘creating heat without shedding a lot of light’. In the same way, social media has amplified and exacerbated a lot of issues in terms of politics, world events, conspiracy theories and what have you. Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.

Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.

I’m all for free speech – I think everyone should feel free to speak their mind and I encourage that, even when they disagree with me – but if we don’t figure out how to get away from this tribalism, we’re just going to polarise science even more. If we couple that with all the issues science is facing, whether it’s a real lack of funding, or publish-or-perish incentives, it’s not going to go well.”

Ivan Oransky is Editor in Chief of Spectrum, Distinguished Writer In Residence at New York University’s Carter Journalism Institute, and President of the Association of Health Care Journalists. He is also co-founder of Retraction Watch, which can be followed on Twitter @RetractionWatch. You can contact Ivan at team@retractionwatch.com and follow him on Twitter @ivanoransky.


——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd


Research integrity across the Atlantic: our summary of the first Biomedical Transparency Summit series webinar
https://thepublicationplan.com/2021/03/04/research-integrity-across-the-atlantic-our-summary-of-the-first-biomedical-transparency-summit-series-webinar/
Thu, 04 Mar 2021

Last week, the Center for Biomedical Research Transparency (CBMRT) hosted the first of three webinars forming this year’s virtual Biomedical Transparency Summit series. The webinar, entitled ‘Research integrity – developments across the Atlantic’, was opened by the CBMRT’s CEO Sandra Petty (recently interviewed by The Publication Plan) and speakers included Professor Ana Marušić (Standard Operating Procedures for Research Integrity [SOPs4RI]) and Dr Michael Lauer (National Institutes of Health [NIH]).

Professor Marušić spoke about the importance of research ethics and integrity, which together contribute to ‘responsible research’. She also shared the ongoing efforts to develop the SOPs4RI toolbox, funded by the European Commission, which aims to assist research-performing and funding organisations to promote research integrity. SOPs4RI have found that few data exist about how institutions can effectively improve research culture, but have also identified many potential actions that can be taken.

While highlighting diverse examples of research misconduct, Dr Lauer discussed the different stakeholders responsible for ensuring research integrity and discouraging misconduct, emphasising that everyone plays a role. He noted that the NIH have previously clarified that institutions receiving funding are responsible for ensuring that their employees (and final funding recipients) adhere to research best practices, such as disclosing conflicts of interest and preventing issues like falsification of data and plagiarism.

Further topics of discussion included:

  • how collegiality impacts research integrity
  • the role of authors and peer reviewers in spotting research misconduct
  • whistleblower protections in research.

The webinar concluded with a panel discussion moderated by Dr Devon Crawford (National Institute of Neurological Disorders and Stroke) and Dr David Tovey (Journal of Clinical Epidemiology).

You can catch up on the webinar in full by viewing the recording or the slides. You can also read our summaries of the second and third webinars in the series.


——————————————————–

Summary by Kristian Clausen MPH from Aspire Scientific


Research integrity: putting principles into practice
https://thepublicationplan.com/2021/03/02/research-integrity-putting-principles-into-practice/
Tue, 02 Mar 2021

Misconduct in medical research has the potential to mislead the scientific community which, in the worst cases, can have major repercussions on patients. Such misconduct can include fabrication, falsification, plagiarism and the emerging trend for ‘post-production misconduct’. In addition to these examples of scientific fraud, a lack of transparency, reproducibility and replicability in medical publications may also affect research integrity.

While there have been several key declarations on the principles of research integrity (such as the European Code of Conduct for Research Integrity), occasional high-profile cases of misconduct still occur. The reasons behind misconduct in medical research have been well documented. As outlined in an editorial by Prof Lee Harvey, it is a long-term problem associated with the immense pressure researchers are under to publish articles that attract funding, which has led to the so-called ‘publish or perish’ mentality. This research environment has been compounded by the traditional citation-based metrics long adopted by the scientific community.

In order to combat misconduct, attention is now turning towards how organisations can translate the principles of research integrity into practice. As highlighted in an editorial by Prof Niels Mejlgaard and colleagues, the EU’s next research funding programme will confirm a strong commitment to research integrity. The authors note:

“It is expected that institutions receiving funding from the €81-billion (US$96-billion) programme will be required to have clear plans and procedures in place for research integrity.”

To evaluate which topics organisations should address in their plans to promote research integrity, Mejlgaard et al conducted a study as part of the Standard Operating Procedures for Research Integrity (SOPs4RI) project. They identified nine key areas that should be considered:

  • Research environment: ensure fair assessment procedures and prevent hypercompetition and excessive publication pressure.
  • Supervision and mentoring: create clear guidelines and set up training and mentoring for PhD supervisors.
  • Integrity training: establish training and counselling for researchers.
  • Ethics structures: establish review procedures that accommodate different types of research.
  • Integrity breaches: formalise procedures that protect whistle-blowers and those accused of misconduct.
  • Data practices and management: provide training, incentives and infrastructure to curate and share data according to FAIR principles.
  • Research collaboration: establish rules for transparent working with industry and international partners.
  • Declaration of interests: state conflicts in research, review and other professional activities.
  • Publication and communication: respect authorship guidelines and ensure openness and clarity in public engagement.

Research integrity recommendations, together with procedures and other resources, are accessible through the SOPs4RI website. Over the next few years these will be refined; the authors urge readers to provide views, concerns, and examples of best practice to help tailor these resources. While the vast majority of research is undoubtedly honest, tools and resources such as those highlighted by SOPs4RI may be needed to help organisations implement integrity principles and improve research.


——————————————————–

Summary by Josh Lilly PhD from Aspire Scientific

——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd

