Big data – The Publication Plan
A central online news resource for everyone interested in medical writing, the development of medical publications, and publication planning. https://thepublicationplan.com

AI-accelerated innovation: how can publishers keep up?
Wed, 15 Jan 2025

KEY TAKEAWAYS

  • AI use in scientific research is increasing productivity, as well as the size and complexity of datasets.
  • Adopting AI tools could enable publishers to streamline the peer review process and safeguard against the circulation of flawed data.

Artificial intelligence (AI) is transforming scientific research and increasing productivity. But how can publishers keep up with the consequent surge in submissions, when peer reviewers are already at capacity and the current system may not be fit for purpose? In a recent article for the London School of Economics Impact Blog, Simone Ragavooloo calls on publishers to harness AI themselves — to streamline peer review and to safeguard the integrity of published data.

Can AI-enabled peer review match increased scientific output?

The Organisation for Economic Co-operation and Development’s 2023 Artificial Intelligence in Science report states that “raising the productivity of research could be the most economically and socially valuable of all the uses of AI”. To realise this potential, however, all steps of the research-to-publication process must align. Ragavooloo argues that publishers must “meet like with like”, utilising AI to streamline the peer review process. For example, Ragavooloo envisions AI doing the “heavy lifting” in areas like statistical analysis, where lack of expertise or statistical training can be a limiting factor for reviewers. This would free up human reviewers to focus on aspects requiring greater human insight.

Protecting scientific discourse: can AI catch faulty data?

AI is producing increasingly large and complex datasets. This brings an increased risk of error, which, if unchecked, could lead to widespread dissemination of faulty big data. It also points to another role for AI: it can identify methodological or statistical errors within vast quantities of information at a rate that is simply impossible for humans. While tools such as Frontiers’ Artificial Intelligence Review Assistant (AIRA) and the STM Integrity Hub are already available to help reviewers triage submitted articles, Ragavooloo believes there is still an unmet need for AI-assisted peer review applications to ultimately prevent the circulation of flawed data.

AI can identify methodological or statistical errors within vast quantities of information at a rate that is simply impossible for humans.

Looking ahead

While recognising we are in a transitional phase, Ragavooloo emphasises that publishers “have the scale and technological expertise” to develop more AI tools, calling on them to put their trust in AI and create “an open path forward” for AI-driven innovation.

————————————————–

What do you think – should publishers develop AI-assisted peer review tools?

How does failure to falsify influence the reliability of scientific research?
Fri, 23 Jun 2023

KEY TAKEAWAYS

  • Failure to test and refute prominent hypotheses reduces confidence in the reliability of scientific results and hinders scientific progress.

Across many scientific fields there is a well-documented reproducibility crisis that is damaging trust in the reliability of research data. In a recent article published in eLife, Dr Sarah Rajtmajer and co-authors discuss how failure to falsify (refute) strong hypotheses through direct testing has contributed to the problem.

As a case study, the authors highlight two prominent and seemingly contradictory hypotheses in the field of connectomics:

  • Hyperconnectivity hypothesis: brain injury results in an enhanced functional network response.
  • Disconnection hypothesis: brain injury results in reduced functional connectivity.

Instead of deliberate attempts to challenge either of these positions, the research area has seen the publication of a large number of small studies examining under-specified hypotheses, which has done little to bring clarity to the existing body of literature. The authors argue that the ‘science-by-volume’ culture, coupled with the overuse of inappropriate statistical tests and lack of falsification attempts, fosters a research environment in which the quantity of scientific findings continues to grow, but the depth of understanding remains stagnant.

The article calls out the big data revolution as a factor adding to these concerns. The ability to analyse large datasets in different ways can produce false or coincidental correlations, particularly if the statistical methodologies used are not robust.

The strongest hypotheses are specific, easily testable, and clearly indicate the evidence needed to disprove their predictions.

According to Rajtmajer et al., the strongest hypotheses are specific, easily testable, and clearly indicate the evidence needed to disprove their predictions. The authors suggest embracing a ‘team science’ approach, where groups of scientists work together to form opposing hypotheses, design experiments to test them, and agree on the outcomes that would support or refute them.

Implementing a falsification approach, whereby every observation confirms or refutes a hypothesis, would be challenging in everyday research practice. However, the authors believe that regular attempts to falsify a hypothesis could guide the direction of scientific research and enhance the reliability of published science, particularly if combined with other processes aimed at improving data transparency.

Regular attempts to falsify a hypothesis could guide the direction of scientific research and enhance the reliability of published science.

—————————————————–

Could placing a greater emphasis on hypothesis testing and falsification help solve the reproducibility crisis in scientific research?

Language-generating AI in science: transformational or deformational?
Thu, 13 Oct 2022

KEY TAKEAWAYS

  • Language-generating artificial intelligence could have an empowering impact in science, but a lack of transparency and the oversimplification of complex data could threaten scientific professionalism.
  • Authors call on government bodies to enforce systematic regulation to help realise the potential of large language models in science.

Large language models (LLMs) are artificial intelligence algorithms that recognise, summarise, and generate human language from very large text-based datasets. LLMs could well empower scientists to draw information from big data; however, researchers from the University of Michigan are concerned that without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

A recent report examined the potential social change brought about by LLMs. In a subsequent Nature Q&A, the report’s co-author, Professor Shobita Parthasarathy, described the impact of LLMs in the scientific disciplines. She highlighted the potential for LLMs to help large scientific publishers to automate aspects of peer review, generate scientific queries, and even evaluate results, but cautioned that without systematic regulation, LLMs could exacerbate existing inequalities and oversimplify complex data.

Without appropriate regulation, LLMs could threaten scientific professionalism and intensify public distrust in science.

Developers are not required to disclose the accuracy of an LLM, and the models’ processes are not transparent, meaning that users could be unaware that LLMs can make errors, include outdated information, and remove important nuances. Furthermore, readers are unable to distinguish LLM-generated text from human-generated text, meaning the technology could be employed to distribute misinformation and generate fake scientific articles.

For the potential of LLMs to be realised in science, Prof Parthasarathy calls on government bodies to enforce transparency in their use, stipulating that those who develop LLMs should disclose the models’ processes and make clear where LLMs have been used to generate an output.

—————————————————–

Do you think large language models could benefit science if appropriately regulated?

ISMPP U previews Annual Meeting with standalone session – Data privacy regulation: Which way forward?
Tue, 02 Jun 2020

With the Annual Meeting of the International Society for Medical Publication Professionals (ISMPP) taking place as a virtual event this year, Jon Bigelow of the Coalition for Healthcare Communication presented the second of two standalone ‘preview sessions’. This session focused on the future direction of data privacy laws in the USA.

Big data has the potential to revolutionise healthcare

Big data is an important part of the modern world, with the potential to not only augment the personal user experience, but also revolutionise the way in which technology companies interact with their customers. From the healthcare perspective, patient data from wearables combined with improved algorithms may unlock new insights into disease development and diagnosis. Topically, personal data collected en masse is proving critical in facilitating public health initiatives such as monitoring seasonal influenza outbreaks, or more recently in response to the ongoing coronavirus pandemic, in government-backed ‘track and trace’ apps.

There is low public confidence that technology companies sufficiently protect consumer data

Unfortunately, as highlighted by Bigelow, public confidence in large tech companies to safeguard their data has been eroded by unexpected uses of private user data, such as the high-profile case involving Cambridge Analytica and Facebook user data. Indeed, a poll conducted by the Pew Research Center reveals a high level of mistrust surrounding data privacy, with 79% of those surveyed expressing concern about the way companies use data. Government mobile phone tracking apps introduced to help combat the coronavirus outbreak have exacerbated this mistrust, with Bigelow pointing to a recent New York Times article on how COVID-19 surveillance data could be exploited for other purposes.

Current regulation in place in the USA and the rest of the world

Before considering new data protection legislation, Bigelow examined what lessons could be learned from the laws currently in place in the USA and beyond. The 2018 European General Data Protection Regulation (EU GDPR) and California Consumer Privacy Act (CCPA) introduced important concepts in data privacy, such as an expanded definition of personal data, greater consumer consent on data storage, including opt-in, and severe penalties for non-compliance (up to 4% of annual global turnover under the GDPR). While the rest of the world moves to a consent-based framework, the USA remains an outlier with its fragmented sector-based regulation and reliance on self-regulation. Bigelow also noted that much of current US legislation is out of date, having failed to anticipate today’s digital data usage and the threat of cyber-attack. For example, the Health Insurance Portability and Accountability Act, which covers electronic health care transactions, originally dates back to 1996.

One of the limitations of the EU GDPR is the onus it places on consumers to provide consent. Data opt-in often involves the review of lengthy and abstruse terms and conditions that demand a high level of literacy. In reality, only 9% of US consumers report always reading the terms before clicking “I accept”, which raises the question of whether opt-in truly represents informed consent. In addition, technology such as facial recognition software may negate the practicality of informed consent.

Further challenges of data protection include the potential conflict between individual best interests and public health policy. Initiatives such as OpenTrials and the YODA Project are striving to increase clinical trial data availability and transparency, yet in certain circumstances, for example with studies in patients with rare diseases, full publication of trial data can be difficult to reconcile with the need to protect the privacy of a potentially identifiable patient population.

The future of data privacy laws in the USA

Bigelow went on to describe his vision of the future of data protection laws in the USA. Multiple laws focusing on different areas of concern have been proposed by US senators, many of which aim to address the challenges of regulating data usage by large social media platforms.

Other proposed acts aim to protect consumers more broadly.

The coronavirus pandemic has also prompted new proposals, such as the introduction of limits on how health tracking information garnered as part of the public health emergency can be used and for how long.

Notably, there appears to be bipartisan agreement in the US Congress that new data protection laws are needed. Implementation at the federal level, such as the Privacy for America initiative that would develop a new data protection bureau within the Federal Trade Commission, would address the cross-state nature of data privacy. This initiative aims to shift the burden of data protection away from individuals towards the government. The proposal includes aspects such as the right to access and delete data, as well as increased control over the use of data. Although COVID-19 has delayed the passing of new legislation, it is expected that new US data protection laws will be enacted during the 2021–2022 Congress.

The virtual 16th Annual Meeting of ISMPP will take place 16–18 June, 2020.

——————————————————–

Summary by Julianna Solomons PhD, CMPP from Aspire Scientific

——————————————————–

With thanks to our sponsors, Aspire Scientific Ltd and NetworkPharma Ltd


Meeting report: summary of day 1 of the 2019 European ISMPP Meeting
Thu, 31 Jan 2019

The 2019 European Meeting of the International Society for Medical Publication Professionals (ISMPP) was held in London on 22–23 January and attracted more than 300 delegates – the highest number of attendees to date. The meeting’s theme was ‘Scientific Communications in a Fast-Paced World: Fighting Fit for the Future’ and the agenda focused on innovations in data publishing, open access, patient involvement in publications, and the expanding role of the publication professional. Industry newcomers had the opportunity to attend a satellite training session and all delegates were treated to two keynote addresses, lively panel discussions, interactive roundtables and parallel sessions. Delegates also had the chance to present their own research in a poster session.

A summary of the first day of the meeting is provided below for those who could not attend, and as a timely reminder of the highlights for those who did. A summary of the second day of the meeting is available here.

Welcome and warm up/year in review: the 2018 track record

The plenary sessions began with a review of the key events that occurred in medical publishing over the course of 2018, presented by Rick Flemming (The Publication Plan). Flemming revealed that open access, data sharing and transparency, and patient centricity were key industry themes reported by The Publication Plan last year. For example, 2018 saw an increased commitment to open access across the community, notably with the introduction of a mandatory open access publishing policy for Shire-funded research in January, and later, the introduction of an open access option for company-funded research in American Society of Clinical Oncology (ASCO) journals. In September, cOAlition S was launched, an alliance of national research funding organisations working together to implement Plan S, which aims to make publications of publicly funded research freely available to all by January 2020.

In February, TrialsTracker launched a new tool aimed at monitoring compliance with the FDA Amendments Act (FDAAA) 2007 and the ‘Final Rule’. The EU Trials Tracker and The BMJ’s unreported clinical trial of the week feature were subsequently launched, which also highlight academic and industry sponsors that are failing to report the results of clinical trials. New requirements for International Committee of Medical Journal Editors (ICMJE) member journals came into effect in July, mandating inclusion of a data sharing statement concerning deidentified individual participant data in clinical trial publications. Further to this, November saw Wellcome announce data re-use prizes to encourage the extraction of new scientific findings and insights from existing data. Flemming also mentioned the introduction of the EU General Data Protection Regulations (GDPR) and the Food and Drug Administration Modernization Act of 1997 – Section 114 (FDAMA 114) update, topics discussed in a recent article in the MAP newsletter.

The increasing focus on patient involvement in publications was also noted. In September, The BMJ recounted the journal’s positive experiences of involving patients in their editorial process, while a survey of patient and public peer reviewers for The BMJ and Research Involvement and Engagement revealed overwhelming support for patient and public review as well as identifying ways to improve the experience. Finally, the impact of the AMWA–EMWA–ISMPP Joint Position Statement on the Role of Professional Medical Writers and updates to the ICMJE’s Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals were discussed.

Harnessing the power of evidence in a data-led future

The second session of the day looked at how best to channel big data into meaningful content. Moderator Gordon Muir-Jones (Porterhouse Medical US) emphasised that data are meaningless unless interpreted into something actionable, and that inclusion of all pertinent data is important for accuracy. Initiatives such as AllTrials, which campaigns for the reporting of all clinical trial methods and summary results, are just one way in which the volume of accessible medical research data is increasing. In an era of big data, the medical publishing industry needs to address the challenge of how best to disseminate research results.

Valerie Philippon (Shire/Takeda) introduced data sharing as another way in which the pool of available data is widening. She presented the ICMJE’s data sharing statement policy, which calls upon authors to specify not only whether data will be made available, but also to provide details of the data sharing plan. This includes: whether deidentified patient data will be available; what data will be shared; what documents will be available; when data will be available (start and end dates); with whom data may be shared; for what type of analyses; and how data sharing requests should be made. Shire has already stated its commitment to sharing data from company-sponsored research and shared data with 5/7 requesters in 2018. This open approach is not without its challenges: for data from clinical trials involving patients with rare diseases, Philippon reminded delegates of the need to carefully evaluate each request against any risk to patient privacy.

Keith Evans (InScience Communications) explained how the repertoire of data sources expands even further with consideration of real world data. Evans defined this as all data related to health and use of healthcare that are not derived from clinical trials. He emphasised the strengths of real world data and real world evidence, such as the inclusion of a diverse and representative patient population, and highlighted the importance of such data in pharmacovigilance. Furthermore, real world evidence can harness the power of big data and the digitisation of health data, making it easy to collect in large quantities. Potential weaknesses were also identified, such as a lack of recognised common data elements, issues with ownership and privacy, and the potential for bias. One of the main barriers to widespread adoption of real world evidence in healthcare decision making is its current exclusion from regulatory criteria and an entrenched perception of randomised controlled trials as the accepted tool for evaluating healthcare interventions.

Leading on from the exploration of data sources, Tom Rees (OxfordPharmaGenesis) examined ways to maximise the impact of data. Firstly, he underlined the importance of publications as “more than data disclosure”, emphasising the need to explain and contextualise the data. Ancillary activities to publications, such as explanatory videos, infographics, and plain language summaries were cited as valuable tools for reaching a wider audience than via publication in a journal alone. His vision for journal articles as the anchor point to a wider ‘study ecosystem’ was welcomed by delegates as a way to connect multiple outputs from the same study. Navigating this system could utilise metadata tools such as Crossref and PubMed-style automated indexing to signpost related content. Despite the complexities of linking all outputs from a particular dataset, using diverse platforms for data dissemination was viewed as an important way to tailor outputs to different target audiences.

Keynote: PLAN S for open access – what does it mean for scholarly publishing and our industry?

The first of the meeting’s keynote talks centred around Plan S – a hot topic since the announcement in September of the European cOAlition S initiative, which aims to make all publicly funded research freely available by January 2020. This insightful session provided the perspectives of both cOAlition S and a non-profit publisher.

David Sweeney (Research England) kicked things off by recapping the ultimate goal of Plan S: to achieve “full and immediate” open access to publications from publicly funded research. He argued that this will require a paradigm shift towards new models of scholarly publishing that are “more transparent, efficient and fair”; proposals that to date have been met with some controversy. Despite this, Sweeney described a common aspiration among the research community to make research outputs as widely read and disseminated as possible, and encouraged stakeholders to help identify a sustainable model by which to make this a reality. Three possible roads to compliance with Plan S were outlined: 1) Open access journals or platforms (registered with the Directory of Open Access Journals); 2) Deposition of scholarly articles in open access repositories without embargo; 3) Hybrid journals only if under transformative agreements (i.e. with a commitment that the journal will transition to full open access). Open access publication fees (such as article processing charges) were also a topic of discussion. Sweeney outlined that article processing charges should be transparent and fairly reflect the costs involved in publishing a quality open access article, but called for the importance of ‘fee-free’ open access to be recognised.

A non-profit community journal perspective was provided by Claire Moulton (The Company of Biologists). A key takeaway from this presentation was the readiness of such journals to embrace innovation and change; Moulton encouraged an open dialogue in which stakeholders should look to the future and find the best ways forward. A number of key concerns were discussed, including the much-debated stance of Plan S against hybrid open access journals. While Moulton expressed support of hybrid journals as a method of transitioning to open access, she called for publishers to address the issue of ‘double dipping’ (taking both subscription and open access revenue for the same content), and for funders to consider payment of realistic article processing charges for quality publishing. Moulton also expressed concern around the proposed timelines for Plan S, highlighting that not all journals will transition at the same speed. It was suggested that, rather than setting a deadline, Plan S could work with journals to set target transition percentages. Overall, Moulton called for stakeholders to overcome hurdles together to reach the common objective of maximising the dissemination of research outputs.

So, what does the future look like? cOAlition S invites public feedback on the Plan S implementation guidance by 8 February 2019.

Boxing clever: the expanding role of the publication professional 

Jackie Marchington (Caudex) chaired a panel considering the value of publications to different stakeholders. The perspectives of healthcare professionals, payers and the pharmaceutical industry were represented by Pali Hungin (Newcastle University; former President of the British Medical Association), Chris Skedgel (IQVIA) and Clare Baker (Bayer), respectively.

Hungin highlighted the changing trends in how healthcare professionals access publications, moving from browsing paper journals to more selective reading of electronic outputs, often accessed via journal alerts or search services such as PubMed. He emphasised how generalist journals such as The BMJ or JAMA are the first port of call for many clinicians, followed by their individually favoured specialist journals. Increasingly, conference presentations and proceedings, as well as industry-led symposia, are key ways for healthcare professionals to remain up to date with research developments. Hungin identified systematic reviews and meta-analyses as important resources, although noted that interpreting the applicability of these studies, which are based on randomised controlled trials of specific populations, is a challenge and must be supported by considering real world evidence. Finally, Hungin encouraged open peer review as a way to help healthcare professionals to identify potential pitfalls in published research, cautioned against the danger of poor research becoming legitimised through publication in predatory journals, and highlighted that in the current open access environment a revolution in the ‘democratisation of knowledge’ is redefining doctor–patient relationships.

Skedgel presented the importance of publications, including health technology assessments (HTAs) and systematic reviews, from the payer/assessor perspective. Skedgel illustrated how well-structured abstracts are crucial: 80–90% of publications initially identified by keyword-based search strings are screened out at abstract review during systematic literature searches. He also highlighted how data extraction during these searches could be facilitated by the standardisation of publications, such as inclusion of predefined, consistent tables for study characteristics and outcomes. Currently, data extraction frequently involves digitisation of figures, imperfectly extracting data that are used in reanalysis. As such, consistent deposition of original data in databases would be welcomed by payers/assessors. Skedgel suggested that publishers could also facilitate the HTA process by including reporting quality checklists with publications. Skedgel emphasised that from the payer/assessor perspective, publications are viewed as a vehicle for accessing data. Ensuring that data are presented suitably for inclusion in a systematic review helps to ensure their presence in analyses of this type which in turn inform health policy.

From the pharmaceutical industry perspective, Baker described how publications fulfil multiple purposes, including highlighting unmet needs, adding to the evidence base for a specific drug, making cost-effectiveness assessments, fulfilling reporting criteria, providing citable references for scientific communications, and showing leadership in disease understanding or management. Strategic, aligned and focused multi-channel medical communications were perceived as key across Bayer stakeholders. It was recognised that publications reporting clinical trial results alone may not lead to clinical uptake of an intervention, since clinicians require further information concerning real world implementation. Baker suggested that publication planning teams should be ‘rebranded’ to better suit the future, becoming more cross-functional and visible elements of pharmaceutical companies, with proactive, forward-thinking roles in driving strategic scientific communication.

The panel received a number of questions from the audience, including ‘Are reports of randomised controlled trials dead?’. While the unanimous verdict was no, the divergent value of these publications was emphasised, with context reported as crucial for healthcare professionals, and data the essential element for payers. Finally, the potential of artificial intelligence in the future of medical communications was raised: artificial intelligence could facilitate screening of randomised controlled trial and real world evidence publications, aiding the systematic literature review process and helping to deliver the vision of personalised and precision medicine for patients.

Team talk: our ISMPP update

Towards the end of the day, Debby Moss (Caudex; ISMPP CMPP™ recertification committee) and Al Weigel (ISMPP President/Chief Operating Officer) updated delegates on the Certified Medical Publication Professional (CMPP™) qualification and the most notable ISMPP 2018/2019 highlights.

Moss opened her presentation by highlighting that ISMPP is celebrating 10 years of the CMPP™ qualification. Delegates and the Certification Board were thanked for their support of the programme and continued commitment to best practice across the medical publications industry. Moss went on to outline key updates to the certification and recertification processes. For new CMPP™ candidates, the Candidate Handbook (updated October 2018) now includes three new sample examination questions and new scoring information. A mentor programme is also available to assist with preparation for the exam. For those already certified, the CMPP Recertification and Credit Tracker Handbook (updated December 2018) now includes self-study qualifying activities. A new online learning management system is in development. Upcoming activities include a CMPP™ survey (due January/February 2019) and a recertification webinar (February/March 2019).

The session was rounded off by Weigel, who underlined the substantial growth of ISMPP over the past 15 years, both in terms of the number of members and the vision of the society itself. The most notable highlights of 2018/2019 were:

  • Initiation of an ISMPP open access white paper – aims to provide a comprehensive, multi-stakeholder discussion on open access medical publishing.
  • Advancement of an Authorship Algorithm Task Force – created to better understand experiences and challenges associated with the authorship of publications.
  • Development of an AMWA–EMWA–ISMPP joint position statement on predatory publishing.
  • The first ISMPP West Meeting was held in 2018 in San Diego, California. Following the success of this event, the second meeting will take place in November of this year.
  • At the end of 2018, ISMPP announced a new partnership with Pharma Collaboration for Transparent Medical Information (phactMI™), a non-profit association of medical information specialists in the pharmaceutical industry. This partnership will kick off with a half-day educational meeting immediately following April’s 2019 ISMPP Annual Meeting.

Power walk: poster discussions

Day one was rounded off with poster presentations from ISMPP members, showcasing the depth and breadth of the research carried out within the community.

Watch this space: the day 2 summary is coming soon!

——————————————————–

By Aspire Scientific, an independent medical writing agency led by experienced editorial team members, and supported by MSc and/or PhD-educated writers

——————————————————–

With thanks to our sponsors, Aspire Scientific Ltd and NetworkPharma Ltd


]]>
Focusing on big data: is seeing believing? https://thepublicationplan.com/2018/04/03/focusing-on-big-data-is-seeing-believing/ Tue, 03 Apr 2018 16:58:19 +0000

Both honest mistakes and the deliberate manipulation of data can affect the quality of published research. In a recent Forbes article, Kalev Leetaru delves into how bad data practice impacts scientific publishing.

The research and publishing communities prioritise new discoveries, but this can be at the expense of full data documentation and validation. Leetaru suggests that this is particularly the case in the age of ‘big data’, where large datasets can be misunderstood in the race to a breakthrough. He classifies the current status of bad data practice under five broad themes and suggests possible solutions:

  • Honest statistical/computing error. Even a simple calculation error in a spreadsheet can drastically alter the understanding of a particular dataset. Statistical review processes may identify such errors, but only full disclosure of raw data, software and workflows can ensure they become known.
  • Honest misunderstanding of data. This can include a failure to understand the limitations of particular data sources, such as solely utilising English language Western-origin news sources to study global trends. The conclusions being drawn from such data may be statistically sound, yet largely irrelevant to the question being posed.
  • Honest misapplication of methods. Powerful statistical and analytical software packages may be freely available, but if used by researchers unfamiliar with their applications and limitations, the output may be unreliable. Only full documentation of the specific tools, algorithms and parameters can allow such errors to be identified.
  • Honest failure to normalise. This can be an issue in media analyses; for example, reporting changes in the number of news articles published on a specific topic over time is meaningless without also reporting changes in the total number of published articles over the same time period.
  • Malicious manipulation. Image doctoring and deliberate data falsification are two particularly egregious examples of alleged data fraud, and highlight the need for journals to be more vigilant for the possibility of manipulated data.
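
The normalisation point can be made concrete with a toy example (the figures below are invented for illustration, not drawn from Leetaru’s article): raw counts of topic coverage can double while the topic’s actual share of publishing output falls, if total output grows faster.

```python
# Illustrative sketch of why normalisation matters in media analyses.
# Hypothetical figures: articles on one topic vs. all articles published.
topic_articles = {2015: 120, 2016: 180, 2017: 240}
total_articles = {2015: 10_000, 2016: 20_000, 2017: 40_000}

for year in sorted(topic_articles):
    # The normalised share, not the raw count, reflects relative attention.
    share = topic_articles[year] / total_articles[year]
    print(f"{year}: {topic_articles[year]} articles, {share:.2%} of output")
```

Here the raw count doubles (120 to 240) yet the normalised share halves (1.20% to 0.60%), illustrating how an unnormalised trend can invert the true picture.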

Leetaru notes that errors can also propagate through scientific publishing as authors copy and paste incorrect information from one paper into another. As poor data practice can result in the broad acceptance of questionable conclusions as fact, Leetaru appeals to journals to take action and adopt dedicated data review processes to eliminate these (mostly unintentional) errors.

——————————————————–

Summary by Julia Draper, DPhil

Julia Draper is a biomedical researcher and freelance writer. Her postdoctoral research background is in leukaemia biology and developmental haematopoiesis. Julia is open to being contacted regarding career opportunities in medical communications at julia.draper@gmail.com.


]]>