This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIRx Med, is properly cited. The complete bibliographic information, a link to the original publication on https://med.jmirx.org/, as well as this copyright and license information must be included.
An infodemic is an epidemic of information that can lead people to engage in dangerous behaviors. Although its most striking manifestations have occurred on social media, some studies show that dismisinformation is significantly influenced by numerous additional factors, both web-based and offline. These include social context, age, education, personal knowledge and beliefs, mood, psychological defense mechanisms, media resonance, and how news and information are presented to the public. Moreover, various incorrect scientific practices related to disclosure, publication, and training can also fuel such a phenomenon. Therefore, in this opinion article, we seek to provide a comprehensive overview of the issues that need to be addressed to bridge the gap between science and the public and build resilience to the infodemic. In particular, we stress that the infodemic cannot be curbed simply by disproving every single piece of false or misleading information, since belief systems and cultural or educational backgrounds are chief factors in the success of fake news. For this reason, we believe that the process of forming a critical sense should begin with children in schools (ie, when the mind is more receptive to new ways of learning). Furthermore, we believe that themes such as the scientific method and evidence should be at the heart of the university education of a future scientist. Indeed, both the public and scientists must be educated on the concepts of evidence and the validity of sources, and must learn how to engage in appropriate dialogue with each other. Finally, we believe that the scientific publishing process could be greatly improved by paying reviewers for their work and by ceasing to pursue academic success at all costs.
Infodemiology was defined by Gunther Eysenbach as “the science of distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim to inform public health and public policy” [1,2]. The term was deliberately coined to recall epidemiology. Consequently, an infodemic (ie, an “epidemic” of information) represents the uncontrolled dissemination of information, including false or confusing information, during a disease outbreak [3-5]. To date, there is no univocal cataloging of the various types of infodemic information. For instance, disinformation is sometimes defined as the intersection between misinformation (eg, the creation of misleading content and false causal connections between phenomena) and malinformation (eg, leaks, harassment, and hate speech) [5]. On the contrary, Wang et al [4] argue that when dissemination is voluntary and takes place for malicious purposes, we speak of disinformation; when it is unintentional and accidental, we speak of misinformation. Some authors enclose both meanings in the single term “dismisinformation,” while others adopt the sometimes-criticized expression “fake news” [4,5]. Specifically, O’Hair et al [5] formally define dismisinformation as “any message or a set of messages that represent a meaning complex discrepant from or incompatible with a sender’s intent and/or a relatively informed or expert consensual evidentiary state.” In this regard, it is essential to point out that these denominations can include false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism [6]. In this paper, we adopt the O’Hair et al [5] convention; phenomena such as malinformation and conspiracy hypotheses are therefore included in the concept of dismisinformation.
The importance of the infodemiological approach has long been recognized in the scientific community but was definitively established during the COVID-19 pandemic. In this regard, 132 states have signed an international document committing to combat the COVID-19 infodemic, as it has often resulted in epidemiological and economic damage [3]. In this perspective paper, we address infodemiological issues that, in our opinion, have been largely neglected by a significant fraction of the scientific community. Specifically, we will argue that the concept of dismisinformation is broader and more complex than it may seem at first glance.
Effects of Communications on the Lay Neutral Public
Although infodemics cannot exist without dismisinformation, it is necessary to consider that even correct information (ie, information based on facts and scientific evidence) contributes to their spread. Indeed, the juxtaposition of conflicting information only aggravates the negative influence on the lay public [7]. Such a contrast can arise and grow on two different levels: the dichotomy of reliable and unreliable news (Level 1, eg, scientific evidence versus dismisinformation) and the scientific debate (Level 2, eg, differing predictions based on preliminary data). Notable cases occurred during the COVID-19 pandemic. For example, fake news emerged about the laboratory creation of SARS-CoV-2 as a virological weapon, despite the scientific literature supporting the absence of deliberate manipulation [8]. Even more striking was the alleged correlation between 5G and the spread of COVID-19 [9]. A well-trained scientist understands that such news is fake, since the peer-reviewed scientific literature is, in the vast majority of cases, in agreement on the nonexistence of such phenomena. However, we must strive to put ourselves in the shoes of an inexperienced person. On average, a layperson lacks the background to know concepts such as “peer review” or “meta-analysis” and can judge the reliability of a source only up to a certain point. Let us take a concrete example. I turn on the television and hear about the side effects of COVID-19 vaccines [10]. I therefore start looking for details on the web, finding reassurance from my health organization [11]. Some time later, a friend of mine shares a video on Facebook in which a doctor (or someone presented as one) talks about the severe damage caused by vaccines, denouncing an international conspiracy. Searching for information on the web, I find an article from the ByoBlu news channel that confirms the doctor’s words; meanwhile, the vaccine debate becomes heated on talk shows [10].
Then, now panicking, I ask for help from my general practitioner, who turns out to be a convinced “anti-vaxxer” [12,13]. Hence, I decide not to get vaccinated, and I advise my family and friends against vaccines. Unfortunately, this is a realistic scenario, as evidenced by the sources mentioned. Furthermore, the above example makes the distinction between Level 1 and Level 2 extremely relevant and subtle. Who is to blame for this irrational reaction? To answer reasonably, we need to analyze what happened. First, we must consider that the influence of mass media on the population remains extremely high today [14]. Second, conflicting reports, even those coming from doctors and scientists, create confusion and diminish trust in the authorities [3]. Rationality gives way to anxiety and fear, increasing the likelihood of adopting harmful behaviors, in this case, not being vaccinated against COVID-19 [3,15]. Indeed, vaccine hesitancy is fueled by the constant discussion of side effects, owing to the cognitive distortion of risk perception [10,16]. Such a distortion is reinforced by the fact that sensationalistic headlines can bias the reading or hearing of the news, and the emotional impact on risk perception is, on average, much higher than that of a logical argument [17,18]. Therefore, the explanations for these phenomena are to be divided between inappropriate communication and personal unpreparedness. While the reasons for writing shocking titles and reporting news with unnecessary emphasis are related to acquiring a larger audience and clickbaiting [19], the personal inability to process information rationally does not derive simply or solely from one’s unwillingness to do so. In this regard, the World Health Organization (WHO) has firmly stated that we must build resilience to infodemics [3].
We believe that national school programs are generally inadequate to form the critical and analytical sense necessary to weigh risk perception against the available scientific evidence. Specifically, we believe people are not guided and educated on how to judge the trustworthiness of a source. Moreover, even teachers and professors are not prepared to deal with such a vast and complicated topic. Therefore, we believe that the first fundamental step in addressing future infodemics is creating a school program suitable for forming resilience to dismisinformation (Point 1). Indeed, changing a psychological or behavioral attitude beyond a specific age group becomes difficult [20-23], so we must act on the malleable minds of young people to help them become well-rounded, independent adults. Similar conclusions on the importance of health education for children and young people were reached by MacDonald et al [24]. These strategies must be added to what is already being done to combat the infodemic [3].
The Plague of Conspiracy Hypotheses
Conspiracy thinking originates from questions of various kinds, including epistemic, existential, and social ones [25]. There is evidence that these attitudes are aberrations of mechanisms useful for the survival of the human species, such as pattern recognition, agency detection, threat management, and the detection of alliances and dangerous coalitions [26]. For instance, the rejection of medical science is caused by complex and unconscious phenomena, including but not limited to the illusory truth effect (repeated exposure to a falsehood can prime us to accept it implicitly), the availability heuristic (we afford more weight to more readily recalled information, even when this might be misleading), and the fallacy of anecdotal vividness (we tend to react more viscerally to emotive claims than to more sober-headed analysis) [27]. Moreover, the Dunning-Kruger effect, whereby the incompetent overestimate their knowledge of a particular topic, feeds conspiracism [28]. This makes communication with these people very difficult, as they are excessively prejudiced and lack the technical means to understand why they are wrong. The press and media coverage of fake news does not help, as conspiracists are driven by confirmation biases [10]. In these cases, the implementation of infoveillance and content removal systems, such as those adopted by social platforms, could be the only way to limit, at least temporarily, the infodemic on the web [14]. Nonetheless, as discussed in the previous section, conspiracy hypotheses also come from people expected to demonstrate high competence (eg, doctors and scientists). The most striking case is that of Nobel Laureate Luc Montagnier, a staunch supporter of the anti-vaccination movement who fostered the hypothesis that SARS-CoV-2 arose from a deliberate manipulation of HIV [29,30]. Such incidents have been far from isolated, as evidenced by the many professors, doctors, and nurses demonstrating skepticism and unfounded views on vaccines [12,13,31].
In particular, Paris et al [31] highlighted the devastating impact of media communication about vaccine side effects on health care workers. In this regard, it is crucial to keep in mind that it is often the social context (eg, an ingrained belief system) that makes conspiracy theorists appealing to the public [32]. At the same time, Heyerdahl et al [33] showed that fear of peer judgment prompted many health care professionals not to express their doubts about COVID-19 vaccination. This evidence shows that not even these people’s scientific training was sufficient to manage the infodemic. Furthermore, conspiracy and dread often mix until they become almost indistinguishable. Therefore, we believe that scientific training should focus more on adopting the scientific method and analyzing the reliability of sources (Point 2). Specifically, a science graduate should master the concepts of “degrees of evidence” (eg, original article vs meta-analysis) and the credibility of a source (eg, nondeposited preprint vs peer-reviewed article). On this point, we also believe it is essential that the principle of authority be minimized; the conviction of being an expert in the field must not induce us to think that we can ignore the most recent scientific evidence. A scientist is a real scientist only if they are constantly willing to question what they know based on the most updated literature.
Problems in Scientific Communication
Beyond the glaring errors of the press and of conspiratorial characters, including scientists, we must ask ourselves the following: has the communication from the scientific community been adequate? On January 14, 2020, the WHO wrote on its official Twitter account, “Preliminary investigations conducted by the Chinese authorities have found no clear evidence of human-to-human transmission of the novel #coronavirus (2019-nCoV) […]” [34]. This statement means that, at that moment, the scientific community did not know whether the novel coronavirus 2019-nCoV could be transmitted from human to human. The following day, Maria Van Kerkhove stated in a press conference, “From the information that we have it is possible that there is limited human-to-human transmission, potentially among families, but it is very clear right now that we have no sustained human-to-human transmission” [35,36]. The first part of the statement is very cautious, hedged with expressions such as “From the information that we have” and “it is possible.” On the contrary, the second sentence alludes to an implausible certainty, namely that there was clear evidence that the virus was not easily transmitted from person to person. In fact, this affirmation was soon contradicted not only by robust evidence of transmission from symptomatic infected patients but also from presymptomatic and asymptomatic patients [37-39]. Information channels with large audiences shared this news, adding further inaccuracies. For example, Reuters published 2 articles with the same title 5 minutes apart: the first opened with the verb “has [limited transmission]” [40], while the second (the US version) used the wording “may have” [41]. In summary, confusing, slightly inaccurate, and covertly contradictory information was presented to an inexperienced audience. Even worse was the media debate that arose before the pandemic outbreak in Europe.
For example, in Italy, scientists provided diametrically opposed opinions on the severity of COVID-19, splitting public opinion in half [10,14]. This contributed to serious protests when lockdowns were requested, even though lockdowns proved to be a fundamental tool for cutting the number of cases and saving millions of lives [42]. In such delicate times, words carry enormous weight and must be chosen carefully. Indeed, communication errors of this type can provide material for conspiracy hypotheses and confuse a public trying to orient itself within a dramatic and unusual new situation. Beyond press mistakes for which they are not responsible, scientists have a moral obligation to anticipate the public reaction to certain statements, as they possess an additional intellectual tool: the scientific method. When sensitive information must be communicated, it would be advisable to write out the statements, or at least their essential passages, in advance to ensure the greatest possible clarity and precision. Furthermore, when consulted, doctors and professors must behave scientifically, that is, base their claims on the degree of evidence available in the literature (Point 3). In this respect, we would like to share the experience and thoughts of one of the authors.
On several occasions during COVID-19, I had discussions with some scientists who were well-known faces on the web and on Italian television. These discussions concerned an aspect that I consider of absolute communicative importance: the scientific validity of public statements. For example, many argued COVID-19–related opinions on their public Facebook profiles based on a single preprint, without specifying that those results had been neither peer reviewed nor confirmed by other literature. One of them even replied that this clarification was unnecessary since he was a peer reviewer, as if a single reviewer could replace the entire peer-review process, which includes two or more reviewers (depending on the topic’s relevance) and an editor’s final judgment. What surprises me is the arrogance of people who think they can stand above the scientific method and community because they hold an academic title or role, even as we have had direct proof that even Nobel laureates can assert dangerous unscientific nonsense. What lesson is being taught to the public by acting this way? Such excessive use of the principle of authority distances us from facts and credible communication and urges the public to give importance to the individual rather than to the available evidence.
Current Challenges in Scientific Publishing and Disclosure
Returning to the previous section, we ask ourselves, “what prompts scientists to share comments on preprints or other forms of non-peer-reviewed literature?” During the COVID-19 pandemic, rapid and timely interventions posed significant public health challenges [43]. Since COVID-19 depends on many factors and comorbidities, and the variants of concern can substantially change its behavior [44-46], having reliable, updated data in a short time is essential for containing outbreaks. Unfortunately, the peer review and publication processes are too slow to properly face a health crisis. Huisman et al [47] found that only 13%-16% of papers covering medicine, natural sciences, and public health were accepted within 1 month, with average acceptance times ranging from 12 to 14 weeks. In our experience as reviewers and authors for over 50 scientific journals during the COVID-19 pandemic, we encountered very long publication times, even for articles with a high scientific impact (eg, on the side effects of COVID-19 vaccines). Therefore, we understand and agree with the need to comment on preliminary findings, as long as it is openly stated that these results are uncertain and the meaning of “preprint” is clearly explained to the public. Furthermore, national and supranational agencies such as the Centers for Disease Control and Prevention (CDC), the European Medicines Agency (EMA), the Food and Drug Administration, and the WHO are constant sources of up-to-date, internally reviewed data and can be consulted to obtain credible and calibrated news on the available evidence. Finally, we firmly believe that peer reviews should give absolute priority to the Methods and Results over other sections, and journals should publicly report the methodological acceptance of a manuscript. By doing so, researchers in the field (who are unlikely to need the Introduction and Discussion sections to understand and contextualize the paper) could receive new data more promptly.
We must remember that science, especially medicine, saves lives. Delaying the publication of a manuscript over aesthetics, layout, or sections not essential for its reproducibility is an unjustifiable academic caprice. Unfortunately, as things stand, peer reviewers are required to report these aspects; it is the journals that should change their editorial policies.
Alongside this, the scientific community has to deal with internal problems. The first we want to discuss is predatory publishing. Predatory journals offer rapid publication times at meager costs, making them very attractive to independent researchers without available funds. However, these apparent benefits arise from poor or bogus peer-review processes [48]. Hence, researchers have identified various strategies to combat this phenomenon, including creating lists of predatory scientific journals and publishers and bibliographic abstracting and indexing [49,50]. Nonetheless, the inclusion criteria of predatory blacklists have always been the subject of criticism and controversy, and predatory publications have managed to slip into prestigious repositories such as PubMed [51,52]. Therefore, there is no foolproof method, only general indications for recognizing and avoiding predatory publishers. The predatory phenomenon may be provoked and sustained by the success of the open-access publishing model or by the excessive editorial costs of renowned journals [53,54]. Nevertheless, even the rush to publish results can push an author to choose alternative routes to standard publication. The second issue we want to discuss concerns the scientific role of peer review. Peer review is a fundamental procedure to ensure the accuracy of manuscripts published in scientific journals [55]. The independent judgment of 2 or more expert scientists not involved in the study under examination and without conflicts of interest is a first step in filtering gross errors from the literature. At the same time, a single article with new results always constitutes a low degree of evidence until other studies confirm its findings. In fact, peer review still presents several flaws, including possibly low agreement between referees [56-58]. In confirmation of this, it is not surprising to find numerous retracted articles [59].
Therefore, peer review is not and cannot be the final judgment on a scientific paper. Based on these premises, Adler [60] has proposed a quick review method that also includes postpublication review, yet this approach has not been unanimously accepted by the scientific community. Checco et al [61] recently proposed semiautomated artificial intelligence peer review systems capable of assisting reviewers and improving review quality, but the authors conclude that there are still concerns to be settled. The third issue we want to highlight regards the conflicts of interest of journal publications and the unpaid contribution of peer reviewers. Peer review is a time-consuming and challenging process. Some researchers believe paying reviewers could encourage shoddy reviews produced with the sole aim of collecting the monetary reward [62]. The authors of this paper, like many other scientists [62], consider this claim simply false for various reasons, including the following: First, reviewers are selected by editors, who are responsible for judging their reliability; second, the same criticism can be raised against unpaid reviews, as a referee could produce hasty reports to obtain easy certifications for personal prestige and curriculum vitae; third, paying reviewers would transform peer review into a real job with merit-based selection. In this regard, being paid to conduct a review could increase competition to produce top-notch reviews. Moreover, we believe it would be far more rewarding and fairer for reviewers to have their work generate income. In our experience, a concrete example is that of JMIR Publications: if the editor judges the review of a paper to be of sufficient quality, the reviewer can earn up to US $100 in credit toward publishing in one of their journals [63]. Although this is far from a real salaried job, it can be a first step in proving that the model is perfectly sustainable.
Beyond that, we consider the association between paid reviewers and compromised reviews hypocritical within the current editorial context. Indeed, paid publication is itself basically a conflict of interest, since journals only receive money if articles are published. For instance, the standard policy of publishers asserts that a paper retraction does not involve returning the article processing charge to the authors (ie, editors may be motivated to publish regardless of manuscript quality) [64]. Notwithstanding that, forcing the public to pay to read is not an acceptable solution if the goal is to keep the scientific community and the population updated on the latest evidence. The question, therefore, is “what to do then?” The open-access model is now widespread, and there is sufficient evidence that paying reviewers does not compromise the scientific quality of manuscripts. Since science continues to work, we believe this is the right way forward, with the awareness that the final judgment comes from the scientific community and not from peer review. The fourth critical aspect concerns the search for scientific prestige. Many authors are convinced that metrics such as impact factor and number of citations are quality indices, while a vast scientific literature demonstrates that such indices are unreliable and can even be misleading [65]. Two of the main reasons are that impact factor is primarily influenced by outliers (ie, very few papers with very high citation counts) and that citations may reflect the media success of an article more than its quality (ie, more in-depth articles may remain unknown). In addition, it is necessary to consider why a paper was cited (eg, many citations concern introductory, outline aspects).
Just as the first duty of peer reviewers is not to be influenced by the prestige of the authors, the first duty of researchers is not to be affected by the reputation of journals when asked to evaluate the scientific content of a paper. As proof of this, numerous preprints have received thousands of citations endorsing the methodology adopted [66]. But the obsessive pursuit of prestige can also plague publishers. For instance, editorial reluctance to publish negative or null results can strongly bias the literature [67,68]. We support the healthy desire to be deemed great scientists and to receive well-deserved gratification, as long as these do not lead to scientific discrimination and bias. However, considering the above evidence, part of the scientific community seems more interested in self-achievement than in facts, and we need to change this. The fifth and final point concerns the secrecy of peer reviews. On January 16, 2020, Thijs Kuiken was contacted by The Lancet to review a paper within 48 hours [69]. The research showed strong evidence that SARS-CoV-2 was transmissible between humans, but the reviewer was expected to keep silent to protect the integrity of the evaluation and the rights of the authors and the journal. Nonetheless, Thijs Kuiken faced this ethical dilemma with courage and conscientiousness and found a way to communicate the data quickly to the WHO. This episode testifies that the omission of essential public health information must be viewed as full-fledged disinformation. We stress that rapid medical evidence can save human lives and must be prioritized above everything else, especially during health crises. Failing to immediately communicate novel results to authorities worldwide is a serious wrongdoing, and we must avoid it in the near future. Therefore, a standard procedure to deal with these exceptional situations must be designed and implemented as soon as possible.
Degrees of Reliability and Final Recommendations
The concept of degree (or level) of evidence has long been addressed in the literature and differs across health disciplines [70]. Various changes have been made over time in attempts to improve the standard classifications [71]. Despite this, as shown in the previous sections, this element is culpably left out of most public statements. Further critical issues arise when presenting sensitive information to the public; indeed, it is not just a matter of communicating the degree of evidence (eg, original article vs meta-analysis) but also its credibility (eg, publication in a predatory journal vs publication in a legitimate journal). Therefore, the public should be educated on what we have termed the “degree of reliability,” that is, a scale that considers both the level of evidence and the credibility of scientific works. Doing so would limit the damage of conflicting information, since not all pieces of information would carry the same reliability anymore. Therefore, we recommend that health agencies establish a standard classification of degrees of reliability that can be adopted internationally for each discipline and by each practitioner. In the meantime, all medical professionals should adopt and specify the scale they deem most appropriate before making public statements on sensitive topics. As regards other types of information (eg, fact and fake news checks), we propose the following 6-level hierarchy of degrees of reliability, ranging from the letter “F” (lowest) to the letter “A” (highest).
Class F: regardless of their authority, personal opinions must be ranked with the lowest degree of evidence. This is necessary to ensure consistency in class attribution. In particular, the authority principle must be minimized to prevent unjustified claims (eg, Montagnier) from sowing doubt, fear, and false information. Therefore, all conclusions that have not been moderated or peer reviewed fall into this category.
Class E: preprint moderation prevents the circulation of the most serious fake news in the scientific world, but it is not comparable to a peer-review process. Indeed, a large number of preprints fueling conspiracy theories and unjustified assumptions have been unearthed [72]. Therefore, although the probability of encountering infodemic content is lower than in the previous category, moderated preprints do not represent sufficient evidence for making scientific, and especially medical, claims. Besides, it is essential to consider that screening criteria are not uniform across preprint platforms; for instance, the medRxiv and bioRxiv repositories apply stricter selection criteria to COVID-19 submissions than other databases [73]. Hence, this aspect must also be considered when assigning a classification level.
Class D: this category encompasses academic journals not yet indexed in recognized databases (eg, due to their novelty) and not listed among predatory journals (eg, on Beall’s List). The degree of evidence is sufficient to present assessments to the public, provided it is specified that these are preliminary analyses. Furthermore, inclusion in this class implies that an extensive literature search has been performed, ascertaining the absence of known contrary results. If the number of articles reporting contrary findings is comparable, the scientist is required to express a personal judgment (eg, comparing the validity of the 2 studies within their respective scopes); in this case, the quality of the evidence is class E. If the number of articles supporting contrary results is significantly greater, the scientist is required to make a personal judgment and present the quality of the evidence as class F.
Class C: the criteria of class D apply, but the probability of disseminating infodemic material decreases further thanks to bibliographic indexing.
Class B: the criteria of class C apply, but there must be at least 3 agreeing articles. Evidence proposed by recognized health agencies (eg, EMA, CDC, and WHO) can fall into this category.
Class A: the criteria of class B apply, but the articles must be systematic reviews or meta-analyses. Evidence proposed by recognized health agencies (eg, EMA, CDC, and WHO) can fall into this category.
We stress that this scale is indicative and needs further elaboration by the scientific community, which can incorporate special cases we may have overlooked and refine the basic framework. However, we believe it can serve as a general guideline, showing a possible way forward to drastically limit the infodemic. Indeed, simply specifying the sources and the type of evidence proposed can give the public an idea of the relevance and weight of a news item (Table 1).
A colored version of Table 1, useful for dissemination, is presented as Multimedia Appendix 1.
Table 1. Scientific communication quality classes. The marked line highlights the sufficiency threshold.

| Quality class | Color-related name | Complete name | Evaluation description |
|---|---|---|---|
| F | Red | Opinion (very poor reliability) | Based on raw data, personal experience, or a preprint not deposited on accredited preprint repositories. |
| E | Orange | Indexed novel preprint (poor reliability) | Based on a new preprint deposited on one or more accredited preprint repositories. |
| D | Yellow | Unindexed new article (uncertain reliability) | Based on a new article not deposited in accredited article repositories. Known predatory journals are excluded. |
| C | Green | Indexed article (fair reliability) | Based on 1 or 2 articles deposited in one or more accredited article repositories or affirmed antihoax nongovernmental websites. |
| B | White | Evidence (good reliability) | Based on 3 or more(a) articles or highly and properly cited preprints deposited in accredited article repositories or accredited gray literature. |
| A | Azure | Confirmed evidence (very good reliability) | Based on systematic review articles or meta-analyses deposited in accredited article repositories or accredited gray literature. |

(a) This number may change depending on the importance of the evidence (eg, much more evidence may be required for drug-related information).
Abbreviations
CDC: Centers for Disease Control and Prevention
EMA: European Medicines Agency
WHO: World Health Organization
Conflicts of Interest: None declared.
References
1. Eysenbach G. Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet. 2009. doi:10.2196/jmir.1157
2. Mackey T, Baur C, Eysenbach G. Advancing Infodemiology in a Digital Intensive Era. 2022. doi:10.2196/37115
3. Infodemic. World Health Organization. URL: https://www.who.int/health-topics/infodemic [accessed 2022-01-07]
4. Wang Y, McKee M, Torbica A, Stuckler D. Systematic Literature Review on the Spread of Health-related Misinformation on Social Media. 2019. doi:10.1016/j.socscimed.2019.112552
5. O’Hair HD, O’Hair MJ. Hoboken, NJ: Wiley-Blackwell; 2021.
6. Molina MD, Sundar SS, Le T, Lee D. “Fake News” Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content. 2019. doi:10.1177/0002764219878224
7. Nagler RH, Yzer MC, Rothman AJ. Effects of Media Exposure to Conflicting Information About Mammography: Results From a Population-based Survey Experiment. 2019. doi:10.1093/abm/kay098
8. Maxmen A, Mallapaty S. The COVID lab-leak hypothesis: what scientists do and don't know. 2021. doi:10.1038/d41586-021-01529-3
9. Ahmed W, Vidal-Alaball J, Downing J, López Seguí F. COVID-19 and the 5G Conspiracy Theory: Social Network Analysis of Twitter Data. 2020. doi:10.2196/19458
10. Rovetta A. The Impact of COVID-19 on Conspiracy Hypotheses and Risk Perception in Italy: Infodemiological Survey Study Using Google Trends. 2021. doi:10.2196/29929
11. Vaccini anti Covid-19. URL: https://www.salute.gov.it/portale/nuovocoronavirus/dettaglioFaqNuovoCoronavirus.jsp?id=255 [accessed 2022-01-07]
12. Le Marechal M, Fressard L, Agrinier N, Verger P, Pulcini C. General practitioners' perceptions of vaccination controversies: a French nationwide cross-sectional study. 2018. doi:10.1016/j.cmi.2017.10.029
13. Aggressione di medici No-Vax durante assemblea Ordine di Roma. URL: https://www.quotidianosanita.it/lavoro-e-professioni/articolo.php?articolo_id=101041 [accessed 2022-01-07]
14. Rovetta A, Castaldo L. Influence of Mass Media on Italian Web Users During the COVID-19 Pandemic: Infodemiological Analysis. 2021. doi:10.2196/32233
15. Troiano G, Nardi A. Vaccine hesitancy in the era of COVID-19. 2021. doi:10.1016/j.puhe.2021.02.025
16. Tversky A, Kahneman D. Availability: A heuristic for judging frequency and probability. 1973. doi:10.1016/0010-0285(73)90033-9
17. Ecker UKH, Lewandowsky S, Chang EP, Pillai R. The effects of subtle misinformation in news headlines. 2014. doi:10.1037/xap0000028
18. Megías A, Cándido A, Maldonado A, Catena A. Neural correlates of risk perception as a function of risk level: An approach to the study of risk through a daily life task. 2018. doi:10.1016/j.neuropsychologia.2018.09.012
19. Bolton DM, Yaxley J. Fake news and clickbait - natural enemies of evidence-based medicine. 2017. doi:10.1111/bju.13883
20. Xie B, Watkins I, Golbeck J, Huang M. Understanding and Changing Older Adults' Perceptions and Learning of Social Media. 2012. doi:10.1080/03601277.2010.544580
21. Vaportzis E, Clausen MG, Gow AJ. Older Adults Perceptions of Technology and Barriers to Interacting with Tablet Computers: A Focus Group Study. 2017. doi:10.3389/fpsyg.2017.01687
22. Cookman CA. Older people and attachment to things, places, pets, and ideas. 1996. doi:10.1111/j.1547-5069.1996.tb00356.x
23. Matamales M, Skrbis Z, Hatch RJ, Balleine BW, Götz J, Bertran-Gonzalez J. Aging-Related Dysfunction of Striatal Cholinergic Interneurons Produces Conflict in Action Selection. 2016. doi:10.1016/j.neuron.2016.03.006
24. MacDonald NE, Dubé E. Addressing vaccine hesitancy in immunization programs, clinics and practices. 2018. doi:10.1093/pch/pxy131
25. Douglas KM, Sutton RM, Cichocka A. The Psychology of Conspiracy Theories. 2017. doi:10.1177/0963721417718261
26. van Prooijen J, van Vugt M. Conspiracy Theories: Evolved Functions and Psychological Mechanisms. 2018. doi:10.1177/1745691618774270
27. Grimes DR. Medical disinformation and the unviable nature of COVID-19 conspiracy theories. 2021. doi:10.1371/journal.pone.0245900
28. Gonçalves-Sá J. In the fight against the new coronavirus outbreak, we must also struggle with human bias. 2020. doi:10.1038/s41591-020-0802-y
29. Arif N, Al-Jefri M, Bizzi IH, Perano GB, Goldman M, Haq I, Chua KL, Mengozzi M, Neunez M, Smith H, Ghezzi P. Fake News or Weak Science? Visibility and Characterization of Antivaccine Webpages Returned by Google in Different Languages and Countries. 2018. doi:10.3389/fimmu.2018.01215
30. Sallard E, Halloy J, Casane D, Decroly E, van Helden J. Tracing the origins of SARS-COV-2 in coronavirus phylogenies: a review. 2021. doi:10.1007/s10311-020-01151-1
31. Paris C, Bénézit F, Geslin M, Polard E, Baldeyrou M, Turmel V, Tadié É, Garlantezec R, Tattevin P. COVID-19 vaccine hesitancy among healthcare workers. 2021. doi:10.1016/j.idnow.2021.04.001
32. Drążkiewicz E. Study conspiracy theories with compassion. 2022. doi:10.1038/d41586-022-00879-w
33. Heyerdahl LW, Dielen S, Nguyen T, Van Riet C, Kattumana T, Simas C, Vandaele N, Vandamme A, Vandermeulen C, Giles-Vernick T, Larson H, Grietens KP, Gryseels C. Doubt at the core: Unspoken vaccine hesitancy among healthcare workers. 2022. doi:10.1016/j.lanepe.2021.100289
34. Preliminary investigations conducted by the Chinese authorities have found no clear evidence of human-to-human transmission of the novel #coronavirus (2019-nCoV) identified in #Wuhan, #China. World Health Organization (Twitter). URL: https://twitter.com/who/status/1217043229427761152 [accessed 2022-01-06]
35. Wuhan virus has limited human-to-human transmission but could spread wider: WHO. 2020-01-15. URL: https://www.straitstimes.com/asia/east-asia/wuhan-virus-has-limited-human-to-human-transmission-but-could-spread-wider-who [accessed 2022-01-06]
36. 'Possible' there was limited human-to-human transmission of new coronavirus in China, WHO says. 2020-01-14. URL: https://www.cbc.ca/news/health/china-virus-who-1.5426630 [accessed 2022-01-06]
37. Gandhi M, Yokoe DS, Havlir DV. Asymptomatic Transmission, the Achilles' Heel of Current Strategies to Control Covid-19. 2020. doi:10.1056/NEJMe2009758
38. Sah P, Fitzpatrick MC, Zimmer CF, Abdollahi E, Juden-Kelly L, Moghadas SM, Singer BH, Galvani AP. Asymptomatic SARS-CoV-2 infection: A systematic review and meta-analysis. 2021. doi:10.1073/pnas.2109229118
39. Cevik M, Kuppalli K, Kindrachuk J, Peiris M. Virology, transmission, and pathogenesis of SARS-CoV-2. 2020. doi:10.1136/bmj.m3862
40. WHO says new China virus could spread, it's warning all hospitals. 2020-01-14. URL: https://www.reuters.com/article/china-health-pneumonia-who-idUSL8N29F48F [accessed 2022-01-06]
41. Nebehay S. WHO says new China virus could spread, it's warning all hospitals. 2020-01-14. URL: https://www.reuters.com/article/us-china-health-pneumonia-who-idUSKBN1ZD16J [accessed 2022-01-06]
42. Talic S, Shah S, Wild H, Gasevic D, Maharaj A, Ademi Z, Li X, Xu W, Mesa-Eguiagaray I, Rostron J, Theodoratou E, Zhang X, Motee A, Liew D, Ilic D. Effectiveness of public health measures in reducing the incidence of covid-19, SARS-CoV-2 transmission, and covid-19 mortality: systematic review and meta-analysis. 2021. doi:10.1136/bmj-2021-068302
43. Rypdal K, Bianchi FM, Rypdal M. Intervention Fatigue is the Primary Cause of Strong Secondary Waves in the COVID-19 Pandemic. 2020. doi:10.3390/ijerph17249592
44. Pluchino A, Biondo AE, Giuffrida N, Inturri G, Latora V, Le Moli R, Rapisarda A, Russo G, Zappalà C. A novel methodology for epidemic risk assessment of COVID-19 outbreak. 2021. doi:10.1038/s41598-021-82310-4
45. People with Certain Medical Conditions. Centers for Disease Control and Prevention. 2022-05-02. URL: https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/people-with-medical-conditions.html [accessed 2022-01-10]
46. Hadj Hassine I. Covid-19 vaccines and variants of concern: A review. 2021. doi:10.1002/rmv.2313
47. Huisman J, Smits J. Duration and quality of the peer review process: the author's perspective. 2017. doi:10.1007/s11192-017-2310-5
48. Elmore SA, Weston EH. Predatory Journals: What They Are and How to Avoid Them. 2020. doi:10.1177/0192623320920209
49. Strinzel M, Severin A, Milzow K, Egger M. Blacklists and Whitelists To Tackle Predatory Publishing: a Cross-Sectional Comparison and Thematic Analysis. 2019. doi:10.1128/mBio.00411-19
50. Dhammi IK, Haq RU. What is indexing. 2016. doi:10.4103/0019-5413.177569
51. Vakil C. Predatory journals: Authors and readers beware. 2019. PMC6515480
52. Manca A, Moher D, Cugusi L, Dvir Z, Deriu F. How predatory journals leak into PubMed. 2018. doi:10.1503/cmaj.180154
53. Shen C, Björk B. 'Predatory' open access: a longitudinal study of article volumes and market characteristics. 2015. doi:10.1186/s12916-015-0469-2
54. Harvard University says it can't afford journal publishers' prices. URL: https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices [accessed 2022-01-11]
55. Ali PA, Watson R. Peer review and the publication process. 2016. doi:10.1002/nop2.51
56. Hope AA, Munro CL. Criticism and Judgment: A Critical Look at Scientific Peer Review. 2019. doi:10.4037/ajcc2019152
57. Pier EL, Brauer M, Filut A, Kaatz A, Raclaw J, Nathan MJ, Ford CE, Carnes M. Low agreement among reviewers evaluating the same NIH grant applications. 2018. doi:10.1073/pnas.1714379115
58. Oxman AD, Guyatt GH, Singer J, Goldsmith CH, Hutchison BG, Milner RA, Streiner DL. Agreement among reviewers of review articles. 1991. doi:10.1016/0895-4356(91)90205-n
59. Anderson C, Nugent K, Peterson C. Academic Journal Retractions and the COVID-19 Pandemic. 2021. doi:10.1177/21501327211015592
60. Adler JR. A new age of peer reviewed scientific journals. 2012. doi:10.4103/2152-7806.103889
61. Checco A, Bracciale L, Loreti P, Pinfield S, Bianchi G. AI-assisted peer review. 2021. doi:10.1057/s41599-020-00703-8
62. Brainard J. The $450 question: Should journals pay peer reviewers? 2021-03-02. URL: https://www.science.org/content/article/450-question-should-journals-pay-peer-reviewers [accessed 2022-06-07]
63. Karma Credits - What are they and how to collect them? URL: https://support.jmir.org/hc/en-us/articles/115001104247-Karma-Credits-What-are-they-and-how-to-collect-them- [accessed 2022-01-10]
64. Wadman M. Scientists quit journal board, protesting ‘grossly irresponsible’ study claiming COVID-19 vaccines kill. 2021-07-02. URL: https://www.science.org/content/article/scientists-quit-journal-board-protesting-grossly-irresponsible-study-claiming-covid-19 [accessed 2022-06-07]
65. Rovetta A. It is ridiculous and shameful that part of the scientific community adopts the impact factor as an index of the quality of an academic journal or, worse, of the articles published in it. 2021-05-17. URL: https://www.facebook.com/alex4lp/posts/4087687737955514 [accessed 2022-01-12]
66. Chen T, Guestrin C. XGBoost: A Scalable Tree Boosting System. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; August 13-17, 2016; San Francisco, CA. p. 785-794. doi:10.1145/2939672.2939785
67. Matosin N, Frank E, Engel M, Lum JS, Newell KA. Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture. 2014. doi:10.1242/dmm.015123
68. Bespalov A, Steckler T, Skolnick P. Be positive about negatives-recommendations for the publication of negative (or null) results. 2019. doi:10.1016/j.euroneuro.2019.10.007
69. De Vrieze J. An unpublished COVID-19 paper alarmed this scientist—but he had to keep silent. 2021. URL: https://www.science.org/content/article/unpublished-covid-19-paper-alarmed-scientist-he-had-keep-silent [accessed 2022-06-07]
70. van Dijk WB, Grobbee DE, de Vries MC, Groenwold RHH, van der Graaf R, Schuit E. A systematic breakdown of the levels of evidence supporting the European Society of Cardiology guidelines. 2019. doi:10.1177/2047487319868540
71. Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. 2011. doi:10.1097/PRS.0b013e318219c171
72. Koerber A. Is It Fake News or Is It Open Science? Science Communication in the COVID-19 Pandemic. 2020. doi:10.1177/1050651920958506
73. Kwon D. How swamped preprint servers are blocking bad coronavirus research. 2020. doi:10.1038/d41586-020-01394-6