a. Getting Started in Research
Background
Research involvement is good for both patients and clinicians. In healthcare organisations that are research active, not only are mortality outcomes better, but clinicians are happier and retention rates are higher. In this chapter we consider how early career clinicians can involve themselves in research, to the benefit of their clinical practice and their patients. Early exposure to research is worth fostering, not least because it makes further involvement a less intimidating venture. The only warning attached is that once you have had a taste of research, and seen your first publication in print, you may find it hard to leave behind!
Establishing research skills early in one’s career has several advantages. First, it encourages critical thinking about how to approach any patient, their presenting difficulties and the selection of appropriate interventions. Second, being able to appraise the expanding, complex and often contradictory evidence base is a vital skill that any clinician will use throughout their career. Finally, it is, or can be, enormous fun!
Involving Trainees in Research
Providing a rigorous and successful research experience for trainees can be difficult amongst competing demands for clinical competencies, yet knowledge of research is synergistic with good clinical skills. Elements found to favour good outcomes from trainee research programmes include clear communication of expectations, a robust programme structure with peer support, dedicated protected time, a dedicated research curriculum, programmatic support, mentorship and oversight, and accountability and tracking of accomplishments (see Further Reading below). Of these, perhaps the most critical is mentorship and the modelling of high quality research activity, which is best achieved by joining a productive and supportive existing research group. Given that it is now possible to access researchers across the globe, early career researchers need no longer be limited by their local environment.
What You Will Need
What you will need as a trainee is curiosity, good time management and organisational skills, and the ability to stick with a task, sometimes over years rather than months, to see it through to completion. The transferable skills you will learn along the way include synthesising information, extracting critical findings, developing new ideas, and addressing interesting clinical questions in a robust and thoughtful way.
Skills You Will Acquire
Good research training will teach you some key skills, such as:
1. Identifying a good research question.
2. How to do a literature review.
3. How to do a systematic review of evidence.
4. How to write a protocol.
5. Scientific writing skills.
6. Administrative process of publication in peer-reviewed journals.
7. Appropriate referencing of articles using a reference manager.
8. Using statistical software such as R, SPSS or STATA (see the brief sketch after this list).
9. Applying for ethical and healthcare organisational approval for a study.
10. Consenting patients for research studies.
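By way of illustration of point 8 above, the sketch below shows the kind of between-group comparison a first project might involve. It is a minimal, hypothetical example: the file name, column names and group labels are invented, and although the list names R, SPSS and STATA, Python is used here purely for illustration; the same few lines translate readily into any of those packages.

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset: one row per participant, exported from the
# project's data-collection spreadsheet (file and column names invented).
df = pd.read_csv("questionnaire_scores.csv")

# Summarise the outcome by group before any formal testing.
print(df.groupby("group")["symptom_score"].describe())

# Compare mean symptom scores between the two groups using Welch's
# independent-samples t-test (no assumption of equal variances).
intervention = df.loc[df["group"] == "intervention", "symptom_score"]
control = df.loc[df["group"] == "control", "symptom_score"]
t_stat, p_value = stats.ttest_ind(intervention, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```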
You should also learn about the difference between research and service evaluation, audit and quality improvement, how these approaches complement each other, and the different governance requirements for each. If you join a research group, you may also have the opportunity to take part in research activity: taking consent, collecting physiological or questionnaire data, attending research meetings where early findings are shared, and learning about the process of gaining grant funding. Our view is that all of these are valid and important, but you may wish to think in advance about which are your priorities, interests and strengths.
Setting a Goal for Acquiring Research Competencies
A note about ‘concept to publication’. Your goal for acquiring research competencies in your early career need not be taking a research idea from concept to publication. In fact, taking this approach risks setting you up for failure, and may put you off research for life. Good research should be rigorous, and as such requires resources, time and collaboration. Most sound research ideas take years to attract funding and still further years to carry out and publish. This is not to say that your research idea is not a good one, only that you will not see it carried out by acting in isolation. When starting out, you are best advised to join a network or group of like-minded individuals already working in your area of interest, where you can learn and feed in ideas. In contemporary research you are part of a team: everyone brings their own strengths, expertise and skills.
There are a number of ways to get started in research, and it is helpful to pick one and set it as a clear goal to achieve during your training period.
1. Audit, service evaluation and quality improvement – service evaluation and audit are the ways we measure the quality of the care we provide against gold standards, and quality improvement (QI) is how we improve care to reach those standards. Research establishes the evidence on which gold standards are set. As an early career clinician you will be aware of aspects of your service that need audit, evaluation or QI approaches, and you may already have been involved in a project of this nature. Whilst this is not research, it offers an introduction and gateway to some of the skills research requires, such as writing a protocol, keeping accurate records, collecting and entering data into Excel spreadsheets, and interpreting and writing up findings for internal reporting or publication. You may also choose to undertake a service evaluation using large de-identified data sources. These provide information at population level and can answer questions about service utilisation and epidemiological associations in specific cohorts of patients. To make best use of these data sources, ensure that you or your supervisor have the technical skills for data extraction and the statistical knowledge to analyse population-level data. The approval process for using de-identified data is generally more straightforward than for patient-identifiable information, but there may be charges to use the data, depending on the source.
OUTPUT: Posters or publication of service evaluation, QI or audit projects.
2. Facilitating recruitment to research studies – all clinicians are, or should be, key leaders who support a research culture in their organisations. As an early career clinician you are often closest to the patients, and operating on the frontline of clinical care. This positions you to be an advocate for research participation amongst people whose care you are involved in.
In the UK, most NHS trusts will have a research delivery service consisting of research assistants, led by a research manager. These research assistants will be working to set up existing funded multi-centre studies and recruit participants to research. As an early career clinician, you might be in a position to support this work by:
Being aware of the studies going on in your area, and referring patients to the research assistants working on these studies.
Recruiting and consenting patients yourself.
Carrying out some of the study activities (such as undertaking assessments, interviews and investigative tests).
In some healthcare organisations there are clinicians designated as research champions within teams, who actively support research by referring patients to the research delivery service. As an early career clinician you can either put yourself forward to be a research champion, or support these champions in the team, celebrating their work and encouraging others to follow suit.
OUTPUT: On your CV list the names of the research studies you have assisted in, the number of participants recruited to the study, and your roles and responsibilities.
3. Publication – there are a number of ways you can achieve a publication in a peer-reviewed journal. Scientific writing is a skill that requires mentorship so you are advised to seek out academic support for writing an article. Your local R&D department, hospital library, training programme director, or other academic leads in your organisation can connect you with an academic mentor to support you with this.
Examples of publications you may get involved in as a trainee:
Case reports.
Systematic reviews.
Writing up data/results from a research study/service evaluation that has already been conducted.
Writing up an innovative practice or service development taking place locally.
Literature review for an article being led by another author.
Opinion or editorial piece.
Co-writing a textbook chapter.
Working on dissemination e.g. through blogs, social media, etc.
OUTPUT: Publication in a peer-reviewed journal or other outlet.
4. Being a principal investigator or sub-investigator on a study – all research studies require a local principal investigator (PI). The chief investigator (CI) holds overall responsibility for all study activities, and it is their job to ensure that the study is run according to the protocol approved by the ethics committee and the healthcare organisation. The principal investigator is the person to whom the chief investigator has delegated local responsibility for study activity in a multi-site study or, in the case of a single-site local study, the person holding overall responsibility for the study. The principal investigator will not usually carry out all the study activities themselves; these will usually be carried out by the research delivery staff or research assistants allocated to the study.
To find out about opportunities to become a principal investigator in your organisation, contact your local R&D department. They can provide a list of local studies taking place, in which you may be able to take on a sub-PI role, and will also keep a register of people interested in becoming PIs on future studies. If, as an early career clinician, you are offered the opportunity to be a principal investigator on a study, you should be supported by a senior colleague. If you are a sub-investigator, the principal investigator will delegate some duties to you and provide supervision for those duties. The complexity and risk profile of the study will determine the level of experience and skill required of a principal investigator. For instance, an observational questionnaire study requires less experience than an interventional study, and may be suitable for a trainee to take on as their first study as principal investigator, provided they are supported. Training is available to develop the right skillset for a principal investigator or sub-investigator role, and there are many schemes to encourage early career clinicians to develop into a PI role.
OUTPUT: If you are a PI, you are likely to be offered authorship on the publication resulting from the study. On your CV state that you have been a PI or sub-PI on a study. Experience as a PI is considered an important requirement for an academic clinician career pathway.
5. Early career representation on a research committee – all larger healthcare organisations will have a research sub-committee that meets monthly or quarterly. Participation as a junior or trainee representative on such a committee provides an ideal opportunity to gain insight into the healthcare organisation’s research priorities and structures, as well as experience of how committees develop and implement strategy. Those sitting on these committees provide leadership in research within healthcare organisations and national institutions, form and maintain relationships with key academic and commercial partners, and work with other research partners across the sector and nationally to achieve the organisation’s strategic research aims. Membership of this kind is a leadership development opportunity providing transferable skills not restricted to the research domain. You will also have the opportunity in this role to feed into strategy on how early career clinicians could be better supported with research opportunities and training in the healthcare organisation. Depending on the structure of the sub-committee, you may also have the opportunity to make links with academic partners and pick up intelligence about upcoming opportunities.
OUTPUT: Leadership role which could prepare you for a research or clinical lead role.
6. Research fellowships – during your career, specific early-years research fellowship posts may be available. These offer guaranteed and protected time for research, often leading to further study, and are paid for by research funding organisations. The posts may permit a protected split, e.g. 50/50 or 60/40, between clinical and research time. Research time is usually attached to a specific research area or team, and the early career clinician will be supported in strengthening their CV towards an academic clinical fellowship or fellowship grant application. Early career clinicians can typically continue to achieve clinical competencies in the clinical part of their post, but may have to apply for the clinical part of the post to be accredited by the relevant regulatory bodies, which requires advance planning.
b. Research Integrity
Introduction
Scientific research contributes to new human knowledge. This surely cannot be controversial: Discoveries are made, commonly in our universities, by trained and committed individuals. Those people then write up their work which is published (after a rigorous and searching peer-review process) in learned journals, often with grand-sounding names. This is then collected, collated, replicated, and built upon if it is found to be reliable. After this, the new knowledge might be incorporated in an appropriate syllabus in schools, undergraduate or postgraduate courses of study. By this process, new findings, such as plate tectonics, the existence of Pluto, evolution by natural selection, the laws of motion, Ohm’s Law, or risk factors for psychosis, become shared common knowledge and serve humanity in ways large and small.
That’s a nice story, but sadly, at every step in the process, forces exist which delay, distort or prevent useful knowledge being made, disseminated, and used for good. This isn’t controversial either, for two reasons. The first is that nothing is perfect, so the existence of imperfections within the development, testing, dissemination, and use of scientific ideas shouldn’t necessarily be a cause for worry. The second is that problems of poor integrity in the research process have been extensively documented for decades.
Whether it is right to be concerned about poor research integrity is therefore dependent not on whether these problems exist, but on how common and how serious they are. In this chapter I will outline some of the reasons for my view that the well-known problems of poor research integrity are both common and serious enough to gravely hamper the contribution of our sector to human progress and demand urgent, system-wide action.
Doug Altman, the long-serving statistical advisor to the British Medical Journal and Professor of Medical Statistics at Oxford University, wrote a paper titled ‘The scandal of poor medical research’ in the BMJ in 1994. He later said that he should have said ‘bad’ instead of ‘poor’. He was clear on three points: Factors exist within the ‘information architecture’ of research which hamper its functioning; these problems are not rare curios – they are common and serious; and what makes it a scandal rather than a mere problem is that ‘everyone knows, but no-one is doing anything’ [Reference Altman1].
There are three interacting sets of components which have brought us here: the players, the forces which act upon those players, and the behaviours those forces bring about. Each is described below.
What Does ‘Research Integrity’ Mean?
‘Research Integrity’ describes the degree to which the information architecture of research described above is functioning properly. Good research integrity means that all the steps from a research idea (including the choice of research question) to eventual incorporation into shared knowledge are carried out properly, such that the body of correct knowledge is expanded. It also means that robust processes exist to detect, verify, and appropriately act upon any system failures or weaknesses which do occur [2].
How Do We Currently Measure Up?
There are hundreds of studies showing poor research integrity, from the plain individual fraud of falsifying data to more hidden problems of publication bias, P-hacking, outcome switching and the like. These are the main contributors to the estimate that 85% of research funding is avoidably wasted [Reference Chalmers and Glasziou3].
Three examples are described below. These are intended to illustrate not only the existence of these problems, but that they are sufficiently serious to be a major concern to anyone within medical research.
Even at what must be considered the topmost tip of the iceberg, the response to the appalling case now recognisable solely by the surname of its struck-off perpetrator, Andrew Wakefield, has not been characterised by rapid, decisive action. The infamous 1998 Lancet paper in which, for covert acquisitive reasons, the author falsely claimed a link between the MMR immunisation and autism has been an ongoing public health and science communication calamity, continuing even now, more than 25 years after the event. The paper was retracted 12 years after its initial publication, but incalculable damage had already been done and continues: Some young people are deprived of the enormous benefits of immunisation because of misinformation directed at their unfortunate caregivers [Reference Dyer4]. Perhaps most egregiously, young people who cannot be immunised for one reason or another are denied the protection that herd immunity would provide were more people immunised.
More widely, we also know that about 5% of a large sample of papers published in high-impact journals are likely to have fraudulent results, detected solely from figures published in the papers, requiring no detective-work at all. This is necessarily a subset of all data fraud [Reference Carlisle5, Reference Carlisle6].
More broadly still, an outstanding examination of the whole of Medline from 1976 to 2019, over a million papers, shows just how serious a problem we all face. The marked discontinuities in Figure 1b.1 [Reference Barnett7] sit at z-values of about ±1.96, corresponding to a p-value of 0.05. (There are two discontinuities because a z-value can indicate either an increase or a decrease in the probability of the outcome with exposure.) Huge numbers (perhaps 90%, judging from the graph) of ‘non-significant’ findings are either missing from the literature or have been changed into ‘significant’ findings by fraud. Accounts differ over exactly what this means [Reference Altman1, Reference van Zwet and Cator8], but what is certain is that it isn’t good.

Figure 1b.1 The distribution of more than one million z-values from Medline (1976–2019). Taken from van Zwet, E.W. and Cator, E.A., 2021. The significance filter, the winner’s curse and the need to shrink. Statistica Neerlandica, 75(4), pp. 437–452 with permission from Wiley.
Figure 1b.1 Long description
The horizontal axis shows the z-value, ranging from −10 to 10 in increments of 5. The vertical axis shows frequency, ranging from 0 to 60,000 in increments of 20,000. The distribution is bimodal, with a smaller peak centred around −2.5 reaching a height of about 40,000 and a larger peak centred around 4 reaching a height of approximately 75,000. Between the two peaks there is a valley where the frequency drops to about 15,000 at a z-value of 0.
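To see how strongly selective publication alone can shape such a distribution, consider the following simulation. It is a minimal sketch, not a reproduction of the van Zwet and Cator analysis: study results are drawn at random, every ‘significant’ result (|z| > 1.96) is published, and non-significant results survive only 10% of the time (an arbitrary figure chosen for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies = 1_000_000

# Simulate z-values for hypothetical studies: true effects vary between
# studies, and the observed z adds standard normal sampling error.
true_effects = rng.normal(loc=0.0, scale=2.0, size=n_studies)
observed_z = true_effects + rng.normal(size=n_studies)

# The 'significance filter': results with |z| > 1.96 (two-sided p < 0.05)
# are always written up and published; the rest survive only 10% of the time.
significant = np.abs(observed_z) > 1.96
published = significant | (rng.random(n_studies) < 0.10)
published_z = np.abs(observed_z[published])

# Count published results in equal-width bands just below and just above
# the significance threshold.
just_below = np.sum((published_z > 1.60) & (published_z <= 1.96))
just_above = np.sum((published_z > 1.96) & (published_z <= 2.32))
print(f"published |z| in (1.60, 1.96]: {just_below}")
print(f"published |z| in (1.96, 2.32]: {just_above}")
```

With these invented settings, the band just above the threshold contains several times as many published results as the band just below it: a sharp step at p = 0.05, of the kind visible in Figure 1b.1, produced without any individual result having been altered.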
Is There a Particular Problem in Mental Health Research?
So, we know poor research integrity is common and serious in medical research as a whole. These problems may be worse in mental health research than elsewhere, for two reasons, beyond the principal observation that it is regrettably easy to ‘see what one wants to see’ in our field.
First, it has been hypothesised that research areas with higher prevalences of ‘positive’ studies will be areas with lower research integrity than other sectors, because all the actions comprising poor research integrity serve to misleadingly increase that prevalence. The one study I know of looking at this found space science and engineering to have the lowest prevalence of positive studies and mental health and psychology to have the highest [Reference Fanelli9].
Second, unlike many other branches of medicine, outcomes for people with mental health problems do not seem to be improving, despite no shortage of research spending. This may well be an indicator of something being quite seriously amiss, given the large advances in treatments in other major non-communicable morbidities such as cancer, heart disease, and stroke.
Players
The players in this tragedy are, in no particular order, Researchers, Institutions, Journals, and Funders.
Researchers
Researchers have ideas for questions they wish to answer, do research to answer those questions, and disseminate their findings by writing a few-page ‘paper’, the unit of academic dissemination. Researchers must also apply for jobs, apply for funding, build a career, supervise, teach, apply for promotion, move city or country, work for comparatively low salaries, and work for free outside those hours as well if they want to get anything on the first list done.
Institutions
Institutions employ researchers and handle grants won by researchers to carry out research. They also provide the physical space and regulatory infrastructure for research to happen. Institutions also pay subscriptions to publishers of journals so their researchers can read what other researchers have been doing.
Journals
These were once bound periodicals printed on paper and sent regularly by post to paying subscribers. By and large, journals still consist of papers written by researchers, peer reviewed, collected into issues, and published regularly, though all are now accessible via the internet and an increasing number are abandoning physical paper altogether.
Funders
These are governmental, charitable or philanthropic operations, and learned societies or similar. They give away money for research, usually after an application process, which itself has followed a ‘call’, an announcement of an intention to fund work on a topic.
Forces
Two main forces drive the current situation: competition between players and poorly defined and inadequately policed rules by which those competitions are undertaken.
Competition
Our sector is defined by cut-throat competition. Researchers, journals, and institutions all compete in their own ways.
The entry qualification for an academic career is a PhD. The career-grade job title is Professor, but only 0.5% of people with PhDs in the UK will ever carry that title [10]. The competition for advancement is therefore intense, being as it is between people who are already sufficiently committed (and commonly already indebted and under-salaried) to have completed a PhD.
Because so few can advance, differences between all these bright, motivated people who have already committed so much, must therefore be found. Methods include measuring how many times a particular paper has been ‘cited’ (referenced) in other papers, which journals that researcher’s work has been published in, and how much research funding that researcher has been awarded.
It doesn’t seem to matter that the most cited papers are often the worst [Reference Serra-Garcia and Gneezy11], that counting citations incentivises researchers to undertake reviews (and to devise rating scales) rather than primary science [Reference Miranda and Garcia-Carpintero12], that papers published in ‘higher impact’ journals are often the worst [Reference Brembs13, Reference Brembs, Button and Munafo14, Reference Munafo15], or that measuring the money used for research has been likened to judging an artist’s merit only by the cost of the materials used.
Journals compete to publish the most ‘citeable’ papers. This is because a journal’s ‘Impact Factor’ is the mean number of times papers in that journal have been cited over a defined period (conventionally, citations received in a given year to papers published in the preceding two years). Impact Factors affect subscription numbers, readership, advertising revenue, and even whether a publisher wishes to continue publishing the journal. So papers with striking, eye-catching findings rise to the top. Sadly, these are the papers least likely to make a lasting contribution to knowledge [Reference Brembs13].
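As a hypothetical worked example (the figures are invented; the two-year window is the one conventionally used by citation databases), a journal’s 2024 Impact Factor would be:

citations received in 2024 to items published in 2022 and 2023, divided by the number of citable items published in 2022 and 2023 – for instance, 600 / 200 = 3.0.

A single heavily cited paper can therefore move the figure appreciably, which is part of why striking findings are so prized.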
Institutions compete for researcher ‘talent’. This is because, as you will have seen above, ‘talent’ is taken to equal grant success. It is not immediately obvious why institutions would want to administer large research grants if the money is all to be spent on researchers’ salaries and equipment anyway. The specifics differ between places and times, but currently in England and Wales a university charges most funders approximately double the out-of-pocket costs of employing researchers on a grant. A ‘successful’ academic therefore brings in lottery-win-scale additional, unexpected funds for their institution. This is desirable in a sector where institutions make a loss on teaching many biomedical courses.
Poorly Defined and Inadequately Policed Rules of Competition
Hopes of advancement for academics rest partly on being cited by other researchers. This is intended to be an indicator of the quality and influence of the work. But what if it is the worst papers which are cited most [Reference Serra-Garcia and Gneezy11]? And what if it is possible for researchers to agree to cite each other’s work, to boost their citation scores collaboratively? And what if no-one were to notice that a currency which costs nothing to spend is, in reality, worthless? This phenomenon is so common it has a name. It’s called a ‘citation club’ [Reference Biagioli16].
Advancement prospects are also enhanced by being listed as an author on papers. This is another currency which costs nothing to spend, so it will surprise nobody to see the number of authors on papers increasing steadily over the years [Reference Plummer17]. This has a name too. It’s called ‘gift authorship’, and there are even documented cases of people selling authorship on their papers to desperate young academics wishing to progress against these seemingly impossible odds [Reference Else18].
Finally, as seen above, papers with positive, striking, and interesting findings are more likely to be published, more likely to be published in a prominent journal, and more likely to be cited. This heavily incentivises researchers to do wrong rather than right, and would explain the shape of the graph above. There is no process by which poor practice or outright fraud is screened for, and even when an investigation is underway, there is a regrettable tendency for the benefit of the doubt to be offered. In a recent episode of the Freakonomics podcast the wry double observation was made that the rational academic would cheat, and the great majority of those cheating academics will not be caught and will therefore enjoy a successful career [19, 20].
Journals, similarly competing for survival by publishing citeable papers, are incentivised not to scrutinise papers with striking, exciting findings too carefully. It is no surprise to see both low standards of reviewing and an increase in retractions, especially from prestige journals [Reference Steen, Casadevall and Fang21, Reference Van Noorden22].
Institutions, locked into a business model of teaching undergraduates while being unable to charge what it costs to do so, seek superstar researchers to bring in big grants. They may also seek to influence their employees in post to maximise grant income. Some institutions set explicit annual grant ‘capture’ targets for researchers, with threats of redundancy if these targets are missed [Reference Moriarty23]. Lives have been lost as a result of this practice [Reference Parr24].
Behaviours
We have seen that the forces of competition and poorly defined, poorly policed rules act on the players to bring about a host of harmful behaviours, including research fraud and other poor practices. These were touched upon above, and it is important to note that there may well be hundreds or thousands of them altogether, many as yet undescribed. The Catalogue of Bias is an attempt to describe all the types of bias which may apply to medical research. It currently lists 67 biases but is open to submissions, and of course bias is only one part of poor research integrity [25].
It’s easy to understand how a bad person might act badly to get what they want. It’s fairly easy to understand how a good person might find themselves acting badly out of the desperation of competition, the need to succeed, and/or the belief that everyone else is doing it. I don’t think either of these factors is the main driver of what we see. I think it is selection: Researchers who cannot tolerate the culture of over-promoting their own work and of rule-bending leave. Researchers who scrupulously do their jobs are overlooked for promotion, recommended for redundancy, or their contracts simply run out. What remains are the people prepared to do whatever is required of them to survive, without their needing to know what it is they are doing or even why they are doing it. They might simply have learnt it from their elders. They might never have thought about it all that much.
Synthesis
All the factors seem to reinforce and support each other: An idealist researcher is soon … not a researcher. A perfectly scrupulous Journal or University also risks extinction. This is likely inevitable, because only robust systems can come into existence and persist on their own. If the system could be dismantled by a single reform or change, that would probably have happened by chance by now, and we would not have this problem.
However, there are numerous signs of change: Since 2005, all Randomised Controlled Trials have had to be pre-registered, so that their existence cannot be denied and their intended outcome measures cannot be switched later. This was a tremendous reform, but unfortunately the only systematic attempt to examine outcome switching found it to be rife, with denial, delay, and indifference from the journals when it was brought to their attention [Reference Goldacre26]. In addition, a 2018 review of studies searching for pre-registrations of published RCTs found lack of registration also to be rife [Reference Trinquart, Dunn and Bourgeois27], even in 2015.
In the early 1990s a legitimate concern was the poor reporting of Randomised Controlled Trials, which made replication impossible from the paper alone and made analyses opaque and vulnerable to alteration, both conscious and unconscious, to achieve a desired result. In 1996 the Consolidated Standards of Reporting Trials, the product of two separate groups, were published. Known as CONSORT, the standards were quickly recommended by the International Committee of Medical Journal Editors (ICMJE) and the World Association of Medical Editors (WAME), two bodies which shape the ‘instructions to authors’ documents published by medical journals. The intention was to improve scientific reporting, and therefore conduct, and therefore the reliability of findings, and therefore medical knowledge, and therefore the health of the public. Yet by 2003, only 20% of major journals recommended its use. In the 21 years since, it is likely this situation has improved, and there are now similar reporting standards for other study designs, collected (including CONSORT) under the EQUATOR banner (Enhancing the QUality And Transparency Of health Research [28]), so all researchers have the opportunity to know, and all journals the opportunity to insist upon, agreed and reliable reporting standards. However, adherence to these guidelines remains poor, with 86% of studies examining the issue reporting inadequate adherence [Reference Samaan29].
The Cochrane Collaboration (now simply Cochrane) was founded by the obstetrician Iain Chalmers in 1993, out of his frustration at the lack of adequate systems for collating all available research addressing particular clinical questions. Cochrane teaches how to undertake high quality systematic reviews, including how to find unpublished studies and assess their risk of bias, and publishes those reviews. All are publicly available from the Cochrane Library, and all are pre-registered [30]. Commendably, Cochrane also has a group looking at research prioritisation [31], an effort to bring more rationality to what funders choose to fund. Cochrane has had some controversy over its thirty-year history, has lost much of its funding, and its rate of production is down, but it still exists and Cochrane Reviews remain the global gold standard for systematic reviews of medical questions.
The Centre for Open Science [32] is a growing initiative for root-and-branch reform of scientific practice and publication. It publishes guidelines for journals’ transparency and openness, rates journals for adherence to those guidelines [33], and runs the Open Science Framework, a free online tool to improve the openness, accountability, reliability, and value of scientific activity at every stage, from idea generation and literature searching through study design, data collection, analysis, and reporting. It is increasingly popular with forward-looking researchers, institutions, funders, and journals [34]. It does not have universal support, however, and take-up has been slower than it might have been [Reference Norris and O’Connor35].
The San Francisco Declaration on Research Assessment (DORA) [36] is a 2012 initiative promoting the abandonment of low-validity, poorly targeted journal-level metrics such as the Impact Factor as tools for evaluating researcher quality or contribution.
The above is a small subset of what is being done to improve the integrity of scientific research. Each initiative seems a self-evidently good thing to its supporters, and each is delayed, ignored, undermined, or destroyed (to a greater or lesser degree) by the very forces which necessitated its creation in the first place. Like much change, progress is erratic and unpredictable. It is gathering pace, though, and there are reasons for some optimism.
What Can You Do?
I believe the first and most important political act is to talk. Not everyone can be an activist, a politician, a reformer, or a writer, but everyone can talk about this at the dinner table, at work, or in the pub, to spread the word and to be part of shifting the Overton Window [37] of acceptable discourse. Nothing can change until, as a community, we make it impossible for decision-makers to minimise this issue.
Everyone can also know enough to accept change when it comes and not find themselves influenced by the very natural tendency to resist change just because it is change [Reference Norris and O’Connor35].
Everyone can get behind reforms which will improve things, in the areas over which they have influence. Hundreds of these reform efforts exist already, each aiming to fix one or more small parts of the bigger problem. More appear constantly. As I note above, there are reasons for some optimism.
Everyone can defend rationality and reasonableness from the forces of anti-science. The problems of poor research integrity are serious and must be addressed, but science remains the best way to determine and share highly generalisable truths for the common good. Science is of enormous value and is humanity’s greatest creation. It needs your help to reach its potential.
