12.1 Introduction
In this penultimate chapter, we turn to the process of disseminating the findings of corpus research in health communication. Readers of this book are likely to have access to advice on publishing within their own discipline from other sources. Therefore, in this chapter we focus particularly on (1) academic dissemination outside our own discipline, specifically in medical journals; (2) non-academic dissemination, including engagement with healthcare practitioners, patient groups, and the media; and (3) potential evidence of influence on policy and practice (known as research ‘impact’ in the UK), and the difficulties of achieving and documenting such impact. The latter two aspects of doing research have become increasingly important for securing funding in the UK and other countries over the last decade, and are also the subject of debate (for linguistics, see McIntyre and Price, 2018, 2023; Mullany, 2020).
Here we present two case studies. The first concerns the project on NHS feedback that we have already discussed in Chapters 2, 6, and 11. The second relates to the project on metaphors and cancer that we have already discussed in Chapters 5 and 9. One way in which these two projects contrast is that the former was initiated by an approach from an external partner, and the researchers then worked with that partner throughout (although not, as we will explain, with the same people). The latter project did not involve a single pre-established partner; instead, the researchers worked both proactively and reactively to reach and interact with different potential groups of stakeholders. In both cases, what can be described as successful outcomes were achieved while facing partly unexpected challenges and taking up opportunities as they arose along the way.
12.2 Working with an External Partner: The NHS Feedback Project
This section is concerned with the experiences that two of the authors (Baker and Brookes) had when engaging with an external partner who had invited them to carry out work on data that the partner owned. In theory, dissemination in such a context ought to be relatively straightforward, as the researchers had been approached by the organisation, which clearly wanted them to analyse its data. However, we also discuss a number of complicating factors and unforeseen events which made the engagement process more challenging than expected.
12.2.1 Building Relationships and Achieving Impact
The project in question involved analysing a 40-million-word corpus of patient feedback and health practitioner responses that had been posted on the public NHS Choices website. A senior member of the Patients and Information Directorate at NHS England contacted the CASS research centre and asked whether CASS members would be willing to carry out a corpus-assisted discourse analysis of the dataset, setting the researchers 12 questions (described in Section 2.4). The team was able to obtain funding from the ESRC to employ a full-time researcher for 18 months to assist with the analysis. During the project, there were regular meetings between CASS and members of the NHS England team, at which the CASS team reported their findings and demonstrated the methodological techniques that they had used on the corpus.
There was a delay of several months between the first contact with NHS England and the point at which the analysis could begin. The delay was due to waiting for the funding application to the ESRC to be evaluated and processed, followed by the period of advertising and appointing the research assistant. As a result, by the time the researchers were ready to start the project, the contact person at NHS England had moved on to another post within the organisation, and the team that dealt with NHS feedback had been renamed. This meant that the researchers at CASS started the project with an unfamiliar contact person who did not know much about corpus linguistics or the reputation of the research centre. In large organisations, this kind of ‘churn’ is common, as people seek promotion or move to and from other organisations, and it is something to bear in mind when working with external partners. The researchers had to establish a relationship with a newly appointed contact person, which meant that early meetings involved explaining their methodological approach to an extent that had not been anticipated.
Towards the end of the 18-month lifetime of the project, the new contact person also moved to a different position, so the researchers were assigned a third person to report to. Fortunately, in both of these changes the new contact person was open to working with CASS, and the third contact person arranged for the researchers to work on a second dataset, consisting of feedback relating to cancer care (see Chapter 6). Therefore, despite the changing nature of the organisation, it was possible to maintain a good relationship with key individual members.
Not every NHS England staff member viewed the involvement of CASS so favourably, though. The researchers gave presentations at a range of NHS England meetings, with different people present, from different units. At one early meeting, the researchers spoke to a staff member who revealed that his unit had recently invested a large sum of money in a piece of software that was going to carry out all the analysis of NHS feedback without human intervention. When the researchers described the approach taken by CASS – using corpus software to identify the relevant aspects of the data, which would then need to be analysed and interpreted qualitatively by humans – the staff member made it clear that he was not interested in working with them, and no more meetings with this particular person took place. At another meeting, where the researchers presented some of their findings, a member of the NHS England team seemed unimpressed. He implied that some of the findings were irrelevant because they confirmed results already obtained through his own approach, and then that another finding was irrelevant because it differed from what he had found.
Working with a large organisation can involve navigating complex human hierarchies in which sometimes long-standing workplace rivalries and alliances already exist and are unlikely to be made transparent at the outset. Compatibility in terms of interaction style and personality might play a much larger role than expected in whether an outside researcher is accepted by members of the organisation. It should be borne in mind that there may be numerous elements of the ongoing relationship over which researchers have little control.
During their presentations to NHS England, however, the researchers generally felt that their findings were taken seriously and welcomed. One of the presentations, involving a demonstration of a step-by-step corpus analysis of the data, was recorded and placed on the NHS intranet as training for others who wanted to follow the procedures. Team members who sat in on presentations were usually engaged, asking helpful questions or making useful observations at the end. Nevertheless, it proved difficult to evidence the impact that the CASS team’s analysis had made on the organisation as a whole. For example, one of the research findings was that, of all the different NHS staff members who received feedback from members of the public, receptionists received by far the most negative evaluations, often being described as rude or unhelpful, refusing to give appointments or asking invasive questions about patients’ medical conditions. The analysis indicated that this was chiefly due to patients misunderstanding the roles of receptionists, who were implementing booking systems that they had not designed and were sometimes required to ask questions about medical conditions in order to direct the patient to the most appropriate doctor. On the other hand, dentists often received extremely positive feedback from patients; the researchers concluded that this was because patients were often afraid of experiencing pain during a visit to the dentist, and when this did not happen, they were pleasantly surprised. The feedback regarding both receptionists and dentists was thus governed by patient expectations that were not met. The team from CASS advised that, rather than sending receptionists on social skills training courses, it would be sensible to engage in a public information campaign to educate people about receptionists’ roles. It was also noted that a campaign to make people less afraid of dentists might reduce positive feedback about them, which raises an interesting dilemma.
These kinds of findings were received positively by the NHS England team, who reported how interesting they were, although it is not clear what they did with the information. It was not possible to ascertain the extent to which recommendations made by the CASS team would be implemented throughout the NHS, nor was it clear whether the methodological approaches that had been outlined to the team would be adopted to analyse subsequent sets of feedback. The NHS contact did provide a written statement outlining how useful the corpus approach and findings had been, which was used when reporting back to funders. The researchers also felt that they had made an impact on the NHS England team. However, showing that the NHS itself had changed as a result of the analysis was much more difficult, due to the unwieldy and nebulous nature of such a large organisation and the length of time it can take for new recommendations to be implemented throughout it.
12.2.2 Disseminating Findings
In terms of wider dissemination, the findings were published as a monograph (Baker et al., 2019) and a journal article in BMJ Open (Brookes and Baker, 2017). This journal was targeted because it was felt to be particularly likely to be read by medical practitioners, as opposed to academics. Additionally, the researchers used their university’s press office to put out a press release about their findings on 2 May 2019. This resulted in an article in the Mail Online (3 May 2019), which had an average daily readership of 2.18 million people between April 2019 and March 2020. Another article written by the team appeared in The Conversation (6 May 2019), which had a monthly readership of 10.7 million people in September 2019. The findings were also reported on several other websites, including Yahoo.com (6 May 2019), Practiceindex.co.uk (7 May 2019), Metro (7 May 2019; https://metro.co.uk), Dentistry.co.uk (9 May 2019), and The Independent (13 May 2019; www.independent.co.uk).
Writing a press release for a piece of corpus-based health research requires ‘unlearning’ some of the writing skills that have been accrued during an academic career. The press release that the researchers wrote was relatively short (760 words) and used non-technical language. Each sentence was written as a separate paragraph. The researchers did not provide a summary of the entire project (which would have required answers to 12 research questions) but instead focussed on a single finding which was felt to be the most likely to interest the public. Galtung and Ruge’s (1965) news values framework was considered in order to identify which of the findings would be likely to attract readers’ attention. The pair of findings about people not liking receptionists but liking dentists was deemed to be a good possible story because it keyed into the news values of negativity (people disliking receptionists), familiarity (most people have made an appointment to see a doctor), and unexpectedness (the reason why people thought receptionists were rude was not what we would expect). The researchers did not make any reference to corpus linguistics or technical terms like collocation, keywords, or concordances in the press release, as that would have required further explanation which would have detracted from the main point of the story. Instead, in the fourth paragraph the researchers wrote, ‘The Lancaster University linguists used computer software to identify frequent or unusual patterns of language in the data but had to interpret the patterns themselves by reading hundreds of examples of feedback.’
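For readers curious about what lay behind the phrase ‘frequent or unusual patterns’, the sketch below illustrates one such procedure in corpus-linguistic terms: a basic keyword analysis, in which word frequencies in a target corpus are compared with those in a reference corpus using a log-likelihood measure. This is a minimal illustration offered under stated assumptions rather than the project’s actual pipeline; the file names are hypothetical.

```python
# Minimal keyword-analysis sketch (illustrative only; file names are hypothetical).
import math
import re
from collections import Counter

def tokenise(text):
    # Very simple tokenisation: lower-case alphabetic strings (plus apostrophes).
    return re.findall(r"[a-z']+", text.lower())

def log_likelihood(freq_target, freq_ref, total_target, total_ref):
    # Dunning-style log-likelihood for one word across two corpora.
    expected_target = total_target * (freq_target + freq_ref) / (total_target + total_ref)
    expected_ref = total_ref * (freq_target + freq_ref) / (total_target + total_ref)
    ll = 0.0
    if freq_target > 0:
        ll += freq_target * math.log(freq_target / expected_target)
    if freq_ref > 0:
        ll += freq_ref * math.log(freq_ref / expected_ref)
    return 2 * ll

target = Counter(tokenise(open("feedback_sample.txt", encoding="utf-8").read()))
reference = Counter(tokenise(open("reference_sample.txt", encoding="utf-8").read()))
total_target, total_ref = sum(target.values()), sum(reference.values())

# Rank words by how unusually frequent they are in the feedback data.
keywords = sorted(
    ((word, log_likelihood(target[word], reference[word], total_target, total_ref))
     for word in target),
    key=lambda pair: pair[1],
    reverse=True,
)

for word, score in keywords[:20]:
    print(f"{word}\t{score:.1f}")
```

The resulting ‘keywords’ are only a starting point: as the press release put it, the patterns still need to be interpreted by reading large numbers of examples in context.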
Recommendations for a hypothetical public information campaign about receptionists were incorporated into the press release, as the following extracts show.
‘Rather than suggesting that receptionists need retraining … we instead noted that feedback is very much linked to expectations and constraints around different staff roles,’ says Professor Paul Baker, who led the research.
‘So jobs that involve saving your life or delivering a new life are seen as more impressive than the more support-based work carried out by nurses and receptionists – feedback has a role bias in other words.’
The poorer evaluation of receptionists was strongly linked to people taking it personally when the receptionist could not give them an immediate appointment or was required to ask the patient questions to conduct triage.
‘In other words they are often taking the flak for things that are not their fault but actually are indicative of patient frustration at bigger systemic issues – fewer appointments and longer waiting times are more likely to be the result of funding shortages that are beyond the receptionist’s control,’ added Professor Baker.…
‘Patients are then pleasantly surprised when the experience is not painful and so they leave excellent feedback as a result of their negative expectations not being met,’ explained Professor Baker. ‘While this is good for dentists, it raises a dilemma if we simply measure NHS success in terms of patient ratings.
‘If the NHS embarked on awareness campaigns to counter fears about dentists, then more people would be likely to visit the dentist, although ultimately patients would be likely to end up being pleasantly surprised less, resulting in the amount of positive feedback around dentists perhaps going down over time.’
It could be argued that to an extent, the reporting of findings in the press release would have functioned as a public information campaign in its own right, as it would have been seen by potentially millions of people in the UK, from a range of different demographic groups. The aim in writing the press release was to encourage people to be less resentful (or abusive) towards receptionists and also be less afraid of visiting the dentist. However, while evidencing dissemination (how many people find out about a piece of research) is relatively easy, evidencing impact (the extent to which the research changes people’s lives) can be more challenging. It would have taken a further study to investigate whether people’s attitudes and behaviours towards receptionists and dentists had changed, and this was not something the researchers had funding to do, although for future projects it would be useful to consider how impact could be evidenced and to build this into funding proposals from the outset.
Finally, the researchers experienced an additional hurdle in the process of publishing findings when working with an external partner such as NHS England, which is somewhat different from working independently or with other academics. Publications needed to be signed off by the external partner. This process took longer than expected, particularly because, with a large, complex organisation like the NHS, no single person was able to give permission to publish. Rather, the proposed publications were sent to multiple people at the NHS, who were all required to comment on them. Many of these people had other commitments, and it was to their credit that they took the time to read the research drafts and provide their feedback and questions. For a later piece of research that the CASS team wanted to publish, the NHS assigned a staff member whose job was to ensure that the piece used language in a sensitive way, particularly in relation to different identity groups. The paper to be published involved the analysis of language differences between men and women, and the NHS reader had quite specific requirements relating to how terms like sex, gender, man, male, woman, and female were used. This resulted in the terminology of the paper being changed accordingly, along with a delay, as the reader had numerous other documents to examine. It is therefore worth bearing in mind that publications resulting from collaboration with external partners may take longer to come to fruition than usual, particularly if they also have to go through the usual process of academic peer review at a journal.
In summary, the collaboration with the NHS was one of the most rewarding pieces of research carried out within CASS, and it was also among the most widely publicised, with articles in several national news outlets. This was additionally one of the first projects in which members of CASS worked with an external partner. As noted, a key finding of the research was that the prior expectations of people who left feedback strongly shaped their responses, particularly in cases where these expectations did not match reality. It is ironic, perhaps, that the researchers’ own expectations of working with an external partner did not match reality either. The researchers were in uncharted territory during this project, and there was nobody who could advise on what their expectations should be and how to respond to the challenges that appeared. Much of the time, the response involved improvisation or simply ‘going with the flow’ of the partner organisation. We hope that this section will alert readers to some of the ways in which such a partnership might develop. However, we would not expect future research with external partners to work in the same way, and more than anything, we would advise analysts to expect the unexpected when working in similar contexts.
12.3 Disseminating the Findings of a Corpus-Based Project on Metaphors and Cancer
In this section we present some additional experiences of disseminating the findings of corpus-based research on health communication by referring to the ‘Metaphor in End-of-Life Care’ (MELC) project on metaphors and cancer, which has already been discussed in Chapters 5 and 9. In contrast with the example discussed in the previous section, the MELC project did not involve a collaboration with a single partner; rather, the funding for the project required interactions with stakeholders in cancer care and end-of-life care, as well as some evidence of what changes had resulted from these interactions.
As a reminder, the project was inspired by ongoing debates about the dominance and potential pitfalls, especially for patients, of metaphors whereby having cancer is presented as a fight or a battle with the disease – or, as the project team labelled them, Violence metaphors for cancer. The team combined manual analysis with the use of a range of corpus tools to study patterns in metaphor use in a 1.5-million-word corpus of interviews with and online writing by members of three different stakeholder groups in cancer care: patients, family carers, and healthcare professionals (Semino et al., 2018b).
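As a rough illustration of the quantitative side of this kind of analysis, the sketch below counts candidate Violence-metaphor expressions per stakeholder group and normalises the counts per 10,000 words. It is a simplification offered under stated assumptions: the actual study relied on manual metaphor identification in context supported by corpus tools rather than simple string matching, and the file names and the short term list are hypothetical.

```python
# Illustrative sketch only: simple string matching, hypothetical files and term list.
import re
from collections import Counter

group_files = {
    "patients": "patients.txt",
    "family carers": "carers.txt",
    "healthcare professionals": "professionals.txt",
}

# A deliberately tiny, non-exhaustive list of Violence-related terms.
violence_terms = {"fight", "fighting", "fought", "battle", "battling",
                  "war", "warrior", "beat", "beating"}

for group, path in group_files.items():
    with open(path, encoding="utf-8") as handle:
        tokens = re.findall(r"[a-z']+", handle.read().lower())
    hits = Counter(token for token in tokens if token in violence_terms)
    total_hits = sum(hits.values())
    per_10k = 10000 * total_hits / len(tokens) if tokens else 0
    print(f"{group}: {total_hits} candidate hits, {per_10k:.1f} per 10,000 words; "
          f"most frequent: {hits.most_common(3)}")
```

Raw counts of this kind only flag candidates; each instance still has to be examined manually to confirm that it is metaphorical and to assess whether it is used in an empowering or disempowering way.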
As discussed in Chapter 9, the analysis provided evidence of the potentially harmful effects of Violence metaphors, particularly when patients perceive themselves as responsible for ‘losing the battle’ when they have no prospect of recovery. However, a systematic analysis of occurrences of Violence metaphors in context revealed that, for some patients and depending on the circumstances, these metaphors can be empowering rather than disempowering. The same observations about the importance of context and individual preferences and circumstances applied to Journey metaphors for cancer, which are sometimes perceived to be an alternative to Violence metaphors (e.g., ‘my cancer journey’). The analysis of the MELC corpus revealed that Violence and Journey metaphors are the most frequent types of metaphor for all three groups of stakeholders in both genres represented in the corpus (Semino et al., 2018a). Eight additional main types of metaphor were also identified:
Restraint: ‘I feel like a prisoner with all the rules about don’t eat this don’t do that.’
Animal: ‘I am so sorry to hear that the beast is back.’
Openness: ‘From then on we were open we talked about it.’
Sports and games: ‘Caring for somebody with a terminal illness is more of a marathon rather than a sprint.’
Religion and the supernatural: ‘I just feel like a sitting duck waiting for the green eyed monster to come up and swallow me whole.’
Obstacle: ‘Add the dispensery [sic] pharmacist to the GP’s receptionists and you’ve got the two big blockers to me getting what I need.’
Wholeness: ‘You realise you are only half a person.’
Machine: ‘I’m not really happy about being back on the treadmill of treatment.’
The project team was interdisciplinary: the investigators included three linguists, a computer scientist, and a specialist in cancer and palliative care. In addition, the researchers regularly interacted with the Lancaster University Research Partners Forum – a group of about a dozen people from northwest England with experience of cancer as patients or carers.
The MELC team aimed to disseminate the findings of the study to different academic audiences, including linguists and healthcare researchers, as well as to practitioners in healthcare and in cancer-related charities, in order to maximise the potential impact of the work on communication about cancer and the experiences of patients. To pursue this goal, they targeted journals from different disciplines as venues for publication and carried out a programme of engagement activities that included public lectures, training sessions for healthcare professionals and charity staff, blog posts, podcasts, and interactions with the media. This project therefore differed from the one described in the previous section, in that it was not commissioned by a specific stakeholder but rather aimed to reach a variety of different stakeholders, both proactively and, as the project became better known, reactively.
Broadly speaking, these goals were facilitated by the fact that the project involved large quantities of data and that the analysis was both quantitative and qualitative – in other words, by the adoption of corpus-based discourse analysis. This meant that the project could be taken seriously by audiences who favoured quantitative research on large datasets, as well as by audiences who favoured in-depth qualitative studies of authentic communication in context. In addition, the presence on the team of a highly respected healthcare researcher (Sheila Payne) led to initiatives that would have been difficult to achieve otherwise, because of the additional experience and credibility that this Co-investigator brought to the team. On the other hand, these goals were sometimes hampered by the fact that corpus linguistic methods were not known to many of the audiences targeted by the team, as well as by the lack of demographic information in the data from online forums. On many occasions, the team was asked whether there were any differences in metaphor use between men and women, older and younger people, people with different kinds or stages of cancer, and so on. When faced with these questions, the team could provide some general observations but no solid evidence of differences or lack of them, because of the anonymity afforded by the online forums from which data was drawn and the additional ethical guidelines that the project needed to follow in terms of anonymisation.
In the rest of this section we focus on three specific aspects of the project team’s experiences in disseminating their findings beyond linguistics and beyond academia, and in trying to achieve concrete improvements in the support of patients and communication about cancer: writing for a healthcare journal, dealing with the media, and going beyond corpus data to create a metaphor-based resource for communication about cancer.
12.3.1 Writing for a Healthcare Journal
The project’s findings were published in a monograph (Semino et al., 2018b), in several linguistics journals (e.g., International Journal of Corpus Linguistics and Applied Linguistics; Demmen et al., 2015; Semino et al., 2018a), and in one healthcare journal (BMJ Supportive and Palliative Care; Semino et al., 2017). This subsection focuses on the latter.
The original idea of publishing some of the findings in BMJ Supportive and Palliative Care came from the healthcare researcher on the team, Sheila Payne. She suggested that the journal’s readership would be interested in the quantitative and qualitative analyses of Violence and Journey metaphors in the data, and particularly in the finding that the most important consideration in communication about cancer is whether particular uses of metaphor are empowering or disempowering for patients in context, rather than what type of metaphor is used.
The biggest challenge at this point was to produce an article that followed the journal’s guidelines, which were typical of healthcare and medical journals, particularly with regard to length. Linguistics journals typically have word count limits between 7,000 and 9,000 words for research articles. For BMJ Supportive and Palliative Care, however, the maximum length was 3,500 words. The lead author of the article and co-author of this book (Semino) struggled with producing a draft of the right length until Co-investigator Payne gave the following advice: ‘Imagine a General Practitioner reading the article while eating a sandwich during their lunch break’. This advice helped Semino produce a draft of the appropriate length; it is also good advice generally, especially when writing for an audience outside one’s discipline.
When it was submitted, the paper received largely favourable reviews, primarily because of the relevance of the findings to the journal’s audience and because of the combination, mentioned earlier, of quantitative and qualitative evidence from a large corpus. This particular paper was in fact the most read in the journal for the first 12 months after publication and, at the time of writing (April 2025), has more than 32,000 downloads.
Overall, corpus-based discourse analysis has considerable potential to cross disciplinary boundaries, as has been shown not just by this particular paper but also by articles published in a variety of journals by corpus linguists in our team at Lancaster and at many other institutions in the UK and across the world (e.g., Brookes and Baker, 2017; Jaworska, 2018). The effort involved in adapting to different academic conventions and different audiences is well worth it in terms of the additional reach and potential influence of the research findings.
12.3.2 Dealing with the Media
The MELC project attracted a considerable amount of media attention, with reports and interviews published in news outlets such as the Daily Mail in the UK and the New York Times in the US. This was a result of a combination of proactive initiatives on the part of the team and responses to media queries. For example, at an early point in the project, a press release was issued by the project’s funders (the UK’s Economic and Social Research Council) and Lancaster University. This led to several requests for interviews from national UK newspapers, as well as a couple of reproductions of the original press release in online outlets from different parts of the world. In other cases, media requests were received at times when communication about cancer, and specifically metaphors in communication about cancer, became topical, for example, when a celebrity or a politician used a particularly striking metaphor or explicitly rejected Violence metaphors for cancer.
Inevitably, journalists aimed to maximise the newsworthiness of the findings. However, in doing so, they sometimes misrepresented the research. Some of the misrepresentations were due to attempts to simplify the message emerging from the research and to confirm the audience’s expectations about the negative consequences of Violence metaphors in particular. For example, headlines would sometimes suggest that the project’s main finding was that Violence metaphors should never be used, when in fact the analysis suggested that such metaphors can be both empowering and disempowering for patients:
Mind your language: ‘Battling’ cancer metaphors can make terminally ill patients worse
Cancer should not be called a ‘battle’ say language experts who fear metaphor makes people feel guilty if their condition gets worse
The most extreme misrepresentation in this respect occurred in a book about metaphor and persuasion, where the author attributed to the project team the claim that people who used Violence metaphors had a poorer outlook than people who did not. This claim was such a serious and potentially dangerous misrepresentation of the project’s findings that the team found a way to have it removed from the ebook version and subsequent print runs (Demjén and Semino, 2020).
A more specific kind of misrepresentation relates to the kind of data and findings that are typically involved in corpus linguistic projects. As corpus linguists, we typically use word counts or, even more precisely, token counts to report the size of corpora. This is not something that people outside our discipline are used to. For example, Demjén and Semino (2020) recall an occasion when a table reporting word counts for each of the sections of the MELC corpus initially caused some consternation (see Table 9.3): an audience of healthcare researchers interpreted figures of over 80,000 in the rows corresponding to the interview sections of the corpus as the number of interviews that had been conducted, rather than the number of words in those sections. Something similar happened with the headline of an article in the Mail Online, which reported some project findings on the use of humorous metaphors in the data. The headline read:
Laughter really is the best medicine for cancer patients: sufferers mock ‘Mr C’ to get through their illness and create a sense of community, reveals study of 1.5 million forum posts
In fact, the project team analysed a corpus of 1.5 million words, and while the number involved was reported accurately, the distinction between words and posts was lost on the journalists involved in producing the headline.
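The distinction that was lost here is easy to state in corpus terms: the size of a corpus is reported in running words (tokens), not in texts or posts. The minimal sketch below, which assumes a hypothetical layout of one forum post per text file, shows how the two figures are obtained and why they can differ by orders of magnitude.

```python
# Words versus posts: a minimal sketch assuming one post per file in 'posts/'.
import re
from pathlib import Path

post_files = list(Path("posts").glob("*.txt"))

# The corpus size reported in publications is the running word (token) count,
# not the number of posts.
word_count = sum(
    len(re.findall(r"\S+", path.read_text(encoding="utf-8")))
    for path in post_files)

print(f"{len(post_files)} posts containing {word_count:,} words in total")
```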
On a different occasion, the misrepresentation was more concerning. The Principal Investigator on the project (Semino) was interviewed by a New York Times journalist. The journalist was particularly interested in the finding that, in the corpus data, patients used Violence metaphors more frequently than healthcare professionals. When the article was published, the headline read:
Fighting words are rarer among British doctors
Here the crucial point that the comparison was between British doctors and British patients was lost, and the headline could therefore be read as reflecting a comparison between British doctors and US doctors. This was, of course, inaccurate, because the project did not involve a comparison between the UK and the US, never mind a finding concerning that particular difference. The mistake, we should add, was made by the sub-editor who created the headline; the writer of the piece was as distressed about the inaccuracy as the researchers were. Unfortunately, the headline was then picked up on Twitter (as it was then called), and the inaccurate finding was amplified. The project team therefore had to issue a clarification, even though, in cases such as this, the clarification usually receives much less attention than the original misleading report. This incident then inspired the researchers to carry out the comparison that the headline assumed had already been made. That comparison did not in fact reveal a difference in the frequency of Violence metaphors between online writing produced by UK and US doctors (Potts and Semino, 2017).
12.3.3 Going beyond Corpus Data to Create a Communication Resource
The most enduring and successful outcome of the MELC project is a resource for communication about cancer called ‘The Metaphor Menu for People Living with Cancer’ (http://wp.lancs.ac.uk/melc/the-metaphor-menu/). This was not a planned output of the project. It was initially inspired by a question posed to the project team by a member of the previously mentioned Lancaster Research Partners Forum, along the lines of ‘Are you going to produce something useful based on the research, such as a list of good and bad metaphors, so that people know what to do?’
When this question was asked in a meeting with the group, the project team replied that this was not possible, because the analysis had shown that the same metaphors work differently for different people and that context makes a big difference to whether a particular type of metaphor is empowering or disempowering. However, after more thought and discussion, the team came up with an idea that could meet the spirit, if not the letter, of the question.
The Metaphor Menu is a collection of 17 different metaphors for cancer, accompanied by images. They include some Violence and Journey metaphors, but also metaphors to do with nature, music, unwelcome visitors, and so on. The idea is not to prescribe which metaphors to use but to provide a wide range of possible metaphors as tools or resources for people to make sense of their experiences and to communicate about them, as well as inspiration to create their own metaphors. Indeed, the Metaphor Menu is precisely intended to suggest that different people may prefer different metaphors, and that the same metaphor will not appeal to everyone, as is the case with dishes on a restaurant menu.
The Metaphor Menu is recommended by Cancer Research UK and is being used in many different healthcare settings around the world. The process of selecting metaphors for inclusion, however, required considerable time and thought. The metaphors needed to be authentic (i.e., drawn from actual language use). Collectively, they needed to provide a wide variety of perspectives on cancer. They needed to be reasonably creative and striking. And, while they were not intended to sugar-coat the experience, they also needed not to be so negative as to potentially cause distress. Even the variety of metaphors identified in the MELC corpus did not provide enough suitable candidates to meet all of these criteria. Therefore, some of the metaphors in the Menu are from other sources, where the team had permission to include them. This flexibility was necessary to arrive at the best possible metaphor-based resource, even if it meant that the resource was not entirely based on corpus data.
Overall, the process of disseminating the findings of the MELC project was both challenging and rewarding, as with the NHS case study from the previous section. The MELC team had not initially expected many of the challenges and opportunities that came their way, but the final outcomes were worthwhile. Indeed, engagement with different stakeholders about the MELC findings is ongoing, and as we write this chapter, the Metaphor Menu is being extended as part of a new EU-funded project and versions of the Menu in other languages are being planned.
12.4 Conclusion
Looking back, the process of disseminating the findings of the two projects discussed in this chapter and throughout the book can be described as successful, in terms of publications within and outside linguistics, media reports, and even evidence of impact on, for example, practices in the NHS and communication about cancer. Indeed, these two projects were selected to be part of the ‘impact’ component of Lancaster University’s linguistics submission to the 2021 ‘Research Excellence Framework’ (REF, the national evaluation of research in the UK), and that component was awarded the maximum score.
What we hope to have conveyed in this chapter, however, is what lies behind outcomes that can eventually be described as successful. As we have explained, dissemination beyond linguistics and beyond academia involved skills, situations, and activities that we, as corpus linguists, were not necessarily prepared for and did not always get right from the beginning, from coping with the turnover of people in a partner organisation to trying to write for healthcare audiences or the general public.
We hope that by sharing the difficulties and compromises of our own experiences, we might prepare readers of this book for their own efforts, so that they can achieve their desired outcomes faster and more easily than we did. Overall, however, as we hope to have made clear, the hard work, setbacks, and compromises that we have described in this chapter were definitely worthwhile, and we all, as authors of this book, are proud of the reach and influence of CASS research on health communication over the years.