Policy Significance Statement
Personalised recommendations for goods and services are becoming increasingly common, but they depend on the collection of potentially sensitive user data, which users generally have limited control over. Our study aims to help develop better and more acceptable digital technology policies that improve health outcomes and public trust while preserving user data privacy. Based on more than 2000 survey responses from Hong Kong and London, we recommend persuading users about the benefits of data use for personalised health advice, enabling users to have granular control over data sharing, educating users on data use purposes and settings, designing new social environments such as data trusts for collective data management, and co-creating clear guidelines for diagnostic and generative artificial intelligence (AI) for public health purposes.
1. Introduction
Personalised digital health interventions are becoming increasingly viable. Health apps have been developed for a wide range of health issues, from chronic illness (J. A. Lee et al., 2018) to mental health (Alqahtani et al., 2022) and air pollution exposure (Che et al., 2020). Preliminary results indicate that personalisation can lead to improved health-related behaviours, although further research is needed to verify these results (Tong et al., 2021). Meanwhile, artificial intelligence (AI) is becoming cheaper, faster, and more accurate (D. Lee and Yoon, 2021; Alowais et al., 2023), demonstrating the potential to improve diagnoses and facilitate the design of personalised treatments (Johnson et al., 2021).
Personalised digital healthcare is purported to bring a multitude of benefits. One is patient centricity and empowerment: patients can conveniently receive tailored information and thereby become empowered to make better decisions about their own health (Odone et al., 2019). For this reason, personalisation appears to be a desired feature in health apps (Tang et al., 2015; Carter et al., 2018). Another is that innovations in this area can create economic value in the form of investments, profits, and job opportunities (Vicente et al., 2020). Additionally, if done carefully, health data can be securely managed while giving patients the autonomy to control how their data is used, leading to a trustworthy data governance environment (Vicente et al., 2020). These benefits motivate the development of new digital health interventions, as well as relevant policies.
However, personalisation requires the processing of health data. The sensitive and personal nature of health data poses privacy risks that can cause widespread public concern, which, in turn, can erode public trust in digital health interventions (Gille et al., 2022). This was demonstrated during COVID-19, when governments around the world attempted to roll out digital health technologies such as contact tracing apps that collected users' live location data in relation to landmarks or other users. This sparked public debate over how personal data would be used or misused, as well as whether people would have agency in deciding whether to use the technologies (Budd et al., 2020; Li et al., 2022). There is also growing wariness surrounding the role of research institutions and private companies in the handling of health-related data (Aitken et al., 2018; Braunack-Mayer et al., 2021), which has made the public more apprehensive about sharing their health data with organisations (Middleton et al., 2020). High-profile cases, such as the bankruptcy of the genetic testing company 23andMe, have fuelled concerns about the possible selling of consumer data without consent and the disproportionate consequences of misuse of such data for racial minorities (Kukutai, 2025), among other issues.
Failing to address these concerns could have severe implications not only for the adoption of new interventions, but also for broader societal issues, such as trust in government and in public health activities (K. Hogan et al., 2021). It is therefore important to study the factors that affect individuals' willingness to share their personal data to receive personalised health recommendations.
2. Theoretical framework
Many health apps generate personalised advice to help individuals make behavioural changes to improve their health and well-being in a variety of health areas, such as physical activity and fitness (Rabbi et al., 2015; Laranjo et al., 2021), alcohol consumption (Attwood et al., 2017), or diabetes management (Quinn et al., 2011; Årsand et al., 2012). However, apps can only prevent health crises or support the management of health conditions if users actively engage with them and are willing to share their personal data in order to receive personalised advice. Engagement with these apps and data sharing are forms of human behaviour requiring behavioural change (i.e., someone doing something differently, either by adopting or discontinuing a practice). In this case, the behaviour is adopting the practice of using an app to share personal data and receive personalised advice. Given the potential benefits, healthcare professionals, intervention designers, and policymakers are increasingly interested in supporting and encouraging this behaviour. This requires a clear understanding of the factors that influence it, specifically those that enable or hinder it (i.e., enablers and barriers). Identifying these drivers of behaviour change is therefore essential to designing better health apps and overarching policies that can encourage health-promoting behaviours.
One helpful framework for understanding behaviour and its influences is the Capability, Opportunity, Motivation model of Behaviour (COM-B), which has been widely used to identify influencing factors. Positioned at the core of the Behaviour Change Wheel (BCW) (Michie et al., 2011), COM-B highlights how behaviour is shaped by these three components, which are further divided into six sub-constructs. Capability includes both psychological factors (e.g., knowledge, behavioural regulation) and physical factors (e.g., skills, strength). Opportunity encompasses physical factors (e.g., resources, environment) and social factors (e.g., social norms, support) that enable behaviour. Motivation involves reflective processes (e.g., attitudes, intentions, identity) and automatic processes (e.g., emotions, habits). The COM-B model provides a comprehensive framework for understanding behaviour and identifying the factors that influence it. It also facilitates the selection of evidence-based intervention strategies (West and Michie, 2020), which can be directly aligned with the intervention types proposed within the BCW. The BCW integrates multiple behaviour change theories into a single model, offering a systematic approach to understanding behaviour and designing interventions.
Several research gaps remain in the behaviour change literature on health app use. For one, while theoretical frameworks for understanding behaviour and behaviour change like COM-B are increasingly being applied to digital health services (e.g., Chen et al., 2017; Issom et al., 2020; Mauch et al., 2021; Szinay et al., 2021), such studies are still scarce. For another, many studies focus on one specific, sometimes niche context, whether medical (e.g., a less common chronic illness), temporal (i.e., one moment in time), or political (e.g., a single country), raising questions about whether their results are generalisable or context-specific. Even fewer studies use the COM-B framework to understand privacy-related behaviours (Gerber and Stöver, 2023), especially how data privacy beliefs could facilitate or inhibit the adoption of digital health tools (Bondaronek et al., 2022). Further research is needed to understand how perceptions of data privacy can affect engagement with health apps and their effectiveness.
3. Research design and methodology
3.1. Study aims
In our study, we compared data collected from two different contexts to yield more comprehensive insights into how data privacy concerns can affect user engagement with personalised health apps. Our study uses respiratory health as a context for participants to consider, since many of the epidemics and pandemics that have drawn the most attention in recent decades (SARS, H1N1, MERS, and COVID-19) have been respiratory illnesses. However, we worded most of the questions to be about health apps in general so that the results would be broadly applicable. From a social standpoint, our study compares two cities that adopt different perspectives on trust and stakeholder interactions in the data governance space. Temporally, our study is situated in 2023, amid declining public concern about COVID-19.
3.2. Data collection and survey design
We designed an online survey consisting mostly of closed multiple-choice questions to assess the effects of various factors on citizens' willingness to use personalised health apps. We mapped the COM-B components to data literacy (psychological capability), control over data sharing (physical opportunity), social pressure to use the app (social opportunity), emotional responses to health and data issues (automatic motivation), and trust in entities and organisations regarding data governance (reflective motivation).
The survey was conducted in both Hong Kong and London to allow for a comparative analysis. These cities were chosen because they have similar populations of 7–9 million, with about 95% of adults in each having access to smartphones and the Internet (Office for National Statistics, 2021; Ofcom, 2024; Census and Statistics Department, 2024a). However, there are also significant social and political differences that may affect how health apps and data privacy risks are perceived across the two cities.
The survey was constructed in Qualtrics and distributed via two different online survey panels for Hong Kong and London in November 2023. The Hong Kong Public Opinion Research Institute (PORI) distributed the survey through its email list to its general population survey panel, whereas Dynata recruited participants via an associated get-paid-to website and used Internet Protocol (IP) addresses to ensure that the respondents were from London. Neither distributor had any involvement in the formulation of survey questions. More than 1000 adult participants were recruited in each city to ensure a sufficiently large sample. The survey was made available in the cities’ official languages, which were English for both and Traditional Chinese for Hong Kong.
The survey landing page was an informed consent form. Only participants who checked the statement indicating their agreement to participate were allowed to continue. Those who consented were then randomly and equally assigned to one of two variants of a mock-up design for an application claiming to generate personalised health advice based on the participant's personal data. The designs varied based on whether the putative app was presented as having been designed specifically for COVID-19-related advice or for respiratory health advice in general. Details on data processing were clearly written in a simplified privacy policy, which participants were asked to read carefully.
After viewing the app mock-up, the participants were asked to rate their agreement with several closed questions about their data literacy, respiratory health concerns, and trust in various entities. These questions were constructed and categorised as either Capability, Opportunity, or Motivation questions, and the answers were scored on a 5-point Likert scale from 1 ("Strongly Agree") to 5 ("Strongly Disagree"), so that higher levels of disagreement corresponded to higher numerical values. Participants were then asked to rate their willingness to use the application on another 5-point scale, from 1 representing "Completely Willing" to 5 representing "Completely Unwilling." Participants were also asked their age, gender, and level of education as demographic variables. Assigning the Likert scale values in this way allowed participants to read the most positive options first while preserving the intuitive interpretation of the linear regression results (i.e., positive coefficients indicate positive correlations between variables). Finally, participants were given a large text box where they could voluntarily share more detailed comments on the proposed app, or on health apps in general.
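To illustrate this coding convention, the following is a minimal Stata sketch; the variable and label names are hypothetical rather than those of our actual instrument:

```stata
* Hypothetical encoding of the 5-point agreement and willingness scales.
* Lower values correspond to more positive responses on both scales, so
* positive regression coefficients read as positive associations.
label define agree5 1 "Strongly agree" 2 "Somewhat agree" ///
    3 "Neither agree nor disagree" 4 "Somewhat disagree" 5 "Strongly disagree"
label define will5 1 "Completely willing" 2 "Very willing" ///
    3 "Somewhat willing" 4 "Not very willing" 5 "Completely unwilling"

label values understand_use agree5
label values willingness will5
```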
3.3. Data analysis methods
The COM-B variables and demographic variables were used as independent variables for linear regression in Stata, with willingness to use the application as the outcome variable (see the Data Availability Statement for the Stata code). Demographic variables were included in the quantitative analysis to account for between-group differences. Linear regression models were constructed for bivariable analyses with a single independent variable at a time, as well as for multivariable analyses using both the full model and a reduced model containing only the statistically significant variables from the full model. As a sensitivity analysis, ordinal logistic regressions were performed to check that the data was sufficiently well-behaved for the linear regression models to return robust results.
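As an illustrative sketch of these specifications (with hypothetical variable names; our actual .do file is available via the Data Availability Statement):

```stata
* Bivariable analysis: regress willingness on one predictor at a time.
regress willingness understand_use

* Multivariable analysis: all COM-B variables plus demographic controls.
regress willingness understand_use understand_risk control_use ///
    worry_infect worry_severe trust_expert trust_ai ///
    i.agegroup i.gender i.education

* Sensitivity analysis: ordinal logistic regression on the same specification.
ologit willingness understand_use understand_risk control_use ///
    worry_infect worry_severe trust_expert trust_ai ///
    i.agegroup i.gender i.education
```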
Variance inflation factors (VIFs) were calculated for each linear regression model for the full, Hong Kong, and London samples to test for multicollinearity. None of the VIFs for the variables were above 5, indicating that there were no significant multicollinearity issues that would warrant reconsidering the validity of the model.
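In Stata, VIFs can be computed immediately after fitting a linear model; a minimal sketch, again with hypothetical variable names:

```stata
* Fit the linear model, then report variance inflation factors.
regress willingness understand_use worry_infect trust_expert trust_ai
estat vif
* A common rule of thumb treats VIF > 5 (or, more permissively, > 10) as a
* sign of problematic multicollinearity; none of our variables exceeded 5.
```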
4. Results
4.1. Sample characteristics
The total number of responses for the survey was 2774, with 1437 responses from Hong Kong and 1337 responses from London (see the Data Availability Statement for the aggregate data) (Li et al., 2025). In each city, 49.1% of respondents were shown the putative app for COVID-19 management, whereas 50.9% were shown the general respiratory health management version. After filtering out duplicates, incomplete responses, and "speeders" (i.e., those completing the survey in under 2 minutes), the number of valid responses was reduced to 2362 (see Table 1 for the sample's demographic profile). Forty more participants were excluded from the quantitative analysis for selecting "Other/Prefer not to say" for the gender or education questions, leaving 2322 data points for analysis (1308 from Hong Kong and 1014 from London). This response option was made available for inclusion purposes. However, because it does not distinguish between those who fall under the "other" category and those who preferred not to disclose, and because the small number of such responses prevented the regression model from converging, these 40 responses were excluded. It is worth noting that 35 of the 40 participants who chose "Other/Prefer not to say" for gender or education were from Hong Kong, suggesting that Hongkongers may be more averse to providing personal data than Londoners.
Table 1. Breakdown of the Hong Kong and London samples by age, gender, and education

a Excluded from regression analysis due to low incidence and ambiguity in categorisation.
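The response filtering described above can be expressed compactly in Stata. The following is an illustrative sketch; the variable names and category codes are hypothetical, not those of our actual cleaning script:

```stata
* Remove duplicate respondents, incomplete responses, and "speeders"
* who finished the survey in under 2 minutes (120 seconds).
duplicates drop respondent_id, force
drop if finished != 1
drop if duration_seconds < 120

* Exclude the sparse "Other/Prefer not to say" responses to the gender
* and education questions, which prevented the models from converging.
drop if gender == 3 | education == 5
```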
The age, gender, and education profiles of the samples do not exactly match those of the two cities' populations. For example, more men than women participated in the survey in both Hong Kong and London, despite Hong Kong having more women than men overall (Census and Statistics Department, 2024b) and London having a balanced ratio of men to women (Greater London Authority, 2019). To account for these demographic discrepancies, we incorporated the age, gender, and education variables directly into our regression analyses.
4.2. Survey responses by question category
Participants in both cities generally expressed confidence in their psychological capability to understand how their data is being used and what the associated privacy implications are. Nearly half of the respondents in each city “somewhat agree[d]” with the competency statements. Londoners indicated a higher level of confidence than Hongkongers, with over 35% of London respondents selecting the “completely agree” option compared to 19% of Hong Kong respondents.
Similar response patterns were observed in the physical opportunity question about whether participants would be willing to share data with a health app if they could control how their data is used. However, responses on social opportunities for app use were more mixed. Whereas over 60% of Hongkongers and 76% of Londoners indicated a positive inclination to disclose personal data through an app when prompted by a healthcare provider, responses were polarised regarding sharing data through an app with a government agency: over half of the Hong Kong respondents answered with "strongly disagree" or "somewhat disagree," whereas 69% of London responses indicated some level of agreement. Responses were more spread out for the acquaintance variable, although Hong Kong responses still skewed negative while London responses skewed positive.
For automatic motivations, London and Hong Kong respondents shared a similarly high level of concern about respiratory infections and a similarly low level of worry about potentially contracting severe symptoms, although Hongkongers' levels of concern were slightly lower than Londoners'. However, the degree to which participants in the two cities were comfortable with sharing data diverged visibly. Only 7% of Hong Kong participants strongly agreed with being comfortable sharing health data with a health app, compared to over 23% of London participants. The difference was even starker for location data: over 40% of Hong Kong participants expressed strong discomfort with sharing it, compared to less than 10% of London participants.
With respect to reflective motivations, we observed that personalisation was a driver of data sharing, with over 60% of Hong Kong participants and over 75% of London participants somewhat or strongly agreeing with the statement. We also determined that the source of personalised health advice mattered. The question as to whether participants would trust health advice from a medical expert received similarly positive responses. Meanwhile, the reception of AI-generated health advice (a proposed and increasingly practicable alternative or complement to expert medical advice) was much more mixed, as about 80% of the responses in both cities were split relatively evenly across the “somewhat agree/disagree” and “neither agree nor disagree” options. Figure 1 illustrates the full sample comparison between trust in expert advice and trust in AI-generated advice.

Figure 1. Comparison of trust in medical advice provided by medical experts and generated by AI.
Regarding the sector groups developing the app or having access to the app's data, however, the only commonality between the cities was that respondents were more wary of private companies overall. Otherwise, the Hong Kong respondents were much less likely to want to share their data across any of the scenarios, the most extreme cases being that over half of the respondents strongly disagreed with sharing data if private companies or government agencies outside of public health could access it. About 60% of Hong Kong respondents also chose "strongly disagree" or "somewhat disagree" for sharing data with a government health agency, regardless of whether the agency developed the app (see Figure 2[a]) or had access to the data as a third party (see Figure 2[b]). In contrast, about 60% of London respondents chose "strongly agree" or "somewhat agree" for the same questions. This indicates much greater resistance to general government access to personal data in Hong Kong than in London, where hesitation was only somewhat greater if the government agency accessing the data operated outside of public health.

Figure 2. Willingness by city to share data with a health app if a government health agency (a) developed the app or (b) had access to the data.
Respondents’ willingness to use personalised health apps is shown in Figure 3, with “somewhat willing” being the most popular response (790), followed by “not very willing” (631) and “very willing” (400). However, splitting the responses by city, we can see that more than 80% of the “completely unwilling” and “not very willing” responses are from Hong Kong, whereas the London responses account for 67% and 82% of the “very willing” and “completely willing” answers, respectively.

Figure 3. Willingness to use personalised health advice according to each city.
4.3. General regression results
Several predictors had a statistically significant effect on willingness to use a health app across all three regression models (see Table 2; p-values below 0.05 are in bold). One is the self-reported psychological capability to understand how the health app uses the person's data: a one-point increase in understanding on the Likert scale corresponded to an increase of 0.069 to 0.091 points in willingness. The second is the automatic emotional response of worrying about respiratory infections, as greater concern about respiratory health would motivate someone to use an app to manage or avoid such infections. This had a slightly greater effect, with coefficients across models ranging from 0.092 to 0.113. Some reflective motivations were also identified as factors in determining an individual's willingness to use the app. These included the willingness to share personal data for the purpose of receiving personalised health advice, and the extent to which an individual trusts the entity (i.e., medical experts or AI) that provides said advice. Notably, trust in health advice from a medical expert had the greatest consistent effect on willingness across all three models, with a coefficient 2–3 times higher than that of trust in health advice from AI.
Table 2. Linear regression results for Total, Hong Kong, and London samples, with willingness to use the app as the dependent variable

Note: Statistically significant p-values are in bold.
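To make the interpretation of these coefficients concrete: under the linear specification, the predicted change in the willingness score from a one-point change in a single predictor equals that predictor's coefficient. For self-reported understanding of data use, for example,

$$\Delta\widehat{\text{willingness}} = \hat{\beta}_{\text{understand}} \times \Delta x_{\text{understand}} \approx 0.069 \text{ to } 0.091 \quad \text{for } \Delta x_{\text{understand}} = 1,$$

that is, a shift of roughly 0.07 to 0.09 points on the 5-point willingness scale, holding the other variables constant (the subscript naming here is ours, for illustration).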
The regression models also shared some variables that were not statistically significant. One was the psychological capability to understand the privacy risks associated with sharing personal data with an app. The social opportunity variables in which an organisation, whether a healthcare provider or a government agency, requests that data be shared also did not appear to affect how much a person would want to use a health app. Who developed the app did not seem to change users' preferences either, except in the case of private developers in Hong Kong. Private companies being able to access the data did not have a statistically significant effect. Finally, whether the app was for respiratory illness in general or for COVID-19 specifically did not appear to alter people's willingness to use it.
The full sample model identified a few additional variables with statistically significant effects on participants' willingness to use personalised health apps. These included knowing someone who shared data with the app (social opportunity), being comfortable with sharing health and location data with the app (automatic motivation), worrying about severe respiratory symptoms (automatic motivation), and willingness to share data if any government agency can access it (reflective motivation). All these variables were positively correlated with willingness. Gender and education also had statistically significant effects: women were less likely to want to use the app than men, and individuals with a Bachelor's degree or above were less likely to want to use the app than their peers with or without a high school diploma. The ordinal logistic regression model returned only a few differences at the p-value thresholds of 0.05 and 0.001, with overlapping confidence intervals. This indicates a high level of agreement between the two models, suggesting that linear regression is sufficiently robust for our analysis.
4.4. City comparisons
In the overall model, there was a large difference between the two cities: being a Londoner rather than a Hongkonger corresponded to a 0.392-point increase in willingness to use a health app.
When comparing the two cities' individual linear regression models, there were a few points in common. The variables with statistically significant effects on willingness in both cities were understanding how apps use data (psychological capability), worrying about the risk of severe respiratory symptoms (automatic motivation), personalisation as an app feature (reflective motivation), and trust in medical advice from an expert or from AI (reflective motivation). The variables that were not statistically significant in either city were understanding privacy risks (psychological capability), having data requested by a healthcare provider or government agency (social opportunity), having a government health agency or public research institution as the app developer (reflective motivation), giving private companies access to user data (reflective motivation), whether the hypothetical app was for general respiratory illness or COVID-19 management, and the participant's highest level of education.
There were also notable differences across the two models. The Hong Kong regression model indicated that trust in general is crucial to the acceptance of personalised health apps. This included trust in acquaintances (social opportunity), trust in private app developers (reflective motivation), and trust in government health agencies' or public research institutions' access to data (reflective motivation). Three other variables were statistically significant in the Hong Kong model but not in the London model: comfort with sharing location data (automatic motivation); age, with the 40–49 and 60+ age groups indicating a higher level of willingness to use health apps than the 18–29 age group; and gender, although London's p-value for gender was 0.057. Meanwhile, only three variables were statistically significant in London but not in Hong Kong. These were the options for users to directly control how the app uses their data (physical opportunity), comfort with sharing health data (automatic motivation), and willingness to share data if a government agency outside of public health had access to the data (reflective motivation), although this last variable had a p-value of 0.074 in Hong Kong.
As with the general model, the ordinal logistic regression models for the two cities returned broadly similar results.
5. Discussion and policy implications
5.1. Implications for data governance
Our survey determined that tailored health advice is a desired feature of health apps, and that users are often willing to share their personal data to receive it. Clearly communicating the potential benefits of data sharing for personalised health advice and building digital health literacy (van Kessel et al., 2022) can help increase public acceptance. This could involve highlighting success stories and emphasising the potential for improved health outcomes.
However, the level of trust in the generated advice depends on whether the user trusts the provider, whether a medical expert or AI. This highlights the need for policies that enhance transparency over data and the algorithms that process it (Kernbach et al., 2022), or technical approaches that minimise the need for data to be collected or stored. That would require governments and app developers to implement robust data governance frameworks. Guidelines on data collection, usage, storage, and sharing are crucial, including specifying the purpose of data collection, minimising data collected to only what is necessary, and ensuring secure data storage (Morley et al., 2020; Nurgalieva et al., 2020). Open communication about data practices, independent audits, and mechanisms for redress can also help build public trust in institutions handling data (Budd et al., 2020; Li and Yarime, 2021).
Another consideration is that trust in health advice from a medical expert was not only higher overall than trust in health advice from AI, but also had a more significant effect on willingness to use a health app. As AI plays an increasingly important role in healthcare (Johnson et al., 2021), policies should address public concerns about the use of AI in health. It would be crucial to establish reliable standards for AI in health devices. Clear guidelines on the development, validation, and deployment of diagnostic and generative AI are necessary to ensure the safety, effectiveness, and ethical use of novel health technologies (Rajpurkar et al., 2022). Making AI algorithms more transparent and understandable to users (Shin, 2021) could help build public trust in health advice generated by AI. AI implementation also does not need to replace medical experts in the health app space; instead, experts can be heavily involved in the design of AI-powered health apps, use analytical AI as a supporting tool for diagnosis, and fact-check the advice generated by large language models (Dzobo et al., 2020). Transparency on how medical experts utilise AI-powered health apps could then facilitate trust-building with patients.
It would also be important to empower users to better understand and control the use of their personal data. As indicated by the linear regression analysis, willingness to use personalised health apps increases with people's perceived psychological capability to understand how health apps use their data, the physical opportunity to control exactly how the app uses data, and the availability of data access only to trusted parties. This could be achieved by designing public health apps with more granular data control options (A. R. Lee et al., 2024). Users should have the ability to easily choose what data they share and with whom, where possible. This could involve allowing users to opt in or out of specific data-sharing features and providing clear explanations of the implications of their choices (e.g., Kaye et al., 2015; Baker et al., 2016; Scoccia et al., 2020). It would also be helpful to explore innovative data governance models, such as data trusts or data cooperatives, to balance individual control over data with the need for data sharing for public health purposes (Hogan et al., 2022; Bartlett et al., 2024; Redhead et al., 2025). These models could allow individuals to collectively determine how their data is used, while ensuring that data is available for legitimate research and public health purposes.
At the same time, it is crucial to recognise that factors beyond app functionality and data governance affect app uptake, including the underlying social and political context. As demonstrated by the survey responses and the linear regression model, Hong Kong participants were much more hesitant than their London counterparts to share data with a health app, especially if a government agency or private company developed the app or had access to its data. This result is consistent with a greater level of scepticism in Hong Kong towards data sharing in general, as discussed in the Sample Characteristics section. Open-ended text responses from Hong Kong participants illustrated scepticism regarding the government's ability to respect citizens' privacy, a perceived lack of punishment for companies or government departments that leak personal data, and concern about the adequacy of Hong Kong's current data protection and privacy laws in the face of technological advancements such as Big Data, the Internet of Things, and AI. The Personal Data (Privacy) Ordinance (Cap. 486) in Hong Kong has not been fully updated to address the challenges posed by these emerging digital technologies, which could make people in Hong Kong concerned about the handling of their data and the protection of their privacy. In addition, some respondents directly stated their concern that providing personal data to the government might lead to direct surveillance, a sentiment also seen in earlier work on the response to government-created contact-tracing applications (Li et al., 2022). Given the political unrest and societal concerns observed in Hong Kong in recent years, it would be particularly important to build trust in the public- and private-sector institutions that handle sensitive data (Cole and Tran, 2022; Martin et al., 2022). The Privacy Commissioner for Personal Data (PCPD) could play a key role in enhancing transparency and accountability in data-handling practices (Chung and Zhu, 2024). As the government is keen to promote open data and cross-boundary data flows within the Greater Bay Area, which includes cities in mainland China, public trust in data governance will be critical.
There was also a gendered difference in the results, as women were overall less willing than men to use health apps. Women also tended to be less comfortable with sharing health or location data. Although we cannot draw causal inferences from our quantitative data, past studies on gender and data privacy corroborate our findings. For example, women may be more concerned about protecting anonymity and intimacy in health apps (Wilkowska and Ziefle, 2012), and they may be less willing than men to share sensitive information such as fingerprints and IP addresses (Sørum et al., 2022). The reasons for these behavioural differences may include higher levels of anxiety and risk aversion, as well as greater perceived risks and harms posed online and offline (Tifferet, 2019). More effort should be made to understand how digital health technology regulations can be better designed and enforced to address the privacy concerns of people of all genders. This is especially important given that current health apps marketed towards women may not satisfactorily comply with existing data privacy laws, leading to a gender gap in data protection (Alfawzan et al., 2022; Hammond and Burdon, 2024).
5.2. Study strengths and limitations
Our study benefits from several research design features. Firstly, we utilised robust theoretical models (COM-B and the BCW) to categorise barriers to and enablers of behaviour change. The variables that we chose allowed for a holistic analysis of the factors affecting app adoption, including app features, data implications, health considerations, and demographic characteristics. To improve generalisability, we recruited large samples in both cities and accounted for age, gender, and education in the statistical models to adjust for demographic differences between the samples and the populations. Our analysis also compares two cities, London and Hong Kong, to consider social and political effects on data governance. Finally, we applied robust methodological checks to the survey responses, including VIFs to test for multicollinearity and ordinal logistic regression to confirm the validity of the linear models.
Even with these design considerations, we must acknowledge the limitations of our study. For one, participants were self-selected by opting into the study, especially the Hong Kong participants who responded to an email invitation. We cannot eliminate the possibility that people with stronger, perhaps more extreme opinions were more inclined to participate. For another, our study prompted participants to consider their behaviour with a hypothetical app; their responses might differ if they were reflecting on their behaviour when using a real app. Moreover, the quantitative results from our survey do not provide in-depth insights into the underlying context, namely why participants chose specific multiple-choice answers.
We aim to investigate this further by conducting a thorough qualitative analysis of the open-ended text responses that we received at the end of the survey. We also encourage additional research on the social and political effects on the acceptance of digital health interventions, especially on the variables that we were unable to include in our model.
6. Conclusion
In this paper, we explored the drivers of health app acceptance in two cities. We determined that understanding and control of data use, comfort with data sharing, health concerns, data sharing for personalisation features, trust in medical advice, and acceptance of data access by different parties all had positive correlations with willingness to use health apps. The demographic factors that affected app acceptance were gender, with women being less likely to want to use health apps than men, and city, with Hong Kong citizens being less likely to want to use health apps than London citizens. We then identified app design and data governance considerations that can improve health outcomes while protecting users’ data privacy. These include persuading users about the benefits of data use for personalised health advice, enabling users to have granular control over data sharing, educating users on how apps use data and how data settings can be changed, and designing new social environments such as data trusts that can allow for collective decision-making on data management.
Data availability statement
Due to the sensitive nature of the raw data, and the assertion in our consent form that we would not share the participants' responses publicly, we have decided not to disclose the raw dataset to protect the participants. However, in the interest of research transparency, we have made the aggregate data for each question, including breakdowns by city, age, gender, and education, as well as our Stata code, available for review on Zenodo at https://zenodo.org/records/16637368. Please note that our Stata .do file was written in Stata/MP 18.5, so adjustments may be needed if another version of Stata is used.
Acknowledgements
The authors would like to thank our colleagues at The Hong Kong University of Science and Technology’s Division of Public Policy (Prof Kira Matus and Prof Donald Low) and University College London’s Digital Technology Policy Lab (Dr Jesse Sowell, Dr Irina Brass, Dr Andrew Mkwashi, and Dr Edison Bicudo) for providing thoughtful feedback on the project at the pilot stage.
Author contribution
Conceptualisation: V.L., H.P., M.Y.; Data curation: V.L.; Formal analysis: V.L.; Funding acquisition: C.W.; Investigation: V.L.; Methodology: V.L., H.P., V.A.; Project administration: C.W.; Resources: C.W., M.Y.; Software: V.L.; Supervision: C.W.; Validation: H.P., V.A.; Visualisation: V.L., H.P.; Writing – original draft: V.L., C.W., M.Y.; Writing – review & editing: H.P., V.A.
Funding statement
This project received funding from UCL Global Engagement under the title “Public perspectives on personal data use for personalised COVID-19 advice.” The sponsor had no role in the design of the study, the collection and analysis of data, the decision to publish, or the preparation of the manuscript.
Competing interests
H.W.W.P. provides or has provided consultancy on digital health evaluation for Flo Health Inc., Prova Health, and Thrive Therapeutic Software Ltd. He has PhD students in the field who are or were employed by, and whose fees are or were paid by, AstraZeneca, Patients Know Best, and BetterPoints Ltd.
Ethical standard
This study received research ethics approval from the Department of Science, Technology, Engineering and Public Policy’s Research Ethics Committee at University College London (reference number: 8349/014) and the Human and Artefacts Research Ethics Committee at The Hong Kong University of Science and Technology (protocol number HREP-2023-0268). This study is also registered with the Data Protection Office at University College London (reference number: Z6364106/2023/08/04 social research).