1. Introduction
As corporations across many sectors explore how generative artificial intelligence (Gen AI) can be deployed across different operational tasks and functions (McKinsey & Co, 2025), one area that is attracting interest is the use of Gen AI to produce corporate reporting (Blankespoor, deHaan & Qianqian, 2025; de Villiers et al., 2023), much of which is mandatory under companies and securities legislation, as well as under other legislation.Footnote 1 This article focuses on such reporting obligations in the UK and EU. Gen AI leverages the big data processing powers of machine learning systems, which have been developed as “specific” AI applicationsFootnote 2 in many industries, providing efficiencies in data organisation and structuring and in generating data-based outcomes such as analytics and predictions. Gen AI goes beyond specific AI in that it is able to create new narrative content through machine learning, based on sophisticated forms of word association and statistical technologies (Hacker, Engel, Hammer & Mittelstadt, 2025). Where specific AI may, for example, assist an equity financial analyst in organising and structuring data and generating a sell or buy recommendation, Gen AI can take this role one step further by writing up a presentation or an analyst’s report to accompany the recommendation (Zhou, Wang, Yilin & Yang, 2024).
In this light, the key corporate compliance task of mandatory reporting, which requires the processing of significant volumes of internal and external data and their subsequent assimilation into a coherent narrative report, could leverage the newfound prowess of Gen AI. However, there are significant legal risks attached to mandatory reporting. First, defects in reporting can attract market discipline by investors and/or their civil litigation. Second, regulatory enforcement, fines and reputational harm can also follow defective disclosures. In view of such legal risks, can Gen AI be appropriately used for mandatory corporate reporting?
There is an inexorable trend towards some deployment of Gen AI for the purposes of corporate mandatory reporting, and increasingly, investors also utilise Gen AI to process and analyse corporate disclosures themselves (de Villiers et al., 2023; Jain et al., 2023; Financial Reporting Council, 2022). Regulators keen on RegtechFootnote 3 are also exploring how Gen AI can be used to process and analyse regulatory reporting in order to assist in supervisory work (de Villiers et al., 2023; Hope, 2025). In this manner, the legal risks surrounding the use of Gen AI in mandatory reporting are not merely firm-centric but should be understood through what this article terms a “reporting chain” approach. This means that the initially recognised legal risks for each firm (firm-centric risks) must be mapped against the behavioural tendencies of the firm’s reporting recipients: principally investors, their service providers and regulators, where mandatory reporting is concerned. How the reporting chain perceives and responds to the firm’s reporting output defines the nature and extent of the legal risks of disclosure. The lens of the reporting chain sheds new light on the nature and extent of firm-centric legal risks, with implications for how these risks should be managed.
Section 2 first provides a brief overview of the expanding scope of mandatory corporate reporting, particularly the rise in narrative reporting, and the incentives for turning to technology for the efficient production of such reporting. Section 2.1 discusses trends in the deployment of Gen AI in the corporate universe and appraises its observed strengths and weaknesses for narrative corporate reporting. Section 3 distils the firm-centric risks of using Gen AI in mandatory reporting and reviews the development of firm-centric risk management in this area. Section 4 then introduces the reporting chain approach and suggests that chain entities’ own use of Gen AI in processing and analysing corporate reports can itself change the nature and extent of firm-centric risks. This section posits that risks can be heightened and sharpened under certain behavioural patterns in the reporting chain, but the analysis also offers insights into how such risks can be mitigated. Section 4 also offers policy proposals for proactive regulatory supervision and testing of Gen AI systems used for mandatory corporate reporting. Section 5 concludes.
2. Trends in narrative corporate reporting
Companies are subject to transparency and mandatory reporting on various matters, as the privilege of incorporation entails certain expectations of accountability (Villiers, 2007), primarily to shareholders, but increasingly to regulators and stakeholders. The intensity of reporting also increases with companies’ quoted or traded status, as they meet the need to keep securities markets informationally and price efficient (Fama, 1970; Gilson & Kraakman, 1984). Corporate reporting has traditionally been focused on financially material information, targeted at shareholders’ interests, and standardised for disclosure as financially framed items subject to accounting standards.Footnote 4 However, since the 1990s, the US Securities and Exchange Commission’s requirement of a mandatory “Management Discussion and Analysis” (MD&A),Footnote 5 which is focused on material information of a narrative nature, has ushered in an inexorable trend of mandatory narrative reporting requirements for quoted or traded companies in many leading financial jurisdictions, including the EU and UK. The UK’s first mandatory equivalent of an MD&A was enshrined in section 417 of the Companies Act 2006, although this has now evolved into multiple narrative reporting requirements in UK company law today.
Material information of a narrative nature could pertain to matters such as business strategy, risk discussion and forward-looking information, all of which are difficult to quantify under accounting standards, but which are nevertheless relevant and important to a company’s likely future performance. Non-financial information, which may comprise quantifiable and qualitative elements, has also come to be perceived as increasingly material to companies’ future performance. Examples include companies’ exposure to climate risks, which may affect their physical assets, future market positioning and profitability, as well as their exposure to social and reputational risks, such as involvement in human rights violations and possible entanglement in civil litigation. There is also a trend towards more explicit integration of non-financial material matters in institutional investment management (Freshfields Bruckhaus Deringer, 2005; Law Commission of England and Wales, 2014; Chiu, 2025; but see Cifrino, 2025)Footnote 6 and many savers’ proclivity towards pro-sociality in their investments (although to varying extents (Brandon, Rajna & Krüger, 2018; Delsen & Lehr, 2019)) has also inspired the explicit labelling of investment products with pro-social claimsFootnote 7 in order to attract such savers. In the EU and UK, there is a regulatory expectation that such labelled investment products “walk the talk” and that investment managers engage with investee companies’ sustainability performance to the extent consistent with the claims made for the labelled product.Footnote 8
The demands for corporate reporting, especially in relation to non-financial matters, which are susceptible to a mixture of quantifiable and qualitative reporting, have increased sharply in the last decade in the EU and UK. In 2014, the EU introduced mandatory reporting for quoted and traded companies with at least 500 employees in relation to “environmental, social and employee matters, respect for human rights, anti-corruption and bribery matters,” for the purposes of their management’s report, in order to shed light on both corporate performance and the “impact” of their activities.Footnote 9 There was some ambiguity as to whether such a reporting requirement pertained only to material non-financial information (i.e. single materiality, relating to the company’s performance), or whether the reporting of such information should reflect “double materiality,” i.e. the corporate performance or impact relating to the environmental, social, human rights or anti-corruption matter as such (Chiu, 2017). The open-ended framing of the Directive left corporations to decide how they would report, although the provision can be interpreted to contain expectations that companies report their due diligence policies and risk findings in terms of their double material impact, and not just the financial significance of those matters.Footnote 10
The Directive was transposed in the UK in the form of the directors’ “Strategic Report” under the Companies Act,Footnote 11 whose focus was on single materiality.Footnote 12 Nevertheless, corporate scandals in the UK, such as the mismanagement and collapse of the large private retailer BHS, leaving large pension deficits and job losses,Footnote 13 and sustained criticisms of pay inequalities in the corporate sector, led to increasing regulation of corporations through mandatory disclosure of matters pertaining more to their reputational and social aspects. The “section 172” statement has become mandatory for large companies,Footnote 14 whether or not publicly traded or quoted, in relation to how directors have considered a range of stakeholder-related matters in the discharge of their duties. Further, pay ratio reporting became mandatory under the Companies Act in 2018,Footnote 15 while gender pay gap reporting was enshrined in the Equality Act.Footnote 16 The UK’s experience of the Sports Direct scandal involving inhumane working conditions (House of Commons BIS Committee, 2016–17), and the UK’s endorsementFootnote 17 in 2013 of the UN Human Rights Council’s adoption of the Guiding Principles on Business and Human RightsFootnote 18 ultimately culminated in mandatory reporting imposed on a range of private and public companies in relation to due diligence in their supply chains regarding modern slavery.Footnote 19
The range of mandatory narrative reporting by companies, usually large and/or publicly quoted and traded, has expanded in response to social perceptions that companies need to be governed responsibly (Choudhury, 2016), while regulators try to avoid intrusive means of behavioural regulation (Chiu, 2018). Recent EU legislation now requires a comprehensive range of mandatory non-financial disclosure in the Corporate Sustainability Reporting Directive,Footnote 20 which supersedes the 2014 measure. Furthermore, although implicit in the 2014 Directive is the need for due diligence (Samuel, 2018; Sherman, 2022) to be carried out in order to make meaningful reporting, the EU has now gone further to embrace an explicit requirement for corporations of a certain economic size and impact that are either incorporated in or sell to European marketsFootnote 21 to comply with due diligence obligations with a view to taking proactive action to prevent harm (de Oliveira & Parlak, 2025; McCullagh, 2024),Footnote 22 and not just to inform reporting. Although new deregulation initiatives in the EU to boost economic growth are paring back the Corporate Sustainability Due Diligence Directive,Footnote 23 due diligence obligations would remain for a narrower range of companies in respect of their immediate suppliers, with discretion to widen the supply chain scope in the future. The Brussels effect (Moon, 2024) of the Corporate Sustainability Due Diligence Directive can still create incentives for corporations to leverage artificial intelligence capabilities, particularly Gen AI, to fulfil these requirements.
It may be argued that companies have resourced themselves and complied with the reporting requirements since they came into force, so would Gen AI make a difference to the compliance mechanisms already in place? Furthermore, mandatory disclosure entails legal risks. Where the disclosure is investor-facing, such as mandatory reports under companies and securities legislation, investors may bring civil actions for inaccurate and misleading disclosures or omissions.Footnote 24 All regulatory reporting also arguably attracts the risk of regulatory enforcement for misdisclosures. The UK Financial Conduct Authority has penalised companies for defective securities market-facing disclosures. Notable episodes include a significant fine levied upon Metro Bank in 2022Footnote 25 for Listing Rules breaches, and upon Tesco in 2017 for a defective securities disclosure made in 2014,Footnote 26 which also included civil restitution.Footnote 27 Theoretically, other relevant regulators can also take enforcement action for poor or defective mandatory disclosures required under other legislation, for example, the Home Office for companies’ modern slavery statements. Hence, it may be queried whether having Gen AI automate mandatory disclosures would be consistent with managing the compliance risk of mandatory disclosure. Instituting human responsibility to check the sufficiency and accuracy of mandatory disclosures would seem fundamental to legal compliance.
However, Gen AI boasts certain capabilities that may improve the quality and sufficiency of mandatory disclosure. The incorporation of Gen AI, perhaps not to the extent of making human production of mandatory reporting redundant, could also introduce new efficiencies, especially since explicit requirements for due diligence imply the need for greater data analytical capabilities on the part of corporations. Further, even if corporations do not officially integrate Gen AI into operational systems and workflows, it has been reported that many employees turn to Gen AI periodically to assist in tasks and gain efficiencies.Footnote 28 Hence, it may be beneficial to have explicit policies to manage the risks of employees using Gen AI in the shadows. It is not inconceivable that, despite the legal risks of mandatory disclosure, the use of Gen AI will intensify in the corporate production of mandatory reporting, including in the implementation of due diligence procedures that form part of the process of generating corporate reporting. The next section discusses the potential achievements and drawbacks of Gen AI for the production of corporate reporting, based on a more broadly curated literature review of Gen AI in narrative production.
2.1 Gen AI and narrative corporate reporting
First, Gen AI has been observed to be useful for organising and structuring raw data or information, helping companies obtain a set of relevant and structured data for their due diligence purposes and analyses (Blankespoor et al., 2025; Financial Reporting Council, 2019; Liu & Wongsosaputro, 2023). For example, companies are likely to collect and store many different raw data points in relation to stakeholder relations, such as customer complaints, engagement with stakeholders such as non-governmental organisations, political lobbying and donations, and charitable work, and such internal data would also need to be mapped against external data, such as favourable or unfavourable media or social media coverage of the corporation, in order to organise information that is salient for reporting, for example, for the purposes of the company’s section 172 statement on stakeholder relations. AI systems have extensive data analytic capabilities and have been trained on publicly available big data. Gen AI systems are arguably “supercharged” machine learning systems, as they are trained on even greater quantities of data in order to generate word associations based on ever-more sophisticated combinations of proximity and probability (Kai, 2024; McKinsey & Co, 2024). Taking another example, in relation to a company’s environmental impact and performance, Gen AI can be usefully deployed to organise and structure company-related data collected in internal repositories, mapped against publicly available information on environmental impacts (Brière, Keip, Berthe, Tegwen & Murad, 2024), in order to inform companies’ due diligence processes that can then feed into action and reporting.
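To make the stakeholder-relations example concrete, the sketch below illustrates, in Python, how a firm might route raw internal records into reporting categories ahead of drafting a section 172 statement. It is a minimal sketch under stated assumptions: the category list is hypothetical, and llm_complete is a placeholder for whichever chat-completion endpoint the firm has procured, not any particular vendor’s API.

```python
# Illustrative sketch only: routing raw internal records into reporting
# categories ahead of drafting a section 172 statement. The category list
# is hypothetical, and llm_complete stands in for whichever chat-completion
# endpoint the firm has procured (e.g. under a zero-retention subscription).

CATEGORIES = [
    "customer complaints",
    "ngo engagement",
    "political lobbying and donations",
    "charitable work",
    "media coverage",
    "other",
]

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to the firm's approved Gen AI endpoint."""
    raise NotImplementedError("wire up the procured model here")

def categorise(raw_item: str) -> str:
    prompt = (
        "Assign the following internal record to exactly one of these "
        f"reporting categories: {CATEGORIES}. Answer with the category only.\n\n"
        f"Record: {raw_item}"
    )
    answer = llm_complete(prompt).strip().lower()
    # Fall back to 'other' so that malformed model output never silently
    # introduces a new category into the reporting repository.
    return answer if answer in CATEGORIES else "other"
```

The constrained prompt and the fallback to a closed category list reflect the point made above: the system is used to organise and structure data for human-led due diligence, not to author conclusions unsupervised.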
Empirical literature has documented vast improvements in artificial intelligence systems in relation to sorting, organising and producing structured data from raw data inputs for various professional uses. Such uses include organising supply chain information (Welsen & Sulkowski, 2024), organising raw accounting information for the audit process (Blankespoor et al., 2025), producing legal and regulatory updates tailor-made to different industries (Ioannidis, Harper, Quah & Hunter, 2023), determining salient ESG information for single and, increasingly, double materiality (Jain et al., 2023), and even genre identification for literary works (Jenner et al., 2025), where the textual writing is even less technical in nature for machine learning systems to organise. Furthermore, some empirical research has found that AI systems’ consistency in their sorting, organisational and structuring results is highly robust and remains constant across many runs (Wang & Wang, 2025; Bui & Barrot, 2025).
Next, Gen AI can be very useful for generating the corporate report itself. Gen AI has been empirically observed to be capable of carrying out the analysis needed on structured data and presenting it in a narrative format suitable for investors’ and regulators’ consumption. Corporations need not regard Gen AI’s product as the final deliverable, but the task can arguably be completed by Gen AI as a “first draft.” Empirical research, for example, has shown that Gen AI is able not only to organise and structure the many raw data points relevant to an equity analyst’s research, but also to “perform” the analysis and produce a report (Zhou et al., 2024), the equivalent of an equity analyst’s research report on particular securities to buy or sell. This is based on the machine’s learning of vast quantities of analyst reports. Gen AI’s autonomous products also seem highly competitive with those of human equity analysts (Zhou et al., 2024). Much empirical research has also commented on the superior writing produced by Gen AI systems compared with human writers, in relation to the clarity and logical structure of writing (Wihlidal et al., 2025), the salience of content (Zongxiao, Dong, Yaoyiran & Shi, 2025), as well as general readability tailored to an intended audience (Wihlidal et al., 2025). Gen AI has also been increasingly relied on to improve expression and writing, such as to improve sentences, correct writing errors and enhance presentability (Andersen et al., 2025). In this manner, there is relative confidence, surveyed in the empirical literature, that Gen AI is suitable for deployment in generating both quantitative and narrative corporate reports (Blankespoor et al., 2025; Krause, 2023b; de Villiers et al., 2023; Financial Reporting Council, 2019).
Third, Gen AI can be specifically used by corporations to identify the locations of ESG risks in a forward-looking manner, and Gen AI can be prompted to make suggestions as to how corporations can deal with them. In this manner, Gen AI is not only used to assist in compliance with due diligence and reporting obligations, but can also be actively used for forward-looking risk management (Chambers & Martin, 2022), feeding into subsequent rounds of due diligence and reporting implementation. AI systems have been extensively used for predictive analytics, from predicting loan default by borrowers (Langenbucher, 2020) to the securities price performance of various companies (Kim, Muhn & Nikolaev, 2024). By processing corporate information in relation to the ESG risks that are increasingly required for narrative reporting, Gen AI systems can similarly be trained to predict the hotspots of ESG risks for companies. Such capabilities already appear to be deployed by investors (Financial Reporting Council, 2019; Jain et al., 2023), who need to analyse vast quantities of corporate reporting and publicly available information to determine their allocations. Corporations may be able to leverage Gen AI for more proactive forms of managing ESG risks that may affect financial performance and/or their reputations. Further, empirical research (Eisenreich, Just, Gimenez-Jimenez & Füller, 2024) has found, in another domain, that Gen AI is able to offer suggestions for action or reform when asked for solutions to a specific problem, even if some suggestions may be impractical. It is not inconceivable to use Gen AI to suggest how certain identified ESG risks may be abated or mitigated, thereby assisting in corporations’ broader risk management processes.
These capabilities make Gen AI a strong candidate for some form of adoption to assist in mandatory corporate due diligence and reporting, especially narrative reporting. How corporations would integrate Gen AI likely depends on the risks they perceive in connection with its deficiencies. The next section explores these risks and how companies are likely to manage them as individual firms.
3. A firm-based approach to risk management of Gen AI in corporate reporting
Although Gen AI offers many advantages, from data organisation and analysis to narrative production, there are three key risks for corporations looking to rely on Gen AI for producing their due diligence findings and narrative corporate reporting. First, corporations may be exposed to legal risks for any misdisclosures, including Gen AI-generated misdisclosures. Second, there is a potential for copyright infringement on the part of corporations, as their disclosure products could be similar to or based on copyrighted material processed by the AI system without the copyright holder’s consent. Third, there are privacy risks for the reporting corporation. Although privacy risks in relation to Gen AI have often been discussed in relation to the use of data by corporations without data subjects’ consent,Footnote 29 in the context of corporate reporting, the privacy of the corporation itself may be more at stake. AI systems’ outputs have also often been associated with fairness and discrimination risks (Yap et al., 2022). In the context of due diligence implementation and narrative reporting, this article regards these risks as secondary to the compliance obligations, but they could surface in relation to risk management steps recommended by Gen AI. If discriminatory biases in big data or training data are entrenched, AI systems could recommend immoral actions.Footnote 30 Although such outputs would cause concern, this article leaves this issue to be dealt with elsewhere, focusing on the narrative reporting paradigm in terms of risks for the reporters and recipients of reports.
The most frequently cited criticism of Gen AI’s output is that it can produce hallucinations (Hacker et al., 2025; Remolina, 2024), i.e. fake information presented as fact, as a result of mistaken correlations processed by the system in relation to data and word production. In terms of mandatory corporate reporting, this would result in misdisclosures and/or omissions, attracting potential legal risk in the form of regulatory enforcement and civil enforcement for misreporting. Such civil enforcement can be carried out by investors through securities litigation,Footnote 31 or through claims in common law misrepresentation causing economic loss,Footnote 32 where the reports are not covered within the scope of securities litigation. Further, other stakeholders may be able to bring actions for misrepresentation depending on the nature of the cause of action. For example, consumer stakeholders can sue for false advertising where incorrect corporate reporting affects the integrity of companies’ products and consumers are thus mis-sold (Pereira & Giovana, 2024).
In a firm-based context, it can be argued that such hallucinations can be mitigated if there is human oversight of Gen AI’s disclosure product before it is disseminated. Human responsibility to check the inferences and legal implications of language, as well as the content of disclosure, could help to gatekeep the externalisation of hallucinations. This form of risk management also strikes a good balance in terms of preserving human roles in compliance tasks and jobs. However, corporations would have to address the potential behavioural challenge of mechanistic human reliance on Gen AI (Schemmer, Hemmer, Kühl, Benz & Satzger, 2022; Li, Lu & Yin). Besides automation bias (Kahn, Probasco & Kinoshita, 2024), which is the human tendency to over-rely on and trust an automated system, empirical research has found that Gen AI in particular provides a pseudo-social context of “conversation” with users, acting as a “personal assistant” and inducing a form of crutch reliance, or even behavioural reliance close to addiction (Yankouskaya, Liebherr & Ali, 2025).
Mechanistic reliance is not merely a behavioural issue based on the human tendency to take the path of least resistance. As the informational context for corporate disclosure becomes ever more expansive and complex, such that Gen AI has been recruited to assist, it may become practically difficult for human overseers to stay on top of this information matrix in order to check Gen AI’s output. Nevertheless, empirical research has also found that human interactions with AI systems are not always doomed to automation bias. Experiments have found that human users could use Gen AI systems reflectively or with caution, i.e. being more critical of Gen AI’s output, detecting errors and using such output judiciously (Hou, Zhu & Sudarshan, 2025). Such behaviours can be trained. If users become more AI literate, and hence aware of the limitations of these systems, or are empowered to think more critically in general, these characteristics can moderate users’ reliance behaviour (Hou et al., 2025). There is therefore a need to develop suitable training for humans acting as overseers of AI systems. In addition, corporations should subject systems to regular testing in order to develop categories of, and checklists for, common errors. Human overseers can then perceive the limitations of AI systems as a norm, and also benefit from a more structured approach to verifying Gen AI’s outputs.
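As an illustration of such structured verification, the sketch below flags sentences in a Gen AI-drafted report that contain figures which cannot be traced to verified source data, so that human overseers review those sentences first. It is a minimal sketch under stated assumptions: the regular expression, the sentence splitting and the notion of a pre-verified source extract are simplifications, not a production hallucination detector.

```python
# Illustrative sketch only: a crude screen that flags sentences in a Gen
# AI-drafted report containing figures that cannot be traced to verified
# source data, so human overseers review those sentences first.
import re

FIGURE = re.compile(r"\d{1,3}(?:,\d{3})*(?:\.\d+)?%?")

def extract_figures(text: str) -> set[str]:
    # Capture integers, thousands-separated numbers, decimals and
    # percentages, e.g. "4,100", "3.5", "12%".
    return set(FIGURE.findall(text))

def flag_unsupported(draft: str, verified_sources: list[str]) -> list[str]:
    supported: set[str] = set()
    for source in verified_sources:
        supported |= extract_figures(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        if extract_figures(sentence) - supported:
            flagged.append(sentence)  # route to the reviewer's checklist
    return flagged

# "4,100" in the draft does not appear in the verified extract, so that
# sentence is flagged for human verification before publication.
draft = "Emissions fell 12% to 4,100 tonnes. Headcount rose to 2,300."
sources = ["Ledger extract: emissions 4,150 tonnes, down 12%; headcount 2,300."]
print(flag_unsupported(draft, sources))
```

A screen of this kind does not replace human judgment; it merely prioritises the overseer’s attention, consistent with the checklist-based approach suggested above.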
Further, it is arguable that corporate leadership should have the responsibility of clarifying the role of human oversight and ensuring that human overseers maximise the potential of Gen AI while mitigating risk. Boards’ acknowledgement of the need to integrate AI risks and opportunities into decision-making at the leadership level is well documented (KPMG, n.d.). How to implement this, however, is less well understood (Orbach & Boettcher, 2023). The operational and procedural levels of integrating Gen AI in corporate reporting, overseeing such outputs and determining how such outputs are to be used should be guided by high-level strategic and risk management frameworks approved at Board level (Cunningham, Maskin & Carlson, 2023). Ideally, corporate Boards need to attain adequate training on the nature of Gen AI and the systems the company procures or commissions, and maintain a high-level committee to design frameworks for risk management (Orbach & Boettcher, 2023), not only for the purposes of addressing legal risks from the firm’s point of view (i.e. the three main information-related risks highlighted in this section) but also for the purposes of managing broader reputational or ethical risks (Chiu & Lim, 2020–21).
Such Board-level determinations can then cascade into communications (Zarifis & Yarovaya, 2025) for executive teams to operationalise risk management processes in relation to procuring systems, ongoing monitoring and testing of systems, and instituting oversight capacity and what to monitor, in order to critically assess Gen AI’s output and other problems. Furthermore, where corporations supply to EU markets, even if they are not domiciled in the EU, the EU’s pioneering AI ActFootnote 33 applies and demands that corporate users comply with obligations relating to the high- or low-risk systems they deploy. The frameworks and procedures for compliance with the AI Act therefore need to be designed with those compliance obligations in mind (Passador, 2025). Managing the risks of misdisclosures in Gen AI deployment for corporate reporting thus involves Board-level leadership on risk management frameworks as well as executive and operational implementation of those frameworks. Firms design such frameworks in an experimental and dynamic environment, as AI systems’ performance and problems are not necessarily easily determined: firms rely not only on in-house expert oversight, but also on vendors’ communications and ongoing support, and face the constant need to ensure that data is relevant, valid and legitimately used. The use of Gen AI in corporate reporting may seem intuitive, but it crucially needs to be supported by a comprehensive framework of decision-making in a firm- or enterprise-wide manner (Sriram & Harish, 2022), from Board-level leadership to executive-level risk management and compliance.
The next issue corporations need to address is the potential leak of confidential or private information, as firm-based information can be fed into Gen AI systems in order to generate structured data, analyses and reporting (Krause, 2023b; Needham, 2025). This risk is particularly pronounced if employees use Gen AI in the shadows, relying on publicly available large language models like ChatGPT or Claude while sharing firm-based information that may be private and confidential. The solution may not be to discipline or terminate employees as cases arise; rather, the firm may consider explicitly embracing Gen AI under a formal policy. This approach may be superior for the purposes of instituting the right culture at firms and harnessing the benefits of new technology. Some Gen AI vendors provide subscriptions that meet customers’ privacy needs, such as “zero retention” of their shared information.Footnote 34 Further, researchers have found that tailor-made large language models that are not trained on as vast an amount of data as Gen AI systems, but are trained specifically on industry-specific data for specific purposes, can perform more accurately than a general-purpose Gen AI system (Hajikhani & Cole, 2024; Buckmann & Hill, 2025). Corporations can therefore consider commissioning smaller-scale, less expensive but effective machine learning systems for their own data organisation and analytics, such as for mandatory reporting, which would safeguard their privacy. The robustness of these systems may be enhanced, particularly if they are kept largely offline and protected by security barriers.
Finally, corporations run a risk of copyright infringement if their Gen AI-produced narrative report is alleged to be similar to or derivative of other firms’ reports. As Gen AI processes firm-based data and maps it onto a vast amount of publicly available or retrievable data, a firm’s reporting output can come to resemble another’s, particularly in the same industry. Empirical research finds that many firms use the same templates and boilerplate language in narrative reporting of ESG matters (Lin, Shen, Wang & Yu, 2024), and it is queried whether Gen AI, learning from these precedents, would also converge upon very similar reporting products. Further, developers of Gen AI may be using copyrighted material for training (Pillai, Vishnu & Matus, 2024), and although this is regulated explicitly under the EU Artificial Intelligence Act,Footnote 35 the position is less clear in many other jurisdictions. In either case, a ripe context for copyright infringement allegations arises. Following such allegations, regulators, investors and stakeholders could also allege the disclosures to be inaccurate or misleading if they excessively resemble another firm’s, as this calls into question the extent to which they reliably reflect the firm’s own position.
Firms could manage the potential copyright infringement risk by restraining or discouraging employees from using Gen AI models like ChatGPT or Microsoft Copilot without an arrangement that is bargained with the firm’s needs in mind. Firms may wish to enter into negotiated subscription arrangements that secure disclosures from developers regarding the nature of their training data, or they could opt for commissioning bespoke systems whose training data is subject to the firm’s curation and supervision, such as internal data and even synthetic data that would not be in breach of copyright (Tkachenko, 2023). In bespoke arrangements, it is also possible to shift the risk of copyright infringement to the AI system developer, in order to constrain the developer’s misbehaviour in using copyrighted information without consent. This is arguably not unfair, as developers enjoy superior expertise and knowledge of how commissioned systems are designed.
In sum, taking a firm-based perspective on Gen AI risk management in the area of mandatory corporate reporting, firms need to engage at all levels of risk management and to develop explicit processes and policies for adopting and integrating Gen AI into their operations. They would also likely harness benefits while mitigating risks by commissioning their own systems or designing tailor-made subscriptions, instead of relying on generic but powerful systems such as ChatGPT or Microsoft Copilot. However, a firm-based perspective on Gen AI risk management is not the complete picture. The next section explores how firm-based risks should be better understood through the lens of the reporting chain, which also highlights collective risks that a firm-based approach can miss.
4. A reporting chain approach to managing Gen AI risks for mandatory corporate reporting
Mandatory corporate reports are made to hold corporate management to account, and can be scrutinised by shareholders/investors, regulators and other interested third parties. How these stakeholders perceive and engage with the risks of Gen AI is also relevant to, and should shape, corporate managers’ understanding and management of Gen AI risks for the purposes of corporate reporting. For example, if investors, particularly institutional investors, also use Gen AI to process and analyse volumes of corporate reports, and/or prepare reports themselves for accountability to their end-investors/beneficiaries, the penetration of Gen AI through the reporting chain can change the nature of legal risks for companies in terms of how Gen AI may shape investors’ market discipline of their investee companies. Similarly, the regulatory risks for companies using Gen AI in their corporate reporting processes can also be reshaped or redefined if regulators, as recipients of these reports, process and analyse them using Gen AI for supervisory purposes.
There are three key groups of actors in the reporting chain whose behaviour may shape and redefine the nature of legal risks associated with mandatory corporate reporting using Gen AI.
First, shareholders/investors of the reporting company are staple consumers of corporate reports. This universe not only covers institutional shareholders and investors, such as asset owners like pension funds and insurance companies, but also their service providers, including asset managers, the research analyst industry, financial journalists, proxy advisers for institutional voting, rating agencies and infomediaries. This universe also covers brokerage firms servicing both the institutional and the direct retail trading market. Many intermediary actors in the financial sector perform their functions based on intermediating information regarding investee subjects and assets, and would likely perceive the prowess of Gen AI in processing and analysing information, as well as generating infomediation outputs, to be beneficial in one way or another to their business models.
Empirical research has found increased uptake of Gen AI in the asset management industry, particularly in relation to analytical processes to generate strategies for yield, as well as in wealth management and financial advice (Ernst & Young, 2023). An Aviva study also found that two-thirds of UK brokerages have implemented Gen AI to deliver more personalised recommendations to customers (Aviva, 2025). Research analysts, credit rating agencies and ESG assurance services (Zhou et al., 2024; Moody’s, n.d.; Nichole, Kim, Dai & Vasarhelyi, 2024) have been reported as leveraging Gen AI, and investor relations management services, which include proxy advisory services, can also be enhanced with Gen AI tools (Broadridge, n.d.). The intermeshing of AI into fund management is rising in intensity due to the analytical demands of volumes of data, increasingly perceived as essential to investment management (Hedges, 2025).
Next, securities regulators are also staple consumers of corporate reports, but for the more particular purposes of supervision. Regulators are more interested in detecting red flags, potential misdisclosures and omissions in order to police well-functioning securities markets. Regulators could be keen to adopt Gen AI to process and analyse the increasing reams of regulatory reporting that corporations are being asked to produce (see Section 2). The European Central Bank, for example, as bank regulator to key Eurozone banks, has reported an increased use of machine learning tools to process and analyse volumes of raw data, such as raw loan data from banks, in order to supervise credit quality issues (Hope, 2025). Key securities regulators, such as the US SEC and the UK Financial Conduct Authority (FCA), are also adopting Gen AI. The SEC is not specific about particular use cases but emphasises that its use of AI is subject to proper governance.Footnote 36 The FCA is more specific about the use of AI tools in combating financial fraud and scams in particular,Footnote 37 and has also appointed a new Chief Data and Intelligence Officer as part of its governance framework. It is unclear whether securities disclosures are also being processed through Gen AI tools for the detection of defects. As regulators are keen to demonstrate that they are risk aware, and are monitoring their regulated firms’ Gen AI use where it impacts upon regulatory objectives such as consumer protection and financial stability, their own use of Gen AI could be more nuanced or conservative, but it nevertheless appears to be an inexorable trend.
Finally, with Gen AI capabilities available to the general public subscribing to ChatGPT, Claude, Gemini, Microsoft Copilot or Deepseek, increasing groups of informal social media finfluencers can be mobilised, capitalising on a trend already observed (IOSCO, 2025; WeForum, 2024), purporting to steer the general public with the results of their financial analyses parsed through Gen AI. Finfluencers can generate lucrative opportunities through sponsorship rewards and referral fees, besides enhancing a popularity that can be used as social capital for other entrepreneurial endeavours. Gen AI in particular can enhance finfluencers’ visual appeal, help create media for engaging with the public, and supply data analytical capabilities. The capabilities of Gen AI may also help many of the “antiskilled” or “unskilled” finfluencers (Kakhbod, Kazempour, Said, Dmitry & Schuerhoff, 2023; Mölders et al., 2025) to become more skilled and actually able to make profitable recommendations for followers. The traditional reporting chain for corporations, i.e. investors and financial intermediaries as well as regulators, needs to include these new self-branded infomediaries and influencers, as the way they mediate information for general public consumption can affect the well-functioning of securities markets.
4.1 How the reporting chain changes the nature of legal risk for Gen AI-enabled corporate reporting
The industry for Gen AI development features a handful of key developer companies, such as OpenAI, Microsoft, Google, Meta, Anthropic and Deepseek. Developers need significant resources to train and develop these models, and the industry presents natural barriers to entry for entrepreneurs without such capital and resource advantages. The concentration of developers also means that users may converge upon one or a few publicly available models for adoption, these being more cost-effective than commissioning bespoke models. In this manner, it is questioned whether a form of analytical homogenisation can occur throughout the reporting chain, meaning that users in the whole reporting chain are led towards similar analytical conclusions when they look for companies’ financial or ESG risks, or short- or medium-term profitability, even if their purposes differ. The Bank of England and FCA broadly warn of such “concentration risks” in terms of whether errors, misimpressions or blind spots introduced by Gen AI’s analytical results would become amplified (Financial Policy Committee, 2025). A potential misdisclosure in a corporate report, if presented by Gen AI as established fact, can be perpetuated in users’ analytical results wherever that “fact” is relevant. In that manner, a systemic error can be amplified.
If a narrative report produced by a company contains a misleading disclosure, perhaps due to an incorrect inference or assertion (broadly called a “hallucination”), and the reporting chain uses the same Gen AI model to analyse the report, the error or misleading disclosure may not be discovered as such by the same Gen AI model, which has treated or categorised that data as fact or as being correct. In this manner, it is arguably more likely that convergent use of the same publicly available Gen AI models can create systemic common blind spots and perpetuate errors that “monitoring” roles are unable to detect if they rely on the same Gen AI models for analysis. This would also apply to finfluencers using the same Gen AI models for information analysis. Both market discipline roles and supervisory detection can be adversely affected. The perpetuation of such blind spots in intermediating corporate disclosures can result in securities mispricing for companies, and finfluencers can play a role in encouraging herding amongst their retail followers. If there is a subsequent trigger that leads to price correction, that correction may be abrupt and sudden, as markets and regulators “did not see it coming.” It is questioned whether the pervasive use of Gen AI in generating and intermediating corporate reporting can ultimately lead to cliff-edge corrections for securities prices, which may bring about greater market instability than before.
However, the contrary may occur. It is also possible that the reporting chain is able to detect an early problem with corporate reporting, perhaps by using a different Gen AI system. This can mobilise a range of traditional investor and intermediary actions, and perhaps also finfluencer actions, in order to compel companies towards a correcting path.
If Gen AI can usefully detect errors or misdisclosures in corporate reports, market discipline in terms of investors’ actions may be more effectively mobilised, and this can also assist in regulators’ supervisory interventions. If the reporting chain’s monitoring capabilities are perceived as enhanced by Gen AI, companies may perceive their legal risk in reporting as enhanced, and this could produce incentives for companies to institute more robust governance of Gen AI’s use in corporate reporting in a landscape of deterrence against misdisclosures. In turn, companies’ investment in sounder governance of Gen AI adoption for corporate reporting could also mitigate their exposure to privacy risks and copyright infringement risks, as companies may endeavour to ensure their reporting truly accounts for their activities and achievements in a manner material to their recipients. Which possibility is more probable: the systemic error paradigm or the virtuous reinforcing power of the reporting chain?
It can be argued that Gen AI’s propensity to create such analytical homogenisation is unlikely, as users can prompt Gen AI systems in different ways (Imperial, Jones, Madabushi & Harish, 2025; Krause, 2023a), thereby yielding results particular to their purposes. There are only a few dominant global search engines, namely Google and Baidu, yet users worldwide have met their diverse information querying needs even while relying only on these. However, it can also be argued that there have been deliberate development efforts to secure the consistent quality of Gen AI’s outputs (Wang & Wang, 2025), due to concerns about the black box effect of many AI systems. Developers have prized ensuring that similar queries yield similar results in order to mitigate fears of arbitrary results produced in AI black boxes. It is therefore important to explore to what extent Gen AI systems may be susceptible to analytical homogenisation for similar ranges of queries, especially if many corporations and their reporting chain converge upon using the same or a few Gen AI systems.
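One way such an exploration might be operationalised is sketched below: pose the same analytical query to a system repeatedly and across differently phrased prompts, then measure how similar the answers are. This is a minimal sketch under stated assumptions: query_model is a placeholder for whichever system is under examination, and Jaccard word overlap is a deliberately crude similarity proxy (embedding-based similarity would be a natural refinement).

```python
# Illustrative sketch only: probing "analytical homogenisation" by posing
# the same analytical query repeatedly, and across prompt variants, then
# measuring how similar the answers are.
from itertools import combinations

def query_model(prompt: str) -> str:
    """Placeholder for a call to the Gen AI system under examination."""
    raise NotImplementedError

def jaccard(a: str, b: str) -> float:
    # Word-overlap similarity between two answers, between 0 and 1.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def homogenisation_score(prompt_variants: list[str], runs: int = 5) -> float:
    # Collect repeated answers across differently phrased prompts.
    answers = [query_model(p) for p in prompt_variants for _ in range(runs)]
    pairs = list(combinations(answers, 2))
    # Mean pairwise similarity near 1 suggests users converge on
    # near-identical analytical conclusions regardless of phrasing.
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```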
Another potential consequence of analytical homogenisation is that corporations reporting in similar industries may generate highly similar narrative corporate reports with Gen AI adoption. This is already the case with isomorphic influences on ESG narrative reporting (Lin et al., 2024), but it can be reinforced or enhanced by Gen AI adoption, as models are trained on such existing data. There may be some ironic advantages to this development. One may argue that copyright infringement complaints become more tenuous if many companies are generating their reporting out of one or a few common Gen AI moulds, making it unclear who should be the plaintiff and who the defendant. In this manner, litigation risk may ironically decrease.
Further, corporations may see the opportunity to learn from Gen AI-generated narratives as the “wisdom of the crowd” on best practices in areas of ESG due diligence or engagement (Chen & Ge, 2025). In this manner, a form of reporting convergence may be fostered in a bottom-up manner by corporations’ adoption of and engagement with one or a few common Gen AI models, overcoming a longstanding problem of divergence and incomparability in corporations’ narrative reports, especially in relation to ESG matters. Where the reporting chain is concerned, however, analytical homogenisation across an industry could amplify an error or blind spot at industry scale, and this may further exacerbate the potential systemic effects of securities price correction across a wider range of companies in an affected industry.
4.2 Potential risk mitigation measures
Corporations’ adoption of Gen AI for corporate reporting, particularly narrative reporting, raises risks not only for individual firms but also at industry and market systemic levels, when viewed through the lens of the reporting chain. The key issue may be that the reporting chain and reporting companies converge upon one or a few publicly available Gen AI models; if instead there is model diversity and market choice for users, the systemic effects of any particular model can arguably be mitigated. This does not mean that defects or hallucinations on the part of any Gen AI model would not pose problems. One policy measure may be to encourage diversity in AI development so that there is a market for choice. However, given the history with dominant search engine companies and online platforms, the fostering of market competition is not necessarily easily achieved.
It is suggested that regulators provide leadership on how companies use Gen AI to produce mandatory corporate reporting, particularly narrative reporting. Such reporting is already a mandatory obligation, and regulators can provide further guidance on how the governance of reporting processes should be secured. For one, companies should disclose whether Gen AI is used in their reporting, provide a brief description of the model used, and describe the governance and oversight measures in place surrounding the use of Gen AI. Regulators could encourage companies to put in place their own bespoke machine learning systems for corporate reporting, which would greatly alleviate privacy concerns. But given the likely cost of such systems, depending on training capacity, turning to publicly available large language models like ChatGPT is a more likely prospect. However, a mandatory disclosure obligation for companies in terms of what Gen AI models they have used and their governance processes forms a foundational rubric for regulators to undertake appropriate supervision. Regulators could ascertain whether any particular Gen AI models are becoming dominant and how they affect corporate reporting, such as whether analytical homogenisation can be observed. Regulators can also commission audits or testing of dominant Gen AI systems, perhaps using synthetic data, in order to discern the risks of hallucinations and amplified errors. In this manner, regulators need to be more proactive in understanding and engaging with Gen AI risks within their regulated domains, and not leave it to a sub-optimal form of meta-regulation where firms are left to implement their own governance processes without critical supervisory oversight.
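The synthetic-data audit suggested above could take a shape like the sketch below: the regulator seeds synthetic filings with known ground-truth facts (some deliberately perturbed), queries the dominant Gen AI model, and measures how often the model’s answer fails to reflect the planted fact. This is a minimal sketch under stated assumptions: ask_model and the record format are hypothetical placeholders, not any regulator’s actual tooling.

```python
# Illustrative sketch only: a supervisory audit loop in which a regulator
# feeds synthetic filings with known ground-truth facts to a dominant Gen
# AI model and measures how often the model's answer fails to reflect the
# ground truth, i.e. hallucinates or misses a planted defect.

def ask_model(filing_text: str, question: str) -> str:
    """Placeholder for querying the Gen AI system under audit."""
    raise NotImplementedError

def audit(synthetic_filings: list[dict]) -> float:
    """Each record is assumed to be {'text': ..., 'question': ..., 'truth': ...}."""
    errors = 0
    for record in synthetic_filings:
        answer = ask_model(record["text"], record["question"]).strip().lower()
        if record["truth"].lower() not in answer:
            errors += 1  # answer contradicts or omits the planted fact
    return errors / len(synthetic_filings)  # observed error rate
```

Because the filings are synthetic, such an exercise avoids the privacy and confidentiality concerns that would attach to testing with real corporate data.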
It is also suggested that securities and financial regulators overseeing many entities in the reporting chain should proactively engage with Gen AI risks in this user base. As many entities in the reporting chain, such as asset management firms, brokerage firms, financial advisers, credit rating agencies, ESG rating agencies and proxy advisory firms, are subject to forms of organisational and governance regulation as part of their authorisation processes and the continued maintenance of their licence to operate, financial regulators such as the FCA should require relevant periodic disclosure by firms of the operations or business lines in which Gen AI is deployed, which Gen AI is being used, and firms’ oversight and governance processes for the use of Gen AI. It is noted that, at the international level, the Financial Stability Board monitors and encourages national financial regulators to collect data on financial sector AI use in order to monitor risks that may affect regulatory objectives (Financial Stability Board, 2025). This would, in turn, entail information collection from regulated firms, whether on an ad hoc basis or through more structured regulatory returns.
Financial regulators also impose organisational and governance regulationFootnote 38 on regulated firms generally. This area deals with the integrity and competency of managers, sound structures and business processes to ensure stability, resilience and continuity, and the availability of accountability and oversight structures to deal with the range of risks firms face that may affect financial regulatory objectives such as investor protection and financial stability. This body of regulatory standards should more explicitly include Gen AI risk management, in particular aimed at potential adverse effects for firms, markets and the financial system. Besides supervising firms’ governance of Gen AI, the FCA is able to have a bird’s eye view of any concentration risk in model adoption, and should also sample regulated entities’ outputs, such as analyses, advisory recommendations and ratings outputs, in view of the risks of analytical homogenisation and model risk concentration.
The FCA also subjects finfluencers to financial promotions regulation (Financial Conduct Authority, 2024) in order to ensure that they are accountable for conflicts of interest and promote financial investments only in ways that are compliant, fair, clear and not misleading. In addition, the FCA can require all finfluencers to declare whether they use Gen AI, for what purposes and which model has been used, in order to detect trends of analytical homogenisation.
As the Bank of England and FCA joint report (2025) points out, much supervisory effort is aimed at surveying and understanding industry use and its potential implications, so as not to inhibit users from reaping the dividends of engaging with innovations, and not to erect disproportionate regulatory barriers that may be uncompetitive. What is important, however, is that such surveying should be proactive and forward-looking, in order to anticipate risks from the trends that are found, and should lead to supervisory testing and auditing too.
Ultimately, it can be argued that securities regulators’ monitoring of the risks of Gen AI amongst their regulated user base is but a limited effort towards governing the quality of Gen AI systems, in terms of their integrity, avoidance of privacy and copyright breaches, mitigation of errors and hallucinations, and general reliability. These may more appropriately be governed by an omnibus AI regulation, such as has come into force in the EU. Many governments, such as in the US and UK, prefer to opt for broad-based guidelines (UK Department for Science, Innovation and Technology, 2024) for AI developers that are not susceptible to hard enforcement, in order not to stifle the competitive and innovative benefits that the industry can bring. An omnibus AI regulation for the UK, for example, is not necessarily the answer.
The dangers of having a general cross-sectoral AI regulator are that risks emanating from the use of AI systems may differ in type and extent across sectors, and a cross-cutting regulator may not be attuned or responsive to specific Gen AI risks, such as those generated by corporate reporting and the reporting chain. A governance framework for AI risks in various sectors may need to be innovative itself, involving the joint stewardship of general and sector-specific regulation and regulators (Chiu, 2024). Furthermore, in a dynamic and evolving space for technological development, it needs to be ascertained whether the regulation of developers and aspects of AI systems would become outpaced or inappropriate. Nevertheless, the disadvantages of regulation need to be weighed against the drawbacks of soft law, which may suffer from an enforcement deficit and can also be framed very broadly, leaving it in the hands of AI development firms to determine how exactly to define and implement broad notions of explainability, accountability, robustness or fairness. These qualities can take on more particular meanings in different sectoral contexts with particular regulatory objectives. For example, AI systems’ “robustness” can take on particular quality implications in light of the need for market discipline and stability in the context of Gen AI’s involvement with corporate reporting and the reporting chain.
Faced with the dynamic developments in AI systems’ innovations, industry uptake and market response, an optimal governance framework may need to be highly dynamic too, involving perspectives across the upstream and downstream chains relating to a particular use of an AI system, as well as involving multiple regulatory objectives, regulators and stakeholders. A mixture of governance instruments from soft to hard law may be appropriate to govern different types of risks. The study of how Gen AI works in mandatory corporate reporting shows the need for governing a web of firm-centric risks, and these risks are contextualised against the reporting chain. This article hopes to shed light not only on the governance roadmap ahead for corporate reporting and Gen AI but also on the need to appreciate the complex needs for AI regulatory governance and its design.
5. Conclusion
This article discusses the rise in the use of Gen AI for the production of mandatory corporate reporting, particularly narrative and ESG reports. The capabilities of Gen AI can potentially deliver many benefits, but firms are exposed to legal and regulatory risks in connection with Gen AI adoption. This article discusses how firms may address these risks, but more importantly, these risks should be appreciated not only at the firm level but at broader industry, market and systemic levels. When viewed through the lens of the reporting chain, which is the universe of recipient entities that use such corporate reporting, including numerous financial intermediary entities, regulators and finfluencers, these risks take on new implications that require regulatory and supervisory efforts for their oversight and mitigation. The article makes specific proposals for securities and financial regulators in particular, against the broader context of the more general, cross-cutting nature of AI systems regulation and governance.
Acknowledgements
*Professor of Corporate Law and Financial Regulation, UCL. I thank the anonymous reviewers for their helpful comments, and Professor Ernest Lim and his students at the Centre of Transnational Legal Studies, King’s College London, for their comments at my talk drawn from this paper. All errors and omissions are mine.
Funding statement
This article does not benefit from any funding.
Competing interests
There are no competing interests to declare.