Reference Materials

Below are links to external AI reference materials.

 

General reference materials

AI Opportunities Action Plan

 

The independent report by Matt Clifford CBE, commissioned by the Secretary of State for Science, Innovation and Technology, to devise an AI Opportunities Action Plan. (More about Matt Clifford CBE).

 

AI Opportunities Action Plan: Government Response

 

  • The AI Opportunities Action Plan Government Response sets out the Government’s response, including the proposed implementation timeline, to each of the 50 recommendations. Particular attention should be given to those recommendations addressing:
    • Unlocking data assets in the public and private sector (recommendations 7-13)
    • Enabling safe and trusted AI development through regulation, safety and assurance (recommendations 23-30)
    • A number of the recommendations are explicitly linked to the Government’s Plan for Change missions, which include ‘Build an NHS fit for the future’; the ‘milestone’ for this mission is to meet the NHS RTT standard that 92% of patients should wait no longer than 18 weeks from referral to the start of consultant-led treatment for non-urgent health conditions. The Government intends to achieve this by focusing on:
      • Transforming how elective care is delivered
      • Transforming patients’ experience of care
      • Transforming the model of care to make it more sustainable.
    • The Action Plan recommendations to note are:
      • (3) Strategically allocate sovereign compute by appointing mission-focused “AIRR programme directors” with significant autonomy
      • (26) Ensure all sponsor departments include a focus on enabling safe AI innovation in their strategic guidance to regulators
      • (31) SCAN – Appoint an AI lead for each mission to help identify where AI could be a solution within the mission setting, considering the user needs from the outset
      • (32) SCAN - A cross-government, technical horizon scanning and market intelligence capability that understands AI capabilities and use-cases as they evolve to work closely with mission leads and maximise the expertise of both
      • (40) SCALE - Mission-focused national AI tenders to support rapid adoption across decentralised systems led by the mission delivery boards
      • (46) In the next three months, the Digital Centre of Government should identify a series of quick wins to support the adoption of the scan, pilot, scale approach and enable the public and private sectors to reinforce each other (NB. This includes running hackathons, aligned to the 5 key missions, to engage startups in mission delivery, and appointing an AI lead for each mission to help identify where AI could be a solution.)

The regulation of artificial intelligence as a medical device

 

The Regulatory Horizons Council (RHC) published recommendations on how the UK can support the safe and rapid development of AI as a medical device (AIaMD). The Parliamentary Under-Secretary of State for Patient Safety, Women’s Health and Mental Health has set out the government’s response to the report’s recommendations in a letter. Key extracts from the government response are set out below; full details are in the letter.

Theme A: regulatory capacity and capability

Recommendation 1

The MHRA should be specifically resourced with a long-term funding commitment to enable them to create and service a regulatory framework that is efficient, proportionate and safe, and supports the UK in being a leader in the innovation, evaluation and utilisation of AIaMD.

Government response

Accept in principle: the government continues to support the appropriate funding required for the operation of the MHRA, taking into account all of its functions. The MHRA’s regulatory functions for medical devices are primarily funded by DHSC, with the remaining funding from fees charged for services.

In October 2023, the NHS AI Lab provided the MHRA with £1 million to pilot an AI-focused regulatory sandbox, to uncover challenges and solutions to regulating AIaMD safely. In spring 2024, the MHRA launched the AI Airlock, a regulatory sandbox for AIaMD, in collaboration with the NHS and Team AB (the UK association for medical device approved bodies). The pilot is running until spring 2025, using real-world AI products and prototypes to tackle regulatory challenges and inform future policy and sandboxes.

Recommendation 2

Strengthen regulatory capacity and capability in AIaMD, addressing pre-market and post-market phases, through targeted training of regulators and other gatekeepers and key contributors across the total product lifecycle.

Government response

Accept: the MHRA is supportive of this recommendation and is working to expand its regulatory capacity in AI within existing resources.

Note - the MHRA continues to welcome applications from organisations that wish to be designated as UK approved bodies, expanding capacity for the assessment and approval of all medical devices including software and AI.

Theme B: whole product lifecycle

Recommendation 3

The UK should aim for an AIaMD regulatory framework that is ‘legislatively light’ and maximises the role of standards and guidance and builds on existing regulations for SaMD while also addressing the specific challenges of AI technologies.

Government response

Accept: the MHRA supports this recommendation which aligns with the UK’s cross-sector pro-innovation approach to regulating AI. It also recognises that the identified risks most associated with AIaMD products are appropriately covered within the existing UK medical device regulations to provide safety oversight.

Recommendation 4

Manufacturers should be required to provide evidence that they have evaluated and mitigated risks of the 2 major issues of (1) poor generalisability and (2) AI bias that can arise due to the use of AI.

Government response

Accept: the MHRA supports this recommendation. The UK medical devices regulations require that products are designed to be sufficiently safe for their intended use population under normal conditions of use. Risks relating to generalisability and bias that may arise for AI products would be covered by the general requirements of the essential requirements which manufacturers are required to comply with under regulation 8 of the UK MDR 2002.

Recommendation 5

Manufacturers should provide information regarding the extent to which the basis of the outputs of the AIaMD is interpretable and can be interrogated.

Government response

Accept: the MHRA accepts and supports this recommendation. The UK MDR 2002 requires manufacturers to take into account the state of the art when conforming to safety principles when reducing risks. As such, the level of understanding of the intended user must be accounted for in the design of the medical device. In the context of AIaMD products, this would include the information, tools and access required for relevant stakeholders to interpret and interrogate the outputs of the AIaMD.

Recommendation 6

The regulatory framework should support innovative mechanisms that enable accelerated access with more evidence generation occurring after deployment.

Government response

Accept: the MHRA supports a pro-innovation approach to the safe regulation of software and AI products and accepts this recommendation. The MHRA is strengthening the post-market surveillance (PMS) aspects of the UK MDR 2002 through its legislative reform work, with new legislation taking effect in June.

Recommendation 7

Prior to a local deployment, manufacturers should work with health institutions to provide evidence that the AIaMD is likely to perform safely within their local setting, and work with them to provide that evidence where still needed.

Government response

Accept in principle: the UK MDR 2002 requires manufacturers to design medical devices to operate safely and effectively within their intended use environment. For AIaMD, this includes both the generalisation of the model to the intended use environment and the assessment and management of local variability issues during deployment.

This issue is not unique to AI products, but current evidence indicates that AIaMD products may be more sensitive to differences in infrastructure and sub-group distributions within the intended patient population that can be experienced at a local level. To appropriately address this, the Software and AI as a Medical Device Change Programme contains work items that will identify aspects of the development and deployment lifecycle for best practice compliance (work package 9-04). This includes the use of staged deployment approaches and on-site validation activities ahead of full deployment.

In addition, the MHRA has outlined both legislative and guidance aspects to the introduction of predetermined change control plans (PCCPs). PCCPs are a new regulatory tool that allows pre-approved modifications to software products to be implemented without requiring further regulatory review. 

Recommendation 8

Key stakeholders including NHS, regulatory agencies (MHRA, Care Quality Commission (CQC)), and manufacturers should work together to create standards that ensure that post-market monitoring of performance and safety should be pro-active, systematic and an essential condition of deployment.

Government response

Accept: the MHRA accepts this recommendation. Under the UK MDR 2002, manufacturers are already required to conduct post-market surveillance activities. These requirements are being strengthened through legislative reform activities expected to come into force from June. Additionally, the MHRA is working with the NHS on the Artificial Intelligence Deployment Platform (AIDP) project to ensure post-market surveillance is factored into deployment plans from the start. Finally, the MHRA is engaged with academic partners and CQC on the topic of AI safety monitoring.

Recommendation 9

In addition to safety monitoring, stakeholders should work together to create systems in which AIaMD performance can be optimised through model updating and innovation within a secure data environment of the NHS.

Government response

Accept in principle: much of this recommendation falls beyond the scope of the MHRA’s remit, which focuses on safety monitoring through reporting and updating AIaMD to address safety concerns. The MHRA also recognises the patient benefits of ensuring optimal product performance and is supportive of this recommendation.

Recommendation 10

The health institution and device manufacturer should be required to agree, as part of contractual negotiations prior to deployment, an approach for monitoring and responding to performance and safety issues that adequately assures patient safety and ensures that there is a ‘plan B’ in case of the need for device withdrawal.

Government response

Accept in principle: the MHRA’s interpretation is that this recommendation should be included in wider procurement requirements rather than within medical devices regulations themselves. Accordingly, this requirement does not directly apply to the UK MDR 2002. Nevertheless, the MHRA agrees with the spirit of the requirement and is supportive of this recommendation, which increases patient safety.

Manufacturers should have a plan in place should they withdraw their product from the market.

Theme C: open transparency, patient and public involvement

Recommendation 11

The end-to-end regulatory pathway for AIaMD needs to be clearly communicated and supported by guidance that is accessible to innovators that are new to medical device regulation.

Government response

Accept: the MHRA supports the need to clearly communicate medical device regulatory requirements; greater education and understanding of healthcare product safety regulations will improve patient safety. The MHRA accepts this recommendation and strives for high quality, accessible guidance through the Software and AI as a Medical Device Change Programme to convey legislative requirements and best practice for innovators looking to create medical device products. This clarity is of particular importance for the manufacturers of SaMD and AIaMD products, where the MHRA observes many new innovations developed by established organisations whose prior operations fall outside the healthcare space and which may be less experienced in medical device regulatory requirements.

Secondly, the digital health space in general, and the medical device market more widely, has a large proportion of small and medium-sized businesses that typically have more limited resources. Therefore, the MHRA is investing in clear guidance which will benefit the full range of stakeholders to enable greater access to safer medical devices. Further, the agency is working across the healthcare sector with other agencies via the Artificial Intelligence and Digital Regulations Service (AI DRS) project, which brings together the MHRA, the National Institute for Health and Care Excellence (NICE), the Health Research Authority (HRA) and CQC to help navigate the regulatory landscape for developing and adopting AI in health and social care.

Recommendation 12

Regulatory processes for AIaMD should have adequate explanation for public and patients to have trust in the system, which should be supported by the MHRA providing a public-facing register to include all AIaMD on the UK market, including their risk class, their intended use statement and a plain English summary of their intended use statement.

Government response

Accept in principle: the MHRA places patients and the public at the heart of its organisational strategy, striving for better engagement and public-facing communication activities across all of its functions. For guidance relating to software and AI products through the Software and AI as a Medical Device Change Programme, the MHRA endeavours to produce plain English summaries to better communicate technical and regulatory requirements with as wide an audience as possible.

There are no plans to legally enforce the publishing of information on a device’s intended use within the current legislative reform.

Regarding the recommendation for a public facing register for all AIaMD on the UK market, the MHRA has a public access database for medical devices registered with the agency, called Public Access Registration Database (PARD). PARD allows users to identify the risk classification of the product and the legal manufacturer or UK responsible person, and registration is a legal requirement for all medical device products placed on the UK market.

Recommendation 13

Manufacturers, regulators and other stakeholders should demonstrate and role model greater patient and public involvement in the design, evaluation and regulation of AIaMD.

Government response

Accept: the MHRA accepts this recommendation and has work items within the Software and AI as a Medical Device Change Programme that will encourage the inclusion of patient and public involvement in the design, evaluation and regulation of AIaMD products.

Theme D: UK leadership and international collaboration

Recommendation 14

The UK should demonstrate international leadership in the regulation of AIaMD, leveraging its expertise and position to support international harmonisation in this area.

Government response

Accept: AIaMD products, and SaMD more broadly, are increasingly becoming global products as the development and deployment barriers can be lower than those for hardware products which must be physically transported between manufacturing sites and end users. 

Since the UK’s departure from the European Union, the MHRA has gained full membership to the International Medical Device Regulators Forum (IMDRF) as a sovereign regulator to ensure the agency has a voice at the international level. Further, the MHRA is co-chairing the Artificial Intelligence/Machine Learning-enabled (AI/ML) Working Group, which seeks international harmonisation on AI regulatory positions held by IMDRF members and regional harmonisation initiatives. This co-chairing, along with having a strong independent voice in other working groups, provides the UK the opportunity to demonstrate international leadership both in the regulation of AIaMD and harmonisation of medical device regulations more broadly.

Beyond IMDRF, the MHRA has established relationships with several other medical device regulatory bodies around the world and this is leading to bilateral and/or multilateral working relationships. For example, a trilateral working relationship focused on AI and digital health between the MHRA, the USA’s Food and Drug Administration (FDA) and Health Canada has developed over the past couple of years. Through this group, the good machine learning practice (GMLP) guiding principles were developed and published. Following on from the GMLP principles, 2 additional work items have been jointly published, the first on principles of PCCPs and the second on transparency in machine learning medical devices.

Recommendation 15

The UK should aim for regulatory efficiency in AIaMD by adopting good reliance practices (GRelP) in medical device regulatory decision making.

Government response

Accept: the MHRA has developed strong working relationships with many international regulators in the digital and AI spaces to promote consensus generation and alignment. Additionally, the MHRA is working both domestically and internationally with standards bodies such as the British Standards Institution (BSI), public-private partnership initiatives such as the Medical Device Innovation Consortium (MDIC) and centres of academic excellence.


National Institute for Health and Care Excellence (NICE) resources

NICE Artificial intelligence technologies to help detect fractures on X-rays in urgent care: early value assessment

 

This NICE health technology evaluation (HTE20) provides early value assessment (EVA) guidance on artificial intelligence technologies to help detect fractures on X-rays in urgent care. Note: the Specialist Committee members included Azizul Haque, Consultant trauma and orthopaedic surgeon, University Hospitals of Leicester NHS Trust.

 

Navigating the future – AI at NICE

 

The statement of intent sets out NICE’s plans for:

  • providing guidance for technology developers on best practice for AI-based methods to support the evidence generation for their technologies
  • evaluating AI-based technologies for use in the NHS
  • exploring the use of AI-based tools in our own internal processes.

The NICE position statement aims to:

  • outline what we expect when AI methods are considered or used for evidence generation and reporting
  • indicate existing regulations, good practices, standards and guidelines to follow when using AI methods, where appropriate
  • support our committee members and external assessment groups to understand and critique the potential uses of AI methods.

 

AI at NICE virtual event

 

AI is a dynamic and rapidly advancing field, with the potential to have a transformative impact on healthcare.

At a recent NICE event, Pall Jonsson, Programme Director for Data and Real-World Evidence at NICE, and a panel of experts explained how NICE is preparing for this shift. View a recording of the event.

 

AI at NICE podcast

 

In this podcast, Dr Pall Jonsson, Programme Director for Data and Real-World Evidence at NICE, and Dr Stephen Duffield, Associate Director for Real-World Evidence at NICE, discuss the impact of AI.