BMA warns GPs of AI risks as regulation currently in ‘state of flux’

by Eliza Parr
12 May 2025

The BMA has warned GPs of the potential risks when using artificial intelligence (AI) while regulations are still in a ‘state of flux’. 

In interim guidance, the GP Committee (GPC) told GP practices they must have ‘absolute clarity around the use of confidential patient data’ when using AI software.

It also said the GPC is pushing for ‘any necessary regulation’ around AI to take place ‘at a national level’ and for GPs to have ‘protections’ if these technologies will be adopted more widely.

The guidance stressed that Data Protection Impact Assessments (DPIAs) must be completed before any patient data processing considered ‘high risk’ occurs. 

GPs were also advised to ‘ensure they have appropriate indemnity’ and to use the Yellow Card Reporting System for AI technologies that are medical devices if their outputs ‘adversely affect’ patient care. 

The BMA has published this advice ahead of ‘more substantial guidance’ given the ‘renewed focus on the role of AI in general practice’ in recent weeks. 

Last month, NHS England published guidance promoting the use of ‘ambient scribes’, which also said GP practices ‘may still be liable’ for clinical negligence claims arising from the use of AI products. 

The GPC recognised the importance that evolving technologies can play in a GP’s daily work, especially as it is increasingly possible to integrate AI software with GP clinical systems. 

‘However, we feel it is important to make it clear that there are risks associated with the use of technologies, especially if they are to be considered medical devices, and appropriate regulatory approval must be in place before clinical use occurs,’ the guidance said. 

On patient data, it said: ‘It is important to have absolute clarity around the use of confidential patient data, where it is transferred, when being processed, and where it may later be stored, and if it is made available for secondary purposes.’

GPC IT leads also said: ‘In summary, practices, as data controllers, need to understand the risks they may be taking on if using such AI technologies, particularly at this early stage when the regulatory landscape is in a state of flux.

‘In the coming months we will be working with external bodies to ensure any necessary regulation occurs at a national level and that GPs have the protections they require if these tools are to be adopted more widely, ensuring at all times that patients maintain their high level of trust in their GP.’

Earlier this year, GP practices in one area were warned against using AI without seeking approval from their ICB first.

And a medical defence organisation warned GPs not to use AI for complaint responses due to the risk of ‘inaccuracy’ or ‘insincerity’.

A version of this article was first published on our sister title Pulse.