This site is intended for health professionals only
NHS England’s recent guidance on ambient voice technologies (AVT), which covers transcription tools used by GPs in consultations, caused major confusion for GP practices. Dr Dom Pimenta, CEO and co-founder of British AI company TORTUS, explains the issues and practices’ requirements.
It is highly likely that your GPs and healthcare professionals are already using ambient voice technology (AVT). Around 30% of organisations are already employing these medical transcription services in their operations. For GPs and clinical staff, we went through the ‘it’s magic’ phase – explaining to clinicians what this technology can do. But now, practice managers are coming to the fore as we enter the ‘how do we adopt’ phase, especially around the governance issues of data handling, cybersecurity and clinical safety.
Practice managers may feel confused about their obligations, especially following new guidance from NHS England last month. So what do you need to know about AI and governance?
Data handling
This refers to how data is being stored, processed, retained and where is that happening geographically (literally where the physical servers are located).
Although GPs are data controllers, practice managers need to understand what is required to act within GDPR legislation. Under the GDPR, all patient data should be processed within the UK if possible and definitely within the EU. It also requires any data, especially patient data, is handled as minimally as possible to reduce the risk of breach or exposure, which means a clear justification for every data decision is needed. Why does the system store patient data for X days instead of Y days, for example?
In terms of AVT specifically, we need to identify the type of data the system handles. For ambient voice technology, there is the conversation audio data, any additional text data (eg, patient demographics, perhaps problems) and the user data. The first two are considered patient information and highly sensitive, the latter is not.
There are also two types of models of AI; inference, when the tool produces an output from an input (such as a summarised note from a transcript); and ‘machine learning’ or ‘training’, which is when data is used to make the model better.
In the case of training models, there may be a governance issue for suppliers. Technically the patient data used to train models can be stored inside the model, and studies have shown it can actually be extracted again in some circumstances.
This is a very new area, but may be perceived in future as ‘data retention’. It also changes the nature of who controls the data. It is likely that, in the future, new legislation will be needed around this to clarify it, but if an AI company is training models on patient data, then that should be very explicitly agreed to by the data controller (the GP), and ideally the patient as well. Therefore, it might be worth asking suppliers if they train models on patient data, and take advice from your data protection officer, outsourced companies as a service, or the Information Commissioner’s office if you have worries.
Cybersecurity
Once the data is handled well, the next question is how secure the supplier’s system is from potential external attacks. This can be accidental or on purpose. All software should undergo a penetration test (from an accredited supplier, such as CREST), which essentially involves paying hackers to attempt to break in and expose any vulnerabilities. Other issues include the handling of users and passwords, whether the company has independent certifications (such as Cyber Essentials Plus), whether suppliers have completed a Data Security and Protection Toolkit, and whether the cloud services they use are secure from attack.
Sometimes the AI itself can be a weak point in a security set up. For example, if the user can interact directly with it via a chatbot, then depending on what the model does or holds (eg, patient data) or system data (eg, the prompt or instruction itself) can be attacked by unscrupulous individuals. This is known as ‘prompt injection’.
Again, your supplier should be able to provide all of this data needed as well – and certifications for CyberEssentials Plus and ISO are available directly from the auditing companies.
Clinical safety
For any clinical software, the NHS has a system called DTAC (Digital Technology Assessment Criteria) that covers a lot of the data and CSQ requirements, but also ensures that clinical risk is appropriately looked at. In my view, this is one of the hardest areas to understand.
Most IT systems should follow a system of recording risk and safety called DCB0129/160. Essentially, the supplier looks at their safety risks and how they’ve mitigated them, such as potential hazards, what impact they might have for the patient and how the system has tested and mitigated these risks. This is called DCB0129. As part of this process, they must involve clinical safety officers – clinically qualified individuals who have undertaken further training specifically in digital clinical safety. The buyer also needs to repeat a similar process, which is called DCB0160 in their case.
When there is a higher level of clinical safety requirements – for example, software that makes a diagnosis or automates a part of clinician-facing work – the software might be considered a medical device. This means a higher level of regulation under the Medicines Health Regulatory Authority (MHRA), which includes more stringent testing and continuous safety monitoring.
NHS England guidance suggests that as the AVT is processing the data in some way (ie, to summarise the conversation) it can be defined as ‘high functionality’ and therefore is at least a class I medical device, and if it goes further such as making a diagnosis or impacting a clinical decision for example, it’s at least a IIa.
From an AI perspective, there are a few specific considerations that are a bit different:
Models change: They change their output even given the same input, and they also change over time (for lots of reasons, including training, updating the models, and changing the hardware). It means that a safety assessment carried out on initiating the use of a product may not be valid anymore over time. So monitoring and continuous assessment are a must for any system using AI, and definitely any system using large language models (AKA generative AI).
Hallucinations: Generative AI specifically is prone to creating content that looks complete and readable to a human, but this sometimes means it will add information that may not be true. This is called a Hallucination. Monitoring and reducing these types of errors is a relatively new science and needs careful evaluation.
Reliance bias: While a clinician may initially be vigilant to errors, over time, they become accustomed to systems and trust them more, leading to a bias in favour of trusting the outputs. This ‘reliance bias’ will become increasingly problematic in AI as it becomes mainstream, and is an important risk for them and practice managers to be aware of.
Medical device regulation: NHS England has issued specific terms for ambient voice technology, and the MHRA is periodically reviewing technologies. For medical device errors, the Yellow Card system (like for drugs) can be used to report errors that the MHRA can then investigate should they receive a lot of them.
Monitoring: Any medical device specifically needs to be continuously monitored for errors – how this is evidenced and provided by the vendor is really important, and whether it’s in real-time or periodically tested, the organisation needs to be comfortable it is being monitored for patient safety.
Other resources
So that’s everything you need to know about AI and governance to start to deploy safely. There are several outsourced companies that can help with assurance, as well as free resources available to practices and NHS organisations. Unfortunately, ignorance of the guidelines is no defence.
Here are some more resources to help with this – and rely on your organisation’s governance teams for support with specific questions as needed:
Please also feel free to reach out to us anytime if you have a question about our systems.
In May, TORTUS AI, whose Ambient Voice Technology (AVT) was described as a ‘game changer’ by Health Secretary Wes Streeting, confirmed a strategic partnership with X-on Health, the largest primary care telephony provider in the UK, serving over 3,500 GP surgeries.
Surgery Intellect, powered by TORTUS, is a voice-enabled AI assistant that uses ambient voice technology (AVT) to listen, transcribe and code consultations in real time. Find out more here.
IMPORTANT LINKS
Sign up today to receive the latest news, business insight, blogs and case studies via newsletters as it happens.