From Tech to Tale: Why Communications Plans are Essential for Adopting AI in Healthcare 

Harriet Goldman-Thompson

Account Executive

Racial bias in digital healthcare tools made national headlines last week. A Government-commissioned review of AI-powered medical tools reported that, because of such biases, people with darker skin tones are at risk of poorer healthcare.  

Such headlines show how much there is still to consider if AI is to be integrated into the healthcare sector efficiently and safely, and they reflect an understandable media interest in the technology. Digital healthcare providers must be prepared to respond to concerns as they arise, both to build understanding of AI's benefits and to maintain public trust in the adoption of their products.

Media coverage of the ethical concerns surrounding AI can damage public perception of AI in healthcare: if individuals do not trust these digital tools, they will be harder to integrate. Digital healthcare providers therefore need a clear and proactive communications strategy that both promotes the benefits of the technology and addresses the concerns it raises.

The integration of AI in healthcare has the capacity to improve the sector for both doctors and patients. Generative AI can analyse vast amounts of medical data and interpret diagnostic imagery. By analysing hundreds of scans and patient records, AI can swiftly detect patterns and abnormalities that could be missed by the human eye. That translates into earlier diagnosis and, in turn, tailored treatment strategies, improving patient outcomes and the overall quality of care. Beyond diagnosis and treatment, generative AI streamlines administrative tasks, freeing up healthcare professionals' time and allowing more focus on direct patient care.

Yet there are ethical challenges that come with adopting AI. The two primary concerns are data privacy and bias. The former looms large in the realm of digital healthcare, as the AI algorithms used require vast amounts of data, including personal health records, to run effectively. The latter, as seen in last week's headlines, arises because AI systems are trained on vast datasets that can inadvertently reflect existing biases. If a medical tool has been trained primarily on data from patients with lighter skin, for instance, it may fail to recognise the same symptoms in a patient with darker skin.

Ethical concerns arising from the use of AI in healthcare cannot be fixed overnight. Yet without a clear communications strategy to address concerns, share benefits and clarify misconceptions, digital healthcare providers risk eroding patient trust. It is paramount that they have a response ready for when these concerns arise in the media, alongside a programme of media coverage that shares the benefits of the technology.

Generative AI offers unprecedented opportunities for diagnosis, treatment and personalised care. Realising this potential, however, requires a concerted effort to address the ethical challenges. With a clear communications strategy, digital healthcare providers can navigate these concerns and win public trust while ensuring the highest level of patient care.
