The good, the bad, and the ugly (of AI and data)

AI’s antibiotic breakthrough

While trust in official data is being questioned, innovation in the lab offers a glimpse of what the future could hold. Researchers at the Massachusetts Institute of Technology (MIT) have revealed that they have used generative AI to design two entirely new antibiotics capable of killing drug-resistant gonorrhoea and MRSA – two of the most dangerous and persistent superbugs in circulation today.

Unlike previous AI efforts, which focused on screening existing chemical libraries for antibiotic potential, this approach involved designing molecules atom-by-atom. The AI was trained on millions of chemical structures and their effects on bacteria, enabling it to create novel compounds that were then synthesised and tested on infected mice with promising results.

The breakthrough could signal a “second golden age” for antibiotic discovery – a field that has seen decades of stagnation even as antimicrobial resistance now causes more than a million deaths globally each year. However, the two candidate drugs will require years of refinement, rigorous safety testing, and clinical trials before they can be prescribed. Economic barriers also loom large, with new antibiotics often generating little profit due to the need to preserve their effectiveness through limited use.

Despite these barriers, the work demonstrates how AI could radically speed up and lower the cost of drug discovery, potentially transforming the way we combat some of the most pressing health threats of our time.

Concerns over NHS waiting list data

This week, the Government’s claims of significant progress in tackling the NHS elective care backlog have come under scrutiny. Ministers have repeatedly highlighted a fall in the waiting list of more than 260,000 planned treatments since last summer as proof that their policies are getting the NHS moving in the right direction. However, new analysis from the Nuffield Trust and the Health Foundation tells a very different story.

The analysis finds that most of the reduction is not the result of more patients receiving treatment, but rather due to “unreported removals” – cases taken off the waiting list during validation exercises or because of quirks in NHS software and reporting processes, rather than actual completed care. These include duplicate entries, patients who have moved, or those whose referral was never progressed.

Between April 2023 and March 2025, an estimated 245,000 such removals occurred every month – often outnumbering the new cases joining the list. The researchers warn this creates a “contradiction in the data” and means the waiting list is not an accurate measure of system performance. This creates a risk that, without greater transparency and consistent public reporting of all removals, the metric will continue to give a misleading picture of progress.

The Department of Health and Social Care maintains that the impact of validation is “small” and that reductions are largely the result of increased NHS activity. But with public trust in the NHS’s ability to recover already fragile, the gap between the official narrative and independent analysis risks deepening scepticism about the extent to which the Government is really tackling the system’s myriad challenges.

AI downplaying women’s health needs

A new study from the London School of Economics’ Care Policy and Evaluation Centre has revealed that AI tools already being used by more than half of England’s local councils may be embedding gender bias into social care decision-making.

Researchers tested several large language models on 617 real adult social care case notes, changing only the gender of the subject. They found that Google’s “Gemma” model in particular used language such as “complex”, “disabled” and “unable” far more frequently for men, while describing women with the same needs as “independent” and “able to manage”.

In one striking example, an 84-year-old man living alone was described as having “a complex medical history, no care package and poor mobility,” whereas the same profile, gender-swapped, became “an 84-year-old living alone… able to maintain her personal care.”

Because decisions about the level of support a person receives are based on perceived need, this type of bias could mean women end up receiving less care than men despite living in identical circumstances. Without mandatory bias testing, transparency about which models are used, and robust legal oversight, such disparities risk becoming baked into care systems at scale.

With AI adoption in health and care accelerating, the findings are a stark reminder that new technologies can entrench inequalities as easily as they can help address them unless fairness, accountability, and human oversight are prioritised from the outset.
