As our digital environment plays an ever-increasing role in our day-to-day lives, who is responsible for creating an ethical framework for our brave new world?

Today in Davos, Prime Minister Theresa May put pressure on the investors behind social media providers to ensure those platforms take more action to remove extremist content.

Asking financial backers to take responsibility for setting the ethical standards of tech companies might seem an odd request; after all, financiers are not historically renowned for putting ethics at the forefront of investment decisions. But it highlights a significant, and increasingly urgent, question as the world enters the fourth industrial revolution. Who decides what is ‘good’ or ‘bad’ in this digital age?

For years social media providers have fought against being made responsible for ‘moderating’ or controlling content that users post on their sites. The argument from providers has tended to be: “Hey, we just make and manage the platform. We don’t publish the content, you guys do. So it’s not really our responsibility.”

But more and more stakeholders have been calling for providers to stop ‘washing their hands’ of this responsibility and take action: from governments concerned about the potential of extremist content to radicalise vulnerable people, to parents horrified by the damage done to their children by cyber-bullying.

Critics of platform providers have also pointed out the seeming hypocrisy of their stance: YouTube, as a ‘family-friendly’ platform, is very effective at stopping sexually explicit content from appearing, yet scenes of real-world explicit and extreme violence are readily available.

In her speech today, the Prime Minister also spoke about the ‘revolutionary potential’ of artificial intelligence (AI) to improve lives around the world. The rise of AI, whether in interrogating big data, piloting autonomous vehicles, or enabling any number of other technological innovations, raises another host of ethical questions that need urgent consideration. Some of the ethical conundrums are obvious, such as who sets the programming for how autonomous vehicles behave in accident scenarios. Others are more subtle, like who controls access to big data, how it is utilised, and who benefits from it.

So, who should set the ethical frameworks that define ‘good’ or ‘bad’ in relation to social media content, the exploitation of big data, and the parameters by which AI is deployed? Do we rely on the people creating the technology? The reluctance of social media providers to regulate their platforms suggests some do not want this responsibility. Should Volvo have the responsibility of deciding the programming that determines whether your autonomous Uber swerves to avoid a small child but then hits an elderly couple?

Is it the job of governments, lawmakers or academia? If so, which governments, which legal bodies, or which academic institutions? These issues cross national and international boundaries and impact nearly everyone on the planet. Technology is evolving so quickly that there is no time for societies to adapt gradually, as we have adapted to much technological innovation in the past.

These are increasingly urgent questions, and the answers are not easily forthcoming. Who do you think should be setting out our new digital ethics? Theresa May?

