Shoosmiths' key takeaways from techUK’s Data Ethics Summit

Shoosmiths’ AI lead and partner Alex Kirkhope attended techUK’s Data Ethics Summit. Originally focused on exploring the fundamental principles of ethical technology, the Summit’s agenda has evolved to delve into the practical application of ethics, spanning private sector strategies through to policy and regulatory development, many of which are now being put into practice.

This year, the Summit aimed to evaluate the effectiveness of these approaches and to determine whether the tech sector has learned from past mistakes. The agenda extended to assessing the influence of recent regulatory developments on the role of ethics, and delved into emerging discussions around ethics in the metaverse, AI 'sentience,' and the consequences of cutting-edge technologies such as high-performance computing.

Across this, the primary objective was to address the question of what actions should be taken in the coming years to ensure that technology – with an inevitable focus this year on AI - is developed and deployed in a manner that fosters human benefits and safeguards individuals from harm.

Below, we reveal our key takeaways from the event:

Public engagement: Building trust in AI

With the constant flow of international summits and emerging rules and regulations to address its dangers, and the understandable media focus on high-profile providers like OpenAI, Microsoft and Google, it’s all too easy to forget that one of the key challenges to the widespread adoption of AI is public trust. The Summit welcomed voices from outside the tech sector, who discussed some of the common misconceptions and fears around AI, and how those fears could be addressed through engagement, education, and familiarisation. Ultimately, the speed and confidence of AI adoption will be slower in those countries that do not seek to nurture understanding amongst their citizens of the benefits – as well as the risks – that AI presents.

The International Dimension

A consistent thread throughout the day was the need to weave domestic regulation into a global institutional and regulatory framework, with much focus on the potential evolution of the UK’s proposed regulatory approach in its AI White Paper released earlier this year. The Government’s formal response to its consultation on the White Paper is expected soon, and it will be intriguing to see how it responds to some of the consistent themes beginning to emerge. With a number of delegates making clear that regulation doesn’t always stifle innovation – in fact it can instil the confidence needed for companies to experiment within known boundaries – it remains to be seen whether the Government has either the appetite or the bandwidth to change tack from its current resistance to statutory intervention.

Don’t forget the current law

The UK’s Information Commissioner, John Edwards, delivered a timely reminder that whilst much of the noise around AI suggests the risks it poses are novel, the rules and principles that apply to many of those risks are far from new, particularly when it comes to the use and protection of data. The same can be said (to a certain extent at least) of issues such as IP rights, discrimination and bias which come up so often. So whilst it’s tempting to rewrite the rulebook for AI we mustn’t forget the well-established laws that already apply to the technology, software and systems we all use today.

Looking ahead: Was 2023 the peak of the AI hype curve?

With AI poised at an intriguing inflection point, will 2024 see the AI hype curve continue to rise, or have we already reached its peak? With boardroom wrangles in Silicon Valley, and a wider appreciation within organisations of what AI can – and as importantly can’t – deliver in the short to medium term, some have speculated that this is the point at which the promises of AI begin to unravel. However, with the EU on the verge of concluding its AI Act (albeit with its typical last-minute horse-trading between member states) and the US beginning to shape its own regulatory agenda, it seems more likely we’ll reach the end of the year as a moment to draw breath, survey the technology and regulatory landscape, and plan a less-fevered but hopefully more strategic conversation on how AI can be deployed in a targeted way within businesses to support broader goals, processes and improved customer outcomes.

To summarise, AI is a powerful and transformative technology with the potential to improve many aspects of society – it’s certainly safe to say AI isn’t going anywhere. However, it also poses significant challenges and risks that need to be addressed with care and caution, and businesses that build out their governance and legal processes to accommodate it seem likely to stay ahead of their competitors as adoption becomes more widespread. To ensure that AI is used in a responsible and ethical way, it is vital to build public trust in AI, to regulate AI proportionately and in alignment with international standards, and to apply the existing laws that already govern its use. Moreover, it is important to move beyond the hype and focus on the strategic and targeted use of AI to achieve specific business goals and customer outcomes. By doing so, we can harness the benefits of AI while mitigating its risks, creating a more prosperous and sustainable future for all.


This information is for general information purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. Please contact us for specific advice on your circumstances. © Shoosmiths LLP 2024.
