
UK’s New Pro-Innovation Approach To Regulating AI – Product Liability & Safety


UK publishes policy paper that sets out an ambitious ten-year
plan to remain a global AI power

On 18 July 2022, the UK Government published the policy paper
"Establishing a pro-innovation approach to regulating
AI", which signals an approach that steers clear of
AI-specific legislation in the UK. This represents a
significant departure from the EU’s approach which has been
driving a comprehensive legislative agenda for AI. The proposed
“light touch” approach will be welcomed by those in the
AI industry who feel that onerous regulations can hold back
innovation. However, in an increasingly global world, many
companies will be required to comply with the EU regime in any
event, so the benefits of the UK’s approach may be limited.
Moreover, it remains unclear how the proposal impacts the plans of
the Office for Product Safety & Standards (OPSS), the UK
product safety regulator, to amend the product safety and product
liability framework to address new technologies including AI.

Establishing pro-innovation, clear and flexible approaches

The policy paper recognises that there are challenges facing AI
businesses, including a lack of clarity as to what law applies,
overlaps in regulation and inconsistency in approach in different
sectors. In an attempt to remedy these challenges, the paper sets
out an ambitious ten-year plan for the UK to remain a global AI
power that will be grounded in a set of cross-sectoral principles,
which are:

  • to ensure that AI is used safely;

  • to ensure that AI is technically secure and functions as
    designed;

  • to make sure that AI is appropriately transparent and
    explainable, e.g. by requiring information about the data
    used (including training data), the logic and processes
    applied, and information supporting the
    'explainability' of decision making and outcomes by
    the AI product;

  • to embed considerations of fairness in AI;

  • to define legal persons’ responsibility for AI governance,
    i.e., accountability for the outcomes produced by AI must rest with
    an identified/identifiable legal person; and

  • to clarify routes to redress or contestability relating to
    users’ ability to contest an outcome.

The principles must be:

  • Context-specific – regulation should be based
    on the use and impact of the AI technology, with
    responsibility for designing and implementing proportionate
    regulatory responses delegated to regulators (e.g. Ofcom, the
    Competition and Markets Authority, the Information
    Commissioner's Office, the Financial Conduct Authority,
    and the Medicines and Healthcare products Regulatory Agency).

  • Pro-innovation and risk-based – regulators
    should focus on high-risk concerns rather than hypothetical or low
    risks associated with AI.

  • Coherent – cross-sectoral principles should be
    tailored to the distinct characteristics of AI, and regulators
    should interpret, prioritise and implement these principles within
    their sectors and domains.

  • Proportionate and adaptable – cross-sectoral
    principles should be set on a non-statutory basis to remain
    adaptable, and regulators should consider "lighter
    touch" options such as issuing guidance or encouraging
    voluntary measures.

The principles track the EU's approach in its draft AI law
but lack the proposed statutory underpinning of that regime.

Why is this important?

A light touch approach will be welcomed by many stakeholders in
the AI ecosystem. However, the UK does not operate in a vacuum and
AI technologies are often expected to operate globally and comply
with the legal obligations of major markets such as the EU and US.
Where technology relies on dataset-trained algorithms, it could be
all too easy to inadvertently build in problems that become
difficult to unpick later when faced with a different regulatory
regime. A UK approach that tracks the fundamental principles of
other major markets, but doesn’t create an additional set of
statutory controls, is a pragmatic recognition of these global
realities.

What about product safety?

While safety is flagged as a key cross-sectoral principle, the
UK Government’s product safety regulator, OPSS, is not listed
in the proposal alongside other regulators with responsibility for
designing and implementing regulatory responses. However, we know
that AI is very much on the agenda of OPSS, which published a
report in May this year (covered by Productwise) addressing
product safety and liability issues. This followed a call for
evidence on updating the legislative framework to reflect new
technologies. The OPSS is expected to set out its
approach to the product safety and product liability framework
later this year, and it will be interesting to see whether they
also pursue the “light touch” approach. Their absence
from this proposal may be a sign that they are planning to chart a
different course.

What next?

The UK Government is currently seeking views from stakeholders
on its proposed approach to regulating AI. If you'd like to
submit views, the window is open until 26 September 2022.

Further, there will be a White Paper and public consultation in
late 2022 providing further detail of the UK Government's
approach.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
