
The Future Of AI Regulation In The UK: Light-touch And Pro-innovation – New Technology




The UK government has published a policy paper on where it sees AI regulation
heading in the UK and has put out a call for views. Encouragingly,
the importance of not interfering more than necessary with innovation
in this area is emphasised throughout the paper, starting with its
title (“Establishing a pro-innovation
approach to regulating AI”) and a stated desire for the UK to
be the best place in the world to found and grow an AI business.
The government’s stated ambition is to support responsible
innovation in AI – unleashing the full potential of new
technologies while keeping people safe and secure. How is this feat
to be achieved?

The paper sets out a framework that is:

  • Context-specific – responsibility for regulation is delegated to
    individual regulators rather than a unified set of rules being
    proposed, as in the current version of the EU’s AI Act.

  • Pro-innovation and risk-based – a focus on issues where there is
    clear evidence of genuine risk or missed opportunity, prioritising
    high risks over hypothetical low risks and avoiding the creation of
    barriers to innovation.

  • Coherent – a set of light-touch cross-sector principles
    to ensure regulation remains coordinated between different
    regulators.

  • Proportionate and adaptable – allowing regulators to get on with
    regulating their areas rather than introducing additional
    regulation, and encouraging light-touch options such as guidance
    and voluntary measures in the first instance.

An important aspect of the proposal is the no-definition
definition of what AI is. Perhaps with an eye on the controversy as
to how AI should be defined in the EU legislative process for the
AI Act, the proposal avoids having to define what AI is. Instead,
it sets out two key characteristics of AI that need consideration
in regulatory efforts: AI systems are trained on data rather than
expressly programmed, so the intent or logic behind their outputs
can be hard to explain. This has potentially serious implications,
such as when decisions are being made relating to an
individual’s health, wealth or longer-term prospects, or when
there is an expectation that a decision should be justifiable in
easily understood terms – such as in a legal dispute. This is, of
course, well-recognised, and much current research is looking to
address this. The other characteristic is autonomy (although I prefer
the term automation as a more accurate reflection of the reality of AI
as a deterministic technology); that is, decisions can be made without
the express intent or ongoing control of a human. The best example is
probably the use of AI to control self-driving cars, and the
implications are clear regarding responsibility and liability for
decisions made and actions taken by AI.

The government sets out its purpose behind this no-definition
definition: “To ensure our system can capture current and
future applications of AI, in a way that remains clear, we propose
that the government should not set out a universally applicable
definition of AI. Instead, we will set out the core characteristics
and capabilities of AI and guide regulators to set out more
detailed definitions at the level of application.” The
decision to forgo attempts at a universal definition and put
detailed definitions within the remit of regulators at the
application level is, in my view, highly sensible and avoids much
unnecessary and unhelpful debate that is not necessarily based on
technical reality. It has, incidentally, been proposed before in
one of my favourite papers on the topic, illustrating the problems
of trying to define AI at an ontological level rather than in terms
of its concrete technical applications.

That is all well and good, I hear you say, but is this not going
to lead to chaos and more rather than less red tape as each
regulator adopts different, potentially overlapping and conflicting
regulations? Well, maybe. There is always the potential for
unintended consequences in any regulation. Still, the government is
at least aware of this issue and proposes, as its solution,
coordination between regulators and a set of overarching principles
that all regulators should abide by.

The policy paper marks an early stage in the government’s
approach to formulating its policy on AI regulation. At this stage,
it proposes the following overarching principles, explained in
detail in the paper:

  • Ensure that AI is used safely

  • Ensure that AI is technically secure and functions as
    designed

  • Make sure that AI is appropriately transparent and
    explainable

  • Embed considerations of fairness into AI

  • Define legal persons’ responsibility for AI governance

  • Clarify routes to redress or contestability

This policy paper sets out the government’s current
thinking. It provides an opportunity for stakeholders to make their
views heard ahead of the White Paper and public consultation the
government plans to publish later in the year (and the paper asks
some particular questions on which views are sought).

I am no expert in regulation, AI or otherwise, but I work with
AI innovation daily and welcome the government’s pro-innovation
focus. I also think it will be incredibly difficult to make
meaningful regulation for a whole field of engineering/technology
independent of its application, as the current legislative
initiative in Europe is seeking to do. To my scientific mind,
regulating AI per se makes about as much sense as regulating
electromagnetism or statistics. I, therefore, appreciate the
clarity the government’s no-definition definition brings. Of
course, in the end, all will depend on how this is implemented:
will we see a light-touch regulatory regime in which regulators
work together to provide clarity and certainty while protecting the
public and meshing seamlessly with the international regulatory
context? Or a byzantine set of conflicting and ineffective
regulations suffocating innovation and enterprise in reams of red
tape while leaving the UK isolated internationally? The legislative
journey this paper starts will at least be fascinating to follow,
and it is interesting to see the UK considering a different
approach.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
