
California Attorney General Probes Bias In Health Care Algorithms – Healthcare




A series of letters from California Attorney General Rob
Bonta to leaders of hospitals and other health care facilities,
sent on August 31, 2022, signaled the kickoff of a government probe
into bias in health care algorithms that contribute to material
health care decisions. The probe is part of an initiative by the
California Office of the Attorney General (AG) to address
disparities in health care access, quality, and outcomes and to
ensure compliance with state non-discrimination laws. Responses are
due by October 15, 2022, and must include a list of all
decision-making tools in use that contribute to clinical decision
support, population health management, operational optimization, or
payment management; the purposes for which the tools are used; and
the name and contact information of the individuals responsible for
“evaluating the purpose and use of these tools and ensuring
that they do not have a disparate impact based on race or other
protected characteristics.”

The press release announcing the probe describes
health care algorithms as fast-growing tools used to perform
various functions across the health care industry. According to the
California AG, if software is used to determine a patient’s
medical needs, hospitals and health care facilities must
incorporate appropriate review, training, and guidelines for its
use to avoid unintended consequences for vulnerable patient groups.
One example cited in the AG’s press release is that an Artificial
Intelligence (AI) algorithm created to predict patient outcomes may
be trained on a population that does not accurately represent the
patient population to which the tool is applied. Similarly, an AI
algorithm created to predict future health care needs based on past
health care costs may understate the needs of Black patients, who
often face greater barriers to accessing care and therefore incur
lower costs than their actual health status would suggest.
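To make the cost-as-proxy problem concrete, the sketch below is a
minimal, purely synthetic illustration; the data, group labels, and
20% flagging threshold are assumptions for illustration, not drawn
from the AG’s materials. Two groups have identical underlying need,
but access barriers suppress one group’s observed spending, so a
tool that flags “high need” patients by past cost selects that
group far less often.

```python
# Synthetic illustration (hypothetical data, not from the AG's probe):
# a tool that ranks patients by past cost as a proxy for health need will
# under-prioritize a group whose access barriers suppress observed spending,
# even when underlying need is identical across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Underlying health need is drawn from the same distribution for both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)           # 0 = group A, 1 = group B

# Assume (hypothetically) that group B faces access barriers, so only part
# of its need translates into observed health care spending.
access = np.where(group == 1, 0.6, 1.0)
observed_cost = need * access + rng.normal(0, 0.1, size=n)

# The "algorithm": flag the top 20% of patients by past cost for extra care.
threshold = np.quantile(observed_cost, 0.80)
flagged = observed_cost >= threshold

for g, label in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{label}: mean need {need[mask].mean():.2f}, "
          f"flagged for extra care {flagged[mask].mean():.1%}")
# Despite equal average need, group B is flagged far less often -- the kind of
# disparate impact the AG's letters ask facilities to evaluate.
```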

Not surprisingly, the announcement of the AG’s probe follows
research summarized in a Pew Charitable Trusts blog post highlighting
bias in AI-enabled products, as well as a series of discussions
between the Food and Drug Administration (FDA) and
software-as-a-medical-device stakeholders (including patients,
providers, health plans, and software companies) regarding the
elimination of bias in artificial intelligence and machine learning
technologies. As further discussed in our series on the FDA’s
Artificial Intelligence/Machine Learning Medical Device Workshop,
the FDA is currently grappling with how to address data quality,
bias, and health equity when it comes to the use of AI algorithms
in the software it regulates.

From a practical standpoint, the AG’s probe could put hospitals and
health care facilities in a difficult position. The algorithms used
in commercially available software may be proprietary and, in any
event, hospitals may not have the resources to independently
evaluate software for bias. Further, if the FDA is still sorting
out how to tackle these issues, it seems unlikely that hospitals
would be in a better position to address them.

Nonetheless, the AG’s letter suggests that failure to
“appropriately evaluate” the use of AI tools in hospitals
and other health care settings could violate state
non-discrimination laws and related federal laws, and it indicates
that investigations will follow these information requests. As a
result, before responding, hospitals should carefully review the AI
tools they currently use, the purposes for which they are used, and
the safeguards currently in place to counteract any bias that an
algorithm may introduce. For example:

  • When is an individual reviewing AI-generated recommendations
    and then making a decision based on their own judgment?

  • What kind of nondiscrimination and bias-elimination training do
    individuals using AI tools receive each year?

  • What kind of review is conducted of software vendors and
    functionality before software is purchased?

  • Is any of the software in use certified or used by a government
    program?

  • What type of testing has been done by the software vendor to
    address data quality, bias, and health equity issues?

On the flip side, software companies whose AI tools are in use
at California health facilities should be prepared to respond to
inquiries from their customers about their AI algorithms and how
data quality and bias have been evaluated. For example:

  • Is the technology locked or does it involve continuous
    learning?

  • How does the algorithm work and how was it trained?

  • What is the degree of accuracy across different patient groups,
    including vulnerable populations?
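As one way a facility or vendor might begin to answer the
accuracy-across-groups question, the sketch below is a hypothetical
per-group performance check; the function name, data, and flag
rates are illustrative assumptions, not anything prescribed by the
AG or the FDA. It reports accuracy and selection rate by patient
group and a simple disparate-impact ratio.

```python
# Hypothetical sketch of a per-group performance check: accuracy and
# selection rate broken out by patient group, plus a disparate-impact ratio.
# Data and group labels are illustrative.
import numpy as np

def subgroup_report(y_true, y_pred, groups):
    """Print accuracy and selection rate for each patient group."""
    selection_rates = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracy = np.mean(y_true[mask] == y_pred[mask])
        selection_rate = np.mean(y_pred[mask])   # share flagged by the tool
        selection_rates[g] = selection_rate
        print(f"group {g}: n={mask.sum()}, accuracy={accuracy:.2%}, "
              f"selection rate={selection_rate:.2%}")
    # Disparate-impact ratio: lowest selection rate divided by highest.
    ratio = min(selection_rates.values()) / max(selection_rates.values())
    print(f"disparate-impact ratio: {ratio:.2f} (closer to 1.0 is more even)")

# Example with made-up ground truth and predictions for two groups,
# where the tool flags group B less often than group A.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1_000)
y_true = rng.integers(0, 2, size=1_000)                  # ground-truth need
flag_rate = np.where(groups == "B", 0.10, 0.30)
y_pred = (rng.random(1_000) < flag_rate).astype(int)
subgroup_report(y_true, y_pred, groups)
```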

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
