Oregon child protection services to stop using a trained algorithm to detect child abuse

At Dijon University Hospital in France, researchers are currently testing an algorithm that detects child abuse by flagging pathologies and trauma when very young children are hospitalized. In the United States, child protection services in many states already use similar screening tools, but these have been shown to be harmful: trained on data such as mental health, drug addiction, and incarceration records, they disproportionately flag Black families. While maintaining that AI can help, Oregon has announced it will abandon the algorithm currently used to decide whether a family investigation is warranted.

When child abuse or neglect is reported, social workers must investigate to protect, and in some cases save, the child's life.
In the United States, as child protection agencies consider adopting or expanding the use of algorithms, an AP (Associated Press) investigation highlights issues of transparency, reliability, and racial discrimination in the use of AI, which could worsen existing bias in child protection systems.

The algorithm from Allegheny County, Pennsylvania

The algorithm currently in use in Oregon is inspired by the one used in Allegheny County, which was the subject of research by a team from Carnegie Mellon University to which AP had access. Allegheny's algorithm flagged a disproportionate number of Black children for a "mandatory" neglect investigation compared to white children. Independent researchers also observed that social workers disagreed with about one-third of the risk scores the algorithm produced.

The algorithm was trained to predict the risk of a child being placed in foster care within two years of the investigation, using detailed personal data drawn from birth records, health insurance, substance abuse, mental health, incarceration, and probation records, among other government datasets. It then computes a risk score from 1 to 20: the higher the number, the higher the risk. Neglect, which this algorithm was trained to detect, can cover criteria ranging from inadequate housing to poor hygiene, but similar tools can be used in other child protection systems with minimal or even no human intervention. Similar algorithms have been used to inform decisions in the US criminal justice system, and could likewise reinforce existing racial disparities in child protection systems.
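To illustrate the general shape of such a screening tool, here is a minimal sketch. The features, weights, and thresholds below are entirely hypothetical assumptions for illustration; neither the article nor AP's reporting discloses the actual model used in Allegheny County or Oregon. The sketch only shows how a predicted probability might be mapped onto a 1-to-20 risk score like the one described above.

```python
import math

# HYPOTHETICAL feature weights: these names and values are invented for
# illustration and do not come from any real child-welfare screening tool.
HYPOTHETICAL_WEIGHTS = {
    "prior_referrals": 0.6,
    "public_benefits_years": 0.3,
    "parent_incarceration": 0.9,
    "mental_health_record": 0.5,
}
BIAS = -3.0  # hypothetical intercept

def referral_probability(record: dict) -> float:
    """Logistic model of a (hypothetical) two-year foster-placement risk."""
    z = BIAS + sum(w * record.get(name, 0.0)
                   for name, w in HYPOTHETICAL_WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_score(record: dict) -> int:
    """Map the probability onto a 1-20 scale, as described in the article:
    the higher the number, the higher the flagged risk."""
    p = referral_probability(record)
    return min(20, max(1, 1 + int(p * 20)))

# Example: a family with two prior referrals and an incarceration record.
family = {"prior_referrals": 2, "parent_incarceration": 1}
print(risk_score(family))  # → 6
```

The sketch also makes the reported bias mechanism concrete: because inputs such as incarceration or benefits records correlate with race and poverty, any score computed from them can inherit those disparities even though race is not an explicit feature.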

A member of the research team said:

“If the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other reported children.”

Abandonment of the algorithm by Oregon

A few weeks after these findings, the Oregon Department of Human Services announced in an email to its staff last May that, after a “thorough analysis”, the agency's hotline workers would stop using the algorithm by the end of June, in order to reduce disparities among families investigated for child abuse and neglect by child protection services.
Agency director Lacey Andresen said:

“We are committed to continuous quality improvement and equity.”

Oregon's Democratic Senator Ron Wyden has said he is concerned about the growing use of artificial intelligence tools in child protection services.
He said in a statement:

“There is far too much at stake for children and families to leave these decisions to untested algorithms. I'm glad the Oregon Department of Human Services is taking my concerns about racial bias seriously and suspending the use of its screening tool.”
