How can ethical AI support efforts to safeguard our children?
- 18 March 2024
- Posted in: Healthcare, Education, Planning & Development, Management & Leadership
Present safeguarding arrangements are largely manually driven, reactive and subjective. The result is child safeguarding failures: 40% are related to data silos and 41% to poor risk assessment (MacAlister, Independent Review of Children's Social Care, 2022). Over the past five years a groundbreaking ethical AI solution, CESIUM, has been co-designed and adopted by Lincolnshire Police. A validation of the solution in 2022 identified 16 vulnerable children up to six months before they were picked up by existing referral processes. Further analysis in 2023 identified three children who would not otherwise have been found.
CESIUM, developed by Trilateral Research, is currently being implemented across Lincolnshire Police. In 2024, it is attracting interest across the UK child safeguarding sector. Designed to share information across multiple agencies, CESIUM’s potential is fully realised when used in a partnership setting.
Why was CESIUM developed?
In 2018, statutory UK guidance was released on inter-agency safeguarding, reflecting ongoing issues throughout Safeguarding Partnerships. Despite best efforts, admin-heavy processes and procedures led to failures in the system, ultimately having a negative impact on at-risk children.
With backgrounds in criminology, policing, data science, data privacy, cyber security, engineering and responsible AI, experts at Trilateral recognised the need for a more rigorous approach to information exchange and the identification of risk within the Partnerships. They recognised that their skills and experience could help these Partnerships bridge the gap, and approached Lincolnshire Police with a proposal to validate a responsible AI solution that would address these system failures.
Over the next five years, Trilateral worked with Lincolnshire Police to inform and validate the design of the world’s first responsible AI solution, CESIUM, which provides a cross-border, cross-agency information-sharing platform and vastly improves child safeguarding efforts in the UK. Trilateral’s interdisciplinary team developed groundbreaking responsible AI techniques throughout the process, working closely with their counterparts in Lincolnshire Police to ensure the solution is user-friendly, fit for purpose and ultimately streamlines processes and increases efficiency throughout day-to-day safeguarding efforts.
How does it work?
The development of CESIUM took five years, and for good reason. It is a world first, which has meant not only developing new software, but also fundamentally rethinking how current issues are understood and implementing novel innovations to overcome them. The team at Trilateral have revolutionised the approach to the use of AI within safeguarding, with user workflow, data protection, ethics, security and machine learning governance built into CESIUM. Further, they have developed a governance framework, Trilateral’s Shared Responsibility Model, for the deployment of AI, with a focus on training and on ongoing monitoring and validation of the models.
CESIUM takes traditional methods of data collection, collation and analysis, and uses AI to improve them. Rather than being held in separate systems and researched and reviewed manually, all data is stored and accessed centrally, giving users instant access to the most up-to-date records available. Where analysis once took days, CESIUM provides dynamic reporting: users can search and analyse all available information about a child in an instant, pulling out the most relevant information for further analysis. What took five people five days can now be completed in 20 minutes.
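As a purely illustrative sketch, and not a description of CESIUM’s actual implementation, the shift from siloed manual look-ups to a single centralised, searchable view can be pictured like this (the agencies, column names and records are invented for the example):

```python
# Illustrative only: consolidating per-agency extracts into one searchable view.
# Agencies, columns and records are invented for this sketch.
import pandas as pd

# Hypothetical extracts from separate agency systems.
police_records = pd.DataFrame({
    "child_id": [101, 102],
    "event_date": ["2023-01-10", "2023-02-03"],
    "event": ["missing episode reported", "found at address of interest"],
    "source": ["police", "police"],
})
education_records = pd.DataFrame({
    "child_id": [101, 103],
    "event_date": ["2023-01-15", "2023-02-20"],
    "event": ["persistent absence flagged", "exclusion recorded"],
    "source": ["education", "education"],
})

# Centralise everything into a single chronological record per child,
# instead of searching each system by hand.
timeline = (
    pd.concat([police_records, education_records], ignore_index=True)
      .assign(event_date=lambda df: pd.to_datetime(df["event_date"]))
      .sort_values(["child_id", "event_date"])
)

# A "dynamic report": pull every known event for one child in an instant.
print(timeline[timeline["child_id"] == 101])
```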
CESIUM collates, analyses and reports on all available information, but it does not provide recommendations or make decisions on next steps; that is left to the professionals. Rather, CESIUM learns from past decisions to predict how likely a child is to be referred for intervention. Beyond a risk score, CESIUM provides a timeline of events in a child’s life, the ability to quickly read through volumes of missing statements, and network capabilities for understanding persons and locations of interest around the child. Safeguarding professionals can explore why CESIUM is suggesting a child is vulnerable and then make their own evidence-based decision.
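Again as an illustrative sketch only, the idea of learning from past referral decisions to estimate a likelihood, while leaving the decision itself to professionals, might look something like the following. The features, synthetic data and model choice here are assumptions for the example, not CESIUM’s actual approach:

```python
# Illustrative only: estimating referral likelihood from past decisions.
# Features and data are synthetic; the output is a likelihood and a set of
# inspectable factors, never a decision or recommendation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per child: [missing episodes, school absences,
# known contacts of concern]; label: 1 if a past case led to a referral.
X = rng.integers(0, 10, size=(200, 3))
y = (X.sum(axis=1) + rng.normal(0, 2, size=200) > 12).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new case: professionals can review the likelihood alongside the
# child's timeline and the factors that contributed to it.
new_case = np.array([[4, 6, 2]])
print("referral likelihood:", model.predict_proba(new_case)[0, 1])
print("feature weights:", model.coef_[0])
```

The point mirrored from the text is that the model surfaces a likelihood and explorable context, and the evidence-based decision stays with the safeguarding professional.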
What are the results, and what can CESIUM achieve longer-term?
A preliminary validation of CESIUM in 2022 confirmed the identification of 16 vulnerable children in Lincolnshire, up to six months before they were found using existing processes. Further analysis in 2023 proactively identified three children for pre-screening risk assessment who would not otherwise have been identified. This data gives insight into the difference CESIUM could make in the months and years to come, and this is just version 1: the team at Trilateral are already developing and testing additional functionality to support safeguarding efforts.
The impact on operational capacity represents a monumental shift in how officers will be able to spend their time. Rather than ‘at-desk’ administrative work, CESIUM gives officers an unmatched level of efficiency, with a 400% capacity increase. The solution can do in 20 minutes what currently takes a team of five people five days to achieve. This decrease in admin means one thing: more time for safeguarding professionals to spend with the children most at risk of exploitation.
CESIUM is currently being rolled out as a fully integrated system in Lincolnshire Police. As we seek to implement CESIUM across the UK, with full Partnership uptake, the impact on safeguarding efforts will be revolutionary.
If you’d like to find out more about CESIUM and the groundbreaking work at Trilateral, get in touch.