Artificial intelligence (AI) and predictive algorithms used to be the science-fiction technologies of the future. Now they're prevalent in our everyday lives, transforming the way we live, whether we know it or not.

With applications in health care, finance, transportation, security, market research and social media, AI is a wide-ranging tool that often works in the background, helping us process, translate and apply information to improve the way we do things.

The promise of the next generation of digital tools is powerful. Indeed, for areas of social good, like health and public safety, it has the potential to be the ultimate social-equality tool: if we can predict where and when harm might occur, taking care of people becomes a whole lot easier.

Technical Safety B.C. uses a proprietary predictive algorithm that supports clients and safety officers in reducing the number of safety incidents in the province.

As a regulator overseeing safe installation and operation, Technical Safety B.C. assesses equipment and systems that over four million British Columbians use daily: from SkyTrain to electrical systems in condo buildings, hot-water boilers in schools, elevators in malls and more.

Five years ago, we asked how we could improve our processes to reduce the number of incidents and locate hazards before they cause harm.

We realized quickly that the relationships between our clients and our safety officers are crucial to informing better safety behaviour, whether through educating clients on best practices, being a resource to answer questions or working with them to correct the hazards we find.

To better locate safety hazards, we developed an in-house computer algorithm known as the Resource Allocation Program (RAP). This algorithm uses permit and inspection data from our own safety officers and a simple model to prioritize work for safety officers, focusing on the areas where the potential risks are highest.
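RAP's internals are proprietary and have not been published, but a minimal sketch can illustrate the kind of risk-based prioritization described here. Everything in it, including the field names, weights and scoring rule, is a hypothetical stand-in, not Technical Safety B.C.'s actual model.

```python
# A minimal, hypothetical sketch of risk-based work prioritization.
# Field names, weights and the scoring rule are illustrative stand-ins;
# RAP's actual model is proprietary.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    permit_count: int   # recent permits filed for work at the site
    past_hazards: int   # hazards found during prior inspections

def risk_score(site: Site) -> float:
    """Deliberately simple model: weight past findings more heavily
    than permit volume. A real system would learn weights from data."""
    return 0.7 * site.past_hazards + 0.3 * site.permit_count

def prioritize(sites: list[Site]) -> list[Site]:
    """Order sites so safety officers see the highest-risk ones first."""
    return sorted(sites, key=risk_score, reverse=True)

sites = [
    Site("condo electrical room", permit_count=2, past_hazards=5),
    Site("school hot-water boiler", permit_count=8, past_hazards=1),
    Site("mall elevator", permit_count=1, past_hazards=0),
]
for s in prioritize(sites):
    print(f"{s.name}: risk score {risk_score(s):.1f}")
```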

As part of RAP's machine-learning process, every time our safety officers assess the work and equipment on a site, they record their findings as a data point that the algorithm then uses to adjust or confirm its safety prediction. This continuous feedback loop, through which the model adapts to the latest information just as people do, is why it's called machine learning.
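To make the feedback loop concrete, here is one minimal way such an update could work. This is an assumption for illustration only: RAP's actual update rule is not public, and the learning rate and starting estimate below are invented.

```python
# Hypothetical sketch of a prediction feedback loop; not RAP's actual rule.
LEARNING_RATE = 0.2  # invented value: how strongly one inspection moves the estimate

def update_estimate(current: float, hazard_found: bool) -> float:
    """Nudge a site's predicted hazard probability toward what the
    safety officer actually observed: up if a hazard was found, down if not."""
    observation = 1.0 if hazard_found else 0.0
    return current + LEARNING_RATE * (observation - current)

estimate = 0.5  # start with no strong belief either way
for hazard_found in [True, True, False, True]:  # results of successive inspections
    estimate = update_estimate(estimate, hazard_found)
    print(f"updated hazard estimate: {estimate:.2f}")
```

Each pass through the loop plays the role of one inspection: the prediction drifts toward what officers actually observe on site, which is the adjust-or-confirm cycle described above.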

Machines are tools to help humans. And we use machine learning in exactly that way. As a tool.

We have since developed new models for RAP using the latest machine-learning technology and have seen them adapt even more quickly to reflect emerging risks. Our teams continually test potential improvements, and those tests showed that machine learning improved the algorithm's prediction of high-hazard electrical sites by 80 per cent.
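The article does not specify which metric improved by 80 per cent. One plausible reading is precision: of the sites a model flags as high-hazard, the fraction that actually turn out to have hazards. The sketch below, with invented site names and outcomes, shows how such a comparison could be computed.

```python
# Hypothetical comparison of two models' high-hazard predictions.
# Site names and outcomes are invented; "precision" is only one of several
# metrics the reported 80 per cent improvement could refer to.
def precision(flagged: list[str], actual_hazards: set[str]) -> float:
    """Fraction of flagged sites that really had hazards."""
    if not flagged:
        return 0.0
    return sum(1 for site in flagged if site in actual_hazards) / len(flagged)

actual = {"site A", "site C", "site D"}         # invented ground truth
simple_model = ["site A", "site B", "site E"]   # simple model's picks
ml_model = ["site A", "site C", "site E"]       # machine-learning picks

p_simple = precision(simple_model, actual)
p_ml = precision(ml_model, actual)
print(f"simple model precision: {p_simple:.2f}")
print(f"ML model precision:     {p_ml:.2f}")
print(f"relative improvement:   {(p_ml - p_simple) / p_simple:.0%}")
```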

But what if machines behave badly?

Recent media reports of rogue algorithms show that, left unmonitored, machine learning can recreate and reinforce biases and cause undue harm.

We think algorithms working for the public good should meet an extremely high ethical standard: they must be rigorously accurate and free from undue bias, and the people using them as tools must have a role in designing protections that reflect society's values. In other words, there must be a "human-centred" approach.

When we first introduced the concept of using algorithms and machine learning to help with our decision-making, our own employees voiced concerns: that the new automated prediction algorithm could create privacy issues, that it might miss or misjudge risks, or that reliance on algorithms might displace human jobs.

To address these issues, Technical Safety B.C. employees worked with Generation R, a local AI ethics consultancy, to introduce an ethics roadmap that lays out a framework for using data and advanced algorithms to expand the safety system in B.C. This is leading-edge work, and it has even drawn the interest of a UN committee on the use of machine learning.

As humans, we are all challenged to work in a world of ever-changing complexity. For Technical Safety B.C., our clients and stakeholders are moving forward, adopting digital solutions in their own businesses and facilities. To match their rigour, we must be proactive in seeking solutions that reduce the number of technical safety hazards and improve how we mitigate risks. Embracing technology and leveraging the latest AI tools to better assist our safety officers and clients is the natural next step in delivering our safety services.

Artificial intelligence is often met with hesitation. Certainly, its implementation requires care to ensure it is adopted smoothly into an organization's workflow. But applied thoughtfully, this technology can give an organization the tools to better connect to its purpose: with accuracy, efficiency and a moral code.

This article by Catherine Roome, former president and CEO of Technical Safety BC, appeared in the Vancouver Sun.