Can ChatGPT help your workplace build health and safety policies?
ChatGPT has become an extremely popular tool worldwide for its ability to handle a multitude of tasks quickly and easily. This AI chatbot can answer questions on virtually any topic, write documents in human-like language, crunch data, and much more. It spits out answers that look professional, authoritative and convincing.
So should time-crunched health and safety managers and small business employers turn to it for health and safety information or help writing policies, programs, procedures and more? “On the surface, it seems like an easy way to get everything from one source,” says WSPS Consultant Kristin Onorato.
But here’s the problem. “While ChatGPT’s answers look convincing, they may not stand up to closer scrutiny,” says Kristin. A recent study found that although some of the safety-related information it produced was sound, other information was oversimplified, had gaps, inconsistencies, inaccuracies or was completely wrong.1 OpenAI, the tool’s creator, admits, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers."
Relying on information that may not be accurate is a huge risk in a field where human lives are at stake, and where employers must meet regulatory standards, says Kristin. “If you ask for a banana bread recipe, you’ve got little to lose if the information is flawed. That’s not the same here.”
Kristin decided to put ChatGPT to the test by asking it three health and safety questions and looking at the quality of its responses. Read on for the results.
Three safety questions for ChatGPT
Kristin asked three random questions:
1. What are the heat stress requirements under B.C.’s Health and Safety Regulation? ChatGPT responded that there are no requirements – that heat stress is only covered by the general duty clause to take all reasonable precautions to protect workers. “That’s completely wrong,” says Kristin. “B.C. has changed its regulation and has one of the strictest hot weather programs in Canada.”
So why didn’t ChatGPT pick that up? It turns out this language model has no knowledge of events after 2021 if you are using the free service. “So if you are using it to look up legal requirements about health and safety in Ontario or Canada, it could very well be out of date,” notes Kristin.
2. What are the guarding requirements for a pedestal grinder? Kristin quickly realized that the answer ChatGPT gave was based on U.S. information. “Another user might not have spotted this and, after implementing the advice, might not have met Ontario’s regulatory requirements or passed an inspection by the Ministry of Labour, Immigration, Training and Skills Development (MLITSD).”
3. What are the health and safety posting requirements in Ontario? The chatbot identified the following:
   - ‘Health and Safety at Work – Prevention Starts Here’ poster
   - a copy of the Occupational Health and Safety Act (OHSA) and regulations
   - Workplace Violence and Harassment Policy
   - Emergency Contact Information
   - WSIB poster
   - the contact information and responsibilities of the joint health and safety representative or Joint Health and Safety Committee (JHSC) should be posted
Sounds good? Actually, some critical information is missing, which could put your workplace out of compliance, says Kristin. For example:
- workplaces must also post their health and safety policy
- the OHSA must be posted in the primary languages of the workplace
- “the JHSC information must be posted, not should,” says Kristin, “and must also include the names and work locations of committee members.”
- other items that must be posted are MLITSD orders, notices of compliance and a summary of claims from the WSIB, if it is requested
“This is a good example of a ChatGPT response being ‘close’ but not exactly right,” says Kristin, “which is a significant concern.”
No sources listed
“Health and safety advice and information from reputable bodies, such as the MLITSD, WSPS, or the Canadian Centre for Occupational Health and Safety (CCOHS), is based on scientific evidence, the latest research and proven best practices so we know it is accurate, up-to-date, and trustworthy,” says Kristin.
But it’s impossible to determine where ChatGPT safety information comes from because it doesn’t list its sources. “Users have no way of knowing whether it’s trustworthy,” says Kristin.
With all its drawbacks, is it possible to use this AI tool in a way that doesn’t put the health and safety of employees at risk? “If you're going to use it at all, use it as a starting point,” suggests Kristin. “Verify the information, using the many reputable health and safety resources that are available. Ask subject matter experts to review and make changes where necessary.”
How WSPS can help
Check out our Building Blocks Solution
Connect with one of our experts (instead of a chatbot) to help you build your health and safety policies and programs.
Get customized policies, programs, procedures, forms and checklists at an affordable price.
• Incident Injury Illness Reporting, Investigation and Analysis
• Hazard Identification, Risk Assessment and Control
• Hazard Reporting
• First Aid
• Emergency Preparedness and Response
• Return to Work
Each “Building Block” can be purchased alone or as a bundled solution with other blocks, related training, and/or consulting time.
The information in this article is accurate as of its publication date.
1 The risks of using ChatGPT to obtain common safety-related information and advice, Safety Science, Volume 167, November 2023, 106244 https://doi.org/10.1016/j.ssci.2023.106244