“The fabric of the future of work”
- Safety-related AI offerings include continuous observation, deeper insights and real-time alerts for both employers and workers.
- Employers can help alleviate concerns over privacy and other issues workers may have by giving them a “seat at the table” when these technologies are introduced, as well as being transparent about the use of AI.
- AI requires the expertise of safety professionals to ensure it’s effective and continues to capture useful data.
Artificial intelligence already is part of our everyday lives: in our web searches, in our interactions with digital assistants, and even helping us decide what movies and TV shows to watch.
In the world of worker safety, AI is providing “great opportunities.” That’s according to Jay Vietas, chief of the emerging technologies branch of the NIOSH Division of Science Integration.
“Not only will it be in the fabric of the future of work, but it’s going to be in the fabric of solutions to the future of work as well,” Vietas said during a webinar hosted by the agency in June. Some of the benefits AI is providing to the safety field: deeper insights, continuous observations and real-time alerts to help employees avoid unsafe situations and organizations respond to incidents more quickly.
Experts say making use of AI requires collaborative efforts between safety professionals and other departments, namely information technology, to ensure transparency as well as alleviate privacy concerns and other issues workers may have.
“Our recommendation is, basically, try to understand AI and try to see how it can work for you,” said Houshang Darabi, a professor at the University of Illinois Chicago and co-director of the occupational safety program at the school’s Great Lakes Center for Occupational Health and Safety.
Some uses of AI
AI is defined as the use of computers and/or machines to try to replicate human decision-making, problem-solving and other abilities.
“AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data,” according to software company SAS.
Subsets of AI include machine learning, neural networks, computer vision and natural language processing.
One safety-related example is the use of cameras that can detect whether workers are wearing their personal protective equipment. Specifically, the devices can monitor employees who are working at height and need to be wearing harnesses. Not only can the cameras detect whether the workers are wearing their harnesses, but they also can identify if the PPE is tethered, said Donavan Hornsby, corporate development and strategy officer with Benchmark Digital Partners and the Benchmark ESG digital platform.
During a technical session at the 2021 NSC Safety Congress & Expo, Hornsby and Dave Roberts, vice president of environmental, health and safety at The Heico Cos., offered other examples of tasks that AI-enabled cameras can perform. These include tracking interactions between workers and machinery, monitoring the status of machine guarding, checking if workers are in or outside of designated areas, and performing ergonomic assessments. The devices also can be paired with sensors or wearables that are attached to hard hats, vests or other items.
That continuous eye on workers means that safety pros don’t have to rely solely on observations, walkarounds or inspections to ensure workers are wearing PPE or to identify other safety issues.
“Instead of depending on one person doing their round once a shift or once a day,” Hornsby said, “what if the cameras are always looking and that person can now spend time working on more value-added activities?”
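The always-on monitoring described above can be thought of as a rule layer sitting on top of a vision model’s per-frame detections. The sketch below illustrates that idea only; the label names, worker IDs and alert logic are hypothetical, not taken from any vendor’s product.

```python
# Hypothetical rule layer over an AI camera's per-frame detections.
# Each detection is a dict like {"label": "harness", "worker_id": 1}.

def check_frame(detections):
    """Return alert strings for any worker at height missing or untethered PPE."""
    alerts = []
    workers = sorted(
        d["worker_id"] for d in detections if d["label"] == "worker_at_height"
    )
    for wid in workers:
        labels = {d["label"] for d in detections if d["worker_id"] == wid}
        if "harness" not in labels:
            alerts.append(f"worker {wid}: no harness detected")
        elif "harness_tethered" not in labels:
            alerts.append(f"worker {wid}: harness not tethered")
    return alerts

frame = [
    {"label": "worker_at_height", "worker_id": 1},
    {"label": "harness", "worker_id": 1},
    {"label": "worker_at_height", "worker_id": 2},
]
print(check_frame(frame))
```

In a real deployment the heavy lifting happens in the vision model; the point here is that the safety rules themselves stay simple and auditable.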
Heat mapping and fatigue monitoring
Cameras and/or sensors and wearables also have the ability to generate heat maps, which can show where high-risk activities are taking place in a facility.
It’s important, Hornsby noted, to layer that data with operational data for greater knowledge and analysis.
“Then you have this kind of multilayer perspective on risk: high-risk operations, people that are working long hours, high concentrations of activity,” he said.
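The layering Hornsby describes can be sketched as a simple join of location “heat” (activity counts) with operational data such as shift length. The area names, numbers and multiplicative weighting below are illustrative assumptions, not a real risk model.

```python
# Illustrative layering of heat-map activity counts with operational data.
# All figures and the simple multiplicative weighting are assumptions.

activity = {"press line": 40, "loading dock": 25, "paint booth": 10}   # events observed
avg_shift_hours = {"press line": 10, "loading dock": 8, "paint booth": 12}

# Combine the layers into a single relative-risk score per area.
risk = {area: activity[area] * avg_shift_hours[area] for area in activity}
ranked = sorted(risk, key=risk.get, reverse=True)
print(ranked[0])  # area with the highest combined score
```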
For employees working long hours, Hornsby said certain AI-enabled programs can help measure cognitive impairment. That can come in the form of, say, a 30-second visual puzzle. With an established baseline for each worker, the cognitive screening can test personnel before each shift.
“They can get a sense of whether or not they’re cognitively impaired,” Hornsby said, “which may have been a result of working too long of a shift the day before or not getting enough sleep or personal issues, or whatever the case might be.”
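The per-worker baseline is the key idea in that kind of screening: today’s result is compared against the same worker’s own history, not a plant-wide average. A minimal sketch, with the 20% slowdown threshold as an illustrative assumption:

```python
# Sketch of a pre-shift cognitive screen: compare today's puzzle time
# against the worker's own established baseline. The 20% threshold is
# an illustrative assumption, not a validated cutoff.

def screen_result(baseline_seconds, todays_seconds, threshold=0.20):
    """Flag a worker whose completion time slows beyond the threshold."""
    slowdown = (todays_seconds - baseline_seconds) / baseline_seconds
    return "flag for review" if slowdown > threshold else "cleared"

print(screen_result(baseline_seconds=24.0, todays_seconds=31.0))  # ~29% slower
```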
Natural language processing
Natural language processing can help safety pros by taking data sets from incident reports, observations and inspections, among other items, and finding insights within them.
Reading over hundreds or thousands of reports, and potentially millions of words, is a time-consuming task for people. Deriving insights from all of that data takes even more time or bandwidth.
In addition, reports may be loosely written narratives or contain unstructured data. Natural language processing has the ability to take those reports and find patterns – such as near misses or incidents happening at certain times or in certain areas of a facility.
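Production natural language processing uses trained language models, but the pattern-finding idea can be shown with a keyword-based stand-in: scan free-text narratives and count where incidents cluster. The report texts and location list below are invented for illustration.

```python
# Keyword-based stand-in for NLP pattern-finding across unstructured
# incident narratives. Reports and locations are invented examples.
from collections import Counter
import re

reports = [
    "Near miss at loading dock during night shift: forklift reversed without spotter.",
    "Slip reported near loading dock, night shift, wet floor not coned off.",
    "Guard rail loose on mezzanine, day shift, no injury.",
]

locations = ["loading dock", "mezzanine", "warehouse"]
counts = Counter()
for text in reports:
    for loc in locations:
        if re.search(loc, text, re.IGNORECASE):
            counts[loc] += 1

print(counts.most_common(1))  # location mentioned most often
```

A trained model would also catch paraphrases (“the dock by receiving”) that exact keyword matching misses, which is why real systems go beyond this sketch.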
During the NIOSH webinar, experts highlighted another use of natural language processing: automated coding of workers’ compensation claims.
One part of natural language processing is sentiment analysis. When programmed correctly, that analysis can identify certain words and phrases that denote feelings or attitudes.
During their Congress & Expo session, Hornsby and Roberts highlighted the use of sentiment analysis on safety culture surveys for better insights. That capability, in turn, lets surveys pose open-ended questions instead of, say, multiple-choice or “prescriptive” answers.
“What has always lacked in those scenarios is the ability to talk openly and freely about what workers are seeing and what they’re experiencing,” Hornsby said.
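At its simplest, sentiment analysis scores the feeling-laden words in a free-text answer. Real systems use trained language models; the toy word-list scorer below, with invented lexicons, only illustrates the scoring idea.

```python
# Toy lexicon-based sentiment scorer for open-ended survey answers.
# The word lists are invented; production systems use trained models.

POSITIVE = {"safe", "supported", "listened", "clear"}
NEGATIVE = {"rushed", "ignored", "unsafe", "afraid"}

def sentiment(answer):
    """Classify one answer as positive, negative or neutral."""
    words = set(answer.lower().replace(",", " ").replace(".", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I feel rushed and ignored when I raise concerns."))
```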
How to start
Many employers likely already have AI deployed in their business practices, Vietas said, adding that it’s important to understand how the basic concepts of AI may help strengthen workplace safety and health. Naturally, each organization or location within it may have different needs and may have to figure out which programs might work best.
Before employers introduce new technology, though, Hornsby recommends first identifying which safety issues need addressing and then determining which technologies could aid the organization and its employees.
“I think there’s the problem with a lot of people is they fall in love with the technology, this whole ‘shiny object’ syndrome,” Hornsby said. “We need to first think about what the problems are and decide whether or not there’s a technology that can help us.”
Another piece of advice from Hornsby: Start small – a proof of concept or pilot program, for example – with one safety issue, instead of trying to tackle a larger one or multiple problems.
“Figure out something that you can get your arms around, get quick buy-in, see what happens,” he said. “Let it inform what you might do on a broader scale. Once you prove the value, improve the concept.”
He pointed out that many technologies can work with existing equipment, such as a facility’s closed-circuit cameras.
Privacy and other concerns
A continuous eye on workers from cameras or wearables will likely raise concerns over privacy and data security. One of the best ways to address these concerns, Hornsby said, is to allow workers to have a “seat at the table” when developing AI strategies and considering any related issues.
“I don’t know how you would build trust if you’re just dumping a technology out there and asking folks to ‘trust us on this,’” he said. “If they have a seat at the table, then they understand the motivations and they understand the objectives. Ultimately, organizations are just trying to find a way to keep people safe.”
Management and safety leaders should remain upfront and open while implementing new technologies. AI shouldn’t be a “black box,” Vietas said during the webinar. It should be “transparently implemented.”
Expertise still needed
With the introduction of new technology in the workplace, a common – and sometimes justifiable – fear among workers is that it’ll make their jobs expendable. However, the experts say that’s not likely to happen with AI and safety pros.
The use of AI requires experts to guide it and keep it on track. During the webinar, Darabi offered the example of AI being used in the mining industry.
“You need people who understand mining and understand the hazards,” he said. “They understand the situations where unsafe events could happen. You need the workers to tell you how they feel and how they can use the technology. So, in order for an AI system to work, you can’t just bring a programmer in and expect everything to work.”
Hornsby pointed out that “management of change” is a big factor in ensuring an AI program continues to gather the right data. An organization might add a new shift, new personnel, new processes or expand its plant.
“All these things introduce new variables that the AI machine learning is going to have to factor in,” he said. “In a lot of cases, it’s going to take humans to help influence that.”
Vietas said AI likely will prove “very complementary to most, if not all, jobs” in the future. Therefore, it’s important to have an understanding of how to employ it and improve “the human outcomes that we’re all interested in,” whether it’s in a corporation or in society.
“It’s a tool,” he added. “It’s just another tool for humans to use.”