By Jason Morgan

So you’ve decided to use your truck telematics to help you improve operational safety and inform driver training. Great! And then your phone explodes with notifications—one driver had harsh braking, following distance and lane departure warning incidents in rapid succession, and you fear the worst. What actually happened, and how do you approach the conversation? Just looking at notifications, it’s hard to say. Was your driver not paying attention? Did a car cut off your truck?

Answering all of these questions comes down to context—what actually happened during the event in question? That’s what video telematics systems show you, and they’re getting smarter. The latest in-cab video system updates leverage artificial intelligence to better identify other vehicles, pedestrians and even your driver’s behavior.

You don’t have to watch the dailies

Artificial intelligence engines identify potential safe driving infractions and aim to reduce the amount of video footage you have to parse through.

“One of the biggest challenges we see in fleets adopting Integrated Video is they’re worried that a video product is going to create masses of extra work for them, including time spent sifting through footage,” said Stewart Wright, global product success, Verizon Connect. “In the design process, we have developed a solution that does the hard work for them by identifying, analyzing and presenting meaningful footage in the Reveal platform, based on harsh driving events. The system proactively presents footage from these events, ultimately helping reduce fleet manager workload while enabling continuous safety improvements to be made.”

In Verizon Connect’s case, as the fleet manager, you also provide feedback on how you reacted to the video, helping train the A.I. to better identify the instances you feel are safety-critical.

“As we collect their feedback, the A.I. engine uses machine learning to continually refine what is important to our customers and presents this in the platform,” Wright said. “In terms of new offerings on the platform, we are looking to provide more context to our customers for each event to allow for more informed decision-making to help improve driver safety.”
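To make the feedback loop Wright describes concrete, here is a toy sketch—my own illustration, not Verizon Connect's implementation—of how manager feedback could re-rank which event types get surfaced first. The function names and weighting scheme are assumptions for illustration only.

```python
# Toy sketch of feedback-driven event ranking (assumed design, not a vendor's actual system).
from collections import defaultdict

feedback_score = defaultdict(float)  # event type -> learned importance weight

def record_feedback(event_type: str, marked_safety_critical: bool) -> None:
    """Nudge an event type's weight up or down based on a manager's review."""
    feedback_score[event_type] += 1.0 if marked_safety_critical else -0.5

def rank_events(event_types: list[str]) -> list[str]:
    """Present the event types managers have cared most about first."""
    return sorted(event_types, key=lambda t: feedback_score[t], reverse=True)

# A manager repeatedly flags harsh braking as safety-critical but dismisses idling:
record_feedback("harsh_braking", True)
record_feedback("harsh_braking", True)
record_feedback("idling", False)
print(rank_events(["idling", "harsh_braking"]))  # harsh_braking now ranks first
```

The point is simply that each review nudges a weight, so the platform's ordering drifts toward what that particular fleet considers important.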

This is a trend that continues to grow in the video telematics space. Lytx, which has been developing its A.I. technology for more than a decade, recently announced that its machine vision and artificial intelligence (MV+A.I.)-powered technology can detect risky behaviors and distractions and alert drivers and fleets in real time. Lytx put the system to work in and out of the cab to identify risks. Using the system, it found:

  • No seat belt — identified 397% more often.
  • Unsafe following distance — identified 332% more often.
  • Incomplete stop — identified 243% more often.
  • Food or drink — identified 173% more often.
  • Failure to stop — identified 188% more often.
  • Handheld device — identified 133% more often.

“Artificial intelligence is able to make decisions about what it sees to call out safety concerns more effectively,” said Del Lisk, vice president of safety services at Lytx. “In the old world, you’d only capture video during a hard braking or swerving event, for example, but in the new world with artificial intelligence, it’s able to identify things like stop signs or that the truck continues going 9 MPH and doesn’t stop at the sign—and it captures video of that event for a coaching opportunity.

“In the cab,” Lisk continued, “we know how dangerous cell phones and driving are, but there are still some people who do that unfortunately. So the artificial intelligence can identify an object up by the driver’s head, and then looking at previous data, it determines that it’s 95% confident that the object is a phone. The system captures the video, issues a real-time in-cab alert for the driver to react to and flags it for the customer to take additional training action if needed.”
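The decision flow Lisk describes can be sketched as a simple confidence threshold: if the model is confident enough that the object near the driver's head is a phone, the event triggers video capture, an in-cab alert and a coaching flag. This is an illustrative sketch of the general pattern, not Lytx's actual code; the class names, labels and 95% threshold are assumptions drawn from his example.

```python
# Illustrative confidence-threshold alerting, loosely modeled on the in-cab
# phone-detection flow described in the article. All names are assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # assumed minimum confidence before acting

@dataclass
class Detection:
    label: str         # e.g. "handheld_device"
    confidence: float  # model confidence, 0.0 to 1.0

def handle_detection(det: Detection) -> list[str]:
    """Return the actions triggered by a single in-cab detection."""
    actions = []
    if det.label == "handheld_device" and det.confidence >= CONFIDENCE_THRESHOLD:
        actions.append("capture_video")      # save footage of the event
        actions.append("in_cab_alert")       # real-time alert the driver can react to
        actions.append("flag_for_coaching")  # surface the clip to the fleet manager
    return actions

print(handle_detection(Detection("handheld_device", 0.97)))
# A lower-confidence sighting triggers nothing, avoiding nuisance alerts:
print(handle_detection(Detection("handheld_device", 0.60)))
```

The threshold is the interesting design lever: set it too low and drivers learn to ignore false alarms; set it too high and real phone use slips through.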

This is just the beginning. As algorithms become more complex and artificial intelligence becomes (for lack of a better word) smarter, insight into your drivers’ behavior and habits will become more actionable. Chris Orban, vice president of data science at Trimble, explains:

“The coolest thing for me that’s going on with A.I. and video is moving past object detection and behavior detection and moving toward action detection,” he said.

But what does that mean? “Action detection is, for me, what the human brain does super well and computers do poorly a lot of the time. So if you see a video of a car cutting off a truck, the human brain thinks, ‘Oh, that guy’s a jerk. He cut the truck off.’

“If you show that same video to a computer, most of the time the analysis is frame by frame. It says, ‘Nothing’s in front of me. Nothing’s in front of me. Nothing’s in front of me,’ and then, ‘Something’s in front of me’ and that’s all of the information you get. Action detection is where we’re moving past that. Instead of analyzing data frame by frame, we analyze and compare data across a time horizon. So now we see that everything was clear, the truck was maintaining the proper following distance and then something imposed itself in front of the truck, rather than the truck running up on the car in front of it.”
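The frame-by-frame versus time-horizon distinction Orban draws can be illustrated with a short sketch. This is not Trimble's algorithm—the sampling rate, threshold and function names are assumptions—but it shows the idea: a sudden collapse in following distance suggests another vehicle cut in, while a gradual decline suggests the truck was closing on traffic ahead.

```python
# Illustrative action detection over a time horizon (assumed design, not
# Trimble's implementation): classify how a following-distance gap changed.

def classify_gap_change(gaps_m: list[float], sudden_drop_m: float = 15.0) -> str:
    """gaps_m: following distance in meters, one sample per frame (assumed 1 Hz).

    A single-frame drop of sudden_drop_m or more means something imposed
    itself in the gap; a gradual shrink means the truck ran up on traffic.
    """
    for prev, curr in zip(gaps_m, gaps_m[1:]):
        if prev - curr >= sudden_drop_m:
            return "cut_off"          # another vehicle jumped into the gap
    if gaps_m and gaps_m[-1] < gaps_m[0]:
        return "truck_closing"        # gap shrank steadily over the horizon
    return "clear"

print(classify_gap_change([40, 41, 40, 18, 17]))   # → cut_off
print(classify_gap_change([40, 36, 31, 27, 24]))   # → truck_closing
```

A frame-by-frame system sees only "something is now in front of me" in both cases; comparing samples across the horizon is what separates "that driver cut the truck off" from "the truck was following too closely."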

The difference is obvious to you and me, but the technology needs to be taught. As A.I. starts to understand the nuances in the video data, Orban said that recommendations as to what to do next will become more actionable.

“Action detection leads us to algorithms that can then recommend the right coaching plan,” he said. “Right now, we have a coaching platform and fleet managers can go in and select the coaching plan they want. But we want to apply A.I. to the data to be able to say, ‘This driver action you’re coaching is actually following too closely plus failure to maintain their lane.’ We, as Trimble, know the coaching plan fleets tend to choose when this action occurs. So if we can detect the action and we know the proper countermeasure, now we can actually coach an entire fleet of 500, 1,000 or 5,000 vehicles every single day before the driver starts driving again.”
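Orban's example—detected behaviors mapping to a known countermeasure—amounts to a lookup from a combination of behaviors to the coaching plan fleets tend to choose. The sketch below is hypothetical; the behavior names and plan catalog are my own stand-ins, not Trimble's taxonomy.

```python
# Hypothetical mapping from detected behavior combinations to coaching plans,
# in the spirit of the example in the article. Names are illustrative only.

COACHING_PLANS = {
    frozenset({"following_too_closely", "lane_departure"}):
        "following-distance + lane-discipline module",
    frozenset({"following_too_closely"}): "following-distance module",
    frozenset({"lane_departure"}): "lane-discipline module",
}

def recommend_plan(detected_behaviors: set[str]) -> str:
    """Pick the most specific plan whose behaviors were all detected."""
    best = max(
        (key for key in COACHING_PLANS if key <= detected_behaviors),
        key=len,
        default=None,
    )
    return COACHING_PLANS[best] if best else "no plan matched; manual review"

# A composite action gets the composite plan, not two separate ones:
print(recommend_plan({"following_too_closely", "lane_departure"}))
```

Preferring the most specific match is what lets "following too closely plus failure to maintain their lane" trigger a combined plan rather than two generic ones—and once the mapping exists, it can run fleet-wide every day.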

Truck, camera, action!

It’s exciting to hear where video telematics systems are going, but don’t assume that A.I. replaces good old-fashioned human interaction. Video telematics makes it easier for you to get at the data you need to take action, but that action still has to be very human. Remember, the end goal of implementing a truck video telematics system is improving driver safety—and that means impacting behavior. If anything, putting a high-tech video system to work means you have to be even more understanding, engaging and patient. But stay focused on the improvement metrics: seeing your entire fleet’s safety score improve will make it all worth it.