
Q4 2021 Online Hackathon: Identifying Downed Power Lines

From our online hackathon, read about Team Brazuca's journey using machine learning to build a model that can identify downed power lines.


In February, a historic ice storm hit Oregon. Many were without power for days, with temperatures near or below freezing. Personally, I was without power for about four days. The most frustrating part was checking on the status of when power would be restored. The story I heard was that because of the extent of the damage (downed trees and lines) and the treacherous roads, the utility company couldn’t identify all the problems and was taking them one at a time.

I know the hard-working people at the utility did everything they could, even bringing in linemen from other states to help restore power. Even so, I wondered: could there be a better way to detect downed lines, identify the damage, and work through the outage in a more organized way?

For this hackathon, I wanted to see how hard it would be to do this. Imagine, if you would, a fleet of drones staged at district locations and used to fly paths surveying neighborhoods or trunk lines. That video is processed at the edge or back at a facility to alert utilities to problems, without needing to wait for roads to thaw or be cleared.

What we did:

In two days, we needed to take video, label it with the objects we wanted to detect, and then automatically train a model to detect those objects.

  • Our team found videos from different YouTube channels showing neighborhoods and storm-damaged utilities. 
  • We split those videos into individual images, at about one image per second (a rough sketch of this step follows the list). 
  • Next, we used Amazon SageMaker Ground Truth to label the images for utility poles, power lines, damaged power lines, and damaged utility poles. 
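
To make the frame-splitting step concrete, here is a minimal Python sketch using OpenCV; the paths and file names are placeholders rather than our actual pipeline.

```python
# Minimal sketch: pull roughly one frame per second out of a video with
# OpenCV. Paths and file names are illustrative placeholders.
import cv2

def extract_frames(video_path: str, out_dir: str) -> int:
    """Save about one frame per second of video_path into out_dir."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    step = max(1, int(round(fps)))
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:  # keep ~1 frame per second
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved
```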

Last, we used Rekognition Custom Labels to process the images and train a model that could flag those objects in video.
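
To give a sense of what the inference side looks like, here is a hedged boto3 sketch, not our exact code, of scoring a single frame stored in S3 with a trained Custom Labels model; the region, bucket, and model ARN are placeholders.

```python
# Sketch of scoring one frame with a trained Rekognition Custom Labels model.
# The region, bucket, key, and project version ARN are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

def detect_damage(bucket: str, key: str, model_arn: str, min_confidence: float = 50.0):
    """Return (label, confidence) pairs, e.g. ('damaged power line', 87.3)."""
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return [(label["Name"], label["Confidence"]) for label in response["CustomLabels"]]
```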

[Screenshot: power lines with scores from the machine learning model]

What we learned:

I had done some machine learning (ML) work at Amazon before with AWS Rekognition, Cognito, and other high-level services, but this was my first experience training a model from scratch. The first lesson from my ML training that this reinforced was how essential, and how time-intensive, good data can be.

  • Random YouTube videos probably weren’t the best data source for training an ML model. We had some success, but the quality and resolution of the objects in the frames were lacking; with more controlled videos or images, we may have had better data to use for our ML model. 
  • I spent roughly five hours labeling images, and other team members put in at least as much time. Part of this was because we tested different Ground Truth configurations to find the best user flow, and not all of those configurations would flow directly into Rekognition Custom Labels. 

The second lesson that was reinforced was that training takes time. With our data, Rekognition Custom Labels training took 3-4 hours, and there is a clear warning that it can take up to 24 hours.
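
For a sense of what kicking off that training looks like in code, here is a minimal boto3 sketch; the project ARN, version name, and bucket are placeholders, and the trained version still has to be started before it can serve inference.

```python
# Hedged sketch: start Custom Labels training and wait for it to finish.
# The project ARN, version name, and bucket are placeholders; with our data
# this step took 3-4 hours.
import boto3

rekognition = boto3.client("rekognition")

PROJECT_ARN = "arn:aws:rekognition:us-west-2:111122223333:project/downed-lines/1"  # placeholder

rekognition.create_project_version(
    ProjectArn=PROJECT_ARN,
    VersionName="downed-lines-v1",
    OutputConfig={"S3Bucket": "my-training-bucket", "S3KeyPrefix": "output/"},
)

# Block until training completes (this can take hours).
waiter = rekognition.get_waiter("project_version_training_completed")
waiter.wait(ProjectArn=PROJECT_ARN, VersionNames=["downed-lines-v1"])
```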

Where we ended up: 

So, with all of that, where did we end up at the end of our second day? 

Well, we had two different models: one trained on 499 images and one trained on 286 images.

The model trained on the larger data set produced zero false positives but many more false negatives when processing videos. The model trained on the smaller data set detected more objects, but it produced false positives and still had false negatives. 

You can see why data was our biggest problem.
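
One way to make that tradeoff concrete is precision versus recall. The sketch below uses made-up counts, not our measured results, just to show why zero false positives isn't the whole story.

```python
# Illustrative only: the counts below are made up, not our actual results.
def precision(tp: int, fp: int) -> float:
    """Of everything the model flagged, how much was real damage?"""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Of all the real damage, how much did the model flag?"""
    return tp / (tp + fn) if (tp + fn) else 0.0

# A model with zero false positives has perfect precision, but if it misses
# most of the damaged lines (false negatives), its recall is still poor.
print(precision(tp=10, fp=0), recall(tp=10, fn=40))   # 1.0, 0.2
print(precision(tp=30, fp=15), recall(tp=30, fn=20))  # ~0.67, 0.6
```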

Conclusion

If we had more time to work on this, I’d take the Ground Truth video data and try to integrate it into YOLO. YOLO (You Only Look Once) is a real-time object detection model that is accurate and fast enough to process video on an edge device, such as one mounted on a drone. 
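
If we had gotten to that step, the conversion might look something like the sketch below. It assumes the standard Ground Truth bounding-box output manifest (JSON Lines) layout; the job name and paths are hypothetical.

```python
# Hedged sketch: convert a Ground Truth bounding-box output manifest into
# YOLO-style .txt label files. The job name and paths are hypothetical, and
# the manifest layout is assumed from the standard bounding-box output format.
import json
from pathlib import Path

JOB_NAME = "downed-lines-job"  # placeholder labeling-job / label attribute name

def manifest_to_yolo(manifest_path: str, out_dir: str) -> None:
    for line in Path(manifest_path).read_text().splitlines():
        record = json.loads(line)
        labels = record.get(JOB_NAME)
        if not labels:
            continue
        size = labels["image_size"][0]
        img_w, img_h = size["width"], size["height"]
        rows = []
        for box in labels["annotations"]:
            # YOLO format: class x_center y_center width height, normalized to [0, 1]
            x_center = (box["left"] + box["width"] / 2) / img_w
            y_center = (box["top"] + box["height"] / 2) / img_h
            rows.append(f'{box["class_id"]} {x_center:.6f} {y_center:.6f} '
                        f'{box["width"] / img_w:.6f} {box["height"] / img_h:.6f}')
        image_name = record["source-ref"].rsplit("/", 1)[-1]
        stem = image_name.rsplit(".", 1)[0]
        Path(out_dir, f"{stem}.txt").write_text("\n".join(rows))
```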

Overall, though, I feel we did enough in two days to prove that this is technically possible.
