A joint project between the University of California, Berkeley, Google Brain, and the Intel Corporation aims to teach robots how to perform sutures — using YouTube.
The AIs we can produce are still limited, but they are very good at rapidly processing large amounts of data. This makes them very useful for medical applications, such as helping diagnose patients in China during the early months of the COVID-19 pandemic. They’re also lending a digital hand in the search for treatments and a vaccine for the virus.
But actually taking part in a medical procedure isn’t something that they’ve been able to pull off. This work takes a step in that direction, showing how deep learning can be applied to automate suturing in the operating room.
The team worked with a deep-learning setup called a Siamese network, built from two or more identical networks that share the same weights. One of their strengths is the ability to assess relationships between pairs of inputs, and they have been used for applications such as language detection, face recognition, and signature verification.
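To make the weight-sharing idea concrete, here is a minimal sketch in plain NumPy (an illustration of the general Siamese technique, not the authors’ Motion2Vec code): both inputs pass through the *same* embedding function with identical weights, and their relationship is judged by the distance between the resulting vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # one weight matrix, shared by both "twin" branches

def embed(x):
    """Shared embedding branch: every input goes through the identical weights W."""
    return np.tanh(W @ x)

def siamese_distance(x1, x2):
    """Euclidean distance between the two embeddings; smaller means more similar."""
    return float(np.linalg.norm(embed(x1) - embed(x2)))

a = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 0.0, 1.0])  # a clearly different input

print(siamese_distance(a, a))  # identical inputs map to the same embedding
print(siamese_distance(a, c))  # different inputs land apart in embedding space
```

In a trained system, the shared weights are learned so that related video segments (here, the same surgical motion) end up close together in embedding space while unrelated ones end up far apart.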
However, training AIs well requires massive amounts of data, and the team turned to YouTube to get it. As part of a previous project, the researchers had tried to teach a robot to dance using videos. They used the same approach here, showing their network footage of actual procedures. Their paper describes how they used YouTube videos to train a two-armed da Vinci surgical robot to insert needles and perform sutures on cloth.
“YouTube gets 500 hours of new material every minute. It’s an incredible repository,” said Ken Goldberg from UC Berkeley, co-author of the paper. “Any human can watch almost any one of those videos and make sense of it, but a robot currently cannot—they just see it as a stream of pixels.”
“So the goal of this work is to try and make sense of those pixels. That is to look at the video, analyze it, and be able to segment the videos into meaningful sequences.”
It took 78 instructional videos to train the AI to perform sutures with an 85% success rate, the team reports. Eventually, they hope, such robots could take over simple, repetitive tasks to allow surgeons to focus on their work.
We’re nowhere near having a fully automated surgery team, but in time, the authors hope to build robots that can interact with and assist doctors during procedures.
The paper “Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos” is available online.