Evaluating robotic-assisted partial nephrectomy surgeons with fully convolutional segmentation and multi-task attention networks.
We use machine learning to evaluate surgical skill from video of the tumor resection and renorrhaphy steps of a robotic-assisted partial nephrectomy (RAPN), extending previous work on synthetic tissue to actual surgeries. We investigate cascaded neural networks for predicting surgical proficiency scores (OSATS and GEARS) from RAPN videos recorded by the da Vinci system. A semantic segmentation network generates masks for, and tracks, the various surgical instruments; the instrument movements recovered from these masks are then processed by a scoring network that regresses GEARS and OSATS scores for each subcategory. Overall, the model performs well on many subcategories, such as GEARS force sensitivity and OSATS knowledge of instruments, but can suffer from false positives and false negatives that would not be expected of human raters, which we attribute mainly to the limited variability and sparsity of the training data.
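The abstract describes a cascaded two-stage design: instrument segmentation followed by multi-task score regression. The sketch below illustrates one plausible shape of that pipeline, assuming instrument centroids extracted from the predicted masks serve as motion features and a small temporal-attention GRU acts as the multi-task scoring head; the class names, feature dimensions, and subcategory list are illustrative, not the authors' actual implementation.

```python
# Minimal sketch of the cascaded pipeline (segmentation -> motion -> scoring).
# All names, dimensions, and the subcategory list are assumptions for illustration.
import torch
import torch.nn as nn

N_INSTRUMENT_CLASSES = 3          # e.g. background, left tool, right tool (assumed)
SUBCATEGORIES = ["depth_perception", "bimanual_dexterity", "efficiency",
                 "force_sensitivity", "autonomy"]  # GEARS-style, illustrative


class SegmentationNet(nn.Module):
    """Tiny FCN stand-in: video frame -> per-pixel instrument class logits."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, N_INSTRUMENT_CLASSES, 1)

    def forward(self, frames):                       # (B, 3, H, W)
        return self.head(self.backbone(frames))      # (B, C, H, W)


def mask_to_centroids(logits):
    """Reduce each predicted instrument mask to a soft (x, y) centroid per frame."""
    probs = logits.softmax(dim=1)                    # (B, C, H, W)
    B, C, H, W = probs.shape
    ys = torch.linspace(0, 1, H).view(1, 1, H, 1)
    xs = torch.linspace(0, 1, W).view(1, 1, 1, W)
    weight = probs / probs.sum(dim=(2, 3), keepdim=True).clamp_min(1e-6)
    cx = (weight * xs).sum(dim=(2, 3))               # (B, C)
    cy = (weight * ys).sum(dim=(2, 3))
    return torch.cat([cx, cy], dim=1)                # (B, 2*C) motion features


class ScoringNet(nn.Module):
    """Multi-task regressor: instrument-motion sequence -> one score per subcategory."""
    def __init__(self, feat_dim=2 * N_INSTRUMENT_CLASSES, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)             # temporal attention weights
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, 1) for name in SUBCATEGORIES})

    def forward(self, motion_seq):                   # (B, T, feat_dim)
        h, _ = self.encoder(motion_seq)              # (B, T, hidden)
        w = self.attn(h).softmax(dim=1)              # attend over time steps
        ctx = (w * h).sum(dim=1)                     # (B, hidden)
        return {name: head(ctx).squeeze(-1) for name, head in self.heads.items()}


if __name__ == "__main__":
    seg, scorer = SegmentationNet(), ScoringNet()
    video = torch.randn(8, 3, 64, 64)                # 8 frames of one clip
    feats = mask_to_centroids(seg(video))            # (8, 6) per-frame motion features
    scores = scorer(feats.unsqueeze(0))              # treat the clip as one sequence
    print({name: float(s) for name, s in scores.items()})
```

One design consequence of such a cascade is that the scoring network never sees raw pixels, only instrument motion, so each per-subcategory regression head can stay small; a segmentation failure, however, propagates directly into the predicted scores, consistent with the false positives and negatives noted above.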
Related Subject Headings
- Surgery
- Surgeons
- Robotic Surgical Procedures
- Nephrectomy
- Laparoscopy
- Humans
- 3202 Clinical sciences
- 1103 Clinical Sciences
- 0801 Artificial Intelligence and Image Processing