Surgical simulation plays a pivotal role in training novice surgeons, accelerating their learning curve and reducing intra-operative errors. However, conventional simulation tools fall short of providing the necessary photorealism and the variability of human anatomy. In response, current methods are shifting towards generative model-based simulators. Yet, these approaches primarily focus on using increasingly complex conditioning for precise synthesis while neglecting fine-grained human control. To address this gap, we introduce SG2VID, the first diffusion-based video model that leverages Scene Graphs for both precise video synthesis and fine-grained human control. We demonstrate SG2VID's capabilities across three public datasets featuring cataract and cholecystectomy surgery. SG2VID not only outperforms previous methods qualitatively and quantitatively, but also enables precise synthesis, providing accurate control over tool and anatomy size and movement, the entrance of new tools, and the overall scene layout. We qualitatively motivate how SG2VID can be used for generative augmentation and present an experiment demonstrating its ability to improve a downstream phase detection task when the training set is extended with our synthetic videos. Finally, to showcase SG2VID's ability to retain human control, we interact with the Scene Graphs to generate new video samples depicting major yet rare intra-operative irregularities.
The paper presents SG2VID, a pioneering diffusion-based video model that utilizes Scene Graphs to provide fine-grained control over video synthesis for surgical simulation, particularly in cataract and cholecystectomy surgeries. It addresses limitations of existing simulation tools, which lack photorealism and fine-grained human control. SG2VID not only improves visual and quantitative performance but also enhances user interactivity by allowing control over various scene components, such as tool movement and anatomical details. The methodology includes a graph encoder trained on the per-frame Scene Graphs of surgical videos to ensure precise synthesis, and the model is validated on publicly available datasets. Experimental results show clear gains in video generation quality over existing models, alongside the ability to generate rare surgical scenarios. The findings have substantial implications for surgical training and phase recognition tasks.
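To make the Scene Graph conditioning idea concrete, the sketch below shows how a per-frame graph could be embedded with GATv2 and pooled into a single conditioning vector for a video diffusion model's cross-attention. The node features, graph layout, and dimensions are illustrative assumptions, not the authors' implementation; it assumes PyTorch and `torch_geometric`.

```python
# Hypothetical sketch: encode a per-frame scene graph with GATv2 and pool it
# into a conditioning vector for a video diffusion model. Feature dimensions
# and the toy graph are illustrative assumptions.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GATv2Conv, global_mean_pool


class SceneGraphEncoder(torch.nn.Module):
    def __init__(self, in_dim=16, hidden_dim=64, cond_dim=128, heads=4):
        super().__init__()
        self.conv1 = GATv2Conv(in_dim, hidden_dim, heads=heads)
        self.conv2 = GATv2Conv(hidden_dim * heads, cond_dim, heads=1)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        # One conditioning vector per frame-level graph.
        return global_mean_pool(h, batch)


# Toy graph: 3 nodes (e.g., tool, anatomy, background) with 16-d attributes
# such as bounding-box position/size and a class embedding.
graph = Data(
    x=torch.randn(3, 16),
    edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]]),
)
encoder = SceneGraphEncoder()
cond = encoder(graph.x, graph.edge_index, torch.zeros(3, dtype=torch.long))
print(cond.shape)  # torch.Size([1, 128]) -> fed to the diffusion model's cross-attention
```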
This paper employs the following methods:
- Graph Attention Networks (GATv2)
- Diffusion Models
- SG2VID
- SG2VID-XIMG
- Mask R-CNN
- MS-TCN++
- RAFT
- MiDaS
The following datasets were used in this research:
- Cataract-1k
- CATARACTS
- Cholec80
The following evaluation metrics were used (see the computation sketch after this list):
- FID
- FVD
- LPIPS
- F1-score
- Bounding-box IoU (BB IoU)
- Accuracy
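As a rough illustration of how the image-level metrics above are typically computed, the sketch below uses the `torchmetrics` and `lpips` packages on random stand-in tensors; it is not the paper's evaluation code, and FVD (which additionally requires an I3D video feature extractor) is omitted.

```python
# Minimal sketch, assuming torchmetrics and lpips are installed; random tensors
# stand in for real and generated frames.
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance

# FID over two sets of frames (uint8, NCHW, values in [0, 255]).
# feature=64 keeps the toy example light; 2048 is the standard setting.
fid = FrechetInceptionDistance(feature=64)
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

# LPIPS between paired real/generated frames (float, values in [-1, 1]).
lpips_fn = lpips.LPIPS(net="alex")
x = torch.rand(4, 3, 256, 256) * 2 - 1
y = torch.rand(4, 3, 256, 256) * 2 - 1
print("LPIPS:", lpips_fn(x, y).mean().item())
```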
The key findings include:
- SG2VID improves video synthesis quality in surgical simulations
- Outperforms baseline models qualitatively and quantitatively
- Enhances downstream surgical phase recognition tasks with generatively augmented data
The following compute resources were used:
- Number of GPUs: 4
- GPU Type: A40
- Compute Requirements: Each graph encoder is trained on a single A40 GPU, while the video diffusion model is trained on four A40 GPUs; training on a single A40 GPU with gradient accumulation remains feasible (see the sketch below).
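The single-GPU option mentioned above refers to standard gradient accumulation. A minimal PyTorch sketch follows, with a placeholder model and dummy batches standing in for the video diffusion model and surgical video clips; it is not the authors' training code.

```python
# Gradient accumulation: accumulate gradients over `accum_steps` micro-batches
# before each optimizer step, approximating the larger effective batch size of
# multi-GPU training on a single GPU.
import torch

model = torch.nn.Linear(128, 128)                    # placeholder for the diffusion model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = [torch.randn(2, 128) for _ in range(16)]    # dummy micro-batches
accum_steps = 4                                       # effective batch = 4 x micro-batch size

optimizer.zero_grad()
for step, batch in enumerate(loader):
    loss = torch.nn.functional.mse_loss(model(batch), batch)  # stand-in for the diffusion loss
    (loss / accum_steps).backward()   # scale so accumulated gradients average over the window
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```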