Crowd-sourced and expert video assessment in minimally invasive esophagectomy.
Publication year
2023
Source
Surgical Endoscopy and Other Interventional Techniques, 37, 10, (2023), pp. 7819-7828
Publication type
Article / Letter to editor
Organization
Surgery
IQ Healthcare
Journal title
Surgical Endoscopy and Other Interventional Techniques
Volume
vol. 37
Issue
iss. 10
Page start
p. 7819
Page end
p. 7828
Subject
Radboudumc 14: Tumours of the digestive tract Surgery; Radboudumc 18: Healthcare improvement science IQ Healthcare; Radboud University Medical Center
Abstract
BACKGROUND: Video-based assessment by experts may structurally measure surgical performance using procedure-specific competency assessment tools (CATs). A CAT for minimally invasive esophagectomy (MIE-CAT) was developed and validated previously. However, surgeons' time is scarce, and video assessment is time-consuming and labor intensive. This study investigated non-procedure-specific assessment of MIE video clips by MIE experts and by crowdsourcing (collective surgical performance evaluation by anonymous, untrained laypeople) to assist procedure-specific expert review.

METHODS: Two surgical performance scoring frameworks were used to assess eight MIE videos. First, global performance was assessed with the non-procedure-specific Global Operative Assessment of Laparoscopic Skills (GOALS), applied to 64 procedural-phase-based video clips of < 10 min. Each clip was assessed by two MIE experts and > 30 crowd workers. Second, the same experts assessed procedure-specific performance with the MIE-CAT on the corresponding full-length videos. Reliability and convergent validity of GOALS for MIE were investigated using hypothesis testing with correlations (experience, blood loss, operative time, and MIE-CAT).

RESULTS: Fewer than 75% of the hypothesized correlations between GOALS scores and experience of the surgical team (r < 0.3), blood loss (r = - 0.82 to 0.02), operative time (r = - 0.42 to 0.07), and MIE-CAT scores (r = - 0.04 to 0.76) were met for both crowd workers and experts. Interestingly, experts' GOALS and MIE-CAT scores correlated strongly (r = 0.40 to 0.79), while correlations between crowd workers' GOALS scores and experts' MIE-CAT scores were weak (r = - 0.04 to 0.49). Expert and crowd worker GOALS scores correlated poorly (ICC ≤ 0.42).

CONCLUSION: GOALS assessments by crowd workers lacked convergent validity and showed poor reliability. MIE is likely too technically difficult for laypeople to assess. Convergent validity of GOALS assessments by experts could not be established either; GOALS might not be comprehensive enough to assess detailed MIE performance. However, experts' GOALS and MIE-CAT scores correlated strongly, indicating that video clip assessments (instead of full-length video assessments) could be useful to shorten assessment time.
This item appears in the following Collection(s)
- Academic publications [246515]
- Electronic publications [134102]
- Faculty of Medical Sciences [93308]
- Open Access publications [107627]