
Surgical Support

Overview

Early excision of burns improves patient outcomes and reduces healthcare costs. Unfortunately, even expert burn depth assessment is only 40-70% accurate, and non-specialists are considerably less accurate. In this work, we aim to create an automated visual system that predicts both a burn's severity and its spatial outline, which allows for even more accurate treatment decisions. Given the low cost and ubiquity of smartphones, it's easy to imagine this system, and others like it, being deployed and accessed by people all over the world.

With minimal training, we're able to achieve 85% pixel accuracy in discriminating burnt skin from the rest of the image, with clear pathways towards performing even better. We're currently extending the system to predict multiple burn depths, and adjusting the dataset to support accurate calculation of total body surface area (TBSA) burned. Data collaborations are more than welcome.
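As a rough illustration of the pixel-accuracy metric mentioned above, here is a minimal sketch of how it can be computed for binary burn/non-burn masks. The function and array names are hypothetical and not taken from the project's code.

```python
import numpy as np

def pixel_accuracy(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Fraction of pixels where the predicted burn/non-burn label
    matches the surgeon-annotated ground-truth mask."""
    assert pred_mask.shape == true_mask.shape
    return float((pred_mask == true_mask).mean())

# Hypothetical usage with tiny binary masks (1 = burnt skin, 0 = everything else)
pred = np.array([[1, 0],
                 [1, 1]])
truth = np.array([[1, 0],
                  [0, 1]])
print(pixel_accuracy(pred, truth))  # 0.75
```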

We have collected a novel dataset named BURNED, the largest dataset to date of segmented and annotated burns. It consists of over 1,000 burn images comprising more than 1,600 individual burns. The burns in each image were outlined by an experienced plastic surgeon, and each burn was then labeled by 3 plastic surgeons randomly assigned from a pool of 6. The available burn depth choices were: superficial, superficial partial thickness, deep partial thickness, full thickness, and undebrided.
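To make the annotation scheme concrete, the sketch below shows one possible per-burn record and a simple majority vote over the three assigned surgeons' depth labels. The record layout and the majority-vote rule are assumptions for illustration; the text does not specify how the three labels are aggregated.

```python
from collections import Counter

DEPTH_CLASSES = [
    "superficial",
    "superficial partial thickness",
    "deep partial thickness",
    "full thickness",
    "undebrided",
]

# Hypothetical per-burn record: one surgeon-drawn outline plus three depth labels.
burn = {
    "image_id": "burn_0001.jpg",
    "outline": [(120, 44), (131, 47), (128, 60)],  # polygon vertices (illustrative)
    "depth_labels": [
        "deep partial thickness",
        "deep partial thickness",
        "full thickness",
    ],
}

# Sanity-check that every label is one of the five depth classes.
assert all(label in DEPTH_CLASSES for label in burn["depth_labels"])

# One natural way to reduce the three labels to a single training target;
# the project's actual aggregation rule may differ.
consensus, votes = Counter(burn["depth_labels"]).most_common(1)[0]
print(consensus, votes)  # deep partial thickness 2
```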

People


Orion Despo
Stanford AI Lab

Serena Yeung
Stanford AI Lab

Julia Lee
CERC
