Early excision of burns improves patient outcomes and reduces healthcare costs. Unfortunately, experts assess burn depth with only 40-70% accuracy, and non-specialists are considerably less accurate. In this work, we aim to create an automated visual system that predicts both the burn severity and its spatial outline, which allows for even more accurate treatment decisions. Given the low cost and ubiquity of smartphones, it's easy to imagine this system, and others like it, being deployed and accessed by people all over the world.
With minimal training, we achieve 85% pixel accuracy in discriminating burnt skin from the rest of the image, with clear pathways towards performing even better. We're currently experimenting with extending this to predicting multiple burn depths, as well as tweaking the dataset to allow for accurately calculating total body surface area (TBSA) burned. Data collaborations are more than welcome.
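To make the reported metric concrete, here is a minimal sketch of how binary pixel accuracy is typically computed for a segmentation mask. The function name and the toy masks are illustrative assumptions, not taken from our codebase.

```python
import numpy as np

def pixel_accuracy(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Fraction of pixels where the predicted binary mask (1 = burnt skin,
    0 = everything else) agrees with the ground-truth annotation."""
    assert pred_mask.shape == true_mask.shape
    return float((pred_mask == true_mask).mean())

# Toy example: a 4x4 image where 14 of 16 pixels are labelled correctly.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 0, 1]])
true = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(pixel_accuracy(pred, true))  # 0.875
```

Note that pixel accuracy alone can be flattering when burnt regions are small relative to the image, which is one reason we see clear room for improvement beyond this headline number.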