Some of the biggest medical advances of the last few decades have been in diagnostic imaging, but how those images are viewed is pretty much the same as it was in 1950 – visual data displayed on a 2D flat screen. Augmented reality (AR), technology that superimposes digital information onto the physical world, has the potential to change all of that. Neurotologist Jonathan McJunkin, MD, hopes to channel that potential to improve ear surgery.
“The long-term goal is to improve surgical outcomes in otologic procedures by improving anatomic visualization using augmented reality,” says McJunkin, associate professor in the Department of Otolaryngology. “One specific goal is to improve visualization of cochlear orientation to allow for less traumatic cochlear implant procedures.”
Dr. McJunkin predicts that as the technology develops, it will find broad application across many areas of surgery.
Dr. McJunkin described the technology and his team's efforts to build it:
“First, we have been fortunate to collaborate with Dr. Jonathan Silva and his team from the Department of Biomedical Engineering to develop this technology.
“Our initial efforts focused on using the Microsoft HoloLens to overlay 3D holograms of temporal bone structures onto cadaveric specimens. We first obtained and segmented CT scans of cadaveric specimens to generate 3D anatomic models of individual structures of the temporal bone and whole head.
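Segmentation here means labeling which voxels of the CT data belong to a given structure before a 3D model is built from them. The team's actual workflow is not described in detail; as a toy illustration only, bone can be crudely separated from soft tissue by thresholding voxel intensities (the threshold and the synthetic volume below are made up for the sketch):

```python
import numpy as np

# Synthetic stand-in for a CT volume: random intensities in arbitrary
# units. A real scan would use Hounsfield units from a DICOM series.
ct_volume = np.random.default_rng(0).normal(0, 300, size=(32, 32, 32))

# Crude threshold segmentation: voxels above the cutoff are labeled
# "bone". Real pipelines refine this with manual or semi-automatic
# editing before generating a surface model.
bone_mask = ct_volume > 500
print(bone_mask.sum(), "voxels labeled as bone")
```

The boolean mask is the intermediate product; a surface-meshing step (e.g., marching cubes) would then turn it into the printable or holographic 3D model.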
“Next, Dr. Silva developed programming to allow manual registration (placement) of the holographic model to the original specimen. We then quantified the accuracy of our registration by measuring target localization error for a number of specific anatomic points.
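Target localization error is conventionally the Euclidean distance between where an anatomic point appears in the registered hologram and where it actually sits on the specimen. A minimal sketch of that measurement, with entirely made-up landmark coordinates (the point names and values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical 3D coordinates (mm) of landmarks as placed by the
# registered hologram vs. their true positions on the specimen.
hologram_points = np.array([
    [12.1, 45.3, 30.2],   # e.g., round window (illustrative)
    [18.7, 40.9, 28.5],   # e.g., oval window (illustrative)
    [25.4, 52.0, 33.1],   # e.g., mastoid tip (illustrative)
])
true_points = np.array([
    [11.5, 44.8, 31.0],
    [19.2, 41.5, 27.9],
    [24.8, 52.6, 32.4],
])

# Target localization error: Euclidean distance per landmark pair.
errors = np.linalg.norm(hologram_points - true_points, axis=1)
print("Per-landmark error (mm):", np.round(errors, 2))
print("Mean error (mm):", round(float(errors.mean()), 2))
```

Reporting the per-landmark and mean errors like this is what lets a registration method be judged against the sub-millimeter accuracy that intraoperative navigation demands.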
“To improve registration of the hologram, we worked on aligning individual temporal bone structures such as the carotid artery, sigmoid sinus and facial nerve. This early version of the HoloLens, however, demonstrated a relatively large registration error and would be difficult to use for intraoperative navigation.
“Each step in the process requires refinement to make the technology more user-friendly and accessible. We are currently working on automating the segmentation process to generate 3D anatomic models from radiographic data quickly. We are also employing optical flow algorithms (similar to facial recognition) that will use visual landmarks for registration and tracking through a surgical microscope.
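Optical flow algorithms estimate how visual landmarks move from one video frame to the next, which is what lets an overlay track the anatomy through a surgical microscope. The team's system is not described in enough detail to reproduce; the core idea, though, can be sketched with a brute-force block-matching toy in NumPy (everything below is illustrative, not the actual algorithm):

```python
import numpy as np

def estimate_shift(frame1, frame2, search=5):
    """Brute-force block matching: find the integer (dy, dx) shift that
    best aligns frame1 with frame2 by minimizing squared difference.
    A crude stand-in for real optical flow (e.g., Lucas-Kanade)."""
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)
            score = np.sum((shifted - frame2) ** 2)
            if best is None or score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic "microscope" frames: a bright square landmark moves by
# 2 rows and 3 columns between frames.
f1 = np.zeros((50, 50)); f1[20:26, 20:26] = 1.0
f2 = np.roll(np.roll(f1, 2, axis=0), 3, axis=1)
print(estimate_shift(f1, f2))  # (2, 3)
```

Production systems replace this exhaustive search with gradient-based flow over many feature points, but the recovered shift plays the same role: it updates where the hologram should sit in the surgeon's view.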
“The technology certainly shows promise,” says Dr. McJunkin. “We need to improve things like registration accuracy before it can be used for image-guided surgery, but the system clearly has the ability to provide x-ray vision of human anatomy. And this technology has significant promise to improve surgical navigation by helping the surgical trainee more efficiently develop a mental 3D anatomical map.”