Invented at the University of Arizona, “sparse holography” uses artificial intelligence to create 3D virtual models. The technology has applications for any situation in which 360-degree perspectives are critical but can’t be obtained by direct measurement or imaging, including studying living tissue or helping autonomous vehicles interpret the world around them.
Developed and named by David Brady, professor in the Wyant College of Optical Sciences, sparse holography begins with two-dimensional holograms, created by shining laser light on an object. Computers then read the interference patterns in the recorded light and translate that data into flat images that offer only limited depth information when viewed from different angles.
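As a rough numerical sketch of that recording-and-reading step (not the team's code, and with illustrative wavelength, pixel pitch, and distance values assumed), the interference pattern cast by an object onto a sensor can be simulated and then computationally refocused into a flat image using the standard angular spectrum method:

```python
import numpy as np

wavelength = 633e-9   # helium-neon laser wavelength in meters (assumed)
pixel = 3.45e-6       # detector pixel pitch in meters (assumed)
z = 0.05              # object-to-sensor distance in meters (assumed)
N = 512               # simulated sensor is N x N pixels

# Toy object: an opaque disk that blocks part of a plane laser wave.
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] * pixel
obj = (np.hypot(x, y) >= 50 * pixel).astype(complex)

def angular_spectrum(field, dist):
    """Propagate a complex optical field by `dist` meters (angular spectrum method)."""
    fx = np.fft.fftfreq(N, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

# "Create" the 2D hologram: record the intensity reaching the sensor, where
# light scattered by the object interferes with the unscattered laser light.
hologram = np.abs(angular_spectrum(obj, z)) ** 2

# "Read" the interference pattern: propagate it back numerically, which
# yields a flat, refocused image of the object at a single depth.
flat_image = np.abs(angular_spectrum(hologram, -z))
print(hologram.shape, flat_image.shape)
```

The refocused result is exactly the kind of flat image the article describes: sharp at one chosen depth, with little usable information about the rest of the scene.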
Sparse holography transforms those 2D holograms into 3D models that can be printed as physical objects or manipulated on screen to be viewed from any perspective. Its novel engineering resolves features smaller than the width of a human hair across an area the size of a football field. The innovation is already improving the diagnostic capabilities of commonly used technologies such as X-ray systems.
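To give a sense of the 2D-to-3D step, here is a hedged classical baseline, not Brady's AI-based method: the same recorded hologram is numerically refocused at a stack of candidate depths, and the slices are stacked into a volume of the kind a 3D model represents. All optical parameters below are assumptions for illustration.

```python
import numpy as np

wavelength, pixel, N = 633e-9, 3.45e-6, 256   # assumed optical parameters

def refocus(hologram, dist):
    """Refocus a recorded hologram to distance `dist` via the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(hologram) * np.exp(-1j * kz * dist))

hologram = np.random.rand(N, N)               # stand-in for a measured 2D hologram
depths = np.linspace(0.02, 0.08, 32)          # candidate focal planes in meters (assumed)

# Refocusing the same hologram at many depths gives a stack of flat images;
# together they form a 3D volume that can be rendered on screen, viewed from
# any angle, or meshed for 3D printing.
volume = np.stack([np.abs(refocus(hologram, z)) for z in depths])
print(volume.shape)                           # (32, 256, 256): depth x height x width
```

Where this brute-force stack produces blurry, artifact-laden volumes, the article's point is that artificial intelligence infers a clean 3D model from the same sparse 2D measurements.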
A decade ago, Brady’s research drove another breakthrough in optical sciences: the world’s first gigapixel camera, with applications now ranging from high-stakes surveillance to medical imaging, and from astronomy to documenting cultural artifacts in precise detail. For reference, a gigapixel is a billion pixels – 1,000 megapixels, far beyond the resolution of the multi-megapixel cameras ubiquitous in today’s smartphones.