Researchers at the University of Michigan (U-M) have developed a real-time, 3D motion tracking system that combines transparent light detectors with advanced neural network methods. The system could one day replace LiDAR and cameras in autonomous technologies; potential applications include automated manufacturing, biomedical imaging and autonomous driving.
The imaging system relies on transparent, highly sensitive graphene photodetectors developed by Zhaohui Zhong, U-M associate professor of electrical and computer engineering, and his group. They’re believed to be the first of their kind.
The graphene photodetectors in this work have been tweaked to absorb only about 10% of the light they’re exposed to, making them nearly transparent. Because graphene is so sensitive to light, this is sufficient to generate images that can be reconstructed through computational imaging. The photodetectors are stacked behind each other, resulting in a compact system, and each layer focuses on a different focal plane, which enables 3D imaging.
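The stacked-detector scheme resembles classic depth-from-focus imaging: each layer is sharpest for objects at its own focal plane, so comparing sharpness across layers yields depth. As a rough illustration only (the team's actual reconstruction uses computational imaging and neural networks, not this heuristic), a minimal sketch in NumPy:

```python
import numpy as np

def depth_from_focus(stack, plane_depths):
    """Assign each pixel the depth of the focal plane where it
    appears sharpest (i.e., where local contrast is highest).

    stack: (n_planes, H, W) array of intensities, one image per focal plane
    plane_depths: length-n_planes sequence of the planes' depths
    """
    # Local sharpness per plane via a simple Laplacian magnitude
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    # For each pixel, pick the plane with the highest sharpness
    best_plane = np.argmax(lap, axis=0)          # (H, W) plane indices
    return np.asarray(plane_depths)[best_plane]  # (H, W) depth map

# Toy example: a bright point that is in focus on the middle plane
stack = np.zeros((3, 8, 8))
stack[1, 4, 4] = 1.0
depth_map = depth_from_focus(stack, plane_depths=[0.0, 0.5, 1.0])
# depth_map[4, 4] is 0.5, the depth of the plane where the point is sharp
```

The point of the sketch is only that a stack of images focused at different depths carries recoverable 3D information; in the U-M system that recovery is learned by a neural network rather than hand-coded.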
In addition to 3D imaging, the team also tackled real-time motion tracking, which is critical for a variety of autonomous robotic applications. To do this, they needed a way to determine the position and orientation of an object being tracked. Typical approaches involve LiDAR systems and light-field cameras, both of which suffer from significant limitations, the researchers say. Others use metamaterials or multiple cameras. Hardware alone was not enough to produce the desired results.
They also needed deep learning algorithms. Helping to bridge those two worlds was Zhen Xu, a doctoral student in electrical and computer engineering. He built the optical setup and worked with the team to enable a neural network to decipher the positional information.
The neural network is trained to search for specific objects in the entire scene, and then focus only on the object of interest: for example, a pedestrian in traffic, or an object moving into your lane on a highway. The technology reportedly works particularly well for stable systems, such as automated manufacturing, or projecting human body structures in 3D for the medical community.
“It takes time to train your neural network,” said project leader Ted Norris, professor of electrical and computer engineering. “But once it’s done, it’s done. So when a camera sees a certain scene, it can give an answer in milliseconds.”
Doctoral student Zhengyu Huang led the algorithm design for the neural network. The type of algorithms the team developed are unlike traditional signal processing algorithms used for long-standing imaging technologies such as X-ray and MRI. And that’s exciting to team co-leader Jeffrey Fessler, professor of electrical and computer engineering, who specializes in medical imaging.
“In my 30 years at Michigan, this is the first project I’ve been involved in where the technology is in its infancy,” Fessler said. “We’re a long way from something you’re going to buy at Best Buy, but that’s OK. That’s part of what makes this exciting.”
The team demonstrated success tracking a beam of light, as well as an actual ladybug, with a stack of two 4×4 (16-pixel) graphene photodetector arrays. They also proved that their technique is scalable. They believe it would take as few as 4,000 pixels for some practical applications, and 400×600 pixel arrays for many more.
While the technology could be used with other materials, additional advantages to graphene are that it doesn’t require artificial illumination and it’s environmentally friendly. It will be a challenge to build the manufacturing infrastructure necessary for mass production, but it may be worth it, the researchers say.
“Graphene is now what silicon was in 1960,” Norris said. “As we continue to develop this technology, it could motivate the kind of investment that would be needed for commercialization.”