Tiny solid-state LiDAR device can 3D map a full 180-degree field of view

South Korean researchers have developed an ultra-small, ultra-thin LiDAR device that splits a single laser beam into 10,000 points covering an unprecedented 180-degree field of view. It is capable of 3D depth mapping an entire hemisphere of vision in a single shot.

Autonomous cars and robots must be able to perceive the world around them with incredible precision if they are to be safe and useful in real-world conditions. In humans and other autonomous biological entities, this requires a range of different senses and quite extraordinary real-time data processing, and the same will likely be true for our technological offspring.

LiDAR – short for Light Detection and Ranging – has been around since the 1960s, and it’s now a well-established ranging technology that’s especially useful for developing 3D point cloud representations of a given space. It works much like sonar, but instead of sound pulses, LiDAR devices send out short pulses of laser light and then measure reflected or backscattered light when those pulses hit an object.

The time between the initial light pulse and the returned pulse, multiplied by the speed of light and divided by two, tells you the distance between the LiDAR unit and a given point in space. If you measure a bunch of points repeatedly over time, you get a 3D model of that space, with information about distance, shape and relative speed, which can be combined with data streams from cameras, ultrasonic sensors and other systems to flesh out an autonomous system’s understanding of its environment.
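For a feel of the arithmetic, here’s a minimal sketch of that time-of-flight calculation in Python; the round-trip time used is illustrative, not a figure from the POSTECH device.

# Minimal time-of-flight sketch: distance = round-trip time x speed of light / 2.
# The example round-trip time is illustrative, not taken from the POSTECH device.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a target given a LiDAR pulse's round-trip time."""
    # The pulse travels out and back, so halve the total path length.
    return round_trip_seconds * SPEED_OF_LIGHT / 2.0

# A return detected about 6.67 nanoseconds after emission sits roughly 1 m away.
print(f"{tof_distance(6.67e-9):.3f} m")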

According to researchers from Pohang University of Science and Technology (POSTECH) in South Korea, one of the main problems with existing LiDAR technology is its field of view. If you want to image a large area from a single point, the only way to do so is to mechanically rotate your LiDAR device or rotate a mirror to direct the beam. This type of equipment can be bulky, power-hungry and fragile. It tends to wear out quite quickly, and the rotational speed limits how often you can measure each point, reducing the frame rate of your 3D data.

Solid-state LiDAR systems, on the other hand, use no moving parts at all. Some of them, the researchers say, project an entire array of dots at once and watch how that pattern distorts to discern shape and distance information, like the depth sensors Apple uses to make sure you can’t trick an iPhone’s face-unlock system by holding up a flat photo of the owner’s face. But field of view and resolution are limited, and the team says these are still relatively large devices.
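As a rough illustration of that structured-light idea (and not the specifics of Apple’s hardware), a projected dot’s apparent shift between where it was cast and where the camera sees it encodes its depth through simple triangulation. The focal length, baseline and disparity in the sketch below are made-up values.

# Hedged sketch of projected-dot (structured-light) depth sensing:
# the projector and camera sit a known baseline apart, so a dot's apparent
# shift (disparity) in the camera image encodes depth, much like stereo vision.
# All parameters are illustrative assumptions, not real sensor values.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of one projected dot via the triangulation relation Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("dot shows no displacement; depth cannot be resolved")
    return focal_px * baseline_m / disparity_px

# A dot shifted by 20 pixels, with a 600 px focal length and a 3 cm baseline,
# would sit roughly 0.9 m from the sensor.
print(depth_from_disparity(focal_px=600.0, baseline_m=0.03, disparity_px=20.0))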

Pohang’s team decided to aim for the smallest possible depth-sensing system with the widest possible field of view, using the extraordinary light bending abilities of metasurfaces. These 2D nanostructures, one-thousandth the width of a human hair, can effectively be thought of as ultra-thin lenses, constructed from arrays of tiny, precisely shaped individual nanopillar elements. Incoming light is split into multiple directions as it travels across a metasurface, and with the right nanopillar array design, portions of this light can be diffracted at an angle of nearly 90 degrees. A completely flat ultra-fisheye, if you like.
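A metasurface is far more sophisticated than a simple diffraction grating, but the textbook grating equation gives a feel for why structures with feature spacings approaching the wavelength of light can bend it so sharply. The wavelength and periods in this sketch are illustrative assumptions, not values from the paper.

import math

# First-order grating equation: sin(theta) = wavelength / period.
# As the period shrinks toward the wavelength, the diffraction angle climbs toward 90 degrees.
# Wavelength and periods below are illustrative assumptions, not the paper's values.

wavelength_nm = 940.0  # a typical near-infrared LiDAR wavelength (assumed)
for period_nm in (3000.0, 1500.0, 1000.0):
    angle_deg = math.degrees(math.asin(wavelength_nm / period_nm))
    print(f"period {period_nm:.0f} nm -> first-order diffraction at {angle_deg:.1f} degrees")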

Left: Front and side views of the diffraction pattern of the beam, showing both the loss of intensity at higher bend angles and the loss of point-to-point resolution as distance increases. Right: The array of precisely shaped nanopillars on the metasurface itself, which can deflect light nearly 90 degrees

POSTECH

The researchers designed and built a device that sends laser light through a metasurface lens with nanopillars tuned to split it into about 10,000 points, covering an extreme field of view of 180 degrees. The device then interprets the reflected or backscattered light through a camera to provide distance measurements.
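In broad strokes, once each dot’s outgoing direction is fixed by the metasurface design and its range has been recovered from the camera data, assembling the point cloud is a per-dot coordinate conversion. The sketch below is an illustrative simplification, not the team’s actual processing pipeline.

import math

# Illustrative simplification: turn one dot's known direction plus its measured
# range into a 3D point. Repeating this for all ~10,000 dots yields the point cloud.
# Angles and range are made-up example values.

def dot_to_xyz(azimuth_deg: float, elevation_deg: float, range_m: float):
    """Convert a dot's direction (degrees) and measured range into x, y, z coordinates."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.sin(az)
    y = range_m * math.sin(el)
    z = range_m * math.cos(el) * math.cos(az)
    return x, y, z

# A dot sent 80 degrees off-axis (near the edge of the 180-degree field of view)
# that returns a range of 0.5 m:
print(dot_to_xyz(azimuth_deg=80.0, elevation_deg=0.0, range_m=0.5))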

“We have proven that we can control the propagation of light from all angles by developing technology that is more advanced than conventional metasurface devices,” said Professor Junsuk Rho, co-author of a new study published in Nature Communications. “It will be an original technology that will enable an ultra-small and full-space 3D imaging sensor platform.”

Light intensity decreases as diffraction angles become more extreme; a point bent by 10 degrees hits its target with four to seven times the power of a point bent closer to 90 degrees. With their lab equipment, the researchers found they achieved the best results with a maximum viewing angle of 60° (representing a 120° field of view) and a distance of less than 1 m (3.3 ft) between sensor and object. They say more powerful lasers and more precisely tuned metasurfaces will expand the sweet spot of these sensors, but high resolution at greater distances will always be a challenge with ultra-wide lenses like these.

This tiny speck of metasurface is all you need to split a single laser wide enough to map everything in front of you.

POSTECH

Another potential limitation here is image processing. The “coherent point drift” algorithm used to decode sensor data into a 3D point cloud is very complex and processing time increases with the number of points. So high-resolution full-frame captures decoding 10,000 points or more will place quite a heavy load on processors, and running such a system at over 30 frames per second will be a big challenge.
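A quick back-of-envelope sketch makes that frame-rate concern concrete; the per-point decoding cost here is an assumed figure purely for illustration.

# Back-of-envelope frame-rate budget for decoding a dense dot pattern.
# The per-point cost is an assumed figure, not a measurement from the study.

points_per_frame = 10_000
per_point_cost_s = 5e-6        # assume 5 microseconds of decoding work per point
frame_budget_s = 1.0 / 30.0    # roughly 33 ms available per frame at 30 fps

decode_time_s = points_per_frame * per_point_cost_s
print(f"decode time per frame: {decode_time_s * 1e3:.1f} ms")
print(f"fits within a 30 fps budget: {decode_time_s < frame_budget_s}")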

On the other hand, these things are incredibly tiny, and metasurfaces can be fabricated easily and cheaply at huge scale. The team printed one onto the curved surface of a set of safety glasses, and it is so small that it is barely distinguishable from a speck of dust. That’s the potential here: metasurface-based depth mapping devices can be incredibly small and easily integrated into the design of a range of objects, with their field of view set at whatever angle makes sense for the application.

The team sees these devices as having huge potential in areas like mobile devices, robotics, self-driving cars, and things like VR/AR glasses. Very neat stuff!

The research is open access in the journal Nature Communications.

Source: POSTECH
