European researchers backed by Amazon have developed a technique that uses a simple time-of-flight (ToF) sensor to generate moving 3D images without capturing any spatial data.
Rather than capturing where photons land on an image sensor array, a ToF sensor measures when a photon arrives. These are well-established, low-cost sensors, already used in mobile phones for example, but they provide only 1D data for measuring a distance. 3D ToF sensors use an array of pixels to combine spatial and temporal data, such as the latest array from Teledyne e2v.
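The 1D measurement above follows directly from the photon's round-trip time. A minimal sketch (function name and example timing are illustrative, not from the paper):

```python
# Distance from a 1D ToF measurement: the detected photon has made a
# round trip, so the sensor-to-object distance is d = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Convert a photon round-trip time (seconds) to distance (metres)."""
    return C * round_trip_s / 2.0

# A photon returning after ~6.67 ns corresponds to an object ~1 m away:
print(tof_distance(6.67e-9))  # ~1.0 m
```

This single number is all a basic ToF sensor reports, which is why it needs help to form an image.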
In a paper published in the journal Optica, researchers at the University of Glasgow, TU Delft and the Polytechnic University of Milan, backed by e-commerce giant Amazon, used a ToF SPAD (single-photon avalanche diode) sensor to receive data from a pulsed laser illuminating a scene. This produced a histogram of the arrival times of all the photons bouncing off objects in the scene.
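The temporal histogram can be sketched as follows; the arrival-time distribution and bin settings here are invented for illustration, since in practice the timestamps come from the SPAD hardware:

```python
import numpy as np

# Hypothetical photon arrival times (ns) recorded by a SPAD over many
# laser pulses; a real sensor supplies these as picosecond timestamps.
rng = np.random.default_rng(0)
arrival_ns = rng.normal(loc=20.0, scale=2.0, size=10_000)

# Binning the timestamps yields the 1D temporal histogram of the
# returning light; the bin width sets the effective depth resolution.
counts, edges = np.histogram(arrival_ns, bins=256, range=(0.0, 100.0))
print(counts.shape)  # (256,)
```

Each scene produces one such 1D trace, with no pixel positions anywhere in the data.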
The trick is to train a neural network on these histograms alongside conventional photographs of the same scene. After several thousand training examples, the network had learned how the temporal data corresponded to the photos well enough to create highly accurate images from the temporal data alone. This could shift the processing from digital signal analysis into on-chip AI accelerators, and could also drive on-chip AI learning, where the neural network is refined over time.
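The supervised setup can be illustrated with a toy stand-in. The paper trains a deep network; to keep this sketch dependency-free and runnable, a linear least-squares map plays the network's role, and all data here are synthetic:

```python
import numpy as np

# Toy version of the training idea: pair each temporal histogram
# (256 bins) with a ground-truth image (8x8 = 64 pixels, flattened)
# and learn a mapping from histogram to image from examples.
rng = np.random.default_rng(1)
n_train, n_bins, n_px = 500, 256, 64

true_map = rng.normal(size=(n_bins, n_px))   # hidden histogram->image relation
histograms = rng.random((n_train, n_bins))   # simulated temporal data
images = histograms @ true_map               # paired "photographs"

# "Training": fit the mapping by least squares over the example pairs.
learned_map, *_ = np.linalg.lstsq(histograms, images, rcond=None)

# "Inference": a new histogram alone now yields an image estimate,
# mirroring how the trained network images from temporal data only.
new_hist = rng.random(n_bins)
estimate = new_hist @ learned_map
print(np.allclose(estimate, new_hist @ true_map, atol=1e-6))  # True
```

The real system replaces the linear map with a neural network, which is what lets it recover complex scene structure rather than a fixed linear relation.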
In proof-of-principle experiments, the team constructed moving images at about 10 frames per second from the temporal data, although the hardware and algorithm have the potential to produce thousands of images per second.
The advantage is that the technique can be used with any kind of photon, including the radio-frequency photons used in radar. This opens up low-cost sensing for driverless cars and delivery drones. Amazon, which is developing delivery drones, was one of the funders of the research.