The US Defense Advanced Research Projects Agency (DARPA) is set to develop a new camera sensor that can adapt the way it captures images based on its environment.
The sensor will be developed as part of the new Reconfigurable Imaging (ReImagine) programme, which seeks to develop software-configurable applications based on a common digital circuit and software platform.
Researchers aim to design and fabricate various megapixel detector and 'analogue interface' layers, along with associated software and algorithms, to convert analogue sensor signals, such as 3-D LIDAR returns, into digital data.
The digital data would feed machine learning processes, through which the sensors could become autonomously aware of specific objects and occurrences within their field of view, DARPA said in a statement.
The sensor could be reconfigured to work in a variety of imaging modes.
ReImagine programme manager Jay Lewis said: “With ReImagine, we would be giving machine learning and image processing algorithms the ability to change or decide what type of sensor data to collect.”
The smaller and cheaper platforms developed under the initiative would provide the same situational awareness as single-purpose sensors fitted to larger airborne, ground, space-based and naval vehicles and platforms.
During the four-year programme, MIT Lincoln Laboratory will provide the common reconfigurable digital layer for the three-layer sensor.
Image: An artistic impression of DARPA's ReImagine camera sensor. Photo: courtesy of Defense Advanced Research Projects Agency.