Multispectral Adaptive Networked Tactical Imaging System (MANTIS)

The Multispectral Adaptive Networked Tactical Imaging System (MANTIS) studies the benefits of multi-spectral fusion performed on the helmet or on a hand-held viewer, integrating advanced sensing with a newly designed ‘system on a chip’ processor. MANTIS aims to improve the soldier’s ability to see at night and under difficult visibility conditions, including typical urban ambient light (light bulbs, fires, car lights etc.), under moonless or cloudy skies, and through smoke, fog, dust and flares. The system will also support video sharing through ‘picture in picture’ functionality. The program was introduced at Soldier Technology 2007 by Jeffrey Paul, the DARPA program manager responsible for the effort.

The program uses integrated visual and near-infrared (V/NIR) and imaging infrared sensors. Together they cover the visible band; a new short-wave infrared (SWIR) band, spanning wavelengths of 1-2 microns, using passive, uncooled sensors that benefit from natural starlight illumination; and, side by side with these, existing passive, uncooled thermal imagers operating in the 8-12 micron long-wave infrared (LWIR) band. All three sensors watch a target simultaneously, and their feeds are fused into a single picture in which each spectral band contributes specific attributes, enabling the viewer to see more detail in the shadows, spot movement more easily and track suspicious targets.
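To illustrate the idea of band-weighted fusion, the following is a minimal Python sketch, not the actual MANTIS algorithm: three co-registered single-channel frames are normalized and blended with per-band weights, so each sensor contributes its own attributes to the final picture. The function name and weights are illustrative assumptions.

```python
import numpy as np

def fuse_bands(vnir, swir, lwir, weights=(0.4, 0.3, 0.3)):
    """Blend three aligned single-channel frames into one grayscale image.

    Illustrative sketch only; the real MANTIS fusion algorithm is not public.
    """
    bands = [vnir, swir, lwir]
    normalized = []
    for band in bands:
        band = band.astype(np.float32)
        lo, hi = band.min(), band.max()
        # Normalize each band to [0, 1] so no sensor dominates by dynamic range alone.
        normalized.append((band - lo) / (hi - lo + 1e-6))
    # Per-band weights control how much each spectral band contributes.
    fused = sum(w * b for w, b in zip(weights, normalized))
    return (255.0 * fused / sum(weights)).astype(np.uint8)

# Example: fuse three synthetic 480x640 frames.
rng = np.random.default_rng(0)
vnir, swir, lwir = (rng.integers(0, 4096, (480, 640)) for _ in range(3))
picture = fuse_bands(vnir, swir, lwir)
```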

The V/NIR sensor covers the same bandwidth as current night vision devices, without the downsides of imaging infrared; with color support, it also provides cues that cannot be gained from the other sensors. SWIR sensors perform better under low-light conditions; they can operate through fog and add detail to the viewed scene. LWIR sensors rely on thermal signatures and therefore require no light at all; they can penetrate smoke and dust and spot partially hidden targets by their heat.

So far, MANTIS has been demonstrated on PC-based hardware, performing the multi-sensor fusion in real time on nine processors. The next phase, currently in progress, is developing the MANTIS Vision Processor (MVP), a much smaller ‘system on a chip’ that will be integrated into a helmet and a hand-held viewer. The purpose-built processor uses four ARM-11 cores consuming only 1.6 watts. It was demonstrated with integral communications capabilities over low-bandwidth tactical radios, offering advanced collaborative functions through picture-in-picture display technology that enables remote viewing, video sharing and image analysis (illustrated below).

Initial MANTIS tests will use specially geared helmet systems integrating the three sensors in a stacked configuration, the MVP and a near-eye miniature display offering x1 magnification and a 40-degree field of view. Batteries will sustain nine hours of continuous operation. The system will weigh 2.5 pounds, added to the helmet’s weight of 3.3 pounds (5.8 pounds total). The hand-held viewing device will carry LWIR and SWIR sensors, offering two magnification levels, x3.6 and x8.2 (11.2 and 4.6-degree fields of view, respectively). The viewer will weigh 6 pounds and include batteries sustaining four hours of operation. The systems will be tested in the summer of 2008.
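The picture-in-picture sharing mentioned above amounts to compositing a downscaled remote feed into a corner of the local view. The following minimal Python sketch shows one way this could work; the scale factor, placement and function name are illustrative assumptions, not details of the MVP implementation.

```python
import numpy as np

def picture_in_picture(local, remote, scale=4, margin=8):
    """Embed a decimated copy of `remote` in the top-right corner of `local`.

    Illustrative sketch; the MVP's actual display pipeline is not public.
    """
    inset = remote[::scale, ::scale]  # crude decimation stands in for proper resampling
    h, w = inset.shape[:2]
    out = local.copy()
    out[margin:margin + h, -(w + margin):-margin] = inset
    return out

# Example: overlay a shared remote frame on the local fused view.
local_view = np.zeros((480, 640), dtype=np.uint8)
remote_view = np.full((480, 640), 200, dtype=np.uint8)
shared = picture_in_picture(local_view, remote_view)
```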

As an extremely efficient and powerful video processor, the MVP will offer further uses beyond MANTIS, providing high-speed, low-power processing for future adaptive image fusion and networked image-sharing applications. One such application already being studied at DARPA is the Dichoptic system, which fuses two images received from V/NIR sensors to generate a wide field-of-view (70-degree) image, integrating a high-resolution color inset (covering 40 degrees) embedded in a low- or medium-resolution monochrome image. The entire system is integrated into a helmet system weighing 4.8 pounds.
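The dichoptic composite can be pictured as a simple foveated layout: a sharp color inset centered inside a wider, lower-resolution monochrome frame. The Python sketch below shows the compositing step under assumed frame sizes; it is an illustration of the concept, not DARPA's implementation.

```python
import numpy as np

def dichoptic_composite(wide_mono, inset_color):
    """Center a high-resolution color inset inside a monochrome wide-FOV frame.

    Illustrative sketch; the frame sizes below merely stand in for the
    70- and 40-degree fields of view.
    """
    # Promote the monochrome background to 3 channels so the inset keeps its color.
    out = np.repeat(wide_mono[:, :, None], 3, axis=2)
    ih, iw = inset_color.shape[:2]
    y0 = (out.shape[0] - ih) // 2
    x0 = (out.shape[1] - iw) // 2
    out[y0:y0 + ih, x0:x0 + iw] = inset_color
    return out

wide = np.zeros((700, 700), dtype=np.uint8)          # wide low-res monochrome view
inset = np.full((400, 400, 3), 128, dtype=np.uint8)  # high-res color inset
frame = dichoptic_composite(wide, inset)
```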