Weapons Controlled by Machines?
When targets become too elusive for effective human response, networked computers and guided weapons join forces, combining sensors, processing and datalink on a single platform and eliminating much of the traditional ‘kill chain’. By replacing significant elements of the targeting process with automated processing, including rules-of-engagement compliance checks, this approach removes much of the complexity of the multi-level human evaluation and approval process previously considered imperative when lethal effects were employed manually.

To further accelerate the kill chain, a new concept called “Warplane Warfighter Forwarder” (WWF) is being evaluated. This method supports rapid ‘machine-to-machine’ updates, establishing a common picture between the manned or unmanned ‘shooter’, the guided weapon and the command center. The lengthy coordination process between aircraft, forward controller and operations center tends to exceed the ‘lifetime’ of the typical ‘time-critical targets’ engaged in today’s asymmetric warfare. The process commonly results in different pictures of the same target at different points in time – one showing the target to the pilot in real time, and another, displayed at the air operations center, showing near-real-time or historical data. WWF aims to establish a single image, or a set of annotations clarifying potential conflicts.

Furthermore, a revised targeting and fire-approval process is being evaluated, utilizing datalink-capable weapons such as a future version of the Joint Air-to-Surface Standoff Missile (JASSM), which could dramatically shorten retargeting and the response to time-critical targets, enabling post-launch retargeting or retasking and the engagement of mobile targets. An IP-based airborne network, if established in the future, would greatly enhance machine-to-machine connectivity, rapidly and effectively share actionable intelligence and shorten the kill chain.
Two-Way Video Datalink Accelerates Air/Ground Coordination
Through their participation in and support of joint air-land operations in Afghanistan and Iraq, the US Air Force and Navy have exploited a wide range of intelligence, surveillance and reconnaissance (ISR) systems in ‘Non-Traditional ISR’ (NTISR) applications. Particularly popular is the use of the new generation of Advanced Targeting Pods (ATP) to acquire reconnaissance data and disseminate it in near real time to combat units on the ground. As targeting pods are frequently carried by ‘shooters’ (fighter or attack aircraft), the close coupling of NTISR and guided weapons establishes a highly responsive precision-attack capability. To further accelerate the ‘kill chain’, the targeting pod is equipped with a video datalink, transmitting the target view directly to the supported unit on the ground. Most of these datalinks are makeshift adaptations from other systems. For example, a datalink from a Raven mini-UAV is fitted into the Sniper pod, sending video directly from the pod to the ground forces’ Raven control unit. The same video can also be viewed with the Rover video datalink receiver and display used by the Joint Terminal Attack Controller (JTAC) controlling the attack and leading the pilot and guided weapons to the target.
Last year (September 2007) Northrop Grumman demonstrated an advanced targeting datalink capability with the Litening ATP, using the Defense Advanced Research Projects Agency’s (DARPA) Quint Networking Technology (QNT), a network development effort supported by the US Navy and Air Force. The advanced datalink provided encrypted, bi-directional airborne transmission of streaming video and of Cursor-on-Target, metadata-tagged still imagery, both at full sensor resolution, allowing ground forces to receive imagery and its associated geo-positional data for battlefield situational awareness, aircraft position, sensor point of interest and target selection. QNT uses advanced waveforms, forward error correction coding and packetized video and metadata connections to communicate at ranges exceeding 50 nautical miles, using omni-directional antennas on both the airborne and ground nodes. The architecture uses multicast transmissions over the QNT link, so that each node in the network publishes its available services – such as streaming video, still imagery and situational awareness updates from the ground and air nodes – from which users can select. This eliminates the need for the user to deal with internet protocol addresses and other network functions, and allows the user to focus instead on mission needs and execution.

In February 2008 Lockheed Martin demonstrated the use of a two-way Video Data Link (VDL) with the Sniper ATP. The datalink allows forward-deployed forces to receive the Sniper’s streaming video in full resolution and upload annotated images directly back to the Sniper pod. Pilots can review the uplinked tactical video on their cockpit displays. According to Lockheed Martin, the Sniper is also the only ATP providing critical VDL digital metadata to the ground user today.
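The publish-over-multicast pattern described above can be sketched in a few lines: a node builds a Cursor-on-Target (CoT) message and publishes it to a multicast group, so any subscribed terminal can receive it without knowing the sender’s IP address. The event and point fields below follow the public CoT convention, but the uid, type code, coordinates and multicast group are illustrative assumptions, not values from QNT or any fielded system.

```python
# Minimal sketch of a Cursor-on-Target (CoT) event published over UDP
# multicast. Field values (uid, type, lat/lon, group address) are
# illustrative assumptions only.
import socket
import struct
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def build_cot_event(uid, lat, lon, stale_seconds=60):
    """Build a bare-bones CoT <event> carrying a single <point>."""
    now = datetime.now(timezone.utc)
    iso = lambda t: t.strftime("%Y-%m-%dT%H:%M:%SZ")
    event = ET.Element("event", {
        "version": "2.0",
        "uid": uid,                 # unique track identifier
        "type": "a-h-G",            # hostile ground track (CoT type tree)
        "how": "m-g",               # machine-generated, GPS-derived
        "time": iso(now),
        "start": iso(now),
        "stale": iso(now + timedelta(seconds=stale_seconds)),
    })
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": "0.0",               # height above ellipsoid, metres
        "ce": "10.0", "le": "10.0", # circular / linear error estimates
    })
    return ET.tostring(event, encoding="unicode")

def publish(xml_payload, group="239.2.3.1", port=6969):
    """Publish to a multicast group: subscribers (pod, Rover terminal,
    ops centre) receive it without needing the sender's IP address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", 1))
    sock.sendto(xml_payload.encode(), (group, port))
    sock.close()

msg = build_cot_event("TGT.0001", lat=33.312000, lon=44.361000)
try:
    publish(msg)  # may fail on hosts without multicast routing
except OSError:
    pass
```

The ‘stale’ timestamp is what makes such messages suitable for time-critical targets: a track automatically expires rather than lingering as outdated data in another node’s picture.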
Introducing the Rover
The Rover remote video unit, developed by L-3 Communications, has proved itself one of the most successful tools for accelerating targeting and close air support, and is an essential element in improving the effectiveness, precision and safety of air support. About 3,000 sets are currently deployed in Iraq and Afghanistan. These devices have proved indispensable particularly in support of special operations, where airpower fills the gap, acting as both fire support and sniper. The Rover displays the view acquired by the targeting pod or UAV payload side by side with a FalconView map, giving both the ground and airborne elements a common perspective of the area and the target. Situational pictures can be created and exchanged between the two sides using graphical annotations superimposed on the map, limiting the use of voice communications to critical conditions. This capability enables fire support very close to friendly forces. To establish two-way communication, the Rover has to be integrated with a VHF/UHF radio supporting data transfer to and from the aircraft (such as the ARC-210). L-3 Communications is currently producing the Rover 4 video datalink receiver, a receive-only terminal that displays sensor data from multiple airborne platforms and supports Ku-band digital, C-band digital, C-band analog, S-band analog and L-band analog signals. The smaller Rover 5 handheld device (also known as mRover) is a two-way portable transceiver offering improved collaboration for air-ground operations; the unit displays images received from the remote sensor and transmits time-sensitive targeting data to airborne platforms, supporting Ku-band, C-band, S-band, L-band and UHF signals. AAI is integrating the Rover with the Army One System UAV ground control station. In Army service the remote video terminal is designated OSRVT.
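The two-way annotation exchange can be pictured as a simple round trip: the ground terminal serializes a map annotation and uplinks it, and the aircraft side reconstructs it for cockpit display. The message format below is a hypothetical illustration; the fielded Rover/VDL protocol is not public, and all field names here are assumptions.

```python
# Hypothetical sketch of a ground-to-air annotation round trip.
# The JSON format and field names are illustrative assumptions,
# not the actual Rover/VDL protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class MapAnnotation:
    label: str            # e.g. "FRIENDLY" or "TARGET"
    shape: str            # "point", "circle", "arrow", ...
    lat: float
    lon: float
    radius_m: float = 0.0 # used for "circle" shapes

def uplink(annotation: MapAnnotation) -> bytes:
    """Ground side: serialize the annotation for transmission."""
    return json.dumps(asdict(annotation)).encode()

def on_receive(payload: bytes) -> MapAnnotation:
    """Air side: reconstruct the annotation for cockpit display."""
    return MapAnnotation(**json.loads(payload))

# A JTAC marks a friendly position close to the intended target,
# so the pilot sees the same picture before weapons release.
mark = MapAnnotation("FRIENDLY", "circle", 34.5201, 69.1601, radius_m=150.0)
received = on_receive(uplink(mark))
```

The point of such a structured exchange is exactly what the article describes: graphical marks replace lengthy voice descriptions, with the radio reserved for critical calls.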
Other topics covered in this series:
- Who’s in Control?
- F-22 Enters the Network – Linking IFDL, TTNT, Link 16
- Navy Tests new Global Command Architecture