Navigation and Tracking

In the context of vehicle operations, navigation and tracking are two of the most commonly used capabilities. Navigation is used for locating and pathfinding for individual agents, including people, vehicles, robots, and ships, as well as for groups or swarms of agents. Depending on the surrounding environment, navigation may be done with the help of external aids and measurements, or, if operating in a novel or non-cooperative environment, may require relying only on on-board sensing and decision making. Groups or swarms of agents can also work together to locate themselves in an environment or can coordinate to complete a series of tasks. While navigation is more concerned with self-awareness, tracking is focused on identifying, labeling, and following other stationary or moving targets.

In tracking an object, feature, or agent, a target may be perceived at three levels of fidelity: detection, classification, and identification. Detection is the lowest-fidelity perception, which essentially states that “something of potential interest” has been observed. Classification takes perception one step further and labels what kind of target has been detected. Identification is the final step and offers the most actionable intelligence, picking out unique characteristics of a perceived target and even differentiating between multiple simultaneously observed targets. Tracking also requires that perception of a target, or targets, be maintained over some duration of time.

In performing either navigation or tracking, it is helpful to have a dynamic model which describes the movement of the bodies or features being studied. When navigating a vehicle, for example, it is useful to understand how the vehicle moves and behaves, so that its current state (position, velocity, orientation, and other relevant parameters) can be propagated into the future. Being able to predict near-term motion is incredibly valuable in navigation problems, and robust navigation requires state estimation algorithms and filters. These same methods can be used for tracking targets of interest as well. Once a target has been perceived, having an idea of how it is moving helps maintain the track, especially in crowded or cluttered environments.
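As a concrete illustration of state propagation, the sketch below implements the predict step of a Kalman filter under a one-dimensional constant-velocity model. The state layout, noise intensity, and function names are illustrative choices for this sketch, not a specific fielded design.

```python
import numpy as np

def predict(x, P, dt, q=0.1):
    """Kalman predict step for a 1D constant-velocity model.

    x  : state vector [position, velocity]
    P  : 2x2 state covariance
    dt : time step in seconds
    q  : process-noise intensity (illustrative value)
    """
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])            # constant-velocity transition matrix
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])   # white-noise-acceleration process noise
    x_pred = F @ x                        # propagate the state estimate forward
    P_pred = F @ P @ F.T + Q              # propagate and inflate the uncertainty
    return x_pred, P_pred

# Propagate a vehicle moving at 2 m/s half a second into the future.
x1, P1 = predict(np.array([0.0, 2.0]), np.eye(2) * 0.5, dt=0.5)
print(x1)  # -> [1.0, 2.0]
```

The same predict step, paired with a measurement update, maintains a track on an external target just as readily as it propagates one's own state.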

Below are some of the capabilities we’ve developed while working on problems in the domain of Navigation and Tracking:

Capabilities

The ability to provide Position, Navigation, and Timing (PNT) services when GPS is not available is a critical capability for U.S. forces. Lynntech is developing the algorithms and hardware which will fill this operational need. We envision application on a stationary or mobile platform which uses inertial measurement units (IMUs), visible or non-visible cameras, and optional additional sensors to provide GPS-denied PNT services. With the core methods and technologies developed, trade studies can be performed to optimize accuracy, availability, and Size, Weight, and Power (SWaP) to accommodate a desired operational scope.
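The sketch below shows, under simplifying planar assumptions, why IMU-only dead reckoning drifts and why aiding sensors such as cameras are valuable: heading and speed are obtained by integrating noisy rates, so errors accumulate without bound. The names and the flat-ground, forward-motion assumptions are illustrative only, not a description of Lynntech's design.

```python
import numpy as np

def dead_reckon_step(x, y, heading, speed, gyro_z, accel_fwd, dt):
    """One planar dead-reckoning step from raw IMU readings (illustrative).

    gyro_z    : yaw rate (rad/s) reported by the gyro
    accel_fwd : forward body-frame acceleration (m/s^2) from the accelerometer
    """
    heading += gyro_z * dt              # integrate yaw rate into heading
    speed += accel_fwd * dt             # integrate acceleration into speed
    x += speed * np.cos(heading) * dt   # project speed into the nav frame
    y += speed * np.sin(heading) * dt
    return x, y, heading, speed
```

Because every quantity here is an integral of a noisy measurement, position error grows over time; periodic absolute fixes from cameras or other aiding sensors are what keep a GPS-denied PNT solution bounded.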

Target identification comprises detection, classification, and identification of a target of interest. Before the advent of modern technologies, these were the responsibility of trained scouts, lookouts, and hunters. With the advent of radar, more and more computerized systems have been developed to help operators make sense of the world around them, culminating in the development of automatic target recognition (ATR) systems. Contemporary ATR systems leverage computer vision and, increasingly, machine learning models. These models require extensive training on high-quality data sets that span a broad scenario space. The performance of these models is highly dependent on the ability to either collect or synthesize large amounts of representative high-quality data, tying current developments in ATR to developments in data science.

Being able to identify a target at a point in time and maintain that identification, or to reidentify it at a future point, is the process of tracking. If it is possible to surmise the dynamics, goals, and heuristics of an identified target, tracking becomes easier, as a model of expected target behavior aids reidentification in future states. This capability is strongly enhanced by state estimation, and combined with decision theory it allows tracking of targets with complex movement patterns, or targets which are difficult to spot against background clutter. These capabilities extend to anything from satellite orbit determination and human behavior modeling to vehicle and projectile tracking and industrial process control.
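As a minimal sketch of how a motion model aids track maintenance, the function below performs nearest-neighbor data association with a distance gate: each track's predicted position (from, say, the filter sketched earlier) claims the closest unclaimed detection within the gate. The gate size and array layouts are illustrative assumptions; production trackers typically use statistically grounded gates and global assignment methods.

```python
import numpy as np

def associate(predicted, detections, gate=5.0):
    """Greedy nearest-neighbor association of predicted tracks to detections.

    predicted  : (N, 2) array of predicted track positions
    detections : (M, 2) array of newly detected positions
    Returns a dict mapping track index -> detection index (None if unmatched).
    """
    assignments, taken = {}, set()
    for i, p in enumerate(predicted):
        dist = np.linalg.norm(detections - p, axis=1)  # distance to each detection
        match = None
        for j in np.argsort(dist):                     # closest candidates first
            if dist[j] <= gate and j not in taken:     # inside the gate and
                match = int(j)                         # not already claimed
                taken.add(match)
                break
        assignments[i] = match
    return assignments
```

A track that goes unmatched for a few frames can be coasted on its motion model alone, which is precisely how a dynamics model keeps identity through clutter or brief occlusion.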

Sensor fusion is the ability to make sense of multiple disparate data streams from sensors with varying sensing modalities, sampling rates, fields of view, and time horizons. In the context of navigation these sensors often include, though are not limited to, optical cameras, lidar, radar, and even sonar systems. When leveraged correctly, a sensor fusion approach increases both the capability and robustness of navigational systems, as one sensor either fills the sensing gap of another or corroborates it with a redundant reading, allowing for more precise state estimation and interpretation of the world. One simple example is the use of GPS and on-board cameras for navigation: GPS readings allow for coarse locating and path planning, while cameras provide sensing for immediate, short-term relative navigation and obstacle avoidance. Another example is using multiple cameras, where each camera may span a different field of view or sense in a different spectrum, such as visible or infrared. The camera inputs must then be fused together to make sense of a world which is being observed from different perspectives.
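One way to make the GPS-plus-camera example concrete is inverse-variance weighting, a textbook rule for fusing independent estimates of the same quantity. The sensor names and variance numbers below are assumptions for illustration only.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates of one quantity."""
    w = 1.0 / np.asarray(variances, dtype=float)      # weight = 1 / variance
    fused = np.sum(w * np.asarray(estimates)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)                       # fused result is more certain
    return fused, fused_var

# Coarse-but-absolute GPS fix (variance 25 m^2) fused with a precise
# camera-derived fix relative to a known landmark (variance 1 m^2):
print(fuse([104.0, 100.5], [25.0, 1.0]))  # -> (approx. 100.63, approx. 0.96)
```

The fused variance is smaller than either input's, which is the quantitative version of "redundant readings allow for more precise state estimation."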

Autonomous systems and vehicles must navigate scenarios containing static terrain as well as dynamically evolving conditions. This requires not only sensors capable of seeing the surrounding world, but also the ability to perceive objects, along with a contextual decision-making framework that allows the system to evaluate its surroundings and respond intelligently according to mission goals and constraints. Once objects are detected and identified, they must be processed appropriately. Is the object a static feature that can serve as a reliable landmark? Is it a transient feature that will soon be gone? In novel environments these problems must be solved live as the autonomous system performs simultaneous localization and mapping (SLAM). Once the surrounding world is mapped, it must be interpreted. Is the environment hazardous? Is a particular object a threat to avoid or a goal to move towards? These questions are asked by decision-making and path-planning systems, which try to balance the immediate, short-term needs of navigating the current surroundings against long-term mission objectives. Classic path-planning and search techniques, such as potential fields and the A* search algorithm, can be paired with modern machine learning and computer vision, with the latter aiding scene interpretation and the former decision making, as in the sketch below.
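To ground the path-planning side, here is a compact A* search over a 4-connected occupancy grid, using Manhattan distance as the admissible heuristic. The grid representation and unit step costs are simplifying assumptions for the sketch.

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid (0 = free cell, 1 = obstacle).

    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    def h(cell):  # Manhattan distance: admissible for unit-cost 4-connectivity
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]   # entries: (f = g + h, g, cell, parent)
    parent, best_g = {}, {start: 0}
    while frontier:
        f, g, cell, prev = heapq.heappop(frontier)
        if cell in parent:
            continue                          # already expanded via a cheaper path
        parent[cell] = prev
        if cell == goal:                      # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cell))
    return None
```

A learned perception module can feed this kind of planner by deciding which cells count as obstacles, matching the division of labor described above.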

Selected Project: Sky Compass

Operational Need – The DOD is in need of a portable Comprehensive Sky Compass (CSC) that addresses the need for rapid, high-accuracy azimuth estimation for Far-Target Location (FTL) systems. In scenarios where GPS is challenged and unavoidable magnetic fields degrade the precision of a traditional compass, celestial reference offers a robust way to determine rotation from true north. This rotation, or azimuth, is essential for long-distance alignment, surveying, and precision targeting, and it is playing an increasing role in autonomous navigation. Non-magnetic, local measurement of absolute azimuth is ideal as a method to correct for drift errors inherent to inertial navigation units. Across multiple applications, the fusion of complementary azimuth sensor feeds can greatly enhance the accuracy and robustness of geolocation, with graceful degradation.

Lynntech Solution – An accurate, compact, and inexpensive Comprehensive Sky Compass sensor that leverages the latest generation of Commercial Off-The-Shelf (COTS) miniaturized camera units. These small electro-optical imagers offer high sensitivity, high resolution, and high-quality optics. Lynntech’s solution combines a practical, readily manufacturable polarimeter design with a state-of-the-art image processing and sensor fusion approach to determine azimuth, day or night.

Revolutionary Performance – Lynntech’s Comprehensive Sky Compass project developed software and compact hardware to determine true north, non-magnetically, at any time of day or night. This technology enables survey teams and forward observers to instantly specify the relative direction from point A to point B in an absolute, global frame of reference, without the need for multiple measurements over time. Baseline data was gathered day and night using a polarimeter camera, which allows for estimation of the sun’s direction, even when the sun is out of sight.
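The snippet below shows the standard Stokes-parameter computation a polarimetric camera enables: the angle of linear polarization recovered per pixel is what constrains the sun's direction via the sky's polarization pattern. This is textbook polarimetry offered as a sketch, not Lynntech's actual processing pipeline.

```python
import numpy as np

def angle_of_linear_polarization(i0, i45, i90, i135):
    """Per-pixel angle of linear polarization (AoLP) from the four analyzer
    images (0, 45, 90, 135 degrees) of a division-of-focal-plane polarimeter.

    The skylight polarization pattern is organized around the sun, so a map
    of AoLP constrains solar azimuth even when the sun itself is out of view.
    """
    q = i0 - i90                     # Stokes parameter Q
    u = i45 - i135                   # Stokes parameter U
    return 0.5 * np.arctan2(u, q)    # AoLP in radians
```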