The core technologies behind much of our work span multiple projects and multiple domains. These technologies take the form of techniques, methods, processes, algorithms, and computational and simulation tools that we use every day to further our research and commercialization goals. As our projects evolve and change, our tools must change with them. We’re continuously developing our core technologies and honing our skills while expanding our toolkit.
Technologies and Expertise
Novel Sensing Modalities: Innovation is driving the rapid development and commercial production of new sensors for a variety of modalities across the electromagnetic and acoustic spectra. Some offer improvements in inertial measurement or chemical sensing, or integrate multiple modalities for systems control. Others are new sensors for already widely used modalities, offering better resolution, higher signal-to-noise ratio (SNR), greater efficiency, smaller size, or lower cost. Still others are the first to bring a new modality to market (e.g., polarized imaging chips) or are developed for specific scientific or medical instruments. The disruptive effects of novel sensing technology can, on the one hand, address capability gaps in the defense sector and, on the other, enable automated or remote-control systems to perform existing and new tasks better. Ultimately, making use of novel sensors requires developing modality-specific AI that matches the performance of state-of-the-art computer vision solutions while also contending with big data issues.
Dynamics-based Modeling and Simulation: Control systems and vehicle tracking systems leverage state estimation models to predict the movement of vehicles. When developing these systems, testing is greatly aided by the ability to model vehicle kinematic behavior and generate ground-truth trajectory data. This data validates estimation models and, in machine learning applications, can also serve as training data. We use a wide variety of simulation environments, sometimes writing basic simulations in low-level scripting and other times using high-level simulation environments such as Unreal Engine.
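For instance, a minimal sketch of generating ground-truth trajectory data from a simple kinematic model might look like the following (the constant-speed, constant-turn-rate motion model, parameter values, and noise levels are illustrative assumptions, not a specific project configuration):

```python
import numpy as np

def simulate_trajectory(x0, y0, speed, heading, turn_rate, dt, n_steps):
    """Generate a ground-truth 2D trajectory from a constant-speed,
    constant-turn-rate kinematic model (all parameters are illustrative)."""
    states = np.empty((n_steps, 2))
    x, y, psi = x0, y0, heading
    for k in range(n_steps):
        x += speed * np.cos(psi) * dt
        y += speed * np.sin(psi) * dt
        psi += turn_rate * dt
        states[k] = (x, y)
    return states

# True trajectory plus noisy position measurements, usable for validating an
# estimator or as training data for a learning-based tracker.
truth = simulate_trajectory(x0=0.0, y0=0.0, speed=15.0, heading=0.3,
                            turn_rate=0.05, dt=0.1, n_steps=600)
measurements = truth + np.random.default_rng(1).normal(scale=2.0, size=truth.shape)
```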
Estimation and Filtering: Filtering and estimation go hand in hand. Estimation seeks to understand dynamical system behavior and produce trustworthy models that predict how the system evolves. This is often achieved by incorporating measurements from sensors, sometimes pulling in data from a wide variety of sources through a process known as “data fusion”. Performing this data fusion and reconciling any discrepancies between the predicted system state and the state estimated from measurements is generally done with an estimation filter. This filter, often a Kalman Filter or one of its variants, combines system dynamics models, sensor models, statistical noise models, and sensor measurements to produce an optimal estimate of the system’s evolution over time.
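As a concrete illustration, a minimal linear Kalman filter for a 2D constant-velocity target might be sketched as follows (the dynamics, noise covariances, and synthetic measurements are placeholder assumptions chosen only to make the example self-contained):

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],      # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # measurement model: positions only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process-noise covariance (tuning parameter)
R = 4.0 * np.eye(2)               # measurement-noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle given prior state x, covariance P, measurement z."""
    # Predict: propagate the state and covariance through the dynamics model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weight the measurement against the prediction via the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Synthetic straight-line truth plus noisy position measurements.
rng = np.random.default_rng(0)
truth = np.stack([np.linspace(0, 50, 200), np.linspace(0, 20, 200)], axis=1)
measurements = truth + rng.normal(scale=2.0, size=truth.shape)

x, P = np.zeros(4), 100.0 * np.eye(4)
for z in measurements:
    x, P = kalman_step(x, P, z)
```

In practice richer dynamics and sensor models, or nonlinear variants such as the extended or unscented Kalman filter, take the place of these toy matrices, but the predict/update structure stays the same.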
Remote Sensing and Geo-spatial Analytics: Remote sensing is the capability to monitor a situation from afar using airborne or spaceborne systems. Geo-spatial analytics involves extracting information about our changing Earth, typically from remote sensing platforms. With the maturation of the commercial satellite industry, the amount of data produced by such platforms has become a big data problem: far too much for expert users to process by hand. Remote sensing workflows therefore need to be streamlined to exploit these information streams in as close to real time as possible. The Intelligent Systems Group is looking at applications of this technology in ISR and defense, as well as in planning humanitarian missions after natural disasters, environmental monitoring, tracking illegal fishing and smuggling, and even predicting volcanic eruptions and landslides.
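As a small, hypothetical example of such an analytic, a per-pixel vegetation index (NDVI) can be computed directly from red and near-infrared bands; the arrays below are random placeholders standing in for bands that would normally be read from a satellite product (e.g., with a reader such as rasterio):

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index: a basic per-pixel analytic
    used for vegetation and environmental monitoring from satellite bands."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Placeholder bands; in a real workflow these would come from satellite imagery
# and be processed tile-by-tile as part of a streaming pipeline.
red_band = np.random.default_rng(0).integers(0, 4096, size=(512, 512))
nir_band = np.random.default_rng(1).integers(0, 4096, size=(512, 512))

vegetation_mask = ndvi(red_band, nir_band) > 0.3   # crude vegetation threshold
```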
Computer Vision: The goal of computer vision is to make sense of the world around us by analyzing image and video data. At Intelligent Systems, we work extensively with visual data and spend a significant portion of our time both manipulating and interpreting it; for us, computer vision is equal parts image processing and perception. Image processing is often necessary to format visual data correctly for analysis or for image/video augmentation. Perception is the next step, where algorithms and learning models try to make sense of the data presented to them. Without perception there can be no detection, classification, or identification, and thus no ability to inform decision-making systems. We work with many classic perception algorithms, such as edge detectors and Hough transforms, as well as contemporary deep learning-based approaches.
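A minimal sketch of such a classic pipeline, pairing Canny edge detection with a probabilistic Hough transform in OpenCV, is shown below (the file paths and thresholds are illustrative placeholders):

```python
import cv2
import numpy as np

# Classic perception pipeline: edge detection followed by a probabilistic
# Hough transform to extract straight line segments from a grayscale frame.
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
blurred = cv2.GaussianBlur(image, (5, 5), 0)             # suppress sensor noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

if lines is not None:
    overlay = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(overlay, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imwrite("detected_lines.png", overlay)
```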
Deep Learning: We have yet to see whether the buzz and promise of artificial intelligence have reached their zenith. Although there are many ways to implement AI, the second wave of AI has crested with the use of Deep Learning. Deep Learning is a branch of machine learning that can automatically learn features, and the statistics of their distributions, to perform certain tasks. Deep neural networks, and computational graph networks in general, are powerful but have high data requirements. Convolutional Neural Network and Transformer architectures have been used to win many computer vision and natural language processing competitions, and in certain tests deep learning has demonstrated super-human performance in searching for pathologies in medical images. However, deep learning systems can suffer from a lack of generality and explainability, and they have certain vulnerabilities. Our deep learning research involves applying existing architectures to new modalities, using advanced data augmentation to get the most out of the data on hand, evaluating bias in pre-trained models, and pursuing AI security research.
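As a simple illustration of the kind of architecture involved, the sketch below defines a small convolutional classifier in PyTorch (the layer sizes, input resolution, and number of classes are arbitrary assumptions, not a production model):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal convolutional classifier: a stacked conv/ReLU/pool feature
    extractor followed by a fully connected classification head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = SmallCNN(num_classes=10)
logits = model(torch.randn(4, 3, 32, 32))   # batch of 4 dummy 32x32 RGB images
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (4,)))
loss.backward()
```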
Generative Modeling: Data domains containing sensitive information, such as satellite imagery or medical data, tend to be highly restricted. This creates a data shortage problem, limiting the data’s use in training AI/ML models. A critical need in the AI/ML community is therefore for methods that can generate synthetic data which preserves the statistical behavior of empirical data while sidestepping privacy and security constraints. At Lynntech, we use generative models, such as generative adversarial networks (GANs), to address sensitive data issues. Generative models learn the underlying data distributions of a domain and have proven useful in generating novel examples of high-quality data. Lynntech has a proven track record of using generative models to solve the data shortage problem in sensitive data domains, including the use of models for style and domain transfer, denoising, super-resolution, data validation, and anomaly detection.
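A bare-bones sketch of the adversarial training loop behind a GAN is shown below (the network sizes, data dimensionality, and the random stand-in for the restricted real data are all illustrative assumptions):

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # hypothetical sizes for a flattened data domain

# The generator maps random noise to synthetic samples; the discriminator scores
# samples as real or synthetic. Training alternates between the two.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_batch):
    """One adversarial update on a batch drawn from the (sensitive) real data."""
    n = real_batch.size(0)
    # Discriminator: push real samples toward 1, generated samples toward 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator into scoring generated samples as real.
    fake = G(torch.randn(n, latent_dim))
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

gan_step(torch.randn(32, data_dim))   # dummy batch standing in for restricted data
```

Real use cases replace these toy networks with convolutional or conditional architectures suited to the data domain, but the alternating discriminator/generator updates are the core of the approach.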