
4Sight: The Intelligent Sensing Platform

AEye’s development of 4Sight™ began with the need to create intelligent solutions for a full range of dynamic sensing applications. Our pioneering work led to a new solid-state, adaptive LiDAR that captures data more accurately, and set the requirements for artificial intelligence-driven software to direct and process that data as efficiently and effectively as possible. This formed the building blocks of the 4Sight platform and its products.

4Sight captures more information with less data, enabling faster, more accurate and more reliable perception. This intelligence is enabled by AEye’s patented bistatic architecture, which keeps transmit and receive channels separate, allowing 4Sight to optimize for both. As each pulse is transmitted, the receiver is told where and when to look for its return – enabling artificial intelligence to be introduced into the sensing process at the point of acquisition. Ultimately, this establishes the 4Sight platform as adaptive – allowing it to focus on what matters most in the field of view.
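
As a rough illustration of this transmit/receive pairing, the sketch below (not AEye’s implementation; the names and numbers are assumptions) shows how a pulse’s steering direction can be paired with a receive time gate derived from an expected range, so the receiver listens only where and when a return is plausible:

```python
# A minimal, illustrative sketch of pairing each transmitted pulse with a
# "where and when to look" receive gate. Names and values are assumptions.
from dataclasses import dataclass

SPEED_OF_LIGHT_M_S = 299_792_458.0

@dataclass
class PulseSchedule:
    azimuth_deg: float        # where the pulse is steered
    elevation_deg: float
    gate_open_s: float        # when the receiver starts listening
    gate_close_s: float       # when it stops

def schedule_pulse(azimuth_deg: float, elevation_deg: float,
                   expected_range_m: float, range_tolerance_m: float = 20.0) -> PulseSchedule:
    """Pair a transmit direction with a receive time gate derived from the
    expected round-trip time, instead of integrating the whole scene."""
    t_expected = 2.0 * expected_range_m / SPEED_OF_LIGHT_M_S
    t_tolerance = 2.0 * range_tolerance_m / SPEED_OF_LIGHT_M_S
    return PulseSchedule(azimuth_deg, elevation_deg,
                         gate_open_s=max(0.0, t_expected - t_tolerance),
                         gate_close_s=t_expected + t_tolerance)

# Example: a pulse aimed at a region where an object is expected about 60 m away.
print(schedule_pulse(azimuth_deg=12.5, elevation_deg=-1.0, expected_range_m=60.0))
```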

The result mimics how the human visual cortex focuses on and evaluates the environment, improving the probability of detection and the accuracy of classification.


The Problem with Other LiDAR

Other LiDAR systems are passive and lack intelligence: they do not take into account how conditions evolve, do not know how to balance competing priorities, and often assign every pixel the same priority, causing them to respond poorly to complex or dangerous situations.

These LiDAR systems simply gather data about the environment indiscriminately, passing it to a central processor where 75 to 95 percent of it is discarded because it is redundant or useless. This puts a huge strain on interrogation times, bandwidth, and processing, which introduces latency.

4Sight solves those challenges.

AEye’s 4Sight combines breakthrough innovations to solve critical challenges in perception and path planning.

In its first instantiation, 4Sight combines solid-state, adaptive LiDAR, an optionally fused low-light HD camera, and integrated deterministic artificial intelligence to capture more intelligent information with less data, enabling faster, more accurate, and more reliable perception.

4Sight collects 4 to 8 times the information of conventional, fixed-pattern LiDAR while consuming 5 to 10 times less power. It does this by decreasing how much irrelevant data is conveyed to the motion-planning system – in many cases, by more than 90 percent.

Adaptive LiDAR

4Sight uses the world’s first solid-state, high-performance, adaptive LiDAR.

The 4Sight platform offers extremely fast scanning capabilities and solid-state reliability in a small form factor. Its modular architecture costs a fraction of existing sensors and is designed to be “future proof,” evolving as technology and application requirements change over time. This gives OEMs the flexibility to configure AEye’s 4Sight products for different use cases in automotive, industrial, mobility, rail, trucking, and ITS.

AEye’s Dynamic Vixels In True Color Point Cloud

Optional Integration of 2D Camera and 3D LiDAR

We live in a color-coded world. From lane lines to road signs, our driving infrastructure is built on contrasting color cues. 4Sight’s intelligent approach to sensing creates the unique ability to capture camera pixels and 3D LiDAR voxels together, enabling vehicles to see better than humans.

We call this new, patented data object a Dynamic Vixel™, which captures both RGB and XYZ data at the point of acquisition. This real-time integration of pixels and voxels means the data is handled more quickly, efficiently, and accurately at the sensor level, rather than in later processing. The resulting content empowers deterministic artificial intelligence to evaluate a scene using 2D, 3D, and 4D information to identify location, track objects, and deliver insights with less latency, bandwidth, and compute power.
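
As an illustration of what such a fused data object might look like, here is a minimal, hypothetical sketch that pairs RGB from a camera pixel with XYZ from a LiDAR return at capture time; the field names are assumptions and do not reflect AEye’s actual format:

```python
# A hypothetical sketch of a fused pixel+voxel record along the lines of a
# Dynamic Vixel: color (RGB) from the camera and position (XYZ) from the LiDAR
# combined at acquisition time. Field names are illustrative, not AEye's API.
from dataclasses import dataclass

@dataclass
class FusedReturn:
    x_m: float          # 3D position from the LiDAR return
    y_m: float
    z_m: float
    intensity: float    # LiDAR return intensity
    r: int              # color sampled from the co-registered camera pixel
    g: int
    b: int
    timestamp_s: float  # common capture time, so 2D and 3D never drift apart

def fuse(lidar_return: dict, camera_pixel: tuple, t: float) -> FusedReturn:
    """Combine one LiDAR return with the camera pixel it projects onto."""
    return FusedReturn(lidar_return["x"], lidar_return["y"], lidar_return["z"],
                       lidar_return["intensity"], *camera_pixel, timestamp_s=t)

example = fuse({"x": 12.1, "y": -0.4, "z": 1.3, "intensity": 0.87}, (255, 210, 40), t=0.0321)
print(example)
```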

Deterministic Artificial Intelligence and Software-Configurability

Deterministic artificial intelligence enables 4Sight to provide full scene coverage while collecting and analyzing only the data that matters – without missing anything.

With adaptive targeting and intelligence in the data collection process, the flexible 4Sight platform can increase resolution and place it where it is needed throughout a scene, radically improving the probability of detection and the accuracy of classification. Ultimately, this scalable, software-configurable approach allows the system to capture more intelligent information with less data – enabling faster, more accurate, and more reliable sensing and path planning, which is key to the safe rollout of autonomous vehicles and other applications.
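
One way to picture software-configurable resolution placement is a simple budget split across prioritized regions, sketched below with purely illustrative names and numbers (not AEye’s scheduling logic):

```python
# An illustrative sketch of "placing resolution where it matters": divide a fixed
# per-frame point budget across regions of interest in proportion to their priority.
def allocate_points(total_points: int, regions: dict) -> dict:
    """Split a frame's point budget across named regions by priority weight."""
    total_weight = sum(regions.values())
    return {name: round(total_points * weight / total_weight)
            for name, weight in regions.items()}

frame_budget = allocate_points(
    total_points=100_000,
    regions={
        "pedestrian_crossing": 5.0,   # high priority: dense sampling
        "lane_ahead": 3.0,
        "roadside": 1.0,
        "sky_and_background": 0.2,    # low priority: sparse coverage only
    },
)
print(frame_budget)
```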

The 4 Levels of 4Sight

The 4Sight platform can be configured into specialized levels, each designed to meet the needs of specific use cases or applications in mobility, ADAS, trucking, transit, rail, intelligent traffic systems (ITS), aerospace, and beyond.

4Sight at Design

iDAR "At Design"
  • Single scan pattern determined at design
  • Deterministic pattern customized for specific use case

4Sight at Design enables customers to create a single, deterministic scan pattern that delivers optimal information for a specific use case. This level is particularly beneficial for repetitive-motion applications, such as powerline or pipeline inspection (which cameras alone cannot achieve), or robots operating in a closed-loop environment that is unlikely to experience anything unexpected. Through 4Sight at Design, the customer’s unique, deterministic scan pattern provides precisely the information needed for their repetitive-pattern application.
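
A minimal sketch of what a design-time pattern could look like, with illustrative angles and step sizes rather than an AEye specification, is a fixed raster with a denser band around the area of interest:

```python
# A hypothetical single scan pattern fixed at design time, e.g. for a repetitive
# inspection task: a coarse raster with a denser band where the asset sits.
def design_time_pattern() -> list:
    """Return a fixed list of (azimuth, elevation) angles, baked in at design."""
    pattern = []
    for el in range(-10, 11):                  # coarse coverage, 1 degree rows
        step = 0.25 if -2 <= el <= 2 else 1.0  # denser sampling near the horizon
        az = -60.0
        while az <= 60.0:
            pattern.append((az, float(el)))
            az += step
    return pattern

print(len(design_time_pattern()), "points per frame")
```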

Triggered 4Sight

Triggered iDAR
  • Library of deterministic patterns created at design
  • Select pre-determined scan pattern per situation/use case
  • Scan pattern triggered by external input – map, IMU, speed, weather

With Triggered 4Sight, customers can create a library of deterministic, software-configurable scan patterns at design time, each one addressing a specific use case. Maps, IMU, speed, tilt, weather, and vehicle direction can all trigger the sensor to switch from one scan pattern to another. For example, a customer can create different scan patterns for highway, urban, and suburban driving, as well as an “exit ramp” pattern. In addition, the customer can create scan patterns for those same driving environments that are optimized for bad weather (e.g., a “highway rain” scan pattern versus a “highway sunlight” scan pattern).

Please note: Customers can define and create their own scan patterns, but these must be certified as eye safe by AEye before implementation.
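
The selection logic can be pictured as a simple lookup from external triggers into a pre-certified pattern library; the sketch below uses hypothetical pattern names and thresholds for illustration only:

```python
# An illustrative sketch of triggered pattern selection: deterministic patterns
# defined at design time, chosen at run time from external inputs such as map
# context, speed, and weather. All names and thresholds are hypothetical.
PATTERN_LIBRARY = {
    ("highway", "clear"):   "highway_sunlight_pattern",
    ("highway", "rain"):    "highway_rain_pattern",
    ("urban", "clear"):     "urban_pattern",
    ("urban", "rain"):      "urban_rain_pattern",
    ("exit_ramp", "clear"): "exit_ramp_pattern",
}

def select_pattern(map_context: str, speed_kph: float, weather: str) -> str:
    """Pick a pre-certified scan pattern based on external triggers."""
    if map_context == "highway" and speed_kph < 40.0:
        map_context = "exit_ramp"   # slowing on a highway: assume an exit ramp
    return PATTERN_LIBRARY.get((map_context, weather),
                               PATTERN_LIBRARY[("urban", "clear")])

print(select_pattern("highway", speed_kph=35.0, weather="clear"))  # exit_ramp_pattern
```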

Responsive 4Sight

Responsive iDAR
  • Scan patterns created at design AND created at run-time
  • Camera or radar cues, perception, or higher-level logic changing the scan in real time
  • Feedback loops
  • Non-Deterministic
  • Dynamic ROI

With Responsive 4Sight, scan patterns are created both at design time and at run time. In this level, the entire platform is completely software-configurable and situationally aware, adjusting, in real time, how it scans the scene, where to apply density and extra power, and what scan rate to employ. Feedback loops and cues from other sensors, such as camera and radar, inform the LiDAR to create dense, dynamic Regions of Interest (ROIs) at various points throughout the scene. It can also dynamically alter its scan pattern on the fly. Responsive 4Sight is akin to human perception: the system is intelligent, proactively understanding and interrogating the scene, and perpetually optimizing its own scan patterns and data collection to focus on the information that matters most.
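
A simplified, hypothetical sketch of this cue-to-ROI flow is shown below; the data structures, field names, and revisit rates are assumptions made for the example, not AEye’s design:

```python
# A simplified sketch of run-time ROI creation: cues from camera or radar are
# turned into dense Regions of Interest layered on top of the base scan.
from dataclasses import dataclass

@dataclass
class Cue:
    source: str            # "camera" or "radar"
    azimuth_deg: float     # direction of the detection
    elevation_deg: float
    priority: float        # how urgent the cue is, 0..1

@dataclass
class RegionOfInterest:
    center_az_deg: float
    center_el_deg: float
    half_width_deg: float
    revisit_hz: float      # how often to re-scan this region

def cues_to_rois(cues: list) -> list:
    """Turn sensor cues into dense ROIs; higher priority gets a faster revisit rate."""
    return [RegionOfInterest(c.azimuth_deg, c.elevation_deg,
                             half_width_deg=3.0,
                             revisit_hz=10.0 + 40.0 * min(c.priority, 1.0))
            for c in cues]

rois = cues_to_rois([Cue("camera", 5.0, 0.0, priority=0.9),
                     Cue("radar", -20.0, -1.0, priority=0.3)])
print(rois)
```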

Predictive 4Sight

Predictive iDAR
  • Scan patterns created at design AND created at run-time
  • Scene based predictive scanning
  • Forecasting based
  • NN predictions
  • Solves edge case scenarios

Predictive 4Sight takes what Responsive 4Sight offers but looks ahead, making it even smarter about where (and what) it interrogates. In this level, basic perception can be distributed to the edge of the sensor network. Just like a human, the system understands the motion of everything it sees, which enables it to deliver more information with less data, focusing its energy on the most important objects in a scene while paying attention to everything else in its periphery. The end result is Motion Forecasting through neural networks. Like human intuition, Predictive 4Sight can “sense” (i.e., predict) where an object will be at different times in the future, enabling the vehicle to solve even the most challenging edge cases.
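
To illustrate the idea of scanning where an object will be rather than where it was, the sketch below substitutes a simple constant-velocity forecast for the neural-network prediction described above; all values are assumed:

```python
# An illustrative stand-in for motion forecasting: a constant-velocity model is
# used here purely to show the concept of aiming the next dense scan at an
# object's predicted position. The NN-based prediction described in the text
# would replace this simple model.
def forecast_position(position_m: tuple, velocity_m_s: tuple, horizon_s: float) -> tuple:
    """Predict a tracked object's position `horizon_s` seconds ahead."""
    return (position_m[0] + velocity_m_s[0] * horizon_s,
            position_m[1] + velocity_m_s[1] * horizon_s)

# A pedestrian 15 m ahead drifting toward the lane: aim the next dense scan at
# the forecast position rather than the last measured one.
last_seen = (15.0, 2.0)           # x forward, y left of the vehicle, in meters
velocity = (-0.5, -1.2)           # moving toward the vehicle's path
print(forecast_position(last_seen, velocity, horizon_s=1.0))  # -> (14.5, 0.8)
```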
