The Problem with Other LiDAR
Other LiDAR systems are passive and lack intelligence: they do not account for evolving conditions or balance competing priorities, and they often assign every pixel the same priority. As a result, they respond poorly to complex or dangerous situations.
These systems gather data about the environment indiscriminately and pass it to a central processor, where 75 to 95 percent of it is discarded as redundant or useless. This puts enormous strain on interrogation times, bandwidth, and processing, introducing latency.
iDAR solves those challenges.
AEye’s iDAR combines breakthrough innovations to solve critical challenges in perception and path planning.
In its first instantiation, iDAR combines solid-state, active LiDAR, an optionally fused low-light HD camera, and integrated deterministic artificial intelligence to capture more intelligent information with less data, enabling faster, more accurate, and more reliable perception.
iDAR collects 4 to 8 times the information of conventional, fixed-pattern LiDAR while reducing power consumption by a factor of 5 to 10. It does this by decreasing how much irrelevant data is conveyed to the motion-planning system – in many cases, by more than 90 percent.
iDAR uses the world’s first solid-state, high-performance, active LiDAR.
The iDAR platform offers extremely fast scanning capabilities and solid-state reliability in a small form factor. Its modular architecture costs a fraction of existing sensors and is designed to be “future-proof,” evolving as technology and application requirements change over time. This gives OEMs the flexibility to configure AEye’s products for different autonomous or partially automated use cases in automotive, industrial, mobility, rail, trucking, ITS, and beyond.
Dynamic Vixels – integration of an optional 2D camera and 3D active LiDAR
We live in a color-coded world. From lane lines to road signs, our driving infrastructure is built on contrasting color cues. AEye’s intelligent approach to sensing creates the unique ability to capture camera pixels and 3D LiDAR voxels together, enabling vehicles to see better than humans can.
We call this new, patented data object a Dynamic Vixel, which captures both RGB and XYZ data at the point of acquisition. This real-time integration of pixels and voxels means the data is handled more quickly, efficiently, and accurately at the sensor level, rather than in later processing. The resulting content empowers deterministic artificial intelligence to evaluate a scene using 2D, 3D, and 4D information to identify location, track objects, and deliver insights with less latency, bandwidth, and computing power.
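The idea of fusing a camera pixel with a LiDAR voxel at acquisition time can be sketched as a single data object. This is an illustrative sketch only – the field names, the `fuse` helper, and the tuple layout are assumptions, not AEye's actual data format:

```python
from dataclasses import dataclass

# Hypothetical sketch of a Dynamic Vixel: one record that fuses a LiDAR
# return (XYZ, in metres) with its co-registered camera pixel (RGB, 0-255).
@dataclass(frozen=True)
class DynamicVixel:
    x: float
    y: float
    z: float
    r: int
    g: int
    b: int

def fuse(point: tuple, pixel: tuple) -> DynamicVixel:
    """Combine one LiDAR return with its co-registered camera pixel."""
    return DynamicVixel(*point, *pixel)

# e.g., a yellow lane marker roughly 12 m ahead of the sensor
vixel = fuse((12.4, -0.8, 1.5), (255, 210, 0))
```

Because color and geometry travel together from the moment of capture, downstream perception never has to re-associate a pixel with a point.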
Deterministic Artificial Intelligence and Software-Configurability
Embedded deterministic AI enables iDAR to provide full scene coverage while collecting and analyzing only the data that matters – without missing anything.
With active targeting and intelligence in the data collection process, the flexible iDAR platform can increase and place resolution where needed throughout a scene, radically improving the probability of detection and the accuracy of classification. Ultimately, this scalable, software-configurable approach allows the system to capture more intelligent information with less data – enabling faster, more accurate, and more reliable sensing and path planning – key to the safe rollout of autonomous vehicles.
The 4 Levels of iDAR
The iDAR platform can be broken down into four simple levels, each designed to meet the needs of specific use cases or applications in mobility, ADAS, trucking, transit, construction, rail, intelligent traffic systems (ITS), aerospace, and beyond.
Level 1: iDAR at Design
- Single scan pattern determined at design
- Deterministic pattern customized for a specific use case
iDAR at Design enables customers to create a single, deterministic scan pattern that delivers optimal information for a specific use case. This level is particularly beneficial for repetitive-motion applications, such as powerline or pipeline inspection (which cameras alone cannot achieve), or for robots in a closed-loop environment that is unlikely to encounter anything unexpected. Through iDAR at Design, a customer’s unique, deterministic scan pattern delivers precisely the information needed for their repetitive-pattern application.
Level 2: Triggered iDAR
- Library of deterministic patterns created at design
- Pre-determined scan pattern selected per situation/use case
- Scan pattern triggered by external input – map, IMU, speed, weather
With Triggered iDAR, customers can create a library of deterministic, software-configurable scan patterns at design time, each addressing a specific use case. Maps, IMU, speed, tilt, weather, and vehicle direction can all trigger the sensor to switch from one scan pattern to another. For example, a customer can create different scan patterns for highway, urban, and suburban driving, as well as an “exit ramp” pattern. The customer can also create variants of those same patterns optimized for bad weather (e.g., a “highway rain” scan pattern versus a “highway sunlight” scan pattern).
Please note: Customers can define and create their own scan patterns, but they will need to be certified eye safe by AEye before implementation.
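The trigger-based selection described above amounts to a lookup from external inputs into a pre-certified pattern library. A minimal sketch, assuming a hypothetical library keyed by environment and weather (the pattern names and fallback rule are illustrative, not AEye's):

```python
# Hypothetical library of pre-certified, deterministic scan patterns,
# keyed by (environment, weather). Names are illustrative only.
PATTERN_LIBRARY = {
    ("highway", "rain"): "highway_rain_scan",
    ("highway", "clear"): "highway_sunlight_scan",
    ("urban", "rain"): "urban_rain_scan",
    ("urban", "clear"): "urban_scan",
    ("exit_ramp", "clear"): "exit_ramp_scan",
}

def select_pattern(environment: str, weather: str) -> str:
    """Pick the scan pattern matching the current trigger state.

    Falls back to the clear-weather pattern for the same environment
    when no weather-specific pattern exists.
    """
    key = (environment, weather)
    return PATTERN_LIBRARY.get(key, PATTERN_LIBRARY[(environment, "clear")])

print(select_pattern("highway", "rain"))  # → highway_rain_scan
```

Because every pattern in the library is fixed at design time, each one can be certified eye-safe before the vehicle ever switches to it.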
Level 3: Responsive iDAR
- Scan patterns created at design AND at run time
- Camera cues, radar cues, perception, or higher-level logic change the scan in real time
- Feedback loops
- Dynamic ROIs
With Responsive iDAR, scan patterns are created both at design time and at run time. The entire platform is software-configurable and situationally aware, adjusting in real time how it scans the scene, where to apply density and extra power, and what scan rate to employ. Feedback loops and other sensors, such as camera and radar, inform the LiDAR, which creates dense, dynamic Regions of Interest (ROIs) at various points throughout the scene and can alter its scan pattern on the fly. Responsive iDAR is akin to human perception: the system proactively understands and interrogates the scene, perpetually optimizing its own scan patterns and data collection to focus on the information that matters most.
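The feedback loop above can be sketched as a cue from another sensor spawning a dense ROI that locally boosts scan density. All names, angle conventions, and the 4x density figure below are assumptions for illustration, not AEye parameters:

```python
from dataclasses import dataclass

# Hypothetical Region of Interest: an azimuth/elevation box (degrees)
# with a relative point-density multiplier.
@dataclass
class ROI:
    az_min: float
    az_max: float
    el_min: float
    el_max: float
    density: float

BACKGROUND_DENSITY = 1.0  # baseline scan density everywhere else

def roi_from_camera_cue(box, boost=4.0):
    """Turn a camera detection (az/el bounding box) into a dense ROI."""
    az_min, az_max, el_min, el_max = box
    return ROI(az_min, az_max, el_min, el_max, BACKGROUND_DENSITY * boost)

def density_at(rois, az, el):
    """Scan density in a given direction: boosted inside an ROI, baseline outside."""
    for r in rois:
        if r.az_min <= az <= r.az_max and r.el_min <= el <= r.el_max:
            return r.density
    return BACKGROUND_DENSITY

# A camera detects a pedestrian dead ahead; the LiDAR densifies that region.
rois = [roi_from_camera_cue((-5.0, 5.0, -2.0, 2.0))]
```

The key design point is that the periphery is never dropped: directions outside every ROI still receive the baseline scan, so extra attention never means a blind spot.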
Level 4: Predictive iDAR
- Scan patterns created at design AND at run time
- Scene-based predictive scanning
- Forecasting-based
- Neural network predictions
- Solves edge-case scenarios
Predictive iDAR takes what Responsive iDAR offers and looks ahead, making it even smarter about where (and what) it interrogates. At this level, basic perception can be distributed to the edge of the sensor network. Like a human, Predictive iDAR understands the motion of everything it sees, enabling the system to deliver more information with less data: it focuses its energy on the most important objects in a scene while monitoring everything else in its periphery. The end result is motion forecasting through neural networks. Like human intuition, Predictive iDAR can “sense” (i.e., predict) where an object will be at different times in the future, enabling the vehicle to solve even the most challenging edge cases.
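Motion forecasting, at its simplest, means extrapolating a tracked object's position to a future time so the sensor can place density where the object will be. As a hedged sketch, a constant-velocity model stands in here for the neural-network predictor described above; the `forecast` function, its track format, and the numbers are all illustrative:

```python
# Minimal constant-velocity motion forecast. A track is a list of
# (t, x, y) observations, oldest first; we extrapolate from the last two.
def forecast(track, dt):
    """Predict (x, y) position dt seconds after the last observation."""
    (t0, x0, y0), (t1, x1, y1) = track[-2:]
    vx = (x1 - x0) / (t1 - t0)  # estimated velocity from the last two samples
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt, y1 + vy * dt)

# A vehicle observed 10 m then 12 m ahead over 0.5 s (4 m/s closing):
track = [(0.0, 10.0, 0.0), (0.5, 12.0, 0.0)]
print(forecast(track, 1.0))  # → (16.0, 0.0)
```

A learned predictor would replace the straight-line extrapolation with scene context (lanes, signals, object class), but the interface is the same: track in, future position out, scan density directed accordingly.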
iDAR is smarter than LiDAR