iDAR

iDAR enables self-driving cars to see only what matters.

iDAR is smarter than LiDAR

AEye overcomes the pitfalls of conventional LiDAR for autonomous vehicles with an intelligent approach to artificial perception. We began by asking ourselves:

“What is the most intelligent way to design a perception platform for autonomous vehicles that is safe, cost-effective, and reliable?”

Answering this question led to the development of iDAR, AEye’s innovative artificial perception platform, which mimics how the human visual cortex focuses on and evaluates the environment around the vehicle, driving conditions, and road hazards.

iDAR stands for Intelligent Detection and Ranging

Current LiDAR systems rely on an array of independent sensors that collectively produce a tremendous amount of data. Collecting and assembling these data sets by aligning, analyzing, correcting, downsampling, and translating them into actionable information that can safely guide the vehicle requires lengthy processing time and massive computing power.

In addition, these systems lack intelligence and gather information indiscriminately about the environment. They do not take into account how conditions evolve or know how to balance competing priorities, often responding poorly to complex or dangerous situations by assigning every pixel the same priority.

iDAR solves those challenges.

In its first instantiation, iDAR takes solid-state agile LiDAR, fuses it with an optional low-light HD camera, and then integrates artificial intelligence to create a smart, software-definable platform that enables faster, more accurate, and more reliable artificial perception.

AEye’s iDAR combines breakthrough innovations to solve critical challenges in perception and path planning.

The bottom line is this: correctly implemented, iDAR increases the speed of a car’s artificial perception by up to 10 times while reducing power consumption by a factor of 5 to 10. It does this by decreasing how much data is conveyed to the motion-planning system, in many cases by more than 90 percent.

Agile LiDAR

iDAR uses the world’s first solid-state agile LiDAR (Light Detection and Ranging).

AEye’s iDAR platform offers extremely fast scanning and automotive reliability in a small form factor that is designed to be mass produced at a fraction of the cost of existing sensors.

While AEye’s agile LiDAR delivers ranges up to 300 meters at high resolution, its ability to dynamically adapt its scanning patterns and to identify and focus on specific Regions of Interest (ROIs) within a single frame is what really gets perception engineers excited. Integrating embedded artificial intelligence enables the platform to learn and adapt as driving situations change.

Leveraging the unique capabilities of Dynamic Vixel data (described below) means AEye’s agile LiDAR can target and identify objects within a scene 10 to 20 times more accurately than LiDAR-only products, and deliver the information 10 times faster.

The result is greater reliability and longer range than existing LiDAR sensors along with greater safety and performance at lower cost.

Dynamic Vixels – integration of an optional 2D camera and 3D agile LiDAR

AEye’s iDAR platform enables vehicles to see and perceive like people so self-driving cars can intelligently assess hazards and respond to changing conditions. An innovation at the core of this capability is Dynamic Vixels, a new sensor data type that combines pixels from 2D cameras with voxels from LiDAR.

This real-time integration of pixels and voxels means the data is handled more quickly, efficiently, and accurately at the sensor level, rather than in later processing. The resulting content empowers artificial intelligence to evaluate a scene using 2D, 3D, and 4D information to identify location, track objects, and deliver insights with lower latency, less bandwidth, and less computing power.

In essence, using this new data type within iDAR’s architecture makes it possible to give self-driving cars better-than-human reflexes without the distraction factor that we humans are prone to. As a result, an autonomous vehicle with iDAR can more intelligently assess and respond to conditions in the environment, improving safety and reliability.
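
To make this concrete, here is a minimal Python sketch of what a fused pixel/voxel record might look like. The field names and layout are illustrative assumptions on our part; AEye’s actual Dynamic Vixel format is proprietary and is produced at the sensor level rather than in application code.

  from dataclasses import dataclass

  @dataclass
  class DynamicVixel:
      """Illustrative fusion of one camera pixel with one LiDAR voxel.

      Hypothetical layout; the real Dynamic Vixel format is proprietary.
      """
      x: float          # LiDAR return position in meters (vehicle frame)
      y: float
      z: float
      intensity: float  # LiDAR return intensity
      r: int            # color of the co-registered camera pixel
      g: int
      b: int
      t_ns: int         # per-point timestamp: the "4D" (time) component

  # Because color, geometry, and time arrive already fused at the sensor,
  # downstream perception never has to re-align separate camera and
  # LiDAR streams before evaluating a scene.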

Artificial Intelligence and Software Definability

Embedded artificial intelligence enables iDAR to use thousands of existing and custom computer vision algorithms for insights that can be leveraged by path-planning software.

Current LiDAR-only solutions lack this intelligence and face severe limitations in their ability to respond to changing conditions.

These systems simply collect as much data as possible, without discretion, and pass it to a central processor, where 75 to 95 percent of it is discarded as redundant or useless. This puts a huge strain on interrogation times, bandwidth, and processing, causing latency.

With agile targeting and intelligence built into the data collection process, the iDAR platform selectively collects and analyzes only the data that matters, without missing anything.

This scalable, software-definable approach allows the system to capture more intelligent information with less data, enabling faster, more accurate, and more reliable perception and path planning, which is key to the safe rollout of autonomous vehicles.

The 4 Levels of iDAR

The iDAR platform can be broken down into four simple levels, each designed to meet the needs of specific use cases or applications in mobility, ADAS, trucking, transit, construction, rail, intelligent traffic systems (ITS), aerospace, and beyond.

iDAR at Design

iDAR "At Design"
  • Single scan pattern determined at design
  • Deterministic pattern customized for specific use case

iDAR at Design enables customers to create a single, deterministic scan pattern that delivers optimal information for a specific use case. This level is particularly beneficial for repetitive-motion applications, such as powerline or pipeline inspection (which cameras alone cannot achieve), or robots in a closed-loop environment that is unlikely to encounter anything unexpected. Through iDAR at Design, a customer’s unique, deterministic scan pattern (sketched below) gives them precisely the information they need for their repetitive-pattern application.
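
As a rough illustration, the Python sketch below expresses such a design-time pattern as a fixed grid of scan angles. The function and parameter names are hypothetical rather than AEye’s API; the essential point is that the pattern is computed once, certified, and then replayed unchanged on every frame.

  # Hypothetical design-time definition of a single deterministic raster
  # pattern: a fixed grid of (azimuth, elevation) angles in degrees.
  def make_raster_pattern(az_min=-60.0, az_max=60.0, az_step=0.5,
                          el_min=-15.0, el_max=15.0, el_step=1.0):
      pattern = []
      el = el_min
      while el <= el_max:
          az = az_min
          while az <= az_max:
              pattern.append((az, el))
              az += az_step
          el += el_step
      return pattern

  # Generated once at design time and fixed in the sensor configuration;
  # the sensor replays exactly this pattern, frame after frame.
  PIPELINE_INSPECTION_PATTERN = make_raster_pattern()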

Triggered iDAR

  • Library of deterministic patterns created at design
  • Select pre-determined scan pattern per situation/use case
  • Scan pattern triggered by external input – map, IMU, speed, weather

With Triggered iDAR, customers can create a library of deterministic, software-definable scan patterns at design time, each one addressing a specific use case. Maps, IMU, speed, tilt, weather, and direction of the vehicle can all trigger the sensor to switch from one scan pattern to another, as sketched below. For example, a customer can create different scan patterns for highway, urban, and suburban driving, as well as an “exit ramp” pattern. In addition, the customer can create scan patterns for those same driving environments, but optimized for bad weather (e.g., a “highway rain scan pattern” vs. a “highway sunlight scan pattern”).
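
A minimal Python sketch of the idea follows. The pattern names and trigger inputs are illustrative assumptions, not AEye’s actual interface; what matters is that every pattern in the library is deterministic and defined ahead of time, and external signals merely select among them.

  # Hypothetical design-time library of deterministic scan patterns,
  # keyed by driving environment and weather (all names illustrative).
  SCAN_PATTERN_LIBRARY = {
      ("highway",   "clear"): "highway_sunlight_scan",
      ("highway",   "rain"):  "highway_rain_scan",
      ("urban",     "clear"): "urban_scan",
      ("suburban",  "clear"): "suburban_scan",
      ("exit_ramp", "clear"): "exit_ramp_scan",
  }

  def select_scan_pattern(environment, weather):
      """Select a pre-defined pattern from external triggers.

      'environment' might come from an HD map and 'weather' from a rain
      sensor; IMU, speed, and tilt could be added as further keys.
      """
      fallback = SCAN_PATTERN_LIBRARY[(environment, "clear")]
      return SCAN_PATTERN_LIBRARY.get((environment, weather), fallback)

  assert select_scan_pattern("highway", "rain") == "highway_rain_scan"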

Please note: Customers can define and create their own scan patterns, but the patterns will need to be certified as eye-safe by AEye before implementation.

Responsive iDAR

  • Scan patterns created at design AND created at run-time
  • Camera, radar, perception, or higher-level logic cues change the scan in real time via feedback loops
  • Non-Deterministic
  • Dynamic ROI

With Responsive iDAR, scan patterns are created both at design time and at run time. In this level, the entire platform is completely software-definable and situationally aware, adjusting in real time how it scans the scene, where to apply density and extra power, and what scan rate to employ. Feedback loops and other sensors, such as camera and radar, inform the LiDAR to create dense, dynamic Regions of Interest (ROIs) at various points throughout the scene, as sketched below. It can also dynamically alter its scan pattern on the fly. Responsive iDAR is akin to human perception: the system is intelligent, proactively understanding and interrogating the scene, and perpetually optimizing its own scan patterns and data collection to focus on the information that matters most.
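
The Python sketch below illustrates one such feedback loop under simple assumptions: a camera detection cues the LiDAR to lay a dense ROI over the detected object for the next frame, while the rest of the scene keeps a sparse background scan. All names and parameters are hypothetical.

  from dataclasses import dataclass

  @dataclass
  class ROI:
      az_center: float           # ROI center azimuth, degrees
      el_center: float           # ROI center elevation, degrees
      width: float               # angular width, degrees
      height: float              # angular height, degrees
      density_multiplier: float  # point density relative to background

  def rois_from_camera_cues(detections):
      """Turn camera detections into dense ROIs for the next LiDAR frame."""
      rois = []
      for det in detections:  # det: {"azimuth", "elevation", "confidence"}
          # Lower-confidence cues get denser interrogation: the LiDAR
          # spends extra shots to confirm or reject what the camera saw.
          boost = 4.0 if det["confidence"] < 0.5 else 2.0
          rois.append(ROI(det["azimuth"], det["elevation"],
                          width=5.0, height=3.0, density_multiplier=boost))
      return rois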

Predictive iDAR

  • Scan patterns created at design AND created at run-time
  • Scene based predictive scanning
  • Forecasting based
  • Neural network (NN) predictions
  • Solves edge case scenarios

Predictive iDAR takes what is offered in Responsive iDAR but looks ahead and, therefore, is even smarter about where (and what) it interrogates. In this level, basic perception can be distributed to the edge of the sensor network. Just like a human, Predictive iDAR understands the motion of everything it sees, which enables the system to deliver more information with less data, focusing its energy on the most important objects in a scene while paying attention to everything else in its periphery. The end result of Predictive iDAR is Motion Forecasting through neural networks. Like human intuition, Predictive iDAR can “sense” (i.e., predict) where an object will be at different times in the future, enabling the vehicle to solve even the most challenging edge cases.
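
As a rough illustration of the idea, though not of AEye’s neural networks, the Python sketch below substitutes a simple constant-velocity model: given an object’s position and instant true velocity, it predicts where the object will be, so the next frame’s dense ROI can be steered to where the object will appear rather than where it was.

  def forecast_position(position, velocity, dt):
      """Constant-velocity forecast of (x, y, z) after dt seconds.

      A stand-in for Predictive iDAR's neural-network forecasting; the
      interface and units (meters, m/s) are illustrative assumptions.
      """
      return tuple(p + v * dt for p, v in zip(position, velocity))

  # A car 20 m ahead, closing at 8 m/s and drifting left at 0.5 m/s:
  # aim the next frame's ROI at its predicted location 100 ms from now.
  predicted = forecast_position((20.0, -3.0, 0.0), (-8.0, 0.5, 0.0), 0.1)
  # predicted == (19.2, -2.95, 0.0)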

A Platform for Perception Innovation

The world’s first commercially available 2D/3D perception platform designed to run in the sensors of autonomous vehicles.

Basic perception can now be distributed to the edge of the sensor network, enabling data collection in real time and enhancing existing centralized perception software platforms by reducing latency, lowering costs, and supporting functional safety.

Perception advancements will be made available through a software reference library, which includes the following features that will be resident in AEye’s 4Sight A (ADAS) and 4Sight M (Mobility) sensors:

  • Detection: Identification of objects (e.g., cars, pedestrians) in the 3D point cloud and camera imagery. The system accurately estimates each object’s centroid, width, height, and depth to generate a 3D bounding box.
  • Classification: Classifying the type of detected objects. This helps in further understanding the motion characteristics of those objects.
  • Instance Segmentation: Further classifying each point in the scene to identify the specific object those points belong to. This is especially important for accurately identifying finer details, such as lane divider markings on the road.
  • Tracking: Following objects through space and time. This helps the vehicle keep track of objects that could intersect its path.
  • Range/Orientation: Identifying where an object is relative to the vehicle and how it is oriented. This helps the vehicle contextualize the scene around it.
  • Instant True Velocity: Leveraging the benefits of agile LiDAR to capture the speed and direction of the object’s motion relative to the vehicle. This provides the foundation for motion forecasting.
  • Lane Marking Detection: Detection and classification of every type of lane marking, ensuring safe vehicle navigation.
  • Drivable Area: Detection of the empty space in front of the vehicle up to the next road obstacle (e.g., another vehicle or a pedestrian), as well as detection of the lane markings on either side of the vehicle or in the lane to its left or right.
  • Motion Forecasting: Forecasting where each object will be at different times in the future. This helps the vehicle assess the risk of collision and chart a safe course, as the sketch after this list illustrates.
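
To show how two of these outputs might combine downstream, the Python sketch below derives a time-to-collision estimate from Range and Instant True Velocity. This calculation is our illustration, not a feature of the reference library.

  def time_to_collision(range_m, closing_speed_mps):
      """Seconds until impact at the current closing speed.

      Illustrative only: range comes from Range/Orientation and closing
      speed from Instant True Velocity. Returns infinity if the object
      is holding distance or receding.
      """
      if closing_speed_mps <= 0.0:
          return float("inf")
      return range_m / closing_speed_mps

  ttc = time_to_collision(range_m=40.0, closing_speed_mps=16.0)  # 2.5 s
  if ttc < 3.0:
      print("Elevated collision risk: forecast the object's path and replan.")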

iDAR is smarter than LiDAR.