Frequently Asked Questions

Product Features & Capabilities

What is the main challenge addressed in AEye's 'False Positive' use case?

The 'False Positive' use case addresses the challenge of advanced driver assistance systems (ADAS) and autonomous vehicles reliably detecting and classifying objects—such as a balloon floating across the road—that may not pose a real threat. Conventional perception systems often struggle with these edge cases, leading to unnecessary braking or missed detections. AEye's iDAR technology is designed to resolve these scenarios by accurately identifying and classifying such objects, improving overall safety.

How does AEye's iDAR technology prevent false positives in ADAS and autonomous driving?

AEye's iDAR technology prevents false positives by intelligently prioritizing the classification of detected objects. When a soft target like a balloon is detected, iDAR immediately flags it with a Dynamic Region of Interest (ROI), increasing the density of laser pulses in that area. This enables rapid and accurate classification, ensuring only relevant data is sent to the domain controller for optimal path planning. This approach minimizes unnecessary braking or evasive maneuvers, improving safety and efficiency.

What are Dynamic Regions of Interest (ROI) in AEye's iDAR system?

Dynamic Regions of Interest (ROI) in AEye's iDAR system are areas where the lidar sensor dynamically increases scan density upon detecting an object of interest. This allows the system to gather more detailed data for accurate classification and decision-making, especially in complex or ambiguous scenarios.

How does iDAR combine camera and lidar data for better object classification?

iDAR combines the camera's 2D pixel data with lidar's 3D voxel data to create Dynamic Vixels. This fusion enables the system to refine the lidar point cloud around objects of interest, effectively eliminating irrelevant points and improving classification accuracy, even in challenging lighting or environmental conditions.

What role do feedback loops play in AEye's iDAR sensors?

Feedback loops in AEye's iDAR sensors allow the system to cue itself or other sensors for additional data collection. If the camera lacks sufficient data due to poor lighting, the lidar will generate a dense pattern of laser pulses to gather more information about the target, ensuring accurate classification without relying solely on camera input.

How does AEye's iDAR system handle soft, shape-shifting objects like balloons?

AEye's iDAR system excels at detecting and classifying soft, shape-shifting objects by dynamically increasing scan density and using advanced perception algorithms. This ensures accurate identification and appropriate vehicle response, even when conventional sensors like cameras and radar may fail.

Why do conventional camera and radar systems struggle with false positives?

Conventional camera systems struggle because they rely on pixel-based detection, which is highly sensitive to lighting and object appearance. Radar systems often miss soft or non-reflective objects, as they are designed to ignore stationary or non-metallic items. Together, these limitations make it difficult to reliably detect and classify ambiguous objects, leading to false positives or missed detections.

How does AEye's iDAR improve safety in autonomous vehicles?

AEye's iDAR improves safety by enabling rapid and accurate detection and classification of objects, including challenging edge cases. By minimizing false positives and ensuring only relevant data is sent for path planning, iDAR helps autonomous vehicles make safer decisions in real-world scenarios.

What is the value of using AI-embedded lidar sensors like iDAR?

AI-embedded lidar sensors like iDAR prioritize object classification and can flexibly adjust point cloud density around objects of interest. This ensures that only the most important data is processed, reducing false positives and improving the efficiency and safety of ADAS and autonomous driving systems.

How does iDAR's cueing mechanism enhance object detection?

iDAR's cueing mechanism allows the lidar to prompt the camera for additional information about an object's color, size, and shape, and vice versa. This collaborative approach ensures a richer dataset for perception algorithms, leading to more accurate object classification and safer vehicle responses.

What is the main focus of AEye's use case on false positives?

The main focus is to address edge cases in ADAS and autonomous driving where conventional perception systems struggle to reliably detect and classify objects. AEye's iDAR technology improves safety by successfully perceiving and responding to these challenges, such as detecting and classifying a balloon floating across the road.

How does AEye's iDAR system minimize unnecessary braking or maneuvers?

By accurately classifying objects and reducing false positives, iDAR ensures that only real threats trigger braking or evasive actions. This leads to smoother driving and prevents unnecessary incidents caused by misclassification of harmless objects.

What is the advantage of using lidar over camera and radar for object detection in ADAS?

Lidar is more resilient to lighting conditions and object material, providing precise 3D position data. Unlike cameras and radar, lidar can accurately detect and classify soft or non-reflective objects, making it more reliable for challenging scenarios in ADAS and autonomous driving.

How does AEye's iDAR system handle ambiguous or edge-case scenarios?

iDAR dynamically adjusts scan density and uses advanced perception algorithms to gather sufficient data for accurate classification, even in ambiguous or edge-case scenarios. This ensures reliable detection and response where conventional systems may fail.

What is the benefit of Dynamic Vixels in iDAR's perception system?

Dynamic Vixels are created by combining 2D camera pixels with 3D lidar voxels, allowing iDAR to focus its point cloud on objects of interest. This results in more efficient and accurate object detection and classification, especially in complex environments.

How does AEye's iDAR system support optimal path planning?

By ensuring only the most relevant and accurately classified data is sent to the domain controller, iDAR enables optimal path planning for autonomous vehicles. This reduces the risk of unnecessary or unsafe maneuvers and enhances overall driving safety.

What is the challenge described in the false positive use case involving a balloon?

The challenge involves a vehicle equipped with ADAS encountering a balloon floating across the road. The perception system must quickly detect and classify the object to determine whether it poses a threat. Conventional systems often struggle with this scenario, either failing to detect the balloon or misclassifying it, which can lead to unnecessary braking or accidents.

How does AEye's iDAR system ensure only relevant data is sent to the domain controller?

iDAR uses edge-based classification algorithms and dynamically adjusts point cloud density around objects of interest. This ensures that only the most important and relevant data is sent to the domain controller, optimizing processing and decision-making.

What are the limitations of low-density scanning lidar in false positive scenarios?

Low-density scanning lidar may not provide enough data points quickly enough to accurately classify soft, shape-shifting objects like balloons. This can result in missed detections or misclassifications, making it less effective for handling edge cases compared to AEye's iDAR system.

Use Cases & Industry Applications

What industries benefit from AEye's lidar technology?

Industries that benefit from AEye's lidar technology include automotive, trucking, smart infrastructure, aviation, defense, rail, and logistics. These sectors use AEye's solutions to enhance safety, enable autonomous systems, and improve operational efficiency.

What are some real-world use cases for AEye's lidar solutions?

Real-world use cases include detecting pedestrians in challenging scenarios, obstacle avoidance, differentiating between real and false obstacles, abrupt stop detection, and adapting to new challenges through software updates. Detailed examples are available in AEye's case studies and use case pages.

How does AEye's lidar technology support smart infrastructure projects?

AEye's lidar technology enhances smart infrastructure by providing precise measurement imaging and reliable detection in complex environments. This supports connected environments and intelligent transportation systems (ITS) for smarter, safer cities.

Can AEye's lidar solutions be used in logistics and supply chain management?

Yes, AEye's lidar solutions are used in logistics to improve operational efficiency and safety, reduce false positives, and enable advanced automation in transportation and supply chain management.

Are there case studies demonstrating AEye's lidar performance in adverse conditions?

Yes, AEye provides case studies such as 'False Positive' and 'Abrupt Stop Detection' that demonstrate reliable lidar performance in challenging conditions like rain, darkness, and fog.

How does AEye's lidar technology help reduce operational costs?

AEye's software-defined lidar allows for over-the-air updates, reducing the need for costly hardware changes. This future-proof approach minimizes operational costs and ensures long-term adaptability.

What types of vehicles can benefit from AEye's lidar solutions?

AEye's lidar solutions are suitable for a wide range of vehicles, including passenger cars, trucks, autonomous vehicles, and vehicles used in logistics, defense, and rail applications.

Does AEye provide use cases for its technology?

Yes, AEye provides several use cases to demonstrate the practical applications of its technology, such as detecting cargo protruding from a vehicle and avoiding false positives. These are available on AEye's resources page.

Where can I find case studies about lidar applications for ITS use cases?

Case studies about lidar applications for Intelligent Transportation Systems (ITS) use cases are available in AEye's lidar case studies document, which can be downloaded from their resources page.

What is the value of AEye's iDAR in preventing false positives?

AEye's iDAR prevents false positives by intelligently prioritizing classification of detected objects, adjusting point cloud density, and using edge-based classification algorithms. This ensures only the most relevant data is sent to the domain controller, minimizing unnecessary braking or evasive maneuvers.

Technical Documentation & Resources

Where can I find technical documentation about AEye's lidar solutions?

Technical documentation, including specification sheets, white papers, and case studies, is available on AEye's resources page. For example, the Apollo solution spec sheet and white papers on lidar technology can be downloaded directly.

What types of learning materials does AEye provide?

AEye provides a variety of learning materials, including white papers, videos, case studies, and technical documentation. These resources help users understand the technology and its applications.

Where can I find AEye's resource library?

AEye's resource library, containing documents, videos, and other materials, is available on their resources page.

What kind of resources does AEye offer for technical evaluation?

AEye offers specification sheets, white papers, validation reports, and case studies for technical evaluation. These resources provide detailed insights into product performance and application scenarios.

Where can I find general resources about AEye's offerings?

General resources, including case studies, white papers, and technical documents, can be found on AEye's resources page.

Does AEye provide downloadable case studies and white papers?

Yes, AEye provides downloadable case studies and white papers on topics such as lidar performance, edge cases, and technology comparisons. These are available on the resources page.

Competition & Differentiation

How does AEye's lidar technology differ from Velodyne's?

Velodyne offers traditional lidar systems with fixed scan patterns, focusing on high-resolution imaging but lacking software-defined architecture. In contrast, AEye's lidar features dynamic scan patterns, software-defined customization, and over-the-air updates, offering greater adaptability and future-proofing.

What sets AEye apart from Luminar in lidar technology?

Luminar focuses on long-range lidar for autonomous vehicles with a primarily hardware-based approach. AEye differentiates itself with dynamic scan patterns, adaptability to challenging environments, and flexible mounting options, making it suitable for a wider range of applications.

How does AEye compare to Innoviz in terms of lidar customization?

Innoviz offers solid-state lidar with a focus on automotive applications but has limited software-defined customization. AEye's lidar solutions are customizable without hardware changes, providing greater versatility and adaptability for different industries.

What are the key differentiators of AEye's lidar technology?

Key differentiators include dynamic scan patterns, software-defined architecture, future-proof design with over-the-air updates, high performance (ultra-long-range detection and high resolution), and flexibility in placement. These features provide scalability, adaptability, and efficiency across industries.

Implementation & Support

How easy is it to integrate AEye's lidar solutions with existing systems?

AEye's solutions are designed for ease of integration, with comprehensive technical support, validation testing tools, and user education resources to ensure a smooth and efficient onboarding process.

What support does AEye provide during implementation?

AEye provides direct technical assistance, extensive training resources, and validation testing tools to help customers quickly and confidently adopt its products.


False Positive


PERCEPTION INNOVATION
Resolving Edge Cases in ADAS & Autonomous Driving

Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather conditions—that autonomous vehicle technology must navigate both safely and efficiently. These are edge cases, and they occur with surprising frequency. In order to achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today's conventional perception solutions to handle reliably. We'll then describe how AEye's software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.

AEye Edge Case: False Positive

Challenge: A Balloon Floating Across The Road

A vehicle equipped with an advanced driver assistance system (ADAS) is traveling down a residential block on a sunny afternoon when the air is relatively still. A balloon from a child's birthday party comes floating across the road. It drifts down and ends up suspended almost motionless in the lane ahead. If the driver of an ADAS vehicle isn't paying attention, this is a dangerous situation. The vehicle's perception system must make a series of quick assessments to avoid causing an accident. Not only must it detect the object in front of it, it must also classify it to determine whether it's a threat. The vehicle's domain controller can then decide that the balloon is not a threat and allow the car to drive through it.

How Current Solutions Fall Short

Today's advanced driver assistance systems (ADAS) will experience great difficulty detecting the balloon or classifying it fast enough to react in the safest way possible. Typically, ADAS vehicle sensors are trained not to activate the brakes for every anomaly on the road, because it is assumed that a human driver is paying attention. As a result, in many cases, they will allow the car to drive into such objects. In contrast, level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they'll either undertake evasive maneuvers or slam on the brakes, creating an unnecessary incident or causing an accident.

Camera. It is extremely difficult for a camera to distinguish between soft and hard objects; everything is just pixels. In this case, perception training is practically impossible because, in the real world, soft objects can appear in an almost infinite variety of shapes, forms, and colors—possibly even taking on human-like shapes in poor lighting conditions. Camera detection performance is completely dependent on proper training across all possible permutations of a soft target's appearance in combination with the right conditions. Sun glare, shade, or nighttime operation will negatively impact performance.

Radar. An object's material is of vital significance to radar. A soft object that contains no metal and has little radar reflectivity returns almost no radio energy, so radar will miss the balloon altogether. Additionally, radar is typically trained to disregard stationary objects, because otherwise it would detect thousands of objects as the vehicle advances through the environment. So, even if the balloon is made from reflective metallic plastic, because it's floating almost motionless in the air, there may not be enough movement for the radar to detect it. Therefore, radar will provide little, if any, value in correctly classifying the balloon and assessing it as a potential threat.
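
To make the stationary-object filtering concrete, here is a minimal Python sketch of a moving-target gate of the kind described above. The ego speed, threshold, and function names are invented for illustration; this is not how any specific radar product is implemented.

```python
# Illustrative moving-target gate: returns whose ground-frame radial speed
# is near zero are discarded as stationary clutter. All numbers are assumed.
EGO_SPEED_MPS = 12.0        # hypothetical vehicle speed
STATIONARY_GATE_MPS = 1.0   # hypothetical "stationary clutter" threshold

def keep_return(measured_closing_speed_mps: float) -> bool:
    """A stationary object dead ahead closes at roughly ego speed; after
    subtracting ego motion its ground speed is ~0, so it gets filtered."""
    ground_speed = measured_closing_speed_mps - EGO_SPEED_MPS
    return abs(ground_speed) > STATIONARY_GATE_MPS

print(keep_return(12.1))  # balloon drifting in still air -> False (dropped)
print(keep_return(20.0))  # oncoming vehicle -> True (kept)
```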

Camera + Radar. Together, camera and radar would be unable to assess the scenario and react correctly every time. The camera would try to detect the balloon. However, there would be many scenarios where the camera will identify it incorrectly or not at all depending on lighting and perception training. The camera will frequently be confused—it might identify the balloon as a pedestrian or something else for which the vehicle needs to brake. And radar will be unable to eliminate the camera confusion because it typically won’t detect the balloon at all.

LiDAR. Unlike radar and camera, LiDAR is much more resilient to lighting conditions and an object's material. LiDAR would be able to precisely determine the balloon's 3D position in space to centimeter-level accuracy. However, conventional low-density scanning LiDAR falls short when it comes to providing sufficient data fast enough for classification and path planning. Typically, LiDAR detection algorithms require many laser points on an object over several frames before it registers as a valid object. A low-density LiDAR that passively scans the surroundings horizontally can struggle to achieve the required number of detections on soft, shape-shifting objects like balloons, as the rough arithmetic below illustrates.
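
The following back-of-the-envelope calculation, using assumed (not AEye-published) angular resolutions, shows how few returns a fixed, passive scan pattern places on a 0.3 m balloon as range grows:

```python
import math

H_RES_DEG = 0.2   # assumed horizontal angular resolution
V_RES_DEG = 0.6   # assumed vertical spacing between scan lines
TARGET_M = 0.3    # approximate balloon diameter

def returns_per_frame(range_m: float) -> int:
    """Grid cells of the scan pattern that land on the target each frame."""
    angular_size_deg = math.degrees(2 * math.atan(TARGET_M / (2 * range_m)))
    cols = int(angular_size_deg / H_RES_DEG)
    rows = int(angular_size_deg / V_RES_DEG)
    return cols * rows

for r in (20, 40, 80):
    print(f"{r} m: ~{returns_per_frame(r)} returns/frame")
# ~4 returns at 20 m; at 40 m and beyond the balloon subtends less than one
# vertical scan line, so a passive pattern can register zero returns in a frame.
```

With detection algorithms wanting many points over several frames, a handful of returns per frame leaves little margin for timely classification.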

Successfully Resolving the Challenge with iDAR

In this scenario, iDAR excels because it can gather sufficient data at the sensor level for classifying the balloon and determining its distance, shape, and velocity before any data is sent to the domain controller. This is possible because as soon as there’s a single LiDAR detection of the balloon, iDAR will immediately flag it with a Dynamic Region of Interest (ROI). At that point, the LiDAR will generate a dense pattern of laser pulses in the area, interrogating the balloon for additional information. All this takes place while iDAR also continues to track the background environment to ensure it never misses new objects.
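
As a rough illustration of the scheduling idea (a minimal sketch with invented classes and numbers, not AEye's actual shot scheduler), a single detection spawns an ROI that receives a denser shot allocation while the background pattern keeps scanning:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float  # meters forward
    y: float  # meters lateral
    z: float  # meters up

@dataclass
class RegionOfInterest:
    center: Detection
    half_width_deg: float    # angular extent to interrogate around the object
    density_multiplier: int  # extra shots vs. the background pattern

BACKGROUND_DENSITY = 1  # nominal shots per cell in the passive scan

def on_first_detection(det: Detection) -> RegionOfInterest:
    """Flag a new return with a Dynamic ROI so the next frame concentrates
    laser pulses on it while the background keeps being scanned."""
    return RegionOfInterest(center=det, half_width_deg=2.0,
                            density_multiplier=8)

def schedule_frame(rois: list) -> list:
    """One frame's shot plan: dense interrogation in ROIs, nominal elsewhere."""
    plan = [("background", BACKGROUND_DENSITY)]
    plan += [("roi", roi.density_multiplier) for roi in rois]
    return plan

roi = on_first_detection(Detection(x=40.0, y=0.5, z=1.2))  # balloon-like return
print(schedule_frame([roi]))  # [('background', 1), ('roi', 8)]
```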

Software Components and Data Types

Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. In order to effectively “see” the balloon, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps iDAR refine the LiDAR point cloud on the balloon, effectively eliminating all the irrelevant points.
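
A minimal sketch of the pixel-plus-voxel idea, assuming a simple pinhole camera model with invented intrinsics (AEye's actual Dynamic Vixel construction is proprietary): project each lidar return into the image and attach the pixel it lands on.

```python
import numpy as np

# Assumed pinhole intrinsics for a 1280x720 camera (illustrative values)
FX = FY = 1000.0
CX, CY = 640.0, 360.0
WIDTH, HEIGHT = 1280, 720

def project(point_xyz):
    """Project a camera-frame 3D point (z forward) to integer pixel coords."""
    x, y, z = point_xyz
    if z <= 0:
        return None  # behind the camera
    u, v = FX * x / z + CX, FY * y / z + CY
    if 0 <= u < WIDTH and 0 <= v < HEIGHT:
        return int(u), int(v)
    return None

def make_vixels(points, image):
    """Fuse each lidar voxel with the RGB pixel it projects onto."""
    vixels = []
    for p in points:
        uv = project(p)
        if uv is not None:
            u, v = uv
            vixels.append({"xyz": p, "rgb": image[v, u]})
    return vixels

image = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)      # placeholder frame
points = np.array([[0.3, -0.2, 40.0], [1.0, 0.0, -5.0]])  # in view / behind
print(len(make_vixels(points, image)))  # 1: only the visible return fuses
```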

Cueing. For safety purposes, it's essential to classify soft targets at range, because their identities determine the vehicle's specific and immediate response. To generate a dataset rich enough to apply perception algorithms for classification, as soon as the LiDAR detects an object, it will cue the camera for deeper information about the object's color, size, and shape. The perception system will then review the pixels, running algorithms to define the object's possible identities. To gain additional insights, the camera cues the LiDAR for additional data, prompting it to allocate more shots to the object.
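
Sketched as a handshake (the message contents and the 0.7 confidence threshold are invented for illustration, not AEye interfaces), the cueing flow looks roughly like this:

```python
def lidar_detect():
    """First lidar return on an unknown object (fabricated values)."""
    return {"range_m": 42.0, "points": 1}

def camera_inspect(cue):
    """Lidar cues the camera: report the object's apparent color/size/shape.
    Low confidence models an ambiguous soft target."""
    return {"color": "red", "apparent_size_px": 18, "confidence": 0.4}

def lidar_interrogate(cue):
    """Camera cues the lidar back: allocate extra shots to the object."""
    return {"points": 64, "extent_m": 0.3, "velocity_mps": 0.1}

detection = lidar_detect()                 # 1. single lidar detection
appearance = camera_inspect(detection)     # 2. lidar cues the camera
if appearance["confidence"] < 0.7:         # 3. identity still ambiguous
    detail = lidar_interrogate(detection)  # 4. camera cues the lidar
    print("classify with fused data:", {**appearance, **detail})
```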

Feedback Loops. Intelligent iDAR sensors are capable of cueing each other for additional data, and they are also capable of cueing themselves. If the camera lacks data (due to light conditions, etc.), the LiDAR will generate a feedback loop that tells the sensor to “paint” the balloon with a dense pattern of laser pulses. This enables the LiDAR to gather enough data about the target’s size, speed, and direction to effectively aid the perception system in classifying the object without the benefit of camera data.
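
A minimal sketch of the self-cueing loop, assuming a hypothetical point-count requirement and return rate (neither comes from AEye's documentation):

```python
import random

MIN_POINTS_TO_CLASSIFY = 30  # assumed perception requirement

def shots_on_target(density: int) -> int:
    """Stand-in for firing `density` pulses at the ROI and counting returns."""
    return sum(random.random() < 0.6 for _ in range(density))  # ~60% hit rate

def paint_target(camera_usable: bool) -> int:
    """If camera data is unusable, keep requesting denser patterns until the
    accumulated point count supports classification."""
    density, points = 16, 0
    while not camera_usable and points < MIN_POINTS_TO_CLASSIFY:
        points += shots_on_target(density)
        density *= 2  # the feedback loop escalates density each pass
    return points

print(paint_target(camera_usable=False))  # classification-ready without camera
```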

The Value of AEye’s iDAR

LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. When iDAR registers a single detection of a soft target in the road, its priority is classification. To avoid false positives, iDAR will schedule a series of LiDAR shots in that area to determine whether it's a balloon or something else, like a cement bag, tumbleweed, or pedestrian. iDAR can flexibly adjust point cloud density on and around objects of interest and then run classification algorithms at the edge of the network. This ensures only the most important data is sent to the domain controller for optimal path planning.
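
As a closing illustration (a toy filter with an invented rule and class list, not AEye's perception stack), classification at the edge means confidently identified non-threats never reach the domain controller:

```python
SOFT_NON_THREATS = {"balloon", "plastic bag", "tumbleweed"}

def classify_at_edge(obj: dict) -> str:
    """Toy classifier over the dense-ROI measurements gathered above."""
    if obj["extent_m"] < 0.5 and obj["rigidity"] == "soft":
        return "balloon"
    return "unknown"

def forward_to_domain_controller(objects: list) -> list:
    """Drop confidently classified non-threats; forward everything else."""
    return [o for o in objects if classify_at_edge(o) not in SOFT_NON_THREATS]

tracked = [
    {"extent_m": 0.3, "rigidity": "soft"},   # the balloon: filtered out
    {"extent_m": 1.7, "rigidity": "rigid"},  # unknown obstacle: forwarded
]
print(forward_to_domain_controller(tracked))  # only the rigid object remains
```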