Frequently Asked Questions

Use Case: A Pedestrian in Headlights & Edge Case Resolution

What is the 'A Pedestrian in Headlights' use case?

This use case describes a scenario where a vehicle equipped with advanced driver assistance systems (ADAS) encounters a pedestrian at night, with oncoming headlights causing glare and challenging the perception system's ability to detect the pedestrian. It highlights the need for advanced perception solutions to address real-world edge cases that conventional systems struggle with.

Why are edge cases like 'A Pedestrian in Headlights' important for ADAS and autonomous driving?

Edge cases, such as a pedestrian suddenly appearing in challenging lighting conditions, occur frequently in real-world driving and are difficult for conventional perception systems to handle reliably. Addressing these scenarios is critical for achieving advanced levels of autonomy and breakthrough ADAS features, as they directly impact safety and system effectiveness.

How do conventional perception systems struggle with the 'A Pedestrian in Headlights' scenario?

Conventional systems like cameras can be blinded by glare from oncoming headlights, causing image sensor saturation (blooming) and loss of critical data. Radar, while unaffected by light, provides only low-resolution detection and struggles to classify soft objects like pedestrians. Combined camera and radar systems may still fail to distinguish pedestrians from other objects in these conditions, leading to missed detections or false positives.

What limitations do traditional LiDAR systems face in this scenario?

Traditional LiDAR systems scan the environment uniformly, giving equal attention to all objects, which can result in insufficient data on critical moving objects like pedestrians. Low-density, fixed scanning LiDAR may not prioritize or track pedestrians quickly enough for the vehicle to respond in time, especially in complex or dynamic environments.

How does AEye's iDAR technology address the 'A Pedestrian in Headlights' challenge?

AEye's iDAR (Intelligent Detection and Ranging) dynamically adjusts LiDAR's temporal and spatial sampling density, selectively focusing on moving objects like pedestrians. When camera data is lost due to glare, iDAR increases LiDAR shots in the affected area, gathering comprehensive data for accurate classification, direction, and velocity estimation—enabling faster and safer responses.

What is 'selective foveation' in AEye's iDAR system?

Selective foveation refers to iDAR's ability to dynamically allocate more LiDAR shots to the most important objects in a scene, such as moving pedestrians, similar to how the human eye focuses on areas of interest. This ensures critical objects are detected and tracked with high accuracy, even in challenging conditions.
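To make the foveation idea concrete, here is a minimal Python sketch of a priority-weighted shot budget, where regions containing moving objects receive far more LiDAR shots per frame than the static background. The class names, weights, and budget are illustrative assumptions, not AEye's implementation.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A candidate scan region with a priority weight (hypothetical model)."""
    name: str
    is_moving: bool           # moving objects (pedestrians, vehicles) get priority
    base_weight: float = 1.0


def allocate_shots(regions, shot_budget, foveation_gain=8.0):
    """Split a per-frame shot budget across regions, foveating on moving objects.

    Moving regions are weighted `foveation_gain` times more heavily than static
    background regions, loosely mimicking how a foveating sensor concentrates
    samples where they matter most.
    """
    weights = [r.base_weight * (foveation_gain if r.is_moving else 1.0) for r in regions]
    total = sum(weights)
    return {r.name: round(shot_budget * w / total) for r, w in zip(regions, weights)}


scene = [
    Region("pedestrian", is_moving=True),
    Region("oncoming_car", is_moving=True),
    Region("parked_cars", is_moving=False),
    Region("buildings", is_moving=False),
    Region("trees", is_moving=False),
]
print(allocate_shots(scene, shot_budget=1000))
# {'pedestrian': 421, 'oncoming_car': 421, 'parked_cars': 53, 'buildings': 53, 'trees': 53}
```

In this toy run, the two moving objects absorb most of the 1,000-shot budget while the three background regions keep only sparse coverage.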

How does iDAR use feedback loops to improve detection in low-light or glare conditions?

When the camera experiences pixel saturation and returns little or no data, iDAR's intelligent perception system generates a feedback loop that instructs the LiDAR to increase scanning in the affected area. This targeted approach helps search for potential threats, such as pedestrians, even when visual data is compromised.

What types of data does iDAR collect to classify and track pedestrians?

iDAR collects data on object position, direction, true velocity, and intensity of reflected laser light. This information enables the system to distinguish pedestrians from other objects, determine their movement, and provide actionable data to the vehicle's domain controller for safe navigation decisions.
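As a rough illustration of what such per-object data might look like when handed to a domain controller, the sketch below defines a hypothetical record and a crude decision check; all field names, units, and thresholds are assumptions for the example and are not AEye's interface.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackedObject:
    """Illustrative per-object record a perception stack might report to a
    domain controller (field names and units are hypothetical)."""
    object_id: int
    classification: str                       # e.g. "pedestrian", "vehicle", "unknown"
    position_m: Tuple[float, float, float]    # x (forward), y (left), z (up), meters
    velocity_mps: Tuple[float, float, float]  # true velocity vector, meters/second
    heading_deg: float                        # direction of travel
    mean_intensity: float                     # mean reflected-laser intensity, 0..1


def requires_intervention(obj: TrackedObject, lane_half_width_m: float = 2.0) -> bool:
    """Crude example decision: flag objects moving laterally toward the ego lane."""
    _, y, _ = obj.position_m
    _, vy, _ = obj.velocity_mps
    moving_toward_lane = (y > 0 and vy < 0) or (y < 0 and vy > 0)
    return abs(y) < 3 * lane_half_width_m and moving_toward_lane


pedestrian = TrackedObject(
    object_id=17,
    classification="pedestrian",
    position_m=(18.0, 3.5, 0.0),
    velocity_mps=(0.2, -1.4, 0.0),
    heading_deg=265.0,
    mean_intensity=0.12,
)
print(requires_intervention(pedestrian))  # True: stepping off the curb toward the lane
```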

How does iDAR's intensity data help distinguish pedestrians from vehicles?

Pedestrians are less reflective than metallic objects like vehicles. iDAR uses intensity data from reflected laser light to help its perception system differentiate soft objects (pedestrians) from the surrounding environment, improving detection accuracy in complex scenes.

What is the value of intelligent LiDAR sensors with AI for perception?

Intelligent LiDAR sensors like iDAR, embedded with AI, can selectively allocate resources to moving objects, classify them, and extract critical attributes such as direction and velocity. This enables faster and more accurate responses to immediate threats, enhancing road safety for pedestrians and other vulnerable road users.

How does AEye's iDAR improve safety compared to conventional perception solutions?

AEye's iDAR improves safety by dynamically focusing on moving objects, providing high-resolution, actionable data even when cameras are blinded or radar is insufficient. This targeted approach enables vehicles to react more rapidly and accurately to pedestrians and other hazards, reducing the risk of accidents in complex scenarios.

Where can I download the technical brief for the 'A Pedestrian in Headlights' use case?

You can download the technical brief for this use case directly from AEye's website.

What other edge cases does AEye address with its technology?

AEye addresses a variety of challenging edge cases, including 'Flatbed Trailer Across Roadway', 'Cargo Protruding from Vehicle', 'False Positive Mitigation', 'Abrupt Stop Detection', and 'Obstacle Avoidance'. Each use case demonstrates how AEye's adaptive LiDAR technology improves detection and safety in complex scenarios. See more at AEye's resources page.

How does AEye's iDAR help prevent false positives in ADAS systems?

iDAR's intelligent perception system can distinguish between real obstacles and irrelevant objects or reflections by focusing LiDAR resources on moving objects and analyzing intensity and velocity data. This reduces unnecessary braking or maneuvers, improving both safety and operational efficiency. See the 'False Positive Mitigation' use case for more details.

How does AEye's technology support advanced driver-assistance systems (ADAS)?

AEye's adaptive LiDAR solutions enable smarter and safer ADAS by providing high-resolution, real-time perception data, even in challenging conditions like glare, darkness, or adverse weather. This allows vehicles to detect and respond to pedestrians and other hazards more effectively, supporting the development of advanced safety features and autonomous driving capabilities.

What makes AEye's LiDAR solutions suitable for challenging environments?

AEye's LiDAR systems are engineered to perform reliably in adverse conditions such as rain, darkness, and fog. Their dynamic scan patterns and software-defined architecture allow them to adapt to changing environments, ensuring consistent performance and operational reliability. Learn more about AEye's products.

How does AEye's technology compare to competitors like Velodyne, Luminar, and Innoviz?

AEye differentiates itself with dynamic scan patterns, software-defined architecture, and over-the-air updates. Unlike Velodyne's fixed scan patterns, Luminar's hardware-focused approach, or Innoviz's limited customization, AEye offers real-time adaptability, high performance, and future-proof technology. For a detailed comparison, see the competition table in AEye's resources.

What are the key features of AEye's LiDAR solutions?

Key features include dynamic scan patterns, ultra-long-range detection (up to one kilometer with Apollo), high resolution, adaptability to challenging environments, software-defined customization, over-the-air updates, and flexible mounting options. These features make AEye's solutions suitable for automotive, smart infrastructure, logistics, and more. See product details.

What industries benefit from AEye's LiDAR technology?

Industries include automotive, trucking, smart infrastructure, aviation, defense, rail, and logistics. AEye's technology enhances safety, operational efficiency, and adaptability across these sectors. See industry-specific case studies at AEye's resources page.

Who are some of AEye's customers and partners?

AEye's customers and partners include Continental (automotive), Sanmina Corporation (manufacturing for non-automotive markets), and NVIDIA (integration with autonomous vehicle platforms). These relationships demonstrate AEye's industry reach and the trust placed in its technology. Learn more.

What technical documentation is available for AEye's solutions?

AEye provides specification sheets, white papers, case studies, and technology insights. Notable resources include the Apollo spec sheet, 'Rethinking the Four Rs of LiDAR', and validation reports. Access all resources at AEye's resources page.

How easy is it to implement AEye's LiDAR solutions?

AEye's products are designed for ease of integration with existing systems, supported by comprehensive technical support, validation testing tools, and user education resources. These features ensure a smooth and efficient onboarding process for customers. Implementation timelines may vary by use case. See resources.

What feedback have customers provided about AEye's ease of use?

Customers benefit from AEye's ease of integration, comprehensive technical support, user education, and validation tools, which together make the adoption process smooth and efficient. Specific testimonials are not cited, but these features are highlighted in AEye's product documentation.

What integrations does AEye offer for its LiDAR solutions?

AEye's Apollo sensor is integrated with the NVIDIA DRIVE AGX platform, including DRIVE AGX Thor, combining long-range LiDAR perception with advanced AI compute. OEM integration options include mounting behind the windshield, on the roof, or in the grille. Read more.

What are some real-world success stories using AEye's technology?

Success stories include 'A Pedestrian in Headlights' (pedestrian detection in glare), 'Flatbed Trailer Across Roadway' (obstacle detection), 'Obstacle Avoidance', 'False Positive Mitigation', and 'Cargo Protruding from Vehicle'. These case studies demonstrate AEye's impact on safety and operational efficiency. See all case studies.

Where can I find more resources about AEye's technology and use cases?

Visit AEye's resources page for white papers, case studies, technical documentation, videos, and more. The resource archive includes news, press releases, and investor information.

How does AEye's software-defined architecture benefit customers?

AEye's software-defined architecture allows customers to customize and scale LiDAR solutions for specific applications without hardware changes. Over-the-air updates ensure long-term relevance and adaptability, reducing the risk of obsolescence and supporting evolving requirements.

What is the Apollo LiDAR system and what are its capabilities?

The Apollo LiDAR system is AEye's flagship product, featuring a small form factor and the ability to detect objects at distances up to one kilometer. It is ideal for highway autopilot, high-speed driving, and applications requiring ultra-long-range detection. Learn more about Apollo.

What is OPTIS™ and how does it differ from Apollo?

OPTIS™ is a full-stack solution from AEye that captures high-resolution 3D images, interprets them, and provides actionable direction in real-time. While Apollo focuses on ultra-long-range detection, OPTIS™ provides a complete perception and action system for advanced applications. Learn more about OPTIS™.

What is the 4Sight Intelligent Sensing Platform?

The 4Sight Intelligent Sensing Platform delivers precise measurement imaging for autonomous vehicles, smart infrastructure, and logistics. It is designed to enhance safety, efficiency, and adaptability, and supports over-the-air updates for future-proofing. Learn more about 4Sight.

How does AEye's LiDAR technology help with obstacle avoidance?

AEye's customizable LiDAR adapts to specific environments and applications, enabling precise detection and avoidance of obstacles. The 'Obstacle Avoidance' use case demonstrates how dynamic scan patterns and high-resolution data improve safety in real-world scenarios. See the use case.

How does AEye's technology adapt to new challenges and scenarios?

AEye's future-proof technology supports over-the-air updates, allowing the system to adapt to new challenges and scenarios without hardware changes. This is demonstrated in use cases like 'Cargo Protruding from Vehicle', where software updates enable the system to address emerging threats. See the use case.

What are the main benefits of using AEye's LiDAR solutions?

Main benefits include enhanced safety, operational efficiency, adaptability to challenging environments, future-proof design, and flexibility in placement. These advantages are supported by real-world case studies and industry adoption. See more benefits.

How does AEye's LiDAR support smart infrastructure and connected environments?

AEye's LiDAR technology enhances smart infrastructure by providing precise, real-time perception data for connected environments, such as intelligent transportation systems (ITS) and smart cities. This supports improved safety, traffic management, and operational efficiency. Download ITS case studies.

Where can I find videos demonstrating AEye's LiDAR in action?

AEye provides demonstration videos, including 'A Pedestrian in Headlights', 'AEye Lidar for Trucking', and 'Virtual Driving Demo for Automotive Applications', on its resources page. Watch videos here.

How can I contact AEye for more information or support?

For more information, technical support, or to request documentation, visit AEye's official website and use the contact options provided.


A Pedestrian in Headlights


PERCEPTION INNOVATION
Resolving Edge Cases in ADAS & Autonomous Driving

Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather conditions—that autonomous vehicle technology must also navigate safely and efficiently. These are edge cases, and they occur with surprising frequency. A pedestrian in headlights is one example. In order to achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We’ll then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.

AEye Edge Case: A Pedestrian in Headlights

Challenge: A Pedestrian in Headlights

A vehicle equipped with an advanced driver assistance system (ADAS) is on the road at night, traveling down a busy city block filled with pedestrians and vehicles. Its driver is distracted by a text message. As it approaches an intersection, the headlights of an oncoming car point directly into the lens of its perception system’s camera—just as a pedestrian steps off the curb. There is now a pedestrian in the headlights. In order to react correctly, the system must not only register the pedestrian, but also send detailed data about her to the domain controller. This data must enable the controller to classify the pedestrian and determine the direction she’s headed and how fast she’s moving, so that it can decide whether to brake or swerve.

How Current Solutions Fall Short

Today’s advanced driver assistance systems will experience great difficulty recognizing these threats or reacting appropriately. They will either fail to detect the pedestrian before it’s too late or, if biased toward braking, constantly slam on the brakes whenever an unclassified object, like a reflection or soft target, enters the vehicle’s path. Such behavior will either create a nuisance or cause accidents.

Camera. A camera’s performance is conditional on the environment. In this scenario, the problem is that the camera’s limited dynamic range may not be able to handle the sharp contrast between the ambient low light and the glare from oncoming headlights. The large difference in light intensity between the surroundings and what’s shining into the camera lens causes some of the image sensor pixels to be saturated—an effect called blooming. As a result, there is little-to-no information from the camera to send to the perception system. And there is potential for obstacles—or pedestrians—to be hiding in that blind spot.
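To make the blooming failure mode concrete, the NumPy sketch below flags blocks of an 8-bit grayscale frame that are mostly saturated, the kind of region in which no usable camera information remains. The block size and thresholds are arbitrary illustrative values, not part of any particular camera pipeline.

```python
import numpy as np

def saturated_blocks(image, threshold=250, min_fraction=0.6, block=32):
    """Return the (row, col) corners of image blocks that are mostly saturated.

    Such blocks carry little or no usable camera information and could be
    flagged as blind spots. Threshold, fraction, and block size are
    illustrative values for an 8-bit grayscale frame.
    """
    h, w = image.shape
    blocks = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = image[r:r + block, c:c + block]
            if (patch >= threshold).mean() >= min_fraction:
                blocks.append((r, c))
    return blocks


# Toy example: a dark night scene with a bright "headlight" blob.
frame = np.full((128, 128), 20, dtype=np.uint8)
frame[40:80, 60:110] = 255         # simulated glare from oncoming headlights
print(saturated_blocks(frame))     # -> blocks covering the bloomed region
```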

Radar. Radar is not adversely affected by light conditions, so oncoming headlights have no impact on its ability to see the pedestrian. However, the manner in which it detects objects—via radio waves—does little to resolve the problem because of its limited resolution. Radar can only provide low-resolution detection, which means that everything it detects appears as an amorphous shape. Moreover, radar’s ability to detect objects depends on their materials: metallic objects, like vehicles, produce strong radar returns, while soft objects, like pedestrians, create weak ones.

Camera + Radar. While combining camera and radar might improve detectability, a system that relies on the two together will still be unable to assess this situation accurately. When the camera fails to detect the pedestrian, the perception system will rely entirely on the radar to send data about the environment to the domain controller. While surrounding vehicles will register clearly, soft objects like pedestrians, especially if they are close to vehicles, will be difficult to distinguish, and certainly not well enough for classification.

LiDAR. LiDAR relies on directed laser light to precisely determine an object’s 3D position in space to centimeter-level accuracy. As such, LiDAR also does not struggle with issues of light saturation. Where conventional LiDAR falls short is that its scans are collected passively: it scans the environment uniformly, giving the same attention to irrelevant objects (parked vehicles, buildings, trees) as to objects in motion (pedestrians, moving vehicles). In this scenario, low-density, fixed-scan LiDAR would be challenged to prioritize and track the pedestrian. As a result, the system would likely be unable to gather sufficient data about her location, velocity, and trajectory fast enough for the vehicle’s controller to respond in time.

Successfully Resolving the Pedestrian Challenge with iDAR

The moment the camera experiences a loss of data, iDAR dynamically changes the LiDAR’s temporal and spatial sampling density, selectively foveating on moving objects—much like the human eye—and comprehensively “painting” them with a dense pattern of laser pulses. At the same time, it keeps tabs on stationary background objects (parked cars, buildings, trees). By selectively allocating additional shots to the most important objects in a scene, like pedestrians, iDAR is able to gather comprehensive data without overloading system resources. This data can then be used to extract additional information about moving objects, such as their identity, direction, and velocity.

Software Components and Data Types

Cueing + Feedback Loops. During difficult or low light conditions, iDAR’s intelligent perception system relies on LiDAR to collect data about stationary and moving objects. When the pixels are saturated and the camera returns little or no data, the system will immediately generate a feedback loop that tells the LiDAR to increase shots in the area of the blooming to search for potential threats.
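One way to picture this cueing loop, under the simplifying assumption that the camera and LiDAR share a boresight, is to project each bloomed camera block into LiDAR angles and request a denser revisit there. The ROI structure, angular mapping, and parameter values below are hypothetical stand-ins rather than AEye's actual API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LidarROI:
    """Hypothetical region-of-interest request for a software-definable LiDAR."""
    azimuth_deg: Tuple[float, float]    # (min, max) horizontal angle
    elevation_deg: Tuple[float, float]  # (min, max) vertical angle
    revisit_hz: float                   # requested revisit rate inside the region
    shot_multiplier: float              # shot-density boost vs. the background scan


def camera_block_to_roi(block_rc, block_size, image_shape,
                        hfov_deg=60.0, vfov_deg=30.0):
    """Map a saturated camera block (row, col) into LiDAR angles, assuming the
    camera and LiDAR share a boresight and the camera covers the given FOV."""
    h, w = image_shape
    r, c = block_rc
    az = ((c / w - 0.5) * hfov_deg, ((c + block_size) / w - 0.5) * hfov_deg)
    el = ((0.5 - (r + block_size) / h) * vfov_deg, (0.5 - r / h) * vfov_deg)
    return LidarROI(azimuth_deg=az, elevation_deg=el,
                    revisit_hz=100.0, shot_multiplier=4.0)


# Feedback loop: each bloomed camera block becomes a dense LiDAR revisit request.
bloomed_blocks = [(32, 64)]  # e.g. output of a saturation detector like the one above
for roi in (camera_block_to_roi(b, 32, (128, 128)) for b in bloomed_blocks):
    print(roi)
```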

True Velocity. Scanning the pedestrian at a much higher rate than the rest of the environment enables iDAR to gather the information it needs, including her motion vector and true velocity. These data are crucial for the domain controller, which needs to determine how fast the pedestrian is moving and in which direction she’s headed.
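The benefit of a higher revisit rate can be shown with a small worked example: given time-stamped positions of the same tracked object, speed and heading fall out of a finite difference, and denser sampling makes the estimate available sooner and more stable. The sketch below assumes positions in meters in the vehicle frame; it is illustrative, not AEye's velocity pipeline.

```python
import math

def velocity_from_track(samples):
    """Estimate speed (m/s) and heading (deg) from time-stamped (t, x, y) positions
    of the same tracked object. More frequent revisits give more samples in the
    same window and therefore an earlier, steadier estimate."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = math.hypot(vx, vy)
    heading = math.degrees(math.atan2(vy, vx)) % 360.0
    return speed, heading


# Pedestrian detections 50 ms apart (x forward, y left, meters).
track = [(0.00, 18.00, 3.50), (0.05, 18.00, 3.43), (0.10, 18.01, 3.36)]
speed_mps, heading_deg = velocity_from_track(track)
print(f"{speed_mps:.2f} m/s, heading {heading_deg:.0f} deg")  # ~1.4 m/s, walking toward the lane
```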

Intensity. iDAR collects data about the intensity of laser light reflecting back to the LiDAR and uses it to make crucial decisions. Pedestrians are inherently less reflective than metallic objects, like vehicles, so laser light bouncing off of them is less intense. In many situations, intensity data can help iDAR’s perception system better distinguish soft objects from the surrounding environment.
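As a toy illustration of how return intensity can help separate soft targets from hard metallic ones, the heuristic below applies invented thresholds to a normalized mean intensity; a real perception system would fuse intensity with geometry and motion cues rather than rely on a single cutoff.

```python
def classify_by_intensity(mean_intensity, low=0.25, high=0.60):
    """Toy heuristic: weak returns suggest soft, low-reflectivity targets
    (pedestrians, clothing); strong returns suggest hard metallic surfaces
    (vehicle bodies, signs). The thresholds are invented for this example."""
    if mean_intensity < low:
        return "likely soft object (e.g. pedestrian)"
    if mean_intensity > high:
        return "likely hard/metallic object (e.g. vehicle)"
    return "ambiguous - defer to geometry and motion cues"


for intensity in (0.12, 0.45, 0.82):
    print(f"{intensity:.2f} -> {classify_by_intensity(intensity)}")
```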

The Value of AEye’s iDAR

Intelligent LiDAR sensors embedded with AI for perception are very different from those that passively collect data. When a vehicle’s perception system loses the benefit of camera data, iDAR selectively allocates additional LiDAR shots to generate a dense pattern of laser pulses around every object that’s in motion. Using this information, the LiDAR can classify objects and extract important attributes, such as direction and velocity. This unprecedented ability to calculate valuable attributes enables the vehicle to react more rapidly to immediate threats and track them through time and space more accurately, making the roads safer for pedestrians.