Frequently Asked Questions

Product Information

What is the main focus of the white paper "Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison"?

This white paper provides a technical comparison between Time of Flight (ToF) and Frequency Modulated Continuous Wave (FMCW) LiDAR systems, evaluating claims about FMCW's benefits and highlighting the strengths of high-performance, agile-scanning ToF systems for autonomous vehicle applications. Read the full white paper here.

Is FMCW LiDAR a new technology?

No, FMCW LiDAR is not a new technology. Its origins date back to the 1960s at MIT Lincoln Laboratory, only a few years after the invention of the laser. While recent advances in laser technology have renewed interest in FMCW, many of its limitations have been known for decades. Source.

How does ToF LiDAR compare to FMCW in terms of object detection range?

ToF LiDAR systems, such as those from AEye, can detect low-reflectivity objects and pedestrians at over 200 meters, vehicles at 300 meters, and large trucks at up to 1 kilometer. FMCW systems on the market currently do not match this level of range, field of view, and point density. Source.

Can FMCW LiDAR measure velocity and range more efficiently than ToF?

This claim is misleading. While FMCW can theoretically measure velocity in a single shot, it cannot instantly measure lateral velocity, which is critical for many automotive scenarios. ToF systems, especially with agile scanning, can efficiently measure both radial and lateral velocities, providing more actionable data for autonomous vehicles. Source.

Does FMCW LiDAR experience less interference than ToF LiDAR?

Contrary to some claims, FMCW LiDAR can suffer more from interference, especially in the time/waveform domain, due to sidelobes and reflections from surfaces like windshields. ToF systems, particularly those with Gaussian pulses, have no time-based sidelobes and handle multi-echo processing more robustly. Source.

Is FMCW LiDAR automotive grade, reliable, and scalable?

This is unproven. While FMCW leverages some mature photonics components, its critical laser requirements and system complexity make it less mature and scalable than ToF LiDAR, which already uses automotive-qualified components and has a robust supply chain. Source.

What are the cost differences between ToF and FMCW LiDAR systems?

ToF systems generally have lower costs due to mature supply chains for lasers, detectors, and optics. FMCW systems require expensive, low phase noise lasers and high-precision optics, making them costlier to produce at scale. Source.

How mature are ToF and FMCW LiDAR technologies?

According to NASA's Technology Readiness Level (TRL) scale, ToF LiDAR components and systems are at TRL 8 (close to full deployment), while FMCW is at TRL 4 (early development). This indicates a significant maturity gap. Source.

What are the main technical challenges for FMCW LiDAR in automotive applications?

FMCW LiDAR faces challenges such as the need for low phase noise lasers, high-speed ADCs and FPGAs, and tight optical tolerances. These requirements increase complexity and cost, making large-scale automotive deployment difficult compared to ToF systems. Source.

How does the combination of FMCW and Optical Phased Arrays (OPAs) impact LiDAR performance?

This approach is still experimental and not ready for large-scale deployment. Combining FMCW with OPAs introduces additional technical risks and is estimated to be at a very low technology readiness level (TRL 3). Source.

Where can I download the full white paper comparing ToF and FMCW LiDAR?

You can download the full white paper "Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison" from this link.

What is the Technology Readiness Level (TRL) of ToF and FMCW LiDAR?

ToF LiDAR is considered to be at TRL 8 (close to deployment in multiple successful missions), while FMCW LiDAR is at TRL 4 (component and/or breadboard validation in laboratory environment). Source.

How do ToF and FMCW LiDAR systems handle multi-echo processing?

ToF systems handle multi-echo processing straightforwardly, which is important for dealing with obscurants like smoke, steam, and fog. FMCW systems require significant disambiguation, making multi-echo processing more complex. Source.

What are the main advantages of ToF LiDAR for autonomous vehicles?

ToF LiDAR offers high shot rates, agile scanning, high point cloud density, robust supply chains, and mature automotive-grade components, making it well-suited for autonomous vehicle applications where cost, range, and performance are critical. Source.

How does AEye's ToF LiDAR system achieve high performance?

AEye's ToF LiDAR system achieves high performance through fast laser shot rates (millions per second), agile scanning, high-density regions of interest, and the ability to detect objects at long ranges (up to 1 km for large vehicles). Source.

What are the main limitations of FMCW LiDAR for automotive use?

FMCW LiDAR's main limitations include lower shot rates, higher system complexity, expensive laser and optics requirements, and lower technology readiness for mass automotive deployment. Source.

How does AEye's ToF LiDAR handle clutter and interference?

AEye's ToF LiDAR uses Gaussian pulses, which have no time-based sidelobes, making it more robust against clutter and interference compared to FMCW systems that can suffer from sidelobe artifacts and reflections. Source.

What resources are available to learn more about AEye's LiDAR technology?

AEye offers a variety of resources, including white papers, case studies, and technical documentation. You can access these materials on the AEye resources page.

Where can I find case studies on AEye's LiDAR applications?

Case studies demonstrating AEye's LiDAR technology in real-world applications are available on the AEye resources page and in documents such as Lidar Case Studies for ITS Use Cases.

Features & Capabilities

What features does AEye's LiDAR offer for autonomous vehicles and smart infrastructure?

AEye's LiDAR solutions provide dynamic scan patterns, ultra-long-range detection (up to 1 km), high resolution, adaptability to challenging environments (rain, darkness, fog), over-the-air updates, and flexible mounting options. These features enhance safety, efficiency, and adaptability for autonomous vehicles and smart infrastructure. Learn more.

Does AEye's LiDAR support over-the-air updates?

Yes, AEye's LiDAR technology is future-proof, supporting over-the-air updates to ensure continued relevance and adaptability to evolving requirements. Source.

What integration options are available for AEye's LiDAR sensors?

AEye's Apollo sensor is integrated with the NVIDIA DRIVE AGX platform and supports OEM integration options behind the windshield, on the roof, or in the grille, allowing for flexible deployment in various vehicle designs. Source.

How does AEye's software-defined architecture benefit customers?

AEye's software-defined LiDAR allows for customization and scalability without hardware changes, enabling adaptation to specific customer needs and future requirements. Source.

What technical documentation is available for AEye's LiDAR solutions?

Technical documentation includes specification sheets, white papers, validation reports, and case studies. For example, the Apollo spec sheet and white papers like "Rethinking the Four Rs of LiDAR" are available on the resources page.

What are the main products offered by AEye?

AEye offers products such as Apollo (flagship long-range LiDAR), OPTIS (full-stack 3D perception system), and the 4Sight Intelligent Sensing Platform, all designed for applications in autonomous vehicles, smart infrastructure, logistics, and more. Learn more.

What industries are represented in AEye's case studies?

Industries include automotive, trucking, smart infrastructure, aviation, defense, rail, and logistics. These case studies demonstrate the versatility of AEye's LiDAR technology. Source.

What are some specific use cases addressed by AEye's LiDAR?

Use cases include pedestrian detection in challenging scenarios, obstacle avoidance, false positive reduction, abrupt stop detection, cargo protrusion detection, and flexible sensor placement for optimal coverage. See use cases.

What feedback have customers provided about the ease of use of AEye's products?

Customers benefit from ease of integration, comprehensive technical support, user education resources, and validation testing tools, making onboarding and implementation smooth and efficient. Source.

How long does it take to implement AEye's LiDAR solutions?

Implementation timelines vary by use case, but AEye's focus on ease of integration, technical support, and validation tools ensures a quick and efficient start for most customers. Source.

Competition & Comparison

How does AEye's LiDAR compare to Velodyne?

Velodyne offers traditional LiDAR with fixed scan patterns and high-resolution imaging. AEye differentiates itself with dynamic scan patterns, software-defined architecture, and over-the-air updates, providing greater adaptability and future-proofing. Source.

How does AEye's LiDAR compare to Luminar?

Luminar focuses on long-range, hardware-centric LiDAR. AEye's LiDAR offers dynamic scan patterns, adaptability to challenging environments, and flexible mounting options, making it suitable for a wider range of applications. Source.

How does AEye's LiDAR compare to Innoviz?

Innoviz offers solid-state LiDAR with a focus on automotive applications. AEye's software-defined architecture allows for greater customization and future-proofing, while also providing high performance and long-range detection. Source.

What are the main differentiators of AEye's LiDAR compared to competitors?

AEye's main differentiators include dynamic scan patterns, software-defined customization, future-proof design with over-the-air updates, high performance (ultra-long-range and high resolution), and flexible placement options. Source.

Why should a customer choose AEye over other LiDAR providers?

Customers should consider AEye for its unique combination of dynamic scan patterns, software-defined architecture, future-proof technology, high performance, and flexible integration options, all of which address diverse industry needs and use cases. Source.

Use Cases & Benefits

Who are some of AEye's customers and partners?

AEye's customers and partners include Continental (automotive), Sanmina Corporation (manufacturing), and NVIDIA (autonomous driving platforms), as well as organizations in automotive, intelligent transportation, aviation, defense, rail, and smart infrastructure sectors. Source.

What problems does AEye's LiDAR technology solve?

AEye's LiDAR addresses challenges such as early detection of pedestrians and obstacles, adaptability to adverse weather, reduction of false positives, and efficient integration into diverse platforms, improving safety and operational efficiency. Source.

Can you share specific case studies or success stories using AEye's LiDAR?

Yes, case studies such as "A Pedestrian in Headlights" and "Flatbed Trailer Across Roadway" demonstrate enhanced safety and obstacle detection, while "Obstacle Avoidance" and "False Positive" highlight adaptability and operational efficiency. See case studies.

What resources are available for learning about AEye's technology?

AEye provides white papers, case studies, videos, and technical documentation on its resources page, including detailed comparisons and validation reports.

Where can I find AEye's white papers?

White papers are available on the white papers archive page, including topics like the Four Rs of LiDAR and sensor performance validation.

What white papers are available to explore AEye's technology in detail?

Available white papers include "Rethinking the Four Rs of LiDAR," "Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison," and "AEye iDAR: Sensor Performance Validation Report (VSI Labs)." See all white papers.

Where can I find additional resources like datasheets or videos from AEye?

Datasheets, videos, and other resources are available on the AEye resources page.

AEye Reports Fourth Quarter and Full-Year 2025 Results; Strengthened Foundation for Commercial Growth
AEye Joining NVIDIA Halos AI Systems Inspection Lab to Advance Safety-Certified Physical AI Solutions

Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison

Introduction

Recent papers1–5 have presented a number of marketing claims about the benefits of Frequency Modulated Continuous Wave (FMCW) LiDAR systems. As might be expected, there is more to the story than the headlines claim. This white paper examines these claims and offers a technical comparison of Time of Flight (ToF) vs. FMCW LiDAR for each of them. We will demonstrate that high performance, agile-scanning ToF systems serve the needs of autonomous vehicle LiDAR more effectively than FMCW when cost, range, performance, and point cloud quality are important. We understand that not all ToF and FMCW systems are equal, so we will focus on ToF as employed at AEye; however, we believe the bulk of our comparisons are valid. Our hope is that this white paper outlines some of the difficult system trade-offs a successful practitioner must overcome, thereby stimulating robust, informed discussion, competition, and ultimately, improvement of both ToF and FMCW offerings to advance perception for autonomy.

Competitive Claims

Below is a summary of our views and a side-by-side comparison of ToF vs. FMCW LiDAR claims.


Claim #1: FMCW is a (new) revolutionary technology

This is untrue.

Contrary to recent news articles, FMCW LiDAR has been around for a very long time, with its beginnings stemming from work done at MIT Lincoln Laboratory in the 1960s,8 only seven years after the laser itself was invented.9 Many of the lessons learned about FMCW over the years—while unclassified and in the public domain—have unfortunately been long forgotten. What has changed in recent years is the greater availability of long coherence-length lasers. While this has justifiably rejuvenated interest in the established technology, as it can theoretically provide extremely high signal gain, there are still several limitations, identified long ago, that must be addressed to make this LiDAR viable for autonomous vehicles. If they are not addressed, the claim that "new" FMCW will cost-effectively solve the automotive industry's challenges with both scalable data collection and long-range, small-object detection will prove untrue.

Claim #2: FMCW detects/tracks objects farther, faster

This is unproven.

ToF LiDAR systems can offer very fast laser shot rates (several million shots per second in the AEye system), agile scanning, increased return salience, and the ability to apply high-density Regions of Interest (ROIs), yielding two- to four-times better information from returns versus other systems. By comparison, many low-complexity FMCW systems are only capable of shot rates in the tens to hundreds of thousands of shots per second (~50x slower). So, in essence, we are comparing nanosecond dwell times and high repetition rates against tens-of-microsecond dwell times and low repetition rates (per laser/receiver pair). AEye offers commercial, automotive-grade LiDAR products that produce millions of returns per second using ToF, with a large FOV and super-high resolution of more than 1,000 points per square degree. AEye is unaware of any FMCW system that matches this level of performance (FMCW systems currently on the market tend not to publish specific performance specifications).
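The dwell-time gap implied by the shot rates above can be made concrete with a quick sketch. Both rates below are illustrative assumptions picked from the ranges stated in the text, not measured specifications of any particular product:

```python
# Illustrative dwell-time arithmetic for the shot rates quoted above.
# Both rates are assumptions drawn from the text, not product specs.
tof_shot_rate = 2e6     # shots/s ("several million" for ToF)
fmcw_shot_rate = 4e4    # shots/s (tens of thousands, low-complexity FMCW)

tof_dwell_us = 1e6 / tof_shot_rate     # per-shot dwell, microseconds
fmcw_dwell_us = 1e6 / fmcw_shot_rate

print(f"ToF dwell:  {tof_dwell_us:.2f} us/shot")    # -> 0.50 us/shot
print(f"FMCW dwell: {fmcw_dwell_us:.2f} us/shot")   # -> 25.00 us/shot
print(f"ratio: ~{tof_shot_rate / fmcw_shot_rate:.0f}x")
```

With these assumed numbers, the FMCW system spends ~50x longer per shot, matching the ratio quoted above.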

Detection, acquisition (classification), and tracking of objects at long range are all heavily influenced by laser shot rate, because higher laser shot density (in space and/or time) provides more information, allowing faster detection times and better noise filtering. AEye has demonstrated a system capable of multi-point detections of low-reflectivity targets: small objects and pedestrians at over 200m, vehicles at 300m, and a class-3 truck at 1km range. This speaks to the ranging capability of ToF technology. Indeed, virtually all laser rangefinders use ToF, not FMCW, for distance ranging (e.g., the Voxtel rangefinder10 products, some with a 10+km detection range). Although recent articles claim that FMCW has superior range, we have not seen an FMCW system that can match the range of an advanced ToF system while providing matching FOV, overall range swath, and point density.
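The ranges quoted above rest on the basic ToF relationship range = c · t_round_trip / 2; a short sketch shows that even a 1km shot occupies only a few microseconds of listening time:

```python
# Basic ToF round-trip times for the detection ranges quoted above.
C = 3e8  # m/s, speed of light

def round_trip_us(range_m):
    """Round-trip photon travel time for a target at range_m, in microseconds."""
    return 2 * range_m / C * 1e6

for rng in (200, 300, 1000):
    print(f"{rng:4d} m target -> {round_trip_us(rng):.2f} us round trip")
```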

Claim #3: FMCW measures velocity and range more accurately and efficiently

This is misleading.

ToF systems, including AEye's LiDAR, do require multiple laser shots to determine target velocity. This might seem like extra overhead compared to the single-shot claims of FMCW. Much more important is the understanding that not all velocity measurements are equal. While radial velocity between two cars moving head-on is urgent (one reason a longer detection range is so desirable), so too is lateral velocity, which is involved in over 90% of the most dangerous edge cases. Cars running a red light, swerving vehicles, and pedestrians stepping into a street all require lateral velocity for evasive decision making. FMCW cannot measure lateral velocity in a single shot and has no benefit whatsoever over ToF systems in finding lateral velocity.

Motion Detection: FMCW vs. ToF

Consider a car moving between 30 and 40 meters/second (~67 to 89 MPH) detected by a laser shot. If a second laser shot is taken a short period later, say 50μs after the first, the target will only have moved ~1.75mm during that interval. To establish a velocity that is statistically significant, the target should have moved at least 2cm, which takes about 500μs (while requiring sufficient SNR to interpolate range samples). With that second measurement, a statistically significant range and velocity can be established within a time frame that is negligible compared to the frame period. With an agile scanner, such as the one AEye has developed, the 500μs is not solely dedicated or "captive" to velocity estimation. Instead, many other shots can be fired at targets in the interim: the time can be used wisely to look at other areas/targets before returning to the original target for a high-confidence velocity measurement. An FMCW system, by contrast, is captive for its entire dwell time.
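The displacement arithmetic above can be checked directly. This sketch uses the mid-range speed of 35 m/s (the text's ~500μs figure corresponds to the upper end of the 30-40 m/s band):

```python
# Worked numbers from the paragraph above: displacement after a quick
# 50 us revisit, and the delay needed before the target has moved the
# assumed 2 cm statistical-significance threshold.
speed = 35.0       # m/s, mid-range of 30-40 m/s (~67-89 MPH)
dt_short = 50e-6   # s, a quick second shot
threshold = 0.02   # m, motion needed for a confident velocity estimate

moved_short = speed * dt_short    # -> 0.00175 m (~1.75 mm)
dt_needed = threshold / speed     # -> ~5.7e-4 s (~570 us at 35 m/s)

print(f"Motion after 50 us: {moved_short * 1000:.2f} mm")
print(f"Delay for 2 cm of motion: {dt_needed * 1e6:.0f} us")
```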

Compounding the captive time is the fact that FMCW often requires a minimum of two laser frequency sweeps (up and down) to form an unambiguous detection, with the down sweep providing the information needed to resolve the ambiguity arising from the mixing of range and Doppler shifts. This doubles the dwell time required per shot, above and beyond that already described in the previous paragraph. In 10μs, a target typically moves only about 0.5mm. This level of displacement enters the regime where it is difficult to separate vibration from real linear motion. Again, in the case of lateral velocity, no FMCW system will instantly detect lateral speed at all without multi-position estimates such as those used by ToF systems, but with the additional baggage of long FMCW dwell times.
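The up/down-sweep requirement above can be sketched in a few lines: a single beat frequency mixes the range delay with the Doppler shift, and the two sweeps give two equations to separate them. All parameters and the sign convention here are illustrative assumptions, not any vendor's design:

```python
# Minimal sketch (assumed parameters) of FMCW range/Doppler
# disambiguation using an up-chirp and a down-chirp.
C = 3e8            # m/s, speed of light
WAVELEN = 1550e-9  # m, assumed laser wavelength
BW = 1e9           # Hz, assumed chirp bandwidth
T_SWEEP = 10e-6    # s, assumed sweep duration

def beat_freqs(range_m, closing_v):
    """Beat frequencies seen on the up- and down-sweep (simple convention)."""
    f_range = 2 * range_m * BW / (C * T_SWEEP)  # beat due to round-trip delay
    f_dopp = 2 * closing_v / WAVELEN            # Doppler shift of the return
    return f_range - f_dopp, f_range + f_dopp   # (up-sweep, down-sweep)

def solve(f_up, f_down):
    """Recover range and radial velocity from the two measured beats."""
    f_range = (f_up + f_down) / 2
    f_dopp = (f_down - f_up) / 2
    return f_range * C * T_SWEEP / (2 * BW), f_dopp * WAVELEN / 2

f_up, f_down = beat_freqs(150.0, 30.0)  # 150 m target closing at 30 m/s
r_est, v_est = solve(f_up, f_down)
print(f"recovered: {r_est:.1f} m, {v_est:.1f} m/s (radial only)")
```

Note that the recovered velocity is radial only; nothing in this measurement yields lateral velocity, which is the point made above.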

Lastly, in an extreme ToF example, the AEye system has demonstrated detection of objects at 1km. Even if it required two consecutive shots to get velocity on a target at 1km, it is easy to see how that would be superior to a single shot at 100m given a common frame rate of 20Hz and typical vehicle speeds.

Claim #4: FMCW has less interference

Untrue. Quite the opposite, actually!

Spurious reflections arise in both ToF and FMCW systems. These can include retroreflector anomalies like “halos,” “shells,” first surface reflections (even worse behind windshields), off-axis spatial sidelobes, as well as multipath, and clutter. The key to any good LiDAR is to suppress sidelobes in both the spatial domain (with good optics) and the temporal/waveform domain. ToF and FMCW are comparable in spatial behavior, but where FMCW truly suffers is in the time domain/waveform domain when high contrast targets are present.

Clutter

FMCW relies on window-based sidelobe rejection to address self-interference (clutter), which is far less robust than ToF, which has no time sidelobes to begin with. To provide context, a 10μs FMCW pulse spreads light across a 1.5km range extent. Any objects within this extent will be caught in the FFT (time) sidelobes. Even a shorter 1μs FMCW pulse can be corrupted by high-intensity clutter 150m away. The first sidelobe of a rectangular-window FFT is well known to be –13dB, far above the levels needed for a consistently good point cloud (unless no two returns within a shot differ in intensity by more than about 13dB, which is unlikely in operational road conditions).
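The –13dB figure above is easy to verify numerically. The sketch below measures the highest FFT sidelobe of a rectangular window, alongside a Hann taper for comparison (window length and zero-padding factor are arbitrary choices):

```python
import numpy as np

# Numerically check the "-13 dB" first-sidelobe figure quoted above for
# a rectangular window, and compare against a Hann taper.
def highest_sidelobe_db(window, pad=64):
    """Peak sidelobe level of a window's spectrum, in dB below the mainlobe."""
    n = len(window)
    spec = np.abs(np.fft.fft(window, n * pad))[: n * pad // 2]
    spec = spec / spec.max()
    # walk down the mainlobe to its first null...
    i = 1
    while i < len(spec) - 1 and spec[i + 1] < spec[i]:
        i += 1
    # ...then the largest remaining lobe is the peak sidelobe
    return 20 * np.log10(spec[i:].max())

n = 128
rect_db = highest_sidelobe_db(np.ones(n))      # rectangular window
hann_db = highest_sidelobe_db(np.hanning(n))   # Hann taper
print(f"rectangular: {rect_db:.1f} dB, Hann: {hann_db:.1f} dB")
```

The Hann taper buys roughly 18dB of extra sidelobe suppression, but at the cost of a mainlobe about twice as wide, which is the pulse-broadening trade-off discussed next.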

Of course, a deeper sidelobe taper can be applied, but at the cost of pulse broadening. Furthermore, nonlinearities in the receiver front end (limited spurious-free dynamic range) will cap the effective overall system sidelobe levels achievable, due to compression and ADC spurs (third-order intercepts), phase noise,6 atmospheric phase modulation, etc., none of which any amount of window taper can mitigate. Aerospace and defense systems can and do overcome such limitations, but we are unaware of any low-cost automotive-grade system capable of the time-instantaneous >100dB dynamic range required in FMCW to sort out long-range small objects from near-range retroreflectors.

In contrast, a typical Gaussian ToF system, at 2ns pulse duration, has no time-based sidelobes whatsoever beyond the few cm of the pulse duration itself. No amount of dynamic range between small and large offset returns has any effect on the light incident on the photodetector when the small target return is captured. We invite anyone evaluating LiDAR systems to carefully inspect the point cloud quality of ToF vs. FMCW under various driving conditions for themselves. The multitude of potential sidelobes in FMCW lead to artifacts that impact not just local range samples, but the entire returned waveform for a given pulse!

First surface (e.g., FMCW behind a windshield or other first surface)

A potentially stronger interference source is the reflection from a windshield or other first surface placed in front of the LiDAR system. Because the transmit beam is on nearly continuously, these reflections will be continuous, and very strong relative to distant objects, representing a similar kind of low-frequency component that creates undesirable FFT sidelobes in the transformed data. The result can be a significant reduction of usable dynamic range. Furthermore, windshields, being multilayer glass under mechanical stress, have complex, inhomogeneous polarization. This randomizes the electric field of the signal return on the photodetector surface, complicating (decohering) the optical mixing.

Lastly, due to the nature of time-domain versus frequency-domain processing, handling multi-echoes, even with high dynamic range, is a straightforward process in ToF systems, whereas it requires significant disambiguation in FMCW systems. Multi-echo processing is especially important in dealing with obscurants like smoke, steam, and fog.

Claim #5: FMCW is automotive grade, reliable, and readily scalable

This is unproven at best.

FMCW purports to be advantaged by leveraging the maturity of photonics and telecommunications technology, thereby facilitating scalability to higher performance levels (in addition to cost savings). True, FMCW allows low-cost photodetectors, like PINs, whereas ToF systems often use APDs and other more costly detectors.

However, as we outline below, the details are far more nuanced.

The supply chain for LiDAR components is relatively nascent, but components like fiber lasers, PIN array receivers, ADCs, and FPGAs (or ASICs) have been used in various industries for years. These specific types of components are very low risk from a supply-base point of view. By comparison, the critical component for FMCW systems is the very low phase noise laser, which has many tight requirements and no other high-volume user to help drive down volume manufacturing costs. This is even before the implementation complexities caused by environmental requirements.

The optical components used in ToF LiDAR systems are derivatives of components widely and routinely used in commercial systems (cable TV, telecom, medical instrumentation, and other industries). The newer developments are the MEMS scanners, which have also been used in virtually all automotive air bag and pressure sensors, as well as in Gatling guns, missile seekers, and laser resonator Q-switches in the military. The components of FMCW systems have been available in laboratory environments for years, but no high-volume production system has deployed items like the frequency-agile, long coherence-length diode laser needed to enable such systems.

Furthermore, ToF LiDARs already have multiple vendors selling automotive-qualified components across the entire hardware stack: lasers, detectors, ASICs, etc. Historically, a disruptive technology (such as FMCW laser sources) that is uniquely manufactured in-house must offer a 10x technical gain to displace a product that enjoys a robust supply chain with multiple vendors already passing quality standards for a given customer base.

Scalability ties directly to maturity. One way of describing technology maturity is a scheme developed by NASA in the 1970s7 called the "Technology Readiness Level" (TRL). This scheme assigns a number to a technology according to how far it has progressed along the path from initial concept (TRL 1) to deployment in multiple successful missions (TRL 9). The numbering scheme leaves out the sense of how much work is involved in going from one level to the next, but our experience is that there is at least a factor of 10 between each level (and perhaps even a factor of 100).

In the case of ToF LiDAR, we believe the components and systems are at TRL 8, while FMCW components and systems are at TRL 4. This is a significant gap in technology readiness and will take many years to close. The major scalability shortcomings of FMCW systems include the low shot rate due to the stretched laser chirp pulse, and the high-speed ADC and FPGA required to process returns. Where higher shot rates at the system level are required, parallel channels of the optical path and electronics may be deployed. These might share a single scanning MEMS, but each replicated channel comprises most of the cost of the LiDAR system, so doubling the channels nearly doubles the overall cost of the LiDAR.

Laser costs

In FMCW systems, coherence length is determined by how the laser is designed and fabricated, and must be at least twice as long as the longest target range. Typically, a low phase noise laser is much more expensive than a traditional diode laser. In contrast, beyond maintaining a good pulse shape, there are few requirements on the laser in a ToF system beyond those already met in telecom markets.
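The coherence-length rule above translates directly into a laser linewidth requirement. This sketch uses the common Lorentzian-lineshape estimate L_coh = c / (π · Δν); the exact constant depends on lineshape, so treat the numbers as order-of-magnitude illustrations:

```python
import math

# Rough laser linewidth implied by requiring the coherence length to
# cover the round trip (>= 2x the max range). Lorentzian estimate:
# L_coh = c / (pi * linewidth); constant is lineshape-dependent.
C = 3e8  # m/s, speed of light

def max_linewidth_hz(max_range_m):
    round_trip = 2 * max_range_m
    return C / (math.pi * round_trip)

for rng in (100, 300, 1000):
    print(f"{rng:5d} m range -> linewidth below "
          f"~{max_linewidth_hz(rng) / 1e3:.0f} kHz")
```

Sub-MHz (and at long range, sub-100kHz) linewidths are far tighter than what a commodity diode laser provides, which is the cost driver described above.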

Receiver costs

While it is true that FMCW detectors can be low-grade PINs and relatively cheap, the total receiver cost is high due to the front-end optics and back-end electronics requirements. Even so, a coaxial FMCW system and a coaxial ToF system will not see significant differences in detector costs based on the detector sizes needed. The total receiver cost will favor a ToF system.

Optics costs

In a typical ToF system, incoherent detection (simple amplitude peak detection) takes place, and optical elements only have to be accurate to within one-quarter of a wavelength (so-called λ/4). In comparison, FMCW uses coherent detection, and in aggregate, all of the optical surfaces have to be within a much tighter tolerance, like λ/20. Such components can be very expensive, and there are far fewer suppliers capable of making them.
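In absolute terms (assuming a 1550nm laser, a common automotive LiDAR wavelength, purely for illustration), the two tolerances work out as follows:

```python
# Surface-figure tolerances from the paragraph above, made concrete.
# The 1550 nm wavelength is an illustrative assumption.
wavelength_nm = 1550
tof_tol_nm = wavelength_nm / 4    # incoherent detection: lambda/4
fmcw_tol_nm = wavelength_nm / 20  # coherent detection: lambda/20

print(f"ToF  optical tolerance: ~{tof_tol_nm:.1f} nm")
print(f"FMCW optical tolerance: ~{fmcw_tol_nm:.1f} nm (5x tighter)")
```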

Electronics costs

In the AEye ToF system, the electronics consists of a high-speed Analog to Digital converter (ADC) and a Field Programmable Gate Array (FPGA) that performs peak detection and range calculations. The bandwidth of the electronics is proportional to the range resolution and for common LiDAR system requirements, the components are nothing unusual.

FMCW requires ADC conversion rates two- to four-times as high as a ToF system, and the ADC must then be followed by an FPGA capable of ingesting the data and performing very high-speed FFT conversions. Even with the use of ASICs, the complexity (and cost) of FMCW processing is several times that required for ToF.
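A back-of-envelope sketch shows where the FMCW digitization burden comes from: the maximum beat frequency scales with range and chirp bandwidth and inversely with sweep time, and the ADC must sample at least twice that fast. The chirp parameters here are illustrative assumptions:

```python
# Maximum FMCW beat frequency for assumed chirp parameters:
# f_beat = 2 * R * B / (c * T), which the ADC must capture (Nyquist).
C = 3e8          # m/s, speed of light
chirp_bw = 1e9   # Hz, assumed chirp bandwidth
sweep_t = 4e-6   # s, assumed sweep duration

def max_beat_hz(r_max_m):
    return 2 * r_max_m * chirp_bw / (C * sweep_t)

for r in (150, 300):
    beat = max_beat_hz(r)
    print(f"{r} m max range -> {beat / 1e6:.0f} MHz beat, "
          f">= {2 * beat / 1e6:.0f} MS/s ADC")
```

Every one of those samples must then be pushed through a real-time FFT, which is the FPGA/ASIC burden described above.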

Claim #6: Adding FMCW to Optical Phased Arrays (OPAs) will compensate for lack of solid-state performance of FMCW

This is unproven.

FMCW has a low technology readiness level, and Optical Phased Arrays have an even lower one (roughly TRL 3: experimental proof of principle, not usable at the scale needed for FMCW). The original DARPA Modular Optical Aperture Building Blocks (MOABB) program demonstrated that, to achieve very low spatial-sidelobe transmit beam-steering performance, submicron (λ/2) waveguides were necessary.11 The consequence of such small waveguides is limited power-handling capability, which was identified as a fundamental limitation of the approach. On the receive side, coupling light from an input lens into a photonic substrate, where it must be collected into a very small waveguide, is also an optical performance challenge (an etendue limitation).

Most OPA systems use thermal shifting of the laser wavelength to steer beams in one dimension while using phased arrays to steer beams in the other. It is well known that phased-array beam steering degrades (creating spatial sidelobes) very quickly with frequency shifts of the laser beam. The combination of a beam-steering mechanism that depends on the laser having constant intensity and constant wavelength with a ranging mechanism that depends on sweeping the frequency (wavelength) of the laser does not work well for traditional FMCW approaches. The idea of combining FMCW with a beam-steering technology at such an early stage of development is incredibly risky. We believe this path could take another 10 years to reach usable maturity.

Conclusion

AEye believes that high performance, agile-scanning ToF systems serve the needs of autonomous vehicle LiDAR more effectively than FMCW when cost, range, performance, and point cloud quality are important. However, it is not hard to see how FMCW could play a niche role in applications where lower shot rates are suitable and FMCW systems are more economical. While there will be nice videos of FMCW and other low-TRL systems in well-controlled environments with expensive prototypes, it is a whole different world when harsh environments and mass production are taken into account. We hope this white paper will stimulate development and awareness in both ToF and FMCW systems, increasing the component options for perception engineers everywhere.

References

1. Aurora Team, "FMCW Lidar: The Self-Driving Game-Changer", www.medium.com, April 9, 2020.
2. Philip Ross, "Aeva Unveils Lidar on a Chip", IEEE Spectrum, December 11, 2019.
3. Timothy Lee, "Two Apple veterans built a new lidar sensor — here's how it works", Ars Technica, October 2, 2018.
4. Jeff Hecht, "Lasers for Lidar: FMCW lidar: An alternative for self-driving cars", Laser Focus World, May 31, 2019.
5. "Aeva launches '4D' LiDAR on chip for autonomous driving", www.optics.org, December 16, 2019.
6. Phillip Sandborn, "FMCW Lidar: Scaling to the Chip-Level and Improving Phase-Noise-Limited Performance", Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report No. UCB/EECS-2019-148, December 1, 2019.
7. "Technology readiness level", Wikipedia.
8. A. Gschwendtner and W. Keicher, "Development of Coherent Laser Radar at Lincoln Laboratory", Lincoln Laboratory Journal, Vol. 12, No. 2, 2000.
9. C. Patel, "Stability of Single Frequency Lasers", IEEE Journal of Quantum Electronics, Vol. 4, 1968.
10. Voxtel Laser Rangefinders, www.voxtel-inc.com, June 2020.
11. P. Suni et al., "Photonic Integrated Circuit FMCW Lidar On A Chip", 19th Coherent Laser Radar Conference.