On April 11, 2019, AEye’s Technical Product Manager, Indu Vijayan, will speak on “AI & Machine Learning” at SAE World Congress in Detroit, Michigan.
Indu Vijayan is a specialist in systems, software, algorithms, and perception for self-driving cars. As Technical Product Manager at AEye, she leads software development for the company's leading-edge artificial perception system for autonomous vehicles. Prior to AEye, Indu spent five years at Delphi/Aptiv, where, as a senior software engineer on the Autonomous Driving team, she played a major role in bridging ADAS sensors and algorithms and extending them for mobility. She holds a B.Tech in Computer Science from India's Amrita University and an MS in Computer Engineering from Stony Brook University.
We sat down with Indu to learn more about why the advancement of edge computing and AI is so critical to the rollout of safe and efficient autonomous vehicles…
Q: What does it mean to implement Artificial Intelligence "at the sensor level," and why is this favorable to the development of advanced artificial perception systems?
Because iDAR (AEye's Intelligent Detection and Ranging system) is intelligent, it can efficiently cycle and prioritize sensory information, sending only the most relevant data to the vehicle's path-planning system. In a conventional sensor system, layers upon layers of algorithms are needed to extract relevant, actionable data, which creates too much latency for the vehicle to navigate safely at highway speeds. Say you are driving 60 mph along a highway when, suddenly, you hear the siren of an ambulance coming from behind you, quickly closing in. In this instance, you are left with two choices: either stay in your lane and maintain your speed, or safely slow down and/or pull over to the side of the road. Whichever decision you make is informed by the auditory and visual cues you receive from the environment, such as the speed of the ambulance or the density of the traffic around you.
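To make the latency stakes concrete: at 60 mph a vehicle covers about 27 meters every second, so each 100 ms spent digesting raw sensor data is nearly 3 meters of travel on stale information. The Python sketch below illustrates the prioritization idea in the abstract only; it is not AEye's implementation, and every name in it (`Detection`, `relevance`, `prioritize`, the scoring weights) is invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of sensor-level prioritization; not AEye's iDAR API.

MPH_TO_MPS = 0.44704

def blind_distance_m(speed_mph: float, latency_s: float) -> float:
    """Distance traveled while perception output is still being computed."""
    return speed_mph * MPH_TO_MPS * latency_s

@dataclass
class Detection:
    object_id: int
    range_m: float            # distance from the ego vehicle
    closing_speed_mps: float  # positive means the object is approaching
    in_ego_lane: bool

def relevance(d: Detection) -> float:
    """Toy score: near, fast-closing, in-lane objects matter most.
    The weights are invented for illustration."""
    score = 2.0 * max(0.0, d.closing_speed_mps)  # reward fast closers
    score += 100.0 / max(d.range_m, 1.0)         # reward nearby objects
    if d.in_ego_lane:
        score += 10.0                            # reward in-path objects
    return score

def prioritize(detections, budget=2):
    """Forward only the most relevant detections to path planning."""
    return sorted(detections, key=relevance, reverse=True)[:budget]

if __name__ == "__main__":
    # At 60 mph, every 100 ms of processing latency is ~2.7 m of travel.
    print(f"{blind_distance_m(60, 0.1):.2f} m per 100 ms at 60 mph")

    scene = [
        Detection(1, range_m=40.0,  closing_speed_mps=8.0,  in_ego_lane=False),  # siren behind, closing fast
        Detection(2, range_m=60.0,  closing_speed_mps=0.0,  in_ego_lane=True),   # lead car, matched speed
        Detection(3, range_m=120.0, closing_speed_mps=-2.0, in_ego_lane=False),  # receding traffic
    ]
    for d in prioritize(scene):
        print(d.object_id, round(relevance(d), 1))
```

The design point is the budget: instead of the full firehose of raw returns, path planning receives only a small, relevance-ranked subset, which is what keeps latency low enough for highway-speed decisions.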
Just as in human perception, our iDAR system creates feedback loops that efficiently cycle and prioritize sensory information. When the human brain processes information from the visual cortex, feedback loops make each step of visual perception more efficient. Because we mimic this process in our system, similar behavior can be learned and trained in autonomous vehicles so that they make better, more accurate decisions, faster. As a result, the system can continually learn and adapt, becoming even better at identifying and tracking potential hazards over time.
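As a rough sketch of how such a feedback loop might be wired, assuming nothing about AEye's actual software: the tracker's per-region confidence and risk estimates are fed back to the sensor as requests for extra attention on the next scan. All names here (`allocate_attention`, the `tracks` fields, the region labels) are hypothetical.

```python
# Hedged sketch of a perception feedback loop; every name here is
# illustrative, not AEye's actual iDAR software.

def allocate_attention(tracks, attention_budget=2):
    """Feed perception results back to the sensor: regions where the
    tracker is least confident, weighted by estimated risk, are queued
    for extra revisits on the next scan."""
    priority = lambda t: (1.0 - t["confidence"]) * t["risk"]
    ranked = sorted(tracks, key=priority, reverse=True)
    return [t["region"] for t in ranked[:attention_budget]]

if __name__ == "__main__":
    # Tracker output for the current frame (toy values).
    tracks = [
        {"region": "rear-left", "confidence": 0.4, "risk": 0.9},  # possible ambulance
        {"region": "ahead",     "confidence": 0.9, "risk": 0.5},  # well-tracked lead car
        {"region": "far-right", "confidence": 0.7, "risk": 0.1},  # distant, low risk
    ]
    # These regions would be re-scanned first, sharpening the next frame;
    # that closed loop is what makes each perception step more efficient.
    print(allocate_attention(tracks))
```

In this toy frame, the uncertain, high-risk "rear-left" region wins the extra scan time, mirroring how a human driver would turn attention toward an approaching siren.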