Interviews, Smart Mobility

Ambarella’s Take on the Future of Perception for ADAS to L4 Autonomous Vehicles | Auto.AI USA 2022

Robert Bloomquist, VP of Automotive Business Development, Ambarella, Inc.

Key to the evolution of perception for ADAS to L4 autonomous vehicles is the development of AI perception. In the run-up to the 5th Auto.AI USA 2022, we.CONECT spoke to Robert Bloomquist, VP, Automotive Business Development at Ambarella, Inc., a leading developer of edge AI semiconductors.

we.CONECT: Hi, Robert, we are excited to have you as our speaker at the 5th Auto.AI USA 2022, and we’d like to get some insights into Ambarella’s AI perception technology. But first a little introduction: you work as VP of Automotive Business Development for Ambarella; which tasks would you highlight as particularly interesting? 

Robert Bloomquist: The thing that excites me most about my role is the opportunity to grow a team focused on making a real impact on our customers’ end products. As we focus on advancing safety systems, we are improving society by saving lives. I enjoy working with leaders in the industry who share these goals and are directly driving these new technologies into the vehicles of the largest OEMs in the world.

we.CONECT: Your latest solution, which is being featured at Auto.AI USA 2022, is the CV3 AI Domain Controller Family for ADAS and L2+ to L4 Autonomous Vehicles. What is the most important message about this solution that you want to convey to the participants and your clients?

Robert Bloomquist: Our “algorithm first” approach for the design of our CVflow® AI systems-on-chip provides a truly unique combination of the industry’s highest performance and lowest power consumption. Along with our integrated world-class image signal processor, encode/decode and overall chip architecture, we have a purpose-built, automotive-focused SoC portfolio that scales from entry-level ADAS systems to high-performance domain control computing—all using the same software development kit. This means that our customers can maximize their development efforts and ROI while lowering risk, accelerating time to market, and ensuring high-quality results across their entire vehicle portfolio.

we.CONECT: You launched the CV3 AI domain controller SoC family earlier this year for single-chip multi-sensor perception, fusion, and path planning in ADAS to L4 Autonomous Vehicles. Other than what you’ve already mentioned, what makes this technology so unique and sets it apart from other domain controllers that are currently on the market?

Robert Bloomquist: The competitive offerings are horizontal plays meant to serve many markets, meaning they aren’t developed exclusively for automotive from the ground up. This ultimately forces our competition to make tradeoffs, leading to suboptimal architectures for automotive. The result is a number of undesirable artifacts that customers have to work around in their implementations, including larger system size, higher cost, and greater overall power consumption.

we.CONECT: If you think of your clients’ requirements: Which problem areas regarding AI perception do you think are particularly critical?

Robert Bloomquist: It’s crucial that they understand the true AI performance capabilities of the SoCs they’re evaluating for their designs, beyond simply reading and relying on datasheet metrics. Truly embedded platforms require a high level of efficiency. While general-purpose AI engines that essentially rely on brute force can achieve high performance, their efficiency is poor, often well under 20%. This means that, while their datasheets show a large TOPS number, their overall performance executing AI workloads is suboptimal and comes at a high price in terms of both the sheer number of transistors and power consumption. The overall AI performance and efficiency of SoCs varies tremendously from manufacturer to manufacturer, and determining true performance requires significant benchmarking activity and system knowledge. Different manufacturers leverage different mechanisms in the silicon to increase performance and efficiency. Customers need to understand these differences to accurately ascertain the capabilities of the underlying silicon, and thus the comparative advantages and disadvantages of each silicon platform.
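
To make the TOPS-versus-efficiency point concrete, here is a minimal, purely illustrative calculation. The figures are hypothetical, not Ambarella or competitor benchmarks; it simply shows how a headline TOPS rating shrinks to a much smaller effective throughput once real-workload utilization is taken into account.

```python
# Hypothetical illustration: effective AI throughput vs. datasheet TOPS.
# All numbers are invented for the example; real comparisons require
# benchmarking representative networks on the actual silicon.

def effective_tops(datasheet_tops: float, utilization: float) -> float:
    """Usable compute once engine utilization on real workloads (0.0-1.0) is applied."""
    return datasheet_tops * utilization

def frames_per_second(usable_tops: float, ops_per_frame: float) -> float:
    """Rough inference rate for a network needing ops_per_frame operations per frame."""
    return usable_tops * 1e12 / ops_per_frame

# A brute-force, general-purpose engine: big headline number, low utilization.
generic = effective_tops(datasheet_tops=200.0, utilization=0.15)    # 30 effective TOPS

# A workload-optimized engine: smaller headline number, higher utilization.
optimized = effective_tops(datasheet_tops=100.0, utilization=0.70)  # 70 effective TOPS

# Placeholder perception network costing ~50 GOPs (5e10 operations) per frame.
print(frames_per_second(generic, 5e10))    # ~600 frames/s of raw AI headroom
print(frames_per_second(optimized, 5e10))  # ~1400 frames/s of raw AI headroom
```

On these assumed numbers, the engine with the smaller datasheet rating but higher utilization sustains more than twice the effective inference rate, which is why benchmarking representative workloads matters more than the headline figure.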

we.CONECT: What do you think are the most promising technologies to meet the challenges faced by ADAS and AD designers?

Robert Bloomquist: Advanced and mature AI tool chains are the key. The right tool chain provides the ability to move rapidly among development environments like PyTorch and TensorFlow, and all the way down to the embedded silicon, in an efficient manner and with high-quality results. The CV3 family integrates our third-generation CVflow AI engine, which incorporates our learnings from the prior two generations, in which we have run hundreds of different open-source neural networks as well as numerous custom networks from our customers. This experience has also driven many advances in our tool chains over time, enabling our customers to shorten development times.
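
As a generic illustration of the first hop in such a flow (the interview does not detail the CVflow SDK itself, so this is only a hedged sketch of the common PyTorch-to-embedded handoff), a trained model is typically exported from the framework into a portable graph format such as ONNX, which vendor compilers and quantizers then map onto the target AI engine. The network and input shape below are placeholders.

```python
# Generic sketch of the framework-to-embedded handoff: export a trained
# PyTorch model to ONNX, a portable graph format that embedded AI tool
# chains commonly accept as input. The model and input shape are placeholders.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for a perception network
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # batch, channels, height, width

torch.onnx.export(
    model,
    dummy_input,
    "perception_net.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=13,
)
# A vendor-specific tool chain would then typically quantize and compile
# the exported graph for the target embedded AI engine.
```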

we.CONECT: How do you assess the market developments for solutions that will enable more intelligent AV systems, including AI-driven perception technologies, over the next 1-2 years?

Robert Bloomquist: We assess them on a few fronts. First, how are they affecting our ability to perceive the environment in an efficient and predictable manner? Second, how are they fundamentally reducing the latency of making complex decisions in an L2+ to L5 safety context? Third, do they enable a clear path toward high-volume commercial availability and affordability?

we.CONECT: It is your first time participating in an Auto.AI event. What are your main expectations for the conference?

Robert Bloomquist: Ambarella is honored to be included among the innovative companies speaking at this year’s event. We look forward to an open exchange of ideas with these dynamic organizations that share our goal of moving the automotive industry more rapidly up the levels of vehicle autonomy.

we.CONECT: Thank you for your time, Robert, we are looking forward to seeing you in Detroit soon!

Robert Bloomquist is moderating an Icebreaker session on Sunday, June 19th at 8 PM at The Henry Hotel. He will encourage the conference participants to discuss the evolution of perception for L4 & 5 autonomous vehicles.

Includes Icebreaker and Networking Dinner.

Our conference partner Ambarella will also be presenting their technology in motion on June 20 as part of the Auto.AI Driving Days. Ambarella’s test drives will feature an SUV outfitted with the following four technology demonstrations:

  • Forward-Camera AI Demos:
      • One running Ambarella’s AmbaNet neural network
      • Another running Autobrains’ neural network
  • Oculii™ 360-Degree High-Resolution Radar Perception
  • Full Display, 2MP Rearview eMirror

Auto.AI USA is the leading technical event on deep learning for SAE Level 4 and 5 autonomous vehicles, bringing together more than 300 top industry experts and decision-makers in machine learning, neural networks, and perception. Join now to discuss self-supervised and behavioral learning concepts, scalable machine learning and reinforcement learning approaches, and the benchmarking of perception and computer vision systems for autonomous driving with your peers from the automotive AI community.

The 5th Auto.AI USA, America’s No. 1 event on deep driving for Level 4 and 5 autonomous vehicles, is back on June 19–21, 2022, live in Detroit!
