AI applications are key to realizing autonomous driving

Autonomous driving and the smart cockpit are two key application areas of artificial intelligence (AI), which turns data from cameras and sensors into actionable insights.

According to Boston Consulting Group (BCG) projections, the global autonomous vehicle market will be worth US$42 billion by 2025, with partially autonomous vehicles accounting for a 12.4% market share. BCG said the market will continue to grow rapidly through 2035, and intelligent autonomous vehicles will become an irreversible trend for the automotive industry.

AI applications in intelligent vehicles are developing in three main directions: Level 5 fully autonomous vehicles evolved from vehicles equipped with ADAS (advanced driver assistance systems); smart cockpits evolved from smart dashboards and in-car infotainment systems; and autonomous commercial and public transport fleet management systems developed to meet smart city and smart transportation demand.

LiDAR, radar, camera and AI

On a technical level, autonomous vehicles can sense the environment around them by transforming data collected through their sensors, including cameras, LiDARs and radars, into actionable information using edge computing.

Cheng Wen-Huang, director of the graduate program in artificial intelligence at National Yang-Ming Chiao Tung University, pointed out that visual sensors in self-driving vehicles are most often used to detect and mark visible objects on the road. To perform such functions, massive amounts of data must be used to train the underlying deep learning and machine learning models, he explained.
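To make Cheng's point concrete, the sketch below runs a pretrained deep learning detector over a single camera frame. It is a minimal illustration, assuming Python with PyTorch/torchvision; the model choice, file name, and confidence threshold are illustrative assumptions, not details from the article.

```python
# Minimal sketch: camera-based object detection with a pretrained deep
# learning model. Model, file name, and threshold are assumptions for
# illustration; production ADAS stacks train on far larger driving datasets.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Detector pretrained on the generic COCO dataset.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("dashcam_frame.jpg").convert("RGB")  # hypothetical frame
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Keep only confident detections and report their bounding boxes.
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score > 0.6:
        print(f"class {label.item()} at {box.tolist()} "
              f"(score {score.item():.2f})")
```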

Currently, most major automakers have adopted sensor fusion techniques to enable autonomous vehicles to make accurate decisions on the road. The most common sensors are infrared sensors, ultrasonic sensors, millimeter-wave radar, and cameras. Each type of sensor is useful in different situations: for example, when cameras cannot work well in low-visibility environments, LiDARs and radars can be used to monitor road conditions in front of vehicles.
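A minimal sketch of that fallback logic appears below: each sensor's distance estimate is weighted by a confidence score, so radar and LiDAR dominate when camera visibility drops. The confidence-weighted average is an illustrative assumption; real systems typically rely on probabilistic filters such as Kalman filters.

```python
# Minimal sketch of confidence-weighted sensor fusion. The weighting scheme
# is an illustrative assumption, not a description of any automaker's system.
from dataclasses import dataclass

@dataclass
class Reading:
    distance_m: float   # estimated distance to the object ahead
    confidence: float   # 0.0 (unusable) to 1.0 (fully trusted)

def fuse(*readings: Reading) -> float:
    """Confidence-weighted average of per-sensor distance estimates."""
    total = sum(r.confidence for r in readings)
    return sum(r.distance_m * r.confidence for r in readings) / total

# In fog the camera's confidence is low, so radar and LiDAR dominate.
fused = fuse(Reading(35.0, 0.1),   # camera: unreliable in low visibility
             Reading(31.5, 0.9),   # millimeter-wave radar
             Reading(31.8, 0.8))   # LiDAR
print(f"fused distance: {fused:.1f} m")
```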

Not long ago, Tesla announced that it would abandon millimeter-wave radar and instead adopt a pure-vision AI perception system, sparking heated discussion in the industry.

Tesla’s decision to remove radar from its vehicles was driven not only by its intention to reduce production costs, but also by the dilemma of “which system should be used in unique situations” when vehicles run both radar perception and AI vision systems. With more than a 14% share of the global electric vehicle (EV) market, Tesla can fully exploit the benefits of AI vision by drawing on image data collected from the millions of vehicles it has sold over the past 10 years. For automakers that have only recently entered the EV market, however, sensor fusion is a more practical direction of development than a purely vision-based approach.

The accuracy of AI vision is determined by the volume and quality of data

According to Cheng, there are two reasons why ADAS and autonomous driving systems using pure visual perception technology might not be reliable.

First, there are still limitations that constrain the effectiveness of some machine learning algorithms, and these limitations are difficult to overcome. Even though deep learning models can circumvent them, most developers of ADAS and autonomous driving systems struggle to acquire the massive amounts of training data that deep learning technologies require.

Second, commercialized AI solutions differ in quality, especially in terms of data acquisition. Large companies are more capable of collecting useful data than small and medium-sized companies, so they are more likely to develop high-quality AI products.

Since 2021, Huawei has unveiled its vehicle-to-everything (V2X) platform and 5G vehicle communication software, while partnering with Audi to showcase an autonomous vehicle using Huawei’s AI chipset. Cheng noted that Huawei has accumulated a large volume of data on weather conditions, geographical locations and traffic status in different cities. This data can be used to improve ADAS and autonomous driving systems, allowing autonomous vehicles to operate in different scenarios, such as in the countryside or on highways and urban roads, he explained.

Lee Kai-Fu, Chairman and CEO of Sinovation Ventures, said big data is both a crucial part of autonomous driving and a major data management challenge for enterprises. He said the safe application of AI to autonomous vehicles is a major challenge for the automotive industry.

The era of fully intelligent vehicles?

As major automakers attempt to optimize ADAS systems by adding sensors to their vehicles, they are also paying increased attention to smart cockpits. Mercedes-Benz caught the eye at CES 2021 with a 56-inch Hyperscreen it plans to install in its cars, and some automakers have introduced personalized infotainment services and voice-assist systems that can prevent distracted driving.

According to IHS, the global smart cockpit market was valued at over US$40 billion in 2021, and the figure is expected to reach US$43.8 billion this year and US$68.1 billion in 2030. China accounted for 20% of the global market in 2021 and is expected to hold a 30% share in 2030, with a market size of CNY160 billion (US$24.16 billion).

In view of massive business opportunities associated with smart cockpits, Taiwanese companies Foxconn, Pegatron, AU Optronics, Innolux, Adlink, Macronix, Winbond, First International Computer and Trend Micro have all deepened their deployments in this area since last year.

Mindtronic AI CTO Mike Huang said most traditional automakers still approach smart cockpits from a hardware perspective. Many of them plan to integrate LCD screens, navigation systems and networking functions into vehicles, but few have provided practical solutions that make the cockpit genuinely intelligent, he underlined.

Huang said the concept of an intelligent cockpit is more than just integrating multiple functions and component modules. It’s a fully customized cockpit environment inside a vehicle, he pointed out.

In the past, traditional automakers enhanced the value of vehicles by upgrading hardware components such as the audio system and rearview mirror, Huang noted. However, since Tesla began exploring the relationship between in-vehicle hardware, drivers, and passengers in recent years, consumers and mainstream automakers alike have broadened their vision of what smart cockpits can be.

The intelligent assistant, human-machine co-driving, and the third living space are considered the three stages of the smart cockpit’s future development, reflecting how the relationship between driver and vehicle changes on the way from ADAS to fully autonomous driving. Although the automotive industry still has a long way to go before realizing fully autonomous driving, AI technologies are undoubtedly reshaping our understanding of the concept of driving.
