
Getting hands-on with AI in automotive

From cloud to edge, hardware-agnostic AI/ML

In January, Canonical will be back at CES 2025, where we will be showing a cutting-edge AI/ML demo of how AI models can be trained, deployed, and updated seamlessly across various hardware platforms. This demonstration is proof of our commitment to building hardware-agnostic solutions, empowering automotive manufacturers to integrate AI into their systems across various vehicle configurations, without being tied to any specific silicon vendor. Whether it's predictive maintenance, in-car personalization, or even AD/ADAS features, our flexible AI/ML stack is designed to meet the needs of AI in automotive.

As cars become smarter and more connected, demand for AI capabilities in the automotive industry has grown exponentially. One of the main challenges, however, is the wide range of electrical and electronic (E/E) vehicle architectures used by different manufacturers and suppliers. Solving this hardware dilemma requires a truly platform-agnostic AI stack, one that lets companies choose the best hardware for their needs, whether NVIDIA's powerful GPUs, Intel processors, or Qualcomm chipsets, without sacrificing performance or compatibility.

One of the most significant challenges in deploying automotive AI models at the edge is hardware fragmentation. OEMs and Tier 1s often work with different types of hardware across their supply chains, making it difficult to maintain a consistent AI experience. A hardware-agnostic AI stack solves this problem by running seamlessly across various platforms, reducing the complexity of integrating AI in automotive.

One of the highlights of our CES 2025 demo will be a real-world use case demonstrating AI-driven fault detection in automotive manufacturing, from cloud-based model training to real-time deployment at the edge. The demo focuses on how AI can give quality assurance (QA) teams an advantage by processing and analyzing data efficiently, ensuring that parts meet high quality standards.

In this demo, we simulate a manufacturing environment where 3D-printed gears are produced for automotive use. Some of these gears are correctly printed, while others are defective due to printing errors. The AI model, trained for object recognition, is tasked with differentiating between well-manufactured and defective parts. Automatically identifying defective gears on the production line reduces the need for manual inspection and increases operational efficiency.

The model itself is trained on a dataset of images of good and defective gears, in an Ubuntu-based cloud environment running Kubeflow and MLflow on top of MicroK8s.
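
As a rough illustration of this cloud training setup (not the actual demo code), the sketch below shows how such a training run could be tracked with MLflow. The tracking server URI, experiment name, metric names, and the train_gear_classifier helper are all hypothetical placeholders.

```python
# Illustrative sketch only: tracking a gear-classifier training run with MLflow.
# Server URI, experiment name, and metrics are placeholders, not the demo's real values.
import mlflow
import mlflow.pytorch

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # assumed MLflow server in the cloud cluster
mlflow.set_experiment("gear-fault-detection")

with mlflow.start_run(run_name="resnet18-gears"):
    # Hyperparameters for the run (placeholder values)
    mlflow.log_params({"epochs": 10, "lr": 1e-3, "batch_size": 32})

    # Hypothetical training helper (defined elsewhere); returns the model and its metrics
    model, metrics = train_gear_classifier(epochs=10, lr=1e-3, batch_size=32)

    # Per-epoch validation accuracy plus a final test score
    for epoch, acc in enumerate(metrics["val_accuracy"]):
        mlflow.log_metric("val_accuracy", acc, step=epoch)
    mlflow.log_metric("test_accuracy", metrics["test_accuracy"])

    # Store the trained PyTorch model as a versioned artifact for later packaging
    mlflow.pytorch.log_model(model, artifact_path="model")
```

Tracking runs this way keeps every model version, its hyperparameters, and its metrics in one place, which is what makes the retrain-and-redeploy loop described below manageable.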

Through transfer learning, the model quickly adapts and improves its fault detection capability; manufacturers can accelerate training by building on pre-trained models tailored to their specific use case. If the model misclassifies any parts, this feedback is used to retrain and refine it, and updates are deployed back to the edge over the air (OTA), ensuring that manufacturers always have the most accurate version of the model running in their production environment.
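
For a concrete picture of the transfer learning step, here is a minimal sketch assuming a PyTorch/torchvision stack rather than the demo's exact code: an ImageNet-pretrained ResNet-18 backbone is frozen and a new two-class head is trained on images of good and defective gears. The folder layout, paths, and hyperparameters are assumptions.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed, not the demo's actual code).
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumed directory layout: gears/train/{good,defective}/*.jpg
# Reviewed misclassified parts from the line are simply added to these folders before retraining.
train_set = datasets.ImageFolder("gears/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                      # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)        # new head: good vs. defective

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "gear_classifier.pt")  # artifact to package for OTA delivery
```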

Once trained, the model is containerized and deployed to edge devices located on the factory floor. These edge devices run Ubuntu and process real-time data from cameras that scan the gears as they move down the production line. The AI model evaluates each part in real time, flagging defective parts for removal while passing those that meet the quality standards. This edge-based deployment is particularly important in the automotive industry, where low latency and immediate decision-making are critical to maintaining production efficiency.
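
A minimal sketch of what such an edge inference loop could look like, assuming OpenCV for camera capture and ONNX Runtime for inference; the model filename, camera index, and label order are assumptions, not the demo's actual code:

```python
# Illustrative edge-inference loop: an Ubuntu edge device reads frames from a camera
# watching the production line and classifies each gear with ONNX Runtime.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("gear_classifier.onnx")   # model file delivered via OTA update (assumed name)
input_name = session.get_inputs()[0].name

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize, normalize, and reorder a BGR camera frame into the 1x3x224x224 tensor the model expects."""
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = rgb.astype(np.float32) / 255.0
    x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array([0.229, 0.224, 0.225], dtype=np.float32)
    return x.transpose(2, 0, 1)[np.newaxis, :]           # HWC -> NCHW, add batch dimension

camera = cv2.VideoCapture(0)                              # assumed camera index on the line
while True:
    ok, frame = camera.read()
    if not ok:
        break
    logits = session.run(None, {input_name: preprocess(frame)})[0]
    defective = int(np.argmax(logits)) == 1               # class 1 = defective (assumed label order)
    if defective:
        print("Defective gear detected, flagging part for removal")
```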

Partnering with industry leaders

Our CES 2025 demo will also highlight our ongoing partnerships with major players in the semiconductor industry, including NVIDIA and Intel. Working closely with NVIDIA, we’ve developed AI/ML solutions that harness the latest GPU technologies, allowing for rapid model training and edge deployment that meets the high standards of AI in automotive.

By supporting multiple architectures, Canonical ensures that manufacturers have the freedom to choose the best tools for their specific use cases. This hardware-agnostic approach is a key theme throughout our AI/ML demo and will be a core focus at our CES 2025 booth.

Other potential use cases

Our demo clearly shows how this flexibility enables developers to deploy AI models seamlessly across a wide range of devices. For example, with object recognition for advanced driver assistance systems (ADAS), AI models trained in the cloud can be optimized for edge deployment, ensuring smooth performance in vehicles with limited computing resources. This eliminates the headache of developing AI solutions that are tied to a single hardware vendor, a huge advantage for automotive manufacturers working with different semiconductor partners.

As mentioned above, Canonical’s AI/ML stack allows developers to train models in the cloud and optimize them for deployment at the edge, such as inside vehicles. This process ensures that the AI models run efficiently on edge devices, where computing power is often more limited.
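
One common way to perform that optimization, sketched here under the assumption that the cloud-trained model is the PyTorch classifier from earlier, is to export it to ONNX so it can run on lightweight, hardware-accelerated runtimes on edge devices:

```python
# Sketch: export a cloud-trained PyTorch classifier to ONNX for constrained edge hardware.
# File names and the two-class ResNet-18 architecture are assumptions carried over from earlier sketches.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("gear_classifier.pt", map_location="cpu"))
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)                 # matches the preprocessing used at inference time
torch.onnx.export(model, dummy_input, "gear_classifier.onnx",
                  input_names=["input"], output_names=["logits"])
```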

With support for Kubernetes and containerization, our AI/ML solution offers automotive companies the flexibility to train models in different environments. This flexibility not only accelerates development but also ensures that AI models can be easily updated, scaled, or rolled back as needed. Moreover, our OTA update approach makes it easy to deploy software updates to vehicles securely and efficiently, a critical feature for maintaining up-to-date AI models in the field.

As AI continues to reshape the automotive industry, Canonical’s hardware-agnostic AI/ML stack is positioned to lead the way. From seamless deployment across diverse hardware platforms to real-world applications in vehicles, our demo at CES 2025 will illustrate how AI/ML is driving the future of automotive. Join us at our booth (#10277 in the North Hall) to experience these innovations in action and learn how our solutions can help you build the next generation of intelligent vehicles.

Don’t miss out on CES 2025—let’s drive the future of AI in automotive together.

To learn more about Canonical and our engagement in automotive: 

Contact Us

Check out our webpage

Watch our webinar with Elektrobit about SDV

Download our whitepaper on V2X (Vehicle-to-Everything)


