The automotive industry is changing and development cycles are becoming shorter. To develop innovative technologies even more efficiently and quickly, German Tier 1 major Continental has invested in setting up its own supercomputer for Artificial Intelligence (AI), powered by Nvidia InfiniBand-connected DGX systems.
The system has been operating from a datacenter in Frankfurt am Main, Germany, since the beginning of 2020, offering computing power and storage to developers in locations worldwide. The AI enhances advanced driver assistance systems, makes mobility smarter and safer, and accelerates the development of systems for autonomous driving.
Christian Schumacher, head of Program Management Systems in Continental’s Advanced Driver Assistance Systems business unit, said: “The supercomputer is an investment in our future. The state-of-the-art system reduces the time to train neural networks, as it allows at least 14 times more experiments to be run at the same time. When searching for a partner, we look for two things: quality and speed. The project was set up with an ambitious timeline and implemented in less than a year. After intensive testing and scouting, Continental selected Nvidia, which powers many of the fastest supercomputers around the world.”
Manuvir Das, head of Enterprise Computing at Nvidia, said: “Nvidia DGX systems give innovators like Continental AI supercomputing in a cost-effective, enterprise-ready solution that’s easy to deploy. Using the InfiniBand-connected Nvidia DGX POD for autonomous vehicle training, Continental is engineering tomorrow’s most intelligent vehicles, as well as the IT infrastructure that will be used to design them.”
Among the Top 500 supercomputers in the automotive industry
Continental’s supercomputer is built with more than 50 Nvidia DGX systems, connected with the Nvidia Mellanox InfiniBand network. According to the publicly available Top 500 list of supercomputers, the company says it is the top-ranked system in the automotive industry. A hybrid approach was chosen so that capacity and storage can be extended through cloud solutions if needed. “The supercomputer is a masterpiece of IT infrastructure engineering. Every detail has been planned precisely by the team – in order to ensure the full performance and functionality today, with scalability for future extensions,” added Schumacher.
The Tier 1 says advanced driver assistance systems (ADAS) use AI to make decisions, assist the driver and ultimately operate autonomously. Environmental sensors such as radar and cameras deliver raw data. This raw data is processed in real time by intelligent systems to create a comprehensive model of the vehicle’s surroundings and to devise a strategy for interacting with the environment. Finally, the vehicle needs to be controlled to behave as planned. But as these systems become increasingly complex, traditional software development methods and machine learning methods have reached their limits. Deep Learning and simulations have become fundamental methods in the development of AI-based solutions.
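The processing chain described above – raw sensor data in, environment model, strategy, vehicle control out – can be sketched as a minimal loop. All names, thresholds and the fusion rule below are illustrative assumptions for this sketch, not Continental’s actual software.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentModel:
    """Simplified model of the vehicle's surroundings (illustrative)."""
    obstacle_distances_m: list  # fused distances to detected objects

def fuse_sensors(radar_m, camera_m):
    """Combine raw radar and camera readings into one environment model.

    As a conservative fusion strategy (an assumption for illustration),
    we take the per-object minimum of the two sensor distances.
    """
    fused = [min(r, c) for r, c in zip(radar_m, camera_m)]
    return EnvironmentModel(obstacle_distances_m=fused)

def plan(model):
    """Devise a strategy: brake if any obstacle is closer than 10 m."""
    return "brake" if any(d < 10.0 for d in model.obstacle_distances_m) else "cruise"

def control(action):
    """Translate the planned action into a target acceleration (stub)."""
    return {"brake": -3.0, "cruise": 0.0}[action]  # m/s^2

# One tick of the loop: raw sensor data in, actuator command out.
model = fuse_sensors(radar_m=[25.0, 8.5], camera_m=[24.0, 9.0])
action = plan(model)
print(action, control(action))  # brake -3.0
```

A production ADAS stack runs this loop continuously under hard real-time constraints, with the environment model and planner being the AI-heavy stages.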
With Deep Learning, an artificial neural network enables the machine to learn by experience and connect new information with existing knowledge, essentially imitating the learning process of the human brain. But while a child is capable of recognising a car after being shown a few dozen pictures of different car types, several thousand hours of training with millions of images, and therefore enormous amounts of data, are necessary to train a neural network that will later assist a driver or even operate a vehicle autonomously. The Nvidia DGX POD not only reduces the time needed for this complex process, it also reduces the time to market for new technologies.
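The principle of learning by experience, adjusting a model’s parameters iteratively from labelled examples, can be shown at toy scale. The sketch below trains a one-parameter logistic classifier on synthetic scalar data in pure Python; it is a deliberately tiny stand-in for the millions-of-images, GPU-cluster training the article describes, and all data and hyperparameters are invented for illustration.

```python
import math
import random

# Toy dataset: (feature, label) pairs. The true rule is "label 1 if
# x > 0.5" -- the model must discover this from examples alone, just
# as a perception network learns "car / not car" from labelled images.
random.seed(0)
data = [(x, 1 if x > 0.5 else 0) for x in [random.random() for _ in range(200)]]

w, b = 0.0, 0.0   # model parameters, start with no knowledge
lr = 0.5          # learning rate

def predict(x):
    """Sigmoid output in (0, 1): the model's confidence that label is 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Gradient descent: each pass over the data nudges the parameters to
# reduce the prediction error, i.e. the machine "learns by experience".
for epoch in range(500):
    for x, y in data:
        grad = predict(x) - y          # gradient of cross-entropy loss
        w -= lr * grad * x
        b -= lr * grad

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Scaling this same idea to deep networks with millions of parameters and millions of training images is what turns hours of GPU-cluster time into a driver-assistance model.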
Balazs Lorand, head of Continental’s AI Competence Center in Budapest, Hungary, who develops infrastructure for AI-based innovations together with his teams at Continental, said: “Overall, we estimate that the time needed to fully train a neural network will be reduced from weeks to hours. Our development team has been growing in numbers and experience over the past years. With the supercomputer, we are now able to scale computing power even better according to our needs and leverage the full potential of our developers.”
Driving 15,000km each day, or 50,000 hours of movies
The German technology company says that, to date, the data used for training these neural networks comes mainly from the Continental test vehicle fleet. At present, the vehicles drive around 15,000 test kilometres each day, collecting around 100 terabytes of data, the equivalent of 50,000 hours of movies. Already, the recorded data can be used to train new systems by being replayed, thereby simulating physical test drives. With the supercomputer, data can now be generated synthetically, a highly compute-intensive use case that allows systems to learn from travelling virtually through a simulated environment.
According to the company, this has several advantages for the development process. Firstly, over the long run it might make recording, storing and mining the data generated by the physical fleet unnecessary, as the necessary training scenarios can be created instantly on the system itself. Secondly, it increases speed: virtual vehicles can cover in a few hours the same number of test kilometres that would take a real car several weeks. Thirdly, the synthetic generation of data makes it possible for systems to process and react to changing and unpredictable situations. Ultimately, this will allow vehicles to navigate safely through changing and extreme weather conditions or make reliable forecasts of pedestrian movements, thus paving the way to higher levels of automation.
The ability to scale was one of the main drivers behind the conception of the Nvidia DGX POD. With this technology, machines can learn faster, better and more comprehensively than through any human-controlled method, with potential performance growing with every evolutionary step.
The new supercomputer is located in a datacenter in Frankfurt, chosen for its proximity to cloud providers and, more importantly, its AI-ready environment, which fulfils specific requirements regarding cooling systems, connectivity and power supply. Certified green energy is used to power the computer, and GPU clusters are by design much more energy-efficient than CPU clusters.