In March 2022, NVIDIA released its "nuclear-grade" GPU and a high-compute autonomous driving chip

NVIDIA's "nuclear-grade" GPU: 80 billion transistors, with 20 chips able to carry global internet traffic
On March 22, 2022, at NVIDIA GTC 2022, NVIDIA introduced the Hopper architecture, the H100 GPU, the metaverse, new supercomputers, software, data centers, and more.
NVIDIA founder and CEO Jensen Huang praised the "amazing" progress of AI technology in the opening keynote of this GTC conference, and looked ahead to how AI and Omniverse will combine the real and virtual worlds.
Huang committed to transforming industries worth trillions of dollars to meet the "major challenges" of the current era. He shared his vision for the new era: intelligent creation at industrial scale, truly integrating the real and virtual worlds.
At the conference, Huang also introduced a new generation of chips, including the new Hopper GPU architecture and the H100 GPU, along with new AI and accelerated-computing software and powerful new data-center-scale systems.
This time, a virtual environment generated by NVIDIA's Omniverse real-time 3D collaboration and simulation platform served as Huang's stage. He said, "Enterprise customers are actively processing and refining data, developing AI software, and gradually becoming intelligent manufacturers." At present, AI technology is "moving fast in every direction."
Omniverse brings it all together, accelerating human-AI collaboration, helping to better understand and model the real world, and serving as the proving ground for a new type of robotics, the "next wave of AI."
At the start of the keynote, a fly-through of NVIDIA's new campus, rendered entirely in Omniverse, showed laboratories working on advanced robotics projects.
Jensen Huang shared how NVIDIA is working with a broad ecosystem to save lives, and even the planet, by enabling healthcare and drug discovery.
Huang said, "Scientists predict that effectively simulating climate change in a specific region would require a supercomputer a billion times more powerful than today's."
"But NVIDIA has decided to take on this challenge with Earth-2 (the world's first AI digital twin supercomputer). New AI and computing technologies have been invented in the hope of preventing irreversible damage to the climate."
Major release: new chips based on the Hopper architecture
To advance these ambitious goals, Jensen Huang introduced the NVIDIA H100, based on the Hopper architecture, which he calls "a new engine for global AI infrastructure."
AI applications such as voice, conversation, customer service, and recommendation systems are driving fundamental changes in data center design.
Huang said, "AI data centers process vast amounts of continuous data to train and refine AI models. Raw input data is refined and gradually transformed into intelligent output, which enterprises can use to build and operate large-scale AI factories."
Such a factory operates around the clock at high intensity; even small improvements in quality significantly increase customer engagement and corporate profits.

H100 will help these factories accelerate their work. Built on TSMC's 4-nanometer process, the chip packs a "massive" 80 billion transistors, and NVIDIA says it is the most powerful GPU available. Huang said that 20 H100 GPUs could sustain the equivalent of global internet traffic.
Huang said, "Hopper H100 delivers the largest generational performance leap in history, with large-scale training performance up to 9 times that of the A100 and large-language-model inference throughput up to 30 times that of the A100."
Hopper's technological breakthroughs include a new Transformer Engine, which can deliver up to 6x acceleration for transformer networks without losing accuracy.
Huang noted that "the training period for transformer models will shrink from weeks to days." The H100 is now in production and is expected to be generally available in the third quarter.
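As a back-of-the-envelope illustration of that "weeks to days" claim (the 3-week baseline below is a hypothetical figure, not an NVIDIA number):

```python
# Rough sanity check of the "weeks to days" claim using the quoted
# 9x large-scale training speedup. The 3-week A100 baseline here is
# a hypothetical example for illustration only.
baseline_days = 21          # hypothetical 3-week training run on A100
h100_speedup = 9            # quoted H100-vs-A100 training speedup

h100_days = baseline_days / h100_speedup
print(round(h100_days, 1))  # about 2.3 days
```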
Jensen Huang also announced the Grace CPU Superchip, NVIDIA's first discrete data center CPU, designed for high-performance computing. It contains two CPU chips with 144 compute cores in total, connected by a 900 GB/s NVLink chip-to-chip interconnect, with 1 TB/s of memory bandwidth.
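The quoted specs yield a couple of derived figures (an illustrative sketch, not official data; the per-core share is a naive even division):

```python
# Model the quoted Grace CPU Superchip specs and derive two
# back-of-the-envelope figures. Naive arithmetic, for illustration only.
GRACE_DIES = 2                 # two CPU chips per superchip
TOTAL_CORES = 144              # compute cores across both dies
NVLINK_C2C_GB_S = 900          # chip-to-chip NVLink bandwidth, GB/s
MEM_BANDWIDTH_GB_S = 1000      # quoted 1 TB/s memory bandwidth, in GB/s

cores_per_die = TOTAL_CORES // GRACE_DIES
per_core_mem_gb_s = MEM_BANDWIDTH_GB_S / TOTAL_CORES

print(cores_per_die)                 # 72 cores per die
print(round(per_core_mem_gb_s, 2))   # ~6.94 GB/s per core, naively
```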
"Grace is an ideal CPU for global AI infrastructure."
Huang announced several new AI supercomputers based on the Hopper GPU: the DGX H100, the H100 DGX POD, and the DGX SuperPOD.
To connect all of this, NVIDIA's new NVLink high-speed interconnect technology will be used across all subsequent NVIDIA chips, including CPUs, GPUs, DPUs, and SoCs.
Huang said NVIDIA will make NVLink available to customers and partners building companion chips. "NVLink opens a new world of possibilities for customers to build semi-custom chips and systems using the NVIDIA platform and ecosystem."
New software: AI has "completely changed" the face of software
The maturity of accelerated computing and AI has put AI on an "amazing" trajectory.
"AI has fundamentally changed what software can achieve and the fundamental way software is developed."
Transformers opened the door to self-supervised learning, removing traditional AI's heavy dependence on manually labeled data. As a result, transformers are now appearing everywhere.
"Transformers enable self-supervised learning, which drives the rapid development of AI technology."
Whether it is Google's BERT for language understanding, NVIDIA's MegaMolBART for drug discovery, or DeepMind's AlphaFold2 for predicting protein structure, all of them trace back to the wave of breakthroughs brought about by transformers.
New deep learning models have delivered results in fields such as natural language understanding, physics, creative design, character animation, and even NVCell chip layout.
"AI is moving fast in all directions: new architectures, new learning strategies, bigger and stronger models, new science, new applications, new industries. Every field is advancing in parallel." NVIDIA is moving at full speed to accelerate new breakthroughs in AI, aiming to help put AI and machine learning to practical use across industries.
The NVIDIA AI platform is going through a major update, including the Triton inference server, the NeMo Megatron 0.9 framework for training large language models, and the Maxine framework for audio and video quality enhancements, among others.
Also included in the platform is NVIDIA AI Enterprise 2.0, an end-to-end, cloud-native suite of AI and data analytics tools and frameworks that has been optimized and certified by NVIDIA and now supports all major data center and cloud platforms.
Huang noted that 60 SDK updates were announced at this GTC, so NVIDIA could tell its 3 million developers, scientists, and AI researchers, and tens of thousands of startups, that their NVIDIA systems now run faster.
NVIDIA AI software and accelerated computing SDKs have been implemented in many companies around the world. Microsoft Translator improves global communication efficiency with real-time translation powered by NVIDIA Triton.
AT&T uses NVIDIA RAPIDS software to accelerate its internal data science teams, making it easy to process trillions of message records. "NVIDIA SDKs now serve industries worth a combined $100 trillion, such as healthcare, energy, transportation, retail, finance, and media and entertainment."

The next evolutionary direction: the metaverse and the virtual world
Half a century ago, the Apollo 13 lunar mission suffered an accident. To save the crew, NASA engineers built a model of the crew capsule on Earth and used it to explore possible rescue methods.
"The idea of digital twins is to scale this up: create a virtual world that is connected to the physical world. In the context of today's internet, this is undoubtedly the direction of the next wave of evolution."
NVIDIA's Omniverse software is designed for building digital twins, and the new data-center-scale NVIDIA OVX system will be an integral part of "action-oriented AI."
Jensen Huang presented a new version of Omniverse at the conference. Introducing the updates, he said, "Omniverse is central to our robotics platforms. Like NASA and Amazon, NVIDIA's customers in robotics and industrial automation deeply understand the importance of digital twins and Omniverse."

The OVX system will be the operational vehicle for Omniverse digital twins, responsible for running large-scale simulations of multiple autonomous systems in the same space and time.
The backbone of OVX is its network fabric, built on the NVIDIA Spectrum-4 high-performance data networking infrastructure platform announced at the conference.
As the world's first 400 Gbps end-to-end networking platform, NVIDIA Spectrum-4 consists of Spectrum-4 series switches, NVIDIA ConnectX-7 SmartNIC, NVIDIA BlueField-3 DPU, and NVIDIA DOCA data center infrastructure software.
To give more users access to Omniverse, Huang also announced Omniverse Cloud at the conference. With just a few clicks, collaborators can now connect to the cloud and participate in Omniverse.
Huang also demonstrated how four designers, one of them an AI "designer," can collaboratively build a virtual world.
He described how Amazon uses Omniverse Enterprise to "design and optimize extremely complex logistics center operations."
"The modern logistics center is itself a technological marvel; its operation is carried out by humans and robots together."
Robotics and autonomous driving will set off the next wave of AI

New chips, new software, and new simulation capabilities, integrated together, are bound to set off "the next wave of AI." The resulting robots will have the ability to "design, plan, and act."
NVIDIA Avatar, DRIVE, Metropolis, Isaac, and Holoscan are end-to-end, full-stack robotics platforms built around "four pillars": real-world data generation, AI model training, the robotics stack, and Omniverse digital twins.
Among them, NVIDIA's DRIVE self-driving car system is essentially an "AI driver".
NVIDIA's Hyperion 8, the hardware architecture used to build DRIVE autonomous vehicles, is capable of full self-driving with a suite of 360-degree cameras, radar, lidar, and ultrasonic sensors. Hyperion 8 will appear in Mercedes-Benz vehicles from 2024, followed by the Jaguar Land Rover lineup a year later.
Jensen Huang also announced that NVIDIA Orin, the centralized AV and AI computer serving as the self-driving engine for next-generation electric cars, robotaxis, shuttles, and cargo trucks, will start shipping this month.
Also announced was Hyperion 9, powered by the upcoming DRIVE Atlan SoC, delivering twice the performance of the previous-generation, DRIVE Orin-based Hyperion 8. It is planned for launch in 2026.
BYD, the world's second-largest electric vehicle maker, will use the DRIVE Orin computing device in its vehicles starting in the first half of 2023. Lucid Motors revealed that the DreamDrive Pro advanced driver assistance system is based on NVIDIA DRIVE.
Overall, NVIDIA's automotive pipeline is expected to grow to more than $11 billion over the next six years.
Clara Holoscan brings much of DRIVE's real-time computing capability to medical devices and real-time sensors, for use cases such as RF ultrasound, 4K surgical video, high-energy cameras, and laser guidance.
Jensen Huang also showed a video of Holoscan-accelerated imaging, in which footage from a laser microscope was transformed into "movies" of cell movement and division.
Such an instrument produces 3 terabytes of data in an hour, and the corresponding processing cycle often takes a full day.
But at UC Berkeley's Center for Advanced Bioimaging, Holoscan was able to help researchers process this data in real-time, ensuring that the microscope would continue to autofocus during experiments.
The Holoscan development platform is currently open to early users and is planned to be officially launched in May 2022, with medical-grade applications tentatively scheduled for the first quarter of 2023.
NVIDIA also works with customers and developers to build robotics solutions for manufacturing, retail, healthcare, agriculture, construction, airports, and municipal governance.
NVIDIA's robotics platforms include Metropolis, for stationary systems that track moving objects, and Isaac, for platforms that move and carry objects.
To help robots navigate indoor spaces such as factories and warehouses, Nvidia has released the Isaac Nova Orin based on the Jetson AGX Orin. This is an advanced computing and sensor reference platform that accelerates the development and deployment of autonomous mobile robots.
In a video, Jensen Huang showed how PepsiCo uses both Metropolis and Omniverse digital twins.
Four-layer stack, five driving forces
At the end of the speech, Jensen Huang connected all the technical achievements, product releases, and demonstrations to NVIDIA's next-generation computing strategy.
NVIDIA announced new products across its four-layer stack: hardware; system software and libraries; software platforms such as NVIDIA HPC, NVIDIA AI, and NVIDIA Omniverse; and frameworks for AI and robotics applications.
Huang outlined the five driving forces reshaping the industry: million-x faster computing, transformers accelerating AI, data centers becoming AI factories, exponentially growing demand for robotic systems, and digital twins for the next generation of AI.
"We will continue to work hard over the next decade to achieve another million-x speedup across the full stack and at data center scale. I can't wait to see what new possibilities the next wave of million-x speedups will bring."
Closing the keynote, Huang noted that "every rendering, every simulation you see here" was generated by Omniverse. NVIDIA's creative team invited the audience to "experience Omniverse again": the equipment on the NVIDIA campus "came to life" and played a jazz piece together. Naturally, Huang's digital avatar, Toy Jensen, also appeared, and Huang held a question-and-answer dialogue with this cute miniature version of himself.
NVIDIA's self-driving chips: Orin in production ahead of schedule, Hyperion 9 waiting in the wings
NVIDIA gave automakers reassurance: not only is the Orin chip not delayed, the company also launched DRIVE Hyperion 9, a new-generation autonomous driving platform with double the performance.
On the evening of March 22, NVIDIA founder Jensen Huang announced at GTC 2022 that the Orin autonomous driving chip officially entered production and sale this month. NVIDIA also launched DRIVE Hyperion 9, a new-generation autonomous driving platform based on the Atlan chip, planned for mass production in 2026.
NVIDIA also announced two new automaker partners: BYD and Lucid Group. At present, more than 25 car companies and autonomous driving companies have chosen NVIDIA; these partners will contribute more than $11 billion in revenue to NVIDIA over the next six years.
01 Orin enters mass production ahead of schedule, Hyperion 9 waits in the wings
NVIDIA controls both the brain of autonomous driving and its nervous system.
The relationship between autonomous driving chips, platforms, and vehicle models is not easy to grasp, so NVIDIA offers a vivid metaphor: the car is the body, the autonomous driving platform the nervous system, and the autonomous driving chip the brain.
At this GTC 2022 conference, the Hyperion 9 autonomous driving platform released by Huang belongs to the "nerve" category, while the "brain" supporting the platform, the autonomous driving chip, is Atlan.

NVIDIA Hyperion 9

Compared with the current 8th-generation platform, the most obvious change in the Hyperion 9 autonomous driving platform is a much larger complement of perception hardware, up to 50 sensors. On the exterior these include 14 cameras, 9 millimeter-wave radars, 3 lidars, and 20 ultrasonic radars; the interior supports 3 cameras and 1 millimeter-wave radar.

Hyperion 9 carries 17 more sensors than the 8th-generation platform, and the amount of data it generates will be more than double. In this sense, Hyperion 9 offers twice the performance of the 8th-generation platform.
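The sensor counts above can be tallied to confirm they sum to the quoted 50 (a trivial sketch, just checking the arithmetic):

```python
# Tally the Hyperion 9 sensor suite as quoted above.
exterior = {"camera": 14, "mmwave_radar": 9, "lidar": 3, "ultrasonic": 20}
interior = {"camera": 3, "mmwave_radar": 1}

total_sensors = sum(exterior.values()) + sum(interior.values())
print(total_sensors)  # 50, matching the quoted figure
```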
Hyperion 9 will support L3 autonomous driving on the road and L4 automated parking.
Atlan chip
Hyperion 9 uses the Atlan chip, which was unveiled at GTC 2021. Relative to the existing Orin chip, the overall architecture has changed substantially: it integrates a Grace-Next CPU and Ampere-Next GPU units and, for the first time, a BlueField data processing unit (DPU) to assist AI computation and strengthen autonomous driving capability.
Although NVIDIA has not announced specific core parameters for each module, in terms of computing power the Atlan chip targets 1,000 TOPS, against 254 TOPS for the Orin chip, nearly a four-fold increase.
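The quoted figures work out as follows (simple arithmetic on the published numbers, not official data):

```python
# Compare Atlan's quoted target compute with shipping Orin.
ATLAN_TOPS = 1000   # Atlan target computing power
ORIN_TOPS = 254     # Orin computing power

ratio = ATLAN_TOPS / ORIN_TOPS
print(round(ratio, 2))  # about 3.94, i.e. nearly a four-fold jump
```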
In terms of delivery time, the delivery time of NVIDIA's Atlan chip is expected to be in 2025, and the delivery time of Hyperion 9 autonomous driving platform is expected to be in 2026.

Beyond the "nerves" and "brain" required for autonomous driving, Huang also announced the DRIVE Map multi-modal map engine at the conference, a map data platform serving the "brain's" decision-making. Although Huang did not explicitly state the map's precision class, judging by its accuracy and functions it is a platform for collecting, producing, and updating high-precision maps.
DRIVE Map
DRIVE Map includes camera, radar and lidar data for different levels of autonomous driving perception.
The camera data corresponds to the visual perception hierarchy, and DRIVE Map will provide data such as lane separators, road markings, road boundaries, traffic lights, signs and poles.
At the millimeter-wave radar level, it provides aggregated point-cloud data from radar echoes, making autonomous driving safer in harsh weather. At the lidar level, it provides accurate, reliable 3D environmental data with 5 cm accuracy.
DRIVE Map will have two map engines: the DeepMap engine and a crowdsourced fleet map engine. The two engines serve different functional needs, balancing data accuracy, data freshness, and scale.
NVIDIA plans to complete the creation of DRIVE Maps for major roads in North America, Western Europe, and Asia by the end of 2024, and the total road mileage will reach 500,000 kilometers.
The data generated by DRIVE Map will be imported into NVIDIA Omniverse to build a digital twin in the virtual world for training the autonomous driving engine.
02 Winning BYD and Lucid, with a commanding market lead
"Today, I am pleased to announce that BYD, the world's second-largest electric vehicle manufacturer, will start production of vehicles equipped with the DRIVE Orin computing platform in the first half of 2023." Alongside the new technology releases, Jensen Huang also announced new automaker partnerships.
In addition to BYD, companies announcing cooperation with NVIDIA at this conference include Lucid, WeRide, DeepRoute.ai, Yunji Zhixing, Outrider, and U Power.
NVIDIA positions itself to serve L3-and-above intelligent driving. As the inventor of the GPU, it dominates the market for GPUs used in automotive main-control chips, maintaining a market share of around 70%.
NVIDIA's customers

NVIDIA's customers in the automotive world fall roughly into three categories:

First, the new carmakers, including NIO (ET5, ET7), XPeng (P5, P7, G9), Li Auto (X01), WM Motor (M7), SAIC IM, R Auto, FF, Lucid Group, and others;

Second, traditional automakers, including BYD, Mercedes-Benz, Jaguar Land Rover, Volvo, Hyundai, Audi, Lotus, and others;

Third, autonomous driving companies, including GM Cruise, Amazon Zoox, DiDi, Volvo's commercial vehicle arm, Kodiak, TuSimple, Plus, AutoX, Pony.ai, WeRide, DeepRoute.ai, and others.
NVIDIA has been able to win a large number of customers quickly because companies that want to develop L3-and-above intelligent driving have few other chips to choose from.
As early as 2015, NVIDIA launched the NVIDIA Drive series platform to empower the autonomous driving ecosystem.
At CES 2015, NVIDIA launched the first-generation platform based on NVIDIA's Maxwell GPU architecture: DRIVE CX with one Tegra X1, mainly for digital cockpits; and DRIVE PX with two Tegra X1s, mainly for autonomous driving.
Since then, NVIDIA has updated the DRIVE platform once or twice a year and released an automotive-grade SoC every two years, steadily raising compute levels.
In 2020, the Xavier chip delivered 30 TOPS of computing power; Orin, entering mass production in 2022, jumps to 254 TOPS.
At NIO Day 2021, the NIO ET7 was officially unveiled and announced as the first mass-produced car to use Orin.
Subsequently, models including the NIO ET7, IM Motors' vehicles, and the WM Motor M7 announced configurations with four Orin chips, for a total computing power of more than 1,000 TOPS. With deliveries of the NIO ET7 beginning at the end of this month, mass-produced cars will enter the 1,000-TOPS era for the first time.
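The quad-Orin arithmetic behind that 1,000-TOPS figure (a simple check of the quoted numbers):

```python
# Aggregate compute of a quad-Orin configuration (e.g. the NIO ET7).
ORIN_TOPS = 254      # per-chip computing power, as quoted
CHIPS_PER_CAR = 4    # quad-Orin configuration

total_tops = ORIN_TOPS * CHIPS_PER_CAR
print(total_tops)  # 1016 TOPS, just over the 1000-TOPS mark
```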
The delivery of the NIO ET7 with NVIDIA's Orin chip will be a milestone for electric vehicles. The computing power of the autonomous driving chip will replace the horsepower figures of traditional fuel vehicles as the new competitive benchmark in the automotive industry.
Whether from Mobileye, Huawei, or Horizon Robotics, the chips in mass production today mostly offer a few dozen TOPS each, leaving a wide gap. The closest competitor, Qualcomm's Snapdragon Ride, which covers 10 to 700 TOPS, will not be available until 2023.
Not only that, NVIDIA is extending its lead.
At GTC 2021, NVIDIA unveiled Atlan, with a single-chip computing power of 1,000 TOPS. According to NVIDIA's plan, Atlan will sample to developers in 2023, with mass production starting in 2025.
Beyond computing power, car companies and autonomous driving companies choose NVIDIA for its open, efficient R&D ecosystem. Its specific advantages fall into five areas:
1. Software and hardware are decoupled and can be upgraded independently, supporting separate hardware and software upgrade paths;
2. As the GPU leader, NVIDIA has an obvious hardware advantage;
3. NVIDIA offers the industry's most complete official development kits;
4. The software stack is highly open, exposing APIs in DriveWorks (the functional software layer) as well as in DRIVE AV and DRIVE IX (the application software layer);
5. R&D lock-in: its deep learning acceleration is built entirely on NVIDIA's own CUDA and TensorRT, so customers' software development and R&D systems cannot easily leave the NVIDIA platform.
The superposition of these advantages makes NVIDIA the best choice for companies pursuing high-level autonomous driving.
03 The NVIDIA era?
In 2022, with the mass production and delivery of the NIO ET7, the autonomous driving chip market enters a high-compute era led by NVIDIA.
For competitors, the frightening thing is not just NVIDIA's leading market share, but that it is becoming synonymous with high-end reliability.
But after winning the crown, Nvidia will also continue to face two major challenges.
First, the competitors currently trailing NVIDIA will keep chasing hard, looking for any market they can break into. For example, the domestic autonomous driving chip dark horse Horizon Robotics has partnered with a number of domestic automakers to enter the mid-to-low-end market and attack from the bottom up.
Second, automakers' own self-revolution. Some automakers, citing Tesla and Apple as examples, have put in-house chip development on the agenda. Shen Yanan, president of Li Auto, once described the logic of automakers' chip strategy: first master domain controller hardware, then master the operating system, and only then develop a good chip.
The war over autonomous driving chips is therefore more like a contest of staying power. NVIDIA's lead can be read as victory in one stage of the contest, but the contest is far from over.

Reference link:
https://blogs.nvidia.com/blog/2022/03/22/ai-factories-hopper-h100-nvidia-ceo-jensen-huang/
https://mp.weixin.qq.com/s/FbWBYn6SLRkeLdgTIwJL-w

Origin blog.csdn.net/wujianing_110117/article/details/123701287