
AMD Confirms Next-Gen Instinct MI400 Series AI Accelerators Already In The Works

AMD sets its sights on an aggressive strategy in the AI industry as it discloses its plans for the next-gen MI400 Instinct AI accelerators.

AMD Aims At Improving Its Software Platform To Provide Enhanced Support For Generative AI Applications, Confirms Next-Gen Instinct MI400 Accelerators

AMD has officially confirmed its intention to develop the "MI400" Instinct series. The confirmation comes as no surprise, since Lenovo's VP had already confirmed that the next-gen accelerators are on AMD's agenda.

During the company's recent Q2 earnings call, CEO Lisa Su hinted at the future Instinct MI400 AI accelerators without disclosing any specifics, leaving something of a cliffhanger. But just like the MI300 series, the MI400 accelerators will also come in a range of configurations.

When you look across these workloads and the investments that we're making, not just today, but going forward with our next generation MI400 series and so on and so forth, we definitely believe that we have a very competitive and capable hardware roadmap. I think the discussion about AMD, frankly, has always been about the software roadmap, and we do see a bit of a change here on the software side.

Dr. Lisa Su (AMD CEO)

The CEO's statement highlights the fact that the Instinct lineup is well-equipped in the hardware department, offering top-of-the-line specs. However, AMD is seen as lacking on the software development side, which includes extensive support for generative AI applications. NVIDIA has been at the forefront for now, with the introduction of features such as "NVIDIA ACE" and "DLDSR". AMD plans on improving things on the software side, so we can expect major changes to be introduced by AMD, which will hopefully elevate the Instinct platform.

As previously discussed, apart from the next-gen MI400 Instinct lineup, AMD also disclosed plans to develop "cut-down" MI300 variants for the Chinese market in order to comply with US trade policies. While exact specifications aren't certain yet, we believe that Team Red will adopt an approach similar to the one NVIDIA took with its "H800 and A800" GPUs.

NVIDIA has largely been capitalizing on the AI "gold mine", reaching tremendous levels of sales and demand. Rivals like Intel and AMD have been late, but they could provide tough competition given that they bring better performance and value onboard.

AMD Radeon Instinct Accelerators

| Accelerator Name | AMD Instinct MI400 | AMD Instinct MI300 | AMD Instinct MI250X | AMD Instinct MI250 | AMD Instinct MI210 | AMD Instinct MI100 | AMD Radeon Instinct MI60 | AMD Radeon Instinct MI50 | AMD Radeon Instinct MI25 | AMD Radeon Instinct MI8 | AMD Radeon Instinct MI6 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CPU Architecture | Zen 5 (Exascale APU) | Zen 4 (Exascale APU) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| GPU Architecture | CDNA 4 | Aqua Vanjaram (CDNA 3) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Arcturus (CDNA 1) | Vega 20 | Vega 20 | Vega 10 | Fiji XT | Polaris 10 |
| GPU Process Node | 4nm | 5nm+6nm | 6nm | 6nm | 6nm | 7nm FinFET | 7nm FinFET | 7nm FinFET | 14nm FinFET | 28nm | 14nm FinFET |
| GPU Chiplets | TBD | 8 (MCM) | 2 (MCM), 1 (Per Die) | 2 (MCM), 1 (Per Die) | 2 (MCM), 1 (Per Die) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) |
| GPU Cores | TBD | Up To 19,456 | 14,080 | 13,312 | 6,656 | 7,680 | 4,096 | 3,840 | 4,096 | 4,096 | 2,304 |
| GPU Clock Speed | TBD | TBA | 1700 MHz | 1700 MHz | 1700 MHz | 1500 MHz | 1800 MHz | 1725 MHz | 1500 MHz | 1000 MHz | 1237 MHz |
| FP16 Compute | TBD | TBA | 383 TOPs | 362 TOPs | 181 TOPs | 185 TFLOPs | 29.5 TFLOPs | 26.5 TFLOPs | 24.6 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
| FP32 Compute | TBD | TBA | 95.7 TFLOPs | 90.5 TFLOPs | 45.3 TFLOPs | 23.1 TFLOPs | 14.7 TFLOPs | 13.3 TFLOPs | 12.3 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
| FP64 Compute | TBD | TBA | 47.9 TFLOPs | 45.3 TFLOPs | 22.6 TFLOPs | 11.5 TFLOPs | 7.4 TFLOPs | 6.6 TFLOPs | 768 GFLOPs | 512 GFLOPs | 384 GFLOPs |
| VRAM | TBD | 192 GB HBM3 | 128 GB HBM2e | 128 GB HBM2e | 64 GB HBM2e | 32 GB HBM2 | 32 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 | 4 GB HBM1 | 16 GB GDDR5 |
| Memory Clock | TBD | 5.2 Gbps | 3.2 Gbps | 3.2 Gbps | 3.2 Gbps | 1200 MHz | 1000 MHz | 1000 MHz | 945 MHz | 500 MHz | 1750 MHz |
| Memory Bus | TBD | 8192-bit | 8192-bit | 8192-bit | 4096-bit | 4096-bit | 4096-bit | 4096-bit | 2048-bit | 4096-bit | 256-bit |
| Memory Bandwidth | TBD | 5.2 TB/s | 3.2 TB/s | 3.2 TB/s | 1.6 TB/s | 1.23 TB/s | 1 TB/s | 1 TB/s | 484 GB/s | 512 GB/s | 224 GB/s |
| Form Factor | TBD | OAM | OAM | OAM | Dual Slot Card | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Half Length | Single Slot, Full Length |
| Cooling | TBD | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling |
| TDP (Max) | TBD | 750W | 560W | 500W | 300W | 300W | 300W | 300W | 300W | 175W | 150W |
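For reference, the bandwidth figures in the table follow directly from each card's effective memory data rate and bus width. The short Python sketch below reproduces a few of them; the function and the entries in the dictionary are ours, purely for illustration, and the small deviations from the table come down to rounding and whether a terabyte is counted as 1000 or 1024 gigabytes.

```python
# Minimal sketch: sanity-check the HBM bandwidth figures quoted in the table above.
# Peak bandwidth (GB/s) = effective data rate (Gbps per pin) * bus width (bits) / 8.

def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# Data rate and bus width pairs taken from the table (names for illustration only).
accelerators = {
    "Instinct MI300 (192 GB HBM3)":   (5.2, 8192),  # ~5.2 TB/s in the table
    "Instinct MI250X (128 GB HBM2e)": (3.2, 8192),  # ~3.2 TB/s
    "Instinct MI210 (64 GB HBM2e)":   (3.2, 4096),  # ~1.6 TB/s
}

for name, (rate, bus) in accelerators.items():
    bw = peak_bandwidth_gbs(rate, bus)
    print(f"{name}: {bw:,.0f} GB/s (~{bw / 1000:.1f} TB/s)")
```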

News Sources: Videocardz, SemiAnalysis
