Nvidia Reports $2,300 Profit Every Second Amid Surging AI Demand
Brief news summary
Nvidia is seeing remarkable financial growth, generating $2,300 in profit every second during the current AI surge. Its data center revenue now exceeds that from gaming GPUs, prompting the debut of new AI-centric GPUs, including the Blackwell Ultra GB300, Vera Rubin, and Rubin Ultra, set for release from later this year through 2027. The Blackwell Ultra is engineered for demanding AI workloads, targeting 20 petaflops with expanded memory and networking features and marking a clear step up from the H100 chip released in 2022. A desktop system, the DGX Station, will incorporate a single Blackwell Ultra chip. Further out, the Vera Rubin is projected to reach roughly 50 petaflops, while the Rubin Ultra targets 100 petaflops of FP4 performance, and a full NVL576 rack is designed to reach 15 exaflops of FP4 inference. CEO Jensen Huang emphasizes the growing demand for cutting-edge computing, reflecting strong market interest in Nvidia's offerings, and a new architecture named Feynman is slated for launch in 2028.

Nvidia is currently generating $2,300 in profit every second, driven primarily by demand for AI technology. Its data center segment has grown so large that revenue from networking hardware alone now surpasses revenue from gaming GPUs. To maintain its market lead, the company has introduced new AI GPUs: the Blackwell Ultra GB300, due in the latter half of this year; the Vera Rubin, expected in the second half of next year; and the Rubin Ultra, set to launch in 2027. The Blackwell Ultra falls short of earlier expectations for a faster AI-chip production cycle, but its specifications are noteworthy: a single Blackwell Ultra chip delivers the same 20 petaflops of AI performance as its predecessor while increasing memory to 288GB of HBM3e.
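As a rough sanity check on the headline number, the per-second profit rate annualizes to roughly $72.5 billion. A minimal sketch of the arithmetic (the per-second figure comes from the article; the annualization is our own back-of-the-envelope calculation):

```python
# Annualize the reported $2,300-per-second profit figure.
profit_per_second = 2_300                 # USD, as reported in the article
seconds_per_year = 60 * 60 * 24 * 365    # non-leap year

annual_profit = profit_per_second * seconds_per_year
print(f"${annual_profit / 1e9:.1f} billion per year")  # → $72.5 billion per year
```

This lines up with Nvidia's profit being dominated by data center demand rather than gaming.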
The Blackwell Ultra DGX GB300 “Superpod” cluster retains a similar setup to the earlier version, but with enhanced memory capacity. Compared with the H100, the chip that powered Nvidia's AI success, the Blackwell Ultra offers 1.5 times the FP4 inference speed and improved AI reasoning capabilities, significantly reducing response times for computational tasks.
Additionally, Nvidia has unveiled a desktop model, the DGX Station, built around a single GB300 Blackwell Ultra chip, with major vendors such as Asus and Dell expected to offer their own versions. Looking ahead, the Vera Rubin and Rubin Ultra architectures promise substantial performance gains: Vera Rubin is expected to provide 50 petaflops of FP4, and the Rubin Ultra is projected to hit 100 petaflops by combining two Rubin GPUs. A complete NVL576 rack of Rubin Ultra chips is claimed to deliver 15 exaflops of FP4 inference. Nvidia has already booked $11 billion in Blackwell sales, with substantial demand from major buyers. At the Nvidia GPU Technology Conference, founder Jensen Huang stressed the growing need for computing power, countering recent assumptions of declining demand, and hinted at a new architecture called Feynman, named after the famed physicist Richard Feynman, to follow Vera Rubin.