
In a keynote lasting more than two hours, Huang not only unveiled the latest technologies but also delivered a financial forecast that gave investors an adrenaline rush. He recalled that last year, market demand for Blackwell and Rubin chips through 2026 was estimated at roughly $500 billion. Now, standing at a new milestone, he has doubled that figure.

Nvidia CEO Jensen Huang during a keynote at Nvidia's GTC Conference on March 16, 2026 in San Jose, California.

“Five hundred billion is a stellar number,” Huang said in his speech. “But today, I'm here to tell you that looking ahead to 2027, what I see is at least $1 trillion in demand.”

This bold announcement is backed by Nvidia's relentless pace of technological iteration. The Vera Rubin architecture, unveiled in 2024 as the company's newest AI chip, was hailed by Huang as the new pinnacle of AI hardware. According to Nvidia's official data, the Rubin architecture will be 3.5 times faster than its predecessor, Blackwell, on model training tasks, and five times faster on inference tasks, with peak performance reaching up to 50 petaflops. Such a significant leap in performance is the core driver compelling global tech giants to keep placing orders.
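As a rough illustration of what those multipliers mean in practice, here is a minimal sketch. The speedup factors come from the article; the workload sizes (a 700-hour training run, a 10-hour inference batch) are hypothetical examples, not Nvidia figures:

```python
# Speedup factors Nvidia cites for Rubin over Blackwell (from the article).
TRAIN_SPEEDUP = 3.5   # model training tasks
INFER_SPEEDUP = 5.0   # inference tasks

def rubin_time(blackwell_hours: float, speedup: float) -> float:
    """Projected wall-clock hours on Rubin for a job that takes
    `blackwell_hours` on Blackwell, assuming the quoted speedup holds."""
    return blackwell_hours / speedup

# Hypothetical workloads, chosen only to make the ratios concrete.
print(rubin_time(700, TRAIN_SPEEDUP))  # 200.0 hours instead of 700
print(rubin_time(10, INFER_SPEEDUP))   # 2.0 hours instead of 10
```

The point of the sketch is simply that a 3.5x to 5x generational gain compresses multi-week jobs into days, which is the economics behind the order backlog the article describes.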

Nvidia has made it clear that it expects to significantly ramp up production of Rubin chips in the second half of this year to meet surging market demand. This trillion-dollar outlook will undoubtedly further cement Nvidia’s dominance in the AI computing landscape.

By admin