NVIDIA
Short

As NVIDIA faces more risks, questions remain unanswered

MSFT NVDA GOOG AMZN INTC AMD
As the broader market braces for a recession, one of the "Magnificent 7" is fighting hard to keep a stiff upper lip. But NVIDIA is heading for turmoil, not only because of the signs of the times, but also because of its own decisions.

Background

In my view (I have worked in IT for 15 years), the chip designer is overstretching its capacity with the following two moves.

Planning to release not just a new GPU, but a whole new GPU architecture, every year

While GPUs have become an essential part of many cutting-edge technologies beyond graphics, every new architecture means writing drivers (software that acts as an interpreter between the operating system and the device) for the variety of operating systems currently on the market, which are, in this order, Linux, Android and Windows.

The Linux community already resents NVIDIA's closed-source, proprietary driver licensing model: the drivers can neither be modified nor improved nor adapted for better Linux compatibility by anyone outside NVIDIA. Releasing a new architecture every year without deeper vertical integration of software development will result in a lack of support from the industry ecosystem, because the industry tends to adopt only products that keep growing in relevance. Take SQL, the database language, as an example.

Every few years, a complete new ISO standard for SQL is published for vendors to implement, but because a full set of standard documents costs around 2,500 USD, adoption of the newest revisions is rare, even in enterprise software. Open-source database projects tend to implement whatever is available for free or at low cost, so they naturally do not implement the latest standard.
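A rough, hedged illustration of that adoption lag (the table, data and query below are invented for the demo; SQLite is used only because it ships with Python's standard library and, in recent builds, includes its JSON functions by default):

import sqlite3

# Minimal sketch of how slowly newer ISO SQL features reach free
# implementations. All data here is made up for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, payload TEXT)")
conn.execute("INSERT INTO readings VALUES (1, '{\"temp\": 21.5}')")

# JSON_TABLE was standardized in SQL:2016 (SQL/JSON), but SQLite still
# does not implement it, so the standard-conforming query is rejected.
standard_query = """
    SELECT jt.temp
    FROM readings,
         JSON_TABLE(readings.payload, '$'
             COLUMNS (temp REAL PATH '$.temp')) AS jt
"""
try:
    conn.execute(standard_query)
except sqlite3.OperationalError as exc:
    print("SQL:2016 JSON_TABLE rejected:", exc)

# The same value is only reachable through the vendor-specific
# json_extract() function that SQLite offers instead.
row = conn.execute(
    "SELECT json_extract(payload, '$.temp') FROM readings"
).fetchone()
print("vendor-specific workaround:", row[0])

The point is not SQLite in particular: implementing each new standard revision costs money and effort, so free and low-cost implementations track the standard late or not at all.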

The same dynamic will play out with a new GPU architecture release every year, and NVIDIA, assuming its management is sane, will scrap that strategy after the first three or four cycles.

Planning to challenge Intel and other CPU makers

The most common processors, the general-purpose CPUs built into every computing device, come either from Intel, from an Intel-compatible architecture, or from ARM's RISC architecture. Every successful processor in the market's history either had to be made for a closed ecosystem of its own devices (like the Z80-style chip in Nintendo's Game Boy, practically any home computer of the '80s, or any of Apple's devices) or it had to be compatible with Intel's architecture and instruction set. Every other processor sold for general-purpose use has failed to penetrate the market. And since Intel processors, as well as Intel-compatible processors from manufacturers like AMD, are widely available, NVIDIA faces the challenge of generating unique selling points for its own CPU ambitions. What could those be?

Lower prices?

If NVIDIA believes it can build the coming generations of CPUs much more cheaply, it would be offering discounted hardware under the NVIDIA label without delivering what everyone associates with the brand: cutting-edge graphics computation or AI. The thin margin would also have to survive the stress test of Intel's ever-changing architecture leadership, giving Intel the opportunity to disrupt NVIDIA's development cycle: NVIDIA would face the same integration challenges that Linux developers face today with NVIDIA's driver sets, with the additional risk that not every newly released architecture will sell.

Additional features, AI?

AI calculations are power-hungry, and running them on a CPU will add to the power consumption, and thereby the energy cost, as much as overclocking already does. Heat development (and thereby faster aging) is a problem across the CPU industry, and adequate, durable solutions are rare. AI software preferably runs in the cloud, spread across multiple GPU cards cooled with heat pipes, chilled water or nitrogen, plus general air conditioning for the room. Describing all this already makes you picture a large datacenter with its own power plant, and if you can picture that, you know there is no consumer application for this kind of chip. AI-enhanced CPUs will eventually serve only a niche market: datacenters already have matured, cheaper GPUs to turn to (which, by the way, would now be released annually and raise costs), and power grids worldwide are not ready to deliver this kind of energy for a broad consumer rollout.
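To put rough numbers on that energy argument (every figure below is an illustrative assumption, not a measurement or a vendor specification), a back-of-the-envelope sketch:

# Back-of-the-envelope sketch of the added energy cost of running AI
# workloads on a consumer CPU. Every number is an assumption chosen for
# illustration, not a measured value or an NVIDIA specification.
BASE_CPU_WATTS = 65      # assumed average draw of a desktop CPU
AI_EXTRA_WATTS = 60      # assumed extra draw of a busy on-chip AI block
HOURS_PER_DAY = 8        # assumed daily usage of a consumer machine
PRICE_PER_KWH = 0.30     # assumed electricity price in USD

def annual_cost(watts: float) -> float:
    """Annual electricity cost in USD for a constant power draw."""
    kwh_per_year = watts / 1000 * HOURS_PER_DAY * 365
    return kwh_per_year * PRICE_PER_KWH

baseline = annual_cost(BASE_CPU_WATTS)
with_ai = annual_cost(BASE_CPU_WATTS + AI_EXTRA_WATTS)
print(f"baseline CPU:      {baseline:7.2f} USD per year")
print(f"with AI workload:  {with_ai:7.2f} USD per year")
print(f"added energy cost: {with_ai - baseline:7.2f} USD per year")

Under these assumed numbers the AI block roughly doubles the CPU's electricity bill; scale the same arithmetic up to a datacenter and the grid and cooling constraints described above follow directly.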

If any of that is true, what still speaks for NVIDIA?

NVIDIA will remain the most important infrastructure provider for datacenters and, by extension, cloud providers. Neither Microsoft nor Google nor Amazon will be able to turn to other manufacturers unless they succeed in running ARM-driven datacenters, which would make NVIDIA's CPU strategy even harder. Eventually, datacenters will become more heterogeneous internally to serve different applications cost-effectively, moving from a homogeneous pool of nearly identical hardware to tiered systems with different classes of chips for different workloads. Many datacenters already either specialize in a certain kind of application or support a general-purpose approach by mixing their hardware, and this trend will continue to create more variety and diversification with the coming hardware generations. NVIDIA, as a key infrastructure provider with a heavy footprint in the AI field, will remain able to supply and influence the business of almost every other cloud-providing technology company, if only it reflects on its virtues, reins in its side ventures and leans even more on its current strengths.
