Why a 24-Year-Old Chipmaker Is One of Tech’s Hot Prospects


“They are just cruising,” Hans Mosesmann, an analyst at Rosenblatt Securities, said of Nvidia, which he has tracked since it went public in 1999.

Driving the surge is Jen-Hsun Huang, an Nvidia founder and the company’s chief executive, whose strategic instincts, demanding personality and dark clothes prompt comparisons to Steve Jobs.

Mr. Huang — who, like Mr. Jobs at Apple, pushed for a striking headquarters building, which Nvidia will soon occupy — made a pivotal gamble more than 10 years ago on a series of modifications and software developments so that GPUs could handle chores beyond drawing images on a computer screen.

“The cost to the company was incredible,” said Mr. Huang, 54, who estimated that Nvidia had spent $500 million a year on the effort, known broadly as CUDA (for compute unified device architecture), when the company’s total revenue was around $3 billion. Nvidia puts its total spending on turning GPUs into more general-purpose computing tools at nearly $10 billion since CUDA was introduced.

Mr. Huang bet on CUDA as the computing landscape was undergoing broad changes. Intel rose to dominance in large part because of improvements in computing speed that accompanied what is known as Moore’s Law: the observation that, through most of the industry’s history, manufacturers packed twice as many transistors onto chips roughly every two years. Those improvements in speed have now slowed.
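As a rough formalization (an illustration, not a figure from the article), Moore's observation is often written as a doubling law: starting from a baseline transistor count N_0, the count after t years is approximately

```latex
% Illustrative doubling law for Moore's observation; the two-year doubling
% period and the baseline count N_0 are assumptions for this sketch.
N(t) \approx N_0 \cdot 2^{t/2}
```

so a decade of steady doubling multiplies the transistor budget by roughly 32.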

[Photo: Nvidia’s chief executive, Jen-Hsun Huang, made a pivotal bet more than 10 years ago on a series of modifications and software developments to the company’s graphics processing units, or GPUs. Credit: Ethan Miller/Getty Images]

The slowdown led designers to start dreaming up more specialized chips that could work alongside Intel processors and wring more benefits from the miniaturization of chip circuitry. Nvidia, which repurposed existing chips instead of starting from scratch, had a big head start. Using its chips and software it developed as part of the CUDA effort, the company gradually created a technology platform that became popular with many programmers and companies.

“They really were well led,” said John L. Hennessy, a computer scientist who stepped down as Stanford University’s president last year.

Now, Nvidia chips are pushing into new corporate applications. The German business software giant SAP, for example, is promoting an artificial-intelligence technique called deep learning, using Nvidia GPUs for tasks like accelerating accounts-payable processing and matching résumés to job openings.

SAP has also demonstrated Nvidia-powered software to spot company logos in broadcasts of sports like basketball or soccer, so advertisers can learn about their brands’ exposure during games and take steps to try to improve it.

“That could not be done before,” said Juergen Mueller, SAP’s chief innovation officer.

Such applications go far beyond the original ambitions of Mr. Huang, who was born in Taiwan and studied electrical engineering at Oregon State University and Stanford before taking jobs at Silicon Valley chipmakers. He started Nvidia with Chris Malachowsky and Curtis Priem in 1993, setting out initially to help PCs offer visual effects to rival those of dedicated video game consoles.

The company’s original product was a dud, Mr. Malachowsky said, and the graphics market attracted a mob of rivals.

But Nvidia retooled its products and strategy and gradually separated itself from the competition to become the clear leader in the GPU-accelerator cards used in gaming PCs.

GPUs generate triangles that form the wireframe structures of simulated objects, then apply colors to the pixels on a display screen. Doing that means executing many simple instructions in parallel, which is why graphics chips evolved to contain many tiny processors. A new GPU that Nvidia announced in May, called Volta, has more than 5,000 such processors; a new high-end Intel server chip, by contrast, has just 28 larger, general-purpose processor cores.
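To make that shape of work concrete, here is a minimal CUDA sketch (illustrative only, not Nvidia’s production code) in which each of roughly a million GPU threads handles a single pixel:

```cuda
#include <cuda_runtime.h>

// Illustrative only: every thread applies the same simple operation to its
// own pixel, the many-small-tasks pattern that GPUs evolved to handle.
// The kernel and buffer names here are assumptions, not real Nvidia code.
__global__ void shade_pixels(unsigned char *pixels, int n, unsigned char color)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's pixel
    if (i < n)
        pixels[i] = color;                          // one tiny task per thread
}

int main(void)
{
    const int n = 1 << 20;                  // about a million pixels
    unsigned char *pixels;
    cudaMalloc(&pixels, n);

    // Launch enough 256-thread blocks to cover every pixel at once.
    shade_pixels<<<(n + 255) / 256, 256>>>(pixels, n, 0xFF);
    cudaDeviceSynchronize();

    cudaFree(pixels);
    return 0;
}
```

Every tiny processor runs the same program; only the thread index differs, which is exactly the workload a screen full of pixels presents.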

Nvidia began its CUDA push in 2004 after hiring Ian Buck, a Stanford doctoral student and company intern who had worked on a programming challenge that involved making it easier to harness a GPU’s many calculating engines. Nvidia soon made changes to its chips and developed software aids, including support for a standard programming language rather than the arcane tools used to issue commands to graphics chips.
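In practice, that meant ordinary C code, with a few added keywords, could drive the GPU’s calculating engines directly instead of being recast as graphics commands. A minimal sketch using SAXPY (y = a * x + y), a standard first example of general-purpose GPU code; this is an illustration, not code from the article:

```cuda
#include <cuda_runtime.h>

// SAXPY (y = a * x + y). The kernel body is plain C arithmetic; only the
// __global__ qualifier and the thread-index bookkeeping are CUDA's
// additions to the standard language. Illustrative sketch only.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));   // (initialization of the input
    cudaMalloc(&y, n * sizeof(float));   //  data omitted for brevity)

    // One thread per element, grouped into 256-thread blocks.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```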

The company built CUDA into consumer GPUs as well as high-end products. That decision was critical, Mr. Buck said, because it meant researchers and students who owned laptops or desktop PCs for gaming could tinker with the software in campus labs and dorm rooms. Nvidia also persuaded many universities to offer courses in its new programming techniques.

[Photo: Nvidia’s new headquarters in Santa Clara during construction last year. The company’s stock-market value has swelled more than sevenfold in the past two years. Credit: Ramin Rahimian for The New York Times]

Programmers gradually adopted GPUs for applications used in, among other things, climate modeling and oil and gas discovery. A new phase began in 2012 after Canadian researchers began to apply CUDA and GPUs to unusually large neural networks, the many-layered software required for deep learning.

Those systems are trained to perform tricks like spotting a face by exposure to millions of images instead of through definitions established by programmers. Before the emergence of GPUs, Mr. Buck said, training such a system might take an entire semester.

Aided by the new technology, researchers can now complete the process in weeks, days or even hours.
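Much of that speedup comes from spreading the dense matrix arithmetic at the heart of neural-network training across thousands of GPU threads at once. A naive sketch of the idea (real training code relies on tuned Nvidia libraries such as cuBLAS and cuDNN; this kernel is an illustration, not how those libraries are written):

```cuda
#include <cuda_runtime.h>

// Naive illustration: one GPU thread per output element of C = A * B, the
// dense matrix product that dominates neural-network training. This kernel
// only shows how the work spreads across thousands of threads.
__global__ void matmul(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

int main(void)
{
    const int n = 1024;                   // 1024 x 1024 matrices
    const size_t bytes = n * n * sizeof(float);
    float *A, *B, *C;
    cudaMalloc(&A, bytes);
    cudaMalloc(&B, bytes);
    cudaMalloc(&C, bytes);                // (initialization omitted for brevity)

    dim3 block(16, 16);                   // 256 threads per block
    dim3 grid((n + 15) / 16, (n + 15) / 16);
    matmul<<<grid, block>>>(A, B, C, n);
    cudaDeviceSynchronize();

    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
```

Each thread computes one entry of the output matrix independently, so a single multiplication of large matrices keeps thousands of the GPU’s processors busy at the same time.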

“I can’t imagine how we’d do it without using GPUs,” said Silvio Savarese, an associate professor at Stanford who directs the SAIL-Toyota Center for A.I. Research at the university.

Competitors argue that the A.I. battle among chipmakers has barely begun.

Intel, whose standard chips are widely used for A.I. tasks, has also spent heavily to buy Altera, a maker of programmable chips; start-ups specializing in deep learning and machine vision; and the Israeli car technology supplier Mobileye.

Google recently unveiled the second version of an internally developed A.I. chip that helped beat the world’s best player of the game Go. The search giant claims the chip has significant advantages over GPUs in some applications. Start-ups like Wave Computing make similar claims.

But Nvidia will not be easy to dislodge. For one thing, the company can afford to spend more than most of its A.I. rivals on chips — Mr. Huang estimated Nvidia had plowed an industry record $3 billion into Volta — because of the steady flow of revenue from the still-growing gaming market.

Nvidia said more than 500,000 developers are now using its GPUs. And the company expects other chipmakers to help expand its fan base once it freely distributes an open-source chip design they can use for low-end deep learning applications, in devices like light bulbs or cameras, that it does not plan to target itself.

A.I., Mr. Huang said, “will affect every company in the world. We won’t address all of it.”
