CIOTechOutlook >> Magazine >> February - 2016 issue

From Gaming to Artificial Intelligence: NVIDIA Takes on the Future

2015 was the year that worldwide personal computer shipments decreased by 7.7 percent, owing to a number of factors including the growth of mobile devices. But for NVIDIA (NASDAQ: NVDA), the leader in visual computing and inventor of the graphics processing unit (GPU), the latest financial results revealed a 15 percent increase in revenue. Perhaps the company’s ability to resist the downturn blighting its peers can be traced directly to how Jen-Hsun Huang, CEO of the visual computing giant, responds when asked about the individuals who inspire his journey. “There is no necessity to look too far for a role model. Most of my teammates that I work with are able to execute amazing things that I can’t imagine working on myself. They are the real inspiration behind my success.”

Back in 1993, a team of three engineers, Jen-Hsun Huang, Chris Malachowsky and Curtis Priem, joined forces to counter the data-intensive problem of visual computing. Increasing fidelity in output displays and the need to render visualizations in real time to meet consumer expectations were the key pain points the trio focused on. After scores of permutations and combinations, they realized that, unless the fundamental architecture of the compute infrastructure changed, visual computing could not realize its true potential.

NVIDIA took a major step toward that in 1999, inventing the first GPU, the GeForce 256, which was aimed at the PC gaming market. The company defined it as “a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display.” One of the basic differences between GPUs and the prevailing central processing unit (CPU) computing architecture lies in the way they handle information. While CPUs process data serially, one task at a time, the GPU does so in parallel, processing multiple tasks simultaneously.
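The serial-versus-parallel distinction can be sketched in plain Python. The snippet below is purely illustrative (it is not NVIDIA's API, and the `shade` operation is a made-up per-pixel transform): a CPU-style loop visits one element at a time, while a GPU-style approach maps the same operation over all elements at once.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-pixel operation: brighten a grayscale value, clamped at 255.
def shade(pixel: int) -> int:
    return min(pixel + 40, 255)

pixels = [0, 60, 120, 180, 240]

# CPU-style: process one pixel at a time, in order.
serial = [shade(p) for p in pixels]

# GPU-style (conceptually): the same operation applied to all pixels at once.
# A real GPU runs thousands of such threads in hardware; a thread pool only
# mimics the programming model.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(shade, pixels))
```

Both produce identical results; the difference is that the parallel form scales with the number of processing units rather than with the length of the data.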

Originally conceived as a PC graphics chip, the GPU, with its parallel nature, was to prove key in transforming the company far beyond its gaming roots. Today, GPUs find applications in numerous verticals, from movie making and architecture to medical research and self-driving cars. In October 2014, Wired magazine cited GPUs as one of the “Three breakthroughs that have finally unleashed AI on the world.”

Exploring New Markets

NVIDIA, under the resolute leadership of Jen-Hsun Huang, has come a long way since the GeForce 256 was first revealed. The organization today encompasses a wide range of verticals in its portfolio, touching both the consumer and enterprise spaces. NVIDIA remains deeply engaged with the consumer gaming market, with its GeForce brand delivering the world’s best PC gaming experience. In enterprise, the company dominates the professional graphics market and has also established a strong presence in accelerated computing for science and industry, graphics virtualization and the high-growth field of machine learning. It is also leading the industry in laying the groundwork for self-driving cars.

It may seem counterintuitive that a technology originally designed for gaming has, for example, played a key role in helping researchers determine the precise chemical structure of the HIV capsid, a discovery that marks a significant step toward effective treatment. But across these diverse applications, the GPU’s powerful parallel nature has been harnessed to meet demanding computing needs and accelerate innovation.

In fact, NVIDIA has reached a strategic inflection point. Once simply a PC graphics company, the organization is now building on its PC OEM strongholds and gaming markets to become a specialized platform company that engages with sectors where visual computing is essential and deeply valued.

The birth of NVIDIA GRID

Cloud computing is one of the most significant technology transformations sweeping businesses worldwide. NVIDIA’s contribution has been to give organizations that rely on graphics-rich applications the ability to virtualize their IT infrastructure and reap the benefits that virtualization brings. Achieving this functionality was no mean feat. The company’s engineers faced nearly intractable challenges, from making data transfer fast enough to remain interactive to racking thousands of GPUs in a single datacenter while keeping energy consumption efficient.

The solution, christened NVIDIA GRID, is now in its second generation. By virtualizing the power of the GPU, GRID helps enterprises overcome some of today’s toughest business challenges, including IP security, employee mobility and flexible working.

Computing at the cutting edge

True to form, NVIDIA is at the forefront of three trends that promise to revolutionize consumer tech, enterprise infrastructure and scientific research: virtual reality, machine learning and self-driving cars.

Virtual reality is hardly a new concept but, thanks to the power of the GPU, its time has finally come. The opportunities to create immersive entertainment experiences are obvious, but there are also intriguing professional applications for this technology. Imagine an architect taking her clients on a virtual tour of their new stadium before a single brick is laid, a real estate agent who can show house hunters around properties anywhere in the world, or a customer experiencing every detail of the new car he’s just specified before even leaving the dealership. These applications are already a reality and it won’t be long before a host of other industries begin to explore the potential of this game-changing technology.

Machine learning is the force behind many of today’s most exciting applications. This discipline allows computers to perform tasks for which they have not been specifically programmed. Instead, they are trained to handle given situations, in much the same way humans learn. By feeding a computer large amounts of information, which it runs through sophisticated deep learning algorithms, the machine builds a ‘neural network’ that allows it to solve problems based on its experience rather than a fixed set of commands. The results of this training are then deployed, via huge data centers, to put the solution into practice at scale.
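The train-by-example idea described above can be shown in miniature. The following sketch is illustrative only (pure Python, a single artificial neuron, nothing to do with NVIDIA's tooling): instead of being programmed with the rule for logical OR, the neuron is shown examples and nudges its weights until its own experience reproduces the rule.

```python
import math
import random

# Toy training data: inputs and target outputs for the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # weights, initialized randomly
b = 0.0                                        # bias

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))              # sigmoid activation

# Training loop: repeatedly nudge the weights to shrink the prediction error.
lr = 1.0
for _ in range(1000):
    for x, target in data:
        out = predict(x)
        grad = (target - out) * out * (1 - out)  # error times sigmoid slope
        w[0] += lr * grad * x[0]
        w[1] += lr * grad * x[1]
        b += lr * grad

# The trained neuron now answers from experience, not from a coded rule.
learned = [round(predict(x)) for x, _ in data]
```

Real deep learning stacks millions of such neurons into many layers, which is exactly the kind of repetitive arithmetic that maps well onto the GPU's parallel architecture.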

Although the discipline was first introduced in the 1950s, machine learning’s massive computational requirements have long hampered its widespread use. Now, thanks to the power of the GPU, the computationally intensive processes of training and deploying machine learning systems can be performed fast enough to unlock the field’s vast potential. Diverse applications, including visual search, gaze detection, cancer detection and autonomous vehicles, are being turbocharged by machine learning on the GPU.

The machine learning phenomenon – from AI to driverless cars
