The iPad and other tablets make up more than half of all personal computer sales, and their share of the market continues to grow. Consumer demand for new diversions drives innovation, and this is especially true for video games. In fact, gaming is bigger than the movie and music businesses combined.
Many games take place in virtual worlds that respond onscreen to player actions in real time. But traditional computing chips (CPUs) are not very good at this. CPUs excel at performing instructions one after another, in the order they are given. That works well for applications like word processors, database programs, and operating systems… not for huge, constantly changing interactive environments.
Graphics processing units help replicate the real world
Graphics processing units (GPUs) were invented to meet the needs of video games. Rather than executing one instruction at a time, GPUs perform massive parallel processing: thousands of simple calculations at once. That lets games simulate the rules of real-world physics and rapidly render the constantly changing images onscreen.
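To see why rendering parallelizes so well, consider that each pixel's color can usually be computed from that pixel's own coordinates alone. The toy Python sketch below (purely illustrative; the `shade` function is a made-up stand-in for a real per-pixel program) writes the loop a CPU would run one pixel at a time, and the point is that nothing in it forces that ordering:

```python
# Toy illustration (not real GPU code): shading a frame pixel by pixel.
# Each pixel's value depends only on its own coordinates, so a GPU can
# compute all of them simultaneously; a CPU-style loop does them in order.

def shade(x, y, width, height):
    """Hypothetical per-pixel program: a simple brightness gradient."""
    return (x / (width - 1) + y / (height - 1)) / 2  # value in [0.0, 1.0]

def render_serial(width, height):
    # CPU style: one pixel after another.
    return [[shade(x, y, width, height) for x in range(width)]
            for y in range(height)]

frame = render_serial(4, 4)
# Key property: shade(x, y, ...) has no dependence on any other pixel,
# so all 16 calls could run at once on parallel hardware.
```

Because no call to `shade` reads the result of another, a GPU can assign each pixel to its own tiny processing core and finish the whole frame in roughly the time a CPU spends on one pixel.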
The first chip marketed as a GPU was the GeForce 256. It came from a then-small chip company, Nvidia, in 1999.
It’s interesting that chip giant Intel passed on the chance to claim the GPU space. This seems to be a pretty standard practice in established industries. It’s ironic, though, because Intel’s initial success was due to IBM’s decision not to manufacture its own chips for the emerging PC market (though it had done so for its mainframes).
Until recently, Intel’s indifference to the GPU market wasn’t seen as a big problem. It may have even protected the company from anti-trust actions. But a few years ago, that began to change.
For decades, researchers in artificial intelligence (AI) wrote programs by hand to perform complicated functions such as image and voice recognition. To replicate those abilities in code, they first had to figure out how the brain allows us to see and hear. That is still not well understood, so progress was quite slow.
Some AI researchers worked for decades on a different approach. If a computer could use software that copied the way the human brain works, it could teach itself how to see and hear. With sufficient data plus layers of software able to recognize and analyze complex patterns, computers might solve the problems that humans couldn’t.
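Those "layers of software able to recognize and analyze complex patterns" are layers of artificial neurons: each one takes a weighted sum of its inputs and squashes the result, and stacking layers of them yields a network that can learn patterns. The Python sketch below is a toy forward pass, not anyone's production system; the weights are invented for illustration, whereas in a real system they would be learned from data rather than written by a programmer:

```python
import math

# A single artificial "neuron": a weighted sum of inputs passed through
# a squashing function. Layers of these, stacked, form a neural network.

def sigmoid(z):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def layer(inputs, weight_rows, biases):
    # Every neuron in a layer is independent of its neighbors,
    # so a GPU can evaluate all of them in parallel.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Toy network: 3 inputs -> 2 hidden neurons -> 1 output.
# These weights are arbitrary; real networks learn theirs from data.
hidden = layer([0.5, -1.0, 2.0],
               [[0.1, 0.4, -0.2], [-0.3, 0.2, 0.5]],
               [0.0, 0.1])
output = neuron(hidden, [1.2, -0.8], 0.0)
```

The workload is nothing but multiplications and additions repeated across every neuron in every layer, which is why the parallel arithmetic hardware built for games turned out to matter so much here.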
But running these programs on CPU-based systems was so slow and costly that researchers feared they could not test their theories until Moore's law delivered far more powerful and far cheaper chips.
The expanding uses for GPUs
Fortunately, the GPUs that run game systems turn out to be well suited to imitating brain-like neural activity: a neural network, like a game scene, is mostly the same simple arithmetic repeated across enormous amounts of data in parallel. By 2010, most AI researchers were starting to look seriously at GPUs. And in 2012, Alex Krizhevsky, a computer scientist at the University of Toronto, won an important computer image recognition competition using GPUs. His winning system wasn't a set of rules written by programmers. Rather, it was a deep learning neural network that had taught itself to recognize images.
That system was built with chips manufactured by Nvidia. Up to that point, Nvidia was known mainly for its high-end gaming GPUs. Now, the company dominates the field of AI research and development, having tailored GPUs for that market. Financially, this bodes very well for the company, as the potential uses for deep learning AIs keep growing.
Researchers and startups are aggressively pursuing ways to leverage self-teaching AIs for many industries and purposes. My guess is that everyone will have their own AI personal assistant capable of learning and simplifying all aspects of our lives, from tax planning to scheduling.
GPUs and life extension
The most important applications for this newly accelerated field will be in biotechnology. I say this for two reasons. One is that health and life are, by definition, our prime directives. The other is that health care is the biggest financial sector by a large margin.
There’s a nice symmetry to this. AI technology is progressing due to biomimetics, the science of mimicking biological systems. In this case, researchers are building models of the brain’s neurological structure. In the end, the greatest and most profitable AI ventures will be those that decode the secrets of our genomes that build those neurological systems.
This task is possible, in theory, but impractical without powerful self-learning computer systems. Already, some of the most important AI scientists are turning their tools toward the genome. Their goal is to discover the means to slow and even reverse the aging process. I know some of the people working toward this goal, so I’m optimistic.
The big question is, how long will it take? My opinion is that it will happen faster than almost anyone outside the field expects. And for that, we should thank the gamers who funded and pushed the GPU technologies that have driven this revolution.
Editor, Transformational Technology Alert
We welcome your comments. Please comply with our Community Rules.