Nvidia’s GPUs Highly Suitable As Coprocessors
Nvidia is the leading manufacturer of GPUs (Graphics Processing Units), processors with many cores optimized for the compute-intensive work of rendering graphics. These capabilities make GPUs well suited for use as coprocessors in High Performance Computing (HPC) environments, a role that has gained traction over the last decade because the computations in the deep-learning algorithms used in HPC resemble those in computer graphics: both reduce largely to matrix and vector operations, which map naturally onto a GPU’s parallel architecture of hundreds of cores. For example, Nvidia’s Tegra X1 GPU has 256 cores programmed through its CUDA parallel computing platform, which partitions and load-balances workloads across them. The Tegra X1’s CPU, in contrast, has four 64-bit ARM cores, while the earlier Tegra K1 GPU (designed for automotive applications) has 192 cores. At the top of the range is the Tesla P100, with 3,584 CUDA cores and roughly 10 teraflops of single-precision performance.
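Why matrix and vector operations suit a many-core chip can be sketched in a few lines of plain Python (a hypothetical illustration, not Nvidia code): each output element of a matrix–vector product is independent of the others, so on a GPU each could be computed by a separate core.

```python
import numpy as np

# A matrix-vector product: the core operation of both 3D graphics
# (vertex transforms) and deep learning (fully connected layers).
A = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])

# Each output element is an independent dot product, so on a GPU
# every one of these iterations could run on its own core at once.
y = np.array([A[i] @ x for i in range(A.shape[0])])

# A vectorized library call (or a GPU kernel) produces the same result.
assert np.allclose(y, A @ x)
print(y)  # [ 20.  60. 100.]
```

The loop body never reads another iteration’s output, which is exactly the independence a GPU exploits when it fans the work out across hundreds of cores.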
Intel’s Xeon Phi Coprocessor And Its Recent Acquisitions Have Expanded Its Reach In The Coprocessor Market
In contrast to a GPU, whose thousands of cores allow parallel computation, a single-core processor can perform only serial computation, processing one element at a time. Intel’s multi-core processors allow some parallelism, but they cannot match the throughput of a GPU on highly parallel workloads. Intel’s answer is the Xeon Phi family, launched in 2012: massively parallel many-core processors designed to serve as coprocessors while sharing a common code base and development tools with the host processor. Intel updated this coprocessor family earlier this year. Additionally, Intel’s 2015 acquisition of Altera gave it FPGA (Field Programmable Gate Array) technology, further expanding its ability to address future coprocessor needs.
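The serial-versus-parallel contrast above can be sketched as follows (a hypothetical illustration in Python): a single core walks the data one element at a time, while a many-core chip partitions the same loop into chunks and load-balances them across workers.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000))

# Serial: a single core processes one element at a time.
serial = [x * x for x in data]

# Parallel: partition the work into chunks and hand each chunk to a
# worker, the way a multicore coprocessor load-balances a loop.
# (This illustrates the partitioning only; CPython threads share a core.)
def square_chunk(chunk):
    return [x * x for x in chunk]

chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = [y for part in pool.map(square_chunk, chunks) for y in part]

assert parallel == serial
```

The fewer, heavier cores of a CPU favor a handful of large chunks, while a GPU’s hundreds of lightweight cores favor one tiny chunk (often one element) per core.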
Furthermore, Intel recently announced the acquisition of the deep-learning startup Nervana Systems, which, according to Intel’s Diane Bryant, has a fully optimized software and hardware stack for deep learning and advanced expertise in accelerating deep-learning algorithms. The acquisition can help Intel expand its capabilities in artificial intelligence (AI) and compete directly with Nvidia. Sources report that Nervana has gained traction against Nvidia with its CUDA-compatible Neon software; the company is also developing a deep-learning accelerator (i.e., a coprocessor) that is expected next year.
Read more here: www.forbes.com/sites/greatspeculations/2016/12/23/intel-to-duke-it-out-with-nvidia-in-the-coprocessor-market/#5fef1d2b6f7a