
Deep learning has been a hot topic this year, with high-profile announcements from companies like IBM, Facebook, Google, Nvidia, Qualcomm, and Tesla. Now, Intel is tossing its own hat into the ring by buying the deep learning software and hardware developer Nervana Systems.

Nervana Systems has a cloud-based AI system that it sells to customers who want to deploy deep learning for their own specific use cases and businesses, as well as a proprietary, GPU-specific framework dubbed Neon. The company's third product hasn't actually launched yet, but it may be the principal reason why Intel bought this company in particular. The Nervana Engine is an ASIC that focuses on the strengths of what GPUs bring to the table, rather than the not-insignificant amounts of hardware that ultimately aren't useful for deep learning problems.


Nervana hasn't released much information on its upcoming ASIC, but we know the chip uses HBM.

The reason that GPUs are useful for these kinds of applications is that they contain enormous arrays of cores that can be employed to solve complex problems. Resources like ROPs, texture caches, and FP64 (or even FP32) support aren't particularly important for deep learning, however — that's why Pascal's 16-bit half-precision mode was something Nvidia talked up when it unveiled GP100 earlier this year. Nervana's existing Neon engine already runs on Nvidia hardware, but Intel's decision to buy the company will probably put an end to more permissive licensing arrangements.
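To see why half precision is attractive here, consider a quick sketch (using NumPy purely for illustration — this has nothing to do with Nervana's actual code): storing the same weight matrix at 16-bit precision halves its memory footprint versus FP32, and quarters it versus FP64, which translates directly into bandwidth and capacity savings on deep learning hardware.

```python
import numpy as np

# Illustrative example: the same 1024 x 1024 weight matrix stored at
# half (FP16), single (FP32), and double (FP64) precision. Deep learning
# workloads tolerate reduced precision, so FP16 support lets hardware
# move and store twice as many weights as FP32 for the same bandwidth.
weights = np.random.rand(1024, 1024)

for dtype in (np.float16, np.float32, np.float64):
    size_mb = weights.astype(dtype).nbytes / (1024 * 1024)
    print(f"{np.dtype(dtype).name}: {size_mb:.0f} MB")
```

Running this prints 2 MB for float16, 4 MB for float32, and 8 MB for float64 — the factor-of-two savings that makes dedicated half-precision hardware worthwhile.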

Why Intel wants in

Right now, Intel is stuck in a bit of a tight spot. The company's consumer revenues have declined alongside the PC market's downturn, but its data center and HPC markets remain quite healthy. Intel missed out on the entire mobile and tablet market, and already had to cancel its plans to create new business for itself in those spaces (a failure we chronicled in a two-part article earlier this year).

This goes beyond not wanting to miss an emerging market, however. Intel has been acquiring companies with product lines and markets that stretch beyond its own dominance of the data center, consumer, and high-performance computing markets. While products like Xeon Phi could theoretically be used for deep learning, Xeon Phi is designed to perform massive vector calculations, not the half-precision operations that a deep learning network uses. It also packs far fewer cores than an Nvidia Tesla or even an equivalent AMD card, though we'd caution against treating core counts as indicative of deep learning performance.

If deep learning is as central to the future of AI and computing as the industry has claimed, entering the market by acquiring a company with specialized ASIC hardware and proven designs is an excellent way for Intel to ensure that it remains relevant as computing continues to evolve. It could also be read as a tacit admission that Intel isn't necessarily sure how to push the evolution of microprocessors further than it already has.

I've talked before about how Intel isn't just dragging its feet on Moore's law — there are fundamental limits to silicon technology, and they aren't going away. Moves like this could be read to mean that even Intel recognizes that the era of huge advances in general-purpose compute performance is largely over. Machines will continue to draw less power and be slightly more efficient over time, but the last major leap for Intel's CPUs was Sandy Bridge over Nehalem; Haswell and Skylake were much more modest improvements.

Moving into markets like this gives Intel the opportunity to explore other types of compute architectures, not as replacements for x86, but as high-performance supplements to it.