
Three Computing Approaches To AI And Deep Learning

By Hubert Yoshida posted 11-06-2020 20:17

  
Artificial intelligence can benefit the economy and improve society by helping the evolution of work and creating new opportunities. With deep learning and machine learning, AI can become smarter over time as it processes more data. And there will be no shortage of data, as the world's accumulated data is projected to reach 44 zettabytes this year, 2020.

The biggest problem facing AI and deep learning is the lack of processing power, and that lack costs time and money. Training a deep learning model can take weeks, and that doesn't count the weeks or months spent defining the problem and working through the iterative successes and failures of programming deep learning networks before they reach the required performance thresholds. Time is money: money for months or years of very expensive data scientists and data engineers, and money for weeks of continuous compute time on hundreds of GPUs. Renting 800 GPUs from Amazon's cloud computing service for just a week would cost around $120,000 at list price.
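As a rough sanity check on that figure, a quick back-of-the-envelope calculation lands in the same range. The per-GPU hourly rate below is an illustrative assumption (roughly what older GPU instances listed for around 2020), not a number from any price sheet:

```python
# Back-of-the-envelope check of the GPU rental figure above.
# The per-GPU hourly rate is an assumption for illustration only;
# actual cloud list prices vary by instance type and region.
gpus = 800
hours_per_week = 7 * 24            # 168 hours in a week
assumed_rate_per_gpu_hour = 0.90   # USD, hypothetical list price

weekly_cost = gpus * hours_per_week * assumed_rate_per_gpu_hour
print(f"Estimated weekly cost: ${weekly_cost:,.0f}")  # roughly $121,000
```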

This has opened up a race among three compute technologies to reduce the time and cost of AI and deep learning processing.



High Performance Computing
This is the main focus of what we see today: stick with the deep neural network architectures we know, and make them faster and easier to access. Intel, NVIDIA, and other chip makers are deploying GPUs and FPGAs in larger and larger data centers. Google and Microsoft are building proprietary chips to make their deep learning platforms a little faster or a little more attractive than others. Every major player in AI has made its AI platform open source to attract users. Since the volume of data is key, Hitachi is focusing on making data faster to access and easier to manage with tools like VSP, HCI, Pentaho, Lumada, and the recent OEM partnership with WekaIO.
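To give a feel for how those open-source platforms make large GPU fleets easier to use, here is a minimal sketch of data-parallel training. It assumes PyTorch as the framework; the model and the dummy batch are placeholders, not anything from a real workload:

```python
# Minimal sketch: data-parallel training across however many GPUs are visible.
# PyTorch is assumed here purely as an example of an open-source AI platform.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Replicates the model on each GPU and splits every batch among them.
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real training data.
x = torch.randn(64, 512, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"one training step on {device}, loss={loss.item():.3f}")
```

The point of the sketch is that the framework, not the user, handles splitting the batch across devices; scaling further across hundreds of GPUs follows the same pattern with distributed training.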


Neuromorphic Computing
Neuromorphic computing mimics the natural biological structures of our nervous system. It is an attempt to replicate the brain's ability to process information faster and more efficiently than conventional computers, an ability that comes from the architecture of our neural system. Hitachi has joined with Intel in developing neuromorphic computing to improve the scalability and flexibility of edge computing systems. (See my previous post.)


This year Intel announced a neuromorphic computing system, Pohoiki Springs. Unlike traditional CPUs, its memory and computing elements are intertwined rather than separate, which minimizes the distance data has to travel; in traditional computing architectures, data has to flow back and forth between memory and compute. With neuromorphic computing, it is possible to train machine-learning models using a fraction of the data it takes to train them on traditional hardware. Early examples show these systems can generalize about their environment, learning in one setting and applying it to another. They can remember and generalize, making them very quick learners, and ultimately make predictions that could be more accurate than those of traditional machine-learning models. They are also much more energy efficient, which opens a path to miniaturization.
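To give a flavor of the event-driven, brain-like style of computation behind these chips, here is a conceptual sketch of a single leaky integrate-and-fire (LIF) neuron, the basic building block of spiking systems. This is plain Python for illustration only; it is not Intel's Loihi programming interface, and the parameters are made up:

```python
# Conceptual sketch of a leaky integrate-and-fire (LIF) spiking neuron.
# Parameters are illustrative, not taken from any specific neuromorphic chip.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Integrate input over time; emit a spike (1) whenever the membrane
    potential crosses the threshold, then reset to the resting potential."""
    v = v_rest
    spikes = []
    for i in input_current:
        # Leaky integration: the potential decays toward rest
        # while accumulating the incoming current.
        v += dt / tau * (-(v - v_rest) + i)
        if v >= v_thresh:
            spikes.append(1)
            v = v_rest           # reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = rng.uniform(0.5, 2.0, size=200)   # noisy input drive
print("spike count:", lif_neuron(current).sum())
```

Information is carried in the timing of sparse spikes rather than in dense matrix multiplications, which is why neuromorphic hardware can be so energy efficient.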




Quantum Computing
Quantum computers are available today. D-Wave Systems has been selling quantum computers since 2015. IBM's quantum computer, IBM Q, is available as a cloud subscription service. Google and Microsoft are on track to commercially release their own quantum machines over the next two or three years, as are a whole host of independent companies and academic institutions.

The D-Wave quantum computer uses quantum techniques to solve combinatorial optimization problems using a statistical mechanics model that mimics the behavior of magnetic material. This special type of quantum computer is called a quantum annealing machine, as opposed to a gate-based quantum computer. Rather than using expensive quantum techniques, Hitachi developed a complementary metal-oxide semiconductor (CMOS) annealing machine that simulates an Ising model on semiconductor circuits. (See my earlier post on Hitachi's Ising model.) While a quantum computer may require a special data center with cooling to near absolute zero (-273°C), Hitachi's CMOS annealing machine can run in a rack in a regular data center.
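To make the Ising model concrete, here is a minimal software sketch that anneals a small random Ising problem. It is an ordinary simulated-annealing loop in Python, standing in for the hardware; it is not Hitachi's CMOS annealing machine or D-Wave's API, and the problem size and schedule are purely illustrative:

```python
# Minimal sketch: simulated annealing on a small random Ising model.
# Energy convention: E(s) = -sum_{i<j} J[i,j]*s[i]*s[j] - sum_i h[i]*s[i]
# A plain software stand-in for an annealing machine, for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 16                                          # number of spins (tiny, illustrative)
J = np.triu(rng.uniform(-1, 1, (n, n)), k=1)    # random couplings, upper triangle
h = rng.uniform(-0.5, 0.5, n)                   # random local magnetic fields

def energy(s):
    return -(s @ J @ s) - h @ s

s = rng.choice([-1, 1], size=n)                 # random initial spin configuration
temperature = 2.0
for step in range(5000):
    i = rng.integers(n)
    flipped = s.copy()
    flipped[i] *= -1
    delta = energy(flipped) - energy(s)
    # Accept downhill moves always; uphill moves with Boltzmann probability.
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        s = flipped
    temperature *= 0.999                        # slowly cool toward a low-energy state

print("final spins:", s)
print("final energy:", round(float(energy(s)), 3))
```

The annealing hardware explores this same energy landscape massively in parallel; the low-energy spin configuration it settles into encodes the answer to the optimization problem.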

While CMOS annealing is capable of high-speed combinatorial optimization computations of unprecedented size, the first challenge is reducing a customer's business problem to a model capable of being solved by annealing. Annealing requires expertise in how to represent real-world situations in terms of spins, magnetic fields, and couplings, something that customers find difficult to do on their own.
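As a small illustration of that mapping step, consider splitting the nodes of a graph into two groups so that as many edges as possible cross the split (the max-cut problem). Giving each node a spin and each edge a coupling of J = -1 turns it into an Ising energy minimization. The tiny graph and brute-force search below are purely illustrative, not a customer example:

```python
# Illustrative mapping of a tiny max-cut problem onto Ising spins.
# Each node gets a spin s in {-1, +1}; each edge (i, j) gets coupling J = -1,
# so minimizing E = -sum J[i,j]*s[i]*s[j] maximizes the number of cut edges.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a made-up 4-node graph
n = 4

def ising_energy(spins):
    # With J = -1 on every edge, the energy is +s[i]*s[j] summed over edges.
    return sum(spins[i] * spins[j] for i, j in edges)

# The problem is tiny, so brute force stands in for the annealing hardware.
best = min(product([-1, 1], repeat=n), key=ising_energy)
cut = [(i, j) for i, j in edges if best[i] != best[j]]

print("partition:", {node: ("A" if s == 1 else "B") for node, s in enumerate(best)})
print("edges cut:", len(cut), cut)
```

Real business problems, such as scheduling or portfolio optimization, follow the same pattern, but choosing the spins, fields, and couplings that faithfully represent the business constraints is the hard part.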

At our CSI (Center for Social Innovation) labs, Hitachi works through a cycle of steps to gain a deeper understanding of both the customer's business and CMOS annealing, and to build models for resolving the challenges the customer faces. Specifically, the four steps of this cycle are to meet with the customer to discuss their business issues, formalize those issues into a form that CMOS annealing can solve, perform optimization using actual data, and review the results with the customer to obtain feedback.

Summary

Hitachi is working on different approaches to address the lack of processing power for efficient, cost-effective AI and machine learning solutions. High Performance Computing, Neuromorphic Computing, and Quantum Computing are all strong candidates for reducing the time and cost of AI and machine learning.

#Hu'sPlace
#Blog