
Fujitsu develops technology to accelerate deep learning across multiple GPUs

Fujitsu Laboratories has announced software technology that applies supercomputer parallelisation techniques to multiple GPUs to enable high-speed deep learning.

A conventional way to accelerate deep learning is to network multiple GPU-equipped computers and run them in parallel. According to the company, however, the gains from parallelisation become progressively harder to obtain with this method, because the time spent sharing data between computers grows once more than ten machines are used at the same time.
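The pattern described above can be illustrated with a simple, hypothetical sketch (not Fujitsu's actual implementation): in synchronous data-parallel training, each GPU computes gradients on its own slice of the mini-batch, and those gradients must then be averaged across all workers before the shared model is updated. It is this averaging step that requires the inter-machine data sharing whose cost grows with the number of machines.

    import numpy as np

    # Minimal sketch of synchronous data-parallel training (illustrative only).
    # Each "worker" stands in for one GPU computing a gradient on its own shard
    # of the mini-batch; the averaging step is the communication phase whose
    # cost grows with the number of workers in the conventional approach.

    rng = np.random.default_rng(0)
    n_workers = 16          # e.g. one worker per GPU
    n_features = 8
    weights = np.zeros(n_features)

    def local_gradient(w, X, y):
        # Least-squares gradient on one worker's shard of data.
        return 2.0 * X.T @ (X @ w - y) / len(y)

    for step in range(100):
        # Each worker holds a different shard of the current mini-batch.
        shards = [(rng.normal(size=(32, n_features)), rng.normal(size=32))
                  for _ in range(n_workers)]

        # Compute phase: purely local, scales well as workers are added.
        grads = [local_gradient(weights, X, y) for X, y in shards]

        # Communication phase: every worker needs the same averaged gradient
        # before updating its copy of the weights.
        avg_grad = np.mean(grads, axis=0)

        weights -= 0.01 * avg_grad

In practice this averaging is done with optimised collective-communication routines rather than a Python loop; Fujitsu's contribution, as described, targets this data-sharing step.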

Fujitsu has developed new parallelisation technology that shares data between machines efficiently, and applied it to Caffe, a widely used open source deep learning framework. To confirm its effectiveness across a range of deep learning workloads, the company evaluated the technology on AlexNet, where learning speeds with 16 and 64 GPUs were confirmed to be 14.7 and 27 times faster, respectively, than with a single GPU. The company claims these are the world's fastest processing speeds, representing an improvement in learning speeds of 46 per cent for 16 GPUs and 71 per cent for 64 GPUs.
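Taken at face value, the quoted figures also show how scaling efficiency falls off as GPUs are added. The small calculation below is only an interpretation of the numbers reported above, not additional data from Fujitsu.

    # Scaling efficiency implied by the quoted figures: speedup divided by GPU count.
    for gpus, speedup in [(16, 14.7), (64, 27.0)]:
        print(f"{gpus} GPUs: {speedup}x speedup -> {speedup / gpus:.0%} efficiency")
    # Prints roughly 92% for 16 GPUs and 42% for 64 GPUs, consistent with
    # communication overhead eroding the gains as more GPUs are used.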

With this technology, machine learning that would have taken about a month on one computer can now be processed in about a day by running it on 64 GPUs in parallel, the company stated.

The company is still working to improve the technology in pursuit of greater learning speed, and aims to commercialise it later in the year.
