Fujitsu improves GPU memory efficiency

Fujitsu Australia

Thursday, 22 September, 2016

Fujitsu Laboratories has announced the development of technology that streamlines the use of GPU internal memory, supporting the larger neural networks that improve machine learning accuracy. The technology was presented at IEEE Machine Learning for Signal Processing 2016, an international conference held from 13–16 September.

In order to make use of a GPU’s high-speed calculation ability, the data used in a series of calculations needs to be stored in the GPU’s internal memory. This creates an issue: the scale of the neural network that can be built is limited by the GPU’s memory capacity. Fujitsu Laboratories has now developed technology that streamlines memory efficiency, expanding the scale of the neural network that can be computed on one GPU without resorting to parallelisation methods that reduce learning speed.
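To see why memory capacity caps network scale, a rough back-of-the-envelope estimate of training memory is useful. The sketch below is illustrative only and is not from the article; it assumes float32 values and that weights, activations, and their gradients all reside in GPU memory throughout training, with made-up layer sizes.

```python
import numpy as np

# Hypothetical estimate of GPU memory needed to train a network; not from
# the article. Assumes float32 (4 bytes per value) and that weights,
# activations and their gradients all stay resident in GPU memory.
def training_memory_gib(weight_shapes, activation_shapes, batch_size,
                        bytes_per_value=4):
    weights = sum(int(np.prod(s)) for s in weight_shapes)
    activations = batch_size * sum(int(np.prod(s)) for s in activation_shapes)
    # Factor of 2: each stored value also has a gradient of the same size.
    total_values = 2 * (weights + activations)
    return total_values * bytes_per_value / 2**30

# Example with made-up layer sizes: three large fully connected layers.
w_shapes = [(4096, 4096)] * 3
a_shapes = [(4096,)] * 4
print(f"{training_memory_gib(w_shapes, a_shapes, batch_size=256):.2f} GiB")
```

Doubling the network's width or depth in such an estimate quickly exhausts the few gigabytes available on a typical 2016-era GPU, which is the limit the new technology works around.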

The technology reduces memory consumption by enabling the re-use of memory resources. It exploits the fact that, within each layer, two backpropagation calculations can be executed independently of each other: generating the intermediate error data from the weight data, and generating the weight error data from the intermediate data. When learning begins, the structure of every layer of the neural network is analysed and the order of calculations is changed so that memory space in which larger data has been allocated can be re-used.
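The following is a minimal sketch of that idea, not Fujitsu's actual implementation (which the article does not publish): in a fully connected layer's backward pass, the weight gradient and the error propagated to the previous layer are both computed from the incoming error independently of each other, so the incoming error's buffer can be recycled as soon as both results exist. The BufferPool class and all shapes here are illustrative assumptions.

```python
import numpy as np

class BufferPool:
    """Recycles previously freed arrays when a matching shape is requested."""
    def __init__(self):
        self.free = []  # spare ndarrays available for re-use

    def alloc(self, shape):
        for i, buf in enumerate(self.free):
            if buf.shape == shape:
                return self.free.pop(i)  # re-use instead of allocating
        return np.empty(shape, dtype=np.float32)

    def release(self, buf):
        self.free.append(buf)

def backward(weights, activations, delta_out, pool):
    """Backward pass over fully connected layers with buffer re-use.

    weights[i]:      (n_in, n_out) weight matrix of layer i
    activations[i]:  (batch, n_in) input to layer i, saved in the forward pass
    delta_out:       (batch, n_out_last) error at the network output
    All arrays are assumed to be float32.
    """
    grads = []
    delta = delta_out
    for W, a in zip(reversed(weights), reversed(activations)):
        # 1) Weight-error data from stored activations and the incoming error.
        dW = a.T @ delta
        grads.append(dW)
        # 2) Intermediate error for the previous layer, written into a
        #    recycled buffer where one of matching shape is available.
        delta_prev = pool.alloc((delta.shape[0], W.shape[0]))
        np.matmul(delta, W.T, out=delta_prev)
        # Both results exist, so the incoming error buffer can be recycled.
        pool.release(delta)
        delta = delta_prev
    grads.reverse()
    return grads

# Example use with arbitrary made-up shapes (all float32):
pool = BufferPool()
Ws = [np.random.randn(64, 32).astype(np.float32),
      np.random.randn(32, 10).astype(np.float32)]
acts = [np.random.randn(128, 64).astype(np.float32),
        np.random.randn(128, 32).astype(np.float32)]
grads = backward(Ws, acts, np.random.randn(128, 10).astype(np.float32), pool)
```

Because the two per-layer calculations are independent, their order can be chosen at the start of learning so that the largest buffers are freed and recycled as early as possible, which is the reordering the article describes.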

Fujitsu Laboratories implemented the technology in the Caffe open source deep learning framework and measured GPU internal memory usage. In evaluations on AlexNet and VGGNet, image-recognition neural networks widely used in research, it reduced memory usage by more than 40% compared with the unmodified framework, allowing the scale of the neural network trained on each GPU to be roughly doubled. This enables high-speed learning that uses the full calculation capability of a GPU even with large-scale neural networks that require complicated processing, accelerating the development of more accurate models.
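The article does not say how the memory usage was measured. One common, generic way to observe a GPU's memory consumption from outside the training process is to poll nvidia-smi, as in this sketch; it is not claimed to be the method used in Fujitsu's evaluation.

```python
import subprocess
import time

def gpu_memory_used_mib(device=0):
    """Report current GPU memory usage in MiB by querying nvidia-smi."""
    out = subprocess.check_output([
        "nvidia-smi", f"--id={device}",
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

# Example: sample memory usage once per second while training runs elsewhere.
if __name__ == "__main__":
    for _ in range(5):
        print(f"GPU memory used: {gpu_memory_used_mib()} MiB")
        time.sleep(1)
```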

The company aims to commercialise the technology as part of Fujitsu Limited’s AI technology, Human Centric AI Zinrai, by March 2017. In addition, it plans to combine the technology with its previously announced technology for high-speed deep learning processing through GPU parallelisation.
