IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
On the Impact of Approximate Computation in an Analog DeSTIN Architecture


Bibliographic Details
Main Authors: Steven Young, Student Member, Junjie Lu, Jeremy Holleman, Itamar Arel, Senior Member
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Subjects:
DML
Online Access:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.384.4985
http://web.eecs.utk.edu/~itamar/Papers/TNNLS_Young_2013/
Description
Summary: Abstract—Deep Machine Learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power-efficiency advantages of multiple orders of magnitude relative to digital systems, albeit while performing non-ideal computations. In this paper we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN, a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impact of non-ideal computations is necessary to fully exploit the efficiency of analog circuits.
Index Terms—analog computation, analog circuits, deep machine learning, floating gates, feature extraction.
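The error sources the abstract refers to (gain mismatch, offset, and noise in analog computational elements) can be illustrated with a minimal simulation sketch. The code below is a hypothetical illustration, not the paper's actual model: the function names, error magnitudes, and the squared-distance computation (of the kind used in the clustering nodes of a deep architecture) are all assumptions made for the example.

```python
import random

def analog(value, gain_sigma=0.02, offset_sigma=0.01, noise_sigma=0.005):
    """Model one non-ideal analog computation of `value`:
    multiplicative gain error, additive offset, and random noise.
    The sigma values are illustrative assumptions, not measured figures."""
    gain = random.gauss(1.0, gain_sigma)     # device mismatch
    offset = random.gauss(0.0, offset_sigma) # systematic offset
    noise = random.gauss(0.0, noise_sigma)   # thermal/flicker noise
    return gain * value + offset + noise

def ideal_distance(x, c):
    """Exact squared Euclidean distance between a sample and a centroid."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def analog_distance(x, c):
    """Same distance, but each per-dimension term passes through
    a non-ideal analog element before accumulation."""
    return sum(analog((xi - ci) ** 2) for xi, ci in zip(x, c))

random.seed(0)  # make the sketch reproducible
x = [0.2, 0.8, 0.5]
c = [0.3, 0.6, 0.4]
print("ideal: ", ideal_distance(x, c))
print("analog:", analog_distance(x, c))
```

Running the two versions side by side shows that the analog result stays close to the ideal one; evaluating a full system under such perturbations is the kind of robustness study the abstract describes.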