Ultra Low Power Techniques for Machine Learning on the Edge
Abstract
Deep learning has become an integral part of machine learning and has radically transformed our lives in healthcare, automotive systems, human-computer interaction, and beyond. Although deep learning requires a tremendous amount of compute power and resources, its success in solving complex tasks has generated serious interest in deploying deep learning models on edge sensors and IoT devices. That goal, however, presents serious challenges: typical deep learning models demand powerful hardware with large memories and high power consumption, while sensor systems and IoT devices at the edge are heavily resource constrained, with limited compute power and on-board memory. Many efforts are therefore being actively pursued to optimize deep learning models so that they fit within the limited resources of edge devices.

In this dissertation, I explore techniques for achieving ultra-low-power hardware to enable machine learning at the edge. There have been numerous advances in circuit design techniques for very-low-power applications, such as subthreshold analog computing and in-memory computation, and emerging devices, together with circuits that integrate them, have shown promising results for custom hardware-based edge devices. In this study, I explore neuromorphic techniques that lower the power consumption of the computation hardware without significantly degrading performance. Drawing inspiration from biology, specifically the spiking neurons of the nervous system, I explore biologically relevant neurons, circuits, and learning rules to minimize computation and power consumption for machine learning in edge devices and sensors.

I have proposed a modification to a sparse coding algorithm that decreases the number of circuits required for hardware implementation. I have also proposed an analog spiking neuron design that can display a variety of spiking behaviors.
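As a point of reference for what "a variety of spiking behaviors" means, the following sketch simulates Izhikevich's well-known two-variable neuron model, which reproduces many biological firing patterns by varying only four parameters (a, b, c, d). This is an illustrative software model only, not the analog circuit proposed in the dissertation; all parameter values are the standard ones from Izhikevich (2003).

```python
# Izhikevich neuron model: dv/dt = 0.04v^2 + 5v + 140 - u + I,
# du/dt = a(bv - u); on spike (v >= 30 mV): v <- c, u <- u + d.
def izhikevich(a, b, c, d, I=10.0, dt=0.25, t_ms=1000.0):
    """Forward-Euler simulation; returns the number of spikes fired."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:  # spike: reset membrane, bump recovery variable
            v, u, spikes = c, u + d, spikes + 1
    return spikes

# Two classic parameter sets: changing only (a, d) switches the behavior
# from adapting regular spiking to high-rate fast spiking.
rs = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)  # regular spiking
fs = izhikevich(a=0.1,  b=0.2, c=-65.0, d=2.0)  # fast spiking
print(rs, fs)
```

At the same input current, the fast-spiking parameter set fires at a much higher rate than the regular-spiking one, illustrating how a single compact dynamical model (or circuit) can cover distinct behaviors.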
The circuit is compact, low power, uses a low supply voltage, and has high power efficiency, improving on the state of the art. Analog circuits suffer from leakage current, which makes the design of synaptic circuits difficult; I have proposed a leakage-current mitigation technique for a synaptic circuit array and provide simulation experiments to show its efficacy.

Spiking neural networks are still an emerging branch of machine learning, and the necessary simulation tools are lacking. Although many hardware neuron circuits exist, no spiking neural network simulator accounts for hardware non-idealities. Yet designing robust circuits and systems whose outcomes can be predicted through simulation requires including those non-idealities, and given the complexity of spiking neural network hardware, this is not an easy task. I propose a phase-plane method for easily extracting hardware non-idealities and using them in existing simulators to simulate spiking neural networks. The proposed method is computationally inexpensive and integrates easily with spiking network simulators. I compare SPICE and phase-plane simulations of spiking neural networks to show that the phase-plane method can indeed account for hardware non-idealities.
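The idea of a phase-plane extraction can be sketched in miniature: sweep the neuron's state variable, tabulate its measured derivative (in practice this would come from a SPICE sweep of the circuit, non-idealities included), and then integrate the tabulated curve instead of the transistor-level model. The quadratic integrate-and-fire neuron below stands in for the hardware; all names and parameters are illustrative assumptions, not the dissertation's actual circuits or tooling.

```python
import numpy as np

# Ground-truth "hardware" neuron: a quadratic integrate-and-fire model
# standing in for a transistor-level circuit (illustrative only).
def hardware_dvdt(v):
    return v * v + 0.5  # dV/dt in arbitrary units; always > 0 (tonic)

V_RESET, V_SPIKE = -2.0, 2.0

def simulate(dvdt, v0, dt, steps):
    """Forward-Euler integration with spike/reset; returns spike count."""
    v, spikes = v0, 0
    for _ in range(steps):
        v += dt * dvdt(v)
        if v >= V_SPIKE:
            v, spikes = V_RESET, spikes + 1
    return spikes

# Phase-plane extraction: sweep the membrane voltage and tabulate the
# measured dV/dt at each point. Any non-ideality present in the sweep
# is captured automatically in the table.
v_grid = np.linspace(V_RESET, V_SPIKE, 401)
dvdt_samples = np.array([hardware_dvdt(v) for v in v_grid])

# The extracted model is just interpolation over the tabulated samples,
# so it is cheap to evaluate inside a network-level simulator.
def extracted_dvdt(v):
    return float(np.interp(v, v_grid, dvdt_samples))

n_true = simulate(hardware_dvdt, v0=1.5, dt=1e-3, steps=20000)
n_extracted = simulate(extracted_dvdt, v0=1.5, dt=1e-3, steps=20000)
print(n_true, n_extracted)
```

The extracted table reproduces the spiking behavior of the original model under identical drive, which is the essential property the abstract claims for the phase-plane approach: circuit-level dynamics carried into a fast behavioral simulation.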