Implementation of Approximate Half Precision Floating Point Multiplier Using Verilog
Abstract
In the age of artificial intelligence, there is a need for higher rates of data
processing. Basic neural networks are composed of layers, and each layer
consists of neurons. Each neuron multiplies its inputs by the corresponding
weights to produce an output, so both weight computation and output
calculation require many floating-point multiplications. In image-processing
applications that use neural networks, the exact output value is not needed:
a certain level of inaccuracy is acceptable and does not significantly affect
functionality. Approximate designs achieve this functionality with
approximation instead of exactness. Approximate multipliers introduce an
error in the output in exchange for a less complex circuit and a shorter
propagation delay. Adopting approximate floating-point multipliers is
therefore beneficial, as they need less area and power than their exact
counterparts. Exact arithmetic computation requires many gates, consumes
a large amount of power, and takes up much of the computation time, so
approximate arithmetic computation can substantially improve the speed
and power consumption of the system while reducing its overall complexity.
Such approximate multipliers are used in applications where the error can
be tolerated up to a certain level and does not affect the system's overall
functionality. In this project, an approximate floating-point multiplier based
on Booth's algorithm is implemented in Verilog.
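
A minimal Verilog sketch of this idea follows, assuming radix-4 Booth
recoding with the least significant partial-product rows dropped as the
approximation; the module name booth_approx_mult, the operand width, and
the DROP parameter are illustrative assumptions, not the project's
finalized design.

module booth_approx_mult #(
    parameter N    = 12,  // operand width in bits (assumed even here)
    parameter DROP = 2    // low-order Booth groups to drop (approximation knob)
) (
    input  signed [N-1:0]   a,
    input  signed [N-1:0]   b,
    output signed [2*N-1:0] p
);
    // Append the implicit b[-1] = 0 used by Booth recoding.
    wire [N:0] bx = {b, 1'b0};

    integer i;
    reg signed [2*N-1:0] acc, pp;
    reg [2:0] grp;

    always @* begin
        acc = 0;
        for (i = 0; i < N; i = i + 2) begin
            // Radix-4 Booth group {b[i+1], b[i], b[i-1]} selects a
            // multiple of 'a' from {0, +A, +2A, -A, -2A}.
            grp = bx[i +: 3];
            case (grp)
                3'b001, 3'b010: pp = a;           // +A
                3'b011:         pp = a <<< 1;     // +2A
                3'b100:         pp = -(a <<< 1);  // -2A
                3'b101, 3'b110: pp = -a;          // -A
                default:        pp = 0;           //  0
            endcase
            // Approximation: the DROP least significant groups are
            // skipped, deleting their adder rows at the cost of a
            // bounded error in the low-order product bits.
            if (i >= 2 * DROP)
                acc = acc + (pp <<< i);
        end
    end

    assign p = acc;
endmodule

Each dropped Booth group removes one row of partial-product addition,
which is the source of the area, delay, and power savings described above,
at the cost of a bounded truncation error in the low-order product bits.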
Objectives
1. To perform multiplication of two half-precision floating-point numbers.
2. To achieve computations with less delay.
3. To achieve lower power consumption and a smaller cell area.
Methodology
The half-precision (IEEE 754 binary16) floating-point number consists of 16
bits: the MSB is the sign bit (1 bit), followed by a 5-bit exponent (biased
by 15) and a 10-bit mantissa. Multiplication proceeds field by field: the
result sign is the XOR of the operand signs, the exponents are added and
the bias subtracted, and the mantissas (with the hidden leading 1 restored)
are multiplied; this mantissa multiplication is where the approximate Booth
multiplier is applied, as sketched below.
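
A minimal sketch of the surrounding fp16 multiply datapath is given below,
assuming normal (non-subnormal) operands and omitting rounding, overflow
and underflow handling, and special cases such as NaN and infinity; the
module name fp16_mult_sketch is assumed for illustration.

module fp16_mult_sketch (
    input  [15:0] a,
    input  [15:0] b,
    output [15:0] p
);
    // Field extraction: 1-bit sign, 5-bit exponent (bias 15), 10-bit
    // fraction with the hidden leading 1 restored (normal inputs assumed).
    wire        sa = a[15],           sb = b[15];
    wire [4:0]  ea = a[14:10],        eb = b[14:10];
    wire [10:0] ma = {1'b1, a[9:0]},  mb = {1'b1, b[9:0]};

    wire        sp   = sa ^ sb;       // result sign: XOR of operand signs
    wire [21:0] prod = ma * mb;       // 11x11 -> 22-bit significand product;
                                      // this exact multiply is where an
                                      // approximate (e.g. Booth-based)
                                      // multiplier would be substituted
    wire        norm = prod[21];      // product in [2, 4) needs a 1-bit shift
    wire [5:0]  ep   = ea + eb - 5'd15 + norm;  // re-bias, adjust for shift
    wire [9:0]  mp   = norm ? prod[20:11] : prod[19:10];  // drop hidden 1, truncate

    assign p = {sp, ep[4:0], mp};
endmodule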
Timelines
Week 1 – Resume
Week 2 – Finalising the project
References
1. T. D. Nguyen and J. E. Stine, "A combined IEEE half and single precision floating point multipliers for deep
learning," 2017 Asilomar Conference on Signals, Systems, and Computers (ACSSC), 2017, DOI: 10.1109/ACSSC.2017.8335507.
2. S. Venkatachalam and S. B. Ko, "Design of Power and Area Efficient Approximate Multipliers," IEEE Transactions
on VLSI Systems, vol. 25, no. 5, pp. 1782-1786, May 2017, DOI: 10.1109/TVLSI.2016.2643639.
3. M. Ramasamy, G. Narmada and S. Deivasigamani, "Carry based approximate full adder for low power
approximate computing," 2019 7th International Conference on Smart Computing & Communications (ICSCC),
2019, pp. 1-4, DOI: 10.1109/ICSCC.2019.8843644.
4. P. Yin, C. Wang, W. Liu and F. Lombardi, "Design and Performance Evaluation of Approximate Floating-Point
Multipliers," 2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2016, pp. 296-301, DOI:
10.1109/ISVLSI.2016.15.
5. H. Zhang, W. Zhang and J. Lach, "A low-power accuracy-configurable floating point multiplier," in Proc. IEEE 32nd
Int. Conf. Comput. Des. (ICCD), 2014, pp. 48-54.