
Implementation of Approximate Half Precision Floating Point Multiplier Using Verilog

 Abstract
In the age of artificial intelligence, there is a need for a higher rate of data
processing. Basic neural networks are composed of layers, and each layer
consists of neurons. Each neuron multiplies its inputs with the
corresponding weights to produce its output, so weight computation and
output calculation require many floating-point multiplications. In image
processing applications using neural networks, the exact value of the output
is not needed; a certain level of inaccuracy is acceptable and does not
significantly affect functionality. Approximate designs achieve this
functionality with approximation instead of exactness. Approximate
multipliers introduce an error in the output, making the circuit less
complex and reducing the propagation delay. Adopting approximate
floating-point multipliers can therefore be beneficial, as they need less area
and power than their exact counterparts. Exact arithmetic
computation requires many gates, consuming a large amount of power,
and the calculation takes much of the time. Using approximate
arithmetic computation can thus greatly improve the speed and power
consumption of the system while reducing its overall complexity.
These approximate multipliers are used in applications
where the error can be tolerated to a certain level and does not affect the
system's overall functionality. In this project, an approximate floating-point
multiplier based on Booth's algorithm is implemented in Verilog.
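The Booth multiplication mentioned above can be sketched behaviorally. The following Python model (not the Verilog implementation; the function name is illustrative) shows the radix-2 recoding that drives the mantissa multiplier:

```python
def booth_multiply(m, r, bits):
    """Radix-2 Booth multiplication of two signed integers of width `bits`.

    Each pair of adjacent multiplier bits (r_i, r_{i-1}) selects an action:
    01 -> add the shifted multiplicand, 10 -> subtract it, 00/11 -> no-op.
    """
    acc = 0
    prev = 0  # implicit 0 to the right of the LSB
    for i in range(bits):
        cur = (r >> i) & 1
        if cur == 0 and prev == 1:    # 01 pair: add m << i
            acc += m << i
        elif cur == 1 and prev == 0:  # 10 pair: subtract m << i
            acc -= m << i
        prev = cur
    return acc
```

For the unsigned 11-bit mantissas (10 fraction bits plus the hidden bit), one extra guard bit is needed so the value is not misread as a negative two's-complement number, e.g. `booth_multiply(ma, mb, bits=12)`.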
 Objectives
1. To perform multiplication of two half precision floating point numbers.
2. To achieve computations with less delay.
3. To achieve lower power consumption and cell area.

 Methodology
The half precision floating point number consists of 16 bits: the MSB is a
sign bit of size 1 bit, followed by an exponent of size 5 bits and a mantissa
of size 10 bits.
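These fields can be extracted with simple shifts and masks. A minimal Python sketch of the IEEE 754 binary16 layout (the helper name is illustrative, not from the design):

```python
def unpack_fp16(x):
    """Split a 16-bit half precision word into its three fields."""
    sign     = (x >> 15) & 0x1   # 1 bit (MSB)
    exponent = (x >> 10) & 0x1F  # 5 bits, biased by 15
    mantissa = x & 0x3FF         # 10 fraction bits (implicit leading 1)
    return sign, exponent, mantissa

# Example: 0x3C00 encodes 1.0 -> sign 0, biased exponent 15, mantissa 0
```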

The design of the floating-point multiplier involves three different
calculations: the sign, exponent, and mantissa calculations.
Algorithm steps
1. The mantissas of the two numbers are multiplied using Booth's algorithm.
2. The binary point is placed in the result.
3. The exponents of the two numbers are added (and the bias is subtracted).
4. The sign bit (MSB) of the result is obtained by XORing the sign bits
of the two numbers.
5. The implicit leading 1 is prepended at the MSB of each mantissa
before the multiplication.
6. The result is rounded to fit into the available number of bits, based on
the exponent.
7. The result is verified for overflow and underflow conditions.
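The steps above can be sketched end-to-end as a behavioral Python model. This is an assumption-laden sketch, not the paper's exact design: it handles normal values only, a plain integer multiply stands in for the Booth multiplier, and truncating the mantissa product stands in for the approximation.

```python
def fp16_mul(a, b):
    """Multiply two binary16 words (normal values only), truncating the
    mantissa product instead of rounding -- a simple approximation."""
    BIAS = 15
    sign = ((a >> 15) ^ (b >> 15)) & 1                      # XOR of sign bits
    exp  = ((a >> 10) & 0x1F) + ((b >> 10) & 0x1F) - BIAS   # add exponents, unbias
    ma   = (a & 0x3FF) | 0x400                              # restore hidden 1
    mb   = (b & 0x3FF) | 0x400
    prod = ma * mb                    # mantissa multiply (Booth in the design)
    if prod & (1 << 21):              # product in [2, 4): renormalize
        prod >>= 11
        exp += 1
    else:                             # product in [1, 2)
        prod >>= 10
    if exp >= 31:                     # overflow -> infinity
        return (sign << 15) | 0x7C00
    if exp <= 0:                      # underflow -> flush to zero
        return sign << 15
    return (sign << 15) | (exp << 10) | (prod & 0x3FF)
```

For example, multiplying 0x4000 (2.0) by 0x4200 (3.0) yields 0x4600 (6.0).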
 Conclusion
An approximate half-precision floating-point multiplier is designed.
This multiplier is implemented using Verilog. The results are compared
with those of the exact half precision multiplier. The approximate multiplier
uses less power and area, and the complexity of the circuit is reduced.

 Timelines
Week 1 – Resume
Week 2 – Finalising the project

 References
1. T. D. Nguyen and J. E. Stine, "A Combined IEEE Half and Single Precision Floating Point Multipliers for Deep
Learning," IEEE, 2017, doi: 10.1109/ACSSC.2017.8335507.
2. S. Venkatachalam and S. B. Ko, "Design of Power and Area Efficient Approximate Multipliers," IEEE Transactions
on VLSI Systems, vol. 25, no. 5, pp. 1782-1786, May 2017, doi: 10.1109/TVLSI.2016.2643639.
3. M. Ramasamy, G. Narmada and S. Deivasigamani, "Carry based approximate full adder for low power
approximate computing," 2019 7th International Conference on Smart Computing & Communications (ICSCC),
2019, pp. 1-4, doi: 10.1109/ICSCC.2019.8843644.
4. P. Yin, C. Wang, W. Liu and F. Lombardi, "Design and Performance Evaluation of Approximate Floating-Point
Multipliers," 2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2016, pp. 296-301, doi:
10.1109/ISVLSI.2016.15.
5. H. Zhang, W. Zhang and J. Lach, "A low-power accuracy-configurable floating point multiplier," in Proc. IEEE 32nd
Int. Conf. Comput. Des., 2014, pp. 48-54.
