Abstract

In floating-point arithmetic, multiplication is the operation most frequently required by signal processing and scientific applications. Computing the product of two single-precision floating-point numbers involves a 24-bit mantissa multiplication, and this mantissa multiplication dominates the performance of the design in terms of occupied area and propagation delay. This paper presents the design and analysis of a single-precision floating-point multiplier based on the Karatsuba algorithm with a Vedic multiplier, using modified 2x1 multiplexers and modified 4:2 compressors to overcome the drawbacks of existing techniques. The performance of the single-precision floating-point multiplier is then analyzed in terms of area and delay, comparing the Karatsuba algorithm built from existing components (4x1 multiplexers and 3:2 compressors) against the modified components (2x1 multiplexers and 4:2 compressors). The simulation results show that single-precision floating-point multiplication with the Karatsuba algorithm using the modified 4:2 compressor with XOR-MUX logic provides better performance, using resources such as area and delay more efficiently than the existing techniques. All blocks of the floating-point multiplier are coded in Verilog and synthesized using the Xilinx ISE simulator.
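The Karatsuba decomposition of the 24-bit mantissa product described above can be illustrated with a short behavioral sketch. This is plain Python for exposition only, not the paper's Verilog design; the split widths and base-case threshold are illustrative assumptions.

```python
def karatsuba(x, y, bits=24):
    """Recursive Karatsuba multiplication of two `bits`-wide integers.

    Splitting each operand into an upper and a lower half of m bits gives
      x*y = z2*2^(2m) + z1*2^m + z0,
    where z2 = hi_x*hi_y, z0 = lo_x*lo_y, and
      z1 = (hi_x+lo_x)*(hi_y+lo_y) - z2 - z0,
    i.e. three sub-multiplications instead of four.
    """
    if bits <= 8:                 # base case: direct multiply (threshold is illustrative)
        return x * y
    m = bits // 2
    mask = (1 << m) - 1
    lo_x, hi_x = x & mask, x >> m
    lo_y, hi_y = y & mask, y >> m
    z0 = karatsuba(lo_x, lo_y, m)
    z2 = karatsuba(hi_x, hi_y, bits - m)
    # the half-sums can be one bit wider than m, hence bits - m + 1
    z1 = karatsuba(hi_x + lo_x, hi_y + lo_y, bits - m + 1) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0
```

For two 24-bit mantissas (hidden bit included), `karatsuba(a, b)` returns the full 48-bit product that is subsequently normalized and rounded in the floating-point multiplier.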

Highlights

  • Floating-point arithmetic operations are widely used in applications such as signal and numerical processing

  • This paper presents the design and analysis of a single-precision floating-point multiplier based on the Karatsuba algorithm with a Vedic multiplier, using modified 2x1 multiplexers and modified 4:2 compressors to overcome the drawbacks of existing techniques

  • Single-precision floating-point multiplication with the Karatsuba algorithm using the modified 4:2 compressor with XOR-MUX logic provides better performance, using resources such as area and delay more efficiently than the existing techniques
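The XOR-MUX style of 4:2 compressor mentioned in the highlights can be sketched at the gate level. The formulation below is one common XOR-MUX realization, given here as an assumption for illustration; the paper's exact gate netlist may differ.

```python
def compressor_4_2(x1, x2, x3, x4, cin):
    """4:2 compressor expressed with XOR gates and 2:1 multiplexers.

    Compresses five weight-1 input bits into one weight-1 output (sum)
    and two weight-2 outputs (carry, cout), preserving the total:
      x1 + x2 + x3 + x4 + cin == sum + 2*(carry + cout)
    """
    mux = lambda sel, a, b: a if sel else b   # 2:1 multiplexer primitive

    t = x1 ^ x2 ^ x3 ^ x4                     # XOR chain over the four inputs
    sum_ = t ^ cin                            # final XOR with the incoming carry
    carry = mux(t, cin, x4)                   # MUX selected by the XOR chain
    cout = mux(x1 ^ x2, x3, x1)               # cout depends only on x1..x3
    return sum_, carry, cout
```

Because `cout` is independent of `cin`, a column of these compressors has no ripple path through the intermediate carries, which is what makes the 4:2 compressor attractive for partial-product reduction.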


Introduction

In many applications, such as signal and numerical processing, floating-point arithmetic operations are widely used. The floating-point number standard is defined by IEEE [3] for different formats, such as single precision and double precision. Among floating-point operations, multiplication is the most complex and the one that decides performance, so efficient implementation of floating-point multipliers is a major concern. Over the last few decades, a lot of work has been done at both the algorithmic and implementation levels to improve the performance of floating-point computations [13]-[15]. Several works have focused on implementation on FPGA platforms [11],[20]. In spite of these tremendous efforts, floating-point arithmetic is still often the bottleneck in many computations.
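The IEEE single-precision format referred to above packs a number into 1 sign bit, 8 exponent bits (bias 127), and 23 fraction bits; the multiplier in this paper operates on the 24-bit mantissa formed by prepending the hidden bit. A small Python sketch of this decomposition (for exposition only, not part of the hardware design):

```python
import struct

def decompose_f32(value):
    """Split an IEEE 754 single-precision float into (sign, exponent, mantissa).

    Layout: 1 sign bit | 8 exponent bits (bias 127) | 23 fraction bits.
    For normal numbers the 24-bit mantissa is 1.fraction, i.e. the
    hidden leading 1 concatenated with the stored fraction field.
    """
    bits = struct.unpack('>I', struct.pack('>f', value))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    # Hidden bit is 1 for normal numbers, 0 for zero/subnormals.
    mantissa = ((1 << 23) | fraction) if exponent != 0 else fraction
    return sign, exponent, mantissa
```

A floating-point multiplier XORs the signs, adds the exponents (subtracting the bias once), and multiplies the two 24-bit mantissas; the 48-bit product is then normalized and rounded.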
