Abstract

Software metrics take source code as input and produce discrete values that serve as indicators of software quality. Many metrics have been automated in different tools, yet these tools produce varying results for the same metric. Consequently, software metric measurements cannot be said to be precise, repeatable, or reproducible. This is a consequence of varying definitions, designs, and implementations of the same metrics; divergent results; assessment and analysis of the same metrics against relative, personalized benchmarks; non-uniform definitions of implementation contexts and software-measurement terminology; and the lack of standard references and calibration for expressing a "level of confidence" in software measurement. Several studies have proposed unifying software metrics without examining the underlying causes of these widely observed inconsistencies across existing metrics and their automated tools. This work identifies pitfalls and ways to minimize variance in the implementation of software measurements across contexts. At this stage of an ongoing research effort, we are determining whether software metrics can be unified objectively by closing the gaps between observed sources of variance and by adopting metrological approaches to software measurement.
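As an illustration of the variance the abstract describes (this sketch is ours, not the authors'), two plausible implementations of cyclomatic complexity can disagree on the same source code depending on whether boolean operators are counted as extra decision points. The sample snippet, function names, and counting rules below are assumptions chosen only to show how tool-level design choices yield different values for one nominally identical metric.

# Illustrative sketch, not from the paper: two counting rules for
# cyclomatic complexity applied to the same input disagree.
import ast

SOURCE = """
def grade(score, passed):
    if score > 90 and passed:
        return "A"
    elif score > 75:
        return "B"
    return "C"
"""

def complexity_decisions_only(tree):
    # Rule 1 (assumed): 1 + number of branching statements (if/elif, loops).
    count = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While)):
            count += 1
    return count

def complexity_with_boolean_ops(tree):
    # Rule 2 (assumed): additionally count each extra operand of and/or,
    # as some tools do for an "extended" cyclomatic complexity.
    count = complexity_decisions_only(tree)
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            count += len(node.values) - 1
    return count

tree = ast.parse(SOURCE)
print(complexity_decisions_only(tree))     # 3
print(complexity_with_boolean_ops(tree))   # 4

Both functions claim to measure "cyclomatic complexity", yet they report 3 and 4 for the same code; this is the kind of definitional and implementation variance the abstract argues must be addressed before metrics can be unified.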
