Abstract

Context-aware applications are an emerging class of applications in modern computing. They determine and adapt to situational context to provide a better user experience. Testing these applications is not straightforward: the constantly changing nature of context makes testing a context-aware application a challenging task. To uncover a defect in a context-aware application, a test engineer needs both an activity (a sequence of actions) and context information (context data), which makes test case development difficult. Conventional test case development methodologies do not cater for context information. Moreover, conventional applications have a single input source, whereas a context-aware application must obtain data from many sources to infer the context. Yet another issue these applications face is noisy data, since input collected from physical sensors can be noisy. Test adequacy criteria serve as test stoppage rules, define the quality of testing, and guide test suite generation; they help control the cost of testing and establish confidence in the quality of the software product. A number of test adequacy criteria exist for testing conventional applications, but the same is not true for context-aware applications; defining test adequacy criteria and test coverage measures for context-aware applications warrants further research. Researchers have developed several techniques to generate and execute test cases for context-aware applications; however, end-to-end testing and analysis of executed test results remain grueling tasks for test engineers. The aim of this study is to automate end-to-end functional testing, analysis of the generated test results, and functional/requirement coverage assessment. We also present a confidence assessment template for result analysis.
Test engineers can use our proposed framework to assess requirement coverage. The framework reduces testing time, effort, and cost, enabling test engineers to execute more testing cycles and attain a higher degree of test coverage.
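The requirement-coverage assessment described above can be sketched as a simple ratio: requirements exercised by at least one executed test case, divided by all requirements. The following is a minimal illustration; the requirement IDs, test case names, and traceability mapping are hypothetical and not taken from the paper.

```python
# Hypothetical requirement-coverage sketch: coverage = |covered requirements| / |all requirements|.
# All identifiers below are invented for illustration.
requirements = {"R1", "R2", "R3", "R4"}

# Traceability mapping from executed test cases to the requirements they exercise.
executed = {
    "TC1": {"R1", "R2"},
    "TC2": {"R2", "R3"},
}

# Union of everything the executed tests touched, restricted to known requirements.
covered = set().union(*executed.values()) & requirements
coverage = len(covered) / len(requirements)
print(f"Requirement coverage: {coverage:.0%}")  # → Requirement coverage: 75%
```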

Highlights

  • Software testing can be described as the practice of determining whether the actual results produced by a functionality of the application under test are consistent with the expected results

(The associate editor coordinating the review of this manuscript and approving it for publication was Alba Amato.)

  • Graphical user interface-based test automation tools emerged in which the test script development task was to transcribe the steps written in manual test cases into the test scripting language

  • Learning a new scripting language involves a learning curve and can be time consuming; this makes it harder for test engineers to gain a good command of a particular test automation tool, lengthens the learning process for inexperienced test engineers, and lowers the overall success rate of automated testing

Summary

INTRODUCTION

ContextDrive is a functional testing framework that consists of six phases: test scenario design, a keyword-based testing (KBT) test case execution engine, test scripts, a test script execution engine, test result generation, and test result analysis using a context quality confidence template (CQCT). It combines two methods, functionally decomposed test scripts and keyword-based test case execution. In addition to these two methods, ContextDrive supports the execution of end-to-end test scenarios using a scenario-based testing technique for context-aware applications.
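To make the keyword-based execution idea concrete, the sketch below shows a minimal keyword engine: a test case is a sequence of (keyword, arguments) rows, and the engine dispatches each row to a matching handler. This is an illustrative assumption, not ContextDrive's actual API; the keywords, context fields, and the toy oracle are all hypothetical.

```python
# Minimal sketch of a keyword-based test case execution engine.
# All keyword names and context fields are hypothetical illustrations.

# A test case as a test engineer might write it in a keyword table:
# each row is a keyword plus its arguments.
test_case = [
    ("set_context", {"location": "home", "noise_db": 42}),
    ("perform_action", {"action": "open_app"}),
    ("verify", {"expected_mode": "quiet"}),
]

class KeywordEngine:
    def __init__(self):
        self.context = {}   # simulated situational context
        self.results = []   # execution log / verdicts

    # Keyword handlers: the engine maps each keyword to a method.
    def set_context(self, **ctx):
        self.context.update(ctx)

    def perform_action(self, action):
        # A real engine would drive the application under test here.
        self.results.append(f"did:{action}")

    def verify(self, expected_mode):
        # Toy oracle: quiet mode is expected when ambient noise is low.
        actual = "quiet" if self.context.get("noise_db", 100) < 50 else "loud"
        self.results.append("PASS" if actual == expected_mode else "FAIL")

    def run(self, rows):
        for keyword, args in rows:
            getattr(self, keyword)(**args)  # dispatch row to its handler
        return self.results

engine = KeywordEngine()
print(engine.run(test_case))  # → ['did:open_app', 'PASS']
```

Keeping test cases as plain data rows is what lets non-programmers author them; only the handler set needs code, which matches the functional-decomposition idea described above.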

CONTEXT QUALITY CONFIDENCE TEMPLATE
RESULT ANALYSIS USING CONTEXT QUALITY CONFIDENCE TEMPLATE
CONCLUSION