Abstract

This paper elaborates on the evaluation stage of an empirical process whose purpose is to evaluate and select a best-of-breed Automated Web Engineering Methodology (AWEM) environment. The process comprises four major stages: characteristics identification, screening of available AWEM environments, evaluation, and selection. During the evaluation stage, an "evaluation scheme" is created that serves specific web application domains and reflects the organization's perspective on the AWEM environment; the idea is to make the evaluation process more user-centric. The actual evaluation is then conducted based on results gained from developing real pilot projects. The paper contributes to current research in the web engineering area by proposing an evaluation mechanism built around this "evaluation scheme", a subset of predefined essential criteria. Another major contribution is an evaluation algorithm for weighting and rating the various characteristics and alternatives, which ultimately assists in making a final decision. Both the process and the algorithm were fully automated and evaluated on real-world cases using AWEM-ESS, an evaluation and selection system built specifically for this purpose.
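The abstract does not detail the evaluation algorithm itself, so the following is only a minimal sketch of one common way such weighting and rating of alternatives against a criteria subset could be computed (a weighted-sum score). The criteria names, weights, rating scale, and function names below are hypothetical illustrations, not the paper's actual scheme.

```python
# Hypothetical sketch: weighted-sum scoring of candidate AWEM environments
# against an "evaluation scheme" (a subset of weighted criteria).
# All criteria, weights, and ratings here are illustrative placeholders.

def score_alternatives(weights, ratings):
    """Rank alternatives by weighted score, highest first.

    weights: {criterion: weight} reflecting the organization's priorities.
    ratings: {alternative: {criterion: rating}}, e.g. on a 1-5 scale
             gathered from pilot projects.
    """
    total_weight = sum(weights.values()) or 1.0
    scores = {}
    for alternative, crit_ratings in ratings.items():
        scores[alternative] = sum(
            (w / total_weight) * crit_ratings.get(criterion, 0.0)
            for criterion, w in weights.items()
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical evaluation scheme and pilot-project ratings.
    weights = {"usability": 0.40, "tool support": 0.35, "documentation": 0.25}
    ratings = {
        "AWEM-A": {"usability": 4, "tool support": 3, "documentation": 5},
        "AWEM-B": {"usability": 5, "tool support": 4, "documentation": 2},
        "AWEM-C": {"usability": 3, "tool support": 5, "documentation": 4},
    }
    for alternative, score in score_alternatives(weights, ratings):
        print(f"{alternative}: {score:.2f}")
```

In this sketch the highest-scoring alternative would be the one recommended for selection; the paper's own algorithm and criteria may differ.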
