Abstract

Purpose: This study addresses the challenge of managing data- and model-related risks in Agile Artificial Intelligence (AI) and Machine Learning (ML) projects. It aims to develop a systematic framework for effective risk control using Agile methodologies.

Methodology: The research is grounded in an interpretivist approach and uses a deductive method. It constructs a comprehensive framework for identifying and mitigating risks, integrating risk management into Agile processes for AI and ML development.

Findings: The study introduces four technological themes critical for risk mitigation: dynamic distribution of resources, model robustness, risk integration, and quality assessment of information. These themes provide actionable strategies for reducing risk throughout the Agile AI/ML development lifecycle, making risk assessment and mitigation integral to project planning and execution.

Unique contribution to theory, practice, and policy: The study contributes to theory and practice by offering a detailed, actionable framework for risk management in Agile AI/ML projects. It advocates the adoption of adaptive technologies and tools, continuous stakeholder engagement, and adherence to ethical standards. Recommendations include validating the framework through empirical research and conducting ongoing longitudinal evaluations to adapt and refine risk management practices. This approach aims to improve the reliability and efficiency of project outputs in dynamic environments and provides a foundation for policy development in technology project management.
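As a hypothetical illustration only (not drawn from the paper), the sketch below shows how one of the themes, quality assessment of information, combined with risk integration, might be embedded as an automated gate within an Agile iteration: a simple data-quality check that logs missingness and schema gaps to a risk register before a model-training story is accepted. All names, thresholds, and data structures here are assumptions made for illustration.

```python
# Illustrative sketch only: a minimal data-quality gate an Agile team might
# run before accepting a model-training story. Thresholds, names, and the
# risk-register structure are assumptions, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    description: str
    severity: str          # e.g. "low", "medium", "high"
    mitigation: str

@dataclass
class RiskRegister:
    items: list = field(default_factory=list)

    def log(self, item: RiskItem) -> None:
        self.items.append(item)

def assess_data_quality(rows: list[dict], required_fields: set[str],
                        max_missing_ratio: float = 0.05) -> list[RiskItem]:
    """Return risk items for missing fields and excessive null values."""
    risks = []
    if not rows:
        return [RiskItem("Dataset is empty", "high", "Block story acceptance")]

    # Schema check: every required field must appear in the records.
    missing_fields = required_fields - set(rows[0].keys())
    if missing_fields:
        risks.append(RiskItem(
            f"Missing required fields: {sorted(missing_fields)}",
            "high",
            "Re-ingest data or update the data contract before training",
        ))

    # Missingness check: flag fields whose null ratio exceeds the threshold.
    for f in required_fields & set(rows[0].keys()):
        missing = sum(1 for r in rows if r.get(f) in (None, ""))
        ratio = missing / len(rows)
        if ratio > max_missing_ratio:
            risks.append(RiskItem(
                f"Field '{f}' is {ratio:.0%} missing (limit {max_missing_ratio:.0%})",
                "medium",
                "Impute or backfill values; add the gap to the sprint backlog",
            ))
    return risks

if __name__ == "__main__":
    register = RiskRegister()
    sample = [{"age": 34, "income": 52000}, {"age": None, "income": 61000},
              {"age": 29, "income": None}, {"age": 41, "income": 48000}]
    for risk in assess_data_quality(sample, {"age", "income", "label"}):
        register.log(risk)
    for item in register.items:
        print(f"[{item.severity}] {item.description} -> {item.mitigation}")
```

In this sketch, the gate's output feeds the same backlog the team already plans against, which is one possible way to make risk assessment part of routine sprint activity rather than a separate review stage.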
