Abstract

As of January 2027, the Machinery Regulation (EU) 2023/1230 replaces the Machinery Directive 2006/42/EC, and a proposal for the regulation of Artificial Intelligence (AI) systems is soon to be ratified by the European Council. These legislative changes address hazards arising from increased automation and AI integration. Original Equipment Manufacturers (OEMs) face compliance challenges due to a lack of standardized guidelines, especially for off-road machinery with level 4 ‘high automation’ capabilities. At level 4 ‘high automation’, the machine operator assumes a supervisory role, intervening only in situations beyond the designed operational domain. Relying solely on supervisor intervention at this level of automation is unreasonable. Within this context, legislative requirements to maintain safety advocate for robust risk management systems, including both a human-in-the-loop safety option and adequate safety-related decision-making by the level 4 ‘high automation’ off-road machine; that is, the safe reasoning software becomes paramount. In this research, conducted through a systematic mapping study, we discuss emerging safety trends and their integration for the safe reasoning software a level 4 off-road machine requires. Our analysis shows the necessity of safety constraint-based design procedures. These procedures constrain the AI system by embedding fault management, reasoning checks, and diagnostic monitoring together with safety-related decision-making, thereby enhancing functional reliability and reducing risks. We consider these trends and patterns through the lens of changing machine safety legislative requirements and the proposed regulation of AI products, and illustrate their implications with a case study of a level 4 tree-harvesting machine to provide a tangible, industry-transferable perspective on our results.
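To make the safety constraint-based design pattern described above more concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of a constraint layer that filters an AI planner's proposed action for a level 4 off-road machine. All names, limits, and checks here are illustrative assumptions standing in for the embedded fault management, reasoning checks, and diagnostic monitoring the abstract refers to.

```python
# Hypothetical safety constraint layer for a level 4 off-road machine.
# Illustrative only: names, thresholds, and checks are assumptions, not the
# paper's method. A proposed action from the AI planner is filtered through
# diagnostic monitoring, reasoning checks, and fault management before execution.

from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ACCEPT = auto()      # action passes all safety constraints
    SAFE_STOP = auto()   # fault or hazard detected: transition to a safe state
    ESCALATE = auto()    # outside the designed operational domain: human-in-the-loop


@dataclass
class ProposedAction:
    velocity_mps: float   # commanded ground speed
    boom_active: bool     # whether the harvester boom is commanded to move
    humans_in_zone: bool  # perception estimate of people in the work zone
    sensor_fault: bool    # flag raised by diagnostic monitoring


# Assumed hard limit; in practice this would come from the risk assessment.
MAX_VELOCITY_MPS = 2.5


def safety_filter(action: ProposedAction) -> Verdict:
    """Constrain the AI planner's decision before it reaches the actuators."""
    # Diagnostic monitoring: degraded sensing invalidates the decision basis.
    if action.sensor_fault:
        return Verdict.SAFE_STOP
    # Reasoning check: the proposed action must respect hard safety constraints.
    if action.humans_in_zone and action.boom_active:
        return Verdict.SAFE_STOP
    # Operational domain check: beyond design limits, escalate to the supervisor.
    if action.velocity_mps > MAX_VELOCITY_MPS:
        return Verdict.ESCALATE
    return Verdict.ACCEPT


if __name__ == "__main__":
    proposal = ProposedAction(velocity_mps=1.8, boom_active=True,
                              humans_in_zone=False, sensor_fault=False)
    print(safety_filter(proposal))  # Verdict.ACCEPT
```

The design choice illustrated is that safety-related decision-making is not left to the AI planner alone: a deterministic constraint layer either accepts the action, forces a safe stop, or escalates to the human supervisor, mirroring the combination of a human-in-the-loop option and machine-side safe reasoning discussed in the abstract.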
