Abstract

The detection of moving objects, animals, or pedestrians, as well as of static objects such as road signs, is one of the fundamental tasks for assisted and self-driving vehicles. The task becomes even more difficult in low-light conditions, such as driving at night or inside road tunnels. Since the objects found in the driving scene represent a significant collision risk, this scientific contribution proposes an innovative pipeline for real-time tracking of salient objects in low-light driving scenes. By combining time-transient non-linear cellular networks with deep architectures that use self-attention, the proposed solution performs real-time enhancement of the low-light driving-scene frames. A downstream deep network learns from the brightness-enhanced frames to identify and segment salient objects with a bounding-box-based approach. The proposed algorithm is currently being ported to a hybrid architecture consisting of an embedded system with an SPC5x Chorus MCU integrated with an automotive-grade system based on an STA1295 MCU core. The performance obtained in the experimental validation phase (accuracy of about 90% and a correlation coefficient of about 0.49) confirms the effectiveness of the proposed method.

Highlights

  • Autonomous Driving (AD) and ADAS, i.e., Advanced Driver Assistance Systems, are considered very promising technology-based solutions able to cover the safety requirements of such very complex automotive scenarios [1,2]

  • The aim of this proposal is the design of an innovative system that addresses assisted or autonomous driving under low-light driving-scene conditions

  • I = −0.5; Transient Steps(k) = 5. In the following Figure 2 we report such instances of the transient Cellular Non-linear Network (TCNN) enhanced input low-light driving frames, with a detail of each light-enhancement generated by each of the TCNN configurations as per Equations (5)–(7)
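The TCNN enhancement step above can be sketched numerically. The snippet below is a minimal, hypothetical illustration of a short Cellular Non-linear Network transient run on a grayscale frame: only the bias I = −0.5 and the five transient steps come from the highlight; the feedback/feed-forward templates `A` and `B`, the step size `dt`, and the function names are placeholder assumptions, since the paper's actual Equations (5)–(7) are not reproduced here.

```python
import numpy as np

def conv3x3(x, k):
    # Cross-correlate a 2-D array with a 3x3 template, edge-replicated borders
    xp = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def cnn_output(x):
    # Standard piecewise-linear CNN output: f(x) = 0.5*(|x+1| - |x-1|),
    # which saturates the state into [-1, 1]
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def tcnn_enhance(u, A, B, bias=-0.5, steps=5, dt=0.1):
    """Forward-Euler integration of the classic CNN state equation
        dx/dt = -x + A * f(x) + B * u + I
    stopped after a fixed number of transient steps (here k = 5)."""
    x = u.copy()
    bu = conv3x3(u, B)  # input-coupling term, constant over the transient
    for _ in range(steps):
        y = cnn_output(x)
        dx = -x + conv3x3(y, A) + bu + bias
        x = x + dt * dx
    return cnn_output(x)  # enhanced frame, again in [-1, 1]

# Hypothetical center-only templates, chosen purely for illustration
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

frame = np.random.rand(64, 64) * 2.0 - 1.0  # synthetic low-light frame in [-1, 1]
enhanced = tcnn_enhance(frame, A, B)
```

Stopping the integration after a fixed number of transient steps, rather than waiting for the network to settle, is what makes this kind of enhancement feasible in real time on embedded targets.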


Introduction

Autonomous Driving (AD) and ADAS, i.e., Advanced Driver Assistance Systems, are considered very promising technology-based solutions able to cover the safety requirements of such very complex automotive scenarios [1,2]. For both autonomous vehicles and assisted driving, it is critical to have normal-light captured driving video frames, as most classical computer vision algorithms degrade significantly in the absence of adequate lighting [3]. An innovative method named “hyper-filtering”, combined with robust computer vision algorithms, has been reported in [8,9,10] to provide intelligent assistance to the car driver

