Abstract

Deep convolutional neural networks (CNNs) have become practical for autonomous-vehicle applications as the technology has matured, and end-to-end approaches that learn driving behavior directly from sensor input are increasingly popular. Prior work has shown, however, that deep-learning classifiers are vulnerable to adversarial attacks, while the impact of such attacks on regression problems remains poorly understood. In this work we propose two white-box attacks targeting end-to-end self-driving systems. The navigation module of such a system is a regression model that takes a camera image as input and returns a steering angle; by perturbing the input image, an attacker can manipulate the behavior of the driving system. Both attacks can be mounted in practice on CPUs, with no need for GPUs. Experiments in the Udacity simulator demonstrate the effectiveness of the attacks.
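The core idea of a white-box attack on a steering-angle regressor can be sketched as an iterative gradient attack on a toy model. The linear model, the loss, and all names below are illustrative assumptions for exposition, not the paper's actual network or method: the attacker, knowing the model's weights, nudges the input image so the predicted angle moves toward a chosen target while keeping the perturbation small.

```python
import numpy as np

# Toy white-box setting (hypothetical): a linear "steering" model
# f(x) = w . x predicts a steering angle from a flattened image x.
# The attacker minimizes L = 0.5 * (f(x) - target)^2 by gradient
# descent on the image, clipping pixels back to the valid [0, 1] range.
rng = np.random.default_rng(0)
w = rng.normal(size=64 * 64)           # known weights of the toy model
x = rng.uniform(0, 1, size=64 * 64)    # flattened 64x64 "image"

def predict(img):
    return float(w @ img)              # predicted steering angle

target = predict(x) + 5.0              # attacker-chosen angle shift

x_adv = x.copy()
lr = 1e-4                              # small step size per iteration
for _ in range(100):
    grad = (predict(x_adv) - target) * w   # dL/dx for the toy model
    x_adv = np.clip(x_adv - lr * grad, 0.0, 1.0)

# The perturbed image steers toward the target with a tiny per-pixel change.
print(predict(x), predict(x_adv), np.max(np.abs(x_adv - x)))
```

Each iteration is a single dot product and vector update, which is why such attacks are cheap enough to run on a CPU without GPU acceleration.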
