Abstract

Electric vehicles (EVs) play a significant role in sustainability, but EV fire accidents have raised safety concerns in recent years. To address the limited availability of mobile analytical equipment at EV fire accident scenes and help investigators obtain prompt results during on-site inspection, we propose a lightweight yet accurate Transformer well suited to mobile environments. First, we build on the simple SegFormer and extend it to aggregate the representations of amorphous objects, such as fire traces, in image recognition. Second, we use shunted self-attention (SSA) to strengthen the model's ability to capture multi-scale contextual information and to help distinguish the degree of deformation of EVs after combustion. Third, we redesign a simple multi-level information aggregation (MIA) decoder that captures relationships between pixels along the channel dimension through weighted aggregation. Furthermore, to foster fire-trace image recognition, we introduce electric vehicle fire traces (EVFTrace), a dataset of images of burnt EVs, and evaluate model accuracy on it. On EVFTrace, our model reaches a mean intersection over union (mIoU) of 72.24%, with 114.83 G floating-point operations (FLOPs) and 89.5 M parameters (Params). Our model shows excellent efficiency and accuracy on burnt-EV segmentation tasks.
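The multi-scale idea behind shunted self-attention can be illustrated with a minimal sketch: queries stay at full resolution, while each "head" attends to key/value tokens pooled at a different spatial rate, so one head sees fine detail and another sees coarse, wide context. The code below is an illustrative assumption on our part (plain NumPy, single example, no learned projections), not the authors' implementation.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pool_tokens(x, h, w, r):
    """Average-pool an (h*w, d) token grid by spatial factor r."""
    d = x.shape[1]
    g = x.reshape(h // r, r, w // r, r, d).mean(axis=(1, 3))
    return g.reshape(-1, d)                       # (h*w / r^2, d)

def shunted_attention(x, h, w, rates=(1, 2)):
    """x: (h*w, d) tokens; each head uses keys/values pooled at its own rate."""
    n, d = x.shape
    outs = []
    for r in rates:
        kv = pool_tokens(x, h, w, r)              # coarser tokens for larger r
        attn = softmax(x @ kv.T / np.sqrt(d))     # queries stay full-resolution
        outs.append(attn @ kv)                    # (n, d) per-head output
    return np.concatenate(outs, axis=-1)          # heads concatenated channel-wise

x = np.random.randn(16 * 16, 32)                  # a 16x16 feature map, 32 channels
y = shunted_attention(x, 16, 16)
print(y.shape)  # (256, 64)
```

In a real model each rate would have its own learned key/value projections and the concatenated heads would pass through an output projection; the sketch only shows how pooling at different rates gives each head a different receptive field.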
