Infrared-based visual perception is important for night vision in autonomous vehicles, unmanned aerial vehicles (UAVs), and similar platforms. Semantic segmentation based on deep learning is one of the key techniques for infrared vision-based perception systems. Currently, most advanced methods are based on Transformers, which achieve favorable segmentation accuracy. However, the high complexity of Transformers prevents them from meeting the real-time inference-speed requirements of resource-constrained applications. In view of this, we suggest several lightweight designs that significantly reduce computational complexity. To maintain segmentation accuracy, we further introduce a recent large vision model, the Segment Anything Model (SAM), to supply auxiliary supervisory signals during training. Based on these designs, we propose a lightweight segmentation network termed SMALNet (Segment Anything Model Aided Lightweight Network). Compared to the existing state-of-the-art method SegFormer, SMALNet reduces FLOPs by 64% while largely maintaining accuracy on two commonly used benchmarks. The proposed SMALNet can be deployed in various infrared-based vision perception systems with limited hardware resources.