Although deep neural networks (DNNs) have demonstrated superior performance in computer vision, recent works have revealed the vulnerability of deep neural systems to carefully crafted, human-imperceptible perturbations. We observe that the majority of adversarial attack methods generate perturbations solely by transforming or modifying the original inputs. Because they disregard characteristics contributed by additional sources, such approaches can only produce properties comparable to those already present in the initial inputs. This motivates us to exploit information beyond the original inputs to deliver adversarial features. To this end, we introduce a simple yet adaptable adversarial attack strategy, the Feature-Guided Method (FGM), for crafting adversarial examples (AEs) in the segmentation domain. FGM first generates multi-source patterns that are distinct from the original inputs. It then combines these patterns with the original data to produce diverse features that serve as the perturbation components. Finally, FGM blends the original inputs with the generated features under a defined norm constraint to form the adversarial examples. In this way, FGM preserves the original class-general characteristics while enriching new class-specific diversities during the attack. Moreover, FGM applies an adaptive gradient-based strategy to the generated information, which lowers the risk of falling into local optima when searching for the decision boundaries of the source and target models in latent space. We conduct detailed experiments comparing the proposed method to baselines on public segmentation models. The results show that FGM fools both source and target segmentation systems more effectively, leading the baselines by margins of over 5% on mIoU, mRec, and mAcc. We also deploy adversarial training with the proposed method and with PGD on widely used models.
Our approach improves the robustness of adversarially trained FCN, PSPNet, and DeepLabv3 models with various backbones by significant margins, with over 13% improvement on mIoU and 12% on mRec, indicating that deploying such mechanisms benefits robust deep neural segmentation models.
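The attack loop described above — blending auxiliary features into the input, then taking a signed gradient step projected back into a norm ball — can be illustrated with a minimal sketch. All names here (`blend`, `clip_eps`, `fgm_step`) and the choice of an L-infinity constraint with a fixed blend weight are illustrative assumptions, not the authors' actual implementation; a real attack would obtain `grad` from the segmentation loss of the source model.

```python
def clip_eps(delta, eps):
    """Project a perturbation onto the L-infinity ball of radius eps."""
    return [max(-eps, min(eps, d)) for d in delta]

def blend(x, feature, alpha):
    """Mix the original input with an auxiliary (multi-source) feature."""
    return [(1 - alpha) * xi + alpha * fi for xi, fi in zip(x, feature)]

def fgm_step(x, feature, grad, alpha=0.1, step=0.5, eps=0.03):
    """One illustrative attack iteration: blend in auxiliary features,
    take a signed gradient step, and project the total perturbation
    back into the eps-ball around the original input x."""
    sign = lambda g: (g > 0) - (g < 0)
    mixed = blend(x, feature, alpha)
    candidate = [m + step * sign(g) for m, g in zip(mixed, grad)]
    delta = clip_eps([c - xi for c, xi in zip(candidate, x)], eps)
    return [xi + di for xi, di in zip(x, delta)]

# Toy example with hypothetical values (flattened pixel vectors).
x = [0.2, 0.5, 0.8]
feature = [0.9, 0.1, 0.4]
grad = [1.0, -2.0, 0.5]
adv = fgm_step(x, feature, grad)
```

In an iterative variant, `fgm_step` would be applied repeatedly with `adv` fed back in, with `step` adapted across iterations as in adaptive gradient-based attacks; the projection guarantees the final example stays within the stated norm constraint of the clean input.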