Few-shot Semantic Segmentation (FSS) relies on densely labeled support samples, which makes sample annotation costly. This paper introduces a more challenging few-shot semantic segmentation task that uses only intra-class images as weak supervision. To address this task, we propose a Mask-free Iterative Refinement Network consisting of a mask generation module (MGM) and an iterative refinement module (IRM), which respectively address the two inherent challenges of locating the objects to be segmented and deriving class-specific features in the absence of support masks and semantic labels. MGM generates pseudo-masks for the support image without requiring any training; these pseudo-masks provide a rough estimate of object locations and serve as initial guidance for segmentation. IRM is designed to capture class-specific information that co-occurs between the support and query images and to progressively refine the predicted mask in a bootstrap manner under the guidance of this class-specific information. Experimental results on three FSS benchmarks, i.e., FSS-1000, PASCAL-5i, and COCO-20i, demonstrate that our proposed method achieves performance comparable to or competitive with existing zero-shot and mask-supervised segmentation methods.
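To make the bootstrap-style refinement described above concrete, the sketch below illustrates one plausible reading of prototype-guided iterative refinement. It is not the paper's actual MGM/IRM: the masked-average prototype, cosine-similarity scoring, sigmoid sharpening, and all names (masked_average_prototype, iterative_refinement, num_iters) are assumptions introduced purely for illustration.

```python
import torch
import torch.nn.functional as F


def masked_average_prototype(feat, mask):
    """Average a feature map over a (soft) mask to obtain a class prototype.

    feat: (C, H, W) feature map; mask: (H, W) with values in [0, 1].
    """
    weight = mask.flatten()              # (H*W,)
    feat_flat = feat.flatten(1)          # (C, H*W)
    return (feat_flat * weight).sum(1) / (weight.sum() + 1e-6)  # (C,)


def iterative_refinement(support_feat, support_pseudo_mask, query_feat, num_iters=3):
    """Hypothetical sketch of prototype-guided iterative refinement.

    Starts from the training-free support pseudo-mask, builds a class
    prototype, scores query locations by cosine similarity, and folds the
    current query estimate back into the prototype at each iteration.
    """
    proto = masked_average_prototype(support_feat, support_pseudo_mask)
    query_mask = None
    for _ in range(num_iters):
        # Similarity between the prototype and every query location.
        sim = F.cosine_similarity(query_feat, proto[:, None, None], dim=0)  # (H, W)
        query_mask = torch.sigmoid(10.0 * sim)  # soft foreground estimate
        # Bootstrap: re-estimate the prototype from the current query prediction.
        query_proto = masked_average_prototype(query_feat, query_mask)
        proto = 0.5 * (proto + query_proto)
    return query_mask


if __name__ == "__main__":
    # Toy usage with random features and a random pseudo-mask.
    C, H, W = 64, 32, 32
    s_feat, q_feat = torch.randn(C, H, W), torch.randn(C, H, W)
    s_mask = (torch.rand(H, W) > 0.7).float()
    pred = iterative_refinement(s_feat, s_mask, q_feat)
    print(pred.shape)  # torch.Size([32, 32])
```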