Purpose: Medical image segmentation models are essential for efficient prostate cancer diagnosis. Current models for segmenting micro-ultrasound images do not leverage foundation models or the geometric and spatial information available in the ultrasound sequence. We aim to use the Segment Anything Model (SAM) as a pretrained backbone to build a fully automatic prostate segmentation model, and to investigate prompting with positional information such as the slice frame number and previous segmentations.

Methods: SAM was fine-tuned on micro-ultrasound images [1] and adapted to the context of clinical prostate segmentation. We explored different fine-tuning methods and prompting combinations suited to automatic segmentation. The Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95) were used as evaluation metrics to compare our model against prior models.

Results: Compared with state-of-the-art models, SliceTrack-SAM improved the mean Dice coefficient from 93.1 (MicroSegNet [2]) to 94.3, and reduced the 95% Hausdorff distance from 1.09 mm (nnU-Net) to 0.81 mm.

Conclusion: A fully fine-tuned SAM improves prostate segmentation accuracy. In addition, prompting with geometric and spatial data derived from ultrasound images shows potential to further enhance segmentation accuracy, particularly on difficult datasets.
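For reference, the two evaluation metrics named above can be computed as follows. This is a minimal NumPy sketch, not the paper's evaluation code; the function names and the point-set formulation of HD95 are illustrative assumptions (in practice HD95 is typically computed on mask boundary voxels with physical spacing applied).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two point
    sets, e.g. (N, d) arrays of segmentation boundary coordinates."""
    # Pairwise Euclidean distances between every point in A and B.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # For each point, distance to the nearest point in the other set;
    # take the 95th percentile instead of the max to suppress outliers.
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Using the 95th percentile rather than the maximum makes HD95 robust to a few stray boundary voxels, which is why it is preferred over the plain Hausdorff distance in segmentation benchmarks.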