During the last five years, there have been significant developments in the field of Natural Language Processing including the deployment of advanced large language models such as ChatGPT, Bard and Llama. These large language models are helpful in generating text and designing content and they have several applications in various industries. However, they can memorize and reveal malicious content and personal information from their training dataset which also includes an enormous amount of data from the internet. As a result, it can lead to compromised privacy and security challenges for users who have their personal information available on the internet directly or through third parties. To address this issue, the proposed research work conducts a thorough investigation of these challenges and puts forward a prompt designing-based solution. In this method, we build a customized training dataset to fine-tune a pre-trained model (Llama-2) to produce a harmless response ‘I can’t provide you with this information’ to prompts seeking to extract personal information and malicious content from LLMs. Experimental results reveal that the proposed work achieves an accuracy of 63% with a precision score of 0.706 and a recall score of 0.571. The work ensures almost no leakage of private information and strengthens the LLM model against extraction attacks.