Vulnerabilities in source code are one of the main causes of potential threats in software-intensive systems. A large number of vulnerabilities are published each day, and effective vulnerability detection is critical to identifying and mitigating them. AI has emerged as a promising solution for enhancing vulnerability detection, offering the ability to analyse vast amounts of data and identify patterns indicative of potential threats. However, AI-based methods often face challenges, particularly when dealing with large datasets and capturing the specific context of the problem. Large Language Models (LLMs) are now widely used to tackle more complex tasks and handle large datasets, yet they exhibit limitations in explaining their outcomes, and existing works provide only an overview of explainability and transparency. This research introduces a novel transparency obligation practice for vulnerability detection using BERT-based LLMs. We address the black-box nature of LLMs by employing a combination of XAI techniques: SHAP, LIME, and attention heatmaps. We propose an architecture that combines the BERT model with transparency obligation practices, ensuring transparency throughout the entire LLM life cycle. An experiment on a large source code dataset demonstrates the applicability of the proposed approach. The results show an accuracy of 91.8% for vulnerability detection, and the model explanations are most strongly influenced by the tokens "vulnerable", "function", "mysql_tmpdir_list", and "strmov" under both the SHAP and LIME frameworks. A heatmap of attention weights highlights the local token interactions that aid in understanding the model's decision points.
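As an illustration of how such token-level explanations could be obtained for a BERT-based vulnerability classifier, the minimal sketch below applies SHAP, LIME, and attention extraction to a single code snippet. The checkpoint name, the example source line, and the class ordering are assumptions for demonstration only, not the exact experimental pipeline of this work.

```python
# Minimal sketch: explaining a BERT-based vulnerability classifier with SHAP and LIME.
# The checkpoint and example snippet below are placeholders, not the paper's artifacts.
import numpy as np
import shap
from lime.lime_text import LimeTextExplainer
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

checkpoint = "bert-base-uncased"  # placeholder; a fine-tuned vulnerability model would be used
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
clf = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)

code_sample = "strmov(buf, mysql_tmpdir_list.list[0]);"  # illustrative source-code line

# --- SHAP: token-level attributions via the text explainer ---
shap_explainer = shap.Explainer(clf)
shap_values = shap_explainer([code_sample])
print(shap_values.values[0])  # per-token contribution to each class

# --- LIME: perturbation-based local explanation ---
def predict_proba(texts):
    """Return an (n_samples, n_classes) probability matrix as LIME expects."""
    outputs = clf(list(texts))
    return np.array([
        [score["score"] for score in sorted(out, key=lambda s: s["label"])]
        for out in outputs
    ])

lime_explainer = LimeTextExplainer(class_names=["non-vulnerable", "vulnerable"])
lime_exp = lime_explainer.explain_instance(code_sample, predict_proba, num_features=8)
print(lime_exp.as_list())  # tokens ranked by local importance

# --- Attention heatmap (sketch): inspect raw attention weights ---
inputs = tokenizer(code_sample, return_tensors="pt")
attentions = model(**inputs, output_attentions=True).attentions  # tuple of (batch, heads, seq, seq) tensors
```

In this sketch, SHAP and LIME each rank the input tokens by their contribution to the predicted class, while the attention tensors can be plotted as a heatmap to inspect local token interactions.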