This study benchmarks the code generation capabilities of Llama 3 70B, a 70-billion-parameter large language model (LLM). To train and fine-tune a model of this scale effectively, we combine PyTorch Fully Sharded Data Parallel (FSDP) [1], [2] for distributed training with Quantized Low-Rank Adaptation (Q-LoRA) [7] for parameter-efficient fine-tuning. We address the challenges of distributed training, including communication overhead and synchronization complexity, through optimization strategies such as gradient accumulation, optimizer state sharding, and mixed-precision training. In addition, we employ advanced training techniques, including curriculum learning, dynamic batch sizing, and adaptive optimization algorithms, to improve model performance and training efficiency. Our primary focus is evaluating the fine-tuned Llama 3 70B model on two widely recognized code generation benchmarks: HumanEval [8] and MBPP [9]. HumanEval assesses the model's ability to translate natural language problem descriptions into functionally correct code, while MBPP evaluates its ability to generate correct Python solutions to short, mostly entry-level programming problems. We report detailed results on both benchmarks and analyze the model's strengths and limitations across a range of code generation scenarios. We further analyze how our training and fine-tuning choices affect scalability, memory efficiency, and training speed, demonstrating the feasibility and efficiency of our approach. This benchmark study offers valuable insights for researchers and practitioners applying LLMs to code generation: it provides a comprehensive evaluation of Llama 3 70B's capabilities, sheds light on the effectiveness of the training and fine-tuning techniques used, and underscores the importance of rigorous benchmark evaluation in driving progress within this rapidly evolving field.
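To illustrate the low-rank adaptation idea underlying Q-LoRA, the following NumPy sketch shows a frozen base weight augmented with a trainable low-rank update scaled by alpha / r. All dimensions, variable names, and initializations here are illustrative assumptions, not the study's actual configuration (in Q-LoRA proper, the base weight would additionally be stored in 4-bit quantized form).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16  # hypothetical: hidden size, LoRA rank, LoRA scaling

W = rng.standard_normal((d, d))          # frozen base weight (notionally quantized)
A = rng.standard_normal((r, d)) * 0.01   # trainable "down" projection
B = np.zeros((d, r))                     # trainable "up" projection, zero-initialized
x = rng.standard_normal((3, d))          # a toy batch of activations

def lora_forward(x, W, A, B, alpha, r):
    # Frozen base path plus low-rank adapter path, scaled by alpha / r.
    # Only A and B receive gradients during fine-tuning; W stays fixed.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)
```

Zero-initializing B makes the adapter an exact no-op at the start of training, so fine-tuning departs smoothly from the base model's behavior.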
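Gradient accumulation, one of the optimizations mentioned above, can be sketched framework-free: gradients from several micro-batches are summed (with per-micro-batch scaling) before a single parameter update, so the step matches a larger effective batch without holding it in memory at once. The toy scalar objective and hyperparameters below are hypothetical, not drawn from the study.

```python
# Toy objective: mean of (w - t)^2 over targets; gradient per example is 2 * (w - t).
targets = [1.0, 2.0, 3.0, 4.0]
w, lr, accum_steps = 0.0, 0.1, 4

grad = 0.0
for i, t in enumerate(targets):
    # Scale each micro-batch gradient by 1 / accum_steps so the
    # accumulated gradient equals the full-batch mean gradient.
    grad += 2.0 * (w - t) / accum_steps
    if (i + 1) % accum_steps == 0:
        w -= lr * grad   # one optimizer step per accumulation window
        grad = 0.0
```

Here the single accumulated step moves w from 0.0 to 0.5, exactly the update a full batch of all four targets would produce.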
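Benchmarks such as HumanEval are conventionally scored with the unbiased pass@k estimator introduced alongside HumanEval [8]: generate n samples per problem, count the c that pass the unit tests, and estimate the probability that at least one of k drawn samples passes. A short sketch (the function name is ours):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).

    n: samples generated per problem, c: samples that passed, k: evaluation budget.
    """
    if n - c < k:
        return 1.0  # too few failures to fill k draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n = 2 samples of which c = 1 passed, pass@1 is 0.5, matching the intuition that a single random draw succeeds half the time.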