Integrating Robotic Process Automation (RPA) with Generative AI can revolutionize business processes, enabling greater efficiency, scalability, and intelligent decision-making. However, this powerful combination also introduces security and governance challenges that organizations must address to protect sensitive data and maintain trust in automation systems. As RPA bots increasingly interact with AI models, vulnerabilities such as unauthorized data access, malicious model manipulation, and improper handling of sensitive information become more pronounced, and can result in cyberattacks, data breaches, and regulatory compliance violations. This paper examines the security challenges inherent in integrating RPA with Generative AI, focusing on three key areas: data privacy, model integrity, and automation governance. We assess how improper configurations and insufficient security oversight can expose these systems to exploitation, and we explore mitigations such as robust encryption protocols, secure data access controls, and continuous monitoring of AI model behavior to detect anomalies. By presenting case studies and evaluating emerging best practices, we offer a framework for safeguarding RPA and AI systems, ensuring that automation remains a trusted and secure tool for organizations. The paper also discusses aligning security strategies with regulatory requirements and industry standards, enabling organizations to unlock the full potential of RPA and Generative AI while mitigating risks and protecting against evolving cyber threats.
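To make the idea of continuously monitoring AI model behavior concrete, the following is a minimal, illustrative sketch (not taken from the paper's framework): it flags model outputs whose length deviates sharply from the observed baseline, a simple statistical signal that could indicate manipulation or unexpected data exfiltration. The metric (response length) and the z-score threshold are assumptions chosen for illustration; a production monitor would track richer behavioral signals.

```python
import statistics

def flag_anomalies(metrics, threshold=2.5):
    """Return indices of metric values that deviate more than `threshold`
    population standard deviations from the mean of the batch."""
    mean = statistics.mean(metrics)
    stdev = statistics.pstdev(metrics)
    if stdev == 0:
        return []  # no variation observed, nothing to flag
    return [i for i, m in enumerate(metrics)
            if abs(m - mean) / stdev > threshold]

# Hypothetical response lengths from an AI model invoked by an RPA bot;
# one unusually long response stands out against the baseline.
lengths = [120, 115, 130, 125, 118, 122, 5000, 119, 127, 121]
print(flag_anomalies(lengths))  # flags the outlier at index 6
```

Flagged outputs would then be routed to a human reviewer or quarantined by the automation governance layer rather than acted on automatically.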