The University of Victoria (UVic) operates an Infrastructure-as-a-Service scientific cloud for Canadian researchers, as well as a Tier 2 site for the ATLAS experiment at CERN as part of the Worldwide LHC Computing Grid (WLCG). Initially these were two separate systems, but over time we have taken steps to migrate the Tier 2 grid services to the cloud. This process has been greatly facilitated by basing our approach on Kubernetes, a versatile, robust, and widely adopted platform for orchestrating containerized applications. Previous work exploited the batch capabilities of Kubernetes to run grid computing jobs and replace the conventional grid computing elements by interfacing with the Harvester workload management system of the ATLAS experiment. However, the required functionality of a Tier 2 site encompasses more than batch computing. Likewise, the capabilities of Kubernetes extend far beyond running batch jobs and include, for example, scheduling recurring tasks and resiliently hosting long-running, externally accessible services. We are now undertaking the more complex and challenging endeavour of adapting and migrating all remaining services of the Tier 2 site, such as APEL accounting, Squid caching proxies, and in particular the grid storage element, to cloud-native deployments on Kubernetes. Our aim is to enable the deployment of a complete ATLAS Tier 2 site on a Kubernetes cluster via Helm charts, which will benefit the community by providing a streamlined and reproducible way to install and configure an ATLAS site. We also describe our experience running a high-performance, self-managed Kubernetes ATLAS Tier 2 cluster at the scale of 8 000 CPU cores over the last two years, and compare it with the conventional setup of grid services.
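As an illustration of the recurring-task capability mentioned above, the following minimal sketch uses the official Kubernetes Python client to register a CronJob that could periodically publish APEL accounting records. The container image, namespace, and schedule are hypothetical placeholders and do not describe the actual deployment presented in this paper.

```python
# Minimal sketch: register a Kubernetes CronJob for a recurring grid task
# (e.g. publishing APEL accounting records). Image, namespace and schedule
# are illustrative placeholders, not the production configuration.
from kubernetes import client, config


def create_apel_cronjob(namespace: str = "grid-services") -> None:
    # Load credentials from the local kubeconfig; a pod running inside the
    # cluster would use config.load_incluster_config() instead.
    config.load_kube_config()

    container = client.V1Container(
        name="apel-publisher",
        image="example.org/apel-ssm:latest",  # hypothetical image
        command=["ssmsend"],                  # APEL SSM sender command
    )

    job_template = client.V1JobTemplateSpec(
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    containers=[container],
                    restart_policy="OnFailure",
                )
            )
        )
    )

    cronjob = client.V1CronJob(
        metadata=client.V1ObjectMeta(name="apel-publish"),
        spec=client.V1CronJobSpec(
            schedule="0 3 * * *",         # once per day at 03:00
            job_template=job_template,
            concurrency_policy="Forbid",  # never run overlapping publishes
        ),
    )

    client.BatchV1Api().create_namespaced_cron_job(
        namespace=namespace, body=cronjob
    )


if __name__ == "__main__":
    create_apel_cronjob()
```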