Summary

The Oak Ridge Leadership Computing Facility (OLCF) has been preparing the nation's first exascale system, Frontier, for production and end users. Frontier is based on HPE Cray's new EX architecture and Slingshot interconnect and features 74 cabinets of optimized 3rd Gen AMD EPYC CPUs for HPC and AI and AMD Instinct MI250X accelerators. As part of this preparation, "real-world" user codes have been selected to help assess the functionality, performance, and usability of the system. This article describes early experiences using the system in collaboration with the Hamburg Observatory for two selected codes, which have since been adopted in the OLCF test harness. Experiences discussed include efforts to resolve performance variability and per-cycle slowdowns. Results are shown for a performance-portable astrophysical magnetohydrodynamics code, AthenaPK, and a mini-application stressing the core functionality of a performance-portable block-structured adaptive mesh refinement framework, Parthenon-Hydro. These results show good scaling characteristics up to the full system. At the largest scale, the Parthenon-Hydro miniapp reaches a total of 1.7 × 10^13 zone-cycles/s on 9,216 nodes (73,728 logical GPUs) at 92% weak scaling parallel efficiency (starting from a single node and using a second-order, finite-volume method).
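For context, the weak scaling parallel efficiency quoted above can be expressed with a throughput-based convention; the following is a minimal sketch of that definition (the article may use an equivalent time-based form), where Z(N) denotes the aggregate zone-cycles/s measured on N nodes and the single-node run serves as the baseline:

% Throughput-based weak scaling parallel efficiency (one common convention;
% the symbols E and Z are introduced here for illustration only).
\[
  E(N) \;=\; \frac{Z(N)}{N \, Z(1)},
  \qquad
  E(9216) \approx 0.92 ,
\]
% i.e., the 9,216-node run delivers roughly 92% of the throughput that a
% perfect scale-up of the single-node result would provide.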