While Monte Carlo Neutron Transport (MCNT) is nearly embarrassingly parallel, the effectively unpredictable lifetimes of neutrons can lead to divergence when MCNT is evaluated on GPUs. Divergence is the phenomenon of adjacent threads in a warp executing different control-flow paths; on GPUs, it reduces performance because each warp may only execute one path at a time. Thread Data Remapping (TDR) resolves these discrepancies by moving data across hardware so that data in the same warp is processed through similar paths. A common limitation of prior TDR implementations is the synchronous nature of their remapping and processing cycles, which exhaustively sort the data produced by each processing pass and then exhaustively evaluate the sorted data. In prior work, we defined a method of remapping data through an asynchronous scheduler, which allows work to be stored in shared memory and deferred arbitrarily until it becomes a viable candidate for low-divergence evaluation. This article surveys a wider set of cases, with the goal of characterizing performance trends across a more comprehensive set of parameters: scattering, capture, and fission cross sections; the use of implicit capture; source neutron counts; simulation time spans; and tuned memory allocations. Across these cases, we record minimum and average execution times, as well as heuristically tuned, near-optimal memory allocation sizes for both synchronous and asynchronous scheduling. The collected data show that the asynchronous method is faster and more memory-efficient in the majority of cases, and that it requires less tuning to achieve competitive performance.
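To make the synchronous remap-then-process cycle concrete, the following is a minimal CUDA sketch of that baseline: pending neutron histories are exhaustively sorted by the event they will undergo next, so adjacent threads in a warp take the same branch in the subsequent processing pass. All names here (`Neutron`, `Event`, `process_neutrons`) are illustrative assumptions rather than the implementation evaluated in this article, and the `thrust::sort_by_key` call stands in for whatever remapping pass a given TDR scheme uses.

```cuda
// Hypothetical sketch of synchronous thread data remapping (TDR).
// Assumed names: Neutron, Event, process_neutrons. Not the authors' code.
#include <thrust/device_vector.h>
#include <thrust/sort.h>

enum Event { EVT_SCATTER = 0, EVT_CAPTURE = 1, EVT_FISSION = 2 };

struct Neutron {
    float x, y, z;   // position
    float u, v, w;   // direction cosines
    float energy;
};

__global__ void process_neutrons(const int* events, Neutron* neutrons, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // After remapping, threads in the same warp tend to see the same
    // event type, so only one branch executes per warp.
    switch (events[i]) {
        case EVT_SCATTER: /* sample new direction and energy */ break;
        case EVT_CAPTURE: /* terminate this history */          break;
        case EVT_FISSION: /* bank secondary neutrons */         break;
    }
}

int main() {
    const int n = 1 << 20;
    thrust::device_vector<int>     events(n);    // next-event key per neutron
    thrust::device_vector<Neutron> neutrons(n);  // filled by the previous pass

    // Synchronous TDR: exhaustively sort all pending work by event key,
    // then exhaustively evaluate the sorted data in one pass.
    thrust::sort_by_key(events.begin(), events.end(), neutrons.begin());

    process_neutrons<<<(n + 255) / 256, 256>>>(
        thrust::raw_pointer_cast(events.data()),
        thrust::raw_pointer_cast(neutrons.data()), n);
    cudaDeviceSynchronize();
    return 0;
}
```

The asynchronous scheduler studied here differs from this sketch in that work need not pass through a global sort each cycle; it can instead be held in shared memory and deferred until enough same-path work has accumulated for low-divergence evaluation.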