High Performance Computing (HPC) is a key infrastructure for solving large-scale scientific problems, from weather forecasting to quantum simulations. Scheduling jobs on HPC infrastructures is complex due to their scale, the diverse behaviors of their users, and the multiple objectives to balance, from performance to ecological impact. Schedulers are evaluated through data center simulations, because evaluating them in situ is complex and costly. One key element of this evaluation is the behavioral model of users. Most studies are limited to replaying past workloads of existing data centers. This reduces the realism of performance evaluation whenever the scheduler or the hardware infrastructure is not exactly the same, since any such change would potentially alter user behavior.

In this article we introduce a novel model, "Replay with Feedback", that accounts for the impact of HPC system performance on user submission behavior in simulations. Instead of keeping the original timestamps of job submissions, we exhibit and exploit the relationships between each user's jobs. We provide an open-source implementation of this model along with an extensive and reproducible set of experiments assessing the impact of scheduler and infrastructure changes. We also propose new metrics adapted to the flexibility of user submission behavior. Results show that, using this model, we advance toward more realistic simulations of schedulers in HPC systems.
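The core idea of replaying with feedback can be sketched as follows. This is an illustrative sketch only, not the paper's actual implementation: the function and field names are assumptions, the "think time" between a job's completion and the user's next submission is one plausible way to encode inter-job relationships, and the original completion time is approximated as submission plus runtime (ignoring the original queue wait).

```python
from dataclasses import dataclass

@dataclass
class Job:
    user: str
    submit: float   # submission timestamp in the original trace
    runtime: float  # job duration

def replay_with_feedback_sketch(trace, simulated_finish):
    """Re-derive submission times from inter-job dependencies instead of
    replaying the original timestamps.

    `trace` is one user's jobs in chronological order;
    `simulated_finish(job, submit_time)` is a stand-in for the simulator,
    returning the job's completion time under the evaluated scheduler.
    """
    new_submits = []
    prev_orig_finish = None  # completion time in the original trace (approx.)
    prev_sim_finish = None   # completion time in the simulation
    for job in trace:
        if prev_orig_finish is None:
            submit = job.submit  # first job: keep the original timestamp
        else:
            # "Think time": how long the user waited after the previous
            # job finished before submitting the next one, clamped at 0
            # when the jobs overlapped in the original trace.
            think = max(0.0, job.submit - prev_orig_finish)
            submit = prev_sim_finish + think
        new_submits.append(submit)
        # Assumption: approximate the original finish as submit + runtime.
        prev_orig_finish = job.submit + job.runtime
        prev_sim_finish = simulated_finish(job, submit)
    return new_submits
```

With a dummy simulator that adds a fixed 3-unit queue wait, a user who originally submitted at t=0 (runtime 10) and t=12 now resubmits the second job 2 units after its first job's simulated completion at t=13, i.e. at t=15, rather than at the fixed timestamp t=12.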