The execution of complex event processing (CEP) applications on a cluster of homogeneous computing nodes is latency-sensitive, especially when workload conditions change widely at runtime. To manage the varying workloads of nodes in a scalable and cost-effective manner, adjusting application parallelism at runtime is critical. To tackle this scalability challenge, we have extended an existing parallelization model called PARS, which supports only stateless CEP operators and runs operators in parallel regions without changing the number of computing nodes assigned to those regions. We have added new features to PARS to support stateful operators by introducing local controllers and new initiator and terminator event types, making partitioning fully transparent to application developers. We have proved the correctness of this extended model, called PARS+, with respect to its formal definition. We have then used PARS+ as the base parallelization model in formulating an adaptive strategy called ACEP that auto-scales operators, including stateful ones. Scaling decisions are governed by a predictive performance model that uses a control-theoretic method to estimate the resource and latency costs of each operator at runtime. The loads of the clustered computing nodes are monitored, and the nodes in a parallel region are reconfigured at runtime to balance the load across all nodes while incurring minimal cost to parallelize a stateful operator. ACEP minimizes network delays because it requires neither a shared state nor techniques that rely on state migration. We have built an event generator to simulate event sources and experimentally evaluate the ACEP strategy in terms of response time and resource costs. Two variant implementations of ACEP have been compared with the elastic strategy presented by Xiao et al. (2018); the findings demonstrate that our implementations adapt better across different resource- and response-time-sensitive scenarios, achieving a 6% lower response time and an 8% lower resource cost.