Centralized power management control systems are hitting a scalability limit. In particular, enforcing a power cap in a many-core system in a performance-friendly manner is quite challenging. Today's on-chip controller reduces the clock speed of compute domains in response to local or global power-limit alerts. However, this is opaque to the operating system (OS), which continues to request higher clock frequencies based on workload characteristics, acting against the centralized on-chip controller. To address these issues, we introduce TokenSmart, which implements a set of scalable, distributed frequency-control heuristics within the OS using a novel token-based mechanism. The number of system-allocated power tokens represents the maximum allowable power consumption, and the OS governor orchestrates a token-passing (or sharing) algorithm among the compute engines. An increase (decrease) in a unit's token allocation corresponds to an increase (decrease) in its clock frequency. The compute units are connected in a ring topology, allowing minimal metadata to be passed along with the token value to regulate the power budget. We explore different heuristics to assign tokens intelligently across the units. This results in efficient power regulation and sustained turbo frequencies over longer durations. Our proposed methodology can be implemented in hardware with multiple on-chip controllers, or in software where each set of cores acts as a compute unit. The methodology is currently implemented within the Linux kernel of a real IBM POWER9 many-core system and experimentally verified on real-world workloads such as Redis, Cassandra, and PostgreSQL, along with the rt-app micro-benchmark. Our experiments indicate increased throughput for all workloads, along with power savings.
For instance, results show a considerable throughput boost of about 4% for both the PostgreSQL and Redis benchmarks, with substantial power savings (18% and 37%, respectively). If the approach is implemented in hardware, our experimental analysis projects a throughput increase of up to 14% on the PostgreSQL benchmark.
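To make the token mechanism concrete, the following is a minimal sketch of the ring-based token-passing idea described above. All names and constants here (`TOTAL_TOKENS`, `BASE_FREQ_MHZ`, `MHZ_PER_TOKEN`, `pass_tokens`) are illustrative assumptions, not the paper's actual implementation or heuristics.

```python
# Hypothetical sketch: a global power cap expressed as a fixed token
# budget, circulated once around a ring of compute units. Constants
# are assumed for illustration, not taken from the paper.

TOTAL_TOKENS = 16      # global power cap expressed as a token budget
BASE_FREQ_MHZ = 2000   # frequency floor with zero tokens (assumed)
MHZ_PER_TOKEN = 100    # clock headroom granted per token (assumed)

def pass_tokens(demands, total_tokens=TOTAL_TOKENS):
    """One circulation of the token around the ring: each unit, in
    ring order, claims up to its demand from the remaining budget;
    idle units (demand 0) implicitly donate their share downstream."""
    allocation = []
    remaining = total_tokens
    for demand in demands:
        take = min(demand, remaining)
        allocation.append(take)
        remaining -= take
    return allocation

def frequency_mhz(tokens):
    """Map a unit's token count to its permitted clock frequency."""
    return BASE_FREQ_MHZ + tokens * MHZ_PER_TOKEN

# A busy unit behind idle peers absorbs their unused budget:
print(pass_tokens([0, 2, 14, 6]))   # [0, 2, 14, 0]
print(frequency_mhz(14))            # 3400
```

Note that this greedy single-pass allocation favors units earlier in the ring when demand exceeds the budget; the heuristics the paper explores for assigning tokens "smartly" would refine exactly this allocation step, e.g. by sharing proportionally to demand.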