Overloading conditions and contingencies put modern power systems at risk of voltage collapse and blackouts. Load shedding is crucial for maintaining voltage stability in grid emergency control. However, existing rule- or model-based schemes rely on accurate dynamic system models and face considerable challenges in adapting to varying operating conditions and uncertain events. To address these issues, this paper proposes a novel deep reinforcement learning (DRL)-based voltage stability control algorithm with automatic entropy adjustment (AEA) for grid emergency control. Dynamic network components involved in complex system operations are modeled to construct the DRL environment. An off-policy soft actor-critic architecture is developed to maximize the expected reward and the policy entropy simultaneously. The AEA mechanism facilitates the maximum-entropy policy optimization, enabling the method to automatically provide effective discrete and continuous actions against various fault scenarios. Our approach achieves high sampling efficiency, scalability, and adaptivity of the control policies under high uncertainty. Comparative studies with existing DRL-based control methods on IEEE benchmark systems demonstrate marked performance improvements of the proposed method for dynamic system emergency control.
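For context, the following is a sketch of the standard soft actor-critic maximum-entropy objective and the automatic temperature loss that automatic entropy adjustment mechanisms of this kind typically build on; the notation ($\alpha$, $\mathcal{H}$, $\bar{\mathcal{H}}$) is the conventional one and the exact formulation used in this paper may differ:

\begin{align}
  J(\pi) &= \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
            \Big[ r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big], \\
  J(\alpha) &= \mathbb{E}_{a_t \sim \pi}
            \Big[ -\alpha \log \pi(a_t \mid s_t) - \alpha\, \bar{\mathcal{H}} \Big],
\end{align}

where $\alpha$ is the temperature (entropy coefficient), $\mathcal{H}$ the policy entropy, and $\bar{\mathcal{H}}$ a target entropy; updating $\alpha$ by gradient descent on $J(\alpha)$ is what allows the entropy weight to be adjusted automatically rather than hand-tuned.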