Robotic ultrasound systems (RUS) have attracted increasing attention because they can automate repetitive procedures and relieve operators' workload. However, the complexity and uncertainty of the human body surface pose a challenge for stable scanning control. This paper proposes a general active compliance control strategy based on inverse reinforcement learning (IRL) to perform adaptable scanning in uncertain and unstructured environments. We analyze the pattern of the manual scanning process and propose a velocity- and force-related control strategy that achieves variable force control and handles unpredictable deformation. A hybrid policy optimization framework is then proposed to improve transferability. In this framework, a reinforcement learning (RL) policy with a predefined reward is first trained to establish the relationship between contact force and probe posture. The policy is then re-optimized with IRL, using generated demonstrations as the IRL training data. The policy is trained on simple standard phantoms and further evaluated for stability and transferability in unseen and complex environments. Quantitative results show that the posture difference between the proposed method and the 3-D reconstructed model is (2.3±1.3°, 1.9±1.2°) in continuous scans. Overall, our method offers a solution for improving the usability of robotic ultrasound systems in real-world environments.
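To make the two-stage structure of the hybrid framework concrete, the following is a minimal Python sketch, not the paper's implementation: a toy contact environment and an evolutionary policy update stand in for the actual simulator and RL optimizer, and an apprenticeship-style feature-matching step stands in for the paper's IRL algorithm. All function names, feature definitions, and dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(state, action):
    # Hand-picked features linking contact-force error and posture error.
    force_err, tilt_err = state
    return np.array([-abs(force_err + action[0]),   # force tracking
                     -abs(tilt_err + action[1]),    # posture tracking
                     -np.linalg.norm(action)])      # smoothness penalty

def predefined_reward(state, action):
    # Stage 1: predefined reward with fixed weights.
    return features(state, action) @ np.array([1.0, 1.0, 0.1])

def rollout(policy_w, reward_fn, horizon=20):
    # Toy contact dynamics: errors decay under corrective actions.
    state = rng.normal(0.0, 1.0, size=2)
    feats, total = np.zeros(3), 0.0
    for _ in range(horizon):
        action = policy_w @ state + rng.normal(0.0, 0.05, size=2)
        total += reward_fn(state, action)
        feats += features(state, action)
        state = 0.8 * state + 0.5 * action
    return total, feats

def train_policy(reward_fn, iters=200, pop=16, sigma=0.1):
    # Simple evolutionary search standing in for the RL optimizer.
    w = np.zeros((2, 2))
    for _ in range(iters):
        noise = rng.normal(0.0, sigma, size=(pop, 2, 2))
        returns = np.array([rollout(w + n, reward_fn)[0] for n in noise])
        w += 0.01 / (pop * sigma) * np.tensordot(
            returns - returns.mean(), noise, axes=1)
    return w

# Stage 1: RL policy trained with the predefined reward.
w_policy = train_policy(predefined_reward)

# Generated demonstrations, summarized as feature expectations
# (here produced by a hand-tuned near-optimal policy).
demo_feats = np.mean([rollout(np.array([[-0.6, 0.0], [0.0, -0.6]]),
                              predefined_reward)[1] for _ in range(50)], axis=0)

# Stage 2: IRL re-optimization via feature matching: move the linear
# reward weights toward the demonstrations, then retrain the policy.
theta = np.ones(3)
for _ in range(10):
    policy_feats = np.mean(
        [rollout(w_policy, lambda s, a: features(s, a) @ theta)[1]
         for _ in range(50)], axis=0)
    theta += 0.1 * (demo_feats - policy_feats)
    w_policy = train_policy(lambda s, a: features(s, a) @ theta)
```

The key design point mirrored here is that the predefined reward bootstraps a usable force-posture policy on standard phantoms, while the IRL stage replaces the fixed weights with demonstration-derived ones, which is what is intended to carry over to unseen surfaces.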