Virtualized data centers usually consist of heterogeneous servers with different performance specifications. Although such data centers usually contain unused heterogeneous servers, conventional DVFS (Dynamic Voltage and Frequency Scaling)-based DTM (Dynamic Thermal Management) techniques do not exploit these unused servers to cool down hot servers. In this paper, we propose a novel DTM technique that adaptively exploits external computing resources (unused servers with different performance) as well as internal computing resources (unused CPU cores within a server) available in heterogeneous data centers. We also propose to consider server locations when migrating VMs (Virtual Machines) among servers in a rack, since heat conduction gives locations a large impact on on-chip temperatures and performance: when VMs run on the two closest servers in a rack, the servers' ambient temperature is up to 6.2°C higher than when the VMs run on the two farthest servers, so on-chip temperatures rise more quickly and more frequent thermal throttling degrades performance by up to 13.5%. When the temperature of a CPU core in a server exceeds a pre-defined thermal threshold, our proposed technique estimates the performance impact of VM migrations (e.g., the degradation caused by physical machine migrations and/or core migrations of VMs). Depending on this estimated performance impact, our technique adaptively employs one of three methods: (1) migrating a VM to a distant server with different performance, (2) migrating VMs among CPU cores within the server, or (3) applying DVFS. In our experiments, our proposed technique improves performance by 15.1% and reduces system-wide EDP (Energy-Delay Product) by 22.9%, on average, compared to a state-of-the-art DVFS-based DTM technique, while satisfying thermal constraints.
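
Conceptually, the adaptive decision flow described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: all names (e.g., handle_hot_core, pick_distant_idle_server), the threshold and slowdown-budget values, and the cost models are hypothetical placeholders; the abstract only specifies that migration overhead is estimated and that the three methods are chosen adaptively.

```python
# Minimal sketch of the adaptive DTM decision flow. All identifiers,
# constants, and cost models below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

THERMAL_THRESHOLD_C = 85.0   # assumed per-core thermal trigger (hypothetical)
ACCEPTABLE_SLOWDOWN = 0.05   # assumed budget: tolerate <= 5% migration overhead

@dataclass
class Server:
    server_id: int
    rack_slot: int        # position in the rack, used as a distance proxy
    idle_cores: int
    perf_factor: float    # relative performance of this heterogeneous server

@dataclass
class Vm:
    vm_id: int
    server: Server
    core_id: int

def estimate_pm_migration_slowdown(vm: Vm, target: Server) -> float:
    """Rough cost of migrating the VM to another physical machine:
    a fixed copy/downtime overhead plus any loss from a slower server."""
    copy_overhead = 0.03  # assumed fixed migration overhead
    perf_loss = max(0.0, 1.0 - target.perf_factor / vm.server.perf_factor)
    return copy_overhead + perf_loss

def estimate_core_migration_slowdown(vm: Vm) -> float:
    """Rough cost of moving the VM to a cooler core on the same server
    (cache warm-up, scheduler overhead)."""
    return 0.01  # assumed small constant

def pick_distant_idle_server(vm: Vm, servers: list[Server]) -> Optional[Server]:
    """Prefer the idle server farthest from the hot one in the rack, so
    the two machines heat each other's ambient air as little as possible."""
    candidates = [s for s in servers
                  if s.idle_cores > 0 and s.server_id != vm.server.server_id]
    if not candidates:
        return None
    return max(candidates, key=lambda s: abs(s.rack_slot - vm.server.rack_slot))

def handle_hot_core(vm: Vm, core_temp_c: float, servers: list[Server],
                    server_has_idle_core: bool) -> str:
    """Choose among (1) physical machine migration, (2) core migration,
    and (3) DVFS once a core exceeds the thermal threshold."""
    if core_temp_c <= THERMAL_THRESHOLD_C:
        return "no-op"
    options = []
    target = pick_distant_idle_server(vm, servers)
    if target is not None:
        options.append((f"migrate-to-server-{target.server_id}",
                        estimate_pm_migration_slowdown(vm, target)))
    if server_has_idle_core:
        options.append(("core-migration", estimate_core_migration_slowdown(vm)))
    # Take the cheapest migration if it stays within the slowdown budget;
    # otherwise fall back to conventional DVFS-based throttling.
    if options:
        action, cost = min(options, key=lambda o: o[1])
        if cost <= ACCEPTABLE_SLOWDOWN:
            return action
    return "dvfs"
```

The farthest-idle-server preference in the sketch mirrors the observation that co-locating hot workloads raises ambient temperature by up to 6.2°C, and the budget check mirrors the adaptive fallback to DVFS when estimated migration overhead would outweigh its thermal benefit.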