Heterogeneous memory systems promise both large storage capacity and high performance. Prior DRAM cache designs suffer from metadata scalability issues, low cache hit rates, low DRAM space utilization, and significant data migration overhead. We observe that instances of an object type exhibit stable and predictable memory access patterns across their constituent cachelines; we refer to these patterns as object fingerprints. We propose a hardware-assisted DRAM cache that manages DRAM at the object-type level and fetches data blocks at cacheline granularity by exploiting object fingerprints. To address its design challenges, we first present a software-hardware co-design that conveys software-level object information to the hardware. Second, we design multi-granularity sector caches that can be dynamically adjusted to adapt to changing access behaviors and improve DRAM cache utilization. To limit the resulting metadata storage overhead, we propose to bound the set of possible sizes for each sector cache. Experimental results show that our design improves the DRAM cache hit rate by 21.6%, boosts IPC by 19.8%, and reduces data migration traffic by 51.6% on average, compared with state-of-the-art DRAM caches. Moreover, our online object fingerprint learning method achieves IPC within 2.3% of the offline approach.