Increasing power density (due to faster clocks and higher device integration density), coupled with the limited cooling capacity of the package, causes die overheating and leads to reliability concerns. Conventional system-level thermal management techniques (e.g., voltage scaling and throttling) typically increase design complexity and degrade performance. In this paper, we present a methodology to mitigate temperature-induced reliability problems by transferring the heat dissipated in a region of high activity (such as the ALU in a processor, which creates a localized “hotspot”) to regions of lower activity (such as the on-chip cache). We propose to use carbon nanotubes (CNTs) as “thermal interconnects” for on-die heat transfer, since CNTs have significantly higher thermal conductivity than typical heat-spreader materials (such as copper or aluminum). We note that the proposed heat-transfer framework is particularly well suited to thermal management in Silicon-on-Insulator (SOI) devices, which suffer from fine-grained thermal gradients. Simulation results indicate that using CNTs for heat conduction from a hotspot to a region of lower activity (which we denote as a “coolspot”) achieves a 13% (16°C) decrease in temperature at the hotspot with only a 3% (1.5°C) increase in temperature at the coolspot of an Alpha microprocessor model.
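
To give a rough intuition for why a CNT can serve as a thermal interconnect, the sketch below applies one-dimensional steady-state Fourier conduction with nominal, assumed material values (a CNT bundle conductivity on the order of $3000\,\mathrm{W/(m\,K)}$ versus roughly $400\,\mathrm{W/(m\,K)}$ for copper); these figures are illustrative assumptions and are not taken from the paper's simulation setup.

% Illustrative sketch: 1-D steady-state conduction (Fourier's law),
% assuming nominal conductivities k_CNT ~ 3000 W/(m K) and k_Cu ~ 400 W/(m K).
\begin{align}
  Q &= k \, A \, \frac{\Delta T}{L}
    && \text{heat conducted through cross-section } A \text{ over length } L, \\
  \frac{Q_{\mathrm{CNT}}}{Q_{\mathrm{Cu}}}
    &= \frac{k_{\mathrm{CNT}}}{k_{\mathrm{Cu}}}
     \approx \frac{3000}{400} \approx 7.5
    && \text{for the same geometry and temperature drop } \Delta T.
\end{align}

Under these assumed values, a CNT path of the same cross-section and length would move several times more heat from the hotspot toward the coolspot than a copper path would for the same temperature difference.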