Emerging unmanned aerial vehicles (UAVs), such as quadcopters, offer a reliable, controllable, and flexible way of ferrying information from energy-harvesting-powered IoT devices in remote areas to IoT edge servers. Nonetheless, the deployment of UAVs faces a major challenge: their flight range is limited by the need for recharging, and when charging stations are located far from the monitored area, energy is used inefficiently. To mitigate this challenge, we propose placing multiple charging stations in the field, each equipped with a powerful energy harvester and acting as a cluster head that collects data from the sensor nodes under its jurisdiction. In this way, the UAV can remain in the field continuously and collect data while charging. However, the intermittent and unpredictable nature of energy harvesting can render the information stored at the cluster heads stale or even obsolete. To tackle this issue, we propose a Deep Reinforcement Learning (DRL) based path planning scheme for the UAV. The DRL agent gathers global information from the UAV to update its input environmental state and outputs the location of the UAV's next stop so as to optimize the age of information (AoI) of the whole network. Experiments show that the proposed Double Deep Q-Network (DDQN) reliably reduces the AoI by 3.7\% compared with baseline techniques.
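To make the DDQN component concrete, the sketch below shows the standard double-DQN target computation in the setting the abstract describes: the state is assumed to be a vector of per-cluster-head AoI values plus the UAV position, and each discrete action is a candidate charging-station stop. This is a minimal illustration, not the paper's implementation; all names (`QNet`, `ddqn_target`, `state_dim`, `n_stops`) and the reward choice (negative network-wide AoI) are assumptions for the example.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps the environmental state (assumed here: per-cluster-head AoI
    values concatenated with the UAV position) to one Q-value per
    candidate next stop (charging station)."""
    def __init__(self, state_dim: int, n_stops: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_stops),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def ddqn_target(online: QNet, target: QNet,
                reward: torch.Tensor, next_state: torch.Tensor,
                done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online network selects the next stop and
    the frozen target network evaluates it, which curbs the Q-value
    overestimation of vanilla DQN."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=1, keepdim=True)    # action selection
        q_next = target(next_state).gather(1, best).squeeze(1)   # action evaluation
        return reward + gamma * (1.0 - done) * q_next

# Illustrative training step on a sampled batch; reward is assumed to be
# the negative network-wide AoI after the visit, so minimizing the loss
# steers the UAV toward stops that keep information fresh.
state_dim, n_stops, batch = 12, 4, 32
online, target = QNet(state_dim, n_stops), QNet(state_dim, n_stops)
target.load_state_dict(online.state_dict())

s = torch.randn(batch, state_dim)          # placeholder batch from a replay buffer
a = torch.randint(0, n_stops, (batch, 1))
r = -torch.rand(batch)                     # negative AoI as reward (assumption)
s2 = torch.randn(batch, state_dim)
d = torch.zeros(batch)

q_sa = online(s).gather(1, a).squeeze(1)
loss = nn.functional.mse_loss(q_sa, ddqn_target(online, target, r, s2, d))
loss.backward()
```

In practice the target network's weights would be synchronized with the online network only every fixed number of steps; that periodic copy, together with the selection/evaluation split above, is what distinguishes DDQN from plain DQN.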