Fixed-model deployment in object detection wastes computation on easy frames and degrades performance on challenging ones. We present an uncertainty-guided adaptive model hopping system that dynamically switches among five YOLOv8 models (3.2M-68.2M parameters) based on per-frame tracking difficulty. Unlike prior approaches that rely on system-level indicators such as CPU load or bandwidth, our method uses detector confidence and lightweight uncertainty signals to estimate frame complexity and allocate compute accordingly. High-confidence frames are processed with lightweight models, while low-confidence or high-uncertainty detections trigger transitions to higher-capacity models. Bidirectional switching with hysteresis prevents oscillation, supporting escalation under challenging conditions and de-escalation when confidence recovers. Experiments on seven MOT17 sequences (4,746 frames) demonstrate a 58.2% reduction in computation relative to always using YOLOv8-XLarge while preserving 99.6% tracking success. The system achieves 28 FPS on an Intel NUC 14 Pro using OpenVINO, showing practical real-time edge deployment without model retraining or architectural modification.
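To make the switching policy concrete, the sketch below illustrates bidirectional model hopping with hysteresis as described above. The confidence thresholds, streak lengths, and the five-model ladder are hypothetical placeholders, not the paper's tuned values; the actual system also incorporates uncertainty signals beyond raw confidence.

```python
# Hypothetical sketch of confidence-driven model hopping with hysteresis.
# All thresholds and streak lengths are illustrative assumptions.

MODELS = ["yolov8n", "yolov8s", "yolov8m", "yolov8l", "yolov8x"]  # 3.2M -> 68.2M params


class ModelHopper:
    def __init__(self, conf_low=0.35, conf_high=0.60,
                 escalate_after=2, deescalate_after=10):
        self.level = 0                      # start with the lightest model
        self.conf_low = conf_low            # below this, the frame counts as "hard"
        self.conf_high = conf_high          # above this, the frame counts as "easy"
        self.escalate_after = escalate_after
        self.deescalate_after = deescalate_after
        self.hard_streak = 0
        self.easy_streak = 0

    def update(self, frame_confidence):
        """Consume one frame's detection confidence; return the model for the next frame."""
        if frame_confidence < self.conf_low:
            self.hard_streak += 1
            self.easy_streak = 0
        elif frame_confidence > self.conf_high:
            self.easy_streak += 1
            self.hard_streak = 0
        else:
            # Dead band between the two thresholds: hysteresis, no switching.
            self.hard_streak = self.easy_streak = 0

        # Escalate quickly on sustained difficulty, de-escalate slowly on recovery.
        if self.hard_streak >= self.escalate_after and self.level < len(MODELS) - 1:
            self.level += 1
            self.hard_streak = 0
        elif self.easy_streak >= self.deescalate_after and self.level > 0:
            self.level -= 1
            self.easy_streak = 0
        return MODELS[self.level]
```

The asymmetric streak lengths encode the hysteresis: escalation is cheap to trigger (missed detections are costly), while de-escalation requires a sustained run of easy frames so the controller does not oscillate between adjacent models.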