This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong PUFs. These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial PUF complexity analyses, and are much easier and more efficient to apply: they do not require detailed knowledge of various ML methods, substantial computation times, or the availability of an internal parametric model of the studied PUF. Our metrics can also be standardized particularly easily. This avoids the sometimes inconclusive or contradictory findings of existing ML-based security tests, which can result from the use of different or non-optimized ML algorithms, differing hardware resources, or varying numbers of challenge-response pairs in the training phase. This first manuscript in the abovementioned sequence treats one of the conceptually most straightforward security metrics: it investigates the effects that small perturbations in the PUF challenges have on the resulting PUF responses. We first develop and implement several sub-metrics that realize this approach in practice. We then empirically show that these metrics have surprising predictive power, and compare our obtained test scores with the known real-world security of several popular Strong PUF designs, including (XOR) Arbiter PUFs, Feed-Forward Arbiter PUFs, and (XOR) Bistable Ring PUFs. Finally, our manuscript suggests techniques for representing the results of our metrics graphically, and for interpreting them in a meaningful manner.
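To make the perturbation idea concrete, the following is a minimal sketch (not one of the paper's actual sub-metrics) of a single-bit challenge-flip sensitivity test. It uses the well-known additive delay model of an Arbiter PUF as a stand-in target; the weight vector `w`, the sample sizes, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def arbiter_response(w, c):
    """Additive-delay model of an n-stage Arbiter PUF:
    response = sign(w . phi(c)), with the standard parity feature map."""
    # phi_i = prod_{j >= i} (1 - 2*c_j); append a constant 1 for the bias term
    phi = np.cumprod((1 - 2 * c)[::-1])[::-1]
    phi = np.append(phi, 1.0)
    return int(np.dot(w, phi) > 0)

def bit_flip_sensitivity(w, n, n_challenges=2000):
    """Fraction of single-bit challenge flips that change the response,
    averaged over random challenges and all bit positions."""
    flips, total = 0, 0
    for _ in range(n_challenges):
        c = rng.integers(0, 2, n)
        r = arbiter_response(w, c)
        for i in range(n):
            c2 = c.copy()
            c2[i] ^= 1  # perturb exactly one challenge bit
            flips += (arbiter_response(w, c2) != r)
            total += 1
    return flips / total

n = 32
w = rng.normal(size=n + 1)  # hypothetical stage-delay differences
s = bit_flip_sensitivity(w, n)
print(f"average single-bit flip sensitivity: {s:.3f}")
```

A sensitivity near 0.5 would indicate avalanche-like behavior, while values far from 0.5 reveal that some challenge bits barely influence the response, which is the kind of structural weakness such a metric is meant to expose.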