As the 2020 roadblock steadily approaches, the need for breakthroughs in computing systems has directed researchers toward novel computing paradigms. Delayed feedback reservoir (DFR) computing, a recently emerged reservoir computing model, uses only a single nonlinear neuron together with a delay loop, forming a ring topology. A DFR not only eases hardware implementation; the delay inherent in the system also produces rich intrinsic dynamics that enhance its computational capability and enable near-optimal performance. Meanwhile, deep learning has attracted worldwide attention because its hierarchical architecture allows more efficient performance than a shallow structure. Alongside our analog hardware implementation of the DFR, we investigate the possibility of merging deep learning with DFR computing systems. In our evaluation, the deep DFR models perform 6.7%-12.5% better than a shallow leaky ESN model. Due to its different architecture, training the MI-deepDFR takes approximately 21% longer than training the deepDFR. Our approach shows great potential and promise for realizing analog hardware implementations of deep DFR systems.
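The single-neuron-plus-delay-loop structure described above can be illustrated with a minimal software sketch. The code below is a hypothetical, simplified DFR: virtual nodes along the delay line are time-multiplexed, each input sample is spread over the loop by a random binary mask, and a single tanh nonlinearity plays the role of the one physical neuron. All parameter names and values (`n_virtual`, `eta`, `gamma`) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def dfr_reservoir(inputs, n_virtual=50, eta=0.5, gamma=0.05, seed=0):
    """Sketch of a delayed feedback reservoir (assumed parameters).

    One nonlinear node (tanh) plus a delay loop holding n_virtual
    time-multiplexed 'virtual nodes' arranged in a ring.
    """
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)  # random input mask
    state = np.zeros(n_virtual)                     # delay-line contents
    states = []
    for u in inputs:
        j = gamma * mask * u                        # masked, scaled input
        # each virtual node is driven by its own delayed value plus input,
        # passed through the single nonlinear neuron
        state = np.tanh(eta * state + j)
        states.append(state.copy())
    return np.array(states)   # shape: (len(inputs), n_virtual)

# collect reservoir states for a toy sine input; in practice a linear
# readout (e.g. ridge regression) would then be trained on these states
X = dfr_reservoir(np.sin(np.linspace(0.0, 6.28, 100)))
```

A deep DFR, as studied in this work, would stack several such reservoir layers so that the states of one layer feed the next, analogous to the hierarchical architecture of deep learning.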