Predicting Time Complexity of TensorFlow Lite Models
Abstract
The Internet of Things (IoT) is transforming global communication and data exchange, while edge computing reduces data transmission and improves security, adaptability, and cost-effectiveness. However, integrating computer vision into IoT systems is challenging because algorithms must be adapted to limited computational power and memory. This paper investigates the performance of deep vision models designed for low-power systems, focusing on inference time. In a comprehensive experiment, convolutional neural network (CNN) models with varying structures are evaluated with respect to their layer configurations and inference times. By systematically analyzing these components, the study derives a predictive formula that estimates inference time from the model architecture. The results reveal dependencies between CNN layer complexity and inference efficiency, guiding the choice of configurations for edge-device deployment. This analysis offers insights for designing efficient deep vision models tailored to low-power systems.
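The abstract describes fitting a predictive formula that maps architectural properties to inference time. As a minimal sketch of how such a fit might look, the snippet below performs an ordinary least-squares regression from per-model features to measured latencies. The feature choices (MACs, parameter count, depth) and all numeric values are illustrative assumptions, not the paper's actual data or formula.

```python
import numpy as np

# Hypothetical feature matrix: one row per CNN variant,
# columns = [total MACs (millions), parameters (millions), layer depth].
# These values are synthetic placeholders for illustration only.
X = np.array([
    [10.0,  0.5,  8],
    [50.0,  2.0, 16],
    [120.0, 5.5, 32],
    [300.0, 11.0, 50],
])

# Hypothetical measured inference times in milliseconds on an edge device.
y = np.array([4.1, 15.8, 37.2, 90.5])

# Append an intercept column and solve least squares:
# y ~ a*MACs + b*params + c*depth + d
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_ms(macs_m, params_m, depth):
    """Estimate inference time (ms) from architecture features."""
    return float(np.array([macs_m, params_m, depth, 1.0]) @ coef)
```

In practice, one would collect many timed runs per architecture on the target device and may need nonlinear terms, but a linear fit over complexity features is a common starting point for this kind of latency model.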
