Influence of Polynomial Approximation Degree on Implicit Network Regularization for Impedance Signal Reconstruction
Abstract
A scanning electrochemical microscope (SECM) combined with artificial intelligence can generate an activity image of a sample from approach curves measured at only a few points, reducing measurement time and allowing sample activity to be estimated at arbitrary points of interest. For convolutional and deep neural networks (CNNs and DNNs), the time required to shape the separation between the feature space and the regression surface during training contributes significantly to the final accuracy of the model. Applying kernel functions and pre-processing of synthetic data to the input or deeper hidden layers of a multi-layer perceptron (MLP) can improve efficiency and localization precision; under the infinite-width assumption, neural tangent kernel (NTK) theory lets us approximate the kernel shape function and thereby influence the gradient-descent dynamics of back-propagation. In this paper, we compare exponential, polynomial, and trigonometric kernel approximations of varying degree and assess the effectiveness of Taylor-series kernel interpolation filtering for detecting high-frequency features in non-image data with MLP networks. The results show how engineered feature mapping and data shaping affect the training and validation convergence of several Gaussian- and Fourier-mapping-based network variants, as well as the ability of implicitly initialized neural networks to learn non-periodic function approximations.
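To make the feature-mapping idea concrete, the sketch below shows a standard Gaussian Fourier feature mapping of the kind the abstract refers to, applied to a one-dimensional input before it is fed to an MLP. The feature count m, bandwidth scale sigma, and the use of a normalized tip-sample distance as input are illustrative assumptions, not values taken from our experiments.

```python
# Minimal sketch of a Gaussian Fourier feature mapping for an MLP input.
# All dimensions and the sigma value below are assumptions for illustration.
import numpy as np

def fourier_features(v, B):
    """Map inputs v of shape (n, d) to [cos(2*pi*Bv), sin(2*pi*Bv)], shape (n, 2m)."""
    proj = 2.0 * np.pi * v @ B.T                    # (n, m) random projection
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
d, m, sigma = 1, 64, 10.0                           # input dim, feature count, bandwidth
B = sigma * rng.standard_normal((m, d))             # B ~ N(0, sigma^2), fixed during training

x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)       # e.g. normalized tip-sample distance
z = fourier_features(x, B)                          # (200, 128) features fed to the MLP
```

Larger sigma emphasizes higher frequencies in the induced NTK, which is the mechanism by which such mappings change convergence on high-frequency signal components.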
