
dc.rights.license: Visos teisės saugomos / All rights reserved
dc.contributor.author: Ivinskij, Vadimas
dc.contributor.author: Morkvėnaitė-Vilkončienė, Inga
dc.date.accessioned: 2026-01-13T08:15:54Z
dc.date.available: 2026-01-13T08:15:54Z
dc.date.issued: 2025
dc.identifier.isbn: 9798331598747
dc.identifier.issn: 2831-5634
dc.identifier.uri: https://etalpykla.vilniustech.lt/handle/123456789/159725
dc.description.abstract: A scanning electrochemical microscope (SECM) combined with artificial intelligence could generate an activity image of the sample from approach curves measured at only a few points, minimizing measurement time and computing sample activity at points of interest. The time needed to shape the separation between the feature space and the regression line during convolution contributes significantly to the accuracy of CNN and DNN models. Kernel functions and pre-processing of synthetic data, applied to the initial or deeper hidden layers of an MLP, can improve efficiency and localization precision; in the infinite-width limit, neural tangent kernel (NTK) theory lets us approximate the kernel shape function and thereby influence the vector-flow dynamics of the gradient-descent update in back-propagation. In this paper, we compare exponential, polynomial, and trigonometric kernel approximations of varying degree and evaluate the effectiveness of Taylor-series kernel interpolation filtering for detecting high-frequency features in non-image data with multi-layer perceptron (MLP) networks. The results show how engineered feature mapping and data shaping affect the training and validation convergence of several Gaussian- and Fourier-mapping-based neural network algorithms, as well as how implicitly initialized networks learn non-periodic function approximations.
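The Fourier feature mapping the abstract refers to can be illustrated with a minimal sketch. This is not the paper's code: the function name `fourier_features` and the parameters `n_features` and `scale` are illustrative assumptions. The idea is to lift low-dimensional non-image inputs through random sinusoidal projections before feeding them to an MLP, which (per NTK-style analyses) changes the effective kernel and lets the network fit high-frequency structure:

```python
import numpy as np

# Illustrative sketch, not the authors' implementation: a random Fourier
# feature mapping gamma(x) = [cos(2*pi*B*x), sin(2*pi*B*x)] applied to
# scalar inputs before an MLP. The bandwidth `scale` of the random
# frequencies B controls which high-frequency features the lifted
# network's effective (NTK) kernel can represent.
rng = np.random.default_rng(0)

def fourier_features(x, n_features=64, scale=10.0, rng=rng):
    """Map inputs x of shape [N] to features of shape [N, 2*n_features]."""
    B = rng.normal(0.0, scale, size=n_features)   # random frequency vector
    proj = 2.0 * np.pi * np.outer(x, B)           # [N, n_features]
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

x = np.linspace(0.0, 1.0, 5)
phi = fourier_features(x)
print(phi.shape)  # (5, 128)
```

Choosing `scale` trades off between underfitting high-frequency detail (small scale) and noisy, slow-converging training (large scale); the paper's comparison of kernel approximations of different degree probes the same trade-off.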
dc.format.extent: 7 p.
dc.format.medium: Tekstas / Text
dc.language.iso: en
dc.relation.uri: https://etalpykla.vilniustech.lt/handle/123456789/159405
dc.source.uri: https://ieeexplore.ieee.org/document/11016831
dc.subject: machine learning
dc.subject: feature engineering
dc.subject: MLP
dc.subject: FFT mapping
dc.subject: gradient flow dynamics
dc.subject: FF
dc.subject: NTK neural networks
dc.title: Polynomial Approximation Degree Influence on Implicit Network Regularization for Impedance Signal Reconstruction
dc.type: Konferencijos publikacija / Conference paper
dcterms.accrualMethod: Rankinis pateikimas / Manual submission
dcterms.issued: 2025-06-02
dcterms.references: 19
dc.description.version: Taip / Yes
dc.contributor.institution: Vilniaus Gedimino technikos universitetas
dc.contributor.institution: Vilnius Gediminas Technical University
dc.contributor.faculty: Elektronikos fakultetas / Faculty of Electronics
dc.contributor.department: Elektros inžinerijos katedra / Department of Electrical Engineering
dcterms.sourcetitle: 2025 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), April 24, 2025, Vilnius, Lithuania
dc.identifier.eisbn: 9798331598730
dc.identifier.eissn: 2690-8506
dc.publisher.name: IEEE
dc.publisher.country: United States of America
dc.publisher.city: New York
dc.identifier.doi: https://doi.org/10.1109/eStream66938.2025.11016831


Files in this item

There are no files associated with this item.

This item appears in the following collection(s)
