UTILIZING EXCESS RESOURCES IN TRAINING NEURAL NETWORKS

Amit Henig, Raja Giryes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this work, we suggest Kernel Filtering Linear Overparameterization (KFLO), where a linear cascade of filtering layers is used during training to improve network performance at test time. We implement this cascade in a kernel filtering fashion, which prevents the trained architecture from becoming unnecessarily deep. This also allows our approach to be used with almost any network architecture and lets the filtering layers be combined into a single layer at test time. Thus, our approach adds no computational complexity during inference. We demonstrate the advantage of KFLO on various network models and datasets in supervised learning.
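The property that lets the filtering layers be combined into a single layer at test time is that a cascade of convolutions with no nonlinearities between them is itself a single linear operator, so its kernels can be composed into one kernel once training ends. The PyTorch sketch below illustrates this general linear-overparameterization / structural-reparameterization principle under assumed simplifications: a KxK convolution followed by a single 1x1 filtering convolution. The module name OverparamConv2d and its merge method are hypothetical illustrations, not the authors' code, and this is not necessarily the exact KFLO kernel-filtering construction.

```python
# Minimal sketch (assumed simplification, not the paper's exact KFLO cascade):
# during training, a KxK conv is followed by a linear 1x1 "filtering" conv;
# at inference the two kernels are composed into a single KxK kernel, so the
# deployed network is no deeper and no slower than the baseline.
import torch
import torch.nn as nn


class OverparamConv2d(nn.Module):
    """KxK conv followed by a linear 1x1 filtering conv during training."""

    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # Extra linear layer used only while training (overparameterization).
        self.filter = nn.Conv2d(out_ch, out_ch, kernel_size=1, bias=True)

    def forward(self, x):
        # No activation between the two layers, so the cascade stays linear
        # and can later be collapsed into a single convolution.
        return self.filter(self.conv(x))

    @torch.no_grad()
    def merge(self):
        """Fold the 1x1 filtering conv into the KxK conv for inference."""
        w_kxk, b_kxk = self.conv.weight, self.conv.bias      # (D, C, K, K), (D,)
        w_1x1 = self.filter.weight.squeeze(-1).squeeze(-1)   # (E, D)
        b_1x1 = self.filter.bias                             # (E,)

        # Effective kernel: W_eff[e, c] = sum_d w_1x1[e, d] * w_kxk[d, c]
        w_eff = torch.einsum('ed,dckl->eckl', w_1x1, w_kxk)
        b_eff = w_1x1 @ b_kxk + b_1x1

        merged = nn.Conv2d(self.conv.in_channels, self.filter.out_channels,
                           self.conv.kernel_size, padding=self.conv.padding)
        merged.weight.data.copy_(w_eff)
        merged.bias.data.copy_(b_eff)
        return merged


if __name__ == "__main__":
    torch.manual_seed(0)
    block = OverparamConv2d(3, 8, kernel_size=3, padding=1).eval()
    merged = block.merge().eval()
    x = torch.randn(1, 3, 16, 16)
    # The single merged conv reproduces the two-layer training-time cascade.
    print(torch.allclose(block(x), merged(x), atol=1e-5))
```

Because the merge is performed once after training, the deployed model has the same depth and inference cost as the un-overparameterized baseline, which is the property the abstract emphasizes.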

Original language: English
Title of host publication: 2022 IEEE International Conference on Image Processing, ICIP 2022 - Proceedings
Publisher: IEEE Computer Society
Pages: 1941-1945
Number of pages: 5
ISBN (Electronic): 9781665496209
DOIs
State: Published - 2022
Event: 29th IEEE International Conference on Image Processing, ICIP 2022 - Bordeaux, France
Duration: 16 Oct 2022 → 19 Oct 2022

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 29th IEEE International Conference on Image Processing, ICIP 2022
Country/Territory: France
City: Bordeaux
Period: 16/10/22 → 19/10/22

Funding

Funders: ERC-StG
Funder number: 757497

Keywords

• kernel filtering/composition
• linear overparameterization
• structural reparameterization
