F-INR: Functional Tensor Decomposition for Implicit Neural Representations

Computer Vision Group
Friedrich Schiller University Jena, Germany
WACV 2026

Abstract

Implicit Neural Representations (INRs) model signals as continuous, differentiable functions. However, monolithic INRs scale poorly with data dimensionality, leading to excessive training costs. We propose F-INR, a framework that addresses this limitation by factorizing a high-dimensional INR into a set of compact, axis-specific sub-networks based on functional tensor decomposition. These sub-networks learn low-dimensional functional components that are then combined via tensor operations. This factorization reduces computational complexity while also improving representational capacity. F-INR is both architecture- and decomposition-agnostic: it integrates with existing INR backbones (e.g., SIREN, WIRE, FINER, Factor Fields) and tensor formats (e.g., CP, TT, Tucker), offering fine-grained control over the speed-accuracy trade-off via the tensor rank and decomposition mode. Our experiments show that F-INR substantially accelerates training and improves fidelity by over 6.0 dB PSNR compared to state-of-the-art INRs. We validate these gains on diverse tasks, including image representation, 3D geometry reconstruction, and neural radiance fields, and further demonstrate F-INR's applicability to scientific computing by modeling complex physics simulations. Thus, F-INR provides a scalable, flexible, and efficient framework for high-dimensional signal modeling.
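To make the core idea concrete, below is a minimal PyTorch sketch of a functional CP-style decomposition for a 2D signal: two small axis-specific sub-networks produce rank-R feature vectors that are combined by an outer product and summed over the rank. This is an illustrative assumption of the general scheme, not the released code; the paper plugs in backbones such as SIREN or WIRE, while a plain ReLU MLP is used here for brevity, and names like `AxisNet` are hypothetical.

```python
import torch
import torch.nn as nn

class AxisNet(nn.Module):
    """Small MLP mapping one coordinate axis to R feature channels."""
    def __init__(self, rank: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, rank),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (N, 1) coordinates along one axis -> (N, R) features
        return self.net(t)

class CPFunctionalINR(nn.Module):
    """f(x, y) ~= sum_r u_r(x) * v_r(y): a rank-R, CP-style functional
    factorization of a 2D signal into two axis-specific sub-networks."""
    def __init__(self, rank: int = 32):
        super().__init__()
        self.u = AxisNet(rank)  # sub-network along x
        self.v = AxisNet(rank)  # sub-network along y

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (Nx, 1), y: (Ny, 1) -> full (Nx, Ny) grid via outer product
        U = self.u(x)  # (Nx, R)
        V = self.v(y)  # (Ny, R)
        return torch.einsum('ir,jr->ij', U, V)  # contract over the rank

# Usage: reconstruct a 128x128 grayscale signal on a regular grid.
model = CPFunctionalINR(rank=32)
x = torch.linspace(0, 1, 128).unsqueeze(1)
y = torch.linspace(0, 1, 128).unsqueeze(1)
pred = model(x, y)  # (128, 128) predicted grid
```

Note where the efficiency comes from: evaluating an Nx x Ny grid requires only Nx + Ny sub-network forward passes instead of Nx * Ny passes through one monolithic INR, and the tensor rank R directly trades accuracy against cost.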

BibTeX

If you find our work useful, build on our models or dataset, or use parts of our code, please cite our work:
@inproceedings{vemuri2026finr,
  author    = {Sai Karthikeya Vemuri and Tim Büchner and Joachim Denzler},
  title     = {F-INR: Functional Tensor Decomposition for Implicit Neural Representations},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2026},
  doi       = {10.48550/arXiv.2503.21507},
}