Privacy-Preserving Semantic Segmentation from Ultra-Low-Resolution RGB Inputs




Authors:

X. Huang, S. Pan, O. Zatsarynna, J. Gall, M. Bennewitz

Type:

Preprint

Published in:

arXiv preprint

Year:

2026

Related Projects:

PRIVATAR - Privacy-friendly Mobile Avatars for Sick School Children, Robotics Institute Germany

Links:

Preprint | Code

BibTeX String

@article{huang2026ulrss,
title={Privacy-Preserving Semantic Segmentation from Ultra-Low-Resolution RGB Inputs},
author={Huang, Xuying and Pan, Sicong and Zatsarynna, Olga and Gall, Juergen and Bennewitz, Maren},
journal={arXiv preprint arXiv:2507.16034},
year={2026}
}

Abstract:

RGB-based semantic segmentation has become a mainstream approach for visual perception and is widely applied in a variety of downstream tasks. However, existing methods typically rely on high-resolution RGB inputs, which may expose sensitive visual content in privacy-critical environments. Ultra-low-resolution RGB sensing suppresses sensitive information directly during image acquisition, making it an attractive privacy-preserving alternative. Nevertheless, recovering semantic segmentation from ultra-low-resolution RGB inputs remains highly challenging due to severe visual degradation. In this work, we introduce a novel fully joint-learning framework to mitigate the optimization conflicts exacerbated by visual degradation for ultra-low-resolution semantic segmentation. Experiments demonstrate that our method outperforms representative baselines in semantic segmentation performance, and our ultra-low-resolution RGB input achieves a favorable trade-off between privacy preservation and semantic segmentation performance. We deploy our privacy-preserving semantic segmentation method in a real-world robotic object-goal navigation task, demonstrating successful downstream task execution even under severe visual degradation.
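To make the idea of ultra-low-resolution sensing concrete, the sketch below simulates privacy-preserving acquisition by block-averaging a high-resolution frame down to a tiny grid. This is only an illustration of the input degradation the abstract describes, not the paper's actual sensing pipeline or segmentation framework; the resolutions and the averaging scheme are assumptions for the example.

```python
import numpy as np

def downsample(img, factor):
    """Block-average downsampling, simulating an ultra-low-resolution capture.

    img: H x W x C array; H and W are assumed divisible by `factor`.
    Each non-overlapping factor x factor block is replaced by its mean,
    discarding fine detail (and with it, sensitive visual content).
    """
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Simulate a 256x256 RGB frame and reduce it to a 16x16 grid (factor 16).
rgb = np.random.rand(256, 256, 3)
ulr = downsample(rgb, 16)
print(ulr.shape)  # (16, 16, 3)
```

A segmentation model operating on such an input only ever sees the 16x16 grid; the high-resolution frame never needs to leave the sensor, which is the privacy argument the abstract makes.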