AttrLeaks on the Edge: Exploiting Information Leakage from Privacy-Preserving Co-inference
Abstract
Collaborative inference (co-inference) accelerates deep neural network inference by extracting feature representations on the device and making predictions at the edge server, which, however, may disclose sensitive information about users' private attributes (e.g., race). Although many privacy-preserving mechanisms for co-inference have been proposed to eliminate such privacy concerns, sensitive attributes may still leak during inference. In this paper, we explore privacy leakage against privacy-preserving co-inference by decoding the uploaded representations into a vulnerable form. We propose a novel attack framework named AttrLeaks, which consists of a shadow model of the feature extractor (FE), a susceptibility reconstruction decoder, and a private attribute classifier. Based on our observation that the values in the inner layers of the FE (internal representations) are more susceptible to attack, the shadow model simulates the victim's FE in the black-box scenario and generates internal representations. The susceptibility reconstruction decoder then transforms the victim's uploaded representations into this vulnerable form, which enables the malicious classifier to easily predict the private attributes. Extensive experimental results demonstrate that AttrLeaks outperforms the state of the art in terms of attack success rate.
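To make the attack pipeline described above concrete, the following is a minimal sketch of the three components (shadow FE, susceptibility reconstruction decoder, private attribute classifier) in PyTorch. All module names, layer sizes, and losses here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of the AttrLeaks components; shapes and losses are assumptions.
import torch
import torch.nn as nn

class ShadowFE(nn.Module):
    """Shadow model approximating the victim's feature extractor (FE)."""
    def __init__(self, in_dim=3 * 32 * 32, inner_dim=256, out_dim=64):
        super().__init__()
        self.inner = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, inner_dim), nn.ReLU())
        self.head = nn.Linear(inner_dim, out_dim)

    def forward(self, x):
        z_inner = self.inner(x)        # internal representation (more susceptible to attack)
        z_upload = self.head(z_inner)  # representation the device would upload
        return z_inner, z_upload

class SusceptibilityDecoder(nn.Module):
    """Maps uploaded representations back to a vulnerable (internal-like) form."""
    def __init__(self, in_dim=64, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, z_upload):
        return self.net(z_upload)

class AttributeClassifier(nn.Module):
    """Predicts the private attribute (e.g., race) from the decoded representation."""
    def __init__(self, in_dim=256, n_attrs=4):
        super().__init__()
        self.net = nn.Linear(in_dim, n_attrs)

    def forward(self, z):
        return self.net(z)

# One attacker-side training step on auxiliary data with known attribute labels
# (batch size, image shape, and number of attribute classes are illustrative).
shadow, decoder, clf = ShadowFE(), SusceptibilityDecoder(), AttributeClassifier()
opt = torch.optim.Adam(list(decoder.parameters()) + list(clf.parameters()), lr=1e-3)
x = torch.randn(16, 3, 32, 32)                   # auxiliary images
attr = torch.randint(0, 4, (16,))                # private-attribute labels
z_inner, z_upload = shadow(x)                    # shadow FE stands in for the victim's FE
z_dec = decoder(z_upload.detach())               # reconstruct the vulnerable form
loss = (nn.functional.mse_loss(z_dec, z_inner.detach())
        + nn.functional.cross_entropy(clf(z_dec), attr))
opt.zero_grad(); loss.backward(); opt.step()
```

At inference time, the attacker would apply the trained decoder and classifier to representations actually uploaded by the victim device, under the assumption that the shadow FE approximates the victim's FE closely enough for the decoded form to remain vulnerable.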