In this paper, we propose a new method to solve the privacy and accuracy problems described in Sec. II. We call our proposed method Identity Protected Federated Learning (IPFed). In IPFed, the class embedding is multiplied by a random transformation parameter that is kept secret from the learning server. This makes it possible to perform the optimization while keeping the class embeddings secret from any server.

IPFed: A Privacy-Preserving Federated Learning Framework for Face Verification

:::info Authors:

(1) Yosuke Kaga, Hitachi, Ltd., Japan;

(2) Yusei Suzuki, Hitachi, Ltd., Japan;

(3) Kenta Takahashi, Hitachi, Ltd., Japan.

:::

Abstract and I. Introduction

II. Related Work

III. IPFED

IV. Experiments

V. Conclusion and References

III. IPFED

We propose a new method to solve the privacy and accuracy problems described in Sec. II. We call our proposed method Identity Protected Federated Learning (IPFed). The overview of IPFed and the training algorithm of IPFed are shown in Fig. 2 and Algorithm 1, respectively.


Fig. 2. The overview of IPFed.

In IPFed, the class embedding is multiplied by a random transformation parameter that is kept secret from the learning server. Furthermore, the updated class embedding is returned to the original feature space using the inverse matrix of the transformation parameter. This makes it possible to perform the optimization while keeping the class embeddings secret from any server. In the following, we show in Section III-A that our method can perform the same optimization as FedFace even when the class embedding is kept secret, and we show in Section III-B that it is difficult for an attacker on any entity to obtain the user’s personal data.
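To make this flow concrete, the following is a minimal sketch of one such round, under the assumption that the transformation parameter is a random invertible matrix R applied to the class-embedding matrix. The function name `protected_embedding_update` and the `server_update_fn` callback (standing in for the server-side spreadout-style update of FedFace) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def protected_embedding_update(class_embeddings, server_update_fn, R):
    """Sketch of one IPFed round for the class embeddings.

    class_embeddings : (num_classes, dim) array held on the client side.
    server_update_fn : the learning server's optimization step (e.g., a
                       spreadout-style update), which only ever sees the
                       transformed embeddings.
    R                : (dim, dim) random invertible transformation from
                       the parameter server, secret to the learning server.
    """
    # 1. Hide the class embeddings with the secret transformation R.
    transformed = class_embeddings @ R
    # 2. The learning server optimizes in the transformed space only.
    updated = server_update_fn(transformed)
    # 3. Map the updated embeddings back to the original feature space
    #    using the inverse of the transformation parameter.
    return updated @ np.linalg.inv(R)
```

If R is chosen to be orthogonal (an assumption here), its inverse is simply its transpose and pairwise inner products between class embeddings are preserved under the transformation, which gives an intuition for why the optimization can match that of FedFace, as argued in Section III-A.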

A. Derivation of IPFed


B. Privacy analysis of IPFed

In this section, we discuss how privacy protection is achieved by IPFed. We assume that an attacker against IPFed can obtain the data stored or communicated on any one of the three entities. We also assume a semi-honest model in which each entity follows the correct protocols, and that the goal of the attack is to obtain the personal data of a specific individual.


Attacker against the parameter server: From the parameter server, the attacker can obtain the transformation parameter r_t. Since the transformation parameter is randomly generated data independent of any personal data, it is difficult to obtain personal data from it.


In conclusion, we have confirmed that IPFed can strongly protect the privacy of training data under the assumptions defined in this paper.

C. Efficiency analysis of IPFed


Note that the parameter server is newly introduced and its operating costs are newly incurred. However, since the role of the parameter server is only to generate and send the transformation parameter, even a server with very little computing power is sufficient.
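As a rough illustration of how lightweight this role is, the sketch below generates one random transformation per communication round and broadcasts it to the clients; the QR-based orthogonal sampling, the function name, and the 512-dimensional example are assumptions made for illustration.

```python
import numpy as np

def generate_round_parameter(dim, rng):
    """Parameter server's only task per round: sample a random
    transformation (here, an orthogonal matrix via QR) and send it to
    the clients. No training data or embeddings are ever involved."""
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))  # sign fix for a well-spread orthogonal sample

# Example: one parameter per communication round for, e.g., 512-dim embeddings.
rng = np.random.default_rng(0)
round_parameters = [generate_round_parameter(512, rng) for _ in range(3)]
```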

From the above discussion, the efficiency of IPFed is comparable to that of the conventional method [6]. However, a quantitative evaluation of the efficiency of IPFed is left for future work.


IV. EXPERIMENTS

In this section, we show the effectiveness of the proposed method through experiments on face image datasets.

A. Setting

Datasets: We follow the setting in [6] and use CASIA-WebFace [14] for training. CASIA-WebFace consists of 494,414 images of 10,575 subjects. We randomly select 9,000 subjects for pre-training and 1,000 subjects for federated learning. To evaluate face verification performance, we use three datasets: LFW [15], IJB-A [16], and IJB-C [17].

Implementation: We use CosFace [18] as the face feature extractor. Following [6], only CosFace is used as the face feature extractor; evaluation with multiple more recent face feature extractors is a subject for future work. We use a scale parameter of 30 and a margin parameter of 10 for the CosFace loss function. The margin parameters are m = 0.9 and v = 0.7. The parameter λ = 25 in Eq. (3). We perform federated learning with 200 communication rounds and a learning rate of 0.1.
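For reference, a minimal sketch of the CosFace (large margin cosine) loss is given below, using the scale s = 30 stated above; the cosine margin m = 0.35 is the common CosFace default and is only an assumption here, not necessarily the paper's exact setting.

```python
import torch
import torch.nn.functional as F

def cosface_loss(features, weights, labels, s=30.0, m=0.35):
    """Sketch of the CosFace (large margin cosine) loss [18].

    features : (batch, dim) face embeddings from the backbone.
    weights  : (num_classes, dim) class embeddings (class weights).
    s, m     : scale and cosine margin (m = 0.35 is an assumed default).
    """
    # Cosine similarity between normalized features and class embeddings.
    cos = F.linear(F.normalize(features), F.normalize(weights))
    # Subtract the margin only from the target-class cosine.
    onehot = F.one_hot(labels, num_classes=weights.size(0)).to(cos.dtype)
    logits = s * (cos - m * onehot)
    return F.cross_entropy(logits, labels)
```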

B. Evaluation

First, the face verification performance of each method for 1,000 clients is shown in Table I. The comparison methods are listed below:

TABLE I. FACE VERIFICATION PERFORMANCE ON STANDARD FACE RECOGNITION BENCHMARKS LFW, IJB-A, AND IJB-C.

TABLE II. IJB-A TAR @ FAR=0.1% BY NUMBER OF SUBJECTS.

Baseline: A typical CosFace model, pre-trained on 9,000 subjects.

Fine-tuning: A fine-tuned version of Baseline, trained with CosFace on 1,000 subjects.

FedFace: A model trained according to [6].

IPFed: A model trained based on the proposed random projection approach.

Fixed class embedding (FCE): A model trained with the class embedding fixed to its initial values. In FCE, the class embedding does not need to be shared with the server, so secure federated learning can be performed.

As shown in Table I, IPFed achieves the same level of accuracy as FedFace. This means that the random-projection-based spreadout in IPFed is equivalent to the spreadout in FedFace. On the other hand, FCE is less accurate than IPFed. This is due to the lack of class embedding optimization, which shows that the spreadout in IPFed contributes to the accuracy improvement.

Furthermore, the accuracy as a function of the number of subjects used for federated learning is shown in Table II. IPFed achieves higher accuracy than FCE for every number of subjects. This indicates that sharing and optimizing the class embedding improves accuracy. However, while the accuracy of Fine-tuning increases monotonically with the number of subjects, that of IPFed does not. This may be because the learning hyperparameters are not optimal for every number of subjects, so automatic hyperparameter tuning is a future task.

In addition, our experiments evaluated only accuracy; the defense performance against attacks was evaluated only theoretically. This is also a topic for future work.


V. CONCLUSION

In this paper, we focused on the problem of personal data leakage from class embeddings in federated learning for user authentication, and proposed IPFed, which performs federated learning while protecting the class embeddings. We proved that IPFed, which is based on random projection of class embeddings, can perform learning equivalent to the state-of-the-art method. We evaluated the proposed method on face image datasets and confirmed that the accuracy of IPFed is equivalent to that of the state-of-the-art method. IPFed can improve the model for user authentication while preserving the privacy of the training data.

REFERENCES

[1] I. D. Raji and G. Fried, “About face: A survey of facial recognition evaluation,” 2021. arXiv: 2102.00813 [cs.CV].

[2] P. Voigt and A. Von dem Bussche, “The EU General Data Protection Regulation (GDPR),” A Practical Guide, 1st Ed., Cham: Springer International Publishing, vol. 10, p. 3152676, 2017.

[3] M. Al-Rubaie and J. M. Chang, “Privacy-preserving machine learning: Threats and solutions,” IEEE Security & Privacy, vol. 17, no. 2, pp. 49–58, 2019. DOI: 10.1109/MSEC.2018.2888775.

[4] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial Intelligence and Statistics, PMLR, 2017, pp. 1273–1282.

[5] F. Yu, A. S. Rawat, A. Menon, and S. Kumar, “Federated learning with only positive labels,” in International Conference on Machine Learning, PMLR, 2020, pp. 10946–10956.

[6] D. Aggarwal, J. Zhou, and A. K. Jain, “FedFace: Collaborative learning of face recognition model,” in 2021 IEEE International Joint Conference on Biometrics (IJCB), 2021, pp. 1–8. DOI: 10.1109/IJCB52358.2021.9484386.

[7] H. Hosseini, S. Yun, H. Park, C. Louizos, J. Soriaga, and M. Welling, “Federated learning of user authentication models,” 2020. arXiv: 2007.04618 [cs.LG].

[8] H. Hosseini, H. Park, S. Yun, C. Louizos, J. Soriaga, and M. Welling, “Federated learning of user verification models without sharing embeddings,” in Proceedings of the 38th International Conference on Machine Learning, M. Meila and T. Zhang, Eds., ser. Proceedings of Machine Learning Research, vol. 139, PMLR, 18–24 Jul 2021, pp. 4328–4336. [Online]. Available: https://proceedings.mlr.press/v139/hosseini21a.html.

\

:::info This paper is available on arXiv under the CC BY 4.0 DEED (Attribution 4.0 International) license.

:::

