Track 14: Secure and Efficient Federated Learning

Organizers:

  • Jianhong Zhang (Professor), North China University of Technology, China
  • Zhengtao Jiang (Professor), Communication University of China, China
  • Jian Mao (Professor), Beihang University, China

Abstract:

Since its inception in 2016, federated learning (FL) has become a popular framework for collaboratively training machine learning models across many devices while keeping user data on those devices to protect privacy. With the exponential growth of data, increasing data diversity, and limited computing resources, improving the efficiency of FL training has become more critical than ever.
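
As a concrete reference point for the training process described above, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL loop: clients train locally on private data and a server aggregates the resulting models. The synthetic least-squares task and all names in the code are illustrative assumptions, not a method endorsed by the forum.

    # Minimal FedAvg sketch (illustrative; numpy only). Each client runs a
    # few local gradient steps on its private data; the server averages the
    # resulting models weighted by dataset size.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_step(w, X, y, lr=0.1, epochs=5):
        # One client's local training; the raw data (X, y) never leaves it.
        w = w.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
            w -= lr * grad
        return w

    # Synthetic non-IID clients: each sees features from a shifted region.
    true_w = np.array([2.0, -1.0])
    clients = []
    for shift in (-1.0, 0.0, 1.0):
        X = rng.normal(loc=shift, size=(100, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=100)
        clients.append((X, y))

    # Server loop: broadcast the global model, collect local updates, and
    # aggregate with a dataset-size-weighted average (the FedAvg rule).
    w_global = np.zeros(2)
    for _ in range(20):
        updates = [local_step(w_global, X, y) for X, y in clients]
        sizes = [len(y) for _, y in clients]
        w_global = np.average(updates, axis=0, weights=sizes)

    print("recovered weights:", w_global)   # approaches [2.0, -1.0]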

The Forum on Secure and Efficient Federated Learning aims to provide an academic exchange platform for discussing the core strengths of federated learning and how to realize them collaboratively. In modern distributed systems, concerns about data leakage are growing, while the demand for training large-scale models under resource constraints is becoming more pressing. Against this background, the security and efficiency of federated learning and unlearning mechanisms are the central themes of this forum.
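
One standard mitigation for the data-leakage concern mentioned above is secure aggregation, in which the server learns only the sum of client updates, never any individual update. The sketch below shows the pairwise-masking idea under simplifying assumptions: no client dropouts, fixed seeds in place of agreed keys, and floating-point instead of finite-field arithmetic. All names are illustrative.

    # Pairwise-masked secure aggregation, minimal sketch. For each client
    # pair (i, j), both derive the same mask from a shared seed; the
    # lower-id client adds it and the higher-id one subtracts it, so every
    # mask cancels in the server-side sum.
    import numpy as np

    N, DIM = 3, 4
    updates = {i: np.random.default_rng(100 + i).normal(size=DIM)
               for i in range(N)}

    def masked(i, update):
        out = update.copy()
        for j in range(N):
            if j == i:
                continue
            seed = min(i, j) * 1000 + max(i, j)   # stand-in for a shared secret
            mask = np.random.default_rng(seed).normal(size=DIM)
            out += mask if i < j else -mask
        return out

    # The server sums the masked vectors; only the aggregate is revealed.
    total = sum(masked(i, u) for i, u in updates.items())
    assert np.allclose(total, sum(updates.values()))
    print("aggregate:", total)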

Topics:

  • Security attacks and defenses in federated learning
  • Privacy protection in federated unlearning
  • Verifiable federated unlearning
  • Federated learning in heterogeneous networks
  • Federated learning for large language models
  • Communication efficiency in federated learning (see the sketch after this list)
  • Data poisoning attacks and defenses
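
As a small illustration of the communication-efficiency topic above, here is a sketch of top-k gradient sparsification with error feedback, one common way to reduce upload cost in FL. The budget k and the residual bookkeeping are assumptions for exposition, not a forum-prescribed method.

    # Top-k sparsification with error feedback, minimal sketch. Send only
    # the k largest-magnitude coordinates; keep the rest in a local
    # residual that is re-added next round, so no gradient mass is
    # permanently discarded.
    import numpy as np

    def sparsify_topk(grad, residual, k):
        acc = grad + residual
        idx = np.argpartition(np.abs(acc), -k)[-k:]
        sparse = np.zeros_like(acc)
        sparse[idx] = acc[idx]          # this is all that gets transmitted
        return sparse, acc - sparse     # new residual stays on the client

    rng = np.random.default_rng(42)
    grad = rng.normal(size=1000)
    sparse, residual = sparsify_topk(grad, np.zeros_like(grad), k=50)
    print("coordinates sent:", np.count_nonzero(sparse), "of", grad.size)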