Resource:Seminar

From MobiNetS

Time: 2023-04-27 9:30
Address: 4th Research Building A527-B
Useful links: Reading list; Schedules; Previous seminars.

Latest

  1. [TMC 2023] An Efficient Cooperative Transmission Based Opportunistic Broadcast Scheme in VANETs, Luwei
    Abstract: In vehicular ad hoc networks (VANETs), quick and reliable multi-hop broadcasting is important for the dissemination of emergency warning messages. By scheduling multiple nodes to transmit messages concurrently and cooperatively, cooperative transmission based broadcast schemes may yield much better broadcast performance than conventional broadcast schemes. However, a cooperative transmission requires multiple relays to achieve strict synchronization in both time and frequency, which may induce a high cost for the cooperative transmission process. In this paper, we analyze the cost and benefit of cooperative transmission for data broadcasting in vehicular networks, and introduce a new metric, the single-hop broadcast efficiency (SBE), to evaluate the overall broadcast performance. We propose an efficient, non-deterministic cooperation mechanism to reduce the cooperation cost. The mechanism maximizes the expected broadcast performance by selecting the cooperators with the largest expected SBE value for a lead relay, and initiates the cooperative broadcasting process only when the expected SBE value is larger than that of single-relay broadcasting. Based on this non-deterministic mechanism, we propose an efficient cooperative-transmission-based opportunistic broadcast (ECTOB) scheme, which further utilizes rebroadcasts to improve reliability. Simulation results show that the proposed scheme outperforms conventional ones. (An illustrative sketch of this cooperation decision appears after this list.)
  2. [CVPR 2022] Fine-Tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning, Jiaqi
    Abstract: Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints. Data heterogeneity is one of the main challenges in FL; it results in slow convergence and degraded performance. Most existing approaches tackle the heterogeneity challenge only by restricting the local model updates on clients, ignoring the performance drop caused by direct global model aggregation. Instead, we propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG), which relieves the issue of direct model aggregation. Concretely, FedFTG explores the input space of the local models through a generator and uses it to transfer knowledge from the local models to the global model. Besides, we propose a hard sample mining scheme to achieve effective knowledge distillation throughout the training. In addition, we develop customized label sampling and a class-level ensemble to derive maximum utilization of knowledge, which implicitly mitigates the distribution discrepancy across clients. Extensive experiments show that FedFTG significantly outperforms state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD. (An illustrative sketch of the server-side distillation loop appears after this list.)
  3. [TMC 2021] Pushing the Data Rate of Practical VLC via Combinatorial Light Emission, Mengyu
    Abstract: Visible light communication (VLC) systems relying on commercial-off-the-shelf (COTS) devices have gathered momentum recently, due to the pervasive adoption of LED lighting and mobile devices. However, the throughput achievable by such practical systems is still several orders of magnitude below that claimed by controlled experiments with specialized devices. In this paper, we engineer CoLight, aiming to boost the data rate of a VLC system built purely upon COTS devices. CoLight adopts COTS LEDs as its transmitter, but it innovates with a simple yet delicate driver circuit that wires an array of LED chips in a combinatorial manner. Consequently, modulated signals can directly drive the on-off procedures of individual chip groups, so that the spatially synthesized light emission exhibits a varying luminance that exactly follows the modulation symbols. To obtain a readily usable receiver, CoLight interfaces a COTS PD with a smartphone through the audio jack, and it also has an alternative MCU-driven circuit to emulate a future integration into the phone. The evaluations of CoLight are both promising and informative: they demonstrate a throughput of up to 80 kbps at a distance of 2 m, while suggesting various ways to further enhance the performance. (An illustrative sketch of the combinatorial emission idea appears after this list.)
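
The following toy Python sketch illustrates the SBE-based cooperation decision described in the first abstract. The paper's actual SBE definition and cost model are not given above, so coverage_gain() and sync_overhead() are invented stand-ins; only the decision structure (grow the cooperator set while the expected SBE improves, otherwise fall back to single-relay broadcasting) reflects the abstract.

def coverage_gain(lead_quality, coop_qualities):
    # Toy model: each extra cooperator adds diminishing expected coverage.
    gain = lead_quality
    for i, q in enumerate(sorted(coop_qualities, reverse=True)):
        gain += q / (i + 2)
    return gain

def sync_overhead(coop_qualities):
    # Toy model: each extra relay adds time/frequency synchronization cost.
    return 0.3 * len(coop_qualities)

def expected_sbe(lead_quality, coop_qualities):
    # Hypothetical SBE: expected broadcast benefit per unit of channel cost.
    return coverage_gain(lead_quality, coop_qualities) / \
           (1.0 + sync_overhead(coop_qualities))

def choose_mode(lead_quality, candidate_qualities):
    # Greedily grow the cooperator set while the expected SBE improves;
    # fall back to single-relay broadcasting when cooperation cannot win.
    chosen, best = [], expected_sbe(lead_quality, [])
    for q in sorted(candidate_qualities, reverse=True):
        if expected_sbe(lead_quality, chosen + [q]) > best:
            chosen.append(q)
            best = expected_sbe(lead_quality, chosen)
    mode = "cooperative" if chosen else "single-relay"
    return mode, chosen, best

print(choose_mode(0.6, [0.9, 0.8, 0.2]))  # -> ('cooperative', [0.9, 0.8], ...)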
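
The next sketch outlines, in PyTorch style and grounded only in the second abstract, how a server might fine-tune the global model with data-free knowledge distillation: a conditional generator searches the input space for hard samples, and the global model is distilled from the ensemble of local models on those samples. The exact losses, per-class ensemble weighting, and hyperparameters of FedFTG are simplified or omitted; the generator interface, shapes, and learning rates are illustrative assumptions.

import torch
import torch.nn.functional as F

def ensemble_logits(local_models, x):
    # Plain average of the client models' logits; FedFTG additionally
    # weights this ensemble per class (omitted here for brevity).
    with torch.no_grad():
        return torch.stack([m(x) for m in local_models]).mean(dim=0)

def server_finetune(global_model, local_models, generator, label_probs,
                    steps=100, batch=64, z_dim=100):
    # label_probs: 1-D tensor holding the aggregate client label
    # distribution, used for the customized label sampling.
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_s = torch.optim.SGD(global_model.parameters(), lr=0.01)
    for _ in range(steps):
        y = torch.multinomial(label_probs, batch, replacement=True)
        z = torch.randn(batch, z_dim)

        # (1) Hard sample mining: update the generator to maximize the
        # student/teacher disagreement (negated distillation loss).
        x = generator(z, y)  # assumed conditional generator interface
        teacher = F.softmax(ensemble_logits(local_models, x), dim=1)
        g_loss = -F.kl_div(F.log_softmax(global_model(x), dim=1),
                           teacher, reduction="batchmean")
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

        # (2) Distillation: pull the global model toward the ensemble of
        # local models on freshly generated pseudo-data.
        x = generator(z, y).detach()
        teacher = F.softmax(ensemble_logits(local_models, x), dim=1)
        s_loss = F.kl_div(F.log_softmax(global_model(x), dim=1),
                          teacher, reduction="batchmean")
        opt_s.zero_grad()
        s_loss.backward()
        opt_s.step()
    return global_model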
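
Finally, a toy sketch of the combinatorial light emission idea from the third abstract: wiring the LED chips into binary-weighted groups lets whole-group on/off switching produce a synthesized luminance that tracks the modulation symbol directly. The group sizes and the 16-level mapping are illustrative assumptions, not CoLight's actual circuit.

GROUP_SIZES = [1, 2, 4, 8]  # chips per LED group (binary weights)

def symbol_to_groups(symbol):
    # Map a symbol in [0, 15] to on/off states of the four LED groups.
    assert 0 <= symbol < 2 ** len(GROUP_SIZES)
    return [(symbol >> i) & 1 for i in range(len(GROUP_SIZES))]

def luminance(states):
    # Spatially synthesized brightness = total number of lit chips.
    return sum(on * size for on, size in zip(states, GROUP_SIZES))

# Luminance steps linearly with the symbol value, so the emitted light
# follows the modulation symbols exactly.
for sym in range(16):
    assert luminance(symbol_to_groups(sym)) == sym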


History

2024

2023

2022

2021

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

2019

2018

2017

Instructions

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

    • Update the time and address information.
    • Copy the code from the current Latest seminar section into this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date= (a filled-in example follows the blank templates below).
    • Fill in each field of the new latest seminar.
    • Never leave the link field empty; if a paper has no link, use this page's address instead: https://mobinets.org/index.php?title=Resource:Seminar
  • Format notes
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
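
For example, archiving the current latest entry with the seminar date above would look like this (assuming the date field uses YYYY-MM-DD):

{{Hist_seminar
|confname=CVPR 2022
|link=https://arxiv.org/pdf/2203.09249.pdf
|title=Fine-Tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
|speaker=Jiaqi
|date=2023-04-27
}}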