Resource:Seminar

From MobiNetS
{{SemNote
|time='''2025-12-05 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract=Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks. When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code with the natural language or other structured intermediate steps. However, such output is not suitable for code translation or generation tasks since the standard CoT has different logical structures and forms of expression with the code. In this work, we introduce the universal code (UniCode) as the intermediate representation. It is a description of algorithm steps using a mix of conventions of programming languages, such as assignment operator, conditional operator, and loop. Hence, we collect an instruction dataset UniCoder-Instruct to train our model UniCoder on multi-task learning objectives. UniCoder-Instruct comprises natural-language questions, code solutions, and the corresponding universal code. The alignment between the intermediate universal code representation and the final code solution significantly improves the quality of the generated code. The experimental results demonstrate that UniCoder with the universal code significantly outperforms the previous prompting methods by a large margin, showcasing the effectiveness of the structural clues in pseudo-code.
|confname=ACL'24
|link=https://arxiv.org/abs/2406.16441
|title=UniCoder: Scaling Code Large Language Model via Universal Code
|speaker=Bairong Liu
|date=2025-12-05
}}
{{Latest_seminar
|abstract=LoRaWANs are envisioned to connect billions of IoT devices through thousands of physically overlapping yet logically orthogonal channels (termed logical channels). These logical channels hold significant potential for enabling highly concurrent scalable IoT connectivity. Large-scale deployments however face strong interference between logical channels. This practical issue has been largely overlooked by existing works but becomes increasingly prominent as LoRaWAN scales up. To address this issue, we introduce Canas, an innovative gateway design that is poised to orthogonalize the logical channels by eliminating mutual interference. To this end, Canas develops a series of novel solutions to accurately extract the meta-information of individual ultra-weak LoRa signals from the received overlapping channels. The meta-information is then leveraged to accurately reconstruct and subtract the LoRa signals over thousands of logical channels iteratively. Real-world evaluations demonstrate that Canas can enhance concurrent transmissions across overlapping logical channels by 2.3× compared to the best known related works.
|confname=TMC'25
|link=https://ieeexplore.ieee.org/abstract/document/11160677
|title=Resolving Inter-Logical Channel Interference for Large-scale LoRa Deployments
|speaker=Mengyu
|date=2025-12-05
}}
 
 
 
=== History ===
 
{{Resource:Previous_Seminars}}


=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and address information.
* Copy the code of the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field |date= (see the worked example below).
* Fill in each field of the latest seminar entry.
* Do not leave the link field empty; if there is no link, fill in this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format notes:
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
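
For example, once the UniCoder talk above has been held, its entry would be archived by copying its code and converting it as follows (a sketch; the field values are taken from the current Latest section of this page):

{{Hist_seminar
|confname=ACL'24
|link=https://arxiv.org/abs/2406.16441
|title=UniCoder: Scaling Code Large Language Model via Universal Code
|speaker=Bairong Liu
|date=2025-12-05
}}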