Resource:Seminar

{{SemNote
|time='''Friday 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|Reading list]]; [[Resource:Seminar_schedules|Schedules]]; [[Resource:Previous_Seminars|Previous seminars]].
}}
===Latest===
{{Latest_seminar
|abstract=We present NeuriCam, a novel deep learning-based system to achieve video capture from low-power dual-mode IoT camera systems. Our idea is to design a dual-mode camera system where the first mode is low power (1.1 mW) but only outputs grey-scale, low resolution and noisy video and the second mode consumes much higher power (100 mW) but outputs color and higher resolution images. To reduce total energy consumption, we heavily duty cycle the high power mode to output an image only once every second. The data for this camera system is then wirelessly sent to a nearby plugged-in gateway, where we run our real-time neural network decoder to reconstruct a higher-resolution color video. To achieve this, we introduce an attention feature filter mechanism that assigns different weights to different features, based on the correlation between the feature map and the contents of the input frame at each spatial location. We design a wireless hardware prototype using off-the-shelf cameras and address practical issues including packet loss and perspective mismatch. Our evaluations show that our dual-camera approach reduces energy consumption by 7x compared to existing systems. Further, our model achieves an average greyscale PSNR gain of 3.7 dB over prior single and dual-camera video super-resolution methods and 5.6 dB RGB gain over prior color propagation methods.
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}
{{Latest_seminar
|abstract=The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
|confname=Neurips 2017
|link=https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
|title=Attention Is All You Need
|speaker=Qinyong
|date=2024-04-12}}
=== History ===
 
{{Resource:Previous_Seminars}}


=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and address information.
* Copy the code from the current Latest seminar section onto this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date information with |date=.
* Fill in every field of the Latest seminar entry.
* Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format notes:
** Latest_seminar:

{{Latest_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
|date=
}}
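
For example, a filled-in entry (copied from the second item in the Latest section above, with the abstract shortened here for brevity) looks like:

{{Latest_seminar
|abstract=The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration...
|confname=Neurips 2017
|link=https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
|title=Attention Is All You Need
|speaker=Qinyong
|date=2024-04-12}}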

** Hist_seminar:

{{Hist_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
|date=
}}
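
As an illustrative sketch of the archiving step described above (assuming the NeuriCam entry from the Latest section is being moved to History; the abstract field is omitted here for brevity), only the template name changes and the |date= field records when the talk was given:

{{Hist_seminar
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}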