Resource:Seminar

{{SemNote
|time='''Friday 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|Reading list]]; [[Resource:Seminar_schedules|Schedules]]; [[Resource:Previous_Seminars|Previous seminars]].
}}
===Latest===
{{Latest_seminar
|abstract=We present NeuriCam, a novel deep learning-based system to achieve video capture from low-power dual-mode IoT camera systems. Our idea is to design a dual-mode camera system where the first mode is low power (1.1 mW) but only outputs grey-scale, low resolution and noisy video and the second mode consumes much higher power (100 mW) but outputs color and higher resolution images. To reduce total energy consumption, we heavily duty cycle the high power mode to output an image only once every second. The data for this camera system is then wirelessly sent to a nearby plugged-in gateway, where we run our real-time neural network decoder to reconstruct a higher-resolution color video. To achieve this, we introduce an attention feature filter mechanism that assigns different weights to different features, based on the correlation between the feature map and the contents of the input frame at each spatial location. We design a wireless hardware prototype using off-the-shelf cameras and address practical issues including packet loss and perspective mismatch. Our evaluations show that our dual-camera approach reduces energy consumption by 7x compared to existing systems. Further, our model achieves an average greyscale PSNR gain of 3.7 dB over prior single and dual-camera video super-resolution methods and 5.6 dB RGB gain over prior color propagation methods.
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}
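As a rough, illustrative energy estimate for the duty-cycling scheme described in this abstract (the per-keyframe on-time of the high-power mode is an assumed value, not stated in the paper): with <math>P_{\mathrm{low}} = 1.1</math> mW, <math>P_{\mathrm{high}} = 100</math> mW, and an assumed high-power on-time of 50 ms per one-second period (duty cycle <math>d = 0.05</math>), the average draw is <math>P_{\mathrm{avg}} = (1-d)P_{\mathrm{low}} + dP_{\mathrm{high}} \approx 0.95 \times 1.1 + 0.05 \times 100 \approx 6.0</math> mW, i.e. far below continuous high-power capture.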
{{Latest_seminar
|abstract=The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
|confname=NeurIPS 2017
|link=https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
|title=Attention Is All You Need
|speaker=Qinyong
|date=2024-04-12}}
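For reference, the central operation of the Transformer architecture discussed above is scaled dot-product attention, as defined in the paper: <math>\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V</math>, where <math>Q</math>, <math>K</math>, and <math>V</math> are the query, key, and value matrices and <math>d_k</math> is the key dimension; multi-head attention applies this operation in parallel over several learned projections.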
 
 
=== History ===
 
{{Resource:Previous_Seminars}}

=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the seminar time and address information.
* Copy the code for the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date with |date= (see the worked example at the end of this section).
* Fill in each field of the latest seminar.
* Do not leave the link field empty; if a paper has no link, use this page's address instead: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}