Resource:Seminar

{{SemNote
|time='''Friday 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|Reading list]]; [[Resource:Seminar_schedules|Schedules]]; [[Resource:Previous_Seminars|Previous seminars]].
}}
===Latest===
{{Latest_seminar
|abstract=We present NeuriCam, a novel deep learning-based system to achieve video capture from low-power dual-mode IoT camera systems. Our idea is to design a dual-mode camera system where the first mode is low power (1.1 mW) but only outputs grey-scale, low resolution and noisy video and the second mode consumes much higher power (100 mW) but outputs color and higher resolution images. To reduce total energy consumption, we heavily duty cycle the high power mode to output an image only once every second. The data for this camera system is then wirelessly sent to a nearby plugged-in gateway, where we run our real-time neural network decoder to reconstruct a higher-resolution color video. To achieve this, we introduce an attention feature filter mechanism that assigns different weights to different features, based on the correlation between the feature map and the contents of the input frame at each spatial location. We design a wireless hardware prototype using off-the-shelf cameras and address practical issues including packet loss and perspective mismatch. Our evaluations show that our dual-camera approach reduces energy consumption by 7x compared to existing systems. Further, our model achieves an average greyscale PSNR gain of 3.7 dB over prior single and dual-camera video super-resolution methods and 5.6 dB RGB gain over prior color propagation methods.
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}
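The attention feature filter in this abstract is described only at a high level. Below is a minimal sketch of one plausible reading, gating each spatial location by the correlation between the feature map and the key-frame contents; the function and variable names are hypothetical, and this is not NeuriCam's actual implementation.

<pre>
import numpy as np

def attention_feature_filter(features, key_frame_feats):
    # features:        (C, H, W) feature map from the low-power video branch
    # key_frame_feats: (C, H, W) features extracted from the high-power key frame
    # Channel-wise correlation at each spatial location (illustrative).
    corr = (features * key_frame_feats).sum(axis=0, keepdims=True)  # (1, H, W)
    weights = 1.0 / (1.0 + np.exp(-corr))   # sigmoid gate, one weight per location
    return features * weights               # feature map re-weighted per location
</pre>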
{{Latest_seminar
|abstract=The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
|confname=NeurIPS 2017
|link=https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
|title=Attention Is All You Need
|speaker=Qinyong
|date=2024-04-12}}
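For context, the core operation of the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of that formula (illustrative, not the authors' code):

<pre>
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n, d_k) queries; K: (m, d_k) keys; V: (m, d_v) values
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # similarity scores, (n, m)
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values, (n, d_v)
</pre>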
=== History ===
 
{{Resource:Previous_Seminars}}


=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the seminar time and address information.
* Copy the markup of the current Latest seminar entries into this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date with |date= (see the worked example at the end of this section).
* Fill in every field of each latest seminar entry.
* Never leave |link= empty; if a paper has no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference:
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}