Difference between revisions of "Resource:Seminar"

From MobiNetS
{{SemNote
|time='''Friday 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|Reading list]]; [[Resource:Seminar_schedules|Schedules]]; [[Resource:Previous_Seminars|Previous seminars]].
}}
===Latest===
{{Latest_seminar
|abstract=We present NeuriCam, a novel deep learning-based system to achieve video capture from low-power dual-mode IoT camera systems. Our idea is to design a dual-mode camera system where the first mode is low power (1.1 mW) but only outputs grey-scale, low resolution and noisy video and the second mode consumes much higher power (100 mW) but outputs color and higher resolution images. To reduce total energy consumption, we heavily duty cycle the high power mode to output an image only once every second. The data for this camera system is then wirelessly sent to a nearby plugged-in gateway, where we run our real-time neural network decoder to reconstruct a higher-resolution color video. To achieve this, we introduce an attention feature filter mechanism that assigns different weights to different features, based on the correlation between the feature map and the contents of the input frame at each spatial location. We design a wireless hardware prototype using off-the-shelf cameras and address practical issues including packet loss and perspective mismatch. Our evaluations show that our dual-camera approach reduces energy consumption by 7x compared to existing systems. Further, our model achieves an average greyscale PSNR gain of 3.7 dB over prior single and dual-camera video super-resolution methods and 5.6 dB RGB gain over prior color propagation methods.
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}
 
{{Latest_seminar
|abstract=The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
|confname=NeurIPS 2017
|link=https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
|title=Attention Is All You Need
|speaker=Qinyong
|date=2024-04-12}}
 
{{Resource:Previous_Seminars}}

Revision as of 15:10, 9 April 2024

Time: Friday 10:30-12:00
Address: 4th Research Building A518
Useful links: Reading list; Schedules; Previous seminars.

Latest

  1. [MobiCom 2023] NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras, Jiyi
    Abstract: We present NeuriCam, a novel deep learning-based system to achieve video capture from low-power dual-mode IoT camera systems. Our idea is to design a dual-mode camera system where the first mode is low power (1.1 mW) but only outputs grey-scale, low resolution and noisy video and the second mode consumes much higher power (100 mW) but outputs color and higher resolution images. To reduce total energy consumption, we heavily duty cycle the high power mode to output an image only once every second. The data for this camera system is then wirelessly sent to a nearby plugged-in gateway, where we run our real-time neural network decoder to reconstruct a higher-resolution color video. To achieve this, we introduce an attention feature filter mechanism that assigns different weights to different features, based on the correlation between the feature map and the contents of the input frame at each spatial location. We design a wireless hardware prototype using off-the-shelf cameras and address practical issues including packet loss and perspective mismatch. Our evaluations show that our dual-camera approach reduces energy consumption by 7x compared to existing systems. Further, our model achieves an average greyscale PSNR gain of 3.7 dB over prior single and dual-camera video super-resolution methods and 5.6 dB RGB gain over prior color propagation methods.
  2. [NeurIPS 2017] Attention Is All You Need, Qinyong
    Abstract: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

History

2024

2023

2022

2021

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

2019

2018

2017


Instructions

Please update this page using the Latest_seminar and Hist_seminar templates.

    • Update the time and address information.
    • Copy the wikitext of the current Latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date with |date= (see the worked example at the end of this page).
    • Fill in each field of the Latest seminar entry.
    • Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}
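
For reference, the NeuriCam entry currently listed on this page fills in this template as follows (the |abstract= field, which holds the full paper abstract, is omitted here for brevity):

{{Latest_seminar
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}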

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
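
When that talk is later archived, the same entry would be converted roughly as follows: only the template name changes from Latest_seminar to Hist_seminar, and the |date= field is kept (a sketch following the instructions above; the |abstract= field, when present, is carried over unchanged):

{{Hist_seminar
|confname=MobiCom 2023
|link=https://dl.acm.org/doi/10.1145/3570361.3592523
|title=NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT Cameras
|speaker=Jiyi
|date=2024-04-12}}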