{{SemNote
|time='''2025-04-11 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract=While existing strategies for executing deep learning-based classification on low-power platforms assume the models are trained on all classes of interest, this paper posits that adopting context-awareness, i.e., narrowing down a classification task to the current deployment context consisting of only recent inference queries, can substantially enhance performance in resource-constrained environments. We propose a new paradigm, CACTUS, for scalable and efficient context-aware classification where a micro-classifier recognizes a small set of classes relevant to the current context and, when a context change happens (e.g., a new class comes into the scene), rapidly switches to another suitable micro-classifier. CACTUS features several innovations, including optimizing the training cost of context-aware classifiers, enabling on-the-fly context-aware switching between classifiers, and balancing context-switching costs and performance gains via simple yet effective switching policies. We show that CACTUS achieves significant benefits in accuracy, latency, and compute budget across a range of datasets and IoT platforms.
|confname=MobiSys'24
|link=https://dl.acm.org/doi/abs/10.1145/3643832.3661888
|title=CACTUS: Dynamically Switchable Context-aware micro-Classifiers for Efficient IoT Inference
|speaker=Zhenhua
|date=2025-04-18
}}
{{Latest_seminar
|abstract=Volumetric videos have emerged as an attractive multimedia application that provides a highly immersive watching experience, since viewers can adjust their viewports with six degrees of freedom. However, the point cloud frames composing such a video are prohibitively large, and effective compression techniques are needed. There are two classes of compression methods: one exploits conventional video codecs (2D-based methods), and the other compresses the points directly in 3D space (3D-based methods). Although 3D-based methods feature fast coding speeds, their compression ratios are low because they fail to leverage inter-frame redundancy. To resolve this problem, we design a patch-wise compression framework working in the 3D space. Specifically, we search for rigid moves of patches via the iterative closest point algorithm and construct a common geometric structure, followed by color compensation. We implement our decoder on a GPU platform so that real-time decoding and rendering are realized. Compared with GROOT, the state-of-the-art 3D-based compression method, our method reduces the bitrate by up to 5.98×. Moreover, by trimming invisible content, our scheme achieves a bandwidth demand comparable to V-PCC, the representative 2D-based method, in FoV-adaptive streaming.
|confname=TC'24
|link=https://ieeexplore.ieee.org/document/10360355
|title=A GPU-Enabled Real-Time Framework for Compressing and Rendering Volumetric Videos
|speaker=Mengfan
|date=2025-04-18
}}
{{Resource:Previous_Seminars}}


===Instructions===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and venue information.
* Copy the wiki code of the current latest seminar entries to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field |date=.
* Fill in every field of each latest seminar entry.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Template formats:
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
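
For example, moving the CACTUS talk listed above into the history would produce an entry roughly like the one below (field values copied from that entry; the abstract field is omitted here for brevity):

{{Hist_seminar
|confname=MobiSys'24
|link=https://dl.acm.org/doi/abs/10.1145/3643832.3661888
|title=CACTUS: Dynamically Switchable Context-aware micro-Classifiers for Efficient IoT Inference
|speaker=Zhenhua
|date=2025-04-18
}}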