{{SemNote
|time='''2025-10-24 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract=Immersive telepresence has the potential to revolutionize remote communication by offering a highly interactive and engaging user experience. However, state-of-the-art systems exchange large volumes of 3D content to achieve satisfactory visual quality, resulting in substantial Internet bandwidth consumption. To tackle this challenge, we introduce MagicStream, a first-of-its-kind semantic-driven immersive telepresence system that extracts and delivers compact semantic details of users' captured 3D representations, instead of the traditional bit-by-bit communication of raw content. To minimize bandwidth consumption while maintaining low end-to-end latency and high visual quality, MagicStream incorporates the following key innovations: (1) efficient extraction of users' skin/cloth color and motion semantics based on lighting characteristics and body keypoints, respectively; (2) novel, real-time human body reconstruction from motion semantics; and (3) on-the-fly neural rendering of users' immersive representation with color semantics. We implement a prototype of MagicStream and extensively evaluate its performance through both controlled experiments and user trials. Our results show that, compared to existing schemes, MagicStream can drastically reduce Internet bandwidth usage by up to 1195X while maintaining good visual quality.
|confname=SenSys '24
|link=https://dl.acm.org/doi/10.1145/3666025.3699344
|title=MagicStream: Bandwidth-conserving Immersive Telepresence via Semantic Communication
|speaker=Mengfan Wang
|date=2025-10-31
}}
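The bandwidth savings MagicStream claims follow from how little a semantic description weighs relative to a raw point-cloud stream. The sketch below is only a back-of-the-envelope illustration of that gap; every size, name, and constant in it is an assumption for illustration, not MagicStream's actual pipeline or API.

<syntaxhighlight lang="python">
# Rough comparison of raw point-cloud streaming vs. sending only
# motion + color semantics. All sizes below are illustrative
# assumptions, not measurements from MagicStream.

NUM_POINTS = 200_000        # points per captured 3D frame (assumed)
BYTES_PER_POINT = 15        # xyz as 3 x 4-byte floats + rgb as 3 bytes
NUM_KEYPOINTS = 25          # body keypoints per frame (assumed)
BYTES_PER_KEYPOINT = 12     # xyz as 3 x 4-byte floats
COLOR_SEMANTIC_BYTES = 64   # compact skin/cloth color descriptor (assumed)
FPS = 30

def raw_bitrate_mbps() -> float:
    """Bit rate of shipping the full point cloud every frame."""
    return NUM_POINTS * BYTES_PER_POINT * 8 * FPS / 1e6

def semantic_bitrate_mbps() -> float:
    """Bit rate of shipping only keypoints plus a color descriptor."""
    payload = NUM_KEYPOINTS * BYTES_PER_KEYPOINT + COLOR_SEMANTIC_BYTES
    return payload * 8 * FPS / 1e6

raw, sem = raw_bitrate_mbps(), semantic_bitrate_mbps()
print(f"raw point cloud: {raw:9.1f} Mbps")
print(f"semantics only : {sem:9.3f} Mbps")
print(f"reduction      : {raw / sem:9.0f}x")
</syntaxhighlight>

The receiver must then reconstruct and re-render the body from those semantics, which is exactly what the three innovations listed in the abstract provide.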
{{Latest_seminar
|abstract=To fulfill the computing demands of numerous Internet of Things (IoT) devices in infrastructure-free regions, low Earth orbit (LEO) satellite edge computing has been proposed in recent years to circumvent the latency arising from long backhauls and link congestion in the traditional cloud computing mode. This article proposes a novel time-varying graph-based collaborative task offloading strategy for LEO satellite IoT to reduce task computing latency. To this end, a computing coordinate graph (CCG) is designed to characterize the time-varying topology and resource distribution of LEO satellite networks. When a task is offloaded to the LEO satellite network because local computing capability cannot meet the latency constraint, the position of the task's access satellite in the CCG is determined first. Then, the expanded hop counts from all satellite nodes to the access satellite are calculated, which informs the partitioning of the nodes into different sets. Afterwards, considering both link and on-board computing resources, with the access satellite as the reference node, the minimum total task computing latency for each node set is obtained in ascending order of the expanded hop counts. Finally, the minimum among the obtained latency values is the anticipated total task computing latency. Simulation results demonstrate the effectiveness of the proposed task offloading strategy in reducing task computing latency.
|confname=IEEE Systems Journal
|link=https://ieeexplore.ieee.org/document/11024019
|title=Collaborative Task Offloading for LEO Satellite Internet of Things: A Novel Computing Coordinate Graph-Based Approach
|speaker=Yifei Zhou
|date=2025-10-31
}}
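For readers unfamiliar with the hop-count-ordered search this abstract describes, the following is a minimal sketch under simplified assumptions: a static toy topology and a linear per-hop delay model. The paper's actual CCG construction and latency model are richer; all names and parameter values here are illustrative.

<syntaxhighlight lang="python">
# Minimal sketch of hop-count-ordered offloading: BFS from the access
# satellite gives each node's expanded hop count, then candidates are
# scanned in ascending hop order for the minimum total latency.
from collections import deque

def hop_counts(adj: dict[int, list[int]], access: int) -> dict[int, int]:
    """BFS hop distance from every satellite to the access satellite."""
    dist = {access: 0}
    queue = deque([access])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def best_offload(adj, access, compute_s, task_cycles, per_hop_s):
    """Pick the satellite minimizing transfer + compute latency.

    compute_s[v]: seconds per cycle on node v (on-board resources);
    per_hop_s:    forwarding delay per inter-satellite hop (link resources).
    """
    dist = hop_counts(adj, access)
    best = None
    # Scan node sets in ascending order of expanded hop count,
    # keeping the running minimum total latency.
    for v, h in sorted(dist.items(), key=lambda kv: kv[1]):
        total = h * per_hop_s + task_cycles * compute_s[v]
        if best is None or total < best[1]:
            best = (v, total)
    return best

# Toy 6-satellite topology (assumed): a ring with one cross link.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
compute_s = {0: 5e-9, 1: 2e-9, 2: 1e-9, 3: 4e-9, 4: 2e-9, 5: 3e-9}
node, latency = best_offload(adj, access=0, compute_s=compute_s,
                             task_cycles=2e8, per_hop_s=0.02)
print(f"offload to satellite {node}, est. latency {latency:.3f} s")
</syntaxhighlight>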
 
{{Resource:Previous_Seminars}}


===Instructions===

Please update the information on this page using the Latest_seminar and Hist_seminar templates:

* Update the time and venue information.
* Copy the code of the current latest-seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date information with |date= (a helper sketch for this conversion follows the format reference below).
* Fill in every field of the latest seminar.
* Never leave |link= empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar

Format reference:

* Latest_seminar:

{{Latest_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
}}

* Hist_seminar:

{{Hist_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
|date=
}}
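For convenience, here is a hypothetical helper for the conversion step above. The template names match this page, but the script itself is only an illustrative sketch, not an official tool; it assumes no nested templates inside an entry.

<syntaxhighlight lang="python">
# Turn {{Latest_seminar ...}} blocks into {{Hist_seminar ...}} blocks,
# appending a |date= field to any entry that lacks one.
import re

def latest_to_hist(wikitext: str, date: str) -> str:
    """Rewrite Latest_seminar transclusions as Hist_seminar with a date."""
    out = wikitext.replace("{{Latest_seminar", "{{Hist_seminar")

    def add_date(match: re.Match) -> str:
        block = match.group(0)
        if "|date=" not in block:
            # Insert the date just before the closing braces.
            block = block.replace("}}", f"|date={date}}}}}", 1)
        return block

    return re.sub(r"\{\{Hist_seminar.*?\}\}", add_date, out, flags=re.S)

# Example: prints {{Hist_seminar\n|title=Example\n|date=2025-10-31}}
print(latest_to_hist("{{Latest_seminar\n|title=Example\n}}", "2025-10-31"))
</syntaxhighlight>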