Resource:Seminar
{{SemNote
|time='''2025-10-24 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


{{Latest_seminar
|abstract = Immersive telepresence has the potential to revolutionize remote communication by offering a highly interactive and engaging user experience. However, state-of-the-art systems exchange large volumes of 3D content to achieve satisfactory visual quality, resulting in substantial Internet bandwidth consumption. To tackle this challenge, we introduce MagicStream, a first-of-its-kind semantic-driven immersive telepresence system that effectively extracts and delivers compact semantic details of the captured 3D representation of users, instead of traditional bit-by-bit communication of raw content. To minimize bandwidth consumption while maintaining low end-to-end latency and high visual quality, MagicStream incorporates the following key innovations: (1) efficient extraction of the user's skin/cloth color and motion semantics based on lighting characteristics and body keypoints, respectively; (2) novel, real-time human body reconstruction from motion semantics; and (3) on-the-fly neural rendering of users' immersive representation with color semantics. We implement a prototype of MagicStream and extensively evaluate its performance through both controlled experiments and user trials. Our results show that, compared to existing schemes, MagicStream can drastically reduce Internet bandwidth usage by up to 1195X while maintaining good visual quality.
|confname = SenSys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699344
|title= MagicStream: Bandwidth-conserving Immersive Telepresence via Semantic Communication
|speaker= Mengfan Wang
|date=2025-10-31
}}{{Latest_seminar
|abstract = To fulfill the computing demands of numerous Internet of Things (IoT) devices in infrastructure-free regions, low earth orbit (LEO) satellite edge computing has been proposed in recent years to circumvent the latency arising from long backhaul and link congestion in the traditional cloud computing mode. This article proposes a novel time-varying graph-based collaborative task offloading strategy for LEO satellite IoT to reduce task computing latency. To this end, a computing coordinate graph (CCG) is designed to characterize the time-varying topology and resource distribution of LEO satellite networks. When a task is offloaded to the LEO satellite network because local computing capability cannot meet the latency constraint, the position of the task's access satellite in the CCG is determined first. Then, the expanded hop counts from all satellite nodes to the access satellite are calculated, which informs the partitioning of the nodes into different node sets. Afterwards, considering both link and on-board computing resources, and with the access satellite as the reference node, the minimum total task computing latency for each node set is obtained in ascending order of the expanded hop counts. Finally, the minimum among the obtained latency values is the anticipated total task computing latency. Simulation results demonstrate the effectiveness of the proposed task offloading strategy in reducing task computing latency.
|confname = Systems Journal
|link = https://ieeexplore.ieee.org/document/11024019
|title= Collaborative Task Offloading for LEO Satellite Internet of Things: A Novel Computing Coordinate Graph-Based Approach
|speaker= Yifei Zhou
|date=2025-10-31
}}
{{Resource:Previous_Seminars}}

Instructions

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

    • Update the time and address information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date information |date=.
    • Fill in each field of the latest seminar.
    • Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format notes (a filled-in example follows each template below)
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}
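
For illustration, the template might be filled in like this for the MagicStream talk listed above; the entries on this page additionally carry |abstract= and |date= fields, so they are shown here as well (abstract truncated):

{{Latest_seminar
|abstract = Immersive telepresence has the potential to revolutionize remote communication ... (full abstract text)
|confname = SenSys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699344
|title= MagicStream: Bandwidth-conserving Immersive Telepresence via Semantic Communication
|speaker= Mengfan Wang
|date=2025-10-31
}}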

    • Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
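
Once a talk has taken place, the same entry would be converted into a history entry by renaming the template to Hist_seminar and keeping the |date= field, roughly as sketched below with the fields listed above (presumably filed under the corresponding year alongside the previous seminars):

{{Hist_seminar
|confname = SenSys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699344
|title= MagicStream: Bandwidth-conserving Immersive Telepresence via Semantic Communication
|speaker= Mengfan Wang
|date=2025-10-31
}}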