Difference between revisions of "Resource:Seminar"

From MobiNetS
{{SemNote
|time='''2025-12-26 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


{{Latest_seminar
|abstract = Machine learning (ML) clusters stack multiple network interface cards (NICs) within each server to improve inter-server GPU communication bandwidth. However, existing systems fall short in fully utilizing NICs because of static GPU-NIC bindings. This leads to bottlenecks at hot-spot NICs when handling imbalanced communication in ML tasks. For example, large language model serving instances may have different communication demands across NICs; expert-parallel training tasks have imbalanced all-to-all traffic; and the embedding transmission volumes during recommendation model training vary across GPUs. To fully utilize all NICs, we propose FuseLink to enable efficient GPU communication over multiple NICs. FuseLink extends inter-server network by integrating high-speed intra-server connections, and leverages GPUs to efficiently relay traffic to idle NICs. We implement FuseLink and integrate it into NCCL, so that ML applications can benefit from FuseLink seamlessly without code modifications. Compared to NCCL, we demonstrate that FuseLink achieves up to 212GBps bandwidth between two inter-server GPUs and accelerates ML tasks with dynamic traffic patterns. Specifically, it reduces the latencies of first-token generation in LLM model servings by 1.04-2.73×, improves the training throughput of mixture-of-experts model by up to 1.3×, and accelerates deep learning recommendation model training by up to 1.2×.
|confname =OSDI'25
|link = https://www.usenix.org/conference/osdi25/presentation/ren
|title= Enabling Efficient GPU Communication over Multiple NICs with FuseLink
|speaker=Jiahao
|date=2025-12-26
}}
{{Latest_seminar
|abstract =Operating a quantum network incurs high capital and operational expenditures, which are expected to be compensated by the high value of enabled quantum applications. However, existing mechanisms mainly focus on maximizing the entanglement distribution rate and neglect the cost incurred on users. This paper aims to address how to utilize quantum network resources in a cost-efficient manner while sustaining high-quantity and high-quality entanglement distribution. We first consider how to establish a steady stream of entanglements between remote nodes with the minimum cost. Utilizing a recent flow-based abstraction and a novel graph representation, we design an optimal algorithm for min-cost remote entanglement distribution. Next, we consider distributing entanglements with the highest fidelity subject to a cost bound and prove its NP-hardness. To explore the cost-fidelity trade-off due to swapping and purification, we propose an approximation scheme for maximizing fidelity while satisfying an arbitrary cost bound. Our algorithms provide rigorous tools for supporting high-performance quantum network applications with financial consideration and offer strong theoretical guarantees. Extensive simulation results validate the advantageous performance in cost efficiency and/or fidelity compared to existing solutions and heuristics.
|confname =ToN'25
|link = https://ieeexplore.ieee.org/document/11153500
|title= Cost-Aware High-Fidelity Entanglement Distribution and Purification in the Quantum Internet
|speaker=Bangguo
|date=2025-12-26
}}
{{Resource:Previous_Seminars}}

Revision as of 20:24, 25 December 2025


History

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18


Instructions

Please update this page using the Latest_seminar and Hist_seminar templates.

    • Update the time and address information.
    • Copy the code of the current latest seminar section into this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=.
    • Fill in each field of the latest seminar.
    • Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format notes
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
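
As a worked illustration of the conversion step above (using field values taken from the current FuseLink entry on this page; the abstract field is omitted here, as in the template skeletons), a Latest_seminar entry archived as a Hist_seminar entry would look like:

{{Hist_seminar
|confname=OSDI'25
|link=https://www.usenix.org/conference/osdi25/presentation/ren
|title=Enabling Efficient GPU Communication over Multiple NICs with FuseLink
|speaker=Jiahao
|date=2025-12-26
}}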