Resource:Seminar

{{SemNote
|time='''2026-01-09 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}

{{Latest_seminar
|abstract = DistServe improves the performance of large language model (LLM) serving by disaggregating the prefill and decoding computation. Existing LLM serving systems colocate the two phases and batch the computation of prefill and decoding across all users and requests. We find that this strategy not only leads to strong prefill-decoding interference but also couples the resource allocation and parallelism plans for both phases. LLM applications often emphasize individual latency for each phase: time to first token (TTFT) for the prefill phase and time per output token (TPOT) of each request for the decoding phase. In the presence of stringent latency requirements, existing systems have to prioritize one latency over the other, or over-provision compute resources to meet both. DistServe assigns prefill and decoding computation to different GPUs, hence eliminating prefill-decoding interference. Given the application's TTFT and TPOT requirements, DistServe co-optimizes the resource allocation and parallelism strategy tailored for each phase. DistServe also places the two phases according to the serving cluster's bandwidth to minimize the communication caused by disaggregation. As a result, DistServe significantly improves LLM serving performance in terms of the maximum rate that can be served within both TTFT and TPOT constraints on each GPU. Our evaluations show that on various popular LLMs, applications, and latency requirements, DistServe can serve 7.4× more requests or meet 12.6× tighter SLOs than state-of-the-art systems, while staying within latency constraints for >90% of requests.
|confname = OSDI'24
|link = https://www.usenix.org/conference/osdi24/presentation/zhong-yinmin
|title = DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving
|speaker = Ruizheng
|date = 2026-01-09
}}
{{Latest_seminar
|abstract = In virtual machine (VM) allocation systems, caching repetitive and similar VM allocation requests and the associated resolution rules is crucial for reducing computational costs and meeting strict latency requirements. While modern allocation systems distribute requests among multiple allocator agents and use caching to improve performance, current schedulers often neglect the cache state and latency considerations when assigning each new request to an agent. Because the costs of cache hits and misses vary widely, and updating the caches carries its own processing overhead, simple load-balancing and cache-aware mechanisms result in high latencies. We introduce Kamino, a high-performance, latency-driven, cache-aware request scheduling system aimed at minimizing end-to-end latencies. Kamino employs a novel, theoretically grounded scheduling algorithm that uses partial indicators from the cache state to assign each new request to the agent with the lowest estimated latency. Evaluation of Kamino using a high-fidelity simulator on large-scale production workloads shows a 42% reduction in average request latencies. Our deployment of Kamino in the control plane of a large public cloud confirms these improvements, with a 33% decrease in cache miss rates and a 17% reduction in memory usage.
|confname = OSDI'25
|link = https://www.usenix.org/conference/osdi25/presentation/domingo
|title = Kamino: Efficient VM Allocation at Scale with Latency-Driven Cache-Aware Scheduling
|speaker = Chenli
|date = 2026-01-09
}}
{{Resource:Previous_Seminars}}

Instructions

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

    • Update the time and address information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date with |date= (see the worked example at the end of this section).
    • Fill in every field of the new latest seminar.
    • Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
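
As a worked example of the archiving steps above (a sketch based on the DistServe entry currently listed on this page; substitute the fields of whichever entry you are archiving), the converted Hist_seminar call would look like:

{{Hist_seminar
|confname=OSDI'24
|link=https://www.usenix.org/conference/osdi24/presentation/zhong-yinmin
|title=DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving
|speaker=Ruizheng
|date=2026-01-09
}}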