Resource:Seminar

{{SemNote
|time='''2026-01-23 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}

{{Latest_seminar
|abstract = Object detection, a fundamental task in computer vision, is crucial for various intelligent edge computing applications. However, object detection algorithms are usually heavy in computation, hindering their deployments on resource-constrained edge devices. Traditional edge-cloud collaboration schemes, like deep neural network (DNN) partitioning across edge and cloud, are unfit for object detection due to the significant communication costs incurred by the large size of intermediate results. To this end, we propose a Difficult-Case based Small-Big model (DCSB) framework. It employs a difficult-case discriminator on the edge device to control data transfer between the small model on the edge and the large model in the cloud. We also adopt regional sampling to further reduce the bandwidth consumption and create a discriminator zoo to accommodate the varying networking conditions. Additionally, we extend DCSB to video tasks by developing an adaptive sampling rate update algorithm, aiming to minimize computational demands without sacrificing detection accuracy. Extensive experiments show that DCSB can detect 97.26%-97.96% objects while saving 74.37%-82.23% network bandwidth, compared to cloud-only methods. Furthermore, DCSB significantly outperforms the latest DNN partitioning methods, reducing inference time by 92.60%-95.10% given an 8Mbps transmission bandwidth. In video tasks, DCSB matches the detection accuracy of leading video analysis methods while cutting the computational overhead by 40%.
|confname = TMC'25
|link = https://ieeexplore.ieee.org/document/10705683
|title= Edge-Cloud Collaborated Object Detection via Bandwidth Adaptive Difficult-Case Discriminator
|speaker=Menghao Liu
|date=2026-01-23
}}
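
The small-big gating idea in the DCSB abstract reduces to a simple dispatch pattern. Below is a minimal Python sketch assuming a plain confidence-threshold discriminator; the paper's actual discriminator is learned and picked from a zoo according to network conditions, and all names here are hypothetical placeholders:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    box: tuple        # (x1, y1, x2, y2) region within the frame
    label: str
    score: float      # detector confidence in [0, 1]

def dcsb_detect(frame, small_model: Callable, big_model: Callable,
                thresh: float = 0.5) -> List[Detection]:
    # Run the lightweight model on the edge device first.
    dets = small_model(frame)
    # Difficult-case discriminator: here, low confidence marks a case as
    # difficult (a stand-in for the paper's learned discriminator zoo).
    easy = [d for d in dets if d.score >= thresh]
    hard = [d for d in dets if d.score < thresh]
    if not hard:
        return easy                          # edge-only path: nothing uploaded
    # Regional sampling: ship only the difficult regions to the cloud,
    # not the whole frame, to save bandwidth.
    regions = [d.box for d in hard]
    return easy + big_model(frame, regions)  # big model resolves hard cases

The escalation condition is the knob that trades bandwidth for accuracy: a lower threshold keeps more work on the edge, a higher one escalates more regions to the cloud.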
{{Latest_seminar
|abstract =Video conferencing systems suffer from poor user experience when network conditions deteriorate because current video codecs simply cannot operate at extremely low bitrates. Recently, several neural alternatives have been proposed that reconstruct talking head videos at very low bitrates using sparse representations of each frame such as facial landmark information. However, these approaches produce poor reconstructions in scenarios with major movement or occlusions over the course of a call, and do not scale to higher resolutions. We design Gemino, a new neural compression system for video conferencing based on a novel high-frequency-conditional super-resolution pipeline. Gemino upsamples a very low-resolution version of each target frame while enhancing high-frequency details (e.g., skin texture, hair, etc.) based on information extracted from a single high-resolution reference image. We use a multi-scale architecture that runs different components of the model at different resolutions, allowing it to scale to resolutions comparable to 720p, and we personalize the model to learn specific details of each person, achieving much better fidelity at low bitrates. We implement Gemino atop aiortc, an open-source Python implementation of WebRTC, and show that it operates on 1024x1024 videos in real-time on a Titan X GPU, and achieves 2.2–5x lower bitrate than traditional video codecs for the same perceptual quality.
|confname =NSDI'24
|link = https://www.usenix.org/conference/nsdi24/presentation/sivaraman
|title= Gemino: Practical and Robust Neural Compression for Video Conferencing
|speaker=Xinyan
|date=2026-01-23
}}
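
Gemino's pipeline splits into a sender that transmits only a heavily downsampled frame and a receiver that super-resolves it conditioned on one high-resolution reference image. A minimal Python sketch with the neural pieces passed in as callables (all names are hypothetical stand-ins, not the paper's API):

from typing import Callable

def sender(frame, downsample: Callable, codec_encode: Callable,
           factor: int = 8):
    # Send only a very low-resolution version of each target frame,
    # which a conventional codec can compress to an extremely low bitrate.
    return codec_encode(downsample(frame, factor))

def receiver(bitstream, reference, codec_decode: Callable,
             sr_model: Callable):
    low_res = codec_decode(bitstream)
    # The super-resolution model restores high-frequency detail (skin
    # texture, hair, ...) from a single high-resolution reference image
    # sent once at call start, so per-frame bitrate stays tiny.
    return sr_model(low_res, reference)

The key design choice is that high-frequency content never travels per frame: it is amortized into the one-time reference image, which is why the scheme stays robust at bitrates where standard codecs break down.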
{{Resource:Previous_Seminars}}


History

2024

2023

2022

2021

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

2019

2018

2017

Instructions

Please use the Latest_seminar and Hist_seminar templates to update this page.

    • Update the time and address information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=.
    • Fill in every field of the new latest seminar entries.
    • Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
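
For example, when this week's first entry is archived, its {{Latest_seminar...}} block above becomes:

{{Hist_seminar
|confname=TMC'25
|link=https://ieeexplore.ieee.org/document/10705683
|title=Edge-Cloud Collaborated Object Detection via Bandwidth Adaptive Difficult-Case Discriminator
|speaker=Menghao Liu
|date=2026-01-23
}}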