Resource:Previous Seminars

=== History ===
{{Hist_seminar
|abstract = Running deep neural networks (DNNs) on large-scale videos from widely distributed cameras presents two significant challenges. Firstly, video quality for analytical purposes is severely impacted by the camera deployment environment, which is termed Pixel Recession in this paper. Secondly, low-latency video streaming from the source camera to edge servers is greatly hindered by the rapid expansion of video traffic. Despite numerous efforts such as enhancing the video structure, uneven encoding, and filtering frames captured on camera, these methods have proven insufficient to address the challenges at hand. We propose Spliceosome, a novel video analytics system that effectively overcomes the pixel recession and streaming bottlenecks. In brief, Spliceosome 1) recovers from pixel recession by adaptive video knobs (i.e., brightness and contrast) tuning in ARP (anchor region proposal) granularity, and 2) lowers the transmission volume by video thinning, which uses only single-channel information for video encoding. We implemented Spliceosome using only commercial off-the-shelf hardware. Our experimental results demonstrate that Spliceosome outperforms other alternative designs by 4.71-14.47%, 40.94-58.71%, and 14.28% in detection accuracy, end-to-end delay, and efficiency of DNNs inference, respectively.
|confname =ToN'25
|link = https://ieeexplore.ieee.org/abstract/document/10843977
|title= Spliceosome: On-Camera Video Thinning and Tuning for Timely and Accurate Analytics
|speaker=Zhongwei Sun
|date=2025-11-28
}}{{Hist_seminar
|abstract =The rapid expansion of large language models (LLMs) requires the development of extensive GPU clusters, with companies deploying clusters with tens to hundreds of thousands of GPUs. This growth significantly expands the design space for LLM training systems, requiring thorough exploration of different parallelization strategies, communication parameters, congestion control, fabric topology, etc. Current methods require up to 10k simulation experiments to identify optimal configurations, with inadequate exploration leading to significant degradation of training performance. In this paper, we tackle the overlooked problem of efficiently conducting parallel simulation experiments for design space exploration. Our
}}{{Hist_seminar
|abstract = As Large Language Models (LLMs) continue to scale, optimizing their deployment requires efficient hardware and system co-design. However, current LLM performance evaluation frameworks fail to capture both chip-level execution details and system-wide behavior, making it difficult to assess realistic performance bottlenecks. In this work, we introduce ReaLLM, a trace-driven simulation framework designed to bridge the gap between detailed accelerator design and large-scale inference evaluation. Unlike prior simulators, ReaLLM integrates kernel profiling derived from detailed microarchitectural simulations with a new trace-driven end-to-end system simulator, enabling precise evaluation of parallelism strategies, batching techniques, and scheduling policies. To address the high computational cost of exhaustive simulations, ReaLLM constructs a precomputed kernel library based on hypothesized scenarios, interpolating results to efficiently explore a vast design space of LLM inference systems. Our validation against real hardware demonstrates the framework's accuracy, achieving an average end-to-end latency prediction error of only 9.1% when simulating inference tasks running on 4 NVIDIA H100 GPUs. We further use ReaLLM to evaluate popular LLMs' end-to-end performance across traces from different applications and identify key system bottlenecks, showing that modern GPU-based LLM inference is increasingly compute-bound rather than memory-bandwidth bound at large scale. Additionally, we significantly reduce simulation time with our precomputed kernel library by a factor of 6× for full-simulations and 164× for workload SLO exploration. ReaLLM is open-source and available at https://github.com/bespoke-silicon-group/reallm.
|confname =ASAP'25
}}


==== 2024 ====

==== 2023 ====

==== 2022 ====

==== 2021 ====

==== 2020 ====

* [Topic] [ The path planning algorithm for multiple mobile edge servers in EdgeGO], Rong Cong, 2020-11-18

==== 2019 ====

==== 2018 ====

==== 2017 ====

=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and location information.
* Copy the code from the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=.
* Fill in each field of the latest seminar.
* Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format notes
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    ** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
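
For illustration, here is a hypothetical before-and-after conversion following the steps above. All field values (title, speaker, conference, dates) are invented placeholders, not a real seminar; the link uses this page's fallback address as the instructions require.

{{Latest_seminar
|abstract = One-sentence summary of the example paper.
|confname = Conf'25
|link = https://mobinets.org/index.php?title=Resource:Seminar
|title = An Example Paper Title
|speaker = A. Speaker
}}

After the seminar is held, the same entry is moved to this page as:

{{Hist_seminar
|abstract = One-sentence summary of the example paper.
|confname = Conf'25
|link = https://mobinets.org/index.php?title=Resource:Seminar
|title = An Example Paper Title
|speaker = A. Speaker
|date=2025-12-05
}}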