Resource:Previous Seminars
=== History ===
{{Hist_seminar
|abstract = Running deep neural networks (DNNs) on large-scale videos from widely distributed cameras presents two significant challenges. Firstly, video quality for analytical purposes is severely impacted by the camera deployment environment, which is termed Pixel Recession in this paper. Secondly, low-latency video streaming from the source camera to edge servers is greatly hindered by the rapid expansion of video traffic. Despite numerous efforts such as enhancing the video structure, uneven encoding, and filtering frames captured on camera, these methods have proven insufficient to address the challenges at hand. We propose Spliceosome, a novel video analytics system that effectively overcomes the pixel recession and streaming bottlenecks. In brief, Spliceosome 1) recovers from pixel recession by adaptive video knob (i.e., brightness and contrast) tuning in ARP (anchor region proposal) granularity, and 2) lowers the transmission volume by video thinning, which uses only single-channel information for video encoding. We implemented Spliceosome using only commercial off-the-shelf hardware. Our experimental results demonstrate that Spliceosome outperforms alternative designs by 4.71-14.47%, 40.94-58.71%, and 14.28% in detection accuracy, end-to-end delay, and DNN inference efficiency, respectively.
|confname =ToN'25
|link = https://ieeexplore.ieee.org/abstract/document/10843977
|title= Spliceosome: On-Camera Video Thinning and Tuning for Timely and Accurate Analytics
|speaker=Zhongwei Sun
|date=2025-11-28
}}{{Hist_seminar
|abstract = As Large Language Models (LLMs) continue to scale, optimizing their deployment requires efficient hardware and system co-design. However, current LLM performance evaluation frameworks fail to capture both chip-level execution details and system-wide behavior, making it difficult to assess realistic performance bottlenecks. In this work, we introduce ReaLLM, a trace-driven simulation framework designed to bridge the gap between detailed accelerator design and large-scale inference evaluation. Unlike prior simulators, ReaLLM integrates kernel profiling derived from detailed microarchitectural simulations with a new trace-driven end-to-end system simulator, enabling precise evaluation of parallelism strategies, batching techniques, and scheduling policies. To address the high computational cost of exhaustive simulations, ReaLLM constructs a precomputed kernel library based on hypothesized scenarios, interpolating results to efficiently explore a vast design space of LLM inference systems. Our validation against real hardware demonstrates the framework's accuracy, achieving an average end-to-end latency prediction error of only 9.1% when simulating inference tasks running on 4 NVIDIA H100 GPUs. We further use ReaLLM to evaluate popular LLMs' end-to-end performance across traces from different applications and identify key system bottlenecks, showing that modern GPU-based LLM inference is increasingly compute-bound rather than memory-bandwidth bound at large scale. Additionally, we significantly reduce simulation time with our precomputed kernel library by a factor of 6× for full simulations and 164× for workload SLO exploration. ReaLLM is open-source and available at https://github.com/bespoke-silicon-group/reallm.
|confname =ASAP'25
|link = https://ieeexplore.ieee.org/abstract/document/11113621
|title= ReaLLM: A Trace-Driven Framework for Rapid Simulation of Large-Scale LLM Inference
|speaker=JunZhe
|date=2025-11-21
}}{{Hist_seminar
|abstract =With the proliferation of mobile devices, spatial crowdsourcing has emerged as a promising paradigm for facilitating location-based services, encompassing various applications across academia and industries. Recently, pioneering works have attempted to infer workers' mobility patterns from historical data to improve the quality of task assignment. However, these studies have overlooked or under-examined issues such as the dynamic mobility patterns of crowd workers, especially in the context of newcomers, the misalignment between the objectives of mobility prediction and task assignment, and the effective utilization of predicted mobility patterns. In this paper, we investigate a problem we term Task Assignment in Mobility Prediction-aware Spatial Crowdsourcing (TAMP). To address the TAMP problem, we first propose a task-adaptive meta-learning algorithm, which trains a set of specific meta-knowledge for workers' mobility prediction models through game theory-based learning task clustering and meta-training within each cluster. Then, we design a task assignment-oriented loss function and develop a task assignment algorithm that incorporates prediction performance, prioritizing assignments with higher confidence of completion. Extensive experiments on real-world datasets validate that our proposed methods can effectively improve the quality of task assignment.
|confname =ICDE'25
|link = https://ieeexplore.ieee.org/document/11113007
|title= Effective Task Assignment in Mobility Prediction-Aware Spatial Crowdsourcing
|speaker= Zhenguo
|date=2025-11-21
}}{{Hist_seminar
|abstract = Entanglement distribution across remote distances is critical for many quantum applications. Currently, the de facto approach for remote entanglement distribution relies on optical fiber for on-the-ground entanglement distribution. However, the fiber-based approach is incapable of global-scale entanglement distribution due to intrinsic limitations. This paper investigates a new hybrid ground-satellite quantum network architecture (QuESat) for global-scale entanglement distribution, integrating an on-the-ground fiber network with a global-scale passive optical network built with low-Earth-orbit satellites. The satellite network provides dynamic construction of photon lightpaths based on near-vacuum beam guides constructed via adjustable arrays of lenses, forwarding photons from one ground station to another with very high efficiency over long distances compared to using fiber. To assess the feasibility and effectiveness of QuESat for global communication, we formulate lightpath provisioning and entanglement distribution problems, considering the orbital dynamics of satellites and the time-varying entanglement demands from ground users. A two-stage algorithm is developed to dynamically configure the beam guides and distribute entanglements, respectively. The algorithm combines randomized and deterministic rounding for lightpath provisioning to enable global connectivity, with optimal entanglement swapping for distributing entanglements to meet users' demands. By developing a ground-satellite quantum network simulator, QuESat achieves multi-fold improvements compared to repeater networks.
|confname = INFOCOM'25
|link = https://ieeexplore.ieee.org/document/11044649
|title= QuESat: Satellite-Assisted Quantum Internet for Global-Scale Entanglement Distribution
|speaker= Yaliang
|date=2025-11-07
}}{{Hist_seminar
|abstract =The global business of transnational enterprises demands geo-distributed databases, where the leader-follower-based consensus protocols are the key to guaranteeing consistency of replicas spread across regions. Compared with traditional databases running in a single data center, determining which node is the leader in consensus protocol has a greater performance impact in geo-distributed databases running across multiple data centers. However, the performance of legacy leader management is far from satisfactory due to the network and application dynamics (e.g., network delay, node popularity, operation read-write ratio). This paper proposes GeoLM toward performance-oriented leader management for geo-distributed consensus protocols. GeoLM captures the network and application dynamics and proactively conducts seamless leader handovers with bounded switching costs. Our geo-distributed experimental results show that GeoLM improves performance up to 49.75% over the baselines (e.g., Raft and Geo-Raft) and achieves considerably good performance compared to state-of-the-art consensus protocols (e.g., SwiftPaxos, CURP, and EPaxos).
|confname = INFOCOM'25
|link = https://ieeexplore.ieee.org/document/11044598
|title= GeoLM: Performance-oriented Leader Management for Geo-Distributed Consensus Protocol
|speaker= Linqi Liu
|date=2025-11-07
}}{{Hist_seminar
|abstract = Immersive telepresence has the potential to revolutionize remote communication by offering a highly interactive and engaging user experience. However, state-of-the-art systems exchange large volumes of 3D content to achieve satisfactory visual quality, resulting in substantial Internet bandwidth consumption. To tackle this challenge, we introduce MagicStream, a first-of-its-kind semantic-driven immersive telepresence system that effectively extracts and delivers compact semantic details of the captured 3D representation of users, instead of traditional bit-by-bit communication of raw content. To minimize bandwidth consumption while maintaining low end-to-end latency and high visual quality, MagicStream incorporates the following key innovations: (1) efficient extraction of users' skin/cloth color and motion semantics based on lighting characteristics and body keypoints, respectively; (2) novel, real-time human body reconstruction from motion semantics; and (3) on-the-fly neural rendering of users' immersive representation with color semantics. We implement a prototype of MagicStream and extensively evaluate its performance through both controlled experiments and user trials. Our results show that, compared to existing schemes, MagicStream can drastically reduce Internet bandwidth usage by up to 1195X while maintaining good visual quality.
|confname = SenSys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699344
|title= MagicStream: Bandwidth-conserving Immersive Telepresence via Semantic Communication
|speaker= Mengfan Wang
|date=2025-10-31
}}{{Hist_seminar
|abstract =To fulfill computing demands of numerous Internet of Things (IoT) devices in infrastructure-free regions, low earth orbit (LEO) satellite edge computing has been proposed in recent years, to circumvent the latency arising from long backhaul and link congestion in traditional cloud computing mode. This article proposes a novel time-varying graph-based collaborative task offloading strategy for LEO satellite IoT to reduce task computing latency. To this end, a computing coordinate graph (CCG) is designed to characterize the time-varying topology and resource distribution of LEO satellite networks. When a task is offloaded to LEO satellite networks because local computing capability is unable to meet latency constraint, the position of the task access satellite in the CCG is determined first. Then, the expanded hop counts from all satellite nodes to the access satellite are calculated, which informs the partitioning of different node sets. Afterwards, considering both link and on-board computing resources, with the access satellite as the reference node, the minimum total task computing latency for each node set is obtained in an ascending order of the expanded hop counts. Finally, the minimum one among obtained latency values is the anticipated total task computing latency. Simulation results demonstrate the effectiveness of the proposed task offloading strategy in reducing task computing latency.
|confname = IEEE Systems Journal
|link = https://ieeexplore.ieee.org/document/11024019
|title= Collaborative Task Offloading for LEO Satellite Internet of Things: A Novel Computing Coordinate Graph-Based Approach
|speaker= Yifei Zhou
|date=2025-10-31
}}
{{Hist_seminar
|abstract = Unlike traditional data collection applications (e.g., environment monitoring) that are dominated by uplink transmissions, the newly emerging applications (e.g., device actuation, firmware update, packet reception acknowledgement) also pose ever-increasing demands on downlink transmission capabilities. However, current LoRaWAN falls short in supporting such applications primarily due to downlink-uplink asymmetry. While the uplink can concurrently receive multiple packets, downlink transmission is limited to a single logical channel at a time, which fundamentally hinders the deployment of downlink-hungry applications. To tackle this practical challenge, FDLoRa develops the first-of-its-kind in-band full-duplex LoRa gateway design with novel solutions to mitigate the impact of self-interference (i.e., strong downlink interference to ultra-weak uplink reception), which unleashes the full spectrum for in-band downlink transmissions without compromising the reception of weak uplink packets. Built upon the full-duplex gateways, FDLoRa introduces a new downlink framework to support concurrent downlink transmissions over multiple logical channels of available gateways. Evaluation results demonstrate that FDLoRa boosts downlink capacity by 5.7x compared to LoRaWAN on a three-gateway testbed and achieves 2.58x higher downlink concurrency per gateway than the state-of-the-art.
|confname = SenSys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699338
|title= FDLoRa: Tackling Downlink-Uplink Asymmetry with Full-duplex LoRa Gateways
|speaker= Kai Chen
|date=2025-10-23
}}{{Hist_seminar
|abstract =Recent years have witnessed a widespread adoption of containers. While containers simplify and accelerate application development, existing container network technologies either incur significant overhead, which hurts performance for distributed applications, or lose flexibility or compatibility, which hinders the widespread deployment in production. We carefully analyze the kernel data path of an overlay network, quantifying the time consumed by each segment of the data path and identifying the extra overhead in an overlay network compared to bare metal. We observe that this extra overhead generates repetitive results among packets, which inspires us to introduce caches within an overlay network. We design and implement ONCache (Overlay Network Cache), a cache-based container overlay network, to eliminate the extra overhead while maintaining flexibility and compatibility. We implement ONCache using the extended Berkeley Packet Filter (eBPF) with only 524 lines of code, and integrate it as a plugin of Antrea. With ONCache, containers attain networking performance akin to that of bare metal. Compared to the standard overlay networks, ONCache improves throughput and request-response transaction rate by 12% and 36% for TCP (20% and 34% for UDP), respectively, while significantly reducing per-packet CPU overhead. Popular distributed applications also benefit from ONCache.
|confname = NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/lin-shengkai
|title= ONCache: A Cache-Based Low-Overhead Container Overlay Network
|speaker= Daobing Zeng
|date=2025-10-24
}}
{{Hist_seminar
|abstract = We present HyperCam, an energy-efficient image classification pipeline that enables computer vision tasks onboard low-power IoT camera systems. HyperCam leverages hyperdimensional computing to perform training and inference efficiently on low-power microcontrollers. We implement a low-power wireless camera platform using off-the-shelf hardware and demonstrate that HyperCam can achieve an accuracy of 93.60%, 84.06%, 92.98%, and 72.79% for MNIST, Fashion-MNIST, Face Detection, and Face Identification tasks, respectively, while significantly outperforming other classifiers in resource efficiency. Specifically, it delivers inference latency of 0.08-0.27s while using 42.91-63.00KB flash memory and 22.25KB RAM at peak. Among other machine learning classifiers such as SVM, XGBoost, MicroNets, MobileNetV3, and MCUNetV3, HyperCam is the only classifier that achieves competitive accuracy while maintaining a competitive memory footprint and inference latency that meets the resource requirements of low-power camera systems.
|confname = arXiv
|link = https://arxiv.org/html/2501.10547v1
|title= HyperCam: Low-Power Onboard Computer Vision for IoT Cameras
|speaker= Menghao Liu
|date=2025-10-17
}}{{Hist_seminar
|abstract = We present NIER, a video conferencing system that can adaptively maintain a low bitrate (e.g., 10–100 Kbps) with reasonable visual quality while being robust to packet losses. We use key-point-based deep image animation (DIA) as a key building block and address a series of networking and system challenges to make NIER practical. Our evaluations show that NIER significantly outperforms the baseline solutions.
|confname =SIGCOMM'25 (short paper)
|link = https://dl.acm.org/doi/pdf/10.1145/3718958.3750518
|title= NIER: Practical Neural-enhanced Low-bitrate Video Conferencing
|speaker=Xinyan Wang
|date=2025-09-26
}}{{Hist_seminar
|abstract = Distributed Edge Computing (DEC) has emerged as a novel paradigm, owing to its superior performance in communication latency, parallel computing efficiency, and energy consumption. With the surge of tasks in generative artificial intelligence, DEC faces higher demands for parallel computing efficiency. Scheduling multiple tasks for simultaneous processing, rather than one-by-one handling, could enhance parallel efficiency. Multiple tasks have multi-dependencies, i.e., sequence dependency, attribute similarity, and attribute correlation. Utilizing the bidirectional edges of traditional graphs to represent multi-dependencies can lead to an explosion in quantity. A hypergraph, with its hyperedges capable of connecting any number of vertices, can significantly alleviate this problem. However, multi-dependencies are rarely studied in current research, posing challenges in representing and capturing multi-dependency hypergraphs. In this work, we introduce HyperJet, a Joint communication and computation scheduling scheme for hypErgraph Tasks in DEC. To effectively represent multi-dependencies, we employ hypergraph construction to represent task attributes and utilize hypergraph partitioning to clarify and refine task attribute correlations, enhancing parallel efficiency. To address the challenge of capturing multi-dependencies, we employ a scheduling mechanism with a hypergraph neural network that efficiently acquires higher-order attribute-correlated information among convolution matrices, providing enriched contextual information on multi-dependencies that supports decision-making in scheduling tasks. Evaluations using real-world traces demonstrate an 18.07% improvement in the parallel efficiency of task scheduling.
|confname =INFOCOM'25
|link = https://ieeexplore.ieee.org/abstract/document/11044587
|title= HyperJet: Joint Communication and Computation Scheduling for Hypergraph Tasks in Distributed Edge Computing
|speaker= Yi Zhou
|date=2025-09-26
}}{{Hist_seminar
|abstract = Localization of networked nodes is an essential problem in emerging applications, including first-responder navigation, automated manufacturing lines, vehicular and drone navigation, asset tracking, Internet of Things, and 5G communication networks. In this paper, we present Locate3D, a novel system for peer-to-peer node localization and orientation estimation in large networks. Unlike traditional range-only methods, Locate3D introduces angle-of-arrival (AoA) data as an added network topology constraint. The system addresses three key challenges: using angles to reduce the number of measurements required by 4×, jointly using range and angle data for location estimation, and developing a spanning-tree approach for fast location updates that keeps the output graphs rigid and uniquely realizable, even in occluded or weakly connected areas. Locate3D cuts down latency by up to 75% without compromising accuracy, surpassing standard range-only solutions. It has a 0.86 meter median localization error for building-scale multi-floor networks (32 nodes, 0 anchors) and 12.09 meters for large-scale networks (100,000 nodes, 15 anchors).
|confname =NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/garg
|title= Large Network UWB Localization: Algorithms and Implementation
|speaker=Bangguo
|date=2025-09-26
}}
{{Hist_seminar
|abstract = With cloud-side computing and rendering, mobile cloud gaming (MCG) is expected to deliver high-quality gaming experiences to budget mobile devices. However, our measurement on representative MCG platforms reveals that even under good network conditions, all platforms exhibit high interactive latency of 112–403 ms, from a user-input action to its display response, which critically affects users' quality of experience. Moreover, jitters in network latency often lead to significant fluctuations in interactive latency. In this work, we collaborate with a commercial MCG platform to conduct the first in-depth analysis on the interactive latency of cloud gaming. We identify VSync, the synchronization primitive of the Android graphics pipeline, to be a key contributor to the excessive interactive latency; as many as five VSync events are intricately invoked, which serialize the complex graphics processing logic on both the client and cloud sides. To address this, we design an end-to-end VSync regulator, dubbed LoopTailor, which minimizes VSync events by decoupling game rendering from the lengthy cloud-side graphics pipeline and coordinating cloud game rendering directly with the client. We implement LoopTailor on the collaborated platform and commodity Android devices, reducing the interactive latency (by ∼34%) to stably below 100 ms.
}}

=== 2024 ===

=== 2023 ===

=== 2022 ===

=== 2021 ===

=== 2020 ===
* [Topic] [The path planning algorithm for multiple mobile edge servers in EdgeGO], Rong Cong, 2020-11-18

=== 2019 ===

=== 2018 ===

=== 2017 ===
=== Instructions ===

Please update this page using the Latest_seminar and Hist_seminar templates.

* Update the time and location information.
* Copy the code of the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=.
* Fill in every field of the latest seminar entry.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
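
For concreteness, here is a worked example of the conversion described above. All field values are placeholders made up for illustration (the link uses the fallback page address given above, and the date is an arbitrary example); only the template name and the appended |date= line differ between the two blocks. Entries in the History section additionally carry an |abstract= field, which can be copied over unchanged.

A placeholder entry on the seminar page:

{{Latest_seminar
|confname=EXAMPLE'25
|link=https://mobinets.org/index.php?title=Resource:Seminar
|title=Placeholder Paper Title
|speaker=Placeholder Speaker
}}

After the seminar has taken place, the same entry is archived on this page as:

{{Hist_seminar
|confname=EXAMPLE'25
|link=https://mobinets.org/index.php?title=Resource:Seminar
|title=Placeholder Paper Title
|speaker=Placeholder Speaker
|date=2025-12-05
}}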