Resource:Seminar

{{SemNote
|time='''2025-09-19 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


{{Latest_seminar
|abstract = With cloud-side computing and rendering, mobile cloud gaming (MCG) is expected to deliver high-quality gaming experiences to budget mobile devices. However, our measurement on representative MCG platforms reveals that even under good network conditions, all platforms exhibit high interactive latency of 112–403 ms, from a user-input action to its display response, that critically affects users’ quality of experience. Moreover, jitters in network latency often lead to significant fluctuations in interactive latency. In this work, we collaborate with a commercial MCG platform to conduct the first in-depth analysis on the interactive latency of cloud gaming. We identify VSync, the synchronization primitive of Android graphics pipeline, to be a key contributor to the excessive interactive latency; as many as five VSync events are intricately invoked, which serialize the complex graphics processing logic on both the client and cloud sides. To address this, we design an end-to-end VSync regulator, dubbed LoopTailor, which minimizes VSync events by decoupling game rendering from the lengthy cloud-side graphics pipeline and coordinating cloud game rendering directly with the client. We implement LoopTailor on the collaborated platform and commodity Android devices, reducing the interactive latency (by ∼34%) to stably below 100 ms.
|confname =NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/li-yang
|title= Dissecting and Streamlining the Interactive Loop of Mobile Cloud Gaming
|speaker= Li Chen
|date=2025-09-19
}}
{{Latest_seminar
|abstract = The local deployment of large language models (LLMs) on mobile devices has garnered increasing attention due to its advantages in enhancing user privacy and enabling offline operation. However, given the limited computational resources of a single mobile device, only small language models (SLMs) with restricted capabilities can currently be supported. In this paper, we explore the potential of leveraging the collective computing power of multiple mobile devices to collaboratively support more efficient local LLM inference. We evaluate the feasibility and efficiency of existing parallelism techniques under the constraints of mobile devices and wireless network, identifying that chunked pipeline parallelism holds promise for realizing this vision. Building on this insight, we propose FlexSpark, a novel solution designed to achieve efficient and robust multi-device collaborative inference. FlexSpark incorporates priority scheduling, ordered communication, and elastic compression to maximize wireless bandwidth utilization, and thus accelerates distributed inference. Preliminary experimental results demonstrate that FlexSpark achieves up to a 2× speedup compared to state-of-the-art frameworks, significantly enhancing the practicality and scalability of LLM deployment on mobile devices.
|confname =APNet'25
|link = https://dl.acm.org/doi/10.1145/3735358.3735368
|title= FlexSpark: Robust and Efficient Multi-Device Collaborative Inference over Wireless Network
|speaker=Ruizhen
|date=2025-09-19
}}
{{Resource:Previous_Seminars}}

Latest revision as of 18:03, 18 September 2025



Instructions

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

    • Update the time and address information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date with |date=.
    • Fill in each field of the latest seminar.
    • The link field must not be left empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}