Resource:Previous Seminars

=== History ===
{{Hist_seminar
|abstract = Reconfigurable Intelligent Surfaces (RIS) are a promising technology for creating smart radio environments by controlling wireless propagation. However, several factors hinder the integration of RIS technology into existing cellular networks, including the incompatibility of RIS control interfaces with 5G PHY/MAC procedures for synchronizing radio scheduling decisions and RIS operation, and the cost and energy limitations of passive RIS technology. This paper presents RISENSE, a system for practical RIS integration in cellular networks. First, we propose a novel, low-cost, and low-power RIS design capable of decoding control messages without complex baseband operations or additional RF chains, utilizing a power sensor and a network of microstrip lines and couplers. Second, we design an effective in-band wireless RIS control interface, compatible with 5G PHY/MAC procedures, that embeds amplitude-modulated (AM) RIS control commands directly into standard OFDM-modulated 5G data channels. Finally, we propose a low-overhead protocol that supports swift on-demand RIS reconfigurability, making it adaptable to varying channel conditions and user mobility, while minimizing the wastage of 5G OFDM symbols. Our experiments validate the design of RISENSE and our evaluation shows that our system can reconfigure a RIS at the same pace as users move, boosting 5G coverage where static or slow RIS controllers cannot.
|confname = Mobisys'25
|link = https://dspace.networks.imdea.org/handle/20.500.12761/1925
|title= RISENSE: Long-Range In-Band Wireless Control of Passive Reconfigurable Intelligent Surfaces
|speaker= Haifeng
|date=2025-09-12
}}
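As a rough, hypothetical illustration of the in-band control idea (not RISENSE's actual waveform or hardware), the Python sketch below scales whole OFDM symbols up or down so that a plain power detector, with no FFT or RF chain, can recover an amplitude-modulated control word; the 16-bit command, modulation depth, and thresholding are our own assumptions.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def ofdm_symbol(n_subcarriers=64):
    """One random QPSK OFDM symbol in the time domain."""
    bits = rng.integers(0, 4, n_subcarriers)
    return np.fft.ifft(np.exp(1j * (np.pi / 4 + np.pi / 2 * bits)))

def embed_am_command(bits, depth=0.3):
    """Scale whole OFDM symbols so each symbol's average power carries one bit."""
    return [ofdm_symbol() * (1 + depth if b else 1 - depth) for b in bits]

def power_detector(symbols):
    """Envelope/power detection only: no FFT, no channel estimation, no RF chain."""
    power = np.array([np.mean(np.abs(s) ** 2) for s in symbols])
    threshold = (power.min() + power.max()) / 2
    return (power > threshold).astype(int)

command = rng.integers(0, 2, 16)          # hypothetical 16-bit RIS control word
recovered = power_detector(embed_am_command(command))
print("command recovered correctly:", np.array_equal(recovered, command))
</syntaxhighlight>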
{{Hist_seminar
|abstract = Traditional 3D content representations include dense point clouds that consume large amounts of data and hence network bandwidth, while newer representations such as neural radiance fields suffer from poor frame rates due to their non-standard volumetric rendering pipeline. 3D Gaussian splats (3DGS) can be seen as a generalization of point clouds that meet the best of both worlds, with high visual quality and efficient rendering for real-time frame rates. However, delivering 3DGS scenes from a hosting server to client devices is still challenging due to high network data consumption (e.g., 1.5 GB for a single scene). The goal of this work is to create an efficient 3D content delivery framework that allows users to view high quality 3D scenes with 3DGS as the underlying data representation. The main contributions of the paper are: (1) Creating new layered 3DGS scenes for efficient delivery, (2) Scheduling algorithms to choose what splats to download at what time, and (3) Trace-driven experiments from users wearing virtual reality headsets to evaluate the visual quality and latency. Our system for Layered 3D Gaussian Splats delivery (L3GS) demonstrates high visual quality, achieving 16.9% higher average SSIM compared to baselines, and also works with other compressed 3DGS representations. The code is available at https://github.com/mavens-lab/layered_3d_gaussian_splats.
|confname =Mobicom'25
|link = https://arxiv.org/html/2504.05517v1
|title= L3GS: Layered 3D Gaussian Splats for Efficient 3D Scene Delivery
|speaker=Jiyi
|date=2025-09-12
}}
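To make the scheduling contribution concrete, here is a minimal sketch of a greedy layer-download scheduler under a per-interval bandwidth budget; the scene objects, layer sizes, and SSIM gains are invented for illustration, and this is not the paper's algorithm.
<syntaxhighlight lang="python">
# Each scene object exposes layers 0..K; layer k is only useful after 0..k-1.
objects = {
    "statue": [(40e6, 0.50), (30e6, 0.25), (25e6, 0.10)],  # (bytes, SSIM gain)
    "room":   [(60e6, 0.40), (35e6, 0.20)],
    "plant":  [(10e6, 0.15), (8e6, 0.05)],
}

def schedule(budget_bytes, visible):
    """Return an ordered list of (object, layer) downloads for one interval."""
    next_layer = {o: 0 for o in visible}      # layered prerequisite order
    plan, spent = [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for o in visible:
            k = next_layer[o]
            if k >= len(objects[o]):
                continue
            size, gain = objects[o][k]
            if spent + size <= budget_bytes and gain / size > best_ratio:
                best, best_ratio = o, gain / size
        if best is None:                      # budget spent or nothing left
            return plan
        size, _ = objects[best][next_layer[best]]
        plan.append((best, next_layer[best]))
        next_layer[best] += 1
        spent += size

print(schedule(budget_bytes=100e6, visible=["statue", "room", "plant"]))
</syntaxhighlight>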
{{Hist_seminar
|abstract = This year, we are embracing the exciting new trends in AIoT including MLsys, LLMs, embodied perception, volumetric videos, etc. Papers collected from top venues in 2025 will be discussed in-depth, and research problems and new ideas are to be discovered!
|confname = Start of the new semester
|link = https://mobinets.cn/site/Resource:Paper_Carnival_2025
|title= Paper Carnival 2025
|speaker=All
|date=2025-08-27
}}
{{Hist_seminar
|abstract = In the metaverse era, point cloud video (PCV) streaming on mobile XR devices is pivotal. While most current methods focus on PCV compression from traditional 3-DoF video services, emerging AI techniques extract vital semantic information, producing content resembling the original. However, these are early-stage and computationally intensive. To enhance the inference efficacy of AI-based approaches, accommodate dynamic environments, and facilitate applicability to metaverse XR devices, we present ISCom, an interest-aware semantic communication scheme for lightweight PCV streaming. ISCom is featured with a region-of-interest (ROI) selection module, a lightweight encoder-decoder training module, and a learning-based scheduler to achieve real-time PCV decoding and rendering on resource-constrained devices. ISCom’s dual-stage ROI selection significantly reduces data volume according to real-time interest. The lightweight PCV encoder-decoder training is tailored to resource-constrained devices and adapts to the heterogeneous computing capabilities of devices. Furthermore, we provide a deep reinforcement learning (DRL)-based scheduler to adaptively select the optimal encoder-decoder model for various devices, considering the dynamic network environments and device computing capabilities. Our extensive experiments demonstrate that ISCom outperforms baselines on mobile devices, achieving a minimum rendering frame rate improvement of 10 FPS and up to 22 FPS. Furthermore, our method significantly reduces memory usage by 41.7% compared to the state-of-the-art AITransfer method. These results highlight the effectiveness of ISCom in enabling lightweight PCV streaming and its potential to improve immersive experiences for emerging metaverse applications.
|confname =JSAC'24
|link = https://dl.acm.org/doi/10.1109/JSAC.2023.3345430
|title= ISCom: Interest-Aware Semantic Communication Scheme for Point Cloud Video Streaming on Metaverse XR Devices
|speaker=Jiyi
|date=2025-06-13
}}
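A toy viewport filter over a synthetic point cloud gives a feel for why interest-aware ROI selection cuts data volume; the field-of-view and distance thresholds below are arbitrary assumptions, not ISCom's dual-stage module.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(-5, 5, size=(100_000, 3))      # synthetic PCV frame

def select_roi(points, view_pos, view_dir, fov_deg=90.0, max_dist=4.0):
    """Keep points inside the viewer's cone of interest and within max_dist."""
    v = points - view_pos
    dist = np.linalg.norm(v, axis=1)
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos_angle = (v @ view_dir) / np.maximum(dist, 1e-9)
    keep = (dist < max_dist) & (cos_angle > np.cos(np.radians(fov_deg / 2)))
    return points[keep]

roi = select_roi(points, view_pos=np.zeros(3), view_dir=np.array([1.0, 0, 0]))
print(f"kept {len(roi)} of {len(points)} points "
      f"({100 * len(roi) / len(points):.1f}%) for encoding")
</syntaxhighlight>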
{{Hist_seminar
|abstract = Scientific Illustration Tutorial
|confname = TUTORIAL
|link = https://mobinets.cn/Resource:Seminar
|title= Idea share
|speaker=OldBee
|date=2025-06-13
}}
{{Hist_seminar
|abstract = Deploying deep convolutional neural networks (CNNs) for edge-based video analytics poses significant challenges due to the intensive computing demands. Model partitioning has emerged as a promising solution by offloading segments of CNNs to multiple proximal edge devices for collaborative inference. However, this approach often incurs substantial cross-device transmission overhead, particularly in handling intermediate feature maps. To address these limitations, we propose ReDream (REsidual feature-DRivEn mixed spArse coding for Model partitioning), a novel edge-centric video analytics framework that jointly optimizes  transmission efficiency and inference accuracy. ReDream introduces two key innovations: 1) It enhances the sparsity of intermediate features by replacing activation functions with ReLU in selected CNN layers and retraining, thereby increasing the proportion of zero-valued elements. 2) It leverages the heterogeneous distribution of feature data across layers by applying a mixed sparse coding scheme, i.e., selecting different compression methods adaptively to optimize model partitioning. These optimizations enable ReDream to support more efficient cross-device inference while maintaining high model accuracy, making it well-suited for real-time deployment in collaborative edge environments.
|confname = IDEA
|link = https://mns.uestc.cn/wiki/Research:InProgress/MixedSparseCoding
|title= ReDream: Residual Feature-Driven Mixed Sparse Coding for Model Partitioning
|speaker=Xianyang
|date=2025-05-23
}}
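A minimal sketch of the underlying intuition, assuming a made-up 0.6 sparsity threshold and a naive index-value sparse format rather than ReDream's actual mixed sparse coding: measure the zero fraction of the ReLU feature map at the candidate partition point and pick the cheaper encoding.
<syntaxhighlight lang="python">
import numpy as np

def choose_coding(feature_map, sparse_threshold=0.6):
    """Pick a per-layer coding scheme from the measured zero fraction."""
    zeros = np.count_nonzero(feature_map == 0) / feature_map.size
    if zeros >= sparse_threshold:
        # transmit only index+value pairs for the non-zeros
        idx = np.flatnonzero(feature_map)
        payload = idx.astype(np.uint32).nbytes + feature_map.flat[idx].nbytes
        return "sparse", zeros, payload
    return "dense", zeros, feature_map.nbytes

rng = np.random.default_rng(0)
# ReLU output of a hypothetical partition layer: most activations are zero
fmap = np.maximum(rng.normal(-1.0, 1.0, (64, 56, 56)).astype(np.float32), 0)
scheme, sparsity, size = choose_coding(fmap)
print(f"{scheme} coding, sparsity={sparsity:.2f}, ~{size / 1e6:.2f} MB to transmit")
</syntaxhighlight>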
{{Hist_seminar
|abstract = While existing strategies to execute deep learning-based classification on low-power platforms assume the models are trained on all classes of interest, this paper posits that adopting context-awareness, i.e., narrowing down a classification task to the current deployment context consisting of only recent inference queries, can substantially enhance performance in resource-constrained environments. We propose a new paradigm, CACTUS, for scalable and efficient context-aware classification where a micro-classifier recognizes a small set of classes relevant to the current context and, when a context change happens (e.g., a new class comes into the scene), rapidly switches to another suitable micro-classifier. CACTUS features several innovations, including optimizing the training cost of context-aware classifiers, enabling on-the-fly context-aware switching between classifiers, and balancing context switching costs and performance gains via simple yet effective switching policies. We show that CACTUS achieves significant benefits in accuracy, latency, and compute budget across a range of datasets and IoT platforms.
|confname = Mobisys'24
|link = https://dl.acm.org/doi/abs/10.1145/3643832.3661888
|title= CACTUS: Dynamically Switchable Context-aware micro-Classifiers for Efficient IoT Inference
|speaker= Zhenhua
|date=2025-04-18
}}
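The switching-cost trade-off can be pictured with a toy policy and made-up latency numbers; these are our own assumptions for illustration, not the paper's measurements or policy.
<syntaxhighlight lang="python">
from collections import deque

SWITCH_COST_MS = 30      # assumed cost of loading a new micro-classifier
INFER_MICRO_MS = 5       # assumed micro-classifier latency
INFER_FULL_MS = 20       # assumed latency of an all-class model
CONTEXT_SIZE = 3         # classes a micro-classifier covers

def run(stream):
    """Total latency when a context miss triggers a micro-classifier switch."""
    context, latency = deque(maxlen=CONTEXT_SIZE), 0
    for label in stream:
        latency += INFER_MICRO_MS
        if label not in context:          # context change detected
            latency += SWITCH_COST_MS     # load a micro-classifier for the new set
            context.append(label)
    return latency

stream = ["cat", "cat", "dog", "cat", "dog", "car", "car", "car", "cat"]
print("micro-classifiers with switching:", run(stream), "ms")
print("all-class model baseline        :", len(stream) * INFER_FULL_MS, "ms")
</syntaxhighlight>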
{{Hist_seminar
|abstract = Nowadays, volumetric videos have emerged as an attractive multimedia application providing highly immersive watching experiences since viewers could adjust their viewports at 6 degrees-of-freedom. However, the point cloud frames composing the video are prohibitively large, and effective compression techniques should be developed. There are two classes of compression methods. One suggests exploiting the conventional video codecs (2D-based methods) and the other proposes to compress the points in 3D space directly (3D-based methods). Though the 3D-based methods feature fast coding speeds, their compression ratios are low because they fail to leverage inter-frame redundancy. To resolve this problem, we design a patch-wise compression framework working in the 3D space. Specifically, we search rigid moves of patches via the iterative closest point algorithm and construct a common geometric structure, which is followed by color compensation. We implement our decoder on a GPU platform so that real-time decoding and rendering are realized. We compare our method with GROOT, the state-of-the-art 3D-based compression method, and it reduces the bitrate by up to 5.98×. Moreover, by trimming invisible content, our scheme achieves bandwidth demand comparable to that of V-PCC, the representative 2D-based method, in FoV-adaptive streaming.
|confname = TC'24
|link = https://ieeexplore.ieee.org/document/10360355
|title= A GPU-Enabled Real-Time Framework for Compressing and Rendering Volumetric Videos
|speaker=Mengfan
|date=2025-04-18
}}
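The rigid-move search at the heart of the framework reduces to repeated rigid alignments of a patch between frames; the sketch below shows one such alignment via the Kabsch/SVD solution that an ICP iteration solves once correspondences are fixed, using a synthetic patch for simplicity.
<syntaxhighlight lang="python">
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(0)
patch_prev = rng.uniform(size=(500, 3))       # patch in frame k-1
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
patch_cur = patch_prev @ R_true.T + np.array([0.05, -0.02, 0.1])

R, t = rigid_fit(patch_prev, patch_cur)
residual = np.abs(patch_prev @ R.T + t - patch_cur).max()
print("max residual after alignment:", residual)   # ~1e-15: the patch is reusable
</syntaxhighlight>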
{{Hist_seminar
|abstract = Cross-silo federated learning (FL) enables multiple institutions (clients) to collaboratively build a global model without sharing their private data. To prevent privacy leakage during aggregation, homomorphic encryption (HE) is widely used to encrypt model updates, yet incurs high computation and communication overheads. To reduce these overheads, packed HE (PHE) has been proposed to encrypt multiple plaintexts into a single ciphertext. However, the original design of PHE does not consider the heterogeneity among different clients, an intrinsic problem in cross-silo FL, often resulting in undermined training efficiency with slow convergence and stragglers. In this work, we propose FedPHE, an efficiently packed homomorphically encrypted FL framework with secure weighted aggregation and client selection to tackle the heterogeneity problem. Specifically, using CKKS with sparsification, FedPHE can achieve efficient encrypted weighted aggregation by accounting for contributions of local updates to the global model. To mitigate the straggler effect, we devise a sketching-based client selection scheme to cherry-pick representative clients with heterogeneous models and computing capabilities. We show, through rigorous security analysis and extensive experiments, that FedPHE can efficiently safeguard clients’ privacy, achieve a training speedup of 1.85 − 4.44×, cut the communication overhead by 1.24 − 22.62×, and reduce the straggler effect by up to 1.71 − 2.39×.
|confname =INFOCOM'24
|link = https://ieeexplore.ieee.org/abstract/document/10621440
|title= Efficient and Straggler-Resistant Homomorphic Encryption for Heterogeneous Federated Learning
|speaker=Dongting
|date=2025-03-28
}}
{{Hist_seminar
|abstract = Entanglement routing (ER) in quantum networks must guarantee entanglement fidelity, a property that is crucial for applications such as quantum key distribution, quantum computation, and quantum sensing. Conventional ER approaches assume that network links can only generate entanglements with a fixed fidelity, and then they rely on purification to improve end-to-end fidelities. However, recent advances in entanglement generation technologies show that quantum links can be configured by choosing among different fidelity/entanglement-rate combinations (defined in this paper as link configurations), hence enabling a more flexible assignment of quantum-network resources for meeting specific application requirements. To exploit this opportunity, we introduce the problem of link configuration for fidelity-constrained routing and purification (LC-FCRP) in Quantum Networks. We first formulate a simplified FCRP version as a Mixed Integer Linear Programming (MILP) model, where the link fidelity can be adjusted within a finite set. Then, to explore the full space of possible link configurations, we propose a link configuration algorithm based on a novel shortest-path-based fidelity determination (SPFD) algorithm w/o Bayesian Optimization, which can be applied on top of any existing ER algorithm. Numerical results demonstrate that link configuration improves the acceptance ratio of existing ER algorithms by 87%.
|confname =INFOCOM'25
|link = https://re.public.polimi.it/bitstream/11311/1281986/1/final_infocom25_link_configuration_for_entanglement_routing.pdf
|title= Link Configuration for Fidelity-Constrained Entanglement Routing in Quantum Networks
|speaker=Yaliang
|date=2025-03-27
}}
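As a toy picture of fidelity-constrained path selection (not the paper's MILP or SPFD algorithm), the sketch below assumes, as a simplification, that end-to-end fidelity is the product of link fidelities, so the best path is a shortest path on -log(fidelity); the topology and the 0.85 threshold are invented.
<syntaxhighlight lang="python">
import heapq, math

links = {  # (u, v): fidelity of the configuration chosen on that link
    ("A", "B"): 0.98, ("B", "C"): 0.97, ("A", "D"): 0.93,
    ("D", "C"): 0.96, ("B", "D"): 0.99,
}
graph = {}
for (u, v), f in links.items():
    graph.setdefault(u, []).append((v, f))
    graph.setdefault(v, []).append((u, f))

def best_fidelity_path(src, dst):
    """Dijkstra on -log(fidelity): maximizes the product of link fidelities."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, f in graph[u]:
            nd = d - math.log(f)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

path, fid = best_fidelity_path("A", "C")
print(path, f"end-to-end fidelity ≈ {fid:.3f}", "OK" if fid >= 0.85 else "reject")
</syntaxhighlight>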
{{Hist_seminar
|abstract = Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a two-player system. Despite external guidance, the effectiveness of this system demonstrates the potential of a single LLM to tackle complex tasks. Thus, we pose a new research problem: Can we internalize the searching capabilities to fundamentally enhance the reasoning abilities of a single LLM? This work explores an orthogonal direction focusing on post-training LLMs for autoregressive searching (i.e., an extended reasoning process with self-reflection and self-exploration of new strategies). To achieve this, we propose the Chain-of-Action-Thought (COAT) reasoning and a two-stage training paradigm: 1) a small-scale format tuning stage to internalize the COAT reasoning format and 2) a large-scale self-improvement stage leveraging reinforcement learning. Our approach results in Satori, a 7B LLM trained on open-source models and data. Extensive empirical evaluations demonstrate that Satori achieves state-of-the-art performance on mathematical reasoning benchmarks while exhibiting strong generalization to out-of-domain tasks. Code, data, and models will be fully open-sourced.
|confname = Arxiv
|link = https://arxiv.org/abs/2502.02508
|title= Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search
|speaker=Qinyong
|date=2025-03-14
}}
{{Hist_seminar
|abstract = Light bulbs have been recently explored to design Light Fidelity (LiFi) communication to battery-free tags, thus complementing Radiofrequency (RF) backscatter in the uplink. In this paper, we show that LiFi and RF backscatter are complementary and have unexplored interactions. We introduce PassiveLiFi, a battery-free system that uses LiFi to transmit RF backscatter at a meagre power budget. We address several challenges in the system design of the LiFi transmitter, the tag, and the RF receiver. We design the first LiFi transmitter that implements a chirp spread spectrum (CSS) using the visible light spectrum. We use a small bank of solar cells for both communication and harvesting, and reconfigure them based on the amount of harvested energy and desired data rate. We further alleviate the low responsiveness of solar cells with a new low-power receiver design in the tag. We design and implement a novel technique for embedding multiple symbols in the RF backscatter based on delayed chirps. Experimental results with an RF carrier of 17dBm show that we can generate RF backscatter with a range of 92.1 meters per μW consumed in the tag, almost double that of prior work.
|confname =ToN'23
|link = https://ieeexplore.ieee.org/document/10371205/
|title= LiFi for Low-Power and Long-Range RF Backscatter
|speaker=Mengyu
|date=2025-03-14
}}
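The delayed-chirp symbol embedding can be illustrated with a generic CSS modulator/demodulator in a few lines (the parameters are arbitrary and this is not the paper's solar-cell receiver design): each symbol is a cyclic shift of a base up-chirp, and dechirping plus an FFT recovers the shift.
<syntaxhighlight lang="python">
import numpy as np

N = 128                                        # samples per symbol -> 7 bits/symbol
n = np.arange(N)
base_up = np.exp(1j * np.pi * n * n / N)       # unit-bandwidth linear up-chirp

def modulate(symbol):
    """Delay (cyclically shift) the base chirp by `symbol` samples."""
    return np.roll(base_up, -symbol)

def demodulate(rx):
    """Dechirp with the conjugate chirp, then locate the FFT peak."""
    return int(np.argmax(np.abs(np.fft.fft(rx * np.conj(base_up)))))

symbols = [3, 77, 126, 0]
rx = [modulate(s) for s in symbols]            # ideal, noise-free channel
print([demodulate(r) for r in rx])             # -> [3, 77, 126, 0]
</syntaxhighlight>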
{{Hist_seminar
|abstract = Video analytics is widespread in various applications serving our society. Recent advances of content enhancement in video analytics offer significant benefits for bandwidth saving and accuracy improvement. However, existing content-enhanced video analytics systems are excessively computationally expensive and provide extremely low throughput. In this paper, we present region-based content enhancement, which enhances only the important regions in videos to improve analytical accuracy. Our system, RegenHance, enables high-accuracy and high-throughput video analytics at the edge by 1) a macroblock-based region importance predictor that identifies the important regions fast and precisely, 2) a region-aware enhancer that stitches sparsely distributed regions into dense tensors and enhances them efficiently, and 3) a profile-based execution planner that allocates appropriate resources for enhancement and analytics components. We prototype RegenHance on five heterogeneous edge devices. Experiments on two analytical tasks reveal that region-based enhancement improves the overall accuracy by 10-19% and achieves 2-3x the throughput of state-of-the-art frame-based enhancement methods.
|confname =NSDI'25
|link = https://arxiv.org/pdf/2407.16990
|title= Region-based Content Enhancement for Efficient Video Analytics at the Edge
|speaker=Xinyan
|date=2025-03-07
}}
{{Hist_seminar
|abstract = Occluded person re-identification is a challenging task as human body parts could be occluded by some obstacles (e.g. trees, cars, and pedestrians) in certain scenes. Some existing pose-guided methods solve this problem by aligning body parts according to graph matching, but these graph-based methods are not intuitive and complicated. Therefore, we propose a transformer-based Pose-guided Feature Disentangling (PFD) method by utilizing pose information to clearly disentangle semantic components (e.g. human body or joint parts) and selectively match non-occluded parts correspondingly. First, Vision Transformer (ViT) is used to extract the patch features with its strong capability. Second, to preliminarily disentangle the pose information from patch information, the matching and distributing mechanism is leveraged in the Pose-guided Feature Aggregation (PFA) module. Third, a set of learnable semantic views are introduced in the transformer decoder to implicitly enhance the disentangled body part features. However, those semantic views are not guaranteed to be related to the body without additional supervision. Therefore, the Pose-View Matching (PVM) module is proposed to explicitly match visible body parts and automatically separate occlusion features. Fourth, to better prevent the interference of occlusions, we design a Pose-guided Push Loss to emphasize the features of visible body parts. Extensive experiments over five challenging datasets for two tasks (occluded and holistic Re-ID) demonstrate that our proposed PFD is promising and performs favorably against state-of-the-art methods. Code is available at this https URL
|confname =AAAI'22
|link = https://arxiv.org/abs/2112.02466
|title= Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer
|speaker=Bairong
|date=2025-03-07
}}
{{Hist_seminar
|abstract = The emerging programmable networks sparked significant research on Intelligent Network Data Plane (INDP), which achieves learning-based traffic analysis at line-speed. Prior art in INDP focuses on deploying tree/forest models on the data plane. We observe a fundamental limitation in tree-based INDP approaches: although it is possible to represent even larger tree/forest tables on the data plane, the flow features that are computable on the data plane are fundamentally limited by hardware constraints. In this paper, we present BoS to push the boundaries of INDP by enabling Neural Network (NN) driven traffic analysis at line-speed. Many types of NNs (such as Recurrent Neural Network (RNN), and transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly. However, the challenge is significant: the recurrent computation scheme used in RNN inference is fundamentally different from the match-action paradigm used on the network data plane. BoS addresses this challenge by (i) designing a novel data plane friendly RNN architecture that can execute unlimited RNN time steps with limited data plane stages, effectively achieving line-speed RNN inference; and (ii) complementing the on-switch RNN model with an off-switch transformer-based traffic analysis module to further boost the overall performance. We implement a prototype of BoS using a P4 programmable switch as our data plane, and extensively evaluate it over multiple traffic analysis tasks. The results show that BoS outperforms state-of-the-art in both analysis accuracy and scalability.
|confname =NSDI'24
|link = https://www.usenix.org/conference/nsdi24/presentation/yan
|title= Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed
|speaker=Youwei
|date=2025-02-28
}}
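One way to picture how recurrent inference can be mapped onto a match-action pipeline is to quantize the hidden state and the input and precompute a next-state table, so each time step becomes a single lookup; the tiny scalar RNN below is our own illustration with invented weights, not the BoS P4 design.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
H_BITS, X_BITS = 6, 4                      # hidden-state / input code widths
H_LEVELS, X_LEVELS = 2 ** H_BITS, 2 ** X_BITS

# A tiny "trained" scalar RNN (weights invented for illustration).
w_h, w_x, b = 0.7, 0.5, 0.1
def rnn_step(h, x):                        # float reference model
    return np.tanh(w_h * h + w_x * x + b)

# Quantization grids: hidden state in [-1, 1], input in [0, 1].
h_grid = np.linspace(-1, 1, H_LEVELS)
x_grid = np.linspace(0, 1, X_LEVELS)

# Precompute the match-action-style table offline: next_code = T[h_code, x_code].
T = np.empty((H_LEVELS, X_LEVELS), dtype=np.uint8)
for i, h in enumerate(h_grid):
    for j, x in enumerate(x_grid):
        T[i, j] = np.abs(h_grid - rnn_step(h, x)).argmin()

def run_on_tables(xs):
    """Inference with only table lookups per time step (no arithmetic)."""
    h_code = np.abs(h_grid - 0.0).argmin()
    for x in xs:
        x_code = np.abs(x_grid - x).argmin()
        h_code = T[h_code, x_code]
    return h_grid[h_code]

xs = rng.uniform(0, 1, 20)
h_float = 0.0
for x in xs:
    h_float = rnn_step(h_float, x)
print(f"float RNN: {h_float:.3f}   table RNN: {run_on_tables(xs):.3f}")
</syntaxhighlight>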
{{Hist_seminar
|abstract = Recent advances in quantum information science enabled the development of quantum communication network prototypes and created an opportunity to study full-stack quantum network architectures. This work develops SeQUeNCe, a comprehensive, customizable quantum network simulator. Our simulator consists of five modules: hardware models, entanglement management protocols, resource management, network management, and application. This framework is suitable for simulation of quantum network prototypes that capture the breadth of current and future hardware technologies and protocols. We implement a comprehensive suite of network protocols and demonstrate the use of SeQUeNCe by simulating a photonic quantum network with nine routers equipped with quantum memories. The simulation capabilities are illustrated in three use cases. We show the dependence of quantum network throughput on several key hardware parameters and study the impact of classical control message latency. We also investigate quantum memory usage efficiency in routers and demonstrate that redistributing memory according to anticipated load increases network capacity by 69.1% and throughput by 6.8%. We design SeQUeNCe to enable comparisons of alternative quantum network technologies, experiment planning, and validation and to aid with new protocol design. We are releasing SeQUeNCe as an open source tool and aim to generate community interest in extending it.
|confname =IOPSCIENCE'21
|link = https://iopscience.iop.org/article/10.1088/2058-9565/ac22f6/meta
|title= SeQUeNCe: a customizable discrete-event simulator of quantum networks
|speaker=Junzhe
|date=2025-02-21
}}
{{Hist_seminar
|abstract = This article proposes a remote environmental monitoring system based on the low-power Internet of Things, which is applied in smart agriculture to achieve remote and real-time measurement of temperature, humidity, and light intensity parameters in the crop growth environment within the coverage range of the device. The system adopts low-power Internet of Things technology, which has the characteristics of wide coverage, multiple connections, fast speed, low cost, low power consumption, and excellent architecture. The overall design of the system includes multiple environmental monitoring nodes, a LoRa gateway, and corresponding environmental monitoring host-computer software. In terms of system software, it involves programming of the node MCU and the client host-computer software. The key technology implementation includes the hardware design and implementation of low-power sensor nodes and the development of the LoRa protocol. System testing and performance analysis show that the optimized LoRa protocol performs well in communication distance, power consumption, stability, and other aspects, laying the foundation for the efficient operation of the system. This study provides a powerful tool for sustainable resource management, which helps to promote agricultural modernization and rural revitalization.
|confname =CISCE'24
|link = https://ieeexplore.ieee.org/abstract/document/10653076
|title= A Long Distance Environmental Monitoring System Based on Low Power IoT
|speaker= Ayesha Rasool
|date=2025-02-21
}}
{{Hist_seminar
|abstract = Recently, smart roadside infrastructure (SRI) has demonstrated the potential of achieving fully autonomous driving systems. To explore the potential of infrastructure-assisted autonomous driving, this paper presents the design and deployment of Soar, the first end-to-end SRI system specifically designed to support autonomous driving systems. Soar consists of both software and hardware components carefully designed to overcome various system and physical challenges. Soar can leverage the existing operational infrastructure like street lampposts for a lower barrier of adoption. Soar adopts a new communication architecture that comprises a bi-directional multi-hop I2I network and a downlink I2V broadcast service, which are designed based on off-the-shelf 802.11ac interfaces in an integrated manner. Soar also features a hierarchical DL task management framework to achieve desirable load balancing among nodes and enable them to collaborate efficiently to run multiple data-intensive autonomous driving applications. We deployed a total of 18 Soar nodes on existing lampposts on campus, which have been operational for over two years. Our real-world evaluation shows that Soar can support a diverse set of autonomous driving applications and achieve desirable real-time performance and high communication reliability. Our findings and experiences in this work offer key insights into the development and deployment of next-generation smart roadside infrastructure and autonomous driving systems.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3649352
|title= Soar: Design and Deployment of A Smart Roadside Infrastructure System for Autonomous Driving
|speaker=Jiahao
|date=2025-01-10
}}
{{Hist_seminar
|abstract = GPUs are increasingly utilized for running DNN tasks on emerging mobile edge devices. Beyond accelerating single task inference, their value is also particularly apparent in efficiently executing multiple DNN tasks, which often have strict latency requirements in applications. Preemption is the main technology to ensure multitasking timeliness, but mobile edges primarily offer two priorities for task queues, and existing methods thus achieve only coarse-grained preemption by categorizing DNNs into real-time and best-effort, permitting a real-time task to preempt best-effort ones. However, the efficacy diminishes significantly when other real-time tasks run concurrently, but this is already common in mobile edge applications. Due to different hardware characteristics, solutions from other platforms are unsuitable. For instance, GPUs on traditional mobile devices primarily assist CPU processing and lack special preemption support, mainly following FIFO in GPU scheduling. Clouds handle concurrent task execution, but focus on allocating one or more GPUs per complex model, whereas on mobile edges, DNNs mainly vie for one GPU. This paper introduces Pantheon, designed to offer fine-grained preemption, enabling real-time tasks to preempt each other and best-effort tasks. Our key observation is that the two-tier GPU stream priorities, while underexplored, are sufficient. Efficient preemption can be realized through software design by innovative scheduling and novel exploitation of the nested redundancy principle for DNN models. Evaluation on a diverse set of DNNs shows substantial improvements in deadline miss rate and accuracy of Pantheon over state-of-the-art methods.
|confname =MobiSys'24
|link = https://dl.acm.org/doi/abs/10.1145/3643832.3661878
|title= Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs
|speaker=Jiale
|date=2025-01-10
}}
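To see what finer-grained preemption among real-time tasks buys, here is a toy earliest-deadline-first simulation with an invented three-task workload; Pantheon's actual mechanism works with the two GPU stream priorities and nested model redundancy, not a Python scheduler.
<syntaxhighlight lang="python">
import heapq

# (name, arrival_ms, exec_ms, deadline_ms) -- invented real-time workload
tasks = [("detect", 0, 30, 50), ("track", 5, 10, 20), ("lane", 6, 8, 45)]

def edf_schedule(tasks):
    """Preemptive earliest-deadline-first on one accelerator; returns finish times."""
    t, finished, ready = 0, {}, []
    pending = sorted(tasks, key=lambda k: k[1])          # by arrival time
    remaining = {name: ex for name, _, ex, _ in tasks}
    while pending or ready:
        while pending and pending[0][1] <= t:
            name, _, _, dl = pending.pop(0)
            heapq.heappush(ready, (dl, name))
        if not ready:                                    # idle until next arrival
            t = pending[0][1]
            continue
        dl, name = ready[0]
        # Run until the task finishes or a new task arrives; the arrival point
        # is where a task with an earlier deadline may preempt the current one.
        run = remaining[name]
        if pending:
            run = min(run, pending[0][1] - t)
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            heapq.heappop(ready)
            finished[name] = t
    return finished

done = edf_schedule(tasks)
for name, _, _, dl in tasks:
    status = "met" if done[name] <= dl else "missed"
    print(f"{name}: finished at {done[name]} ms, deadline {dl} ms, {status}")
</syntaxhighlight>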
{{Hist_seminar
|abstract = Volumetric videos offer a unique interactive experience and have the potential to enhance social virtual reality and telepresence. Streaming volumetric videos to multiple users remains a challenge due to its tremendous requirements of network and computation resources. In this paper, we develop MuV2, an edge-assisted multi-user mobile volumetric video streaming system to support important use cases such as tens of students simultaneously consuming volumetric content in a classroom. MuV2 achieves high scalability and good streaming quality through three orthogonal designs: hybridizing direct streaming of 3D volumetric content with remote rendering, dynamically sharing edge-transcoded views across users, and multiplexing encoding tasks of multiple transcoding sessions into a limited number of hardware encoders on the edge. MuV2 then integrates the three designs into a holistic optimization framework. We fully implement MuV2 and experimentally demonstrate that MuV2 can deliver high-quality volumetric videos to over 30 concurrent untethered mobile devices with a single WiFi access point and a commodity edge server.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3649364
|title= MuV2: Scaling up Multi-user Mobile Volumetric Video Streaming via Content Hybridization and Sharing
|speaker=Jiyi
|date=2025-01-03
}}
{{Hist_seminar
|abstract = The advent of 5G promises high bandwidth with the recent introduction of mmWave technology, paving the way for throughput-sensitive applications. However, our measurements in commercial 5G networks show that frequent handovers in 5G, due to physical limitations of mmWave cells, introduce significant under-utilization of the available bandwidth. By analyzing 5G link-layer and TCP traces, we uncover that improper interactions between these two layers cause multiple inefficiencies during handovers. To mitigate these, we propose M2HO, a novel device-centric solution that can predict and recognize different stages of a handover and perform state-dependent mitigation to markedly improve throughput. M2HO is transparent to the firmware, base stations, servers, and applications. We implement M2HO and our extensive evaluations validate that it yields significant improvements in TCP throughput with frequent handovers.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3690680
|title= M2HO: Mitigating the Adverse Effects of 5G Handovers on TCP
|speaker=Jiacheng
|date=2025-01-03
}}
====2024====
{{Hist_seminar
|abstract = Packet routing in virtual networks requires virtual-to-physical address translation. The address mappings are updated by a single party, i.e., the network administrator, but they are read by multiple devices across the network when routing tenant packets. Existing approaches face an inherent read-write performance tradeoff: they either store these mappings in dedicated gateways for fast updates at the cost of slower forwarding or replicate them at end-hosts and suffer from slow updates. SwitchV2P aims to escape this tradeoff by leveraging the network switches to transparently cache the address mappings while learning them from the traffic. SwitchV2P brings the mappings closer to the sender, thus reducing the first packet latency and translation overheads, while simultaneously enabling fast mapping updates, all without changing existing routing policies and deployed gateways. The topology-aware data-plane caching protocol allows the switches to transparently adapt to changing network conditions and varying in-switch memory capacity. Our evaluation shows the benefits of in-network address mapping, including an up to 7.8× and 4.3× reduction in FCT and first packet latency, respectively, and a substantial reduction in translation gateway load. Additionally, SwitchV2P achieves up to a 1.9× reduction in bandwidth overheads and requires order-of-magnitude fewer gateways for equivalent performance.
|confname =SIGCOMM'24
|link = https://dl.acm.org/doi/abs/10.1145/3651890.3672213
|title= In-Network Address Caching for Virtual Networks
|speaker=Dongting
|date=2024-12-06
}}
{{Hist_seminar
|abstract = Visible light communication (VLC) has become an important complementary means to electromagnetic communications due to its freedom from interference. However, existing Internet-of-Things (IoT) VLC links can reach only <10 meters, which has significantly limited the applications of VLC in vast and diverse scenarios. In this paper, we propose ChirpVLC, a novel modulation method to prolong VLC distance from ≤10 meters to over 100 meters. The basic idea of ChirpVLC is to trade throughput for prolonged distance by exploiting Chirp Spread Spectrum (CSS) modulation. Specifically, 1) we modulate the luminous intensity as a sinusoidal waveform with a linearly varying frequency and design different spreading factors (SF) for different environmental conditions. 2) We design a range adaptation scheme for the luminance sensing range to help receivers achieve a better signal-to-noise ratio (SNR). 3) ChirpVLC supports many-to-one and non-line-of-sight communications, breaking through the limitations of visible light communication. We implement ChirpVLC and conduct extensive real-world experiments. The results show that ChirpVLC can extend the transmission distance of 5W COTS LEDs to over 100 meters, and the distance/energy utility is increased by 532% compared to the existing work.
|confname = IDEA
|link = https://uestc.feishu.cn/file/Pbq3bWgKJoTQObx79f3cf6gungb
|title= ChirpVLC: Extending the Distance of Low-Cost Visible Light Communication with CSS Modulation
|speaker=Mengyu
|date=2024-12-06
}}
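The throughput-for-distance trade behind CSS can be seen from a quick back-of-the-envelope calculation; the chip rate below is an arbitrary assumption rather than ChirpVLC's actual parameter. A symbol spans 2^SF chips and carries SF bits, so raising SF lowers the bit rate while spending more chips (energy) per bit, which is what buys the longer range.
<syntaxhighlight lang="python">
BW = 10_000                                   # assumed chip rate, chips/s
for sf in range(6, 13):
    bitrate = sf * BW / 2 ** sf               # bits per second
    chips_per_bit = 2 ** sf / sf              # more chips/bit -> more range margin
    print(f"SF={sf:2d}  bitrate={bitrate:7.1f} b/s  chips/bit={chips_per_bit:7.1f}")
</syntaxhighlight>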
{{Hist_seminar
|abstract = On-device Deep Neural Network (DNN) training has been recognized as crucial for privacy-preserving machine learning at the edge. However, the intensive training workload and limited onboard computing resources pose significant challenges to the availability and efficiency of model training. While existing works address these challenges through native resource management optimization, we instead leverage our observation that edge environments usually comprise a rich set of accompanying trusted edge devices with idle resources beyond a single terminal. We propose Asteroid, a distributed edge training system that breaks the resource walls across heterogeneous edge devices for efficient model training acceleration. Asteroid adopts a hybrid pipeline parallelism to orchestrate distributed training, along with a judicious parallelism planning for maximizing throughput under certain resource constraints. Furthermore, a fault-tolerant yet lightweight pipeline replay mechanism is developed to tame the device-level dynamics for training robustness and performance stability. We implement Asteroid on heterogeneous edge devices with both vision and language models, demonstrating up to 12.2× faster training than conventional parallelism methods and 2.1× faster than state-of-the-art hybrid parallelism methods through evaluations. Furthermore, Asteroid can recover training pipeline 14× faster than baseline methods while preserving comparable throughput despite unexpected device exiting and failure.
|speaker=Zhenghua
|date=2024-01-04}}
====2023====

=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and venue information.
* Copy the code of the current latest-seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field |date=.
* Fill in every field of the latest seminar.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}