Resource:Previous Seminars

From MobiNetS
=== History ===
{{Hist_seminar
|abstract = Video analytics is widespread in various applications serving our society. Recent advances in content enhancement for video analytics offer significant benefits in bandwidth saving and accuracy improvement. However, existing content-enhanced video analytics systems are excessively computationally expensive and provide extremely low throughput. In this paper, we present region-based content enhancement, which enhances only the important regions in videos to improve analytical accuracy. Our system, RegenHance, enables high-accuracy and high-throughput video analytics at the edge through 1) a macroblock-based region importance predictor that identifies important regions quickly and precisely, 2) a region-aware enhancer that stitches sparsely distributed regions into dense tensors and enhances them efficiently, and 3) a profile-based execution planner that allocates appropriate resources to the enhancement and analytics components. We prototype RegenHance on five heterogeneous edge devices. Experiments on two analytical tasks reveal that region-based enhancement improves overall accuracy by 10-19% and achieves 2-3x the throughput of state-of-the-art frame-based enhancement methods.
|confname =NSDI'25
|link = https://arxiv.org/pdf/2407.16990
|title= Region-based Content Enhancement for Efficient Video Analytics at the Edge
|speaker=Xinyan
|date=2025-03-07
}}{{Hist_seminar
|abstract = Occluded person re-identification is a challenging task, as human body parts can be occluded by obstacles (e.g., trees, cars, and pedestrians) in certain scenes. Some existing pose-guided methods solve this problem by aligning body parts via graph matching, but these graph-based methods are unintuitive and complicated. Therefore, we propose a transformer-based Pose-guided Feature Disentangling (PFD) method that utilizes pose information to clearly disentangle semantic components (e.g., human body or joint parts) and selectively match non-occluded parts correspondingly. First, a Vision Transformer (ViT) is used to extract patch features with its strong representation capability. Second, to preliminarily disentangle pose information from patch information, a matching and distributing mechanism is leveraged in the Pose-guided Feature Aggregation (PFA) module. Third, a set of learnable semantic views is introduced in the transformer decoder to implicitly enhance the disentangled body part features. However, those semantic views are not guaranteed to relate to the body without additional supervision. Therefore, a Pose-View Matching (PVM) module is proposed to explicitly match visible body parts and automatically separate occlusion features. Fourth, to better suppress interference from occlusions, we design a Pose-guided Push Loss that emphasizes the features of visible body parts. Extensive experiments on five challenging datasets for two tasks (occluded and holistic Re-ID) demonstrate that our proposed PFD performs favorably against state-of-the-art methods. Code is publicly available.
|confname =AAAI'22
|link = https://arxiv.org/abs/2112.02466
|title= Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer
|speaker=Bairong
|date=2025-03-07
}}
{{Hist_seminar
|abstract = The emerging programmable networks have sparked significant research on Intelligent Network Data Plane (INDP), which achieves learning-based traffic analysis at line-speed. Prior art in INDP focuses on deploying tree/forest models on the data plane. We observe a fundamental limitation in tree-based INDP approaches: although it is possible to represent even larger tree/forest tables on the data plane, the flow features that are computable on the data plane are fundamentally limited by hardware constraints. In this paper, we present BoS to push the boundaries of INDP by enabling Neural Network (NN) driven traffic analysis at line-speed. Many types of NNs (such as Recurrent Neural Networks (RNNs) and transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly. However, the challenge is significant: the recurrent computation scheme used in RNN inference is fundamentally different from the match-action paradigm used on the network data plane. BoS addresses this challenge by (i) designing a novel data-plane-friendly RNN architecture that can execute unlimited RNN time steps with limited data plane stages, effectively achieving line-speed RNN inference; and (ii) complementing the on-switch RNN model with an off-switch transformer-based traffic analysis module to further boost overall performance. We implement a prototype of BoS using a P4 programmable switch as our data plane and extensively evaluate it over multiple traffic analysis tasks. The results show that BoS outperforms the state-of-the-art in both analysis accuracy and scalability.
|confname =NSDI'24
|link = https://www.usenix.org/conference/nsdi24/presentation/yan
|title= Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed
|speaker=Youwei
|date=2025-02-28
}}
{{Hist_seminar
|abstract = Recent advances in quantum information science enabled the development of quantum communication network prototypes and created an opportunity to study full-stack quantum network architectures. This work develops SeQUeNCe, a comprehensive, customizable quantum network simulator. Our simulator consists of five modules: hardware models, entanglement management protocols, resource management, network management, and application. This framework is suitable for simulation of quantum network prototypes that capture the breadth of current and future hardware technologies and protocols. We implement a comprehensive suite of network protocols and demonstrate the use of SeQUeNCe by simulating a photonic quantum network with nine routers equipped with quantum memories. The simulation capabilities are illustrated in three use cases. We show the dependence of quantum network throughput on several key hardware parameters and study the impact of classical control message latency. We also investigate quantum memory usage efficiency in routers and demonstrate that redistributing memory according to anticipated load increases network capacity by 69.1% and throughput by 6.8%. We design SeQUeNCe to enable comparisons of alternative quantum network technologies, experiment planning, and validation and to aid with new protocol design. We are releasing SeQUeNCe as an open source tool and aim to generate community interest in extending it.
|confname =IOPSCIENCE'21
|link = https://iopscience.iop.org/article/10.1088/2058-9565/ac22f6/meta
|title= SeQUeNCe: a customizable discrete-event simulator of quantum networks
|speaker=Junzhe
|date=2025-02-21
}}{{Hist_seminar
|abstract = This article proposes a remote environmental monitoring system based on the low-power Internet of Things, applied in smart agriculture to achieve remote, real-time measurement of temperature, humidity, and light intensity in the crop growth environment within the coverage range of the device. The system adopts low-power IoT technology, which offers wide coverage, massive connectivity, high speed, low cost, low power consumption, and an excellent architecture. The overall design comprises multiple environmental monitoring nodes, a LoRa gateway, and the corresponding host-side environmental monitoring software. On the software side, the work involves programming the node MCUs and the client host software. The key technology implementation includes the hardware design and implementation of the low-power sensor nodes and the development of the LoRa protocol. System testing and performance analysis show that the optimized LoRa protocol performs well in communication distance, power consumption, stability, and other aspects, laying the foundation for efficient operation of the system. This study provides a powerful tool for sustainable resource management, helping to promote agricultural modernization and rural revitalization.
|confname =CISCE'24
|link = https://ieeexplore.ieee.org/abstract/document/10653076
|title= A Long Distance Environmental Monitoring System Based on Low Power IoT
|speaker= Ayesha Rasool
|date=2025-02-21
}}
{{Hist_seminar
|abstract = Recently, smart roadside infrastructure (SRI) has demonstrated the potential of achieving fully autonomous driving systems. To explore the potential of infrastructure-assisted autonomous driving, this paper presents the design and deployment of Soar, the first end-to-end SRI system specifically designed to support autonomous driving systems. Soar consists of both software and hardware components carefully designed to overcome various system and physical challenges. Soar can leverage the existing operational infrastructure like street lampposts for a lower barrier of adoption. Soar adopts a new communication architecture that comprises a bi-directional multi-hop I2I network and a downlink I2V broadcast service, which are designed based on off-the-shelf 802.11ac interfaces in an integrated manner. Soar also features a hierarchical DL task management framework to achieve desirable load balancing among nodes and enable them to collaborate efficiently to run multiple data-intensive autonomous driving applications. We deployed a total of 18 Soar nodes on existing lampposts on campus, which have been operational for over two years. Our real-world evaluation shows that Soar can support a diverse set of autonomous driving applications and achieve desirable real-time performance and high communication reliability. Our findings and experiences in this work offer key insights into the development and deployment of next-generation smart roadside infrastructure and autonomous driving systems.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3649352
|title= Soar: Design and Deployment of A Smart Roadside Infrastructure System for Autonomous Driving
|speaker=Jiahao
|date=2025-01-10
}}{{Hist_seminar
|abstract = GPUs are increasingly utilized for running DNN tasks on emerging mobile edge devices. Beyond accelerating single task inference, their value is also particularly apparent in efficiently executing multiple DNN tasks, which often have strict latency requirements in applications. Preemption is the main technology to ensure multitasking timeliness, but mobile edges primarily offer two priorities for task queues, and existing methods thus achieve only coarse-grained preemption by categorizing DNNs into real-time and best-effort, permitting a real-time task to preempt best-effort ones. However, the efficacy diminishes significantly when other real-time tasks run concurrently, but this is already common in mobile edge applications. Due to different hardware characteristics, solutions from other platforms are unsuitable. For instance, GPUs on traditional mobile devices primarily assist CPU processing and lack special preemption support, mainly following FIFO in GPU scheduling. Clouds handle concurrent task execution, but focus on allocating one or more GPUs per complex model, whereas on mobile edges, DNNs mainly vie for one GPU. This paper introduces Pantheon, designed to offer fine-grained preemption, enabling real-time tasks to preempt each other and best-effort tasks. Our key observation is that the two-tier GPU stream priorities, while underexplored, are sufficient. Efficient preemption can be realized through software design by innovative scheduling and novel exploitation of the nested redundancy principle for DNN models. Evaluation on a diverse set of DNNs shows substantial improvements in deadline miss rate and accuracy of Pantheon over state-of-the-art methods.
|confname =MobiSys'24
|link = https://dl.acm.org/doi/abs/10.1145/3643832.3661878
|title= Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs
|speaker=Jiale
|date=2025-01-10
}}
{{Hist_seminar
|abstract = Volumetric videos offer a unique interactive experience and have the potential to enhance social virtual reality and telepresence. Streaming volumetric videos to multiple users remains a challenge due to its tremendous requirements of network and computation resources. In this paper, we develop MuV2, an edge-assisted multi-user mobile volumetric video streaming system to support important use cases such as tens of students simultaneously consuming volumetric content in a classroom. MuV2 achieves high scalability and good streaming quality through three orthogonal designs: hybridizing direct streaming of 3D volumetric content with remote rendering, dynamically sharing edge-transcoded views across users, and multiplexing encoding tasks of multiple transcoding sessions into a limited number of hardware encoders on the edge. MuV2 then integrates the three designs into a holistic optimization framework. We fully implement MuV2 and experimentally demonstrate that MuV2 can deliver high-quality volumetric videos to over 30 concurrent untethered mobile devices with a single WiFi access point and a commodity edge server.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3649364
|title= MuV2: Scaling up Multi-user Mobile Volumetric Video Streaming via Content Hybridization and Sharing
|speaker=Jiyi
|date=2025-01-03
}}{{Hist_seminar
|abstract = The advent of 5G promises high bandwidth with the introduction of mmWave technology recently, paving the way for throughput-sensitive applications. However, our measurements in commercial 5G networks show that frequent handovers in 5G, due to physical limitations of mmWave cells, introduce significant under-utilization of the available bandwidth. By analyzing 5G link-layer and TCP traces, we uncover that improper interactions between these two layers causes multiple inefficiencies during handovers. To mitigate these, we propose M2HO, a novel device-centric solution that can predict and recognize different stages of a handover and perform state-dependent mitigation to markedly improve throughput. M2HO is transparent to the firmware, base stations, servers, and applications. We implement M2HO and our extensive evaluations validate that it yields significant improvements in TCP throughput with frequent handovers.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3690680
|title= M2HO: Mitigating the Adverse Effects of 5G Handovers on TCP
|speaker=Jiacheng
|date=2025-01-03
}}
====2024====
{{Hist_seminar
|speaker=Zhenghua
|date=2024-01-04}}
====2023====
{{Hist_seminar

Latest revision as of 11:47, 14 March 2025


2022

2021

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

2019

2018

2017

Instructions

Please use the Latest_seminar and Hist_seminar templates to update this page.

    • Update the time and location information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=.
    • Fill in each field from the latest seminar.
    • Never leave the link field empty; if there is no link, use this page's address instead: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
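
As a sketch of the archiving workflow described above (the conference name, title, speaker, and date below are placeholders, not a real seminar entry), a Latest_seminar block on the current seminar page:

{{Latest_seminar
|abstract = Placeholder abstract text.
|confname = EXAMPLE'25
|link = https://mobinets.org/index.php?title=Resource:Seminar
|title = Placeholder Paper Title
|speaker = Placeholder Speaker
}}

would be moved to this page as:

{{Hist_seminar
|abstract = Placeholder abstract text.
|confname = EXAMPLE'25
|link = https://mobinets.org/index.php?title=Resource:Seminar
|title = Placeholder Paper Title
|speaker = Placeholder Speaker
|date = 2025-03-14
}}

Note the only changes are the template name and the added |date= field; here the link falls back to the seminar page address because no paper link is assumed.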