Resource:Seminar

From MobiNetS
===Latest===
{{Latest_seminar
|abstract=While a number of recent efforts have explored the use of "cloud offload" to enable deep learning on IoT devices, these have not assumed the use of duty-cycled radios like BLE. We argue that radio duty-cycling significantly diminishes the performance of existing cloud-offload methods. We tackle this problem by leveraging a previously unexplored opportunity to use early-exit offload enhanced with prioritized communication, dynamic pooling, and dynamic fusion of features. We show that our system, FLEET, achieves significant benefits in accuracy, latency, and compute budget compared to state-of-the-art local early exit, remote processing, and model partitioning schemes across a range of DNN models, datasets, and IoT platforms.
|confname=MobiCom '23
|link=https://dl.acm.org/doi/10.1145/3570361.3592514
|title=Re-thinking computation offload for efficient inference on IoT devices with duty-cycled radios
|speaker=Yang Wang
|date=2024-01-11}}
{{Latest_seminar
|abstract=Provenance tracking has been widely used in the recent literature to debug system vulnerabilities and find the root causes behind faults, errors, or crashes over a running system. However, the existing approaches primarily developed graph-based models for provenance tracking over monolithic applications running directly over the operating system kernel. In contrast, the modern DevOps-based service-oriented architecture relies on distributed platforms, like serverless computing that uses container-based sandboxing over the kernel. Provenance tracking over such a distributed micro-service architecture is challenging, as the application and system logs are generated asynchronously and follow heterogeneous nomenclature and logging formats. This paper develops a novel approach to combining system and micro-services logs together to generate a Universal Provenance Graph (UPG) that can be used for provenance tracking over serverless architecture. We develop a Loadable Kernel Module (LKM) for runtime unit identification over the logs by intercepting the system calls with the help from the control flow graphs over the static application binaries. Finally, we design a regular expression-based log optimization method for reverse query parsing over the generated UPG. A thorough evaluation of the proposed UPG model with different benchmarked serverless applications shows the system’s effectiveness.
|confname=INFOCOM '23
|link=https://ieeexplore.ieee.org/abstract/document/10228884
|title=DisProTrack: Distributed Provenance Tracking over Serverless Applications
|speaker=Xinyu
|date=2024-01-11}}
{{Latest_seminar
|abstract=While radio communication still dominates in 5G, light and radios are expected to complement each other in the coming 6G networks. Visible Light Communication (VLC) is therefore attracting a tremendous amount of attention from both academia and industry. Recent studies showed that the front camera of pervasive smartphones is an ideal candidate to serve as the VLC receiver. While promising, we observe a recent trend with smartphones that can greatly hinder the adoption of smartphones for VLC, i.e., smartphones are moving towards full-screen for the best user experience. This trend forces front cameras to be placed under the devices' screen---leading to the so-called Under-Screen Camera (USC)---but we observe a severe performance degradation in VLC with USC: the transmission range is reduced from a few meters to merely 0.04 m, and the throughput is decreased by more than 90%. To address this issue, we leverage the unique spatiotemporal characteristics of the rolling shutter effect on USC to design a pixel-sweeping algorithm to identify the sampling points with minimal interference from the translucent screen. We further propose a novel slope-boosting demodulation method to deal with color shift brought by the leakage interference. We build a proof-of-concept prototype using two commercial smartphones. Experiment results show that our proposed design reduces the BER by two orders of magnitude on average and improves the data rate by 59×: from 914 b/s to 54.43 kb/s. The transmission range is extended by roughly 100×: from 0.04 m to 4.2 m.
|confname=MobiSys '23
|link=https://dl.acm.org/doi/abs/10.1145/3581791.3596855
|title=When VLC Meets Under-Screen Camera
|speaker=Jiacheng
|date=2024-01-11}}
{{Latest_seminar
|abstract=While recent work explored streaming volumetric content on-demand, there is little effort on live volumetric video streaming that bears the potential of bringing more exciting applications than its on-demand counterpart. To fill this critical gap, in this paper, we propose MetaStream, which is, to the best of our knowledge, the first practical live volumetric content capture, creation, delivery, and rendering system for immersive applications such as virtual, augmented, and mixed reality. To address the key challenge of the stringent latency requirement for processing and streaming a huge amount of 3D data, MetaStream integrates several innovations into a holistic system, including dynamic camera calibration, edge-assisted object segmentation, cross-camera redundant point removal, and foveated volumetric content rendering. We implement a prototype of MetaStream using commodity devices and extensively evaluate its performance. Our results demonstrate that MetaStream achieves low-latency live volumetric video streaming at close to 30 frames per second on WiFi networks. Compared to state-of-the-art systems, MetaStream reduces end-to-end latency by up to 31.7% while improving visual quality by up to 12.5%.
|confname=MobiCom '23
|title=MetaStream: Live Volumetric Content Capture, Creation, Delivery, and Rendering in Real Time
|speaker=Jiale
|date=2024-01-11}}
{{Resource:Previous_Seminars}}

Revision as of 12:33, 8 January 2024

Time: Thursday 9:00-10:30
Address: 4th Research Building A518
Useful links: Reading list; Schedules; Previous seminars.

Latest

  1. [MobiCom '23] Re-thinking computation offload for efficient inference on IoT devices with duty-cycled radios, Yang Wang
  2. [INFOCOM '23] DisProTrack: Distributed Provenance Tracking over Serverless Applications, Xinyu
  3. [MobiSys '23] When VLC Meets Under-Screen Camera, Jiacheng
  4. [MobiCom '23] MetaStream: Live Volumetric Content Capture, Creation, Delivery, and Rendering in Real Time, Jiale

History

2024

2023

2022

2021

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

2019

2018

2017

Instructions

Please update this page using the Latest_seminar and Hist_seminar templates.

    • Update the time and address information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date information |date=.
    • Fill in each field of the latest seminar.
    • Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format notes
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
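As a worked illustration of the conversion step above, an expired Latest_seminar entry (here using the FLEET talk from the current Latest section) would be moved into the history with its template name changed and its seminar date recorded:

```
{{Hist_seminar
|confname=MobiCom '23
|link=https://dl.acm.org/doi/10.1145/3570361.3592514
|title=Re-thinking computation offload for efficient inference on IoT devices with duty-cycled radios
|speaker=Yang Wang
|date=2024-01-11}}
```

Only the template name and the added |date= field differ; all other fields are copied unchanged from the Latest_seminar entry.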