Resource:Seminar

{{SemNote
|time='''2024-12-06 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract=Packet routing in virtual networks requires virtual-to-physical address translation. The address mappings are updated by a single party, i.e., the network administrator, but they are read by multiple devices across the network when routing tenant packets. Existing approaches face an inherent read-write performance tradeoff: they either store these mappings in dedicated gateways for fast updates at the cost of slower forwarding, or replicate them at end-hosts and suffer from slow updates. SwitchV2P aims to escape this tradeoff by leveraging the network switches to transparently cache the address mappings while learning them from the traffic. SwitchV2P brings the mappings closer to the sender, thus reducing the first packet latency and translation overheads, while simultaneously enabling fast mapping updates, all without changing existing routing policies and deployed gateways. The topology-aware data-plane caching protocol allows the switches to transparently adapt to changing network conditions and varying in-switch memory capacity. Our evaluation shows the benefits of in-network address mapping, including an up to 7.8× and 4.3× reduction in FCT and first packet latency respectively, and a substantial reduction in translation gateway load. Additionally, SwitchV2P achieves up to a 1.9× reduction in bandwidth overheads and requires an order of magnitude fewer gateways for equivalent performance.
|confname=SIGCOMM '24
|link=https://dl.acm.org/doi/abs/10.1145/3651890.3672213
|title=In-Network Address Caching for Virtual Networks
|speaker=Dongting
|date=2024-12-06
}}
{{Latest_seminar
|abstract=Visible light communication (VLC) has become an important complementary means to electromagnetic communications due to its freedom from interference. However, existing Internet-of-Things (IoT) VLC links can reach only <10 meters, which has significantly limited the applications of VLC in vast and diverse scenarios. In this paper, we propose ChirpVLC, a novel modulation method to prolong VLC distance from ≤10 meters to over 100 meters. The basic idea of ChirpVLC is to trade throughput for prolonged distance by exploiting Chirp Spread Spectrum (CSS) modulation. Specifically, 1) we modulate the luminous intensity as a sinusoidal waveform with a linearly varying frequency and design different spreading factors (SF) for different environmental conditions. 2) We design a range adaptation scheme for the luminance sensing range to help receivers achieve a better signal-to-noise ratio (SNR). 3) ChirpVLC supports many-to-one and non-line-of-sight communications, breaking through the limitations of visible light communication. We implement ChirpVLC and conduct extensive real-world experiments. The results show that ChirpVLC can extend the transmission distance of 5W COTS LEDs to over 100 meters, and the distance/energy utility is increased by 532% compared to the existing work.
|confname=IDEA
|link=https://uestc.feishu.cn/file/Pbq3bWgKJoTQObx79f3cf6gungb
|title=ChirpVLC: Extending the Distance of Low-cost Visible Light Communication with CSS Modulation
|speaker=Mengyu
|date=2024-12-06
}}
{{Resource:Previous_Seminars}}

===Instructions===

Please update the information on this page using the Latest_seminar and Hist_seminar templates.

* Update the time and location information.
* Copy the code of the current Latest seminar section to this page.
* Change {{Latest_seminar ... to {{Hist_seminar ... and add the corresponding date field |date= (see the filled-in example at the end of this section).
* Fill in every field of the latest seminar entry.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference
** Latest_seminar:

{{Latest_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
|date=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
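
For example, following the steps above, the current SIGCOMM'24 entry in the Latest section would be archived with the following Hist_seminar call. This is only a sketch that copies the field values already on this page; the |abstract= field is omitted because the skeleton above does not list it.

{{Hist_seminar
|confname=SIGCOMM '24
|link=https://dl.acm.org/doi/abs/10.1145/3651890.3672213
|title=In-Network Address Caching for Virtual Networks
|speaker=Dongting
|date=2024-12-06
}}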