Resource:Previous Seminars

=== History ===
{{Hist_seminar
|abstract = Unlike traditional data collection applications (e.g., environment monitoring) that are dominated by uplink transmissions, the newly emerging applications (e.g., device actuation, firmware update, packet reception acknowledgement) also pose ever-increasing demands on downlink transmission capabilities. However, current LoRaWAN falls short in supporting such applications primarily due to downlink-uplink asymmetry. While the uplink can concurrently receive multiple packets, downlink transmission is limited to a single logical channel at a time, which fundamentally hinders the deployment of downlink-hungry applications. To tackle this practical challenge, FDLoRa develops the first-of-its-kind in-band full-duplex LoRa gateway design with novel solutions to mitigate the impact of self-interference (i.e., strong downlink interference to ultra-weak uplink reception), which unleashes the full spectrum for in-band downlink transmissions without compromising the reception of weak uplink packets. Built upon the full-duplex gateways, FDLoRa introduces a new downlink framework to support concurrent downlink transmissions over multiple logical channels of available gateways. Evaluation results demonstrate that FDLoRa boosts downlink capacity by 5.7x compared to LoRaWAN on a three-gateway testbed and achieves 2.58x higher downlink concurrency per gateway than the state-of-the-art.
|confname = Sensys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699338
|title= FDLoRa: Tackling Downlink-Uplink Asymmetry with Full-duplex LoRa Gateways
|speaker= Kai Chen
|date=2025-10-23
}}{{Hist_seminar
|abstract =Recent years have witnessed a widespread adoption of containers. While containers simplify and accelerate application development, existing container network technologies either incur significant overhead, which hurts performance for distributed applications, or lose flexibility or compatibility, which hinders the widespread deployment in production. We carefully analyze the kernel data path of an overlay network, quantifying the time consumed by each segment of the data path and identifying the extra overhead in an overlay network compared to bare metal. We observe that this extra overhead generates repetitive results among packets, which inspires us to introduce caches within an overlay network. We design and implement ONCache (Overlay Network Cache), a cache-based container overlay network, to eliminate the extra overhead while maintaining flexibility and compatibility. We implement ONCache using the extended Berkeley Packet Filter (eBPF) with only 524 lines of code, and integrate it as a plugin of Antrea. With ONCache, containers attain networking performance akin to that of bare metal. Compared to the standard overlay networks, ONCache improves throughput and request-response transaction rate by 12% and 36% for TCP (20% and 34% for UDP), respectively, while significantly reducing per-packet CPU overhead. Popular distributed applications also benefit from ONCache.
|confname = NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/lin-shengkai
|title= ONCache: A Cache-Based Low-Overhead Container Overlay Network
|speaker= Daobing Zeng
|date=2025-10-24
}}
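As a rough illustration of the caching idea in the ONCache abstract above: the expensive overlay-network work (routing lookup, outer-header construction) produces the same result for every packet of a flow, so it can be computed once and cached keyed by the flow tuple. The sketch below is a minimal Python model of that fast-path/slow-path split; all names are illustrative, and ONCache itself is implemented with eBPF inside the kernel datapath, not in user-space Python.

```python
from collections import namedtuple

# Illustrative types: a flow 5-tuple and the encapsulation result it maps to.
Flow = namedtuple("Flow", "src_ip dst_ip src_port dst_port proto")
OuterHeader = namedtuple("OuterHeader", "outer_src outer_dst vni")

class OverlayCache:
    """Cache the per-flow result of an expensive overlay 'slow path'."""

    def __init__(self, slow_path):
        self.slow_path = slow_path  # full lookup: Flow -> OuterHeader
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def encapsulate(self, flow):
        hdr = self.cache.get(flow)
        if hdr is None:
            self.misses += 1
            hdr = self.slow_path(flow)  # expensive: routing + encap fields
            self.cache[flow] = hdr      # later packets of this flow hit here
        else:
            self.hits += 1
        return hdr
```

The first packet of a flow pays the full cost; every subsequent packet is a dictionary lookup, which mirrors why the repetitive per-packet results observed in the kernel datapath make caching effective.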
{{Hist_seminar
|abstract = We present HyperCam, an energy-efficient image classification pipeline that enables computer vision tasks onboard low-power IoT camera systems. HyperCam leverages hyperdimensional computing to perform training and inference efficiently on low-power microcontrollers. We implement a low-power wireless camera platform using off-the-shelf hardware and demonstrate that HyperCam can achieve an accuracy of 93.60%, 84.06%, 92.98%, and 72.79% for MNIST, Fashion-MNIST, Face Detection, and Face Identification tasks, respectively, while significantly outperforming other classifiers in resource efficiency. Specifically, it delivers inference latency of 0.08-0.27s while using 42.91-63.00KB flash memory and 22.25KB RAM at peak. Among other machine learning classifiers such as SVM, xgBoost, MicroNets, MobileNetV3, and MCUNetV3, HyperCam is the only classifier that achieves competitive accuracy while maintaining competitive memory footprint and inference latency that meets the resource requirements of low-power camera systems.
}}
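To make the hyperdimensional-computing approach in the HyperCam abstract concrete, here is a minimal sketch of HD classification: pixels are encoded by binding random position and intensity hypervectors, bundled into a single image hypervector, and classified against per-class prototypes by cosine similarity. The dimensionality, encoding scheme, and all names below are assumptions for illustration, not HyperCam's actual design.

```python
import numpy as np

D = 2000                    # hypervector dimensionality (assumed)
N_PIXELS, N_LEVELS = 64, 4  # toy image size and intensity quantization
rng = np.random.default_rng(0)

# Item memory: a fixed random bipolar hypervector per position and per level.
pos_hv = rng.choice([-1, 1], size=(N_PIXELS, D))
val_hv = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(image):
    """Bind each pixel's position HV with its value HV, bundle, binarize."""
    levels = np.clip((image * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bundled = (pos_hv * val_hv[levels]).sum(axis=0)
    return np.where(bundled >= 0, 1, -1)

def train(images, labels, n_classes):
    """Class prototype = bundled encodings of that class's training images."""
    protos = np.zeros((n_classes, D))
    for img, y in zip(images, labels):
        protos[y] += encode(img)
    return protos

def classify(protos, image):
    """Nearest prototype by cosine similarity."""
    q = encode(image)
    sims = protos @ q / (np.linalg.norm(protos, axis=1) * np.linalg.norm(q) + 1e-9)
    return int(np.argmax(sims))
```

Because both training (vector addition) and inference (one dot product per class) avoid gradient computation entirely, this style of classifier fits the flash/RAM budgets of microcontrollers, which is the efficiency argument the abstract makes.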

Revision as of 10:07, 31 October 2025


==== 2024 ====

==== 2023 ====

==== 2022 ====

==== 2021 ====

==== 2020 ====

* [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

==== 2019 ====

==== 2018 ====

==== 2017 ====

=== Instructions ===

Please update this page using the Latest_seminar and Hist_seminar templates.

* Update the time and location information.
* Copy the code from the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=
* Fill in each field of the latest seminar.
* The link field must not be left empty; if there is no link, fill in this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference:
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}