Resource:Seminar

{{SemNote
|time='''Friday 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|Reading list]]; [[Resource:Seminar_schedules|Schedules]]; [[Resource:Previous_Seminars|Previous seminars]].
}}
===Latest===
{{Latest_seminar
|abstract=LoRa has emerged as one of the promising long-range and low-power wireless communication technologies for Internet of Things (IoT). With the massive deployment of LoRa networks, the ability to perform Firmware Update Over-The-Air (FUOTA) is becoming a necessity for unattended LoRa devices. LoRa Alliance has recently dedicated the specification for FUOTA, but the existing solution has several drawbacks, such as low energy efficiency, poor transmission reliability, and biased multicast grouping. In this paper, we propose a novel energy-efficient, reliable, and beamforming-assisted FUOTA system for LoRa networks named FLoRa, which is featured with several techniques, including delta scripting, channel coding, and beamforming. In particular, we first propose a novel joint differencing and compression algorithm to generate the delta script for processing gain, which unlocks the potential of incremental FUOTA in LoRa networks. Afterward, we design a concatenated channel coding scheme to enable reliable transmission against dynamic link quality. The proposed scheme uses a rateless code as outer code and an error detection code as inner code to achieve coding gain. Finally, we design a beamforming strategy to avoid biased multicast and compromised throughput for power gain. Experimental results on a 20-node testbed demonstrate that FLoRa improves network transmission reliability by up to 1.51× and energy efficiency by up to 2.65× compared with the existing solution in LoRaWAN.
|confname=IPSN 2023
|link=https://dl.acm.org/doi/10.1145/3583120.3586963
|title=FLoRa: Energy-Efficient, Reliable, and Beamforming-Assisted Over-The-Air Firmware Update in LoRa Networks
|speaker=Kai Chen
|date=2024-05-10}}
{{Latest_seminar
|abstract=As a promising infrastructure, edge storage systems have drawn many attempts to efficiently distribute and share data among edge servers. However, it remains open to meeting the increasing demand for similarity retrieval across servers. The intrinsic reason is that the existing solutions can only return an exact data match for a query while more general edge applications require the data similar to a query input from any server. To fill this gap, this paper pioneers a new paradigm to support high-dimensional similarity search at network edges. Specifically, we propose Prophet, the first known architecture for similarity data indexing. We first divide the feature space of data into plenty of subareas, then project both subareas and edge servers into a virtual plane where the distances between any two points can reflect not only data similarity but also network latency. When any edge server submits a request for data insert, delete, or query, it computes the data feature and the virtual coordinates; then iteratively forwards the request through greedy routing based on the forwarding tables and the virtual coordinates. By Prophet, similar high-dimensional features would be stored by a common server or several nearby servers. Compared with distributed hash tables in P2P networks, Prophet requires logarithmic servers to access for a data request and reduces the network latency from the logarithmic to the constant level of the server number. Experimental results indicate that Prophet achieves comparable retrieval accuracy and shortens the query latency by 55%~70% compared with centralized schemes.
|confname=INFOCOM 2023
|link=https://ieeexplore.ieee.org/abstract/document/10228941/
|title=Prophet: An Efficient Feature Indexing Mechanism for Similarity Data Sharing at Network Edge
|speaker=Rong Cong
|date=2024-05-10}}
 
 
=== History ===
{{Resource:Previous_Seminars}}



=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and location information.
* Copy the code of the current Latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field |date= (see the worked example after the template formats below).
* Fill in every field of each latest seminar entry.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference:
** Latest_seminar:

{{Latest_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
|date=
}}