Difference between revisions of "Resource:Seminar"

From MobiNetS
===Latest===
{{Latest_seminar
|abstract = As intelligence is moving from data centers to the edges, intelligent edge devices such as smartphones, drones, robots, and smart IoT devices are equipped with the capability to altogether train a deep learning model on the devices from the data collected by themselves. Despite its considerable value, the key bottleneck of making on-device distributed training practically useful in real-world deployments is that they consume a significant amount of training time under wireless networks with constrained bandwidth. To tackle this critical bottleneck, we present Mercury, an importance sampling-based framework that enhances the training efficiency of on-device distributed training without compromising the accuracies of the trained models. The key idea behind the design of Mercury is to focus on samples that provide more important information in each training iteration. In doing this, the training efficiency of each iteration is improved. As such, the total number of iterations can be considerably reduced so as to speed up the overall training process. We implemented Mercury and deployed it on a self-developed testbed. We demonstrate its effectiveness and show that Mercury consistently outperforms two status quo frameworks on six commonly used datasets across tasks in image classification, speech recognition, and natural language processing.
|confname= ACM SenSys 2021
|link=https://www.egr.msu.edu/~mizhang/papers/2021_SenSys_Mercury.pdf
|title=Mercury: Efficient On-Device Distributed DNN Training via Stochastic Importance Sampling
|speaker=Jiajun
}}
{{Latest_seminar
|abstract = Many datacenters and clouds manage storage systems separately from computing services for better manageability and resource utilization. These existing disaggregated storage systems use hard disks or SSDs as storage media. Recently, the technology of persistent memory (PM) has matured and seen initial adoption in several datacenters. Disaggregating PM could enjoy the same benefits of traditional disaggregated storage systems, but it requires new designs because of its memory-like performance and byte addressability. In this paper, we explore the design of disaggregating PM and managing them remotely from compute servers, a model we call passive disaggregated persistent memory, or pDPM. Compared to the alternative of managing PM at storage servers, pDPM significantly lowers monetary and energy costs and avoids scalability bottlenecks at storage servers. We built three key-value store systems using the pDPM model. The first one lets all compute nodes directly access and manage storage nodes. The second uses a central coordinator to orchestrate the communication between compute and storage nodes. These two systems have various performance and scalability limitations. To solve these problems, we built Clover, a pDPM system that separates the location, communication mechanism, and management strategy of the data plane and the metadata/control plane. Compute nodes access storage nodes directly for data operations, while one or few global metadata servers handle all metadata/control operations. From our extensive evaluation of the three pDPM systems, we found Clover to be the best-performing pDPM system. Its performance under common datacenter workloads is similar to non-pDPM remote in-memory key-value store, while reducing CapEx and OpEx by 1.4× and 3.9×.
|confname= Usenix ATC 2020
|link=https://www.usenix.org/system/files/atc20-tsai.pdf
|title=Disaggregating Persistent Memory and Controlling Them Remotely: An Exploration of Passive Disaggregated Key-Value Stores
|speaker=Silence
}}

Revision as of 22:25, 29 May 2022

Time: 2022-5-23 10:30
Address: 4th Research Building A527-B
Useful links: Reading list; Schedules; Previous seminars.



History

2024

2023

2022

2021

2020

  • [Topic] [The path planning algorithm for multiple mobile edge servers in EdgeGO], Rong Cong, 2020-11-18

2019

2018

2017


Instructions

Please use the Latest_seminar and Hist_seminar templates to update this page.

    • Update the time and venue information.
    • Copy the code from the current latest seminar section into this page.
    • Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field |date=.
    • Fill in each field of the latest seminar.
    • Never leave the link field empty; if there is no link, use this page's address instead: https://mobinets.org/index.php?title=Resource:Seminar
  • Format guide
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
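
For example, when the Mercury talk above is moved to the History section, its Latest_seminar block would be rewritten as follows (the |date= value here is illustrative, taken from the Time line of this revision; use the actual seminar date):

{{Hist_seminar
|confname=ACM SenSys 2021
|link=https://www.egr.msu.edu/~mizhang/papers/2021_SenSys_Mercury.pdf
|title=Mercury: Efficient On-Device Distributed DNN Training via Stochastic Importance Sampling
|speaker=Jiajun
|date=2022-05-23
}}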