=== History ===
====2024====
{{Hist_seminar
|abstract = On-device Deep Neural Network (DNN) training has been recognized as crucial for privacy-preserving machine learning at the edge. However, the intensive training workload and limited onboard computing resources pose significant challenges to the availability and efficiency of model training. While existing works address these challenges through native resource management optimization, we instead leverage our observation that edge environments usually comprise a rich set of accompanying trusted edge devices with idle resources beyond a single terminal. We propose Asteroid, a distributed edge training system that breaks the resource walls across heterogeneous edge devices for efficient model training acceleration. Asteroid adopts a hybrid pipeline parallelism to orchestrate distributed training, along with a judicious parallelism planning for maximizing throughput under certain resource constraints. Furthermore, a fault-tolerant yet lightweight pipeline replay mechanism is developed to tame the device-level dynamics for training robustness and performance stability. We implement Asteroid on heterogeneous edge devices with both vision and language models, and our evaluations demonstrate up to 12.2× faster training than conventional parallelism methods and 2.1× faster training than state-of-the-art hybrid parallelism methods. Furthermore, Asteroid can recover the training pipeline 14× faster than baseline methods while preserving comparable throughput despite unexpected device exits and failures.
|confname = MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3649363
|title= Asteroid: Resource-Efficient Hybrid Pipeline Parallelism for Collaborative DNN Training on Heterogeneous Edge Devices
|speaker=Congrong
|date=2024-11-29
}}
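As a rough illustration of the parallelism-planning step (a toy sketch, not Asteroid's planner; the per-layer costs, device speeds, and the bottleneck objective are simplifying assumptions), the following partitions a model into contiguous pipeline stages across heterogeneous devices so that the slowest stage is as fast as possible:
<syntaxhighlight lang="python">
# Toy planner for heterogeneous pipeline parallelism (an illustration of
# the planning idea only, not Asteroid's algorithm): split the layers into
# contiguous stages, one stage per device in order, so that the slowest
# stage (the pipeline bottleneck) is as fast as possible.

def plan_pipeline(layer_flops, device_speeds):
    """layer_flops: per-layer cost; device_speeds: per-device throughput,
    both in pipeline order. Returns (stage end indices, bottleneck time)."""
    n, m = len(layer_flops), len(device_speeds)
    prefix = [0.0]
    for f in layer_flops:
        prefix.append(prefix[-1] + f)

    def stage_time(i, j, d):                 # layers i..j-1 on device d
        return (prefix[j] - prefix[i]) / device_speeds[d]

    INF = float("inf")
    # best[d][j]: minimal bottleneck placing the first j layers on devices 0..d
    best = [[INF] * (n + 1) for _ in range(m)]
    cut = [[0] * (n + 1) for _ in range(m)]
    for j in range(1, n + 1):
        best[0][j] = stage_time(0, j, 0)
    for d in range(1, m):
        for j in range(1, n + 1):
            for i in range(1, j + 1):
                t = max(best[d - 1][i], stage_time(i, j, d))
                if t < best[d][j]:
                    best[d][j], cut[d][j] = t, i

    bounds, j = [n], n                       # recover the stage boundaries
    for d in range(m - 1, 0, -1):
        j = cut[d][j]
        bounds.append(j)
    return list(reversed(bounds)), best[m - 1][n]

# Two devices, the first twice as fast: the fast device takes four layers.
print(plan_pipeline([4, 4, 2, 6, 3], [2.0, 1.0]))  # -> ([4, 5], 8.0)
</syntaxhighlight>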
{{Hist_seminar
|abstract = The need for cooperation among intelligent edge devices has popularized cooperative multi-agent reinforcement learning (MARL) in multi-target coverage. However, many research efforts rely heavily on parameter sharing among homogeneous agents, which hampers coverage performance. The heterogeneity of computing and sensing capabilities, along with the time-varying dynamics of computing resources, poses significant challenges. To address these challenges, we propose SmartHE, a resource-sensitive multi-agent reinforcement learning framework based on heterogeneous edge devices. SmartHE decomposes the target coverage task into two hierarchical levels: 1) Executor-level task: a central coordinator assigns a subset of executors (i.e., cameras or agents) to execute action policies, aiming to minimize overall policy inference time and energy consumption by leveraging resource heterogeneity. 2) Target-level task: each executor ignores irrelevant targets that fall outside its coverage radius, based on the estimated target states, and ignores redundant targets that other executors could cover more effectively, based on utility estimation. This enables each executor to focus on extracting features that optimize coverage. Through this dual-task framework, SmartHE efficiently improves system performance.
|confname = IDEA
|link = https://mobinets.cn/site/Resource:Seminar
|title= SmartHE: Resource-sensitive MARL framework based on heterogeneous edge devices
|speaker=Xianyang
|date=2024-11-29
}}
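The target-level filtering can be illustrated with a toy sketch (hypothetical code with a made-up distance-based utility, not the authors' implementation): an executor keeps a target only if it lies inside its coverage radius and no other executor covers it with higher estimated utility.
<syntaxhighlight lang="python">
import math

# Toy illustration of target-level filtering (hypothetical code, not
# SmartHE's implementation): an executor keeps a target only if it lies
# inside its coverage radius and no other executor covers it with higher
# estimated utility.

def utility(executor, target):
    # Assumed utility: closer targets are easier to cover (placeholder).
    d = math.dist(executor["pos"], target)
    return max(0.0, 1.0 - d / executor["radius"]) if d <= executor["radius"] else 0.0

def relevant_targets(me, others, targets):
    kept = []
    for t in targets:
        u = utility(me, t)
        if u == 0.0:                        # irrelevant: outside my radius
            continue
        if any(utility(o, t) > u for o in others):
            continue                        # redundant: better covered elsewhere
        kept.append(t)
    return kept

me = {"pos": (0, 0), "radius": 5.0}
others = [{"pos": (4, 0), "radius": 5.0}]
print(relevant_targets(me, others, [(1, 0), (3, 0), (9, 9)]))  # -> [(1, 0)]
</syntaxhighlight>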
{{Hist_seminar
|abstract = Collaborative inference is the current state-of-the-art solution for mobile-server neural network inference offloading. However, we find that existing collaborative inference solutions only focus on partitioning the DNN computation, which is only a small part of achieving an efficient DNN offloading system. What ultimately determines the performance of DNN offloading is how the execution system utilizes the characteristics of the given DNN offloading task on the mobile, network, and server resources of the offloading environment. To this end, we design CoActo, a DNN execution system built from the ground up for mobile-server inference offloading. Our key design philosophy is Coactive Inference Offloading, which is a new, improved concept of DNN offloading that adds two properties, 1) fine-grained expression of DNNs and 2) concurrency of runtime resources, to existing collaborative inference. In CoActo, system components go beyond simple model splitting of existing approaches and operate more proactively to achieve the coactive execution of inference workloads. CoActo dynamically schedules concurrent interleaving of the mobile, server, and network operations to actively increase resource utilization, enabling lower end-to-end latency. We implement CoActo for various mobile devices and server environments and evaluate our system with distinct environment settings and DNN models. The experimental results show that our system achieves up to 2.1 times speed-up compared to the state-of-the-art collaborative inference solutions.
|confname = MobiSys'24
|link = https://dl.acm.org/doi/10.1145/3643832.3661885
|title= CoActo: CoActive Neural Network Inference Offloading with Fine-grained and Concurrent Execution
|speaker=Zhenhua
|date=2024-11-22
}}
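The coactive idea of overlapping fine-grained computation with network transfer can be sketched with plain Python threads (a generic producer/consumer toy with assumed 50 ms per-tile costs, not CoActo's runtime): each finished output tile is handed to a sender thread immediately, so compute and transfer run concurrently instead of back to back.
<syntaxhighlight lang="python">
import queue, threading, time

# Toy sketch of concurrent inference offloading (not CoActo's runtime):
# the mobile thread produces output tiles one by one, while a sender
# thread streams each finished tile to the server immediately, so
# computation and network transfer overlap instead of running in series.

tiles = queue.Queue()
DONE = object()

def mobile_compute(num_tiles):
    for i in range(num_tiles):
        time.sleep(0.05)            # pretend to compute one output tile
        tiles.put(f"tile-{i}")      # hand it off as soon as it is ready
    tiles.put(DONE)

def network_send():
    while (tile := tiles.get()) is not DONE:
        time.sleep(0.05)            # pretend to transmit the tile
        print("sent", tile)

t0 = time.time()
c = threading.Thread(target=mobile_compute, args=(8,))
s = threading.Thread(target=network_send)
c.start(); s.start(); c.join(); s.join()
print(f"overlapped pipeline took {time.time() - t0:.2f}s (serial: ~0.80s)")
</syntaxhighlight>
With 8 tiles this finishes in roughly 0.45 s instead of the ~0.8 s a compute-then-send sequence would take, which is the essence of interleaving mobile, network, and server operations.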
{{Hist_seminar
|abstract = Caching is an indispensable technique for low-cost and fast data serving. The eviction algorithm, at the heart of a cache, has been primarily designed to maximize efficiency, i.e., to reduce the cache miss ratio. Many eviction algorithms have been designed in the past decades. However, they all trade off throughput, simplicity, or both for higher efficiency, and such a compromise often hinders adoption in production systems. This work presents SIEVE, an algorithm that is simpler than LRU and provides better-than-state-of-the-art efficiency and scalability for web cache workloads. We implemented SIEVE in five production cache libraries, requiring fewer than 20 lines of code changes on average. Our evaluation on 1559 cache traces from 7 sources shows that SIEVE achieves up to 63.2% lower miss ratio than ARC. Moreover, SIEVE has a lower miss ratio than 9 state-of-the-art algorithms on more than 45% of the 1559 traces, while the next best algorithm has a lower miss ratio on only 15%. SIEVE's simplicity comes with superior scalability, as cache hits require no locking. Our prototype achieves twice the throughput of an optimized 16-thread LRU implementation. SIEVE is more than an eviction algorithm; like FIFO and LRU, it can be used as a cache primitive to build advanced eviction algorithms.
|confname =NSDI'24
|link = https://www.usenix.org/conference/nsdi24/presentation/zhang-yazhuo
|title= SIEVE is Simpler than LRU: an Efficient Turn-Key Eviction Algorithm for Web Caches
|speaker=Haotian
|date=2024-11-22
}}
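The algorithm is small enough to sketch in full (a minimal Python rendering based on the paper's description, not the authors' production code): a FIFO queue, one visited bit per object, and a hand that sweeps from tail to head evicting the first unvisited object.
<syntaxhighlight lang="python">
# Minimal sketch of the SIEVE eviction algorithm (based on the paper's
# description, not the authors' code). Objects live in a FIFO list with one
# "visited" bit each; a hand sweeps from the tail (oldest) toward the head,
# clearing visited bits and evicting the first unvisited object.

class SieveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []        # index 0 = tail (oldest), end = head (newest)
        self.visited = {}
        self.hand = 0          # sweep position in self.queue
        self.data = {}

    def get(self, key):
        if key in self.data:
            self.visited[key] = True      # the only work on a cache hit
            return self.data[key]
        return None

    def put(self, key, value):
        if key not in self.data and len(self.queue) >= self.capacity:
            self._evict()
        if key not in self.data:
            self.queue.append(key)        # insert new objects at the head
            self.visited[key] = False
        self.data[key] = value

    def _evict(self):
        while True:
            if self.hand >= len(self.queue):
                self.hand = 0             # wrap back to the tail
            key = self.queue[self.hand]
            if self.visited[key]:
                self.visited[key] = False # retained: give a second chance
                self.hand += 1
            else:
                self.queue.pop(self.hand) # evict; hand points at successor
                del self.visited[key], self.data[key]
                return

cache = SieveCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                  # mark "a" visited
cache.put("c", 3)               # evicts "b", the oldest unvisited object
print(sorted(cache.data))       # -> ['a', 'c']
</syntaxhighlight>
Because a hit only sets a bit and never reorders the queue, concurrent readers need no lock, which is where the throughput advantage over LRU comes from.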
{{Hist_seminar
|abstract = In this paper, we revisit the problems of current routing systems in terms of prediction scalability and routing result optimality. Specifically, current traffic prediction models are not suitable for large urban networks due to incomplete information about traffic conditions. Besides, existing routing systems can only plan routes based on past traffic conditions and struggle to update the optimal route for vehicles in real time. As a result, the actual route taken by vehicles differs from the ground-truth optimal path. Therefore, we propose a Just-In-Time Predictive Route Planning framework to tackle these two problems. First, we propose a Travel Time Constrained Top-k Shortest Path algorithm which pre-computes a set of candidate paths with several switch points. This gives vehicles continuous opportunities to switch to better paths in response to real-time traffic condition changes. Moreover, we present a query-driven prediction paradigm with ellipse-based search-space estimation, along with an efficient multi-query handling mechanism. This not only allows for targeted traffic prediction by prioritizing regions with valuable yet outdated traffic information, but also provides optimal results for multiple queries based on real-time traffic evolution. Evaluations on two real-life road networks demonstrate the effectiveness and efficiency of our framework and methods.
|confname =ICDE'24
|link = https://ieeexplore.ieee.org/document/10598147/authors#authors
|title= A Just-In-Time Framework for Continuous Routing
|speaker=Zhenguo
|date=2024-11-08
}}
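The candidate-path idea can be approximated with off-the-shelf graph tooling (a toy sketch using networkx; the detour budget and enumeration cap are assumptions, and this is not the paper's Travel Time Constrained Top-k Shortest Path algorithm):
<syntaxhighlight lang="python">
import itertools
import networkx as nx

# Toy sketch of pre-computing candidate routes (not the paper's algorithm):
# enumerate simple paths in increasing travel time and keep the first k
# within a detour budget, so the vehicle can later switch among them as
# traffic conditions change.

def candidate_paths(G, src, dst, k=3, detour=1.5):
    paths = nx.shortest_simple_paths(G, src, dst, weight="time")
    best = None
    kept = []
    for p in itertools.islice(paths, 50):       # cap enumeration for safety
        t = nx.path_weight(G, p, weight="time")
        best = best if best is not None else t  # time of the shortest path
        if t <= detour * best:
            kept.append((p, t))
        if len(kept) == k:
            break
    return kept

G = nx.DiGraph()
G.add_weighted_edges_from(
    [("A", "B", 4), ("B", "D", 4), ("A", "C", 5), ("C", "D", 5), ("A", "D", 12)],
    weight="time",
)
print(candidate_paths(G, "A", "D"))  # -> shortest path plus in-budget detours
</syntaxhighlight>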
{{Hist_seminar
|abstract = Many networking tasks now employ deep learning (DL) to solve complex prediction and optimization problems. However, the current design philosophy of DL-based algorithms entails intensive engineering overhead due to the manual design of deep neural networks (DNNs) for different networking tasks. Besides, DNNs tend to achieve poor generalization performance on unseen data distributions and environments. Motivated by the recent success of large language models (LLMs), this work studies LLM adaptation for networking to explore a more sustainable design philosophy. With its powerful pre-trained knowledge, the LLM is promising to serve as the foundation model to achieve "one model for all tasks" with even better performance and stronger generalization. In pursuit of this vision, we present NetLLM, the first framework that provides a coherent design to harness the powerful capabilities of LLMs with low effort to solve networking problems. Specifically, NetLLM empowers the LLM to effectively process multimodal data in networking and efficiently generate task-specific answers. Besides, NetLLM drastically reduces the cost of fine-tuning the LLM to acquire domain knowledge for networking. Across three networking-related use cases (viewport prediction, adaptive bitrate streaming, and cluster job scheduling), we showcase that the NetLLM-adapted LLM significantly outperforms state-of-the-art algorithms.
|confname =SIGCOMM'24
|link = https://dl.acm.org/doi/abs/10.1145/3651890.3672268
|title= NetLLM: Adapting Large Language Models for Networking
|speaker=Yinghao
|date=2024-11-08
}}
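NetLLM's low-cost fine-tuning relies on parameter-efficient adaptation; the general flavor can be shown with a generic low-rank (LoRA-style) adapter in PyTorch (an illustrative sketch with arbitrary sizes and rank, not NetLLM's code):
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Generic low-rank adaptation (LoRA-style) sketch in PyTorch, illustrating
# the kind of parameter-efficient fine-tuning NetLLM builds on. This is not
# NetLLM's code: layer sizes and rank here are arbitrary placeholders.

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze pre-trained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus trainable low-rank update B @ A.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable}/{total}")   # only the low-rank factors
</syntaxhighlight>
Only the two small factor matrices are trained, which is why this style of adaptation costs a tiny fraction of full fine-tuning.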
{{Hist_seminar
|abstract = Sparsely-activated Mixture-of-Expert (MoE) layers have found practical applications in enlarging the model size of large-scale foundation models, with only a sub-linear increase in computation demands. Despite the wide adoption of hybrid parallel paradigms like model parallelism, expert parallelism, and expert-sharding parallelism (i.e., MP+EP+ESP) to support MoE model training on GPU clusters, the training efficiency is hindered by communication costs introduced by these parallel paradigms. To address this limitation, we propose Parm, a system that accelerates MP+EP+ESP training by designing two dedicated schedules for placing communication tasks. The proposed schedules eliminate redundant computations and communications and enable overlaps between intra-node and inter-node communications, ultimately reducing the overall training time. As the two schedules are not mutually exclusive, we provide comprehensive theoretical analyses and derive an automatic and accurate solution to determine which schedule should be applied in different scenarios. Experimental results on an 8-GPU server and a 32-GPU cluster demonstrate that Parm outperforms the state-of-the-art MoE training system, DeepSpeed-MoE, achieving 1.13× to 5.77× speedup on 1296 manually configured MoE layers and approximately 3× improvement on two real-world MoE models based on BERT and GPT-2.
|confname = INFOCOM'24
|link = https://ieeexplore.ieee.org/abstract/document/10621327
|title= Parm: Efficient Training of Large Sparsely-Activated Models with Dedicated Schedules
}}
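The schedule-selection question can be illustrated with a toy cost model (an assumption-laden sketch, not Parm's analytical solution; the bandwidth numbers are placeholders): overlapping intra-node and inter-node communication pays off whenever neither phase dominates completely.
<syntaxhighlight lang="python">
# Toy cost model for choosing between communication schedules (illustrative
# only, not Parm's derivation): one schedule runs intra-node and inter-node
# communication back to back, the other overlaps them.

def sequential_time(intra_bytes, inter_bytes, intra_bw, inter_bw):
    return intra_bytes / intra_bw + inter_bytes / inter_bw

def overlapped_time(intra_bytes, inter_bytes, intra_bw, inter_bw):
    return max(intra_bytes / intra_bw, inter_bytes / inter_bw)

def pick_schedule(intra_bytes, inter_bytes, intra_bw=300e9, inter_bw=25e9):
    seq = sequential_time(intra_bytes, inter_bytes, intra_bw, inter_bw)
    ovl = overlapped_time(intra_bytes, inter_bytes, intra_bw, inter_bw)
    return ("overlap", ovl) if ovl < seq else ("sequential", seq)

# 1 GB over NVLink-class intra-node links vs 1 GB across nodes:
print(pick_schedule(1e9, 1e9))   # -> ('overlap', 0.04)
</syntaxhighlight>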
{{Hist_seminar
|abstract = Video super-resolution (VSR) on mobile devices aims to restore high-resolution frames from their low-resolution counterparts while satisfying requirements on performance, FLOPs, and latency. On one hand, partial feature processing, a classic and acknowledged strategy, is used in current studies to reach an appropriate trade-off between FLOPs and accuracy. However, the channel splitting in the partial feature processing strategy is usually performed in a blind manner, which reduces the computational efficiency and performance gains. On the other hand, current methods for mobile platforms primarily treat VSR as an extension of single-image super-resolution to reduce model computation and inference latency. However, the lack of inter-frame information interaction in current methods results in a suboptimal latency and accuracy trade-off. To this end, we propose a novel architecture, termed Feature Aggregating Network with Inter-frame Interaction (FANI), a lightweight VSR network that nevertheless exploits frame-wise correlation and achieves real-time inference while maintaining superior performance. FANI accepts adjacent multi-frame low-resolution images as input and consists of several fully-connection-embedded modules, i.e., Multi-stage Partial Feature Distillation (MPFD) modules, for capturing multi-level feature representations. Moreover, considering the importance of inter-frame alignment, we further employ a tiny Attention-based Frame Alignment (AFA) module to promote inter-frame information flow and aggregation efficiently. Extensive experiments on a well-known dataset and a real-world mobile device demonstrate the superiority of our proposed FANI, showing that FANI adapts well to mobile devices and produces visually pleasing results.
|confname = ICDM'23
|link = https://ieeexplore.ieee.org/abstract/document/10415812
|title= Feature Aggregating Network with Inter-Frame Interaction for Efficient Video Super-Resolution
}}
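The partial feature processing strategy it builds on can be sketched as a small PyTorch module (a generic illustration with an arbitrary split ratio, not FANI's MPFD block): only part of the channels passes through convolutions, and the rest is carried over and fused cheaply.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Generic sketch of partial feature processing (illustration only, not
# FANI's MPFD module): split the channels, run convolutions on one part,
# pass the other part through untouched, then fuse. This trades a small
# accuracy cost for a large FLOPs reduction versus convolving everything.

class PartialFeatureBlock(nn.Module):
    def __init__(self, channels: int, ratio: float = 0.5):
        super().__init__()
        self.split = int(channels * ratio)        # channels that get processed
        self.conv = nn.Sequential(
            nn.Conv2d(self.split, self.split, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(self.split, self.split, 3, padding=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # cheap 1x1 fusion

    def forward(self, x):
        a, b = x[:, : self.split], x[:, self.split :]
        return self.fuse(torch.cat([self.conv(a), b], dim=1))

x = torch.randn(1, 32, 64, 64)
print(PartialFeatureBlock(32)(x).shape)   # -> torch.Size([1, 32, 64, 64])
</syntaxhighlight>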

====2023====
====2022====
====2021====
====2020====
{{Hist_seminar
|confname = Topic
|link = https://mobinets.org/index.php?title=Resource:Seminar
|title= The path planning algorithm for multiple mobile edge servers in EdgeGO
|speaker=Rong Cong
|date=2020-11-18
}}
====2019====
====2018====
====2017====
=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and location information.
* Copy the code of the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date via |date=.
* Fill in each field for the latest seminar.
* Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format reference:
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}