=== History ===
====2025====
{{Hist_seminar
|abstract = Running deep neural networks (DNNs) on large-scale videos from widely distributed cameras presents two significant challenges. Firstly, video quality for analytical purposes is severely impacted by the camera deployment environment, which is termed Pixel Recession in this paper. Secondly, low-latency video streaming from the source camera to edge servers is greatly hindered by the rapid expansion of video traffic. Despite numerous efforts such as enhancing the video structure, uneven encoding, and filtering frames captured on camera, these methods have proven insufficient to address the challenges at hand. We propose Spliceosome, a novel video analytics system that effectively overcomes the pixel recession and streaming bottlenecks. In brief, Spliceosome 1) recovers from pixel recession by adaptive video knobs (i.e., brightness and contrast) tuning in ARP (anchor region proposal) granularity, and 2) lowers the transmission volume by video thinning, which uses only single-channel information for video encoding. We implemented Spliceosome using only commercial off-the-shelf hardware. Our experimental results demonstrate that Spliceosome outperforms other alternative designs by 4.71-14.47%, 40.94-58.71%, and 14.28% in detection accuracy, end-to-end delay, and efficiency of DNNs inference, respectively.
|confname =ToN'25
|link = https://ieeexplore.ieee.org/abstract/document/10843977
|title= Spliceosome: On-Camera Video Thinning and Tuning for Timely and Accurate Analytics
|speaker=Zhongwei Sun
|date=2025-11-28
}}{{Hist_seminar
|abstract = The rapid expansion of large language models (LLMs) requires the development of extensive GPU clusters, with companies deploying clusters with tens to hundreds of thousands of GPUs. This growth significantly expands the design space for LLM training systems, requiring thorough exploration of different parallelization strategies, communication parameters, congestion control, fabric topology, etc. Current methods require up to 10k simulation experiments to identify optimal configurations, with inadequate exploration leading to significant degradation of training performance. In this paper, we tackle the overlooked problem of efficiently conducting parallel simulation experiments for design space exploration.
|confname =
|link =
|title =
|speaker =
|date =
}}{{Hist_seminar
|abstract = As Large Language Models (LLMs) continue to scale, optimizing their deployment requires efficient hardware and system co-design. However, current LLM performance evaluation frameworks fail to capture both chip-level execution details and system-wide behavior, making it difficult to assess realistic performance bottlenecks. In this work, we introduce ReaLLM, a trace-driven simulation framework designed to bridge the gap between detailed accelerator design and large-scale inference evaluation. Unlike prior simulators, ReaLLM integrates kernel profiling derived from detailed microarchitectural simulations with a new trace-driven end-to-end system simulator, enabling precise evaluation of parallelism strategies, batching techniques, and scheduling policies. To address the high computational cost of exhaustive simulations, ReaLLM constructs a precomputed kernel library based on hypothesized scenarios, interpolating results to efficiently explore a vast design space of LLM inference systems. Our validation against real hardware demonstrates the framework's accuracy, achieving an average end-to-end latency prediction error of only 9.1% when simulating inference tasks running on 4 NVIDIA H100 GPUs. We further use ReaLLM to evaluate popular LLMs' end-to-end performance across traces from different applications and identify key system bottlenecks, showing that modern GPU-based LLM inference is increasingly compute-bound rather than memory-bandwidth bound at large scale. Additionally, we significantly reduce simulation time with our precomputed kernel library by a factor of 6× for full simulations and 164× for workload SLO exploration. ReaLLM is open-source and available at https://github.com/bespoke-silicon-group/reallm.
|confname =ASAP'25
|link = https://ieeexplore.ieee.org/abstract/document/11113621
|title= ReaLLM: A Trace-Driven Framework for Rapid Simulation of Large-Scale LLM Inference
|speaker=JunZhe
|date=2025-11-21
}}{{Hist_seminar
|abstract =With the proliferation of mobile devices, spatial crowdsourcing has emerged as a promising paradigm for facilitating location-based services, encompassing various applications across academia and industries. Recently, pioneering works have attempted to infer workers' mobility patterns from historical data to improve the quality of task assignment. However, these studies have overlooked or under-examined issues such as the dynamic mobility patterns of crowd workers, especially in the context of newcomers, the misalignment between the objectives of mobility prediction and task assignment, and the effective utilization of predicted mobility patterns. In this paper, we investigate a problem we term Task Assignment in Mobility Prediction-aware Spatial Crowdsourcing (TAMP). To address the TAMP problem, we first propose a task-adaptive meta-learning algorithm, which trains a set of specific meta-knowledge for workers' mobility prediction models through game theory-based learning task clustering and meta-training within each cluster. Then, we design a task assignment-oriented loss function and develop a task assignment algorithm that incorporates prediction performance, prioritizing assignments with higher confidence of completion. Extensive experiments on real-world datasets validate that our proposed methods can effectively improve the quality of task assignment.
|confname =ICDE'25
|link = https://ieeexplore.ieee.org/document/11113007
|title= Effective Task Assignment in Mobility Prediction-Aware Spatial Crowdsourcing
|speaker= Zhenguo
|date=2025-11-21
}}{{Hist_seminar
|abstract = Entanglement distribution across remote distances is critical for many quantum applications. Currently, the de facto approach for remote entanglement distribution relies on optical fiber for on-the-ground entanglement distribution. However, the fiber-based approach is incapable of global-scale entanglement distribution due to intrinsic limitations. This paper investigates a new hybrid ground-satellite quantum network architecture (QuESat) for global-scale entanglement distribution, integrating an on-the-ground fiber network with a global-scale passive optical network built with low-Earth-orbit satellites. The satellite network provides dynamic construction of photon lightpaths based on near-vacuum beam guides constructed via adjustable arrays of lenses, forwarding photons from one ground station to another with very high efficiency over long distances compared to using fiber. To assess the feasibility and effectiveness of QuESat for global communication, we formulate lightpath provisioning and entanglement distribution problems, considering the orbital dynamics of satellites and the time-varying entanglement demands from ground users. A two-stage algorithm is developed to dynamically configure the beam guides and distribute entanglements, respectively. The algorithm combines randomized and deterministic rounding for lightpath provisioning to enable global connectivity, with optimal entanglement swapping for distributing entanglements to meet users' demands. By developing a ground-satellite quantum network simulator, QuESat achieves multi-fold improvements compared to repeater networks.
|confname = INFOCOM'25
|link = https://ieeexplore.ieee.org/document/11044649
|title= QuESat: Satellite-Assisted Quantum Internet for Global-Scale Entanglement Distribution
|speaker= Yaliang
|date=2025-11-07
}}{{Hist_seminar
|abstract =The global business of transnational enterprises demands geo-distributed databases, where the leader-follower-based consensus protocols are the key to guaranteeing consistency of replicas spread across regions. Compared with traditional databases running in a single data center, determining which node is the leader in consensus protocol has a greater performance impact in geo-distributed databases running across multiple data centers. However, the performance of legacy leader management is far from satisfactory due to the network and application dynamics (e.g., network delay, node popularity, operation read-write ratio). This paper proposes GeoLM toward performance-oriented leader management for geo-distributed consensus protocols. GeoLM captures the network and application dynamics and proactively conducts seamless leader handovers with bounded switching costs. Our geo-distributed experimental results show that GeoLM improves performance up to 49.75% over the baselines (e.g., Raft and Geo-Raft) and achieves considerably good performance compared to state-of-the-art consensus protocols (e.g., SwiftPaxos, CURP, and EPaxos).
|confname = INFOCOM'25
|link = https://ieeexplore.ieee.org/document/11044598
|title= GeoLM: Performance-oriented Leader Management for Geo-Distributed Consensus Protocol
|speaker= Linqi Liu
|date=2025-11-07
}}{{Hist_seminar
|abstract = Immersive telepresence has the potential to revolutionize remote communication by offering a highly interactive and engaging user experience. However, state-of-the-art exchanges large volumes of 3D content to achieve satisfactory visual quality, resulting in substantial Internet bandwidth consumption. To tackle this challenge, we introduce MagicStream, a first-of-its-kind semantic-driven immersive telepresence system that effectively extracts and delivers compact semantic details of captured 3D representation of users, instead of traditional bit-by-bit communication of raw content. To minimize bandwidth consumption while maintaining low end-to-end latency and high visual quality, MagicStream incorporates the following key innovations: (1) efficient extraction of user's skin/cloth color and motion semantics based on lighting characteristics and body keypoints, respectively; (2) novel, real-time human body reconstruction from motion semantics; and (3) on-the-fly neural rendering of users' immersive representation with color semantics. We implement a prototype of MagicStream and extensively evaluate its performance through both controlled experiments and user trials. Our results show that, compared to existing schemes, MagicStream can drastically reduce Internet bandwidth usage by up to 1195X while maintaining good visual quality.
|confname = Sensys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699344
|title= MagicStream: Bandwidth-conserving Immersive Telepresence via Semantic Communication
|speaker= Mengfan Wang
|date=2025-10-31
}}{{Hist_seminar
|abstract =To fulfill computing demands of numerous Internet of Things (IoT) devices in infrastructure-free regions, low earth orbit (LEO) satellite edge computing has been proposed in recent years, to circumvent the latency arising from long backhaul and link congestion in traditional cloud computing mode. This article proposes a novel time-varying graph-based collaborative task offloading strategy for LEO satellite IoT to reduce task computing latency. To this end, a computing coordinate graph (CCG) is designed to characterize the time-varying topology and resource distribution of LEO satellite networks. When a task is offloaded to LEO satellite networks because local computing capability is unable to meet latency constraint, the position of the task access satellite in the CCG is determined first. Then, the expanded hop counts from all satellite nodes to the access satellite are calculated, which informs the partitioning of different node sets. Afterwards, considering both link and on-board computing resources, with the access satellite as the reference node, the minimum total task computing latency for each node set is obtained in an ascending order of the expanded hop counts. Finally, the minimum one among obtained latency values is the anticipated total task computing latency. Simulation results demonstrate the effectiveness of the proposed task offloading strategy in reducing task computing latency.
|confname = Systems Journal
|link = https://ieeexplore.ieee.org/document/11024019
|title= Collaborative Task Offloading for LEO Satellite Internet of Things: A Novel Computing Coordinate Graph-Based Approach
|speaker= Yifei Zhou
|date=2025-10-31
}}
{{Hist_seminar
|abstract = Unlike traditional data collection applications (e.g., environment monitoring) that are dominated by uplink transmissions, the newly emerging applications (e.g., device actuation, firmware update, packet reception acknowledgement) also pose ever-increasing demands on downlink transmission capabilities. However, current LoRaWAN falls short in supporting such applications primarily due to downlink-uplink asymmetry. While the uplink can concurrently receive multiple packets, downlink transmission is limited to a single logical channel at a time, which fundamentally hinders the deployment of downlink-hungry applications. To tackle this practical challenge, FDLoRa develops the first-of-its-kind in-band full-duplex LoRa gateway design with novel solutions to mitigate the impact of self-interference (i.e., strong downlink interference to ultra-weak uplink reception), which unleashes the full spectrum for in-band downlink transmissions without compromising the reception of weak uplink packets. Built upon the full-duplex gateways, FDLoRa introduces a new downlink framework to support concurrent downlink transmissions over multiple logical channels of available gateways. Evaluation results demonstrate that FDLoRa boosts downlink capacity by 5.7x compared to LoRaWAN on a three-gateway testbed and achieves 2.58x higher downlink concurrency per gateway than the state-of-the-art.
|confname = Sensys'24
|link = https://dl.acm.org/doi/10.1145/3666025.3699338
|title= FDLoRa: Tackling Downlink-Uplink Asymmetry with Full-duplex LoRa Gateways
|speaker= Kai Chen
|date=2025-10-23
}}{{Hist_seminar
|abstract =Recent years have witnessed a widespread adoption of containers. While containers simplify and accelerate application development, existing container network technologies either incur significant overhead, which hurts performance for distributed applications, or lose flexibility or compatibility, which hinders the widespread deployment in production. We carefully analyze the kernel data path of an overlay network, quantifying the time consumed by each segment of the data path and identifying the extra overhead in an overlay network compared to bare metal. We observe that this extra overhead generates repetitive results among packets, which inspires us to introduce caches within an overlay network. We design and implement ONCache (Overlay Network Cache), a cache-based container overlay network, to eliminate the extra overhead while maintaining flexibility and compatibility. We implement ONCache using the extended Berkeley Packet Filter (eBPF) with only 524 lines of code, and integrate it as a plugin of Antrea. With ONCache, containers attain networking performance akin to that of bare metal. Compared to the standard overlay networks, ONCache improves throughput and request-response transaction rate by 12% and 36% for TCP (20% and 34% for UDP), respectively, while significantly reducing per-packet CPU overhead. Popular distributed applications also benefit from ONCache.
|confname = NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/lin-shengkai
|title= ONCache: A Cache-Based Low-Overhead Container Overlay Network
|speaker= Daobing Zeng
|date=2025-10-24
}}
{{Hist_seminar
|abstract = We present HyperCam, an energy-efficient image classification pipeline that enables computer vision tasks onboard low-power IoT camera systems. HyperCam leverages hyperdimensional computing to perform training and inference efficiently on low-power microcontrollers. We implement a low-power wireless camera platform using off-the-shelf hardware and demonstrate that HyperCam can achieve an accuracy of 93.60%, 84.06%, 92.98%, and 72.79% for MNIST, Fashion-MNIST, Face Detection, and Face Identification tasks, respectively, while significantly outperforming other classifiers in resource efficiency. Specifically, it delivers inference latency of 0.08-0.27s while using 42.91-63.00KB flash memory and 22.25KB RAM at peak. Among other machine learning classifiers such as SVM, xgBoost, MicroNets, MobileNetV3, and MCUNetV3, HyperCam is the only classifier that achieves competitive accuracy while maintaining competitive memory footprint and inference latency that meets the resource requirements of low-power camera systems.
|confname = arXiv
|link = https://arxiv.org/html/2501.10547v1
|title= HyperCam: Low-Power Onboard Computer Vision for IoT Cameras
|speaker= Menghao Liu
|date=2025-10-17
}}{{Hist_seminar
|abstract = We present NIER, a video conferencing system that can adaptively maintain a low bitrate (e.g., 10–100 Kbps) with reasonable visual quality while being robust to packet losses. We use key-point-based deep image animation (DIA) as a key building block and address a series of networking and system challenges to make NIER practical. Our evaluations show that NIER significantly outperforms the baseline solutions.
|confname =SIGCOMM'25 (short paper)
|link = https://dl.acm.org/doi/pdf/10.1145/3718958.3750518
|title= NIER: Practical Neural-enhanced Low-bitrate Video Conferencing
|speaker=Xinyan Wang
|date=2025-09-26
}}{{Hist_seminar
|abstract = Distributed Edge Computing (DEC) has emerged as a novel paradigm, owing to its superior performance in communication latency, parallel computing efficiency, and energy consumption. With the surge of tasks in generative artificial intelligence, DEC faces higher demands for parallel computing efficiency. Scheduling multiple tasks for simultaneous processing, rather than one-by-one handling, could enhance parallel efficiency. Multiple tasks have multi-dependencies, i.e., sequence dependency, attribute similarity, and attribute correlation. Utilizing the bidirectional edges of traditional graphs to represent multi-dependencies can lead to an explosion in quantity. A hypergraph, with its hyperedges capable of connecting any number of vertices, can significantly solve the above problem. However, multi-dependencies are rarely studied in current research, posing challenges in representing and capturing them with a hypergraph. In this work, we introduce HyperJet, a joint communication and computation scheduling scheme for hypergraph tasks in DEC. To effectively represent multi-dependencies, we employ hypergraph construction to represent task attributes and utilize hypergraph partitioning to clarify and refine task attribute correlations, enhancing parallel efficiency. In response to the challenge of capturing multi-dependencies, we employ a scheduling mechanism with the hypergraph neural network that efficiently acquires higher-order attribute correlated information among convolution matrices, providing enriched contextual information on multi-dependencies that supports decision-making in scheduling tasks. The evaluations using real-world traces demonstrate an 18.07% improvement in parallel efficiency of task scheduling.
|confname =INFOCOM'25
|link = https://ieeexplore.ieee.org/abstract/document/11044587
|title= HyperJet: Joint Communication and Computation Scheduling for Hypergraph Tasks in Distributed Edge Computing
|speaker= Yi Zhou
|date=2025-09-26
}}{{Hist_seminar
|abstract = Localization of networked nodes is an essential problem in emerging applications, including first-responder navigation, automated manufacturing lines, vehicular and drone navigation, asset tracking, Internet of Things, and 5G communication networks. In this paper, we present Locate3D, a novel system for peer-to-peer node localization and orientation estimation in large networks. Unlike traditional range-only methods, Locate3D introduces angle-of-arrival (AoA) data as an added network topology constraint. The system solves three key challenges: it uses angles to reduce the number of measurements required by 4× and jointly uses range and angle data for location estimation. We develop a spanning-tree approach for fast location updates, and to ensure the output graphs are rigid and uniquely realizable, even in occluded or weakly connected areas. Locate3D cuts down latency by up to 75% without compromising accuracy, surpassing standard range-only solutions. It has a 0.86 meter median localization error for building-scale multi-floor networks (32 nodes, 0 anchors) and 12.09 meters for large-scale networks (100,000 nodes, 15 anchors).
|confname =NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/garg
|title= Large Network UWB Localization: Algorithms and Implementation
|speaker=Bangguo
|date=2025-09-26
}}
{{Hist_seminar
|abstract = With cloud-side computing and rendering, mobile cloud gaming (MCG) is expected to deliver high-quality gaming experiences to budget mobile devices. However, our measurement on representative MCG platforms reveals that even under good network conditions, all platforms exhibit high interactive latency of 112–403 ms, from a user-input action to its display response, that critically affects users’ quality of experience. Moreover, jitters in network latency often lead to significant fluctuations in interactive latency. In this work, we collaborate with a commercial MCG platform to conduct the first in-depth analysis on the interactive latency of cloud gaming. We identify VSync, the synchronization primitive of Android graphics pipeline, to be a key contributor to the excessive interactive latency; as many as five VSync events are intricately invoked, which serialize the complex graphics processing logic on both the client and cloud sides. To address this, we design an end-to-end VSync regulator, dubbed LoopTailor, which minimizes VSync events by decoupling game rendering from the lengthy cloud-side graphics pipeline and coordinating cloud game rendering directly with the client. We implement LoopTailor on the collaborated platform and commodity Android devices, reducing the interactive latency (by ∼34%) to stably below 100 ms.
|confname =NSDI'25
|link = https://www.usenix.org/conference/nsdi25/presentation/li-yang
|title= Dissecting and Streamlining the Interactive Loop of Mobile Cloud Gaming
|speaker= Li Chen
|date=2025-09-09
}}{{Hist_seminar
|abstract = The local deployment of large language models (LLMs) on mobile devices has garnered increasing attention due to its advantages in enhancing user privacy and enabling offline operation. However, given the limited computational resources of a single mobile device, only small language models (SLMs) with restricted capabilities can currently be supported. In this paper, we explore the potential of leveraging the collective computing power of multiple mobile devices to collaboratively support more efficient local LLM inference. We evaluate the feasibility and efficiency of existing parallelism techniques under the constraints of mobile devices and wireless network, identifying that chunked pipeline parallelism holds promise for realizing this vision. Building on this insight, we propose FlexSpark, a novel solution designed to achieve efficient and robust multi-device collaborative inference. FlexSpark incorporates priority scheduling, ordered communication, and elastic compression to maximize wireless bandwidth utilization, and thus accelerates distributed inference. Preliminary experimental results demonstrate that FlexSpark achieves up to a 2 × speedup compared to state-of-the-art frameworks, significantly enhancing the practicality and scalability of LLM deployment on mobile devices.
|confname =APNet'25
|link = https://dl.acm.org/doi/10.1145/3735358.3735368
|title= FlexSpark: Robust and Efficient Multi-Device Collaborative Inference over Wireless Network
|speaker=Ruizhen
|date=2025-09-19
}}
{{Hist_seminar
|abstract = Reconfigurable Intelligent Surfaces (RIS) are a promising technology for creating smart radio environments by controlling wireless propagation. However, several factors hinder the integration of RIS technology into existing cellular networks, including the incompatibility of RIS control interfaces with 5G PHY/MAC procedures for synchronizing radio scheduling decisions and RIS operation, and the cost and energy limitations of passive RIS technology. This paper presents RISENSE, a system for practical RIS integration in cellular networks. First, we propose a novel, low-cost, and low-power RIS design capable of decoding control messages without complex baseband operations or additional RF chains, utilizing a power sensor and a network of microstrip lines and couplers. Second, we design an effective in-band wireless RIS control interface, compatible with 5G PHY/MAC procedures, that embeds amplitude-modulated (AM) RIS control commands directly into standard OFDM-modulated 5G data channels. Finally, we propose a low-overhead protocol that supports swift on-demand RIS reconfigurability, making it adaptable to varying channel conditions and user mobility, while minimizing the wastage of 5G OFDM symbols. Our experiments validate the design of RISENSE and our evaluation shows that our system can reconfigure a RIS at the same pace as users move, boosting 5G coverage where static or slow RIS controllers cannot.
|confname = Mobisys'25
|link = https://dspace.networks.imdea.org/handle/20.500.12761/1925
|title= RISENSE: Long-Range In-Band Wireless Control of Passive Reconfigurable Intelligent Surfaces
|speaker= Haifeng
|date=2025-09-12
}}
{{Hist_seminar
|abstract = Traditional 3D content representations include dense point clouds that consume large amounts of data and hence network bandwidth, while newer representations such as neural radiance fields suffer from poor frame rates due to their non-standard volumetric rendering pipeline. 3D Gaussian splats (3DGS) can be seen as a generalization of point clouds that meet the best of both worlds, with high visual quality and efficient rendering for real-time frame rates. However, delivering 3DGS scenes from a hosting server to client devices is still challenging due to high network data consumption (e.g., 1.5 GB for a single scene). The goal of this work is to create an efficient 3D content delivery framework that allows users to view high quality 3D scenes with 3DGS as the underlying data representation. The main contributions of the paper are: (1) Creating new layered 3DGS scenes for efficient delivery, (2) Scheduling algorithms to choose what splats to download at what time, and (3) Trace-driven experiments from users wearing virtual reality headsets to evaluate the visual quality and latency. Our system for Layered 3D Gaussian Splats delivery (L3GS) demonstrates high visual quality, achieving 16.9% higher average SSIM compared to baselines, and also works with other compressed 3DGS representations. The code is available at https://github.com/mavens-lab/layered_3d_gaussian_splats.
|confname =Mobicom'25
|link = https://arxiv.org/html/2504.05517v1
|title= L3GS: Layered 3D Gaussian Splats for Efficient 3D Scene Delivery
|speaker=Jiyi
|date=2025-09-12
}}
{{Hist_seminar
|abstract = This year, we are embracing the exciting new trends in AIoT including MLsys, LLMs, embodied perception, volumetric videos, etc. Papers collected from top venues in 2025 will be discussed in-depth, and research problems and new ideas are to be discovered!
|confname = Beginning of the new semester
|link = https://mobinets.cn/site/Resource:Paper_Carnival_2025
|title= Paper Carnival 2025
|speaker=All
|date=2025-08-27
}}
{{Hist_seminar
|abstract = In the metaverse era, point cloud video (PCV) streaming on mobile XR devices is pivotal. While most current methods focus on PCV compression from traditional 3-DoF video services, emerging AI techniques extract vital semantic information, producing content resembling the original. However, these are early-stage and computationally intensive. To enhance the inference efficacy of AI-based approaches, accommodate dynamic environments, and facilitate applicability to metaverse XR devices, we present ISCom, an interest-aware semantic communication scheme for lightweight PCV streaming. ISCom features a region-of-interest (ROI) selection module, a lightweight encoder-decoder training module, and a learning-based scheduler to achieve real-time PCV decoding and rendering on resource-constrained devices. ISCom’s dual-stage ROI selection significantly reduces data volume according to real-time interest. The lightweight PCV encoder-decoder training is tailored to resource-constrained devices and adapts to the heterogeneous computing capabilities of devices. Furthermore, we provide a deep reinforcement learning (DRL)-based scheduler to adaptively select the optimal encoder-decoder model for various devices, considering the dynamic network environments and device computing capabilities. Our extensive experiments demonstrate that ISCom outperforms baselines on mobile devices, achieving a minimum rendering frame rate improvement of 10 FPS and up to 22 FPS. Furthermore, our method significantly reduces memory usage by 41.7% compared to the state-of-the-art AITransfer method. These results highlight the effectiveness of ISCom in enabling lightweight PCV streaming and its potential to improve immersive experiences for emerging metaverse applications.
|confname =JSAC'24
|link = https://dl.acm.org/doi/10.1109/JSAC.2023.3345430
|title= ISCom: Interest-Aware Semantic Communication Scheme for Point Cloud Video Streaming on Metaverse XR Devices
|speaker=Jiyi
|date=2025-06-13
}}
{{Hist_seminar
|abstract = Scientific Illustration Tutorial
|confname = TUTORIAL
|link = https://mobinets.cn/Resource:Seminar
|title= Idea share
|speaker=OldBee
|date=2025-06-13
}}
{{Hist_seminar
|abstract = Deploying deep convolutional neural networks (CNNs) for edge-based video analytics poses significant challenges due to the intensive computing demands. Model partitioning has emerged as a promising solution by offloading segments of CNNs to multiple proximal edge devices for collaborative inference. However, this approach often incurs substantial cross-device transmission overhead, particularly in handling intermediate feature maps. To address these limitations, we propose ReDream (REsidual feature-DRivEn mixed spArse coding for Model partitioning), a novel edge-centric video analytics framework that jointly optimizes  transmission efficiency and inference accuracy. ReDream introduces two key innovations: 1) It enhances the sparsity of intermediate features by replacing activation functions with ReLU in selected CNN layers and retraining, thereby increasing the proportion of zero-valued elements. 2) It leverages the heterogeneous distribution of feature data across layers by applying a mixed sparse coding scheme, i.e., selecting different compression methods adaptively to optimize model partitioning. These optimizations enable ReDream to support more efficient cross-device inference while maintaining high model accuracy, making it well-suited for real-time deployment in collaborative edge environments.
|confname = IDEA
|link = https://mns.uestc.cn/wiki/Research:InProgress/MixedSparseCoding
|title= ReDream: Residual Feature-Driven Mixed Sparse Coding for Model Partitioning
|speaker=Xianyang
|date=2025-05-23
}}
{{Hist_seminar
|abstract = While existing strategies to execute deep learning-based classification on low-power platforms assume the models are trained on all classes of interest, this paper posits that adopting context-awareness i.e. narrowing down a classification task to the current deployment context consisting of only recent inference queries can substantially enhance performance in resource-constrained environments. We propose a new paradigm, CACTUS, for scalable and efficient context-aware classification where a micro-classifier recognizes a small set of classes relevant to the current context and, when context change happens (e.g., a new class comes into the scene), rapidly switches to another suitable micro-classifier. CACTUS features several innovations, including optimizing the training cost of context-aware classifiers, enabling on-the-fly context-aware switching between classifiers, and balancing context switching costs and performance gains via simple yet effective switching policies. We show that CACTUS achieves significant benefits in accuracy, latency, and compute budget across a range of datasets and IoT platforms.
|confname = Mobisys'24
|link = https://dl.acm.org/doi/abs/10.1145/3643832.3661888
|title= CACTUS: Dynamically Switchable Context-aware micro-Classifiers for Efficient IoT Inference
|speaker= Zhenhua
|date=2025-04-18
}}
{{Hist_seminar
|abstract = Nowadays, volumetric videos have emerged as an attractive multimedia application providing highly immersive watching experiences since viewers could adjust their viewports at 6 degrees-of-freedom. However, the point cloud frames composing the video are prohibitively large, and effective compression techniques should be developed. There are two classes of compression methods. One suggests exploiting the conventional video codecs (2D-based methods) and the other proposes to compress the points in 3D space directly (3D-based methods). Though the 3D-based methods feature fast coding speeds, their compression ratios are low since the failure of leveraging inter-frame redundancy. To resolve this problem, we design a patch-wise compression framework working in the 3D space. Specifically, we search rigid moves of patches via the iterative closest point algorithm and construct a common geometric structure, which is followed by color compensation. We implement our decoder on a GPU platform so that real-time decoding and rendering are realized. We compare our method with GROOT, the state-of-the-art 3D-based compression method, and it reduces the bitrate by up to 5.98×. Moreover, by trimming invisible content, our scheme achieves comparable bandwidth demand of V-PCC, the representative 2D-based method, in FoV-adaptive streaming.
|confname = TC'24
|link = https://ieeexplore.ieee.org/document/10360355
|title= A GPU-Enabled Real-Time Framework for Compressing and Rendering Volumetric Videos
|speaker=Mengfan
|date=2025-04-18
}}
{{Hist_seminar
|abstract = Cross-silo federated learning (FL) enables multiple institutions (clients) to collaboratively build a global model without sharing their private data. To prevent privacy leakage during aggregation, homomorphic encryption (HE) is widely used to encrypt model updates, yet incurs high computation and communication overheads. To reduce these overheads, packed HE (PHE) has been proposed to encrypt multiple plaintexts into a single ciphertext. However, the original design of PHE does not consider the heterogeneity among different clients, an intrinsic problem in cross-silo FL, often resulting in undermined training efficiency with slow convergence and stragglers. In this work, we propose FedPHE, an efficiently packed homomorphically encrypted FL framework with secure weighted aggregation and client selection to tackle the heterogeneity problem. Specifically, using CKKS with sparsification, FedPHE can achieve efficient encrypted weighted aggregation by accounting for contributions of local updates to the global model. To mitigate the straggler effect, we devise a sketching-based client selection scheme to cherry-pick representative clients with heterogeneous models and computing capabilities. We show, through rigorous security analysis and extensive experiments, that FedPHE can efficiently safeguard clients’ privacy, achieve a training speedup of 1.85 − 4.44×, cut the communication overhead by 1.24 − 22.62× , and reduce the straggler effect by up to 1.71 − 2.39×.
|confname =INFOCOM'24
|link = https://ieeexplore.ieee.org/abstract/document/10621440
|title= Efficient and Straggler-Resistant Homomorphic Encryption for Heterogeneous Federated Learning
|speaker=Dongting
|date=2025-03-28
}}{{Hist_seminar
|abstract = Entanglement routing (ER) in quantum networks must guarantee entanglement fidelity, a property that is crucial for applications such as quantum key distribution, quantum computation, and quantum sensing. Conventional ER approaches assume that network links can only generate entanglements with a fixed fidelity, and then they rely on purification to improve end-to-end fidelities. However, recent advances in entanglement generation technologies show that quantum links can be configured by choosing among different fidelity/entanglement-rate combinations (defined in this paper as link configurations), hence enabling a more flexible assignment of quantum-network resources for meeting specific application requirements. To exploit this opportunity, we introduce the problem of link configuration for fidelity-constrained routing and purification (LC-FCRP) in Quantum Networks. We first formulate a simplified FCRP version as a Mixed Integer Linear Programming (MILP) model, where the link fidelity can be adjusted within a finite set. Then, to explore the full space of possible link configurations, we propose a link configuration algorithm based on a novel shortest-path-based fidelity determination (SPFD) algorithm w/o Bayesian Optimization, which can be applied on top of any existing ER algorithm. Numerical results demonstrate that link configuration improves the acceptance ratio of existing ER algorithms by 87%.
|confname =INFOCOM'25
|link = https://re.public.polimi.it/bitstream/11311/1281986/1/final_infocom25_link_configuration_for_entanglement_routing.pdf
|title= Link Configuration for Fidelity-Constrained Entanglement Routing in Quantum Networks
|speaker=Yaliang
|date=2025-03-27
}}
{{Hist_seminar
|abstract = Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a two-player system. Despite external guidance, the effectiveness of this system demonstrates the potential of a single LLM to tackle complex tasks. Thus, we pose a new research problem: Can we internalize the searching capabilities to fundamentally enhance the reasoning abilities of a single LLM? This work explores an orthogonal direction focusing on post-training LLMs for autoregressive searching (i.e., an extended reasoning process with self-reflection and self-exploration of new strategies). To achieve this, we propose the Chain-of-Action-Thought (COAT) reasoning and a two-stage training paradigm: 1) a small-scale format tuning stage to internalize the COAT reasoning format and 2) a large-scale self-improvement stage leveraging reinforcement learning. Our approach results in Satori, a 7B LLM trained on open-source models and data. Extensive empirical evaluations demonstrate that Satori achieves state-of-the-art performance on mathematical reasoning benchmarks while exhibiting strong generalization to out-of-domain tasks. Code, data, and models will be fully open-sourced.
|confname = arXiv
|link = https://arxiv.org/abs/2502.02508
|title= Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search
|speaker=Qinyong
|date=2025-03-14
}}{{Hist_seminar
|abstract = Light bulbs have been recently explored to design Light Fidelity (LiFi) communication to battery-free tags, thus complementing Radiofrequency (RF) backscatter in the uplink. In this paper, we show that LiFi and RF backscatter are complementary and have unexplored interactions. We introduce PassiveLiFi, a battery-free system that uses LiFi to transmit RF backscatter at a meagre power budget. We address several challenges on the system design in the LiFi transmitter, the tag and the RF receiver. We design the first LiFi transmitter that implements a chirp spread spectrum (CSS) using the visible light spectrum. We use a small bank of solar cells for both communication and harvesting, and reconfigure them based on the amount of harvested energy and desired data rate. We further alleviate the low responsiveness of solar cells with a new low-power receiver design in the tag. We design and implement a novel technique for embedding multiple symbols in the RF backscatter based on delayed chirps. Experimental results with an RF carrier of 17dBm show that we can generate RF backscatter with a range of 92.1 meters/ μW consumed in the tag, which is almost double with respect to prior work.
|confname =ToN'23
|link = https://ieeexplore.ieee.org/document/10371205/
|title= LiFi for Low-Power and Long-Range RF Backscatter
|speaker=Mengyu
|date=2025-03-14
}}
{{Hist_seminar
|abstract = Video analytics is widespread in various applications serving our society. Recent advances of content enhancement in video analytics offer significant benefits for the bandwidth saving and accuracy improvement. However, existing content-enhanced video analytics systems are excessively computationally expensive and provide extremely low throughput. In this paper, we present region-based content enhancement, which enhances only the important regions in videos, to improve analytical accuracy. Our system, RegenHance, enables high-accuracy and high-throughput video analytics at the edge by 1) a macroblock-based region importance predictor that identifies the important regions fast and precisely, 2) a region-aware enhancer that stitches sparsely distributed regions into dense tensors and enhances them efficiently, and 3) a profile-based execution planner that allocates appropriate resources for enhancement and analytics components. We prototype RegenHance on five heterogeneous edge devices. Experiments on two analytical tasks reveal that region-based enhancement improves the overall accuracy by 10-19% and achieves 2-3x throughput compared to the state-of-the-art frame-based enhancement methods.
|confname =NSDI'25
|link = https://arxiv.org/pdf/2407.16990
|title= Region-based Content Enhancement for Efficient Video Analytics at the Edge
|speaker=Xinyan
|date=2025-03-07
}}{{Hist_seminar
|abstract = Occluded person re-identification is a challenging task as human body parts could be occluded by some obstacles (e.g. trees, cars, and pedestrians) in certain scenes. Some existing pose-guided methods solve this problem by aligning body parts according to graph matching, but these graph-based methods are not intuitive and complicated. Therefore, we propose a transformer-based Pose-guided Feature Disentangling (PFD) method by utilizing pose information to clearly disentangle semantic components (e.g. human body or joint parts) and selectively match non-occluded parts correspondingly. First, Vision Transformer (ViT) is used to extract the patch features with its strong capability. Second, to preliminarily disentangle the pose information from patch information, the matching and distributing mechanism is leveraged in Pose-guided Feature Aggregation (PFA) module. Third, a set of learnable semantic views are introduced in transformer decoder to implicitly enhance the disentangled body part features. However, those semantic views are not guaranteed to be related to the body without additional supervision. Therefore, Pose-View Matching (PVM) module is proposed to explicitly match visible body parts and automatically separate occlusion features. Fourth, to better prevent the interference of occlusions, we design a Pose-guided Push Loss to emphasize the features of visible body parts. Extensive experiments over five challenging datasets for two tasks (occluded and holistic Re-ID) demonstrate that our proposed PFD is superior and promising, performing favorably against state-of-the-art methods. Code is publicly available.
|confname =AAAI'22
|link = https://arxiv.org/abs/2112.02466
|title= Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer
|speaker=Bairong
|date=2025-03-07
}}
{{Hist_seminar
|abstract = The emerging programmable networks sparked significant research on Intelligent Network Data Plane (INDP), which achieves learning-based traffic analysis at line-speed. Prior art in INDP focuses on deploying tree/forest models on the data plane. We observe a fundamental limitation in tree-based INDP approaches: although it is possible to represent even larger tree/forest tables on the data plane, the flow features that are computable on the data plane are fundamentally limited by hardware constraints. In this paper, we present BoS to push the boundaries of INDP by enabling Neural Network (NN) driven traffic analysis at line-speed. Many types of NNs (such as Recurrent Neural Network (RNN), and transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly. However, the challenge is significant: the recurrent computation scheme used in RNN inference is fundamentally different from the match-action paradigm used on the network data plane. BoS addresses this challenge by (i) designing a novel data plane friendly RNN architecture that can execute unlimited RNN time steps with limited data plane stages, effectively achieving line-speed RNN inference; and (ii) complementing the on-switch RNN model with an off-switch transformer-based traffic analysis module to further boost the overall performance. We implement a prototype of BoS using a P4 programmable switch as our data plane, and extensively evaluate it over multiple traffic analysis tasks. The results show that BoS outperforms state-of-the-art in both analysis accuracy and scalability.
|confname =NSDI'24
|link = https://www.usenix.org/conference/nsdi24/presentation/yan
|title= Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed
|speaker=Youwei
|date=2025-02-28
}}
{{Hist_seminar
|abstract = Recent advances in quantum information science enabled the development of quantum communication network prototypes and created an opportunity to study full-stack quantum network architectures. This work develops SeQUeNCe, a comprehensive, customizable quantum network simulator. Our simulator consists of five modules: hardware models, entanglement management protocols, resource management, network management, and application. This framework is suitable for simulation of quantum network prototypes that capture the breadth of current and future hardware technologies and protocols. We implement a comprehensive suite of network protocols and demonstrate the use of SeQUeNCe by simulating a photonic quantum network with nine routers equipped with quantum memories. The simulation capabilities are illustrated in three use cases. We show the dependence of quantum network throughput on several key hardware parameters and study the impact of classical control message latency. We also investigate quantum memory usage efficiency in routers and demonstrate that redistributing memory according to anticipated load increases network capacity by 69.1% and throughput by 6.8%. We design SeQUeNCe to enable comparisons of alternative quantum network technologies, experiment planning, and validation and to aid with new protocol design. We are releasing SeQUeNCe as an open source tool and aim to generate community interest in extending it.
|confname =IOPSCIENCE'21
|link = https://iopscience.iop.org/article/10.1088/2058-9565/ac22f6/meta
|title= SeQUeNCe: a customizable discrete-event simulator of quantum networks
|speaker=Junzhe
|date=2025-02-21
}}{{Hist_seminar
|abstract = This article proposes a remote environmental monitoring system based on low-power Internet of Things, which is applied in smart agriculture to achieve remote and real-time measurement of temperature, humidity, and light intensity parameters in the crop growth environment within the coverage range of the device. The system adopts low-power Internet of Things technology, which has the characteristics of wide coverage, multiple connections, fast speed, low cost, low power consumption, and excellent architecture. The overall design of the system includes multiple environmental monitoring nodes, a LoRa gateway, and corresponding environmental monitoring host computer software. In terms of system software, it involves programming of the node MCU and the client host computer software. The key technology implementation includes the hardware design and implementation of low-power sensor nodes and the development of the LoRa protocol. System testing and performance analysis show that the optimized LoRa protocol performs well in communication distance, power consumption, stability, and other aspects, laying the foundation for the efficient operation of the system. This study provides a powerful tool for sustainable resource management, which helps to promote agricultural modernization and rural revitalization.
|confname =CISCE'24
|link = https://ieeexplore.ieee.org/abstract/document/10653076
|title= A Long Distance Environmental Monitoring System Based on Low Power IoT
|speaker= Ayesha Rasool
|date=2025-02-21
}}
{{Hist_seminar
|abstract = Recently, smart roadside infrastructure (SRI) has demonstrated the potential of achieving fully autonomous driving systems. To explore the potential of infrastructure-assisted autonomous driving, this paper presents the design and deployment of Soar, the first end-to-end SRI system specifically designed to support autonomous driving systems. Soar consists of both software and hardware components carefully designed to overcome various system and physical challenges. Soar can leverage the existing operational infrastructure like street lampposts for a lower barrier of adoption. Soar adopts a new communication architecture that comprises a bi-directional multi-hop I2I network and a downlink I2V broadcast service, which are designed based on off-the-shelf 802.11ac interfaces in an integrated manner. Soar also features a hierarchical DL task management framework to achieve desirable load balancing among nodes and enable them to collaborate efficiently to run multiple data-intensive autonomous driving applications. We deployed a total of 18 Soar nodes on existing lampposts on campus, which have been operational for over two years. Our real-world evaluation shows that Soar can support a diverse set of autonomous driving applications and achieve desirable real-time performance and high communication reliability. Our findings and experiences in this work offer key insights into the development and deployment of next-generation smart roadside infrastructure and autonomous driving systems.
|confname =MobiCom'24
|link = https://dl.acm.org/doi/abs/10.1145/3636534.3649352
|title= Soar: Design and Deployment of A Smart Roadside Infrastructure System for Autonomous Driving
|speaker=Jiahao
|date=2025-01-10
}}{{Hist_seminar
|abstract = GPUs are increasingly utilized for running DNN tasks on emerging mobile edge devices. Beyond accelerating single task inference, their value is also particularly apparent in efficiently executing multiple DNN tasks, which often have strict latency requirements in applications. Preemption is the main technology to ensure multitasking timeliness, but mobile edges primarily offer two priorities for task queues, and existing methods thus achieve only coarse-grained preemption by categorizing DNNs into real-time and best-effort, permitting a real-time task to preempt best-effort ones. However, the efficacy diminishes significantly when other real-time tasks run concurrently, but this is already common in mobile edge applications. Due to different hardware characteristics, solutions from other platforms are unsuitable. For instance, GPUs on traditional mobile devices primarily assist CPU processing and lack special preemption support, mainly following FIFO in GPU scheduling. Clouds handle concurrent task execution, but focus on allocating one or more GPUs per complex model, whereas on mobile edges, DNNs mainly vie for one GPU. This paper introduces Pantheon, designed to offer fine-grained preemption, enabling real-time tasks to preempt each other and best-effort tasks. Our key observation is that the two-tier GPU stream priorities, while underexplored, are sufficient. Efficient preemption can be realized through software design by innovative scheduling and novel exploitation of the nested redundancy principle for DNN models. Evaluation on a diverse set of DNNs shows substantial improvements in deadline miss rate and accuracy of Pantheon over state-of-the-art methods.
|confname =MobiSys'24
|link = https://dl.acm.org/doi/abs/10.1145/3643832.3661878
|title= Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs
|speaker=Jiale
|date=2025-01-10
}}
{{Hist_seminar
|abstract = Volumetric videos offer a unique interactive experience and have the potential to enhance social virtual reality and telepresence. Streaming volumetric videos to multiple users remains a challenge due to its tremendous requirements of network and computation resources. In this paper, we develop MuV2, an edge-assisted multi-user mobile volumetric video streaming system to support important use cases such as tens of students simultaneously consuming volumetric content in a classroom. MuV2 achieves high scalability and good streaming quality through three orthogonal designs: hybridizing direct streaming of 3D volumetric content with remote rendering, dynamically sharing edge-transcoded views across users, and multiplexing encoding tasks of multiple transcoding sessions into a limited number of hardware encoders on the edge. MuV2 then integrates the three designs into a holistic optimization framework. We fully implement MuV2 and experimentally demonstrate that MuV2 can deliver high-quality volumetric videos to over 30 concurrent untethered mobile devices with a single WiFi access point and a commodity edge server.
|date=2025-01-03
}}
====2024====
{{Hist_seminar
|abstract = Packet routing in virtual networks requires virtual-to-physical address translation. The address mappings are updated by a single party, i.e., the network administrator, but they are read by multiple devices across the network when routing tenant packets. Existing approaches face an inherent read-write performance tradeoff: they either store these mappings in dedicated gateways for fast updates at the cost of slower forwarding or replicate them at end-hosts and suffer from slow updates. SwitchV2P aims to escape this tradeoff by leveraging the network switches to transparently cache the address mappings while learning them from the traffic. SwitchV2P brings the mappings closer to the sender, thus reducing the first packet latency and translation overheads, while simultaneously enabling fast mapping updates, all without changing existing routing policies and deployed gateways. The topology-aware data-plane caching protocol allows the switches to transparently adapt to changing network conditions and varying in-switch memory capacity. Our evaluation shows the benefits of in-network address mapping, including an up to 7.8× and 4.3× reduction in FCT and first packet latency respectively, and a substantial reduction in translation gateway load. Additionally, SwitchV2P achieves up to a 1.9× reduction in bandwidth overheads and requires order-of-magnitude fewer gateways for equivalent performance.
|speaker=Zhenghua
|date=2024-01-04}}
====2023====
=== Instructions ===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and location information.
* Copy the code of the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date= (see the worked example after the template skeletons below).
* Fill in each field of the latest seminar.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Template formats
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
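
For illustration, here is a hypothetical filled-in entry after converting a Latest_seminar block to Hist_seminar; every field value below (conference name, link, title, speaker, and date) is a placeholder rather than a real seminar record, and the link simply reuses this page's fallback address mentioned above:

{{Hist_seminar
|abstract = One- or two-sentence summary of the presented paper goes here.
|confname = NSDI'25
|link = https://mobinets.org/index.php?title=Resource:Seminar
|title = Example Paper Title
|speaker = ExampleSpeaker
|date = 2025-12-05
}}

Compared with the Latest_seminar block, the only structural change is the added |date= field recording when the talk was given.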