Difference between revisions of "Resource:Seminar"

{{SemNote
|time='''2026-01-30 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


{{Latest_seminar
|abstract = LoRa technology promises to enable Internet of Things applications over large geographical areas. However, its performance is often hampered by poor channel quality in urban environments, where blockage and multipath effects are prevalent. Our study uncovers that a slight shift in the position or attitude of the receiving antenna can substantially improve the received signal quality. This phenomenon can be attributed to the rich multipath characteristics of wireless signal propagation in urban environments, wherein even small antenna movement can alter the dominant signal path or reduce the polarization angular difference between transceivers. Leveraging these key observations, we propose and implement MoLoRa, an intelligent mobile antenna system designed to enhance LoRa packet reception. At its core, MoLoRa represents the position and attitude of an antenna as a state and employs a statistical optimization method to search for states that offer optimal signal quality efficiently. Through extensive evaluation, we demonstrate that MoLoRa achieves a maximum Signal-to-Noise Ratio (SNR) gain of 13 dB in a few attempts, enabling formerly problematic blind spots to reconnect and strengthening links for other nodes.
|confname = SenSys'25
|link = https://dl.acm.org/doi/10.1145/3715014.3722075
|title = MoLoRa: Intelligent Mobile Antenna System for Enhanced LoRa Reception in Urban Environments
|speaker = Kai Chen
|date = 2026-01-30
}}
{{Latest_seminar
|abstract = Large language models (LLMs) achieve superior performance in generative tasks. However, due to the natural gap between language model generation and structured information extraction in three dimensions: task type, output format, and modeling granularity, they often fall short in structured information extraction, a crucial capability for effective data utilization on the web. In this paper, we define the generation process of the language model as the controllable state transition, aligning the generation and extraction processes to ensure the integrity of the output structure and adapt to the goals of the information extraction task. Furthermore, we propose the Structure2Text decider to help the language model understand the fine-grained extraction information, which converts the structured output into natural language and makes state decisions, thereby focusing on the task-specific information kernels, and alleviating language model hallucinations and incorrect content generation. We conduct extensive experiments and detailed analyses on myriad information extraction tasks, including named entity recognition, relation extraction, and event argument extraction. Our method not only achieves significant performance improvements but also considerably enhances the model's capability to generate precise and relevant content, making the extracted content easy to parse.
|confname = WWW'25
|link = https://dl.acm.org/doi/abs/10.1145/3696410.3714571
|title = Bridging the Gap: Aligning Language Model Generation with Structured Information Extraction via Controllable State Transition
|speaker = Daobin
|date = 2026-01-30
}}
{{Resource:Previous_Seminars}}

Latest revision as of 10:51, 30 January 2026



Instructions

Please update this page using the Latest_seminar and Hist_seminar templates.

    • Update the time and address information
    • Copy the code of the current latest seminar section to this page
    • Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field |date=
    • Fill in each field of the latest seminar
    • Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format notes
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
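
As a worked example of the archiving steps above (all field values copied from this page's current MoLoRa entry), the filled-in Hist_seminar block would look like:

{{Hist_seminar
|confname = SenSys'25
|link = https://dl.acm.org/doi/10.1145/3715014.3722075
|title = MoLoRa: Intelligent Mobile Antenna System for Enhanced LoRa Reception in Urban Environments
|speaker = Kai Chen
|date = 2026-01-30
}}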