{{SemNote
|time='''2024-09-29 10:30-12:00'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract=Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis. However, existing video analytics systems for multi-camera streams are mostly limited to (i) per-camera processing and aggregation and (ii) workload-agnostic centralized processing architectures. In this paper, we present Argus, a distributed video analytics system with cross-camera collaboration on smart cameras. We identify multi-camera, multi-target tracking as the primary task of multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy identification tasks by leveraging object-wise spatio-temporal association in the overlapping fields of view across multiple cameras. We further develop a set of techniques to perform these operations across distributed cameras without cloud support at low latency by (i) dynamically ordering the camera and object inspection sequence and (ii) flexibly distributing the workload across smart cameras, taking into account network transmission and heterogeneous computational capacities. Evaluation of three real-world overlapping camera datasets with two Nvidia Jetson devices shows that Argus reduces the number of object identifications and end-to-end latency by up to 7.13× and 2.19× (4.86× and 1.60× compared to the state-of-the-art), while achieving comparable tracking quality.
|confname=TMC'24
|link=https://ieeexplore.ieee.org/abstract/document/10682605
|title=Argus: Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras
|speaker=Bairong
|date=2024-09-29
}}
 
{{Latest_seminar
|abstract=We present FarfetchFusion, a fully mobile live 3D telepresence system. Enabling mobile live telepresence is a challenging problem as it requires i) realistic reconstruction of the user and ii) high responsiveness for immersive experience. We first thoroughly analyze the live 3D telepresence pipeline and identify three critical challenges: i) 3D data streaming latency and compression complexity, ii) computational complexity of volumetric fusion-based 3D reconstruction, and iii) inconsistent reconstruction quality due to sparsity of mobile 3D sensors. To tackle the challenges, we propose a disentangled fusion approach, which separates invariant regions and dynamically changing regions with our low-complexity spatio-temporal alignment technique, topology anchoring. We then design and implement an end-to-end system, which achieves realistic reconstruction quality comparable to existing server-based solutions while meeting the real-time performance requirements (<100 ms end-to-end latency, 30 fps throughput, <16 ms motion-to-photon latency) solely relying on mobile computation capability.
|confname=MobiCom'23
|link=https://dl.acm.org/doi/abs/10.1145/3570361.3592525
|title=FarfetchFusion: Towards Fully Mobile Live 3D Telepresence Platform
|speaker=Mengfan
|date=2024-09-29
}}
 
{{Resource:Previous_Seminars}}

===Instructions===

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

* Update the time and address information.
* Copy the code of the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date with |date= (see the worked example after the template formats below).
* Fill in each field of the latest seminar entry.
* Do not leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
* Format notes
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
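
For illustration, here is a minimal worked example of the conversion described above, using the Argus entry from the current Latest section (the field values are copied from that entry; replace them with those of the entry actually being archived):

{{Hist_seminar
|confname=TMC'24
|link=https://ieeexplore.ieee.org/abstract/document/10682605
|title=Argus: Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras
|speaker=Bairong
|date=2024-09-29
}}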