Resource:Paper Carnival 2025
Day 1
Session 1: Networked Systems
1. LLM Cache Optimization - Qinyong Li "9:00-9:40"
- [EuroSys'25] CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion
- [SIGCOMM'24] CacheGen: KV Cache Compression and Streaming for Fast Large Language Model Serving
2. Disaggregated OS - Haifeng "9:40-10:00"
- [NSDI'25] Beehive: A Scalable Disaggregated Memory Runtime Exploiting Asynchrony of Multithreaded Programs
Break "10:00-10:05"
Session 2: Video Analytics for AIoT
1. ML for VA - Xinyan "10:05-10:45"
- [ISCA'24] DACAPO: Accelerating Continuous Learning in Autonomous Systems for Video Analytics
- [NSDI'23] RECL: Responsive Resource-Efficient Continuous Learning for Video Analytics
- [MM'23] Edge-Assisted On-Device Model Update for Video Analytics in Adverse Environments
2. Image Offloading Revisited - Yi Zhou "10:45-11:05"
3. Vision Drones - Jiahao "11:05-11:25"
- [T-RO'25] FAPP: Fast and Adaptive Perception and Planning for UAVs in Dynamic Cluttered Environments
Break "11:25-14:00"
Session 3: Networking
1. Quantum Networks - Yaliang "14:00-14:40"
- [INFOCOM'22] E2E Fidelity Aware Routing and Purification for Throughput Maximization in Quantum Networks
- [JSAC'24] On Optimum Entanglement Purification Scheduling in Quantum Networks
- [INFOCOM'25] Link Configuration for Fidelity-Constrained Entanglement Routing in Quantum Networks
2. V2V Networks - Zhenguo Bi "14:40-15:00"
- [INFOCOM'25] RoCooper: Robust Cooperative Perception under Vehicle-to-Vehicle Communication Impairments
Break "15:00-15:05"
Session 4: LoRa
1. ML in LoRa Reception - Kai Chen "15:05-15:45"
- [ICNP'23] Hi2LoRa: Exploring Highly Dimensional and Highly Accurate Features to Push LoRaWAN Concurrency Limits with Low Implementation Cost
- [SenSys'24] Enhancing LoRa Reception with Generative Models: Channel-Aware Denoising of LoRaPHY Signals
- [ICNP'24] DeepDetangle: Deep Learning-Based Fusion of Chirp-Level and Packet-Level Features for LoRa Parallel Decoding
2. LoRa Link Performance - Mengyu "15:45-16:15"
- [ToN'24] RALoRa: Rateless-Enabled Link Adaptation for LoRa Networking
- [TMC'25] Enhancing Link Performance for Mobile LoRa Networks
Break "16:15-16:20"
Session 5: LLM Code Generation
1. Use of CodeGen - Youwei "16:20-16:40"
- [MobiCom'25] AutoDroid-V2: Boosting SLM-based GUI Agents via Code Generation
2. Code Translation - Bairong "16:40-17:00"
- [ICSE'25] InterTrans: Leveraging Transitive Intermediate Translations to Enhance LLM-based Code Translation
- [arXiv] AutoIOT: LLM-Driven Automated Natural Language Programming for AIoT Applications
Day 2
Session 6: Volumetric Video
1. Volumetric Video - Mengfan "9:00-9:20"
2. Volumetric Video - Jiyi "9:20-10:00"
- [TOG'23] 3D Gaussian Splatting for Real-Time Radiance Field Rendering
- [TOG'24] V3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians
Break "10:00-10:05"
Session 7: Edge Deployment and Inference
1. Edge LLM - Junzhe "10:05-10:25"
- [INFOCOM'25] TensAllo: Adaptive Deployment of LLMs on Resource-Constrained Heterogeneous Edge Devices
2. Inference Optimization - Ruizheng "10:25-10:45"
- [INFOCOM'25] DUNE: Distributed Inference in the User Plane
