Resource:Seminar
{{SemNote
|time='''2025-12-12 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}

===Latest===
{{Latest_seminar
|abstract=Code translation is a crucial activity in the software development and maintenance process, and researchers have recently begun to focus on using pre-trained large language models (LLMs) for code translation. However, existing LLMs only learn the contextual semantics of code during pre-training, neglecting executability information closely related to the execution state of the code, which results in unguaranteed code executability and unreliable automated code translation. To address this issue, we propose ExeCoder, an LLM specifically designed for code translation, aimed at utilizing executability representations such as functional semantics, syntax structures, and variable dependencies to enhance the capabilities of LLMs in code translation. To evaluate the effectiveness of ExeCoder, we manually enhanced the widely used benchmark TransCoder-test, resulting in a benchmark called TransCoder-test-X that serves LLMs. Evaluation of TransCoder-test-X indicates that ExeCoder achieves state-of-the-art performance in code translation, surpassing existing open-source code LLMs by over 10.88% to 38.78% and over 27.44% to 42.97% on two metrics, and even outperforms the renowned closed-source LLM GPT-4o.
|confname=EMNLP'25
|link=https://arxiv.org/abs/2501.18460
|title=ExeCoder: Empowering Large Language Models with Executability Representation for Code Translation
|speaker=Youwei Ran
|date=2025-12-12
}}
{{Latest_seminar
|abstract=Imitation learning from human demonstrations has shown impressive performance in robotics. However, most results focus on table-top manipulation, lacking the mobility and dexterity necessary for generally useful tasks. In this work, we develop a system for imitating mobile manipulation tasks that are bimanual and require whole-body control. We first present Mobile ALOHA, a low-cost and whole-body teleoperation system for data collection. It augments the ALOHA system with a mobile base, and a whole-body teleoperation interface. Using data collected with Mobile ALOHA, we then perform supervised behavior cloning and find that co-training with existing static ALOHA datasets boosts performance on mobile manipulation tasks. With 50 demonstrations for each task, co-training can increase success rates by up to 90%, allowing Mobile ALOHA to autonomously complete complex mobile manipulation tasks such as sauteing and serving a piece of shrimp, opening a two-door wall cabinet to store heavy cooking pots, calling and entering an elevator, and lightly rinsing a used pan using a kitchen faucet. We will open-source all the hardware and software implementations upon publication.
|confname=CoRL'24
|link=https://openreview.net/forum?id=FO6tePGRZj
|title=Mobile ALOHA: Learning Bimanual Mobile Manipulation using Low-Cost Whole-Body Teleoperation
|speaker=Yi Zhou
|date=2025-12-12
}}
{{Resource:Previous_Seminars}}

History

2024

2023

2022

2021

2020

  • [Topic] The path planning algorithm for multiple mobile edge servers in EdgeGO, Rong Cong, 2020-11-18

2019

2018

2017

Instructions

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

    • Update the time and address information.
    • Copy the code of the current latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date information |date= (a worked example is given after the format reference below).
    • Fill in each field of the latest seminar.
    • The link field must not be left empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|abstract=
|confname=
|link=
|title=
|speaker=
|date=
}}
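
For example, to archive the ExeCoder entry above once its seminar has taken place, the conversion described in the steps would turn its {{Latest_seminar ...}} code into the following {{Hist_seminar ...}} code (an illustrative sketch; the |abstract= field is omitted here for brevity and should be copied over unchanged, and the |date= value is assumed to be the seminar date already recorded in the entry):

{{Hist_seminar
|confname=EMNLP'25
|link=https://arxiv.org/abs/2501.18460
|title=ExeCoder: Empowering Large Language Models with Executability Representation for Code Translation
|speaker=Youwei Ran
|date=2025-12-12
}}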