{{SemNote
|time='''2025-12-12 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract = Code translation is a crucial activity in the software development and maintenance process, and researchers have recently begun to focus on using pre-trained large language models (LLMs) for code translation. However, existing LLMs only learn the contextual semantics of code during pre-training, neglecting executability information closely related to the execution state of the code, which results in unguaranteed code executability and unreliable automated code translation. To address this issue, we propose ExeCoder, an LLM specifically designed for code translation, aimed at utilizing executability representations such as functional semantics, syntax structures, and variable dependencies to enhance the capabilities of LLMs in code translation. To evaluate the effectiveness of ExeCoder, we manually enhanced the widely used benchmark TransCoder-test, resulting in a benchmark called TransCoder-test-X that serves LLMs. Evaluation of TransCoder-test-X indicates that ExeCoder achieves state-of-the-art performance in code translation, surpassing existing open-source code LLMs by over 10.88% to 38.78% and over 27.44% to 42.97% on two metrics, and even outperforms the renowned closed-source LLM GPT-4o.
|confname = EMNLP'25
|link = https://arxiv.org/abs/2501.18460
|title = ExeCoder: Empowering Large Language Models with Executability Representation for Code Translation
|speaker = Youwei Ran
|date = 2025-12-12
}}
{{Latest_seminar
|abstract = Imitation learning from human demonstrations has shown impressive performance in robotics. However, most results focus on table-top manipulation, lacking the mobility and dexterity necessary for generally useful tasks. In this work, we develop a system for imitating mobile manipulation tasks that are bimanual and require whole-body control. We first present Mobile ALOHA, a low-cost and whole-body teleoperation system for data collection. It augments the ALOHA system with a mobile base, and a whole-body teleoperation interface. Using data collected with Mobile ALOHA, we then perform supervised behavior cloning and find that co-training with existing static ALOHA datasets boosts performance on mobile manipulation tasks. With 50 demonstrations for each task, co-training can increase success rates by up to 90%, allowing Mobile ALOHA to autonomously complete complex mobile manipulation tasks such as sauteing and serving a piece of shrimp, opening a two-door wall cabinet to store heavy cooking pots, calling and entering an elevator, and lightly rinsing a used pan using a kitchen faucet. We will open-source all the hardware and software implementations upon publication.
|confname = CoRL'24
|link = https://openreview.net/forum?id=FO6tePGRZj
|title = Mobile ALOHA: Learning Bimanual Mobile Manipulation using Low-Cost Whole-Body Teleoperation
|speaker = Yi Zhou
|date = 2025-12-12
}}
 
=== History ===
 
{{Resource:Previous_Seminars}}

=== Instructions ===

Please update this page using the Latest_seminar and Hist_seminar templates.

* Update the time and address information.
* Copy the code of the current latest seminar section to this page.
* Change {{Latest_seminar... to {{Hist_seminar..., and add the corresponding date field |date=.
* Fill in every field of the latest seminar.
* Never leave the link field empty; if there is no link, use this page's address instead: https://mobinets.org/index.php?title=Resource:Seminar
* Template formats
** Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}

** Hist_seminar:

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
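
For example, following the steps above, archiving this week's ExeCoder seminar would mean replacing its {{Latest_seminar...}} entry with a {{Hist_seminar...}} entry carrying the same field values (all taken from the entry above):

{{Hist_seminar
|confname=EMNLP'25
|link=https://arxiv.org/abs/2501.18460
|title=ExeCoder: Empowering Large Language Models with Executability Representation for Code Translation
|speaker=Youwei Ran
|date=2025-12-12
}}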