Resource:Seminar

{{SemNote
|time='''2025-12-05 10:30'''
|addr=4th Research Building A518
|note=Useful links: [[Resource:Reading_List|📚 Reading list]]; [[Resource:Seminar_schedules|📆 Schedules]]; [[Resource:Previous_Seminars|🧐 Previous seminars]].
}}


===Latest===
{{Latest_seminar
|abstract = Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks. When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code with the natural language or other structured intermediate steps. However, such output is not suitable for code translation or generation tasks since the standard CoT has different logical structures and forms of expression with the code. In this work, we introduce the universal code (UniCode) as the intermediate representation. It is a description of algorithm steps using a mix of conventions of programming languages, such as assignment operator, conditional operator, and loop. Hence, we collect an instruction dataset UniCoder-Instruct to train our model UniCoder on multi-task learning objectives. UniCoder-Instruct comprises natural-language questions, code solutions, and the corresponding universal code. The alignment between the intermediate universal code representation and the final code solution significantly improves the quality of the generated code. The experimental results demonstrate that UniCoder with the universal code significantly outperforms the previous prompting methods by a large margin, showcasing the effectiveness of the structural clues in pseudo-code.
|confname =ACL'24
|link = https://arxiv.org/abs/2406.16441
|title= UniCoder: Scaling Code Large Language Model via Universal Code
|speaker=Bairong Liu
|date=2025-12-05
}}
{{Latest_seminar
|abstract =LoRaWANs are envisioned to connect billions of IoT devices through thousands of physically overlapping yet logically orthogonal channels (termed logical channels). These logical channels hold significant potential for enabling highly concurrent scalable IoT connectivity. Large-scale deployments however face strong interference between logical channels. This practical issue has been largely overlooked by existing works but becomes increasingly prominent as LoRaWAN scales up. To address this issue, we introduce Canas, an innovative gateway design that is poised to orthogonalize the logical channels by eliminating mutual interference. To this end, Canas develops a series of novel solutions to accurately extract the meta-information of individual ultra-weak LoRa signals from the received overlapping channels. The meta-information is then leveraged to accurately reconstruct and subtract the LoRa signals over thousands of logical channels iteratively. Real-world evaluations demonstrate that Canas can enhance concurrent transmissions across overlapping logical channels by 2.3× compared to the best known related works.
|confname =TMC'25
|link = https://ieeexplore.ieee.org/abstract/document/11160677
|title= Resolving Inter-Logical Channel Interference for Large-scale LoRa Deployments
|speaker=Mengyu
|date=2025-12-05
}}
 
 
 
===History===
 
{{Resource:Previous_Seminars}}

Instructions

Please use the Latest_seminar and Hist_seminar templates to update the information on this page.

    • Update the time and address information.
    • Copy the code from the current Latest seminar section to this page.
    • Change {{Latest_seminar... to {{Hist_seminar... and add the corresponding date field with |date= (a filled-in example follows the format blocks below).
    • Fill in every field of the latest seminar entry.
    • Never leave the link field empty; if there is no link, use this page's address: https://mobinets.org/index.php?title=Resource:Seminar
  • Format reference
    • Latest_seminar:

{{Latest_seminar
|confname=
|link=
|title=
|speaker=
}}

    • Hist_seminar

{{Hist_seminar
|confname=
|link=
|title=
|speaker=
|date=
}}
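
As a worked example, the first entry currently listed under Latest above would be moved into the history like this (every field value is copied from that entry, and |date= is its seminar date):

{{Hist_seminar
|confname=ACL'24
|link=https://arxiv.org/abs/2406.16441
|title=UniCoder: Scaling Code Large Language Model via Universal Code
|speaker=Bairong Liu
|date=2025-12-05
}}

If the Hist_seminar template also accepts an |abstract field (the format block above omits it), the abstract from the Latest entry can be carried over unchanged.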