Ascendra: Dynamic Request Prioritization for Efficient LLM Serving
Paper | Code not available
Authors: Azam Ikram, Xiang Li, Sameh Elnikety, Saurabh Bagchi
Venue: arXiv
Date: April 25, 2025
The rapid advancement of Large Language Models (LLMs) has driven the need for more efficient serving strategies. In this context, efficiency refers to the proportion of requests that meet their Service Level Objectives (SLOs), particularly for Time To First Token (TTFT) and Time Between Tokens (TBT). However, existing systems often prioritize one metric at the cost of the other. We present Ascendra, an LLM serving system designed to meet both TTFT and TBT SLOs simultaneously. The core insight behind Ascendra is that a request’s urgency evolves as it approaches its deadline. To leverage this, Ascendra partitions GPU resources into two types of instances: low-priority and high-priority. Low-priority instances maximize throughput by processing requests out of arrival order, but at the risk of request starvation. To address this, Ascendra employs a performance model to predict requests at risk of missing their SLOs and proactively offloads them to high-priority instances. High-priority instances are optimized for low-latency execution and handle urgent requests nearing their deadlines. This partitioned architecture enables Ascendra to effectively balance high throughput and low latency. Extensive evaluation shows that Ascendra improves system throughput by up to 1.7x compared to vLLM and Sarathi-Serve while meeting both TTFT and TBT SLOs.
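To make the escalation idea concrete, here is a minimal Python sketch of a deadline-aware offloading check: a hypothetical performance model predicts TTFT from queue length and prompt size, and a request is moved to a high-priority instance when its predicted slack falls below a threshold. The `Request` fields, `predict_ttft` model, and `slack_threshold` are illustrative assumptions, not Ascendra's actual implementation.

```python
# Hedged sketch of deadline-aware offloading (illustrative, not Ascendra's code).
import time
from dataclasses import dataclass

@dataclass
class Request:
    arrival: float        # arrival timestamp (seconds)
    ttft_slo: float       # TTFT SLO budget (seconds)
    prompt_tokens: int

def predict_ttft(req: Request, queue_len: int) -> float:
    """Stand-in performance model: estimate TTFT from prompt size and queue depth."""
    return 0.002 * req.prompt_tokens + 0.05 * queue_len

def should_offload(req: Request, queue_len: int, now: float,
                   slack_threshold: float = 0.1) -> bool:
    """Escalate to a high-priority instance when the predicted completion
    leaves less slack than the threshold before the TTFT deadline."""
    deadline = req.arrival + req.ttft_slo
    predicted_finish = now + predict_ttft(req, queue_len)
    return (deadline - predicted_finish) < slack_threshold

# Example: a request that has already waited 0.8 s of a 1 s TTFT budget.
req = Request(arrival=time.time() - 0.8, ttft_slo=1.0, prompt_tokens=512)
print(should_offload(req, queue_len=4, now=time.time()))  # True -> offload
```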
HopTrack: A Real-time Multi-Object Tracking System for Embedded Devices
Authors: Xiang Li, Cheng Chen, Yuan-Yao Lou, Mustafa Abdallah, Kwang Taik Kim, Saurabh Bagchi
Venue: Under review for IEEE Journal on Selected Areas in Communications
Date: November 01, 2024
Multi-Object Tracking (MOT) poses significant challenges in computer vision. Despite its wide application in robotics, autonomous driving, and smart manufacturing, there is limited literature addressing the specific challenges of running MOT on embedded devices. State-of-the-art MOT trackers designed for high-end GPUs often experience low processing rates (<11 fps) when deployed on embedded devices. Existing MOT frameworks for embedded devices have proposed strategies such as fusing the detector model with the feature embedding model to reduce inference latency, or combining different trackers to improve tracking accuracy, but they tend to trade one objective off against the other. This paper introduces HopTrack, a real-time multi-object tracking system tailored for embedded devices. Our system employs a novel discretized static and dynamic matching approach along with an innovative content-aware dynamic sampling technique to enhance tracking accuracy while meeting the real-time requirement. Compared with Byte (Embed), the best baseline adapted from high-end GPUs, and MobileNet-JDE, the best existing baseline for embedded devices, HopTrack achieves a processing speed of up to 39.29 fps on the NVIDIA AGX Xavier with a multi-object tracking accuracy (MOTA) of up to 63.12% on the MOT16 benchmark, outperforming the two baselines by 2.15% and 4.82%, respectively. The accuracy improvement is coupled with reductions in energy consumption (20.8%), power (5%), and memory usage (8%), all crucial resources on embedded devices. HopTrack is also detector-agnostic, allowing plug-and-play use of different detectors.
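As a rough illustration of content-aware dynamic sampling, the sketch below adapts how often the detector runs based on a cheap frame-difference measure of scene dynamics. The change metric, thresholds, and detection intervals are assumptions for demonstration and do not reproduce HopTrack's actual technique.

```python
# Illustrative sketch of content-aware dynamic sampling (not HopTrack's code).
import numpy as np

def frame_change(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute pixel difference as a cheap proxy for scene dynamics."""
    return float(np.mean(np.abs(curr.astype(np.int16) - prev.astype(np.int16))))

def next_detection_interval(change: float,
                            low: float = 2.0, high: float = 10.0) -> int:
    """Run the detector more often when the scene changes quickly, and rely
    on lightweight association-only tracking when it is mostly static."""
    if change > high:
        return 1      # detect on every frame
    if change > low:
        return 3      # detect on every 3rd frame
    return 6          # mostly static: detect on every 6th frame

prev = np.zeros((360, 640), dtype=np.uint8)
curr = np.full((360, 640), 12, dtype=np.uint8)
print(next_detection_interval(frame_change(prev, curr)))  # -> 1
```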
Dynamic DAG-Application Scheduling for Multi-Tier Edge Computing in Heterogeneous Networks
Paper | Code not available
Authors: Xiang Li, Mustafa Abdallah, Yuan-Yao Lou, Mung Chiang, Kwang Taik Kim, Saurabh Bagchi
Venue: arXiv
Date: September 27, 2024
Edge computing is deemed a promising technique for executing latency-sensitive applications by offloading computation-intensive tasks to edge servers. Extensive research has been conducted on end-device to edge-server task offloading for several goals, including latency minimization, energy optimization, and resource optimization. However, few of these works consider personally owned mobile computing devices (smartphones, tablets, and laptops) as edge devices. In this paper, we propose a novel multi-tier edge computing framework, which we refer to as M-TEC, that aims to optimize latency, reduce the probability of failure, and optimize cost while accounting for the sporadic failure of personally owned devices and changing network conditions. We conduct experiments with a real testbed and a real commercial CBRS 4G network, and the results indicate that M-TEC reduces the end-to-end latency of applications by at least 8% compared to the best baseline under a variety of network conditions, while providing reliable performance at an affordable cost.
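The placement idea can be sketched as a weighted multi-objective score over candidate devices, trading off predicted latency, failure probability, and cost. The `Device` fields and weights below are illustrative assumptions rather than M-TEC's actual scheduling algorithm.

```python
# Hedged sketch of multi-objective placement scoring (not M-TEC's algorithm).
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    est_latency: float   # predicted task latency (s)
    fail_prob: float     # estimated failure probability in [0, 1]
    cost: float          # monetary cost of running the task

def score(d: Device, w_lat: float = 0.5, w_fail: float = 0.3,
          w_cost: float = 0.2) -> float:
    """Lower is better: weighted sum of the three objectives the paper targets."""
    return w_lat * d.est_latency + w_fail * d.fail_prob + w_cost * d.cost

def place(candidates: list[Device]) -> Device:
    """Pick the device with the best combined score."""
    return min(candidates, key=score)

devices = [
    Device("edge-server", est_latency=0.12, fail_prob=0.01, cost=0.40),
    Device("laptop",      est_latency=0.20, fail_prob=0.10, cost=0.05),
    Device("phone",       est_latency=0.35, fail_prob=0.25, cost=0.02),
]
print(place(devices).name)
```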
DAG-based Task Orchestration for Edge Computing
Authors: Xiang Li, Mustafa Abdallah, Shikhar Suryavansh, Mung Chiang, Kwang Taik Kim, Saurabh Bagchi
Venue: 41st International Symposium on Reliable Distributed Systems (Acceptance rate: 24/104 = 23.1%)
Date: June 01, 2022
Edge computing promises to exploit computation resources closer to users to help run latency-sensitive applications such as augmented reality and video analytics. However, one key missing piece has been how to incorporate personally owned, unmanaged devices into a usable edge computing system. The primary challenges arise from the heterogeneity, lack of interference management, and unpredictable availability of such devices. In this paper, we propose IBDASH, an orchestration framework that places application tasks on an edge system comprising a mix of commercial and personal edge devices. IBDASH targets reducing both the end-to-end execution latency and the probability of failure for applications whose task dependencies are captured by directed acyclic graphs (DAGs). IBDASH takes the memory constraints of each edge device and the network bandwidth into consideration. To assess the effectiveness of IBDASH, we run real application tasks on real edge devices with widely varying capabilities and feed these measurements into a simulator that runs IBDASH at scale. Compared to three state-of-the-art edge orchestration schemes and two intuitive baselines, IBDASH reduces the end-to-end latency and the probability of failure by 14% and 41% on average, respectively. The main takeaway from our work is that it is feasible to combine personal and commercial devices into a usable edge computing platform, one that delivers low and predictable latency and high availability.
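A toy version of DAG-aware placement under memory constraints is sketched below: tasks are visited in topological order and each is assigned to the feasible device that minimizes a combined latency-and-failure objective. The DAG, device parameters, and objective are made-up assumptions for illustration, not IBDASH itself.

```python
# Simplified sketch of greedy DAG task placement (illustrative, not IBDASH).
from graphlib import TopologicalSorter

tasks = {"detect": set(), "track": {"detect"}, "render": {"track"}}  # node -> predecessors
task_mem = {"detect": 2.0, "track": 1.0, "render": 0.5}              # GB required

devices = {
    "edge-server": {"speed": 1.0, "fail": 0.01, "mem": 8.0},
    "laptop":      {"speed": 0.6, "fail": 0.05, "mem": 4.0},
    "phone":       {"speed": 0.3, "fail": 0.20, "mem": 2.0},
}

def objective(dev: dict, base_time: float = 1.0, fail_weight: float = 2.0) -> float:
    """Jointly penalize execution time and failure probability (lower is better)."""
    return base_time / dev["speed"] + fail_weight * dev["fail"]

placement = {}
for task in TopologicalSorter(tasks).static_order():
    # Keep only devices with enough memory for this task, then pick the best one.
    feasible = [n for n, d in devices.items() if d["mem"] >= task_mem[task]]
    placement[task] = min(feasible, key=lambda n: objective(devices[n]))

print(placement)
```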