
EdgeShard: Efficient LLM Inference via Collaborative Edge Computing

Mingjin Zhang, Xiaoming Shen, Jiannong Cao, Zeyang Cui, Shan Jiang
IEEE Internet of Things Journal | 2025
EdgeShard partitions large language models into shards distributed across collaborative edge devices and cloud servers, enabling efficient LLM inference without accuracy loss by jointly optimizing device selection and model partitioning via dynamic programming.

Problem Statement

Deploying LLMs on IoT/edge systems is critical for low-latency, privacy-preserving applications, but cloud-only deployment incurs high latency and bandwidth costs. Existing solutions either offload to the cloud (still latency-heavy) or compress models via quantization (causing accuracy degradation), leaving a gap for efficient, lossless edge-native LLM inference.

Key Novelty

  • First system to deploy LLMs in a collaborative edge computing environment in which edge devices and cloud servers share resources, avoiding heavy reliance on the remote cloud while preserving full model accuracy
  • Adaptive joint device selection and model partition formulation that simultaneously accounts for device heterogeneity, bandwidth constraints, and model layer complexity (one way to write this objective is sketched after this list)
  • Efficient dynamic programming algorithm that optimizes both inference latency and throughput across distributed heterogeneous edge devices
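
Stated loosely, the joint problem assigns every layer to a device so that per-layer compute time plus inter-device transfer time is minimized under memory constraints. The notation below is ours and only approximates the paper's formulation; the paper additionally optimizes throughput for pipelined execution, which this latency-only sketch omits.

```latex
% Latency-oriented objective in our own notation (illustrative, not the paper's
% exact formulation). x_{i,j} = 1 iff layer i runs on device j; t^{comp}_{i,j}
% is layer i's compute time on device j; d_i is the size of layer i's output
% activations; B_{j,k} is the bandwidth between devices j and k; m_i is layer
% i's memory footprint and M_j device j's memory budget.
\min_{x \in \{0,1\}^{N \times M}} \;
  \sum_{i=1}^{N} \sum_{j=1}^{M} x_{i,j}\, t^{\mathrm{comp}}_{i,j}
  \;+\; \sum_{i=1}^{N-1} \sum_{j \neq k} x_{i,j}\, x_{i+1,k}\, \frac{d_i}{B_{j,k}}
\quad \text{s.t.} \quad
  \sum_{j=1}^{M} x_{i,j} = 1 \;\; \forall i, \qquad
  \sum_{i=1}^{N} x_{i,j}\, m_i \le M_j \;\; \forall j
```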

Evaluation Highlights

  • Up to 50% reduction in inference latency compared to state-of-the-art baselines on real-world testbed using Llama2 series models
  • Up to 2x throughput improvement over state-of-the-art methods, validated on physical hardware with Llama2 series models

Breakthrough Assessment

6/10. EdgeShard is a solid systems contribution that meaningfully advances practical LLM deployment on edge infrastructure, but the core techniques (model sharding, dynamic programming for partitioning) extend known methods rather than introduce fundamental algorithmic innovations. Its primary value lies in the integrated system design and the demonstrated real-world gains.

Methodology

  1. Profile the LLM (e.g., Llama2) layer-by-layer for computation cost and inter-layer communication overhead, and characterize each available edge/cloud device for compute capacity and inter-device bandwidth
  2. Formulate the joint device selection and model partition problem as an optimization objective minimizing end-to-end inference latency and maximizing throughput under device and bandwidth constraints
  3. Solve the formulated problem using an efficient dynamic programming algorithm that determines the optimal shard boundaries and device assignments, then deploy shards on the selected distributed devices for collaborative pipeline inference
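
A minimal sketch of such a dynamic program is shown below. It reflects our own simplifying assumptions (single request, no pipelining, memory tracked only per contiguous run of layers on a device) and is not the paper's exact algorithm; the paper also optimizes throughput for pipelined execution, which this single-request sketch does not capture.

```python
# Latency-oriented DP over (layer, device) states: for each layer we pick a
# device, paying that device's compute time plus a transfer cost whenever
# consecutive layers land on different devices.

def plan_shards(comp_time, act_size, bandwidth, mem_cap, layer_mem):
    """
    comp_time[i][j] : seconds to run layer i on device j
    act_size[i]     : bytes of activations emitted by layer i
    bandwidth[j][k] : bytes/second on the link from device j to device k
    mem_cap[j]      : memory budget of device j, in bytes
    layer_mem[i]    : memory footprint of layer i, in bytes
    Returns (total_latency, assignment) with assignment[i] = device for layer i.
    """
    n_layers, n_devices = len(comp_time), len(comp_time[0])
    INF = float("inf")

    # best[j] = (latency, per-layer assignment, memory used by the current
    # contiguous run of layers on device j)
    best = [(0.0, [], 0)] * n_devices

    for i in range(n_layers):
        new_best = [(INF, None, 0)] * n_devices
        for j in range(n_devices):            # candidate device for layer i
            for k in range(n_devices):        # device that ran layer i - 1
                prev_lat, prev_assign, prev_mem = best[k]
                if prev_assign is None:
                    continue
                # memory accumulates only while layers keep landing on device j
                mem = prev_mem + layer_mem[i] if (i > 0 and k == j) else layer_mem[i]
                if mem > mem_cap[j]:
                    continue
                comm = 0.0 if (i == 0 or k == j) else act_size[i - 1] / bandwidth[k][j]
                lat = prev_lat + comm + comp_time[i][j]
                if lat < new_best[j][0]:
                    new_best[j] = (lat, prev_assign + [j], mem)
        best = new_best

    candidates = [(lat, assign) for lat, assign, _ in best if assign is not None]
    if not candidates:
        raise ValueError("no feasible placement under the given memory budgets")
    return min(candidates)
```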

System Components

EdgeShard Partitioner

Splits the LLM into contiguous layer groups (shards) based on computation and communication cost profiles, producing an assignment of shards to specific devices
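
As a small illustration of the shard structure (our own helper, not code from the paper), a per-layer device assignment such as the one returned by the DP sketch above collapses into contiguous shards:

```python
# Illustrative helper (ours): collapse a per-layer device assignment into
# contiguous (device, first_layer, last_layer) shards.
def to_shards(assignment):
    shards = []
    for layer, device in enumerate(assignment):
        if shards and shards[-1][0] == device:
            shards[-1] = (device, shards[-1][1], layer)   # extend the open shard
        else:
            shards.append((device, layer, layer))         # start a new shard
    return shards

# Example: layers 0-3 on device 0, layers 4-7 on device 2
# to_shards([0, 0, 0, 0, 2, 2, 2, 2]) -> [(0, 0, 3), (2, 4, 7)]
```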

Adaptive Device Selector

Jointly selects which edge devices and cloud servers participate in inference, accounting for heterogeneous hardware capabilities and network bandwidth between nodes

Dynamic Programming Optimizer

Efficiently solves the joint partitioning and device selection problem to find the globally optimal configuration minimizing latency and maximizing throughput

Collaborative Inference Pipeline

Orchestrates sequential shard execution across distributed devices, managing data transfer between shards to sustain pipelined token generation
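
A bare-bones sketch of this hand-off pattern, with stand-in computation in place of real transformer layers and real network transfers (everything here is our illustration, not the paper's runtime):

```python
class Shard:
    """One contiguous group of transformer layers hosted on a single device."""
    def __init__(self, device_id, layers):
        self.device_id = device_id
        self.layers = layers

    def forward(self, hidden):
        # Stand-in computation: a real shard runs its layer range on its own
        # device and streams the resulting hidden states to the next shard
        # over the network.
        for _ in self.layers:
            hidden = [h + 1.0 for h in hidden]
        return hidden


def generate(shards, prompt_ids, max_new_tokens):
    """Sequential shard-by-shard execution of one token-generation loop."""
    tokens = list(prompt_ids)
    for _ in range(max_new_tokens):
        hidden = [float(t) for t in tokens]      # stand-in for the embedding lookup
        for shard in shards:                     # hand hidden states down the pipeline
            hidden = shard.forward(hidden)
        tokens.append(int(sum(hidden)) % 32000)  # stand-in for sampling the next token
    return tokens
```

With several requests or micro-batches in flight, different shards can work on different tokens at the same time; that pipelined overlap is what sustains the reported throughput gains.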

Results

Metric                        | State-of-the-Art Baseline                       | EdgeShard        | Delta
Inference Latency (Llama2)    | Baseline latency                                | Up to 50% lower  | -50%
Inference Throughput (Llama2) | Baseline throughput                             | Up to 2x higher  | +100%
Model Accuracy                | Degraded (quantization) or full (cloud offload) | No accuracy loss | 0% degradation

Key Takeaways

  • Model sharding across heterogeneous edge devices is a viable alternative to quantization for resource-constrained LLM deployment, achieving latency/throughput gains without sacrificing accuracy — relevant for privacy-sensitive IoT applications
  • Joint optimization of device selection and partition boundaries is critical; greedy or decoupled approaches likely leave significant performance on the table, motivating the use of DP-based global solvers even in online settings
  • Real-world testbed validation on Llama2 models provides credible evidence that collaborative edge inference is practically deployable today, making EdgeShard's design patterns (profiling → formulation → DP solve → deploy) a reusable blueprint for edge ML systems

Abstract

Large language models (LLMs) have shown great success in content generation and intelligent decision making for IoT systems. Traditionally, LLMs are deployed on the cloud, incurring prolonged latency, high bandwidth costs, and privacy concerns. More recently, edge computing has been considered promising in addressing such concerns because edge devices are closer to data sources. However, edge devices are constrained by limited resources and can hardly afford LLMs. Existing studies address this limitation by offloading heavy workloads from edge to cloud or compressing LLMs via model quantization. These methods either still rely heavily on the remote cloud or suffer substantial accuracy loss. This work is the first to deploy LLMs in a collaborative edge computing environment, in which edge devices and cloud servers share resources and collaborate to infer LLMs with high efficiency and no accuracy loss. We design EdgeShard, a novel approach that partitions a computation-intensive LLM into affordable shards and deploys them on distributed devices. The partition and distribution are nontrivial, considering device heterogeneity, bandwidth limitations, and model complexity. To this end, we formulate an adaptive joint device selection and model partition problem and design an efficient dynamic programming algorithm to optimize inference latency and throughput. Extensive experiments with the popular Llama2 series models on a real-world testbed show that EdgeShard achieves up to 50% latency reduction and 2x throughput improvement over the state-of-the-art.
