Dataset Viewer
Auto-converted to Parquet
commit_hash: string
pr_url: string
pr_date: string
timeline_extracted_at: string
analysis_extracted_at: string
models: list
perf_command: string
has_serving: bool
has_latency: bool
has_throughput: bool
uses_lm_eval: bool
commit_subject: string
commit_message: string
commit_date: string
files_changed: list
stats: dict
diff_text: string
apis: list
affected_paths: list
repo: string
hardware: string
lm_eval_command: string
021f76e4f49861b2e9ea9ccff06a46d577e3c548
https://github.com/sgl-project/sglang/pull/6994
2025-06-11
2025-09-11 18:56:41
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct", "algoprog/fact-generation-llama-3.1-8b-instruct-lora" ]
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --disable-radix-cache --lora-paths lora=algoprog/fact-generation-llama-3.1-8b-instruct-lora
python3 -m sglang.bench_serving --backend sglang --num-prompt 480 --request-rate 8 --lora-name lora
true
false
false
true
[Perf] Refactor LoRAManager to eliminate stream syncs and redundant computations (#6994)
[Perf] Refactor LoRAManager to eliminate stream syncs and redundant computations (#6994)
2025-06-11T16:18:57-07:00
[ "python/sglang/srt/lora/lora_manager.py", "python/sglang/srt/lora/mem_pool.py" ]
{ "commit_year": 2025, "num_edited_lines": 122, "num_files": 2, "num_hunks": 6, "num_non_test_edited_lines": 122, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/lora/lora_manager.py b/python/sglang/srt/lora/lora_manager.py index 45050df53..9d0295808 100644 --- a/python/sglang/srt/lora/lora_manager.py +++ b/python/sglang/srt/lora/lora_manager.py @@ -81,7 +81,7 @@ class LoRAManager: seg_indptr=torch.zeros( self.max_bs_in_cuda_graph + 1, dtype=torch.int32 ), - max_len=0, + max_len=1, weight_indices=torch.zeros( self.max_bs_in_cuda_graph, dtype=torch.int32 ), @@ -89,6 +89,17 @@ class LoRAManager: scalings=torch.zeros(self.max_loras_per_batch, dtype=torch.float), ) + # Initialize seg_lens and seg_indptr for CUDA graph as they remain constant + # across batches. + self.cuda_graph_batch_info.seg_lens[: self.max_bs_in_cuda_graph].fill_(1) + torch.cumsum( + self.cuda_graph_batch_info.seg_lens[: self.max_bs_in_cuda_graph], + dim=0, + out=self.cuda_graph_batch_info.seg_indptr[ + 1 : self.max_bs_in_cuda_graph + 1 + ], + ) + def init_loras(self): # Config of each LoRA adapter self.configs: Dict[str, LoRAConfig] = {} @@ -159,6 +170,45 @@ class LoRAManager: # set up batch info shared by all lora modules bs = forward_batch.batch_size + def transfer_adapter_info( + weight_indices_out: torch.Tensor, + lora_ranks_out: torch.Tensor, + scalings_out: torch.Tensor, + ): + """ + Transfer adapter metadata (weight indices, LoRA rank, scalings) from host + to device (CUDA) asynchronously. + """ + weight_indices = [0] * len(forward_batch.lora_paths) + lora_ranks = [0] * self.max_loras_per_batch + scalings = [0] * self.max_loras_per_batch + for i, lora_path in enumerate(forward_batch.lora_paths): + weight_indices[i] = self.memory_pool.get_buffer_id(lora_path) + if lora_path is not None: + lora = self.loras[lora_path] + lora_ranks[weight_indices[i]] = lora.config.hf_config["r"] + scalings[weight_indices[i]] = lora.scaling + + # Use pinned memory to avoid synchronizations during host-to-device transfer + weight_indices_tensor = torch.tensor( + weight_indices, dtype=torch.int32, pin_memory=True, device="cpu" + ) + lora_ranks_tensor = torch.tensor( + lora_ranks, dtype=torch.int32, pin_memory=True, device="cpu" + ) + scalings_tensor = torch.tensor( + scalings, dtype=torch.float, pin_memory=True, device="cpu" + ) + + # Copy to device tensors asynchronously + weight_indices_out[:bs].copy_(weight_indices_tensor, non_blocking=True) + lora_ranks_out[: self.max_loras_per_batch].copy_( + lora_ranks_tensor, non_blocking=True + ) + scalings_out[: self.max_loras_per_batch].copy_( + scalings_tensor, non_blocking=True + ) + if ( hasattr(self, "max_bs_in_cuda_graph") and bs <= self.max_bs_in_cuda_graph @@ -166,51 +216,46 @@ class LoRAManager: ): # Do in-place updates when CUDA graph is enabled and the batch forward mode # could use CUDA graph. 
- self.cuda_graph_batch_info.bs = bs - self.cuda_graph_batch_info.seg_lens[:bs].fill_(1) - torch.cumsum( - self.cuda_graph_batch_info.seg_lens[:bs], - dim=0, - out=self.cuda_graph_batch_info.seg_indptr[1 : bs + 1], + + transfer_adapter_info( + self.cuda_graph_batch_info.weight_indices, + self.cuda_graph_batch_info.lora_ranks, + self.cuda_graph_batch_info.scalings, ) - self.cuda_graph_batch_info.max_len = 1 - for i, lora_path in enumerate(forward_batch.lora_paths): - self.cuda_graph_batch_info.weight_indices[i] = ( - self.memory_pool.get_buffer_id(lora_path) - ) - if lora_path is not None: - lora = self.loras[lora_path] - self.cuda_graph_batch_info.lora_ranks[ - self.cuda_graph_batch_info.weight_indices[i] - ] = lora.config.hf_config["r"] - self.cuda_graph_batch_info.scalings[ - self.cuda_graph_batch_info.weight_indices[i] - ] = lora.scaling + self.cuda_graph_batch_info.bs = bs + self.cuda_graph_batch_info.max_len = 1 batch_info = self.cuda_graph_batch_info else: + weight_indices = torch.empty((bs,), dtype=torch.int32, device=self.device) + lora_ranks = torch.zeros( + (self.max_loras_per_batch,), dtype=torch.int64, device=self.device + ) + scalings = torch.zeros( + (self.max_loras_per_batch,), dtype=torch.float, device=self.device + ) + transfer_adapter_info( + weight_indices, + lora_ranks, + scalings, + ) + seg_lens = ( forward_batch.extend_seq_lens if forward_batch.forward_mode.is_extend() else torch.ones(bs, device=self.device) ) + + max_len = ( + # Calculate max_len from the CPU copy to avoid D2H transfer. + max(forward_batch.extend_seq_lens_cpu) + if forward_batch.forward_mode.is_extend() + else 1 + ) + seg_indptr = torch.zeros((bs + 1,), dtype=torch.int32, device=self.device) seg_indptr[1:] = torch.cumsum(seg_lens, dim=0) - max_len = int(torch.max(seg_lens)) - weight_indices = torch.empty((bs,), dtype=torch.int64, device=self.device) - lora_ranks = torch.zeros( - (self.max_loras_per_batch,), dtype=torch.int64, device="cuda" - ) - scalings = torch.zeros( - (self.max_loras_per_batch,), dtype=torch.float, device="cuda" - ) - for i, lora_path in enumerate(forward_batch.lora_paths): - weight_indices[i] = self.memory_pool.get_buffer_id(lora_path) - if lora_path is not None: - lora = self.loras[lora_path] - lora_ranks[weight_indices[i]] = lora.config.hf_config["r"] - scalings[weight_indices[i]] = lora.scaling batch_info = LoRABatchInfo( bs=bs, seg_lens=seg_lens, diff --git a/python/sglang/srt/lora/mem_pool.py b/python/sglang/srt/lora/mem_pool.py index 8b8d21332..7e69c4aab 100644 --- a/python/sglang/srt/lora/mem_pool.py +++ b/python/sglang/srt/lora/mem_pool.py @@ -132,12 +132,13 @@ class LoRAMemoryPool: for buffer_id in range(self.max_loras_per_batch): # Prioritize empty slots if self.buffer_id_to_uid[buffer_id] == "": - return buffer_id, "" + return buffer_id for buffer_id in range(self.max_loras_per_batch): # Evict unneeded lora if self.buffer_id_to_uid[buffer_id] not in cur_uids: - return buffer_id, self.buffer_id_to_uid[buffer_id] + self.uid_to_buffer_id.pop(self.buffer_id_to_uid[buffer_id]) + return buffer_id raise ValueError( "No available buffer slots found. Please ensure the number of active loras is less than max_loras_per_batch." @@ -145,9 +146,7 @@ class LoRAMemoryPool: for uid in cur_uids: if uid not in self.uid_to_buffer_id: - buffer_id, evicted_lora_uid = get_available_buffer_slot() - if evicted_lora_uid != "": - self.uid_to_buffer_id.pop(evicted_lora_uid) + buffer_id = get_available_buffer_slot() self.load_lora_weight_to_buffer( uid, buffer_id, lora_adapters.get(uid, None) )
[ "LoRAManager.init_cuda_graph_batch_info", "LoRAManager.prepare_lora_batch", "LoRAMemoryPool.prepare_lora_batch" ]
[ "python/sglang/srt/lora/lora_manager.py", "python/sglang/srt/lora/mem_pool.py", "python/sglang/api.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
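The diff in this row removes per-batch stream syncs by staging the small adapter metadata lists in pinned host memory and copying them to pre-allocated device tensors with non_blocking copies. Below is a minimal standalone sketch of that pattern, not the SGLang API itself; the function name and tensor sizes are illustrative and a CUDA device is assumed.

```python
import torch

def upload_adapter_metadata(weight_indices, lora_ranks, scalings, device="cuda"):
    """Stage small metadata lists in pinned (page-locked) host memory and copy them
    to the GPU asynchronously, so the host thread never blocks on the copy."""
    # Pinned host tensors are required for truly asynchronous H2D transfers.
    weight_indices_cpu = torch.tensor(weight_indices, dtype=torch.int32, pin_memory=True)
    lora_ranks_cpu = torch.tensor(lora_ranks, dtype=torch.int32, pin_memory=True)
    scalings_cpu = torch.tensor(scalings, dtype=torch.float32, pin_memory=True)

    # Destination device tensors (in the real code these live in a persistent batch-info struct).
    weight_indices_gpu = torch.empty(len(weight_indices), dtype=torch.int32, device=device)
    lora_ranks_gpu = torch.empty(len(lora_ranks), dtype=torch.int32, device=device)
    scalings_gpu = torch.empty(len(scalings), dtype=torch.float32, device=device)

    # non_blocking=True lets the copies overlap with host-side work.
    weight_indices_gpu.copy_(weight_indices_cpu, non_blocking=True)
    lora_ranks_gpu.copy_(lora_ranks_cpu, non_blocking=True)
    scalings_gpu.copy_(scalings_cpu, non_blocking=True)
    return weight_indices_gpu, lora_ranks_gpu, scalings_gpu
```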
132dad874d2e44592d03a112e4b7d63b153e8346
https://github.com/sgl-project/sglang/pull/6922
2025-06-07
2025-09-11 18:56:53
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
[PD] Optimize transfer queue forward logic for dummy rank (#6922)
[PD] Optimize transfer queue forward logic for dummy rank (#6922)
2025-06-06T18:26:14-07:00
[ "python/sglang/srt/disaggregation/mooncake/conn.py" ]
{ "commit_year": 2025, "num_edited_lines": 7, "num_files": 1, "num_hunks": 2, "num_non_test_edited_lines": 7, "num_non_test_files": 1, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/disaggregation/mooncake/conn.py b/python/sglang/srt/disaggregation/mooncake/conn.py index 824f76709..eb8ad44e2 100644 --- a/python/sglang/srt/disaggregation/mooncake/conn.py +++ b/python/sglang/srt/disaggregation/mooncake/conn.py @@ -562,6 +562,12 @@ class MooncakeKVManager(BaseKVManager): ) return + if bootstrap_room not in self.transfer_infos: + # This means that the current rank is a dummy rank for this request, + # and it has already been marked as success, so there is no need to + # add further chunks into the transfer queue. + return + # NOTE(shangming): sharding according to the dst_infos to make sure # requests with the same dst_sessions will be added into the same # queue, which enables early abort with failed sessions. @@ -578,7 +584,6 @@ class MooncakeKVManager(BaseKVManager): prefill_aux_index=aux_index, ) ) - self.update_status(bootstrap_room, KVPoll.WaitingForInput) def check_status(self, bootstrap_room: int): return self.request_status[bootstrap_room]
[ "MooncakeKVManager.add_transfer_request" ]
[ "python/sglang/srt/disaggregation/nixl/conn.py", "python/sglang/srt/disaggregation/mooncake/conn.py", "python/sglang/srt/disaggregation/ascend/conn.py", "python/sglang/srt/disaggregation/common/conn.py", "python/sglang/srt/disaggregation/fake/conn.py", "python/sglang/srt/disaggregation/base/conn.py", "python/sglang/srt/disaggregation/mooncake/transfer_engine.py", "python/sglang/srt/disaggregation/ascend/transfer_engine.py", "python/sglang/api.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
187b85b7f38496653948a2aba546d53c09ada0f3
https://github.com/sgl-project/sglang/pull/7393
2025-06-20
2025-09-11 18:56:31
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
[PD] Optimize custom mem pool usage and bump mooncake version (#7393)
[PD] Optimize custom mem pool usage and bump mooncake version (#7393)
2025-06-20T09:50:39-07:00
[ "python/sglang/srt/disaggregation/mooncake/memory_pool.py", "python/sglang/srt/mem_cache/memory_pool.py", "scripts/ci_install_dependency.sh" ]
{ "commit_year": 2025, "num_edited_lines": 65, "num_files": 3, "num_hunks": 4, "num_non_test_edited_lines": 65, "num_non_test_files": 3, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/disaggregation/mooncake/memory_pool.py b/python/sglang/srt/disaggregation/mooncake/memory_pool.py deleted file mode 100644 index 6e8edaf92..000000000 --- a/python/sglang/srt/disaggregation/mooncake/memory_pool.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import threading -from importlib import resources -from typing import Dict, Final, Optional - -import torch -from torch.cuda.memory import CUDAPluggableAllocator - - -# TODO(shangming): move this class into mooncake's package for more general use cases -class MooncakeNVLinkAllocator: - _instances: Dict[torch.device, CUDAPluggableAllocator] = {} - _lock: Final = threading.Lock() - - @classmethod - def _get_so_path(cls) -> str: - """Dynamically locate hook.so in the mooncake package installation""" - try: - # Attempt to locate package resource - with resources.path("mooncake", "hook.so") as so_path: - if so_path.exists(): - return str(so_path) - except (ImportError, FileNotFoundError, TypeError): - pass - - # Fallback strategy: check in package location via import metadata - try: - import mooncake - - base_path = os.path.dirname(os.path.abspath(mooncake.__file__)) - so_path = os.path.join(base_path, "hook.so") - if os.path.exists(so_path): - return so_path - except (ImportError, FileNotFoundError, TypeError): - raise ImportError( - "SGLANG_MOONCAKE_CUSTOM_MEM_POOL require mooncake-transfer-engine >= 0.3.3.post2." - ) - - @classmethod - def get_allocator(cls, device: torch.device) -> CUDAPluggableAllocator: - with cls._lock: - if device not in cls._instances: - so_path = cls._get_so_path() - cls._instances[device] = CUDAPluggableAllocator( - so_path, "mc_nvlink_malloc", "mc_nvlink_free" - ) - return cls._instances[device] diff --git a/python/sglang/srt/mem_cache/memory_pool.py b/python/sglang/srt/mem_cache/memory_pool.py index c01807f1b..b5be2bb1b 100644 --- a/python/sglang/srt/mem_cache/memory_pool.py +++ b/python/sglang/srt/mem_cache/memory_pool.py @@ -270,12 +270,10 @@ class MHATokenToKVPool(KVCache): "SGLANG_MOONCAKE_CUSTOM_MEM_POOL", "false" ) if self.enable_custom_mem_pool: - from sglang.srt.disaggregation.mooncake.memory_pool import ( - MooncakeNVLinkAllocator, - ) - # TODO(shangming): abstract custom allocator class for more backends - allocator = MooncakeNVLinkAllocator.get_allocator(self.device) + from mooncake.allocator import NVLinkAllocator + + allocator = NVLinkAllocator.get_allocator(self.device) self.custom_mem_pool = torch.cuda.MemPool(allocator.allocator()) else: self.custom_mem_pool = None @@ -602,12 +600,10 @@ class MLATokenToKVPool(KVCache): "SGLANG_MOONCAKE_CUSTOM_MEM_POOL", "false" ) if self.enable_custom_mem_pool: - from sglang.srt.disaggregation.mooncake.memory_pool import ( - MooncakeNVLinkAllocator, - ) - # TODO(shangming): abstract custom allocator class for more backends - allocator = MooncakeNVLinkAllocator.get_allocator(self.device) + from mooncake.allocator import NVLinkAllocator + + allocator = NVLinkAllocator.get_allocator(self.device) self.custom_mem_pool = torch.cuda.MemPool(allocator.allocator()) else: self.custom_mem_pool = None diff --git a/scripts/ci_install_dependency.sh b/scripts/ci_install_dependency.sh index 922c886c4..a1808019e 100755 --- a/scripts/ci_install_dependency.sh +++ b/scripts/ci_install_dependency.sh @@ -23,7 +23,7 @@ pip install -e "python[dev]" pip list # Install additional dependencies -pip install mooncake-transfer-engine==0.3.2.post1 nvidia-cuda-nvrtc-cu12 +pip install mooncake-transfer-engine==0.3.4 nvidia-cuda-nvrtc-cu12 # For lmms_evals evaluating 
MMMU git clone --branch v0.3.3 --depth 1 https://github.com/EvolvingLMMs-Lab/lmms-eval.git
[ "sglang.srt.mem_cache.memory_pool.MHATokenToKVPool", "sglang.srt.mem_cache.memory_pool.MLATokenToKVPool</APIS>" ]
[ "python/sglang/srt/mem_cache/memory_pool.py", "python/sglang/api.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
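The MooncakeNVLinkAllocator class removed by this row's diff illustrates the pattern the commit relocates into the mooncake package: find a compiled hook.so shipped inside an installed package and wrap its malloc/free symbols in a torch CUDAPluggableAllocator. The sketch below is a hedged reconstruction of that pattern; the package name, symbol names, and helper function are illustrative, not a library API.

```python
import os
from importlib import resources

from torch.cuda.memory import CUDAPluggableAllocator


def load_pluggable_allocator(package: str = "mooncake",
                             so_name: str = "hook.so",
                             malloc_sym: str = "mc_nvlink_malloc",
                             free_sym: str = "mc_nvlink_free") -> CUDAPluggableAllocator:
    """Locate a shared library inside an installed package and wrap it in a
    CUDAPluggableAllocator (mirrors the removed MooncakeNVLinkAllocator)."""
    try:
        # Preferred: resolve the .so as a package resource.
        with resources.path(package, so_name) as so_path:
            if so_path.exists():
                return CUDAPluggableAllocator(str(so_path), malloc_sym, free_sym)
    except (ImportError, FileNotFoundError, TypeError):
        pass

    # Fallback: derive the path from the package's installation directory.
    module = __import__(package)
    so_path = os.path.join(os.path.dirname(os.path.abspath(module.__file__)), so_name)
    if not os.path.exists(so_path):
        raise ImportError(f"{so_name} not found in package {package!r}")
    return CUDAPluggableAllocator(so_path, malloc_sym, free_sym)
```

As shown in the diff, the KV-cache pools then pass allocator.allocator() to torch.cuda.MemPool so allocations for those pools go through the custom hook.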
1acca3a2c685221cdb181c2abda4f635e1ead435
https://github.com/sgl-project/sglang/pull/5969
2025-05-02
2025-09-11 18:58:13
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
FA3 speed up: skip len operation and get batch size directly from forward batch (#5969)
FA3 speed up: skip len operation and get batch size directly from forward batch (#5969)
2025-05-02T00:26:12-07:00
[ "python/sglang/srt/layers/attention/flashattention_backend.py" ]
{ "commit_year": 2025, "num_edited_lines": 2, "num_files": 1, "num_hunks": 1, "num_non_test_edited_lines": 2, "num_non_test_files": 1, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/layers/attention/flashattention_backend.py b/python/sglang/srt/layers/attention/flashattention_backend.py index 9579b19f2..c148ac159 100644 --- a/python/sglang/srt/layers/attention/flashattention_backend.py +++ b/python/sglang/srt/layers/attention/flashattention_backend.py @@ -338,7 +338,7 @@ class FlashAttentionBackend(AttentionBackend): """Initialize forward metadata hence all layers in the forward pass can reuse it.""" metadata = FlashAttentionMetadata() seqlens_in_batch = forward_batch.seq_lens - batch_size = len(seqlens_in_batch) + batch_size = forward_batch.batch_size device = seqlens_in_batch.device if forward_batch.forward_mode.is_decode_or_idle():
[ "FlashAttentionBackend.init_forward_metadata" ]
[ "python/sglang/srt/layers/attention/flashattention_backend.py", "python/sglang/api.py", "examples/runtime/engine/offline_batch_inference.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
205d5cb407f7860c79df870b3f045d74b8292f77
https://github.com/sgl-project/sglang/pull/6356
2025-05-17
2025-09-11 18:57:40
2026-01-03 16:29:32
[ "RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16" ]
python3 -m sglang.bench_serving --backend sglang --model RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16 --dataset-name random --num-prompts 100 --random-input 1000 --random-output 1000 --random-range-ratio 1.0 --max-concurrency 64
true
false
false
true
perf: Optimize local attention memory allocation in FlashAttentionBackend (#6356)
perf: Optimize local attention memory allocation in FlashAttentionBackend (#6356)
2025-05-17T01:45:46-07:00
[ "python/sglang/srt/layers/attention/flashattention_backend.py" ]
{ "commit_year": 2025, "num_edited_lines": 70, "num_files": 1, "num_hunks": 2, "num_non_test_edited_lines": 70, "num_non_test_files": 1, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/layers/attention/flashattention_backend.py b/python/sglang/srt/layers/attention/flashattention_backend.py index 2f974ea9a..a626ff0d8 100644 --- a/python/sglang/srt/layers/attention/flashattention_backend.py +++ b/python/sglang/srt/layers/attention/flashattention_backend.py @@ -1434,19 +1434,7 @@ class FlashAttentionBackend(AttentionBackend): self.decode_cuda_graph_metadata[bs] = metadata if self.attention_chunk_size is not None: - metadata.local_attn_metadata = FlashAttentionMetadata.LocalAttentionMetadata( - local_query_start_loc=self.decode_cuda_graph_local_attn_metadata[ - "local_query_start_loc" - ], - local_seqused_k=self.decode_cuda_graph_local_attn_metadata[ - "local_seqused_k" - ], - local_block_table=self.decode_cuda_graph_local_attn_metadata[ - "local_block_table" - ], - local_max_query_len=1, - local_max_seq_len=1, - ) + self._update_local_attn_metadata_for_capture(metadata, batch_size) elif forward_mode.is_target_verify(): if self.topk <= 1: @@ -1807,6 +1795,62 @@ class FlashAttentionBackend(AttentionBackend): ) metadata.local_attn_metadata = local_metadata + def _update_local_attn_metadata_for_capture( + self, metadata: FlashAttentionMetadata, bs: int + ): + """Update local attention metadata during CUDA graph capture phase. + + This method calculates the exact buffer sizes needed for local attention metadata + during the CUDA graph capture phase, optimizing memory usage by creating views of + pre-allocated buffers with exactly the sizes needed. + """ + seq_lens_capture = metadata.cache_seqlens_int32 + max_seq_len = int(seq_lens_capture.max().item()) + page_table_capture = metadata.page_table + + cu_seqlens_q_np = metadata.cu_seqlens_q.cpu().numpy() + seqlens_np = seq_lens_capture.cpu().numpy() + ( + seqlens_q_local_np, + cu_seqlens_q_local_np, + seqlens_k_local_np, + block_table_local_np, + ) = make_local_attention_virtual_batches( + self.attention_chunk_size, + cu_seqlens_q_np, + seqlens_np, + page_table_capture, + self.page_size, + ) + + # Get exact dimensions from the calculation + q_len = len(cu_seqlens_q_local_np) + k_len = len(seqlens_k_local_np) + b0 = block_table_local_np.shape[0] if block_table_local_np.shape[0] > 0 else bs + b1 = block_table_local_np.shape[1] if block_table_local_np.shape[1] > 0 else 1 + + # Create views of the pre-allocated buffers with exactly these sizes + # This is the key optimization - we only use the memory we actually need + local_query_start_loc = self.decode_cuda_graph_local_attn_metadata[ + "local_query_start_loc" + ][:q_len] + + local_seqused_k = self.decode_cuda_graph_local_attn_metadata["local_seqused_k"][ + :k_len + ] + + local_block_table = self.decode_cuda_graph_local_attn_metadata[ + "local_block_table" + ][:b0, :b1] + + metadata.local_attn_metadata = FlashAttentionMetadata.LocalAttentionMetadata( + local_query_start_loc=local_query_start_loc, + local_seqused_k=local_seqused_k, + local_block_table=local_block_table, + local_max_query_len=1, + local_max_seq_len=max_seq_len, + ) + def _update_local_attn_metadata_for_replay( self, metadata: FlashAttentionMetadata, bs: int ):
[ "sglang.srt.layers.attention.flashattention_backend.FlashAttentionBackend" ]
[ "python/sglang/srt/layers/attention/flashattention_backend.py", "python/sglang/api.py", "benchmark/lora/launch_server.py", "python/sglang/launch_server.py", "sgl-router/py_src/sglang_router/launch_server.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16,dtype=auto --tasks gsm8k --batch_size auto --limit 100
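The optimization in this row sizes the local-attention metadata during CUDA graph capture by taking views of pre-allocated buffers with exactly the lengths the captured batch needs, instead of handing out the full maximum-size buffers. A minimal sketch of that buffer-view idea follows; the buffer names and sizes are hypothetical, not the FlashAttention backend's actual fields.

```python
import torch

# Hypothetical maximum-size buffers, allocated once before CUDA graph capture.
MAX_Q, MAX_K, MAX_BS, MAX_PAGES = 4096, 4096, 256, 64
buffers = {
    "local_query_start_loc": torch.zeros(MAX_Q, dtype=torch.int32, device="cuda"),
    "local_seqused_k": torch.zeros(MAX_K, dtype=torch.int32, device="cuda"),
    "local_block_table": torch.zeros(MAX_BS, MAX_PAGES, dtype=torch.int32, device="cuda"),
}

def metadata_views(q_len: int, k_len: int, bs: int, pages: int):
    """Return views (not copies) of the pre-allocated buffers with exactly the
    sizes the captured batch needs; the underlying storage stays stable across
    capture and replay, which is what CUDA graphs require."""
    return (
        buffers["local_query_start_loc"][:q_len],
        buffers["local_seqused_k"][:k_len],
        buffers["local_block_table"][:bs, :pages],
    )
```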
2ed68d7a6c4737618652cfa0288443a5a5d73b14
https://github.com/sgl-project/sglang/pull/7236
2025-06-24
2025-09-11 18:56:22
2026-01-03 16:29:32
[ "deepseek-ai/DeepSeek-V2-Lite-Chat" ]
python3 -m sglang.bench_serving --backend sglang --model deepseek-ai/DeepSeek-V2-Lite-Chat --dataset-name random --num-prompts 100 --random-input 1000 --random-output 1000
true
false
false
true
[PD Disaggregation] replace transfer with batch transfer for better performance (#7236)
[PD Disaggregation] replace transfer with batch transfer for better performance (#7236)
2025-06-24T02:12:04-07:00
[ "python/sglang/srt/disaggregation/mooncake/conn.py", "python/sglang/srt/disaggregation/mooncake/transfer_engine.py" ]
{ "commit_year": 2025, "num_edited_lines": 42, "num_files": 2, "num_hunks": 3, "num_non_test_edited_lines": 42, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/disaggregation/mooncake/conn.py b/python/sglang/srt/disaggregation/mooncake/conn.py index 29e861e9f..92e182dfd 100644 --- a/python/sglang/srt/disaggregation/mooncake/conn.py +++ b/python/sglang/srt/disaggregation/mooncake/conn.py @@ -251,17 +251,19 @@ class MooncakeKVManager(BaseKVManager): # Worker function for processing a single layer def process_layer(src_ptr: int, dst_ptr: int, item_len: int) -> int: + src_addr_list = [] + dst_addr_list = [] + length_list = [] for prefill_index, decode_index in zip(prefill_kv_blocks, dst_kv_blocks): src_addr = src_ptr + int(prefill_index[0]) * item_len dst_addr = dst_ptr + int(decode_index[0]) * item_len length = item_len * len(prefill_index) - - status = self.engine.transfer_sync( - mooncake_session_id, src_addr, dst_addr, length - ) - if status != 0: - return status - return 0 + src_addr_list.append(src_addr) + dst_addr_list.append(dst_addr) + length_list.append(length) + return self.engine.batch_transfer_sync( + mooncake_session_id, src_addr_list, dst_addr_list, length_list + ) futures = [ executor.submit( diff --git a/python/sglang/srt/disaggregation/mooncake/transfer_engine.py b/python/sglang/srt/disaggregation/mooncake/transfer_engine.py index 5643af70b..966f7152c 100644 --- a/python/sglang/srt/disaggregation/mooncake/transfer_engine.py +++ b/python/sglang/srt/disaggregation/mooncake/transfer_engine.py @@ -1,7 +1,7 @@ import json import logging from dataclasses import dataclass -from typing import Optional +from typing import List, Optional logger = logging.getLogger(__name__) @@ -90,5 +90,29 @@ class MooncakeTransferEngine: return ret + def batch_transfer_sync( + self, + session_id: str, + buffers: List[int], + peer_buffer_addresses: List[int], + lengths: List[int], + ) -> int: + """Synchronously transfer data to the specified address.""" + try: + ret = self.engine.batch_transfer_sync_write( + session_id, buffers, peer_buffer_addresses, lengths + ) + except Exception: + ret = -1 + + if ret < 0: + logger.debug( + "Failed to batch transfer data. Buffers: %s, Session: %s, Peer addresses: %s", + buffers, + session_id, + peer_buffer_addresses, + ) + return ret + def get_session_id(self): return self.session_id
[ "MooncakeKVManager.send_kvcache", "MooncakeTransferEngine.batch_transfer_sync" ]
[ "python/sglang/srt/disaggregation/nixl/conn.py", "python/sglang/srt/disaggregation/mooncake/conn.py", "python/sglang/srt/disaggregation/ascend/conn.py", "python/sglang/srt/disaggregation/common/conn.py", "python/sglang/srt/disaggregation/fake/conn.py", "python/sglang/srt/disaggregation/base/conn.py", "python/sglang/srt/disaggregation/mooncake/transfer_engine.py", "python/sglang/srt/disaggregation/ascend/transfer_engine.py", "python/sglang/api.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,dtype=auto --tasks gsm8k --batch_size auto --limit 100
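This row's change replaces one transfer_sync call per contiguous KV block with a single batch_transfer_sync over address and length lists. The sketch below shows how the per-layer loop builds those lists; the engine's batch_transfer_sync signature follows the diff, while the function name and the stand-in engine object are illustrative.

```python
from typing import List, Sequence

def batch_send_layer(engine, session_id: str,
                     src_ptr: int, dst_ptr: int, item_len: int,
                     prefill_blocks: Sequence[Sequence[int]],
                     decode_blocks: Sequence[Sequence[int]]) -> int:
    """Collect one (src, dst, length) triple per contiguous block, then issue a
    single batched transfer instead of one synchronous call per block."""
    src_addrs: List[int] = []
    dst_addrs: List[int] = []
    lengths: List[int] = []
    for prefill_idx, decode_idx in zip(prefill_blocks, decode_blocks):
        src_addrs.append(src_ptr + int(prefill_idx[0]) * item_len)
        dst_addrs.append(dst_ptr + int(decode_idx[0]) * item_len)
        lengths.append(item_len * len(prefill_idx))
    # A single call covers all blocks; a negative return signals failure.
    return engine.batch_transfer_sync(session_id, src_addrs, dst_addrs, lengths)
```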
31589e177e2df6014607293fb4603cfd63297b67
https://github.com/sgl-project/sglang/pull/6668
2025-05-28
2025-09-11 18:57:12
2026-01-03 16:29:32
[ "deepseek-ai/DeepSeek-V2-Lite-Chat" ]
python3 -m sglang.bench_serving --backend sglang --model deepseek-ai/DeepSeek-V2-Lite-Chat --dataset-name random --random-range-ratio 1 --random-input-len 1000 --random-output-len 1000 --max-concurrency 1
true
false
false
true
Speed up when having padding tokens two-batch overlap (#6668)
Speed up when having padding tokens two-batch overlap (#6668)
2025-05-28T16:00:58-07:00
[ "python/sglang/srt/models/deepseek_v2.py", "python/sglang/srt/two_batch_overlap.py" ]
{ "commit_year": 2025, "num_edited_lines": 83, "num_files": 2, "num_hunks": 12, "num_non_test_edited_lines": 83, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/models/deepseek_v2.py b/python/sglang/srt/models/deepseek_v2.py index 29f18f0ef..b4fc4d7a7 100644 --- a/python/sglang/srt/models/deepseek_v2.py +++ b/python/sglang/srt/models/deepseek_v2.py @@ -454,6 +454,7 @@ class DeepseekV2MoE(nn.Module): num_expert_group=self.num_expert_group, correction_bias=self.correction_bias, routed_scaling_factor=self.routed_scaling_factor, + num_token_non_padded=state.forward_batch.num_token_non_padded, expert_location_dispatch_info=ExpertLocationDispatchInfo.init_new( layer_id=self.layer_id, ), diff --git a/python/sglang/srt/two_batch_overlap.py b/python/sglang/srt/two_batch_overlap.py index 6b0241f40..b417de7ce 100644 --- a/python/sglang/srt/two_batch_overlap.py +++ b/python/sglang/srt/two_batch_overlap.py @@ -110,7 +110,7 @@ def compute_split_indices_for_cuda_graph_replay( class TboCudaGraphRunnerPlugin: def __init__(self): - pass # TODO add logic here + self._tbo_children_num_token_non_padded = torch.zeros((2,), dtype=torch.int32) def capture_one_batch_size(self, batch: ForwardBatch, num_tokens: int): if not global_server_args_dict["enable_two_batch_overlap"]: @@ -124,7 +124,14 @@ class TboCudaGraphRunnerPlugin: # For simplicity, when two_batch_overlap is enabled, we only capture CUDA Graph for tbo=true assert batch.tbo_split_seq_index is not None, f"{num_tokens=}" - TboForwardBatchPreparer.prepare(batch) + self._tbo_children_num_token_non_padded[...] = ( + TboForwardBatchPreparer.compute_tbo_children_num_token_non_padded(batch) + ) + + TboForwardBatchPreparer.prepare_raw( + batch, + tbo_children_num_token_non_padded=self._tbo_children_num_token_non_padded, + ) def replay_prepare( self, forward_mode: ForwardMode, bs: int, num_token_non_padded: int @@ -132,7 +139,20 @@ class TboCudaGraphRunnerPlugin: if not global_server_args_dict["enable_two_batch_overlap"]: return - pass # TODO add logic here + tbo_split_seq_index, tbo_split_token_index = ( + compute_split_indices_for_cuda_graph_replay( + forward_mode=forward_mode, + # TODO support bs!=num_tokens + cuda_graph_num_tokens=bs, + ) + ) + + self._tbo_children_num_token_non_padded[...] 
= ( + TboForwardBatchPreparer.compute_tbo_children_num_token_non_padded_raw( + tbo_split_token_index=tbo_split_token_index, + num_token_non_padded=num_token_non_padded, + ) + ) class TboDPAttentionPreparer: @@ -207,16 +227,23 @@ class TboDPAttentionPreparer: class TboForwardBatchPreparer: @classmethod def prepare(cls, batch: ForwardBatch): - from sglang.srt.layers.attention.tbo_backend import TboAttnBackend - if batch.tbo_split_seq_index is None: return - tbo_split_token_index = compute_split_token_index( - split_seq_index=batch.tbo_split_seq_index, - forward_mode=batch.forward_mode, - extend_seq_lens=batch.extend_seq_lens_cpu, + tbo_children_num_token_non_padded = ( + cls.compute_tbo_children_num_token_non_padded(batch) ) + cls.prepare_raw( + batch, tbo_children_num_token_non_padded=tbo_children_num_token_non_padded + ) + + @classmethod + def prepare_raw( + cls, batch: ForwardBatch, tbo_children_num_token_non_padded: torch.Tensor + ): + from sglang.srt.layers.attention.tbo_backend import TboAttnBackend + + tbo_split_token_index = cls._compute_split_token_index(batch) if _tbo_debug: logger.info( @@ -229,6 +256,10 @@ class TboForwardBatchPreparer: assert isinstance(batch.attn_backend, TboAttnBackend) attn_backend_child_a, attn_backend_child_b = batch.attn_backend.children + [out_num_token_non_padded_a, out_num_token_non_padded_b] = ( + tbo_children_num_token_non_padded + ) + child_a = cls.filter_batch( batch, start_token_index=0, @@ -236,6 +267,7 @@ class TboForwardBatchPreparer: start_seq_index=0, end_seq_index=batch.tbo_split_seq_index, output_attn_backend=attn_backend_child_a, + out_num_token_non_padded=out_num_token_non_padded_a, ) child_b = cls.filter_batch( batch, @@ -244,6 +276,7 @@ class TboForwardBatchPreparer: start_seq_index=batch.tbo_split_seq_index, end_seq_index=batch.batch_size, output_attn_backend=attn_backend_child_b, + out_num_token_non_padded=out_num_token_non_padded_b, ) assert batch.tbo_children is None @@ -259,9 +292,8 @@ class TboForwardBatchPreparer: start_seq_index: int, end_seq_index: int, output_attn_backend: AttentionBackend, + out_num_token_non_padded: torch.Tensor, ): - from sglang.srt.managers.schedule_batch import global_server_args_dict - num_tokens = batch.input_ids.shape[0] num_seqs = batch.batch_size @@ -342,6 +374,7 @@ class TboForwardBatchPreparer: ), extend_num_tokens=extend_num_tokens, attn_backend=output_attn_backend, + num_token_non_padded=out_num_token_non_padded, tbo_split_seq_index=None, tbo_parent_token_range=(start_token_index, end_token_index), tbo_children=None, @@ -357,7 +390,6 @@ class TboForwardBatchPreparer: top_p_normalized_logprobs=False, top_p=None, mm_inputs=None, - num_token_non_padded=None, ) ) @@ -372,6 +404,32 @@ class TboForwardBatchPreparer: return ForwardBatch(**output_dict) + @classmethod + def compute_tbo_children_num_token_non_padded(cls, batch: ForwardBatch): + return cls.compute_tbo_children_num_token_non_padded_raw( + tbo_split_token_index=cls._compute_split_token_index(batch), + num_token_non_padded=len(batch.input_ids), + ) + + @classmethod + def compute_tbo_children_num_token_non_padded_raw( + cls, tbo_split_token_index: int, num_token_non_padded: int + ): + # TODO we may make padding on both sub-batches to make it slightly more balanced + value_a = min(tbo_split_token_index, num_token_non_padded) + value_b = max(0, num_token_non_padded - tbo_split_token_index) + return torch.tensor([value_a, value_b], dtype=torch.int32).to( + device=global_server_args_dict["device"], non_blocking=True + ) + + @classmethod + def 
_compute_split_token_index(cls, batch: ForwardBatch): + return compute_split_token_index( + split_seq_index=batch.tbo_split_seq_index, + forward_mode=batch.forward_mode, + extend_seq_lens=batch.extend_seq_lens_cpu, + ) + def _compute_extend_num_tokens(input_ids, forward_mode: ForwardMode): if forward_mode.is_extend():
[ "DeepseekV2ForCausalLM", "TboCudaGraphRunnerPlugin", "TboForwardBatchPreparer" ]
[ "python/sglang/srt/models/deepseek_v2.py", "python/sglang/srt/two_batch_overlap.py", "python/sglang/api.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,dtype=auto --tasks gsm8k --batch_size auto --limit 100
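The central helper introduced in this row splits the count of non-padded tokens between the two overlapped sub-batches given the token split index. A standalone sketch of that arithmetic, with device handling simplified to a plain string argument:

```python
import torch

def tbo_children_num_token_non_padded(tbo_split_token_index: int,
                                      num_token_non_padded: int,
                                      device: str = "cuda") -> torch.Tensor:
    """Sub-batch A gets the non-padded tokens up to the split point; sub-batch B
    gets whatever non-padded tokens remain after it (never negative)."""
    value_a = min(tbo_split_token_index, num_token_non_padded)
    value_b = max(0, num_token_non_padded - tbo_split_token_index)
    # Build on CPU and move asynchronously, matching the pattern in the diff.
    return torch.tensor([value_a, value_b], dtype=torch.int32).to(device=device, non_blocking=True)
```

For example, a split token index of 96 with 100 non-padded tokens yields [96, 4], so only the second sub-batch carries padding.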
6b231325b9782555eb8e1cfcf27820003a98382b
https://github.com/sgl-project/sglang/pull/6649
2025-05-28
2025-09-11 18:57:16
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
[PD Perf] replace Queue to FastQueue (#6649)
[PD Perf] replace Queue to FastQueue (#6649)
2025-05-28T01:37:51-07:00
[ "python/sglang/srt/disaggregation/mooncake/conn.py", "python/sglang/srt/disaggregation/utils.py" ]
{ "commit_year": 2025, "num_edited_lines": 308, "num_files": 2, "num_hunks": 11, "num_non_test_edited_lines": 308, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/disaggregation/mooncake/conn.py b/python/sglang/srt/disaggregation/mooncake/conn.py index 8ab5066ec..9ebdd60f0 100644 --- a/python/sglang/srt/disaggregation/mooncake/conn.py +++ b/python/sglang/srt/disaggregation/mooncake/conn.py @@ -31,6 +31,7 @@ from sglang.srt.disaggregation.base.conn import ( from sglang.srt.disaggregation.mooncake.transfer_engine import MooncakeTransferEngine from sglang.srt.disaggregation.utils import ( DisaggregationMode, + FastQueue, group_concurrent_contiguous, ) from sglang.srt.server_args import ServerArgs @@ -151,7 +152,6 @@ class MooncakeKVManager(BaseKVManager): self.server_socket = zmq.Context().socket(zmq.PULL) self.register_buffer_to_engine() if self.disaggregation_mode == DisaggregationMode.PREFILL: - self.transfer_queue = queue.Queue() self.transfer_infos: Dict[int, Dict[str, TransferInfo]] = {} self.decode_kv_args_table: Dict[str, KVArgsRegisterInfo] = {} self.start_prefill_thread() @@ -159,15 +159,31 @@ class MooncakeKVManager(BaseKVManager): self.session_failures = defaultdict(int) self.failed_sessions = set() self.session_lock = threading.Lock() - # Determine the number of threads to use for kv sender cpu_count = os.cpu_count() - self.executor = concurrent.futures.ThreadPoolExecutor( - get_int_env_var( - "SGLANG_DISAGGREGATION_THREAD_POOL_SIZE", - min(max(1, cpu_count // 8), 8), - ) + transfer_thread_pool_size = get_int_env_var( + "SGLANG_DISAGGREGATION_THREAD_POOL_SIZE", + min(max(4, int(0.75 * cpu_count) // 8), 12), ) + transfer_queue_size = get_int_env_var("SGLANG_DISAGGREGATION_QUEUE_SIZE", 4) + self.transfer_queues: List[FastQueue] = [ + FastQueue() for _ in range(transfer_queue_size) + ] + assert transfer_thread_pool_size >= transfer_queue_size, ( + f"The environment variable SGLANG_DISAGGREGATION_THREAD_POOL_SIZE={transfer_thread_pool_size} must be " + f"greater than or equal to SGLANG_DISAGGREGATION_QUEUE_SIZE={transfer_queue_size}." 
+ ) + self.executors = [ + concurrent.futures.ThreadPoolExecutor( + transfer_thread_pool_size // transfer_queue_size + ) + for _ in range(transfer_queue_size) + ] + for queue, executor in zip(self.transfer_queues, self.executors): + threading.Thread( + target=self.transfer_worker, args=(queue, executor), daemon=True + ).start() + self.bootstrap_time_out = get_int_env_var( "SGLANG_DISAGGREGATION_BOOTSTRAP_TIMEOUT", 30 ) @@ -183,7 +199,7 @@ class MooncakeKVManager(BaseKVManager): ) # Heartbeat failure should be at least 1 self.max_failures = max( - int(os.getenv("SGLANG_DISAGGREGATION_HEARTBEAT_MAX_FAILURE", 2)), 1 + get_int_env_var("SGLANG_DISAGGREGATION_HEARTBEAT_MAX_FAILURE", 2), 1 ) self.start_decode_thread() self.connection_pool: Dict[str, Dict[str, Union[str, int]]] = {} @@ -220,6 +236,7 @@ class MooncakeKVManager(BaseKVManager): prefill_kv_indices: npt.NDArray[np.int64], dst_kv_ptrs: list[int], dst_kv_indices: npt.NDArray[np.int64], + executor: concurrent.futures.ThreadPoolExecutor, ): # Group by indices prefill_kv_blocks, dst_kv_blocks = group_concurrent_contiguous( @@ -251,7 +268,7 @@ class MooncakeKVManager(BaseKVManager): return 0 futures = [ - self.executor.submit( + executor.submit( process_layer, src_ptr, dst_ptr, @@ -298,6 +315,123 @@ class MooncakeKVManager(BaseKVManager): ] ) + def transfer_worker( + self, queue: FastQueue, executor: concurrent.futures.ThreadPoolExecutor + ): + while True: + try: + kv_chunk: TransferKVChunk = queue.get() + reqs_to_be_processed = ( + self.transfer_infos[kv_chunk.room].values() + if kv_chunk.room in self.transfer_infos + else [] + ) + polls = [] + dst_ranks_infos = [] + for req in reqs_to_be_processed: + if not req.is_dummy: + # Early exit if the request has failed + with self.session_lock: + if req.mooncake_session_id in self.failed_sessions: + self.record_failure( + kv_chunk.room, + f"Decode instance could be dead, remote mooncake session {req.mooncake_session_id} is not alive", + ) + self.update_status(kv_chunk.room, KVPoll.Failed) + self.sync_status_to_decode_endpoint( + req.endpoint, + req.dst_port, + req.room, + KVPoll.Failed, + ) + break + + chunked_dst_kv_indice = req.dst_kv_indices[kv_chunk.index_slice] + + # NOTE: This is temporarily a workaround to deal with the case where the prefill_kv_indices + # is mismatched with the dst_kv_indices when page size > 1, this should never happen. + if len(chunked_dst_kv_indice) < len( + kv_chunk.prefill_kv_indices + ): + kv_chunk.prefill_kv_indices = kv_chunk.prefill_kv_indices[ + len(chunked_dst_kv_indice) + ] + logger.warning( + f"len(chunked_dst_kv_indice) = {len(chunked_dst_kv_indice)}, len(kv_chunk.prefill_kv_indices) = {len(kv_chunk.prefill_kv_indices)}" + ) + + ret = self.send_kvcache( + req.mooncake_session_id, + kv_chunk.prefill_kv_indices, + self.decode_kv_args_table[ + req.mooncake_session_id + ].dst_kv_ptrs, + chunked_dst_kv_indice, + executor, + ) + if ret != 0: + with self.session_lock: + self.session_failures[req.mooncake_session_id] += 1 + # Failures should never happen if the session is not dead, if the session fails once, mark it as failed + if self.session_failures[req.mooncake_session_id] >= 1: + self.failed_sessions.add(req.mooncake_session_id) + logger.error( + f"Session {req.mooncake_session_id} failed." 
+ ) + self.record_failure( + kv_chunk.room, + f"Failed to send kv chunk of {kv_chunk.room} to {req.endpoint}:{req.dst_port}", + ) + self.update_status(kv_chunk.room, KVPoll.Failed) + self.sync_status_to_decode_endpoint( + req.endpoint, req.dst_port, req.room, KVPoll.Failed + ) + break + + if kv_chunk.is_last: + # Only the last chunk we need to send the aux data + ret = self.send_aux( + req.mooncake_session_id, + kv_chunk.prefill_aux_index, + self.decode_kv_args_table[ + req.mooncake_session_id + ].dst_aux_ptrs, + req.dst_aux_index, + ) + polls.append(True if ret == 0 else False) + dst_ranks_infos.append( + (req.endpoint, req.dst_port, req.room) + ) + + # Only sync status when all the dst ranks have received the kvcache + if len(polls) == req.required_dst_info_num: + status = KVPoll.Success if all(polls) else KVPoll.Failed + self.update_status(req.room, status) + for endpoint, dst_port, room in dst_ranks_infos: + self.sync_status_to_decode_endpoint( + endpoint, dst_port, room, status + ) + else: + # Dummy request means the decode instance is not used, so its status can be marked as success directly + # Dummy request does not need to sync status to decode endpoint + if kv_chunk.is_last and req.room in self.request_status: + self.update_status(req.room, KVPoll.Success) + + if ( + kv_chunk.room not in self.request_status + or self.check_status(kv_chunk.room) == KVPoll.Success + ): + if kv_chunk.room in self.transfer_infos: + self.transfer_infos.pop(kv_chunk.room) + + except queue.Empty: + continue + except Exception as e: + # NOTE(shangming): Remove this when we make sure the transfer thread is bug-free + raise RuntimeError( + f"Transfer thread failed because of {e}. Prefill instance with bootstrap_port={self.bootstrap_port} is dead." + ) + def start_prefill_thread(self): self.rank_port = get_free_port() self.server_socket.bind(f"tcp://{get_local_ip_by_remote()}:{self.rank_port}") @@ -335,134 +469,7 @@ class MooncakeKVManager(BaseKVManager): if len(self.transfer_infos[room]) == required_dst_info_num: self.update_status(room, KVPoll.WaitingForInput) - def transfer_thread(): - # TODO: Shall we use KVPoll.Transferring state? - while True: - try: - kv_chunk: TransferKVChunk = self.transfer_queue.get(timeout=0.01) - reqs_to_be_processed = ( - self.transfer_infos[kv_chunk.room].values() - if kv_chunk.room in self.transfer_infos - else [] - ) - polls = [] - dst_ranks_infos = [] - for req in reqs_to_be_processed: - if not req.is_dummy: - # Early exit if the request has failed - with self.session_lock: - if req.mooncake_session_id in self.failed_sessions: - self.record_failure( - kv_chunk.room, - f"Decode instance could be dead, remote mooncake session {req.mooncake_session_id} is not alive", - ) - self.update_status(kv_chunk.room, KVPoll.Failed) - self.sync_status_to_decode_endpoint( - req.endpoint, - req.dst_port, - req.room, - KVPoll.Failed, - ) - break - - chunked_dst_kv_indice = req.dst_kv_indices[ - kv_chunk.index_slice - ] - - # NOTE: This is temporarily a workaround to deal with the case where the prefill_kv_indices - # is mismatched with the dst_kv_indices when page size > 1, this should never happen. 
- if len(chunked_dst_kv_indice) < len( - kv_chunk.prefill_kv_indices - ): - kv_chunk.prefill_kv_indices = ( - kv_chunk.prefill_kv_indices[ - len(chunked_dst_kv_indice) - ] - ) - logger.warning( - f"len(chunked_dst_kv_indice) = {len(chunked_dst_kv_indice)}, len(kv_chunk.prefill_kv_indices) = {len(kv_chunk.prefill_kv_indices)}" - ) - - ret = self.send_kvcache( - req.mooncake_session_id, - kv_chunk.prefill_kv_indices, - self.decode_kv_args_table[ - req.mooncake_session_id - ].dst_kv_ptrs, - chunked_dst_kv_indice, - ) - if ret != 0: - with self.session_lock: - self.session_failures[req.mooncake_session_id] += 1 - # Failures should never happen if the session is not dead, if the session fails once, mark it as failed - if ( - self.session_failures[req.mooncake_session_id] - >= 1 - ): - self.failed_sessions.add( - req.mooncake_session_id - ) - logger.error( - f"Session {req.mooncake_session_id} failed." - ) - self.record_failure( - kv_chunk.room, - f"Failed to send kv chunk of {kv_chunk.room} to {req.endpoint}:{req.dst_port}", - ) - self.update_status(kv_chunk.room, KVPoll.Failed) - self.sync_status_to_decode_endpoint( - req.endpoint, req.dst_port, req.room, KVPoll.Failed - ) - break - - if kv_chunk.is_last: - # Only the last chunk we need to send the aux data - ret = self.send_aux( - req.mooncake_session_id, - kv_chunk.prefill_aux_index, - self.decode_kv_args_table[ - req.mooncake_session_id - ].dst_aux_ptrs, - req.dst_aux_index, - ) - polls.append(True if ret == 0 else False) - dst_ranks_infos.append( - (req.endpoint, req.dst_port, req.room) - ) - - # Only sync status when all the dst ranks have received the kvcache - if len(polls) == req.required_dst_info_num: - status = ( - KVPoll.Success if all(polls) else KVPoll.Failed - ) - self.update_status(req.room, status) - for endpoint, dst_port, room in dst_ranks_infos: - self.sync_status_to_decode_endpoint( - endpoint, dst_port, room, status - ) - else: - # Dummy request means the decode instance is not used, so its status can be marked as success directly - # Dummy request does not need to sync status to decode endpoint - if kv_chunk.is_last and req.room in self.request_status: - self.update_status(req.room, KVPoll.Success) - - if ( - kv_chunk.room not in self.request_status - or self.check_status(kv_chunk.room) == KVPoll.Success - ): - if kv_chunk.room in self.transfer_infos: - self.transfer_infos.pop(kv_chunk.room) - - except queue.Empty: - continue - except Exception as e: - # NOTE(shangming): Remove this when we make sure the transfer thread is bug-free - raise RuntimeError( - f"Transfer thread failed because of {e}. Prefill instance with bootstrap_port={self.bootstrap_port} is dead." - ) - threading.Thread(target=bootstrap_thread).start() - threading.Thread(target=transfer_thread).start() def start_decode_thread(self): self.rank_port = get_free_port() @@ -555,7 +562,14 @@ class MooncakeKVManager(BaseKVManager): ) return - self.transfer_queue.put( + # NOTE(shangming): sharding according to the dst_infos to make sure + # requests with the same dst_sessions will be added into the same + # queue, which enables early abort with failed sessions. 
+ dst_infos = self.transfer_infos[bootstrap_room].keys() + session_port_sum = sum(int(session.split(":")[1]) for session in dst_infos) + shard_idx = session_port_sum % len(self.transfer_queues) + + self.transfer_queues[shard_idx].put( TransferKVChunk( room=bootstrap_room, prefill_kv_indices=kv_indices, diff --git a/python/sglang/srt/disaggregation/utils.py b/python/sglang/srt/disaggregation/utils.py index 8841d5f1a..db7dd3239 100644 --- a/python/sglang/srt/disaggregation/utils.py +++ b/python/sglang/srt/disaggregation/utils.py @@ -3,6 +3,7 @@ from __future__ import annotations import dataclasses import os import random +import threading import warnings from collections import deque from enum import Enum @@ -281,6 +282,25 @@ class MetadataBuffers: ) +class FastQueue: + def __init__(self): + self._buf = deque() + self._cond = threading.Condition() + + def put(self, item): + with self._cond: + self._buf.append(item) + # wake up a thread of wait() + self._cond.notify() + + def get(self): + with self._cond: + # if queue is empty ,block until is notified() + while not self._buf: + self._cond.wait() + return self._buf.popleft() + + def group_concurrent_contiguous( src_indices: npt.NDArray[np.int64], dst_indices: npt.NDArray[np.int64] ) -> Tuple[List[npt.NDArray[np.int64]], List[npt.NDArray[np.int64]]]:
[ "MooncakeKVManager.add_transfer_request", "MooncakeKVManager.transfer_worker", "FastQueue" ]
[ "python/sglang/srt/disaggregation/nixl/conn.py", "python/sglang/srt/disaggregation/mooncake/conn.py", "python/sglang/srt/disaggregation/ascend/conn.py", "python/sglang/srt/disaggregation/common/conn.py", "python/sglang/srt/disaggregation/fake/conn.py", "python/sglang/srt/disaggregation/base/conn.py", "python/sglang/utils.py", "python/sglang/srt/utils.py", "sgl-kernel/python/sgl_kernel/utils.py", "python/sglang/srt/disaggregation/utils.py", "python/sglang/srt/weight_sync/utils.py", "python/sglang/srt/layers/utils.py", "python/sglang/srt/distributed/utils.py", "python/sglang/srt/managers/utils.py", "python/sglang/srt/function_call/utils.py", "python/sglang/srt/configs/utils.py", "python/sglang/srt/connector/utils.py", "python/sglang/srt/model_loader/utils.py", "python/sglang/srt/lora/utils.py", "python/sglang/srt/disaggregation/common/utils.py", "python/sglang/srt/layers/attention/utils.py", "python/sglang/srt/layers/quantization/utils.py", "python/sglang/srt/layers/moe/utils.py", "python/sglang/srt/entrypoints/openai/utils.py", "python/sglang/srt/layers/quantization/compressed_tensors/utils.py", "python/sglang/srt/disaggregation/mooncake/transfer_engine.py", "python/sglang/srt/disaggregation/ascend/transfer_engine.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
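The FastQueue added in this row's diff is a deque guarded by a condition variable, and transfer chunks are sharded across several such queues by the destination session ports so chunks for the same destinations stay in one queue. A self-contained sketch of both pieces, with illustrative session strings:

```python
import threading
from collections import deque
from typing import List

class FastQueue:
    """Unbounded queue: put() appends and wakes one waiter; get() blocks until an item exists."""
    def __init__(self):
        self._buf = deque()
        self._cond = threading.Condition()

    def put(self, item):
        with self._cond:
            self._buf.append(item)
            self._cond.notify()

    def get(self):
        with self._cond:
            while not self._buf:
                self._cond.wait()
            return self._buf.popleft()

def pick_shard(dst_sessions: List[str], num_queues: int) -> int:
    """Shard by the sum of destination ports so requests with the same
    destination sessions land in the same queue (the rule used in the diff)."""
    port_sum = sum(int(session.split(":")[1]) for session in dst_sessions)
    return port_sum % num_queues

# Example: chunks destined for ports 7001 and 7002 always map to the same of 4 queues.
queues = [FastQueue() for _ in range(4)]
shard = pick_shard(["10.0.0.1:7001", "10.0.0.2:7002"], len(queues))
queues[shard].put({"room": 42, "is_last": True})
```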
73b13e69b4207f240650c6b51eba7a7204f64939
https://github.com/sgl-project/sglang/pull/7285
2025-06-20
2025-09-11 18:56:27
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
Optimize DP attn scheduling for speculative decoding (#7285)
Optimize DP attn scheduling for speculative decoding (#7285)
2025-06-20T15:06:41-07:00
[ "python/sglang/srt/managers/scheduler.py" ]
{ "commit_year": 2025, "num_edited_lines": 44, "num_files": 1, "num_hunks": 3, "num_non_test_edited_lines": 44, "num_non_test_files": 1, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/managers/scheduler.py b/python/sglang/srt/managers/scheduler.py index 8253a303b..14ed362cf 100644 --- a/python/sglang/srt/managers/scheduler.py +++ b/python/sglang/srt/managers/scheduler.py @@ -1399,29 +1399,6 @@ class Scheduler( self.metrics_collector.log_stats(self.stats) self._publish_kv_events() - def coordinate_spec_dp_attn_batch(self, new_batch: Optional[ScheduleBatch]): - """Coordinate the DP attention batch.""" - - local_info = torch.tensor( - [ - (new_batch is not None), - ], - dtype=torch.int64, - ) - global_info = torch.empty( - (self.server_args.dp_size, self.attn_tp_size, 1), - dtype=torch.int64, - ) - torch.distributed.all_gather_into_tensor( - global_info.flatten(), - local_info, - group=self.tp_cpu_group, - ) - any_new_batch = any( - global_info[:, 0, 0].tolist() - ) # Any DP worker has forward batch - return any_new_batch - def get_next_batch_to_run(self) -> Optional[ScheduleBatch]: # Merge the prefill batch into the running batch chunked_req_to_exclude = set() @@ -1456,13 +1433,15 @@ class Scheduler( new_batch = self.get_new_batch_prefill() - # TODO(ch-wan): minor refactor is needed here to improve readability - any_new_batch = ( - self.server_args.enable_dp_attention - and not self.spec_algorithm.is_none() - and self.coordinate_spec_dp_attn_batch(new_batch) - ) - if new_batch is not None or any_new_batch: + need_dp_attn_preparation = require_mlp_sync(self.server_args) + + if need_dp_attn_preparation and not self.spec_algorithm.is_none(): + # In speculative decoding, prefill batches and decode batches cannot be processed in the same DP attention group. + # We prepare idle batches in advance to skip preparing decode batches when there are prefill batches in the group. + new_batch, _ = self.prepare_dp_attn_batch(new_batch) + need_dp_attn_preparation = new_batch is None + + if new_batch is not None: # Run prefill first if possible ret = new_batch else: @@ -1473,8 +1452,9 @@ class Scheduler( else: ret = None - if require_mlp_sync(self.server_args): - ret, _ = self.prepare_mlp_sync_batch(ret) + # Handle DP attention + if need_dp_attn_preparation: + ret, _ = self.prepare_dp_attn_batch(ret) return ret
[ "sglang.srt.managers.scheduler.Scheduler.get_next_batch_to_run", "sglang.srt.managers.scheduler.Scheduler.coordinate_spec_dp_attn_batch", "sglang.srt.managers.scheduler.Scheduler.prepare_dp_attn_batch" ]
[ "python/sglang/srt/managers/scheduler.py", "python/sglang/api.py", "examples/runtime/engine/fastapi_engine_inference.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
a191a0e47c2f0b0c8aed28080b9cb78624365e92
https://github.com/sgl-project/sglang/pull/6593
2025-05-26
2025-09-11 18:57:26
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
Improve performance of two batch overlap in some imbalanced cases (#6593)
Improve performance of two batch overlap in some imbalanced cases (#6593)
2025-05-25T22:36:18-07:00
[ "python/sglang/srt/two_batch_overlap.py", "test/srt/test_two_batch_overlap.py" ]
{ "commit_year": 2025, "num_edited_lines": 56, "num_files": 2, "num_hunks": 3, "num_non_test_edited_lines": 56, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/two_batch_overlap.py b/python/sglang/srt/two_batch_overlap.py index 0fbc3c8e7..79ba76d49 100644 --- a/python/sglang/srt/two_batch_overlap.py +++ b/python/sglang/srt/two_batch_overlap.py @@ -40,13 +40,21 @@ def compute_split_seq_index( def _split_array_by_half_sum(arr: Sequence[int]) -> int: overall_sum = sum(arr) - accumulator, split_index = 0, 0 - for value in arr[:-1]: - accumulator += value - split_index += 1 - if accumulator >= overall_sum // 2: + left_sum = 0 + min_diff = float("inf") + best_index = 0 + + for i in range(1, len(arr)): + left_sum += arr[i - 1] + right_sum = overall_sum - left_sum + diff = abs(left_sum - right_sum) + if diff <= min_diff: + min_diff = diff + best_index = i + else: break - return split_index + + return best_index def compute_split_token_index( diff --git a/test/srt/test_two_batch_overlap.py b/test/srt/test_two_batch_overlap.py index 89e793ca6..765679fc3 100644 --- a/test/srt/test_two_batch_overlap.py +++ b/test/srt/test_two_batch_overlap.py @@ -4,6 +4,8 @@ from types import SimpleNamespace import requests +from sglang.srt.model_executor.forward_batch_info import ForwardMode +from sglang.srt.two_batch_overlap import compute_split_seq_index from sglang.srt.utils import kill_process_tree from sglang.test.run_eval import run_eval from sglang.test.test_utils import ( @@ -68,5 +70,39 @@ class TestTwoBatchOverlap(unittest.TestCase): self.assertGreater(metrics["score"], 0.5) +class TestTwoBatchOverlapUnitTest(unittest.TestCase): + # TODO change tests when having 6328 + def test_compute_split_seq_index(self): + for num_tokens, expect in [ + (0, 0), + (100, 50), + (99, 49), + ]: + actual = compute_split_seq_index( + forward_mode=ForwardMode.DECODE, num_tokens=num_tokens, extend_lens=None + ) + self.assertEqual(actual, expect) + + for extend_lens, expect in [ + ([], 0), + ([42], 0), + ([42, 999], 1), + ([999, 42], 1), + ([4096, 4096, 4096, 4096], 2), + ([4095, 4096, 4096, 4096, 1], 2), + ([1, 4095, 4096, 4096, 4096], 3), + ([4097, 4096, 4096, 4095, 1], 2), + ([1, 1, 1, 1, 99999], 4), + ([99999, 1, 1, 1, 1], 1), + ]: + actual = compute_split_seq_index( + forward_mode=ForwardMode.EXTEND, + num_tokens=None, + extend_lens=extend_lens, + ) + print(f"{extend_lens=} {expect=} {actual=}") + self.assertEqual(actual, expect) + + if __name__ == "__main__": unittest.main()
[ "sglang.srt.two_batch_overlap.compute_split_seq_index" ]
[ "python/sglang/srt/two_batch_overlap.py", "python/sglang/api.py", "python/sglang/srt/model_executor/forward_batch_info.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
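The rewritten _split_array_by_half_sum in this row picks the split index that minimizes the imbalance between the two halves, whereas the old version stopped at the first prefix reaching half of the total, which could leave the halves noticeably uneven. A standalone sketch with values taken from the unit test added in the diff:

```python
from typing import Sequence

def split_by_half_sum(arr: Sequence[int]) -> int:
    """Return the index i minimizing |sum(arr[:i]) - sum(arr[i:])|.
    Breaking as soon as the difference grows is valid because the imbalance is
    V-shaped in the (monotonically increasing) prefix sum."""
    overall_sum = sum(arr)
    left_sum, min_diff, best_index = 0, float("inf"), 0
    for i in range(1, len(arr)):
        left_sum += arr[i - 1]
        diff = abs(left_sum - (overall_sum - left_sum))
        if diff <= min_diff:
            min_diff, best_index = diff, i
        else:
            break
    return best_index

# Values from the test cases in the diff:
print(split_by_half_sum([4096, 4096, 4096, 4096]))  # 2
print(split_by_half_sum([1, 1, 1, 1, 99999]))       # 4
```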
c087ddd6865a52634326a05af66429cb5531cd16
https://github.com/sgl-project/sglang/pull/6627
2025-05-28
2025-09-11 18:57:20
2026-01-03 16:29:32
[ "deepseek-ai/DeepSeek-V2-Lite-Chat" ]
python -m sglang.bench_one_batch --model deepseek-ai/DeepSeek-V2-Lite-Chat --trust-remote-code --tp 1 --batch-size 1 --input 128 --output 256
false
false
true
true
Refine pre_reorder_triton_kernel slightly to improve performance (#6627)
Refine pre_reorder_triton_kernel slightly to improve performance (#6627)
2025-05-28T00:15:23-07:00
[ "benchmark/kernels/fused_moe_triton/benchmark_ep_pre_reorder_triton.py", "python/sglang/srt/layers/moe/ep_moe/kernels.py" ]
{ "commit_year": 2025, "num_edited_lines": 113, "num_files": 2, "num_hunks": 5, "num_non_test_edited_lines": 113, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/benchmark/kernels/fused_moe_triton/benchmark_ep_pre_reorder_triton.py b/benchmark/kernels/fused_moe_triton/benchmark_ep_pre_reorder_triton.py new file mode 100644 index 000000000..c62424357 --- /dev/null +++ b/benchmark/kernels/fused_moe_triton/benchmark_ep_pre_reorder_triton.py @@ -0,0 +1,100 @@ +import argparse +import itertools + +import pandas as pd +import torch +import triton + +from sglang.srt.layers.moe.ep_moe.kernels import pre_reorder_triton_kernel + + +def benchmark_pre_reorder(batch_size, topk, model_config): + hidden_size = model_config["hidden_size"] + block_size = model_config["block_size"] + expert_range = model_config["expert_range"] + + input_ptr = torch.randn(batch_size, hidden_size, dtype=torch.float16, device="cuda") + gateup_input_ptr = torch.zeros( + batch_size * topk, hidden_size, dtype=torch.float16, device="cuda" + ) + src2dst_ptr = torch.randint( + 0, batch_size * topk, (batch_size, topk), dtype=torch.int32, device="cuda" + ) + topk_ids_ptr = torch.randint( + expert_range[0], + expert_range[1] + 1, + (batch_size, topk), + dtype=torch.int32, + device="cuda", + ) + a1_scales_ptr = torch.rand( + expert_range[1] - expert_range[0] + 1, dtype=torch.float32, device="cuda" + ) + + input_ptr = input_ptr.view(-1) + gateup_input_ptr = gateup_input_ptr.view(-1) + src2dst_ptr = src2dst_ptr.view(-1) + topk_ids_ptr = topk_ids_ptr.view(-1) + + def run_kernel(): + pre_reorder_triton_kernel[(batch_size,)]( + input_ptr, + gateup_input_ptr, + src2dst_ptr, + topk_ids_ptr, + a1_scales_ptr, + expert_range[0], + expert_range[1], + topk, + hidden_size, + block_size, + ) + + for _ in range(10): + run_kernel() + torch.cuda.synchronize() + + ms, _, _ = triton.testing.do_bench(run_kernel, quantiles=[0.5, 0.2, 0.8]) + return ms + + +def main(): + parser = argparse.ArgumentParser() + parser.add_argument("--hidden-size", type=int, required=True) + parser.add_argument("--block-size", type=int, default=512) + args = parser.parse_args() + + model_config = { + "hidden_size": args.hidden_size, + "block_size": args.block_size, + "expert_range": (0, 255), + } + + batch_sizes = [64, 128, 256, 512, 640, 768, 1024] + topks = [2, 4, 8] + configs = list(itertools.product(batch_sizes, topks)) + + # Prepare results dict: keys = topk, each row is indexed by batch_size + results_dict = {topk: {} for topk in topks} + + for batch_size, topk in configs: + ms = benchmark_pre_reorder(batch_size, topk, model_config) + results_dict[topk][batch_size] = ms + + # Build dataframe + df = pd.DataFrame( + { + "batch_size": batch_sizes, + **{ + f"TopK={topk}": [results_dict[topk].get(bs, None) for bs in batch_sizes] + for topk in topks + }, + } + ) + + print("\npre-reorder-performance:") + print(df.to_string(index=False, float_format="%.6f")) + + +if __name__ == "__main__": + main() diff --git a/python/sglang/srt/layers/moe/ep_moe/kernels.py b/python/sglang/srt/layers/moe/ep_moe/kernels.py index 8c005527a..56c6c7db7 100644 --- a/python/sglang/srt/layers/moe/ep_moe/kernels.py +++ b/python/sglang/srt/layers/moe/ep_moe/kernels.py @@ -184,8 +184,10 @@ def pre_reorder_triton_kernel( src_idx = tl.program_id(0) src2dst_ptr = src2dst_ptr + src_idx * topk topk_ids_ptr = topk_ids_ptr + src_idx * topk - src_ptr = input_ptr + src_idx * hidden_size + + vec = tl.arange(0, BLOCK_SIZE) + for idx in range(topk): expert_id = tl.load(topk_ids_ptr + idx) if expert_id >= start_expert_id and expert_id <= end_expert_id: @@ -197,7 +199,7 @@ def pre_reorder_triton_kernel( dst_idx = tl.load(src2dst_ptr + idx) dst_ptr = gateup_input_ptr + 
dst_idx * hidden_size for start_offset in tl.range(0, hidden_size, BLOCK_SIZE): - offset = start_offset + tl.arange(0, BLOCK_SIZE) + offset = start_offset + vec mask = offset < hidden_size in_data = tl.load(src_ptr + offset, mask=mask).to(tl.float32) out_data = (in_data * scale).to(OutDtype) @@ -481,8 +483,11 @@ def post_reorder_triton_kernel( computed = False store_ptr = output_ptr + src_idx * hidden_size + + vec = tl.arange(0, BLOCK_SIZE) + for start_offset in tl.range(0, hidden_size, BLOCK_SIZE): - offset = start_offset + tl.arange(0, BLOCK_SIZE) + offset = start_offset + vec mask = offset < hidden_size sum_vec = tl.zeros([BLOCK_SIZE], dtype=InDtype) @@ -499,7 +504,7 @@ def post_reorder_triton_kernel( if computed == False: for start_offset in tl.range(0, hidden_size, BLOCK_SIZE): - offset = start_offset + tl.arange(0, BLOCK_SIZE) + offset = start_offset + vec mask = offset < hidden_size tl.store( store_ptr + offset, tl.zeros([BLOCK_SIZE], dtype=InDtype), mask=mask
[ "sglang.srt.layers.moe.ep_moe.kernels.pre_reorder_triton_kernel", "sglang.srt.layers.moe.ep_moe.kernels.post_reorder_triton_kernel" ]
[ "python/sglang/srt/layers/moe/ep_moe/kernels.py", "python/sglang/srt/layers/moe/fused_moe_triton/layer.py", "python/sglang/srt/layers/moe/ep_moe/layer.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=deepseek-ai/DeepSeek-V2-Lite-Chat,dtype=auto --tasks gsm8k --batch_size auto --limit 100
da47621ccc4f8e8381f3249257489d5fe32aff1b
https://github.com/sgl-project/sglang/pull/7058
2025-06-13
2025-09-11 18:56:38
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
Minor speedup topk postprocessing (#7058)
Minor speedup topk postprocessing (#7058)
2025-06-13T00:50:18-07:00
[ "python/sglang/srt/layers/moe/topk.py" ]
{ "commit_year": 2025, "num_edited_lines": 24, "num_files": 1, "num_hunks": 2, "num_non_test_edited_lines": 24, "num_non_test_files": 1, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/layers/moe/topk.py b/python/sglang/srt/layers/moe/topk.py index f5dceac78..0c3d92b66 100644 --- a/python/sglang/srt/layers/moe/topk.py +++ b/python/sglang/srt/layers/moe/topk.py @@ -249,6 +249,15 @@ def _mask_topk_ids_padded_region( topk_ids[indices >= num_token_non_padded, :] = -1 +@torch.compile(dynamic=True, backend=get_compiler_backend()) +def _biased_grouped_topk_postprocess( + topk_ids, expert_location_dispatch_info, num_token_non_padded +): + topk_ids = topk_ids_logical_to_physical(topk_ids, expert_location_dispatch_info) + _mask_topk_ids_padded_region(topk_ids, num_token_non_padded) + return topk_ids + + def biased_grouped_topk( hidden_states: torch.Tensor, gating_output: torch.Tensor, @@ -282,14 +291,13 @@ def biased_grouped_topk( num_fused_shared_experts, routed_scaling_factor, ) - # TODO merge into kernel for this branch - topk_ids = topk_ids_logical_to_physical(topk_ids, expert_location_dispatch_info) - # TODO will fuse this into kernel, thus use slow manual operation now - if num_token_non_padded is None: - return topk_weights, topk_ids - torch.compile( - _mask_topk_ids_padded_region, dynamic=True, backend=get_compiler_backend() - )(topk_ids, num_token_non_padded) + # TODO merge into kernel + if (expert_location_dispatch_info is not None) or ( + num_token_non_padded is not None + ): + topk_ids = _biased_grouped_topk_postprocess( + topk_ids, expert_location_dispatch_info, num_token_non_padded + ) return topk_weights, topk_ids else: biased_grouped_topk_fn = (
[ "sglang.srt.layers.moe.topk.biased_grouped_topk", "sglang.srt.layers.moe.topk._biased_grouped_topk_postprocess" ]
[ "python/sglang/srt/layers/moe/topk.py", "python/sglang/api.py", "benchmark/lora/launch_server.py", "python/sglang/launch_server.py", "sgl-router/py_src/sglang_router/launch_server.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
dd1012fcbe2a1fb36c44e10c16f8d0bcd8e9da25
https://github.com/sgl-project/sglang/pull/6764
2025-06-05
2025-09-11 18:56:57
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
[PD] Fix potential perf spike caused by tracker gc and optimize doc (#6764)
[PD] Fix potential perf spike caused by tracker gc and optimize doc (#6764)
2025-06-05T10:56:02-07:00
[ "docs/backend/pd_disaggregation.md", "python/sglang/srt/disaggregation/mooncake/conn.py" ]
{ "commit_year": 2025, "num_edited_lines": 20, "num_files": 2, "num_hunks": 4, "num_non_test_edited_lines": 20, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/docs/backend/pd_disaggregation.md b/docs/backend/pd_disaggregation.md index 9dbc2705d..833f0b3f9 100644 --- a/docs/backend/pd_disaggregation.md +++ b/docs/backend/pd_disaggregation.md @@ -54,8 +54,8 @@ PD Disaggregation with Mooncake supports the following environment variables for #### Prefill Server Configuration | Variable | Description | Default | |:--------:|:-----------:|:--------: -| **`SGLANG_DISAGGREGATION_THREAD_POOL_SIZE`** | Controls the total number of worker threads for KV transfer operations per TP rank | A dynamic value calculated by `int(0.75 * os.cpu_count()) // 8)`, which is limited to be larger than 4 and less than 12 to ensure efficiency and prevent thread race conditions | -| **`SGLANG_DISAGGREGATION_QUEUE_SIZE`** | Sets the maximum pending tasks in the parallel transfer queue | `4` | +| **`SGLANG_DISAGGREGATION_THREAD_POOL_SIZE`** | Controls the total number of worker threads for KVCache transfer operations per TP rank | A dynamic value calculated by `int(0.75 * os.cpu_count()) // 8)`, which is limited to be larger than 4 and less than 12 to ensure efficiency and prevent thread race conditions | +| **`SGLANG_DISAGGREGATION_QUEUE_SIZE`** | Sets the number of parallel transfer queues. KVCache transfer requests from multiple decode instances will be sharded into these queues so that they can share the threads and the transfer bandwidth at the same time. If it is set to `1`, then we transfer requests one by one according to fcfs strategy | `4` | | **`SGLANG_DISAGGREGATION_BOOTSTRAP_TIMEOUT`** | Timeout (seconds) for receiving destination KV indices during request initialization | `30` | #### Decode Server Configuration diff --git a/python/sglang/srt/disaggregation/mooncake/conn.py b/python/sglang/srt/disaggregation/mooncake/conn.py index 940a25d74..824f76709 100644 --- a/python/sglang/srt/disaggregation/mooncake/conn.py +++ b/python/sglang/srt/disaggregation/mooncake/conn.py @@ -191,7 +191,7 @@ class MooncakeKVManager(BaseKVManager): self.heartbeat_failures = {} self.session_pool = defaultdict(requests.Session) self.session_pool_lock = threading.Lock() - self.addr_to_rooms_tracker = defaultdict(list) + self.addr_to_rooms_tracker = defaultdict(set) self.connection_lock = threading.Lock() # Heartbeat interval should be at least 2 seconds self.heartbeat_interval = max( @@ -504,12 +504,14 @@ class MooncakeKVManager(BaseKVManager): if response.status_code == 200: self.heartbeat_failures[bootstrap_addr] = 0 - for bootstrap_room in self.addr_to_rooms_tracker[ + current_rooms = self.addr_to_rooms_tracker[ bootstrap_addr - ]: - # Remove KVPoll.Success requests from the map + ].copy() + + for bootstrap_room in current_rooms: + # Remove KVPoll.Success requests from the tracker if bootstrap_room not in self.request_status: - self.addr_to_rooms_tracker[bootstrap_addr].remove( + self.addr_to_rooms_tracker[bootstrap_addr].discard( bootstrap_room ) else: @@ -879,9 +881,7 @@ class MooncakeKVReceiver(BaseKVReceiver): self.bootstrap_infos = self.kv_mgr.connection_pool[bootstrap_key] assert len(self.bootstrap_infos) > 0 - self.kv_mgr.addr_to_rooms_tracker[self.bootstrap_addr].append( - self.bootstrap_room - ) + self.kv_mgr.addr_to_rooms_tracker[self.bootstrap_addr].add(self.bootstrap_room) self.kv_mgr.update_status(self.bootstrap_room, KVPoll.WaitingForInput) def _get_bootstrap_info_from_server(self, engine_rank, target_dp_group):
[ "MooncakeKVManager", "MooncakeKVReceiver" ]
[ "python/sglang/srt/disaggregation/nixl/conn.py", "python/sglang/srt/disaggregation/mooncake/conn.py", "python/sglang/srt/disaggregation/ascend/conn.py", "python/sglang/srt/disaggregation/common/conn.py", "python/sglang/srt/disaggregation/fake/conn.py", "python/sglang/srt/disaggregation/base/conn.py", "python/sglang/srt/disaggregation/mooncake/transfer_engine.py", "python/sglang/srt/disaggregation/ascend/transfer_engine.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
df7f61ee7d235936e6663f07813d7c03c4ec1603
https://github.com/sgl-project/sglang/pull/6812
2025-06-02
2025-09-11 18:57:05
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
Speed up rebalancing when using non-static dispatch algorithms (#6812)
Speed up rebalancing when using non-static dispatch algorithms (#6812)
2025-06-02T11:18:17-07:00
[ "python/sglang/srt/managers/expert_location.py", "python/sglang/srt/managers/expert_location_dispatch.py" ]
{ "commit_year": 2025, "num_edited_lines": 47, "num_files": 2, "num_hunks": 9, "num_non_test_edited_lines": 47, "num_non_test_files": 2, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/managers/expert_location.py b/python/sglang/srt/managers/expert_location.py index 615e0a440..ea4c67a54 100644 --- a/python/sglang/srt/managers/expert_location.py +++ b/python/sglang/srt/managers/expert_location.py @@ -35,7 +35,8 @@ class ExpertLocationMetadata: physical_to_logical_map: torch.Tensor # (layers, num_physical_experts) logical_to_all_physical_map: torch.Tensor # (layers, num_logical_experts, X) logical_to_all_physical_map_num_valid: torch.Tensor # (layers, num_logical_experts) - logical_to_rank_dispatch_physical_map: torch.Tensor # (layers, num_logical_experts) + # (layers, num_logical_experts) + logical_to_rank_dispatch_physical_map: Optional[torch.Tensor] # -------------------------------- properties ------------------------------------ @@ -70,11 +71,8 @@ class ExpertLocationMetadata: num_layers_2, num_logical_experts_1 = ( self.logical_to_all_physical_map_num_valid.shape ) - num_layers_3, num_logical_experts_2 = ( - self.logical_to_rank_dispatch_physical_map.shape - ) - assert num_layers_0 == num_layers_1 == num_layers_2 == num_layers_3 - assert num_logical_experts_0 == num_logical_experts_1 == num_logical_experts_2 + assert num_layers_0 == num_layers_1 == num_layers_2 + assert num_logical_experts_0 == num_logical_experts_1 assert num_physical_experts_0 == num_physical_experts_1 # -------------------------------- construction ------------------------------------ @@ -117,6 +115,7 @@ class ExpertLocationMetadata: ) return ExpertLocationMetadata._init_raw( + server_args=server_args, ep_size=common["ep_size"], physical_to_logical_map=physical_to_logical_map, logical_to_all_physical_map=logical_to_all_physical_map, @@ -154,6 +153,7 @@ class ExpertLocationMetadata: ) return ExpertLocationMetadata._init_raw( + server_args=server_args, ep_size=common["ep_size"], physical_to_logical_map=physical_to_logical_map.to(server_args.device), logical_to_all_physical_map=logical_to_all_physical_map.to( @@ -184,6 +184,7 @@ class ExpertLocationMetadata: @staticmethod def _init_raw( + server_args: ServerArgs, ep_size: int, physical_to_logical_map: torch.Tensor, logical_to_all_physical_map: torch.Tensor, @@ -204,12 +205,16 @@ class ExpertLocationMetadata: physical_to_logical_map=physical_to_logical_map, logical_to_all_physical_map=logical_to_all_physical_map_padded, logical_to_all_physical_map_num_valid=logical_to_all_physical_map_num_valid, - logical_to_rank_dispatch_physical_map=compute_logical_to_rank_dispatch_physical_map( - logical_to_all_physical_map=logical_to_all_physical_map, - num_gpus=ep_size, - num_physical_experts=num_physical_experts, - # TODO improve when we have real EP rank - ep_rank=torch.distributed.get_rank() % ep_size, + logical_to_rank_dispatch_physical_map=( + compute_logical_to_rank_dispatch_physical_map( + logical_to_all_physical_map=logical_to_all_physical_map, + num_gpus=ep_size, + num_physical_experts=num_physical_experts, + # TODO improve when we have real EP rank + ep_rank=torch.distributed.get_rank() % ep_size, + ) + if server_args.ep_dispatch_algorithm == "static" + else None ), ) @@ -230,8 +235,11 @@ class ExpertLocationMetadata: "logical_to_all_physical_map_num_valid", "logical_to_rank_dispatch_physical_map", ]: + src = getattr(other, field) dst = getattr(self, field) - dst[...] = getattr(other, field) + assert (src is not None) == (dst is not None) + if dst is not None: + dst[...] 
= src # -------------------------------- usage ------------------------------------ diff --git a/python/sglang/srt/managers/expert_location_dispatch.py b/python/sglang/srt/managers/expert_location_dispatch.py index 6880b01a2..547dd4e72 100644 --- a/python/sglang/srt/managers/expert_location_dispatch.py +++ b/python/sglang/srt/managers/expert_location_dispatch.py @@ -25,7 +25,7 @@ from sglang.srt.managers.schedule_batch import global_server_args_dict class ExpertLocationDispatchInfo: ep_dispatch_algorithm: Literal["static", "random"] # (num_logical_experts,) - partial_logical_to_rank_dispatch_physical_map: torch.Tensor + partial_logical_to_rank_dispatch_physical_map: Optional[torch.Tensor] # (num_logical_experts, X) partial_logical_to_all_physical_map: torch.Tensor # (num_logical_experts,) @@ -42,9 +42,14 @@ class ExpertLocationDispatchInfo: return cls( ep_dispatch_algorithm=ep_dispatch_algorithm, - partial_logical_to_rank_dispatch_physical_map=expert_location_metadata.logical_to_rank_dispatch_physical_map[ - layer_id, : - ], + partial_logical_to_rank_dispatch_physical_map=( + expert_location_metadata.logical_to_rank_dispatch_physical_map[ + layer_id, : + ] + if expert_location_metadata.logical_to_rank_dispatch_physical_map + is not None + else None + ), partial_logical_to_all_physical_map=expert_location_metadata.logical_to_all_physical_map[ layer_id, : ],
[ "sglang.srt.managers.expert_location.ExpertLocationMetadata", "sglang.srt.managers.expert_location_dispatch.ExpertLocationDispatchInfo" ]
[ "python/sglang/srt/eplb/expert_location.py", "python/sglang/srt/eplb/expert_location_dispatch.py", "python/sglang/api.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100
e3ec6bf4b65a50e26e936a96adc7acc618292002
https://github.com/sgl-project/sglang/pull/6814
2025-06-13
2025-09-11 18:56:34
2026-01-03 16:29:32
[ "meta-llama/Llama-3.1-8B-Instruct" ]
python -m sglang.bench_serving --backend sglang --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100
true
false
false
true
Minor speed up block_quant_dequant (#6814)
Minor speed up block_quant_dequant (#6814)
2025-06-13T14:32:46-07:00
[ "python/sglang/srt/layers/quantization/fp8_utils.py" ]
{ "commit_year": 2025, "num_edited_lines": 26, "num_files": 1, "num_hunks": 1, "num_non_test_edited_lines": 26, "num_non_test_files": 1, "num_test_files": 0, "only_non_test_files": 1, "only_test_files": 0 }
diff --git a/python/sglang/srt/layers/quantization/fp8_utils.py b/python/sglang/srt/layers/quantization/fp8_utils.py index 0e1640fcf..86d8155f8 100644 --- a/python/sglang/srt/layers/quantization/fp8_utils.py +++ b/python/sglang/srt/layers/quantization/fp8_utils.py @@ -369,27 +369,15 @@ def block_quant_dequant( The output is an unquantized tensor with dtype. """ block_n, block_k = block_size[0], block_size[1] - n, k = x_q_block.shape - n_tiles = (n + block_n - 1) // block_n - k_tiles = (k + block_k - 1) // block_k - assert n_tiles == x_s.shape[0] - assert k_tiles == x_s.shape[1] - - x_dq_block = torch.empty_like(x_q_block, dtype=dtype) + *_, n, k = x_q_block.shape - for j in range(n_tiles): - for i in range(k_tiles): - x_q_block_tile = x_q_block[ - j * block_n : min((j + 1) * block_n, n), - i * block_k : min((i + 1) * block_k, k), - ] - x_dq_block_tile = x_dq_block[ - j * block_n : min((j + 1) * block_n, n), - i * block_k : min((i + 1) * block_k, k), - ] - x_dq_block_tile[:, :] = x_q_block_tile.to(torch.float32) * x_s[j][i] + # ... n_scale k_scale -> ... (n_scale block_n) (k_scale block_k) + x_scale_repeat = x_s.repeat_interleave(block_n, dim=-2).repeat_interleave( + block_k, dim=-1 + ) + x_scale_repeat = x_scale_repeat[..., :n, :k] - return x_dq_block + return (x_q_block.to(torch.float32) * x_scale_repeat).to(dtype) def channel_quant_to_tensor_quant(
[ "sglang.srt.layers.quantization.fp8_utils.block_quant_dequant" ]
[ "python/sglang/api.py", "python/sglang/srt/layers/quantization/fp8.py", "python/sglang/srt/layers/quantization/fp8_utils.py" ]
sglang
H100
lm_eval --model sglang --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100

ISO-Bench Dataset

A curated dataset of real-world software performance optimization commits from vLLM and SGLang, designed for evaluating AI agents on code optimization tasks.

Dataset Summary

Config    Commits   Repository
vllm      39        vLLM (LLM inference engine)
sglang    15        SGLang (LLM serving framework)

Each entry represents a human-authored performance optimization commit with:

  • The original commit diff and message
  • Performance benchmark commands (perf_command)
  • Model configurations for benchmarking
  • Hardware requirements
  • API surface analysis

Usage

from datasets import load_dataset

# Load vLLM optimization commits
vllm = load_dataset('Lossfunk/ISO-Bench', 'vllm', split='train')

# Load SGLang optimization commits
sglang = load_dataset('Lossfunk/ISO-Bench', 'sglang', split='train')

# Example: inspect a commit
print(vllm[0]['commit_subject'])
print(vllm[0]['perf_command'])
print(vllm[0]['models'])
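
The boolean flags and the hardware field described in the Schema section below can be used to narrow a split to commits relevant to a particular setup. The following is a minimal sketch, using only fields listed in the schema and the 'H100' hardware value that appears in the viewer rows above:

# Narrow a split to serving-oriented commits benchmarked on H100 GPUs.
from datasets import load_dataset

sglang = load_dataset('Lossfunk/ISO-Bench', 'sglang', split='train')

serving_on_h100 = [
    row for row in sglang
    if row['has_serving'] and row['hardware'] == 'H100'
]

for row in serving_on_h100:
    print(row['commit_hash'][:12], row['commit_subject'])
    print('  benchmark:', row['perf_command'])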

Schema

Field             Type           Description
commit_hash       string         Git hash of the optimization commit
pr_url            string         URL to the pull request
commit_subject    string         Commit message subject line
commit_message    string         Full commit message
diff_text         string         Unified diff of the optimization
models            list[string]   HuggingFace model IDs used for benchmarking
perf_command      string         Command to run the performance benchmark
has_serving       bool           Whether the commit affects serving performance
has_latency       bool           Whether the commit affects latency
has_throughput    bool           Whether the commit affects throughput
uses_lm_eval      bool           Whether correctness is validated via lm-eval
lm_eval_command   string         lm-eval command for correctness validation
files_changed     list[string]   Files modified in the commit
apis              list[string]   Affected API endpoints/functions
affected_paths    list[string]   Code paths affected by the change
hardware          string         Required hardware (e.g., GPU type)
stats             struct         Commit statistics (lines changed, files, hunks)
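
The stats struct mirrors the per-commit statistics visible in the viewer rows above, with keys such as num_files, num_hunks, and num_edited_lines. A small sketch that summarizes diff size across a split, assuming those key names:

# Summarize how large each human-written optimization is, using the
# per-commit `stats` struct.
from datasets import load_dataset

vllm = load_dataset('Lossfunk/ISO-Bench', 'vllm', split='train')

for row in vllm:
    s = row['stats']
    print(
        f"{row['commit_hash'][:12]}: {s['num_files']} files, "
        f"{s['num_hunks']} hunks, {s['num_edited_lines']} edited lines"
    )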

How It Works

Each dataset entry captures a real performance optimization made by an expert developer. AI agents are given the codebase at the parent commit (before optimization) and must independently discover and implement a performance improvement. Their patches are then benchmarked against the human expert's solution using wall-clock timing comparisons.
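
A minimal sketch of that comparison loop follows, assuming a local clone of the target repository and that perf_command can be run as-is; this is not the official harness, and it omits environment setup such as reinstalling the package after each checkout:

# Sketch only: start from the parent of the optimization commit, apply a
# candidate patch, run the recorded benchmark command, and repeat with the
# human diff_text for a wall-clock comparison.
import subprocess, tempfile, time

def run_benchmark(repo_dir: str, entry: dict, patch_text: str) -> float:
    # Reset the working tree to the state *before* the human optimization.
    subprocess.run(
        ['git', 'checkout', '-f', entry['commit_hash'] + '^'],
        cwd=repo_dir, check=True,
    )
    # Write the candidate patch (the agent's, or the human diff_text) to disk
    # and apply it on top of the parent commit.
    with tempfile.NamedTemporaryFile('w', suffix='.patch', delete=False) as f:
        f.write(patch_text)
        patch_path = f.name
    subprocess.run(['git', 'apply', patch_path], cwd=repo_dir, check=True)
    # Time the benchmark command recorded in the dataset entry.
    start = time.perf_counter()
    subprocess.run(entry['perf_command'], shell=True, cwd=repo_dir, check=True)
    return time.perf_counter() - start

# Hypothetical usage: compare the human diff against an agent-generated patch.
# human_time = run_benchmark('/path/to/sglang', entry, entry['diff_text'])
# agent_time = run_benchmark('/path/to/sglang', entry, agent_patch)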

License

Apache 2.0
