Qwen3-235B-A22B-Instruct-2507-speculator.eagle3

Model Overview

  • Verifier: Qwen/Qwen3-235B-A22B-Instruct-2507
  • Speculative Decoding Algorithm: EAGLE-3
  • Model Architecture: Eagle3Speculator
  • Release Date: 12/03/2025
  • Version: 1.0
  • Model Developers: Red Hat

This is a speculator model designed for use with Qwen/Qwen3-235B-A22B-Instruct-2507, based on the EAGLE-3 speculative decoding algorithm. It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered dataset and the train_sft split of the HuggingFaceH4/ultrachat_200k dataset. This model should be used with the Qwen/Qwen3-235B-A22B-Instruct-2507 chat template, specifically through the /chat/completions endpoint.

Use with vLLM

vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507 \
  -tp 8 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-235B-A22B-Instruct-2507-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
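
Once the server is up, requests should go through the OpenAI-compatible /v1/chat/completions route so the verifier's chat template is applied; the speculator is used transparently on the server side. A minimal request sketch (the prompt and sampling parameters below are illustrative only):

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-235B-A22B-Instruct-2507",
        "messages": [
          {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
        ],
        "max_tokens": 256,
        "temperature": 0
      }'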

Evaluations

Use cases

Use Case             Dataset          Number of Samples
Coding               HumanEval        168
Math Reasoning       gsm8k            80
Text Summarization   CNN/Daily Mail   80

Acceptance lengths

Use Case             k=1    k=2    k=3    k=4    k=5
Coding               1.85   2.52   3.06   3.41   3.71
Math Reasoning       1.85   2.54   3.10   3.55   3.88
Text Summarization   1.61   1.98   2.15   2.24   2.29
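
Acceptance length here is the average number of tokens produced per verifier forward pass when k tokens are drafted per step, so higher is better. At k=3, for example, the coding workload yields roughly 3.06 tokens per pass, consistent with the num_speculative_tokens: 3 setting in the serve example above; returns diminish quickly for summarization beyond k=3.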

Configuration

  • repetitions: 1
  • time per experiment: 10min
  • hardware: 8xA100
  • vLLM version: 0.11.2
  • GuideLLM version: 0.3.0

Command

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type sweep \
  --max-seconds 600 \
  --output-path "Qwen235B-HumanEval.json"
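
The other use cases can be benchmarked the same way by pointing --data-args at the corresponding file in RedHatAI/speculator_benchmarks. The file name below is an assumption; check the dataset repository for the exact names.

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "gsm8k.jsonl"}' \
  --rate-type sweep \
  --max-seconds 600 \
  --output-path "Qwen235B-gsm8k.json"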
