Container Service for Kubernetes: kube-scheduler

Last Updated: Sep 18, 2025

kube-scheduler is a control plane component. It schedules pods to nodes in a cluster based on pod scheduling requirements and node resource usage.

Component introduction

Introduction to kube-scheduler

For each pod in the scheduling queue, kube-scheduler determines the valid nodes based on the pod's declared Request values and each node's Allocatable capacity. It then scores the valid nodes and binds the pod to the most suitable one. By default, kube-scheduler spreads pods evenly across nodes based on their Request values. For more information, see kube-scheduler in the official Kubernetes documentation.
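As a minimal illustration of the Request values that the scheduler evaluates against a node's Allocatable capacity, consider the pod below. The pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo                   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        cpu: 500m        # the scheduler only considers nodes whose remaining
        memory: 512Mi    # Allocatable resources can cover these requests
      limits:
        cpu: "1"
        memory: 1Gi
```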

Introduction to Filter and Score plugins

The Kubernetes Scheduling Framework abstracts complex scheduling logic into plugins. This allows for flexible scheduling extensions. Filter plugins filter out nodes that cannot run a specific pod. Score plugins use algorithms to score the filtered nodes. The score indicates how suitable a node is for running the pod.
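ACK manages the scheduler configuration for you, so none of this requires manual setup. The fragment below is only a sketch of how Filter and Score plugins, and Score weights, are expressed in an upstream KubeSchedulerConfiguration; the plugin names are taken from the lists that follow.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    filter:
      enabled:
      - name: NodeNUMAResource         # Filter plugin: rejects nodes that cannot run the pod
    score:
      enabled:
      - name: NodeNUMAResource         # Score plugin: ranks the remaining nodes
        weight: 1                      # default weight from the list below
      - name: loadawarescheduling
        weight: 10
```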

The following sections list, for each kube-scheduler version, the Filter and Score plugins that are enabled by default and their default Score weights.

v1.30.1-aliyun.6.5.4.fcac2bdf

Filter plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Filter plugins for v1.30.1.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Score plugins for v1.30.1.

  • Default ACK plugins and their weights:

    • NodeNUMAResource (default weight: 1)

    • ipawarescheduling (default weight: 1)

    • gpuNUMAJointAllocation (default weight: 1)

    • PreferredNode (default weight: 10000)

    • gpushare (default weight: 20000)

    • gputopology (default weight: 1)

    • numa (default weight: 1)

    • EciScheduling (default weight: 2)

    • NodeAffinity (default weight: 2)

    • elasticresource (default weight: 1000000)

    • resourcepolicy (default weight: 1000000)

    • NodeBEResourceLeastAllocated (default weight: 1)

    • loadawarescheduling (default weight: 10)

v1.28.3-aliyun-6.5.2.7ff57682

Filter plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Filter plugins for v1.28.3.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Score plugins for v1.28.3.

  • Default ACK plugins and their weights:

    • NodeNUMAResource (default weight: 1)

    • ipawarescheduling (default weight: 1)

    • gpuNUMAJointAllocation (default weight: 1)

    • PreferredNode (default weight: 10000)

    • gpushare (default weight: 20000)

    • gputopology (default weight: 1)

    • numa (default weight: 1)

    • EciScheduling (default weight: 2)

    • NodeAffinity (default weight: 2)

    • elasticresource (default weight: 1000000)

    • resourcepolicy (default weight: 1000000)

    • NodeBEResourceLeastAllocated (default weight: 1)

    • loadawarescheduling (default weight: 10)

v1.26.3-aliyun-6.6.1.605b8a4f

Filter plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Filter plugins for v1.26.3.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Score plugins for v1.26.3.

  • Default ACK plugins and their weights:

    • NodeNUMAResource (default weight: 1)

    • ipawarescheduling (default weight: 1)

    • gpuNUMAJointAllocation (default weight: 1)

    • PreferredNode (default weight: 10000)

    • gpushare (default weight: 20000)

    • gputopology (default weight: 1)

    • numa (default weight: 1)

    • EciScheduling (default weight: 2)

    • NodeAffinity (default weight: 2)

    • elasticresource (default weight: 1000000)

    • resourcepolicy (default weight: 1000000)

    • NodeBEResourceLeastAllocated (default weight: 1)

    • loadawarescheduling (default weight: 10)

Plugin features

The following list describes each default ACK plugin and its related documentation.

  • NodeNUMAResource: manages CPU topology-aware scheduling. See Enable CPU topology-aware scheduling.

  • topologymanager: manages NUMA resource allocation on nodes. See Enable NUMA topology-aware scheduling.

  • EciPodTopologySpread: enhances topology spread constraints in virtual node scheduling scenarios. See Enable virtual node scheduling policies for a cluster.

  • ipawarescheduling: schedules pods based on the number of available IP addresses on a node. See Scheduling FAQ.

  • BatchResourceFit: enables and manages the colocation of multiple workload types. See Best practices for colocating multiple workload types.

  • PreferredNode: reserves nodes for node pools that have auto scaling enabled. See Node auto scaling.

  • gpushare: manages shared GPU scheduling. See Shared GPU scheduling.

  • NetworkTopology: manages network topology-aware scheduling. See Topology-aware scheduling.

  • CapacityScheduling: manages capacity scheduling based on elastic quotas. See Use Capacity Scheduling.

  • elasticresource: manages ECI elastic scheduling. See Use ElasticResource for ECI elastic scheduling (no longer maintained).

  • resourcepolicy: manages custom elastic resource priority scheduling. See Custom elastic resource priority scheduling.

  • gputopology: manages GPU topology-aware scheduling. See GPU topology-aware scheduling.

  • ECIBinderV1: binds pods to virtual nodes in ECI elastic scheduling scenarios. See Schedule pods to run on ECI.

  • loadawarescheduling: manages load-aware scheduling. See Use load-aware scheduling.

  • EciScheduling: manages virtual node scheduling. See Enable virtual node scheduling policies for a cluster.
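Several of these plugins build on standard Kubernetes pod fields. For example, EciPodTopologySpread enhances the upstream topology spread constraints when pods run on virtual nodes; the constraint itself is declared with the standard API, as in this illustrative sketch (the pod name, labels, and image are placeholders).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo                           # illustrative name
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # allowed pod-count difference between zones
    topologyKey: topology.kubernetes.io/zone  # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9          # placeholder image
```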

Instructions

The kube-scheduler component is installed by default and is ready to use without configuration. To use the latest features and bug fixes, upgrade the kube-scheduler component to the latest version. Log on to the Container Service for Kubernetes (ACK) console, click the target cluster, and then in the navigation pane on the left, choose Operations > Components to upgrade the component.

Change history

Version 1.34 change history

v1.34.0-apsara.6.11.3.ff6b62d8

September 17, 2025

Supports all previous features in ACK clusters of version 1.34.

Version 1.33 change history

v1.33.0-apsara.6.11.4.77470105

September 15, 2025

  • Bug fixes:

    • Fixed an issue where a pod could not be scheduled if multiple containers in the pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests for ACS computing power were made.

v1.33.0-apsara.6.11.3.ed953a31

September 08, 2025

  • New features:

    • Added support for ElasticQuotaTree to use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • Added support for NetworkTopology to declare a spread distribution using constraints in JobNetworkTopology.

  • Bug fixes:

    • Fixed an issue where the scheduler component might crash when PodTopologySpread was used.

v1.33.0-aliyun.6.11.2.330dcea7

August 19, 2025

  • Improved the scheduling determinism of GOAT. Nodes are no longer considered not ready if they have the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint.

  • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty internal Request were incorrectly marked as unmet.

  • Fixed an issue where the scheduler component might crash during the creation of ACS instances.

  • Fixed an issue where the scheduler reported an error if the resources for an InitContainer were empty. (29d1951)

v1.33.0-aliyun.6.11.1.382cd0a6

July 25, 2025

v1.33.0-aliyun.6.11.0.87e9673b

July 18, 2025

  • Improved the scheduling determinism of GOAT so that concurrent NodeReady state changes during pod scheduling no longer break determinism.

  • Fixed an issue in Gang scheduling where the pod count for a gang was incorrect if the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed issues in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption might occur within the same quota when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue in ResourcePolicy where the MaxPod count was occasionally calculated incorrectly after an elastic trigger.

  • Added a fairness check feature to ElasticQuotaTree. If a quota with unmet resource guarantees has pending pods, new pods are not scheduled to quotas that have already met their resource guarantees. To enable this feature, set the StrictFairness parameter of the plugin. The feature is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • Added support for pods with one of the following three labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true. For these pods, the scheduler runs only the VolumeBinding, VolumeRestrictions, and VolumeZone checks and the virtual node scheduling plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod has no PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing (see the sketch after this list).

  • Added a security check for ResourcePolicy scheduling. A unit is skipped if it might patch pod labels and the labels might affect the MatchLabels matching of a ReplicaSet or StatefulSet.
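As an illustration of the label-based virtual node fast path described above, here is a minimal sketch. Only the label comes from the feature description; the pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: eci-fastpath-demo          # illustrative name
  labels:
    alibabacloud.com/eci: "true"   # opts this pod into the virtual node fast path
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
```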

v1.33.0-aliyun.6.9.4.8b58e6b4

June 10, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly when using ResourcePolicy.

  • Optimized the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an issue in custom elastic resource priority scheduling where the ResourcePolicy pod count was incorrect.

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.33.0-aliyun.6.9.2.09bce458

April 28, 2025

Supports all previous features in ACK clusters of version 1.33.

Version 1.32 change history

v1.32.0-apsara.6.11.4.4a4f4843

September 15, 2025

  • Bug fixes:

    • Fixed an issue where a pod could not be scheduled if multiple containers in the pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests for ACS computing power were made.

v1.32.0-apsara.6.11.3.b651c575

September 12, 2025

  • New features:

    • Added support for ElasticQuotaTree to use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • Added support for NetworkTopology to declare a spread distribution using constraints in JobNetworkTopology.

v1.32.0-aliyun.6.11.2.58302423

August 21, 2025

  • Improved the scheduling determinism of GOAT. Nodes are no longer considered not ready if they have the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint.

  • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty internal Request were incorrectly marked as unmet.

  • Fixed an issue where the scheduler component might crash during the creation of ACS instances.

v1.32.0-aliyun.6.11.1.ab632d8c

July 25, 2025

v1.32.0-aliyun.6.11.0.0350a0e7

July 18, 2025

  • Improved the scheduling determinism of GOAT so that concurrent NodeReady state changes during pod scheduling no longer break determinism.

  • Fixed an issue in Gang scheduling where the pod count for a gang was incorrect if the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed issues in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption might occur within the same quota when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue in ResourcePolicy where the MaxPod count was occasionally calculated incorrectly after an elastic trigger.

  • Added a fairness check feature to ElasticQuotaTree. If a quota with unmet resource guarantees has pending pods, new pods are not scheduled to quotas that have already met their resource guarantees. To enable this feature, set the StrictFairness parameter of the plugin. The feature is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • Added support for pods with one of the following three labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true. For these pods, the scheduler runs only the VolumeBinding, VolumeRestrictions, and VolumeZone checks and the virtual node scheduling plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod has no PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. A unit is skipped if it might patch pod labels and the labels might affect the MatchLabels matching of a ReplicaSet or StatefulSet.

v1.32.0-aliyun.6.9.4.d5a8a355

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly when using ResourcePolicy.

  • Fixed an ElasticQuota preemption anomaly.

v1.32.0-aliyun.6.9.3.515ac311

May 14, 2025

  • Optimized the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an issue in custom elastic resource priority scheduling where the ResourcePolicy pod count was incorrect.

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.32.0-aliyun.6.9.2.09bce458

April 16, 2025

  • Fixed an issue where the ElasticQuota preemption feature was abnormal.

  • Added support for scheduling pods to ACS GPU-HPN nodes in ACK clusters.

v1.32.0-aliyun.6.8.6.bd13955d

April 02, 2025

  • Fixed an issue in ACK Serverless clusters where disks of the WaitForFirstConsumer type were not created by the CSI Plugin.

v1.32.0-aliyun.6.9.0.a1c7461b

February 28, 2025

  • Added support for scheduling based on the number of available IP addresses on a node.

  • Added a plugin to support resource checks before jobs are dequeued from Kube Queue.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.32.0-aliyun.6.8.5.28a2aed7

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue in custom elastic resource priority scheduling where the Max value became invalid after declaring PodLabels.

v1.32.0-aliyun.6.8.4.2b585931

January 17, 2025

Supports all previous features in ACK clusters of version 1.32.

Version 1.31 change history

v1.31.0-apsara.6.11.4.69d7e1fa

September 15, 2025

  • Bug fixes:

    • Fixed an issue where a pod could not be scheduled if multiple containers in the pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests for ACS computing power were made.

v1.31.0-apsara.6.11.3.9b41ad4a

September 12, 2025

  • New features:

    • Added support for ElasticQuotaTree to use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • Added support for NetworkTopology to declare a spread distribution using constraints in JobNetworkTopology.

    • Improved the scheduling determinism of GOAT. Nodes are no longer considered not ready if they have the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint.

  • Bug fixes:

    • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty internal Request were incorrectly marked as unmet.

    • Fixed an issue where the scheduler component might crash during the creation of ACS instances.

v1.31.0-aliyun.6.11.1.c9ed2f40

July 25, 2025

v1.31.0-aliyun.6.11.0.ea1f0f94

July 18, 2025

  • Improved the scheduling determinism of GOAT so that concurrent NodeReady state changes during pod scheduling no longer break determinism.

  • Fixed an issue in Gang scheduling where the pod count for a gang was incorrect if the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed issues in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption might occur within the same quota when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue in ResourcePolicy where the MaxPod count was occasionally calculated incorrectly after an elastic trigger.

  • Added a fairness check feature to ElasticQuotaTree. If a quota with unmet resource guarantees has pending pods, new pods are not scheduled to quotas that have already met their resource guarantees. To enable this feature, set the StrictFairness parameter of the plugin. The feature is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • Added support for pods with one of the following three labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true. For these pods, the scheduler runs only the VolumeBinding, VolumeRestrictions, and VolumeZone checks and the virtual node scheduling plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod has no PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. A unit is skipped if it might patch pod labels and the labels might affect the MatchLabels matching of a ReplicaSet or StatefulSet.

v1.31.0-aliyun.6.9.4.c8e540e8

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly when using ResourcePolicy.

  • Fixed an ElasticQuota preemption anomaly.

v1.31.0-aliyun.6.9.3.051bb0e8

May 14, 2025

  • Optimized the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an issue in custom elastic resource priority scheduling where the ResourcePolicy pod count was incorrect.

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.31.0-aliyun.6.8.6.520f223d

April 02, 2025

  • Fixed an issue in ACK Serverless clusters where disks of the WaitForFirstConsumer type were not created by the CSI Plugin.

v1.31.0-aliyun.6.9.0.8287816e

February 28, 2025

  • Added support for scheduling based on the number of available IP addresses on a node.

  • Added a plugin to support resource checks before jobs are dequeued from Kube Queue.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.31.0-aliyun.6.8.5.2c6ea085

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue in custom elastic resource priority scheduling where the Max value became invalid after declaring PodLabels.

v1.31.0-aliyun.6.8.4.8f585f26

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak issue when using PVCs in ACK Serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage might be incorrect in ACS resource normalization scenarios.

v1.31.0-aliyun.6.8.3.eeb86afc

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type units.

v1.31.0-aliyun.6.8.2.eeb86afc

December 05, 2024

Custom elastic resource priority scheduling: Added support for defining PodAnnotations in a unit.

v1.31.0-aliyun.6.8.1.116b8e1f

December 02, 2024

  • Optimized the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Added a feature where load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.31.0-aliyun.6.7.1.1943173f

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger auto scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly when the ECS instance type changes.

v1.31.0-aliyun.6.7.0.740ba623

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.31.0-aliyun.6.6.1.5bd14ab0

October 22, 2024

  • Fixed an occasional Invalid Score issue with PodTopologySpread.

  • Improved the event messages for Coscheduling. The number of Coscheduling failures is now included in events.

  • Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.31.0-aliyun.6.6.0.ba473715

September 13, 2024

Supports all previous features in ACK clusters of version 1.31.

Version 1.30 change history

v1.30.3-apsara.6.11.2.463d59c9

September 15, 2025

  • Bug fixes:

    • Fixed an issue where a pod could not be scheduled if multiple containers in the pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests for ACS computing power were made.

v1.30.3-aliyun.6.11.1.c005a0b0

July 25, 2025

v1.30.3-aliyun.6.11.0.84cdcafb

July 18, 2025

  • Improved the scheduling determinism of GOAT so that concurrent NodeReady state changes during pod scheduling no longer break determinism.

  • Fixed an issue in Gang scheduling where the pod count for a gang was incorrect if the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed issues in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption might occur within the same quota when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue in ResourcePolicy where the MaxPod count was occasionally calculated incorrectly after an elastic trigger.

  • Added a fairness check feature to ElasticQuotaTree. If a quota with unmet resource guarantees has pending pods, new pods are not scheduled to quotas that have already met their resource guarantees. To enable this feature, set the StrictFairness parameter of the plugin. The feature is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • Added support for pods with one of the following three labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true. For these pods, the scheduler runs only the VolumeBinding, VolumeRestrictions, and VolumeZone checks and the virtual node scheduling plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod has no PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. A unit is skipped if it might patch pod labels and the labels might affect the MatchLabels matching of a ReplicaSet or StatefulSet.

v1.30.3-aliyun.6.9.4.818b6506

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly when using ResourcePolicy.

  • Fixed an ElasticQuota preemption anomaly.

v1.30.3-aliyun.6.9.3.ce7e2faf

May 14, 2025

  • Optimized the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an issue in custom elastic resource priority scheduling where the ResourcePolicy pod count was incorrect.

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.30.3-aliyun.6.8.6.40d5fdf4

April 02, 2025

  • Fixed an issue in ACK Serverless clusters where disks of the WaitForFirstConsumer type were not created by the CSI Plugin.

v1.30.3-aliyun.6.9.0.f08e56a7

February 28, 2025

  • Added support for scheduling based on the number of available IP addresses on a node.

  • Added a plugin to support resource checks before jobs are dequeued from Kube Queue.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.30.3-aliyun.6.8.5.af20249c

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue in custom elastic resource priority scheduling where the Max value became invalid after declaring PodLabels.

v1.30.3-aliyun.6.8.4.946f90e8

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak issue when using PVCs in ACK Serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage might be incorrect in ACS resource normalization scenarios.

v1.30.3-aliyun.6.8.3.697ce9b5

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type units.

v1.30.3-aliyun.6.8.2.a5fa5dbd

December 05, 2024

Custom elastic resource priority scheduling: Added support for defining PodAnnotations in a unit.

v1.30.3-aliyun.6.8.1.6dc0fd75

December 02, 2024

  • Optimized the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Added a feature where load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.30.3-aliyun.6.7.1.d992180a

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger auto scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly when the ECS instance type changes.

v1.30.3-aliyun.6.7.0.da474ec5

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.30.3-aliyun.6.6.4.b8940a30

October 22, 2024

  • Fixed an occasional Invalid Score issue with PodTopologySpread.

v1.30.3-aliyun.6.6.3.994ade8a

October 18, 2024

  • Improved the event messages for Coscheduling. The number of Coscheduling failures is now included in events.

  • Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.30.3-aliyun.6.6.2.0be67202

September 23, 2024

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.30.3-aliyun.6.6.1.d98352c6

September 11, 2024

  • Added preemption support for network topology-aware scheduling.

  • SlurmOperator

    • Added support for hybrid scheduling in Kubernetes & Slurm clusters.

  • Coscheduling

    • Added support for the latest community version of CRDs.

v1.30.3-aliyun.6.5.6.fe7bc1d5

August 20, 2024

Fixed a PodAffinity/PodAntiAffinity scheduling anomaly introduced in v1.30.1-aliyun.6.5.1.5dad3be8.

v1.30.3-aliyun.6.5.5.8b10ee7c

August 01, 2024

  • Rebased to community version v1.30.3.

v1.30.1-aliyun.6.5.5.fcac2bdf

August 01, 2024

  • CapacityScheduling

    • Fixed a potential quota calculation error when using Coscheduling and CapacityScheduling at the same time.

  • GPUShare

    • Fixed an error in calculating the remaining resources of a computing power scheduling node.

  • Custom elastic resource priority scheduling

    • Optimized the node scale-out behavior when using ResourcePolicy with ClusterAutoscaler. Nodes are no longer provisioned when the pods in all units have reached their Max value.

v1.30.1-aliyun.6.5.4.fcac2bdf

July 22, 2024

  • Coscheduling

    • Fixed a quota statistics error when using ECI.

  • Fixed an intermittent "xxx is in cache, so can't be assumed" issue.

v1.30.1-aliyun.6.5.3.9adaeb31

July 10, 2024

Fixed an issue introduced in v1.30.1-aliyun.6.5.1.5dad3be8 where pods remained in the Pending state for a long time.

v1.30.1-aliyun.6.5.1.5dad3be8

June 27, 2024

  • Coscheduling

    • Optimized the scheduling speed of Coscheduling.

  • Added support for sequential pod scheduling.

  • Added support for declaring equivalence classes to improve scheduling performance.

  • Used PreEnqueue to optimize the performance of existing scheduler plugins.

v1.30.1-aliyun.6.4.7.6643d15f

May 31, 2024

Supports all previous features in ACK clusters of version 1.30.

Version 1.28 change history

v1.28.12-apsara-6.11.4.a48c5b6c

September 15, 2025

  • Bug fixes:

    • Fixed an issue where a pod could not be scheduled if multiple containers in the pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests for ACS computing power were made.

v1.28.12-apsara-6.11.3.1a06b13e

September 09, 2025

  • New features:

    • Added support for ElasticQuotaTree to use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

v1.28.12-aliyun-6.11.1.f23c663c

July 25, 2025

v1.28.12-aliyun-6.11.0.4003ef92

July 18, 2025

  • Improved the scheduling determinism of GOAT so that concurrent NodeReady state changes during pod scheduling no longer break determinism.

  • Fixed an issue in Gang scheduling where the pod count for a gang was incorrect if the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed issues in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption might occur within the same quota when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue in ResourcePolicy where the MaxPod count was occasionally calculated incorrectly after an elastic trigger.

  • Added a fairness check feature to ElasticQuotaTree. If a quota with unmet resource guarantees has pending pods, new pods are not scheduled to quotas that have already met their resource guarantees. To enable this feature, set the StrictFairness parameter of the plugin. The feature is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • Added support for pods with one of the following three labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true. For these pods, the scheduler runs only the VolumeBinding, VolumeRestrictions, and VolumeZone checks and the virtual node scheduling plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod has no PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. A unit is skipped if it might patch pod labels and the labels might affect the MatchLabels matching of a ReplicaSet or StatefulSet.

v1.28.12-aliyun-6.9.4.206fc5f8

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly when using ResourcePolicy.

  • Fixed an ElasticQuota preemption anomaly.

v1.28.12-aliyun-6.9.3.cd73f3fe

May 14, 2025

  • Optimized the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an issue in custom elastic resource priority scheduling where the ResourcePolicy pod count was incorrect.

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.28.12-aliyun-6.8.6.5f05e0ac

April 02, 2025

  • Fixed an issue in ACK Serverless clusters where disks of the WaitForFirstConsumer type were not created by the CSI Plugin.

v1.28.12-aliyun-6.9.0.6a13fa65

February 28, 2025

  • Added support for scheduling based on the number of available IP addresses on a node.

  • Added a plugin to support resource checks before jobs are dequeued from Kube Queue.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.28.12-aliyun-6.8.5.b6aef0d1

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue in custom elastic resource priority scheduling where the Max value became invalid after declaring PodLabels.

v1.28.12-aliyun-6.8.4.b27c0009

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak issue when using PVCs in ACK Serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage might be incorrect in ACS resource normalization scenarios.

v1.28.12-aliyun-6.8.3.70c756e1

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type units.

v1.28.12-aliyun-6.8.2.9a307479

December 05, 2024

Custom elastic resource priority scheduling: Added support for defining PodAnnotations in a unit.

v1.28.12-aliyun-6.8.1.db6cdeb8

December 02, 2024

  • Optimized the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Added a feature where load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.28.12-aliyun-6.7.1.44345748

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger auto scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly when the ECS instance type changes.

v1.28.12-aliyun-6.7.0.b97fca02

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.28.12-aliyun-6.6.4.e535a698

October 22, 2024

  • Fixed an occasional Invalid Score issue with PodTopologySpread.

v1.28.12-aliyun-6.6.3.188f750b

October 11, 2024

  • Improved the event messages for Coscheduling. The number of Coscheduling failures is now included in events.

  • Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.28.12-aliyun-6.6.2.054ec1f5

September 23, 2024

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.28.12-aliyun-6.6.1.348b251d

September 11, 2024

  • Added preemption support for network topology-aware scheduling.

  • SlurmOperator

    • Added support for hybrid scheduling in Kubernetes & Slurm clusters.

v1.28.12-aliyun-6.5.4.79e08301

August 20, 2024

Fixed a PodAffinity/PodAntiAffinity scheduling anomaly introduced in v1.28.3-aliyun-6.5.1.364d020b.

v1.28.12-aliyun-6.5.3.aefde017

August 01, 2024

  • Rebased to community version v1.28.12.

v1.28.3-aliyun-6.5.3.79e08301

August 01, 2024

  • CapacityScheduling

    • Fixed a potential quota calculation error when using Coscheduling and CapacityScheduling at the same time.

  • GPUShare

    • Fixed an error in calculating the remaining resources of a computing power scheduling node.

  • Custom elastic resource priority scheduling

    • Optimized the node scale-out behavior when using ResourcePolicy with ClusterAutoscaler. Nodes are no longer provisioned when the pods in all units have reached their Max value.

v1.28.3-aliyun-6.5.2.7ff57682

July 22, 2024

  • Coscheduling

    • Fixed a quota statistics error when using ECI.

  • Fixed an intermittent "xxx is in cache, so can't be assumed" issue.

  • Fixed an issue introduced in v1.28.3-aliyun-6.5.1.364d020b where pods remained in the Pending state for a long time.

v1.28.3-aliyun-6.5.1.364d020b

June 27, 2024

  • Coscheduling

    • Optimized the scheduling speed of Coscheduling.

  • Added support for sequential pod scheduling.

  • Added support for declaring equivalence classes to improve scheduling performance.

  • Used PreEnqueue to optimize the performance of existing scheduler plugins.

v1.28.3-aliyun-6.4.7.0f47500a

May 24, 2024

  • Network topology-aware scheduling

    • Fixed an issue where network topology-aware scheduling occasionally failed.

v1.28.3-aliyun-6.4.6.f32dc398

May 16, 2024

  • Shared GPU scheduling

    • Fixed a GPU scheduling anomaly that occurred after changing the ack.node.gpu.schedule label of a node from egpu to default in a Lingjun cluster.

  • CapacityScheduling

    • Fixed an occasional "running AddPod on PreFilter plugin" error message.

  • Elastic scheduling

    • Added a feature to generate a "wait for eci provisioning" event when using alibabacloud.com/burst-resource to create an ECI.

v1.28.3-aliyun-6.4.5.a8b4a599

May 09, 2024

v1.28.3-aliyun-6.4.3.f57771d7

March 18, 2024

  • Shared GPU scheduling

    • Added support for submitting a ConfigMap to isolate specific GPU cards.

  • Custom elastic resource priority scheduling

    • Added support for the elastic resource type.

v1.28.3-aliyun-6.4.2.25bc61fb

March 01, 2024

Disabled the SchedulerQueueingHints feature by default. For more information, see Pull Request #122291.

v1.28.3-aliyun-6.4.1.c7db7450

February 21, 2024

  • Added support for NUMA joint allocation.

  • Custom elastic resource priority scheduling

    • Added support for wait attempts between units.

  • Fixed an issue in IP-aware scheduling where the number of schedulable pods was reduced due to an incorrect count of remaining IP addresses.

v1.28.3-aliyun-6.3.1ab2185e

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when using custom elastic resource priority scheduling.

  • CPU topology-aware scheduling

    • Fixed an issue where the same CPU core could be repeatedly allocated to a single pod, causing pod startup failures on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when using the alibabacloud.com/burst-resource label to specify a policy, even if the label's value was not `eci` or `eci_only`.

v1.28.3-aliyun-6.2.84d57ad9

December 21, 2023

Added support for MatchLabelKeys in custom elastic resource priority scheduling to automatically group different versions during application releases.

v1.28.3-aliyun-6.1.ac950aa0

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify quotas. You can use quota.scheduling.alibabacloud.com/name on a pod to specify its quota (see the sketch after this list).

    • Added a queue association feature. It supports counting the resources of only the pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption will not cause the pod usage of a preempted quota to fall below its Min value, nor will it cause the pod usage of a preempting quota to exceed its Min value.

  • Custom elastic resource priority

    • Added support for updating the labels of units and nodes in a ResourcePolicy. After an update, the pod's Deletion-Cost is synchronized.

    • Added the IgnoreTerminatingPod option. This option ignores terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This option ignores pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This option supports preemption attempts between units.

  • GPUShare

    • Optimized the scheduling speed of GPUShare. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.
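A minimal sketch of the quota declaration described above, assuming the quota.scheduling.alibabacloud.com/name key is attached as a pod label; the pod name, image, and quota name team-a are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo                                   # illustrative name
  labels:
    quota.scheduling.alibabacloud.com/name: team-a   # hypothetical quota in the ElasticQuotaTree
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9                 # placeholder image
```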

v1.28.3-aliyun-5.8-89c55520

October 28, 2023

Supports all previous features in ACK clusters of version 1.28.

Version 1.26 change history

v1.26.3-aliyun-6.8.7.fec3f2bc

May 14, 2025

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.26.3-aliyun-6.9.0.293e663c

February 28, 2025

  • Added support for scheduling based on the number of available IP addresses on a node.

  • Added a plugin to support resource checks before jobs are dequeued from Kube Queue.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.26.3-aliyun-6.8.5.7838feba

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue in custom elastic resource priority scheduling where the Max value became invalid after declaring PodLabels.

v1.26.3-aliyun-6.8.4.4b180111

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak issue when using PVCs in ACK Serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage might be incorrect in ACS resource normalization scenarios.

v1.26.3-aliyun-6.8.3.95c73e0b

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type units.

v1.26.3-aliyun-6.8.2.9c9fa19f

December 05, 2024

Custom elastic resource priority scheduling: Added support for defining PodAnnotations in a unit.

v1.26.3-aliyun-6.8.1.a12db674

December 02, 2024

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Added a feature where load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.26.3-aliyun-6.7.1.d466c692

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger auto scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly when the ECS instance type changes.

v1.26.3-aliyun-6.7.0.9c293fb7

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.26.3-aliyun-6.6.4.7a8f3f9d

October 22, 2024

Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.26.3-aliyun-6.6.3.67f250fe

September 04, 2024

  • SlurmOperator

    • Optimized the scheduling performance of the plugin.

v1.26.3-aliyun-6.6.2.9ea0a6f5

August 30, 2024

  • InterPodAffinity

    • Fixed an issue where removing a taint from a new node did not trigger pod rescheduling.

v1.26.3-aliyun-6.6.1.605b8a4f

July 31, 2024

  • SlurmOperator

    • Added support for hybrid scheduling in Kubernetes & Slurm clusters.

  • Custom elastic resource priority scheduling

    • Optimized the feature to avoid unnecessary node provisioning when used with node pools that have auto scaling enabled.

v1.26.3-aliyun-6.4.7.2a77d106

June 27, 2024

  • Coscheduling

    • Optimized the scheduling speed of Coscheduling.

v1.26.3-aliyun-6.4.6.78cacfb4

May 16, 2024

  • CapacityScheduling

    • Fixed an occasional "running AddPod on PreFilter plugin" error message.

  • Elastic scheduling

    • Added a feature to generate a "wait for eci provisioning" event when using alibabacloud.com/burst-resource to create an ECI.

v1.26.3-aliyun-6.4.5.7f36e9b3

May 09, 2024

v1.26.3-aliyun-6.4.3.e7de0a1e

March 18, 2024

  • Shared GPU scheduling

    • Added support for submitting a ConfigMap to isolate specific GPU cards.

  • Custom elastic resource priority scheduling

    • Added support for the elastic resource type.

v1.26.3-aliyun-6.4.1.d24bc3c3

February 21, 2024

  • Optimized the NodeResourceFit score for virtual nodes. Virtual nodes now always receive a score of 0 from NodeResourceFit, so Preferred-type NodeAffinity correctly prioritizes scheduling to ECS nodes.

  • Added support for NUMA joint allocation.

  • Custom elastic resource priority scheduling

    • Added support for wait attempts between units.

  • Fixed an issue in IP-aware scheduling where the number of schedulable pods was reduced due to an incorrect count of remaining IP addresses.

v1.26.3-aliyun-6.3.33fdc082

January 10, 2024

  • Custom elastic resource priority

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when using custom elastic resource priority scheduling.

  • CPU topology-aware scheduling

    • Fixed an issue where the same CPU core could be repeatedly allocated to a single pod, causing pod startup failures on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when using the alibabacloud.com/burst-resource label to specify a policy, even if the label's value was not `eci` or `eci_only`.

  • CapacityScheduling

    • Automatically enabled the job preemption feature in ACK Lingjun clusters.

v1.26.3-aliyun-6.2.d9c15270

December 21, 2023

Added support for MatchLabelKeys in custom elastic resource priority scheduling to automatically group different versions during application releases.

v1.26.3-aliyun-6.1.a40b0eef

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify quotas. You can use quota.scheduling.alibabacloud.com/name on a pod to specify its quota.

    • Added a queue association feature. It supports counting the resources of only the pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption will not cause the pod usage of a preempted quota to fall below its Min value, nor will it cause the pod usage of a preempting quota to exceed its Min value.

  • Custom elastic resource priority

    • Added an update feature. It supports updating the units of a ResourcePolicy and the labels of a node. After an update, the pod's Deletion-Cost is synchronized.

    • Added the IgnoreTerminatingPod option. This option ignores terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This option ignores pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This option supports preemption attempts between units.

  • GPUShare

    • Optimized the scheduling speed of GPUShare. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.

v1.26.3-aliyun-5.9-cd4f2cc3

November 16, 2023

  • Improved the display of reasons for scheduling failures caused by unsupported disk types.

v1.26.3-aliyun-5.8-a1482f93

October 16, 2023

  • Added support for Windows node scheduling.

  • Optimized the scheduling speed of Coscheduling when multiple tasks are scheduled at the same time. This reduces task blockage.

v1.26.3-aliyun-5.7-2f57d3ff

September 20, 2023

  • Fixed an occasional Admit failure when scheduling GPUShare pods.

  • Added a plugin to the scheduler that is aware of the remaining IP addresses on a node. Pods are no longer scheduled to a node if it has no available IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. It supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler updates the Usage and Request information of the ElasticQuotaTree once per second.

v1.26.3-aliyun-5.5-8b98a1cc

July 05, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Improved the user experience when using Coscheduling with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out when some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.26.3-aliyun-5.4-21b4da4c

July 03, 2023

  • Fixed an issue where the Max property of ResourcePolicy was invalid.

  • Reduced the impact of a large number of pending pods on scheduler performance. The scheduler's throughput is now similar to when there are no pending pods.

v1.26.3-aliyun-5.1-58a821bf

May 26, 2023

Added support for updating fields such as min-available and Matchpolicy for a PodGroup.

v1.26.3-aliyun-5.0-7b1ccc9d

May 22, 2023

  • The custom elastic resource priority feature now supports declaring the maximum number of replicas in the Unit field.

  • Added support for GPU topology-aware scheduling.

v1.26.3-aliyun-4.1-a520c096

April 27, 2023

Nodes are no longer provisioned by the autoscaler when the ElasticQuota limit is exceeded or the number of gang pods is insufficient.

Version 1.24 change history

v1.24.6-aliyun-6.4.7.e7ffcda5

May 06, 2025

  • Fixed an issue where the Max count in ResourcePolicy was occasionally incorrect.

  • Fixed a potential disk leak issue when using WaitForFirstConsumer-type disks with serverless computing power.

v1.24.6-aliyun-6.5.0.37a567db (available by whitelist)

November 04, 2024

Custom elastic resource priority scheduling: Added support for ACS-type units.

v1.24.6-aliyun-6.4.6.c4d551a0

May 16, 2024

  • CapacityScheduling

    • Fixed an occasional "running AddPod on PreFilter plugin" error message.

v1.24.6-aliyun-6.4.5.aab44b4a

May 09, 2024

v1.24.6-aliyun-6.4.3.742bd819

March 18, 2024

  • Shared GPU scheduling

    • Added support for submitting a ConfigMap to isolate specific GPU cards.

  • Custom elastic resource priority scheduling

    • Added support for the elastic resource type.

v1.24.6-aliyun-6.4.1.14ebc575

February 21, 2024

  • Optimized the NodeResourceFit score for virtual nodes. Virtual nodes now always receive a score of 0 from NodeResourceFit, so Preferred-type NodeAffinity correctly prioritizes scheduling to ECS nodes.

  • Added support for NUMA joint allocation.

  • Custom elastic resource priority scheduling

    • Added support for wait attempts between units.

  • Fixed an issue in IP-aware scheduling where the number of schedulable pods was reduced due to an incorrect count of remaining IP addresses.

v1.24.6-aliyun-6.3.548a9e59

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when using custom elastic resource priority scheduling.

  • CPU topology-aware scheduling

    • Fixed an issue where the same CPU core could be repeatedly allocated to a single pod, causing pod startup failures on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when using the alibabacloud.com/burst-resource label to specify a policy, even if the label's value was not `eci` or `eci_only`.

  • CapacityScheduling

    • Automatically enabled the job preemption feature in ACK Lingjun clusters.

v1.24.6-aliyun-6.2.0196baec

December 21, 2023

Added support for MatchLabelKeys in custom elastic resource priority scheduling to automatically group different versions during application releases.

v1.24.6-aliyun-6.1.1900da95

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify quotas. You can use quota.scheduling.alibabacloud.com/name on a pod to specify its quota.

    • Added a queue association feature. It supports counting the resources of only the pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption will not cause the pod usage of a preempted quota to fall below its Min value, nor will it cause the pod usage of a preempting quota to exceed its Min value.

  • Custom elastic resource priority

    • Added an update feature. It supports updating the units of a ResourcePolicy and the labels of a node. After an update, the pod's Deletion-Cost is synchronized.

    • Added the IgnoreTerminatingPod option. This option ignores terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This option ignores pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This option supports preemption attempts between units.

  • GPUShare

    • Optimized the scheduling speed of GPUShare. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.
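
A minimal sketch of the quota specification feature above, assuming the quota name is attached as a pod label and that a quota named team-a (hypothetical) exists in the cluster's ElasticQuotaTree.

    apiVersion: v1
    kind: Pod
    metadata:
      name: quota-demo                                 # hypothetical
      labels:
        # Bind the pod to the "team-a" quota (hypothetical quota name)
        # instead of deriving the quota from the pod's namespace:
        quota.scheduling.alibabacloud.com/name: team-a
    spec:
      containers:
      - name: app
        image: nginx                                   # hypothetical
        resources:
          requests:
            cpu: "2"
            memory: 4Gi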

v1.24.6-aliyun-5.9-e777ab5b

November 16, 2023

  • Improved the display of reasons for scheduling failures caused by unsupported disk types.

v1.24.6-aliyun-5.8-49fd8652

October 16, 2023

  • Added support for Windows node scheduling.

  • Optimized the scheduling speed of Coscheduling when multiple tasks are scheduled at the same time. This reduces task blockage.

v1.24.6-aliyun-5.7-62c7302c

September 20, 2023

  • Fixed an occasional Admit failure when scheduling GPUShare pods.

v1.24.6-aliyun-5.6-2bb99440

August 31, 2023

  • Added a plugin to the scheduler that is aware of the remaining IP addresses on a node. Pods are no longer scheduled to a node if it has no available IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. It supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of the ElasticQuotaTree once per second.

v1.24.6-aliyun-5.5-5e8aac79

July 05, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Improved the user experience when using Coscheduling with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out when some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.24.6-aliyun-5.4-d81e785e

July 03, 2023

  • Fixed an issue where the Max property of ResourcePolicy did not take effect.

  • Reduced the impact of large numbers of pending pods on scheduler performance. Scheduler throughput is now close to that with no pending pods.

v1.24.6-aliyun-5.1-95d8a601

May 26, 2023

Added support for updating fields such as min-available and MatchPolicy for Coscheduling.
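
A minimal sketch of a gang declared through the open source Coscheduling pod labels, which is where the min-available field mentioned above is set; the Job name, image, and replica counts are hypothetical.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: gang-demo                                  # hypothetical
    spec:
      parallelism: 4
      completions: 4
      template:
        metadata:
          labels:
            # Gang declaration; min-available is one of the fields this
            # release made updatable in place:
            pod-group.scheduling.sigs.k8s.io/name: gang-demo
            pod-group.scheduling.sigs.k8s.io/min-available: "4"
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox                             # hypothetical
            command: ["sleep", "60"]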

v1.24.6-aliyun-5.0-66224258

May 22, 2023

  • The custom elastic resource priority feature now supports declaring the maximum number of replicas in the Unit field.

  • Added support for GPU topology-aware scheduling.

v1.24.6-aliyun-4.1-18d8d243

March 31, 2023

ElasticResource now supports scheduling pods to Arm VK nodes.

v1.24.6-4.0-330eb8b4-aliyun

March 01, 2023

  • GPUShare:

    • Fixed a scheduler status error that occurred when downgrading a GPU node.

    • Fixed an issue where GPU nodes could not allocate the full amount of GPU memory.

    • Added support for preempting GPU pods.

  • Coscheduling:

    • Added support for declaring gangs using PodGroup and Koordinator APIs.

    • Added support for controlling the retry policy of a gang using MatchPolicy.

    • Added support for Gang Group.

    • Gang names must comply with DNS subdomain rules.

  • Custom parameters: Added support for Loadaware-related configuration parameters.

v1.24.6-3.2-4f45222b-aliyun

January 13, 2023

Fixed an issue where inaccurate GPUShare memory calculation prevented pods from using GPU memory properly.

v1.24.6-ack-3.1

November 14, 2022

  • Enabled the score feature for shared GPU scheduling by default (this feature was disabled by default in earlier versions).

  • Added support for load-aware scheduling.

v1.24.6-ack-3.0

September 27, 2022

Added support for Capacity Scheduling.

v1.24.3-ack-2.0

September 21, 2022

  • Added support for shared GPU scheduling.

  • Added support for Coscheduling.

  • Added support for ECI elastic scheduling.

  • Added support for intelligent CPU scheduling.

Version 1.22 change history

Version number

Change time

Changes

v1.22.15-aliyun-6.4.5.e54fd757

May 06, 2025

  • Fixed an issue where the Max count in ResourcePolicy was occasionally incorrect.

  • Fixed a potential disk leak when using WaitForFirstConsumer disks with serverless compute capacity.

v1.22.15-aliyun-6.4.4.7fc564f8

May 16, 2024

  • CapacityScheduling

    • Fixed an occasional "running AddPod on PreFilter plugin" error.

v1.22.15-aliyun-6.4.3.e858447b

April 22, 2024

  • Custom elastic resource priority scheduling

    • Fixed an occasional status anomaly when deleting a ResourcePolicy.

v1.22.15-aliyun-6.4.2.4e00a021

March 18, 2024

  • CapacityScheduling

    • Fixed an occasional preemption failure in ACK Lingjun clusters.

  • Added support for manually blacklisting specific GPU cards in a cluster by using a ConfigMap.

v1.22.15-aliyun-6.4.1.1205db85

February 29, 2024

  • Custom elastic resource priority scheduling

    • Fixed an occasional concurrency conflict issue.

v1.22.15-aliyun-6.4.0.145bb899

February 28, 2024

  • CapacityScheduling

    • Fixed a quota statistics error introduced by the quota specification feature.

v1.22.15-aliyun-6.3.a669ec6f

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when using custom elastic resource priority scheduling.

    • Added support for MatchLabelKeys.

  • CPU topology-aware scheduling

    • Fixed an issue where the same CPU core might be repeatedly allocated to a single pod, causing pod startup failures on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when the alibabacloud.com/burst-resource label was set to a value other than `eci` or `eci_only`.

  • CapacityScheduling

    • Automatically enabled the job preemption feature in ACK Lingjun clusters.

v1.22.15-aliyun-6.1.e5bf8b06

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify quotas. You can use quota.scheduling.alibabacloud.com/name on a pod to specify its quota.

    • Added a queue association feature. You can configure a quota to count the resources of only the pods managed by Kube Queue.

    • Optimized the preemption logic. Preemption no longer reduces a preempted quota's usage below its Min value, and no longer raises the preempting quota's usage above its Min value.

  • Custom elastic resource priority

    • Added an update feature. You can now update the units of a ResourcePolicy and the labels of a node; after an update, pod Deletion-Cost values are synchronized.

    • Added the IgnoreTerminatingPod option. This option ignores terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This option ignores pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This option supports preemption attempts between units.

  • GPUShare

    • Optimized the scheduling speed of GPUShare. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.

v1.22.15-aliyun-5.9-04a5e6eb

November 16, 2023

  • Improved the display of reasons for scheduling failures caused by unsupported disk types.

v1.22.15-aliyun-5.8-29a640ae

October 16, 2023

  • Added support for Windows node scheduling.

  • Optimized the scheduling speed of Coscheduling when multiple tasks are scheduled at the same time. This reduces task blockage.

v1.22.15-aliyun-5.7-bfcffe21

September 20, 2023

  • Fixed an occasional Admit failure when scheduling GPUShare pods.

v1.22.15-aliyun-5.6-6682b487

August 14, 2023

  • Added a plugin to the scheduler that is aware of the remaining IP addresses on a node. Pods are no longer scheduled to a node if it has no available IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. It supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of the ElasticQuotaTree once per second.

v1.22.15-aliyun-5.5-82f32f68

July 05, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Improved the user experience when using PodGroup with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out when some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.22.15-aliyun-5.4-3b914a05

July 03, 2023

  • Fixed an issue where the Max property of ResourcePolicy did not take effect.

  • Reduced the impact of large numbers of pending pods on scheduler performance. Scheduler throughput is now close to that with no pending pods.

v1.22.15-aliyun-5.1-8a479926

May 26, 2023

Added support for updating fields such as min-available and MatchPolicy for a PodGroup.

v1.22.15-aliyun-5.0-d1ab67d9

May 22, 2023

  • The custom elastic resource priority feature now supports declaring the maximum number of replicas in the Unit field.

  • Added support for GPU topology-aware scheduling.

v1.22.15-aliyun-4.1-aec17f35

March 31, 2023

ElasticResource now supports scheduling pods to Arm VK nodes.

v1.22.15-aliyun-4.0-384ca5d5

March 03, 2023

  • GPUShare:

    • Fixed a scheduler status error that occurred when downgrading a GPU node.

    • Fixed an issue where GPU nodes could not allocate the full amount of GPU memory.

    • Added support for preempting GPU pods.

  • Coscheduling:

    • Added support for declaring gangs using PodGroup and Koordinator APIs.

    • Added support for controlling the retry policy of a gang using MatchPolicy.

    • Added support for Gang Group.

    • Gang names must comply with DNS subdomain rules.

  • Custom parameters: Added support for Loadaware-related configuration parameters.

v1.22.15-2.1-a0512525-aliyun

January 10, 2023

Fixed an issue where inaccurate GPUShare memory calculation prevented pods from using GPU memory properly.

v1.22.15-ack-2.0

November 30, 2022

  • Added support for custom scheduler parameters.

  • Added support for load-aware scheduling.

  • Added support for elastic scheduling based on node pool priority.

  • Added support for shared GPU computing power scheduling.

v1.22.3-ack-1.1

February 27, 2022

Fixed an issue where shared GPU scheduling failed when the cluster had only one node.

v1.22.3-ack-1.0

January 04, 2022

  • Added support for intelligent CPU scheduling.

  • Added support for Coscheduling.

  • Added support for Capacity Scheduling.

  • Added support for ECI elastic scheduling.

  • Added support for shared GPU scheduling.

Version 1.20 change history

Version number

Change time

Changes

v1.20.11-aliyun-10.6-f95f7336

September 22, 2023

  • Fixed an occasional quota usage statistics error in ElasticQuotaTree.

v1.20.11-aliyun-10.3-416caa03

May 26, 2023

  • Fixed an occasional cache error in GPUShare on earlier Kubernetes versions.

v1.20.11-aliyun-10.2-f4a371d3

April 27, 2023

  • ElasticResource now supports scheduling pods to Arm VK nodes.

  • Fixed a scheduling failure in load-aware scheduling caused by CPU usage exceeding the requested amount.

v1.20.11-aliyun-10.0-ae867721

April 03, 2023

Added support for MatchPolicy in Coscheduling.

v1.20.11-aliyun-9.2-a8f8c908

March 08, 2023

  • CapacityScheduling: Fixed a scheduler status error caused by quotas with the same name.

  • Added support for disk scheduling.

  • Shared GPU scheduling:

    • Fixed a scheduler status error that occurred when downgrading a GPU node.

    • Fixed an issue where GPU nodes occasionally could not allocate the full amount of GPU memory.

    • Added support for preempting GPU pods.

  • CPU topology-aware scheduling: Pods with CPU topology-aware scheduling enabled are no longer scheduled to nodes where NUMA is not enabled.

  • Added support for custom parameters.

v1.20.4-ack-8.0

August 29, 2022

Fixed known bugs.

v1.20.4-ack-7.0

February 22, 2022

Added support for elastic scheduling based on node pool priority.

v1.20.4-ack-4.0

September 02, 2021

  • Added support for load-aware scheduling.

  • Added support for ECI elastic scheduling.

v1.20.4-ack-3.0

May 26, 2021

Added support for intelligent CPU scheduling based on sockets and L3 cache.

v1.20.4-ack-2.0

May 14, 2021

Added support for Capacity Scheduling.

v1.20.4-ack-1.0

April 07, 2021

  • Added support for intelligent CPU scheduling.

  • Added support for Coscheduling.

  • Added support for GPU topology-aware scheduling.

  • Added support for shared GPU scheduling.

Version 1.18 change history

Version number

Change time

Changes

v1.18-ack-4.0

September 02, 2021

Added support for load-aware scheduling.

v1.18-ack-3.1

June 05, 2021

Made ECI scheduling compatible with node pools.

v1.18-ack-3.0

March 12, 2021

Added support for unified ECI/ECS scheduling.

v1.18-ack-2.0

November 30, 2020

Added support for GPU topology-aware scheduling and shared GPU scheduling.

v1.18-ack-1.0

September 24, 2020

Added support for intelligent CPU scheduling and Coscheduling.

Version 1.16 change history

Version number

Change time

Changes

v1.16-ack-1.0

July 21, 2020

  • Added support for intelligent CPU scheduling in Kubernetes v1.16 clusters.

  • Added support for Coscheduling in Kubernetes v1.16 clusters.