In a Kubernetes cluster, an ALB Ingress manages external access to Services in the cluster and provides Layer 7 load balancing. This topic describes how to use an ALB Ingress to forward requests from different domain names or URL paths to different backend server groups, redirect HTTP requests to HTTPS, and implement features such as phased release.
Prerequisites
The ALB Ingress controller is installed in the cluster. For more information, see Manage the ALB Ingress controller.
Note: To use an ALB Ingress to access Services deployed in an ACK dedicated cluster, you must first grant the cluster the permissions required by the ALB Ingress controller. For more information, see Authorize an ACK dedicated cluster to access the ALB Ingress controller.
An AlbConfig is created. For more information, see Create an AlbConfig.
Forward requests based on domain names
You can create a simple Ingress to forward requests based on a specified domain name or an empty domain name. The following sections provide examples.
Forward requests based on a normal domain name
Deploy the following templates to create a Service, a Deployment, and an Ingress. Requests sent to the domain name specified in the Ingress are forwarded to the Service.
Clusters of v1.19 or later
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
spec:
  ports:
  - name: port1
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
        imagePullPolicy: IfNotPresent
        name: demo
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - host: demo.domain.ingress.top
    http:
      paths:
      - backend:
          service:
            name: demo-service
            port:
              number: 80
        path: /hello
        pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
spec:
  ports:
  - name: port1
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
        imagePullPolicy: IfNotPresent
        name: demo
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - host: demo.domain.ingress.top
    http:
      paths:
      - backend:
          serviceName: demo-service
          servicePort: 80
        path: /hello
        pathType: ImplementationSpecific
Run the following command to access the service using the specified domain name. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running the kubectl get ing command.

curl -H "host: demo.domain.ingress.top" <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Forward requests based on an empty domain name
Deploy the following template to create an Ingress.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - host: ""
    http:
      paths:
      - backend:
          service:
            name: demo-service
            port:
              number: 80
        path: /hello
        pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - host: ""
    http:
      paths:
      - backend:
          serviceName: demo-service
          servicePort: 80
        path: /hello
        pathType: ImplementationSpecific
Run the following command to access the service using an empty domain name. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running the kubectl get ing command.

curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Forward requests based on URL paths
ALB Ingresses support forwarding requests based on URL paths. You can set the URL matching policy in the pathType field, which supports three matching methods: Exact, ImplementationSpecific, and Prefix.
URL matching policies may conflict. In this case, requests are forwarded based on the priority of the forwarding rules. For more information, see Configure forwarding rule priorities.
Matching method | Rule path | Request path | Does the rule path match the request path? |
Prefix | / | (All paths) | Yes |
Prefix | /foo | /foo and /foo/ | Yes |
Prefix | /foo/ | /foo and /foo/ | Yes |
Prefix | /aaa/bb | /aaa/bbb | No |
Prefix | /aaa/bbb | /aaa/bbb | Yes |
Prefix | /aaa/bbb/ | /aaa/bbb | Yes. The trailing forward slash (/) in the rule path is ignored. |
Prefix | /aaa/bbb | /aaa/bbb/ | Yes. The rule path matches the trailing forward slash (/) in the request path. |
Prefix | /aaa/bbb | /aaa/bbb/ccc | Yes. The rule path matches a subpath of the request path. |
Prefix | Two rule paths: / and /aaa | /aaa/ccc | Yes. The request path matches the /aaa prefix. |
Prefix | Two rule paths: /aaa and /aaa/bbb | /aaa/ccc | Yes. The request path matches the /aaa prefix. |
Prefix | Two rule paths: / and /aaa | /ccc | Yes. The request path matches the / prefix. |
Prefix | /aaa | /ccc | No. The prefix does not match. |
Exact or ImplementationSpecific | /foo | /foo | Yes |
Exact or ImplementationSpecific | /foo | /bar | No |
Exact or ImplementationSpecific | /foo | /foo/ | No |
Exact or ImplementationSpecific | /foo/ | /foo | No |
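The matching semantics in the table can be sketched in Python. This is an illustrative simulation of the table's rules, not the ALB controller's actual implementation:

```python
def prefix_match(rule: str, path: str) -> bool:
    """Per-element prefix match: trailing slashes are ignored, and each
    /-separated element of the rule path must equal the corresponding
    element of the request path (case-sensitive)."""
    rule_parts = [p for p in rule.split("/") if p]
    path_parts = [p for p in path.split("/") if p]
    if len(rule_parts) > len(path_parts):
        return False
    return rule_parts == path_parts[:len(rule_parts)]

def exact_match(rule: str, path: str) -> bool:
    """Exact (and, in an ALB Ingress, ImplementationSpecific) match: the
    strings must be identical, including any trailing slash."""
    return rule == path

print(prefix_match("/", "/anything"))            # True: "/" matches all paths
print(prefix_match("/aaa/bb", "/aaa/bbb"))       # False: element "bb" != "bbb"
print(prefix_match("/aaa/bbb/", "/aaa/bbb"))     # True: trailing slash ignored
print(prefix_match("/aaa/bbb", "/aaa/bbb/ccc"))  # True: subpath matches
print(exact_match("/foo", "/foo/"))              # False: trailing slash differs
```

Note how element-wise comparison explains why /aaa/bb does not match /aaa/bbb even though it is a string prefix.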
The following sections provide examples of the three matching methods.
Exact
Deploy the following template to create an Ingress.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          service:
            name: demo-service
            port:
              number: 80
        pathType: Exact
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-path
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: demo-service
          servicePort: 80
        pathType: Exact
Run the following command to access the service. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running the kubectl get ing command.

curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
(Default) ImplementationSpecific
In an ALB Ingress, this method is processed in the same way as Exact.
Deploy the following template to create an Ingress.
Run the following command to access the service. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running the kubectl get ing command.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-path
namespace: default
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /hello
backend:
service:
name: demo-service
port:
number: 80
pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: demo-path
namespace: default
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /hello
backend:
serviceName: demo-service
servicePort: 80
pathType: ImplementationSpecific
curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Prefix
Prefix matching is performed on the /-separated elements of the URL path. The matching is case-sensitive and is performed element by element.
Deploy the following template to create an Ingress.
Run the following command to access the service. Replace <ADDRESS> with the domain name of the ALB instance, which you can obtain by running the kubectl get ing command.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-path-prefix
namespace: default
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
backend:
service:
name: demo-service
port:
number: 80
pathType: Prefix
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: demo-path-prefix
namespace: default
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
backend:
serviceName: demo-service
servicePort: 80
pathType: Prefix
curl <ADDRESS>/hello
Expected output:
{"hello":"coffee"}
Configure health checks
ALB Ingress supports health checks. You can configure health checks by setting the following annotations.
The following example shows a sample YAML file for configuring health checks:
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/healthcheck-enabled: "true"
alb.ingress.kubernetes.io/healthcheck-path: "/"
alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
alb.ingress.kubernetes.io/healthcheck-httpversion: "HTTP1.1"
alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
alb.ingress.kubernetes.io/healthcheck-code: "http_2xx"
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
alb.ingress.kubernetes.io/healthy-threshold-count: "3"
alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
ingressClassName: alb
rules:
- http:
paths:
# Configure Context Path
- path: /tea
backend:
service:
name: tea-svc
port:
number: 80
# Configure Context Path
- path: /coffee
backend:
service:
name: coffee-svc
port:
number: 80
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/healthcheck-enabled: "true"
alb.ingress.kubernetes.io/healthcheck-path: "/"
alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
alb.ingress.kubernetes.io/healthcheck-method: "HEAD"
alb.ingress.kubernetes.io/healthcheck-httpcode: "http_2xx"
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "2"
alb.ingress.kubernetes.io/healthy-threshold-count: "3"
alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
spec:
ingressClassName: alb
rules:
- http:
paths:
# Configure Context Path.
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
# Configure Context Path.
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
Parameter | Description |
alb.ingress.kubernetes.io/healthcheck-enabled | Specifies whether to enable health checks for the backend server group. Valid values: true and false. Default value: true. |
alb.ingress.kubernetes.io/healthcheck-path | The path to which health check requests are sent. Default value: /. |
alb.ingress.kubernetes.io/healthcheck-protocol | The protocol that is used for health checks. Valid values: HTTP and gRPC. Default value: HTTP. |
alb.ingress.kubernetes.io/healthcheck-httpversion | The HTTP version. This parameter takes effect only when the health check protocol is HTTP. Valid values: HTTP1.0 and HTTP1.1. Default value: HTTP1.1. |
alb.ingress.kubernetes.io/healthcheck-method | The health check method. Valid values: HEAD and GET. Default value: HEAD. |
alb.ingress.kubernetes.io/healthcheck-httpcode | The health check status codes. The backend server is considered healthy only when the probe request succeeds and one of the specified status codes is returned. Valid values: http_2xx, http_3xx, http_4xx, and http_5xx. Separate multiple status codes with commas (,). Default value: http_2xx. |
alb.ingress.kubernetes.io/healthcheck-code | The health check status codes. The backend server is considered healthy only when the probe request succeeds and one of the specified status codes is returned. If you specify this parameter together with healthcheck-httpcode, this parameter takes precedence. The valid values depend on the health check protocol: for HTTP, the same values as healthcheck-httpcode; for gRPC, gRPC status codes in the range 0 to 99. |
alb.ingress.kubernetes.io/healthcheck-timeout-seconds | The timeout period of a health check. Unit: seconds. Valid values: 1 to 300. Default value: 5. |
alb.ingress.kubernetes.io/healthcheck-interval-seconds | The interval at which health checks are performed. Unit: seconds. Valid values: 1 to 50. Default value: 2. |
alb.ingress.kubernetes.io/healthy-threshold-count | The number of consecutive health check successes required before a backend server is declared healthy. Valid values: 2 to 10. Default value: 3. |
alb.ingress.kubernetes.io/unhealthy-threshold-count | The number of consecutive health check failures required before a backend server is declared unhealthy. Valid values: 2 to 10. Default value: 3. |
alb.ingress.kubernetes.io/healthcheck-connect-port | The port that is used for health checks. Default value: 0, which specifies that the port of the backend server is used. |
Configure HTTP to HTTPS redirection
You can set the alb.ingress.kubernetes.io/ssl-redirect: "true"
annotation for an ALB Ingress to redirect HTTP requests to HTTPS port 443.
ALB does not allow you to create listeners directly in an Ingress. To ensure that the Ingress works as expected, you must first create the required listener ports and protocols in the AlbConfig. Then, you can associate these listeners with services in the Ingress. For more information about how to create an ALB listener, see Configure ALB listeners using an AlbConfig.
The following example shows a sample configuration:
Clusters of v1.19 or later
apiVersion: v1
kind: Service
metadata:
name: demo-service-ssl
namespace: default
spec:
ports:
- name: port1
port: 80
protocol: TCP
targetPort: 8080
selector:
app: demo-ssl
sessionAffinity: None
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-ssl
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: demo-ssl
template:
metadata:
labels:
app: demo-ssl
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
imagePullPolicy: IfNotPresent
name: demo-ssl
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/ssl-redirect: "true"
name: demo-ssl
namespace: default
spec:
ingressClassName: alb
tls:
- hosts:
- ssl.alb.ingress.top
rules:
- host: ssl.alb.ingress.top
http:
paths:
- backend:
service:
name: demo-service-ssl
port:
number: 80
path: /
pathType: Prefix
Clusters earlier than v1.19
apiVersion: v1
kind: Service
metadata:
name: demo-service-ssl
namespace: default
spec:
ports:
- name: port1
port: 80
protocol: TCP
targetPort: 8080
selector:
app: demo-ssl
sessionAffinity: None
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-ssl
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: demo-ssl
template:
metadata:
labels:
app: demo-ssl
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/alb-sample/cafe:v1
imagePullPolicy: IfNotPresent
name: demo-ssl
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/ssl-redirect: "true"
name: demo-ssl
namespace: default
spec:
ingressClassName: alb
tls:
- hosts:
- ssl.alb.ingress.top
rules:
- host: ssl.alb.ingress.top
http:
paths:
- backend:
serviceName: demo-service-ssl
servicePort: 80
path: /
pathType: Prefix
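With ssl-redirect enabled, the HTTP listener answers plain HTTP requests with a redirect to the HTTPS listener instead of forwarding them to the backend. A minimal sketch of that response, assuming a 301 status code and the standard Location header:

```python
def ssl_redirect_response(host: str, path: str) -> dict:
    """Answer an HTTP request with a redirect to the same host and path over HTTPS."""
    return {"status": 301, "headers": {"Location": "https://%s%s" % (host, path)}}

resp = ssl_redirect_response("ssl.alb.ingress.top", "/")
print(resp["headers"]["Location"])  # https://ssl.alb.ingress.top/
```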
Support backend HTTPS and gRPC protocols
ALB supports the HTTPS and gRPC protocols for backend services. To use them, configure the alb.ingress.kubernetes.io/backend-protocol: "https" or alb.ingress.kubernetes.io/backend-protocol: "grpc" annotation for an ALB Ingress. To use an Ingress to forward gRPC traffic, the corresponding domain name must have an SSL certificate and use TLS for communication. The following example shows how to configure the gRPC protocol:
After an Ingress is created, the backend protocol cannot be modified. To change the protocol, delete the Ingress and create a new one.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol: "grpc"
name: lxd-grpc-ingress
spec:
ingressClassName: alb
tls:
- hosts:
- demo.alb.ingress.top
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grpc-demo-svc
port:
number: 9080
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol: "grpc"
name: lxd-grpc-ingress
spec:
ingressClassName: alb
tls:
- hosts:
- demo.alb.ingress.top
rules:
- host: demo.alb.ingress.top
http:
paths:
- backend:
serviceName: grpc-demo-svc
servicePort: 9080
path: /
pathType: Prefix
Configure regular expressions
Custom forwarding conditions support regular expression matching:
- Domain names support case-insensitive regular expression matching (starting with ~).
- Paths support case-insensitive regular expression matching (starting with ~) and case-sensitive regular expression matching (starting with ~*).
You can enable the regular expression mode using the alb.ingress.kubernetes.io/use-regex: "true"
annotation and configure the corresponding regular expression in the custom forwarding condition. The following example shows a sample configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/use-regex: "true" ## Allows the use of regular expressions.
alb.ingress.kubernetes.io/conditions.<YOUR-SVC-NAME>: | ## Replace <YOUR-SVC-NAME> with the actual Service name. The name must be the same as the value of backend.service.name below.
[{
"type": "Path",
"pathConfig": {
"values": [
"~*/pathvalue1", ## You must add ~* or ~ before the regular expression as a flag. The content after ~* or ~ is the effective regular expression. ~* indicates case-sensitive matching, and ~ indicates case-insensitive matching.
"/pathvalue2" ## You do not need to add ~* for an exact match.
]
}
}]
name: ingress-example
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /test-path-for-alb
pathType: Prefix
backend:
service:
name: <YOUR-SVC-NAME> ## The value of <YOUR-SVC-NAME> here must be the same as the service name specified in the custom forwarding rule annotation to indicate the mapping.
port:
number: 88
Configure regular expression prefix matching
The default logic for regular expression matching is a "contains" match: if the request path contains content that matches the regular expression, the rule is hit. To perform a "starts with" match instead, add ^ before the regular expression so that only paths starting with the specified content match. For example, ^/api matches only request paths that start with /api. The following example shows a sample configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-example
annotations:
alb.ingress.kubernetes.io/use-regex: "true" ## Enable regular expression matching.
alb.ingress.kubernetes.io/conditions.<YOUR-SVC-NAME>: | ## Replace <YOUR-SVC-NAME> here with the actual service name, which must correspond to backend.service.name below.
[
{
"type": "Path",
"pathConfig": {
"values": [
"~*^/pathvalue1", # Starts with ~* or ~ to indicate a regular expression match. ^ indicates "starts with /pathvalue1".
"/pathvalue2" # For normal prefix or exact matches, you do not need to add ~* or ~.
]
}
}
]
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /test-path-for-alb
pathType: Prefix
backend:
service:
name: <YOUR-SVC-NAME> # Replace with the actual Service name, which must be consistent with the annotation.
port:
number: 88
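The difference between the default "contains" match and a ^-anchored "starts with" match can be illustrated with Python's re module, assuming ALB's regular expressions behave like standard regular expressions in this respect:

```python
import re

# The default "contains" behavior: the rule is hit if the pattern appears
# anywhere in the request path (re.search semantics).
def contains(pattern: str, path: str) -> bool:
    return re.search(pattern, path) is not None

print(contains("/api", "/v2/api/users"))   # True: the path contains /api
print(contains("^/api", "/v2/api/users"))  # False: the path does not start with /api
print(contains("^/api", "/api/users"))     # True: the path starts with /api
```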
Support rewrite
ALB supports rewrites. You can configure the alb.ingress.kubernetes.io/rewrite-target: /path/${2} annotation for an ALB Ingress. The following rules apply:
- In the rewrite-target annotation, variables in the ${number} format must be configured in a path of the Prefix type.
- The path does not support regular expression symbols, such as * and ?. To use regular expression symbols, you must configure the alb.ingress.kubernetes.io/use-regex: "true" annotation.
- The path must start with /.
The ALB rewrite feature supports regular expression replacement. The following rules apply:
- You can write one or more regular expressions that contain () capture groups in the path of the Ingress, and then reference the ${1}, ${2}, and ${3} variables in the rewrite path of the rewrite-target annotation. You can capture at most three variables.
- The rewrite feature lets you combine the results of regular expression matching as parameters to create a custom rewrite rule.
- The replacement logic is as follows: the client request path matches a regular expression that contains () capture groups, and the rewrite-target annotation then uses the ${1}, ${2}, and ${3} variables to construct the new path.
For example, assume that the path of an Ingress is set to /sys/(.*)/(.*)/aaa and the rewrite-target annotation is set to /${1}/${2}. A client request with the path /sys/ccc/bbb/aaa matches the /sys/(.*)/(.*)/aaa rule, so the rewrite-target annotation takes effect: ${1} is replaced with ccc and ${2} is replaced with bbb. The backend server receives the rewritten request path /ccc/bbb.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/use-regex: "true" # Allows the path field to use regular expressions.
alb.ingress.kubernetes.io/rewrite-target: /path/${2} # This annotation supports regular expression replacement.
name: rewrite-ingress
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /something(/|$)(.*)
pathType: Prefix
backend:
service:
name: rewrite-svc
port:
number: 9080
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/use-regex: "true" # Allows the path field to use regular expressions.
alb.ingress.kubernetes.io/rewrite-target: /path/${2} # This annotation supports regular expression replacement.
name: rewrite-ingress
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- backend:
serviceName: rewrite-svc
servicePort: 9080
path: /something(/|$)(.*)
pathType: Prefix
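The capture-group replacement from the example above (path /sys/(.*)/(.*)/aaa with rewrite-target /${1}/${2}) can be reproduced with Python's re module. This is an illustrative simulation of the replacement logic, not the controller's implementation:

```python
import re

def rewrite(path_rule: str, rewrite_target: str, request_path: str) -> str:
    """Simulate rewrite-target: capture groups from the path regex fill the
    ${1}, ${2}, ${3} variables in the rewrite target (up to three groups)."""
    m = re.match(path_rule, request_path)
    if not m:
        return request_path  # rule not hit; path unchanged
    result = rewrite_target
    for i, group in enumerate(m.groups()[:3], start=1):
        result = result.replace("${%d}" % i, group or "")
    return result

# The example from the text: /sys/ccc/bbb/aaa is rewritten to /ccc/bbb.
print(rewrite(r"/sys/(.*)/(.*)/aaa", "/${1}/${2}", "/sys/ccc/bbb/aaa"))  # /ccc/bbb
```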
Configure custom listener ports
You can configure custom listener ports for an Ingress. This lets you expose both port 80 and port 443 for a service at the same time. The following example shows a sample configuration:
ALB does not allow you to create listeners directly in an Ingress. To ensure that the Ingress works as expected, you must first create the required listener ports and protocols in the AlbConfig. Then, you can associate these listeners with services in the Ingress. For more information about how to create an ALB listener, see Configure ALB listeners using an AlbConfig.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
ingressClassName: alb
tls:
- hosts:
- demo.alb.ingress.top
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: ImplementationSpecific
backend:
service:
name: tea-svc
port:
number: 80
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
name: cafe-ingress
spec:
ingressClassName: alb
tls:
- hosts:
- demo.alb.ingress.top
rules:
- host: demo.alb.ingress.top
http:
paths:
- backend:
serviceName: tea-svc
servicePort: 80
path: /tea-svc
pathType: ImplementationSpecific
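The value of the listen-ports annotation is a JSON array in which each object maps one protocol to one listener port. You can check the value locally before applying the manifest (an illustrative sketch; remember that the listeners themselves must already exist in the AlbConfig):

```python
import json

listen_ports = '[{"HTTP": 80},{"HTTPS": 443}]'

# Flatten the JSON array into (protocol, port) pairs.
listeners = [(proto, port) for item in json.loads(listen_ports)
             for proto, port in item.items()]
print(listeners)  # [('HTTP', 80), ('HTTPS', 443)]
```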
Configure forwarding rule priorities
By default, Ingresses are sorted based on the following rules to determine the priority of ALB forwarding rules:
- Different Ingresses are sorted by the lexicographical order of namespace/name. A smaller lexicographical order indicates a higher priority.
- Within the same Ingress, rules are sorted by the order in which they appear in the rules field. A rule that is configured earlier has a higher priority.
If you do not want to rely on the namespace/name of an Ingress, you can use the following Ingress annotation to define the priority of ALB forwarding rules. The priority of each rule within a listener must be unique. You can use the alb.ingress.kubernetes.io/order annotation to specify the priority among Ingresses. Valid values: 1 to 1000. A smaller value indicates a higher priority. The default priority of an Ingress is 10.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/order: "2"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: ImplementationSpecific
backend:
service:
name: tea-svc
port:
number: 80
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/order: "2"
name: cafe-ingress
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- backend:
serviceName: tea-svc
servicePort: 80
path: /tea-svc
pathType: ImplementationSpecific
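The sorting rules described in this section can be sketched as follows. This is an illustrative simulation that assumes the default order value of 10 when the annotation is absent:

```python
def ingress_sort_key(ingress: dict):
    """Sort key: the order annotation first (smaller wins, default 10),
    then the lexicographical order of namespace/name."""
    order = int(ingress.get("annotations", {}).get(
        "alb.ingress.kubernetes.io/order", "10"))
    return (order, "%s/%s" % (ingress["namespace"], ingress["name"]))

ingresses = [
    {"namespace": "default", "name": "b-ingress"},
    {"namespace": "default", "name": "a-ingress"},
    {"namespace": "default", "name": "z-ingress",
     "annotations": {"alb.ingress.kubernetes.io/order": "2"}},
]
# z-ingress wins via its order annotation; a- and b- fall back to name order.
print([i["name"] for i in sorted(ingresses, key=ingress_sort_key)])
```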
Implement phased release using annotations
ALB provides complex routing capabilities and supports phased release based on headers, cookies, and weights. You can implement phased release by setting annotations. To enable phased release, you must set the alb.ingress.kubernetes.io/canary: "true" annotation. You can use different annotations to implement different phased release features:
The priority order for phased release, from high to low, is: header-based, cookie-based, and then weight-based.
During a phased release, do not delete the original rule. If you do, the service may become abnormal. After you verify the phased release, update the backend Service in the original Ingress to the new Service, and then delete the phased release Ingress.
alb.ingress.kubernetes.io/canary-by-header and alb.ingress.kubernetes.io/canary-by-header-value: These annotations enable traffic splitting based on a request header. alb.ingress.kubernetes.io/canary-by-header-value specifies the header value to match and must be used together with alb.ingress.kubernetes.io/canary-by-header.
When the header and header value in a request match the configured values, the request is routed to the phased release service.
For other header values, the header is ignored, and traffic is allocated to the phased release service based on the priority of the other phased release rules.
In the following example, requests that carry the location: hz header are routed to the phased release service. Requests with other header values are routed based on the phased release weight.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "1"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-header: "location"
    alb.ingress.kubernetes.io/canary-by-header-value: "hz"
  name: demo-canary
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - backend:
          service:
            name: demo-service-hello
            port:
              number: 80
        path: /hello
        pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "1"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-header: "location"
    alb.ingress.kubernetes.io/canary-by-header-value: "hz"
  name: demo-canary
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - backend:
          serviceName: demo-service-hello
          servicePort: 80
        path: /hello
        pathType: ImplementationSpecific
alb.ingress.kubernetes.io/canary-by-cookie: This annotation enables traffic splitting based on cookies.
When the value of the configured cookie is always, the request is routed to the phased release service.
When the value of the configured cookie is never, the request is not routed to the phased release service.
Note: Cookie-based phased release does not support custom values. Only always and never are supported.
In the following example, requests that carry the demo=always cookie are routed to the phased release service.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-cookie: "demo"
  name: demo-canary-cookie
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - backend:
          service:
            name: demo-service-hello
            port:
              number: 80
        path: /hello
        pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "2"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-by-cookie: "demo"
  name: demo-canary-cookie
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - backend:
          serviceName: demo-service-hello
          servicePort: 80
        path: /hello
        pathType: ImplementationSpecific
alb.ingress.kubernetes.io/canary-weight: This annotation sets the percentage of requests that are routed to the specified service. The value must be an integer from 0 to 100.
The following example sets the weight of the phased release service to 50%:
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "3"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "50"
  name: demo-canary-weight
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - backend:
          service:
            name: demo-service-hello
            port:
              number: 80
        path: /hello
        pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/order: "3"
    alb.ingress.kubernetes.io/canary: "true"
    alb.ingress.kubernetes.io/canary-weight: "50"
  name: demo-canary-weight
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - backend:
          serviceName: demo-service-hello
          servicePort: 80
        path: /hello
        pathType: ImplementationSpecific
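The priority among the three phased release methods (header first, then cookie, then weight) can be sketched as follows. This is a simplified, illustrative model of a single phased release Ingress; the annotation keys are shortened by dropping the alb.ingress.kubernetes.io/ prefix:

```python
import random

def route_to_canary(headers: dict, cookies: dict, annotations: dict) -> bool:
    """Decide whether a request goes to the phased release service,
    checking the header rule first, then the cookie rule, then the weight."""
    header = annotations.get("canary-by-header")
    if header is not None:
        if headers.get(header) == annotations.get("canary-by-header-value"):
            return True  # header match wins immediately
        # No match: the header is ignored; fall through to lower-priority rules.
    cookie = annotations.get("canary-by-cookie")
    if cookie is not None:
        value = cookies.get(cookie)
        if value == "always":
            return True
        if value == "never":
            return False
    weight = int(annotations.get("canary-weight", "0"))
    return random.randrange(100) < weight

# A request with location: hz matches the header rule from the example above.
print(route_to_canary({"location": "hz"}, {},
                      {"canary-by-header": "location",
                       "canary-by-header-value": "hz"}))  # True
```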
Implement session persistence using annotations
ALB Ingresses support session persistence using the following annotations:
- alb.ingress.kubernetes.io/sticky-session: Specifies whether to enable session persistence. Valid values: true and false. Default value: false.
- alb.ingress.kubernetes.io/sticky-session-type: The method that is used to handle cookies. Valid values: Insert and Server. Default value: Insert.
  - Insert: inserts a cookie. When a client sends its first request, the load balancer inserts a cookie (SERVERID) into the response. Subsequent requests that carry this cookie are forwarded to the previously recorded backend server.
  - Server: rewrites a cookie. When the load balancer detects a custom cookie, it rewrites the original cookie. Subsequent requests that carry the new cookie are forwarded to the previously recorded backend server.
  Note: This parameter takes effect only when StickySessionEnabled is set to true for the server group.
- alb.ingress.kubernetes.io/cookie-timeout: The cookie timeout period. Unit: seconds. Valid values: 1 to 86400. Default value: 1000. This annotation takes effect only when alb.ingress.kubernetes.io/sticky-session-type is set to Insert.
- alb.ingress.kubernetes.io/cookie: The custom cookie value, as a string. Default value: "". This annotation is required and cannot be empty when alb.ingress.kubernetes.io/sticky-session-type is set to Server.
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress-v3
annotations:
alb.ingress.kubernetes.io/sticky-session: "true"
    alb.ingress.kubernetes.io/sticky-session-type: "Insert" # To use the custom cookie set by the alb.ingress.kubernetes.io/cookie annotation, set this value to Server.
alb.ingress.kubernetes.io/cookie-timeout: "1800"
alb.ingress.kubernetes.io/cookie: "test"
spec:
ingressClassName: alb
rules:
- http:
paths:
- backend:
service:
name: tea-svc
port:
number: 80
path: /tea2
pathType: ImplementationSpecific
- backend:
service:
name: coffee-svc
port:
number: 80
path: /coffee2
pathType: ImplementationSpecific
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: cafe-ingress-v3
annotations:
alb.ingress.kubernetes.io/sticky-session: "true"
alb.ingress.kubernetes.io/sticky-session-type: "Insert" # When custom cookies are supported, the cookie insertion type must be Server.
alb.ingress.kubernetes.io/cookie-timeout: "1800"
alb.ingress.kubernetes.io/cookie: "test"
spec:
ingressClassName: alb
rules:
- http:
paths:
# Configure the context path.
- path: /tea2
pathType: ImplementationSpecific
backend:
serviceName: tea-svc
servicePort: 80
# Configure the context path.
- path: /coffee2
pathType: ImplementationSpecific
backend:
serviceName: coffee-svc
servicePort: 80
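Conceptually, Insert-mode session persistence works like the following sketch. This is an illustration only, not ALB's actual implementation; the backend names are hypothetical, and real cookie handling (expiry, signing) is omitted.

```python
import random

BACKENDS = ["pod-a", "pod-b", "pod-c"]  # hypothetical backend servers

def route(request_cookies: dict) -> tuple:
    """Route one request; insert a SERVERID cookie on the first response.

    Returns (chosen_backend, cookies_to_set_on_response).
    """
    server_id = request_cookies.get("SERVERID")
    if server_id in BACKENDS:
        # Cookie present: stick to the previously recorded backend.
        return server_id, {}
    # First request: pick a backend and insert the cookie into the response.
    chosen = random.choice(BACKENDS)
    return chosen, {"SERVERID": chosen}

# First request carries no cookie, so the response sets SERVERID.
backend1, set_cookies = route({})
# The client replays the cookie; the same backend is chosen again.
backend2, _ = route(set_cookies)
assert backend1 == backend2
```

In Server mode the flow is the same, except the load balancer rewrites an application-defined cookie instead of inserting its own.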
Specify a scheduling algorithm for a server group
ALB Ingress lets you specify a scheduling algorithm for a server group by setting the alb.ingress.kubernetes.io/backend-scheduler
Ingress annotation. The following example shows a sample configuration:
Clusters of v1.19 or later
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/backend-scheduler: "uch" # You can set this parameter to wrr, sch, or wlc based on your needs.
alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # This parameter is required only when the scheduling algorithm is uch. You do not need to configure this parameter when the scheduling algorithm is wrr, sch, or wlc.
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: ImplementationSpecific
backend:
service:
name: tea-svc
port:
number: 80
Clusters earlier than v1.19
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/backend-scheduler: "uch" # You can also set this parameter to wrr, sch, or wlc based on your needs.
alb.ingress.kubernetes.io/backend-scheduler-uch-value: "test" # This parameter is required only when the scheduling algorithm is uch. You do not need to configure this parameter when the scheduling algorithm is wrr, sch, or wlc.
name: cafe-ingress
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- backend:
serviceName: tea-svc
servicePort: 80
path: /tea-svc
pathType: ImplementationSpecific
The following values are available for the alb.ingress.kubernetes.io/backend-scheduler annotation:
wrr: Weighted round-robin. The default value. The higher the weight of a backend server, the higher the probability that it is selected.
wlc: Weighted least connections. Requests are distributed based on both the weight and the current load (number of connections) of each backend server. If the weights are the same, the backend server with the fewest current connections is more likely to be selected.
sch: Consistent hashing based on the source IP address. Requests from the same source IP address are forwarded to the same backend server.
uch: Consistent hashing based on a URL parameter. Requests with the same URL parameter value are forwarded to the same backend server. When the scheduling algorithm of the server group is uch, you can use the alb.ingress.kubernetes.io/backend-scheduler-uch-value annotation to specify the URL parameter used for consistent hashing.
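To make the uch behavior concrete, the following sketch shows how consistent hashing on a URL parameter value keeps matching requests on the same backend. This is an illustration under assumed names (pod-a and so on); it is not ALB's actual hash function or ring layout.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Any stable hash works for illustration; MD5 gives a wide, uniform range.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """A minimal consistent-hash ring over backend servers."""

    def __init__(self, backends, vnodes=100):
        # Place several virtual nodes per backend to smooth the distribution.
        self._ring = sorted(
            (_hash(f"{b}#{i}"), b) for b in backends for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def pick(self, param_value: str) -> str:
        # Walk clockwise to the first virtual node at or after the hash.
        idx = bisect.bisect(self._keys, _hash(param_value)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["pod-a", "pod-b", "pod-c"])
# Requests with the same URL parameter value (e.g. ?uid=42) always land
# on the same backend.
assert ring.pick("uid=42") == ring.pick("uid=42")
```

Because only the ring segments adjacent to a changed backend move, adding or removing a server redistributes a small fraction of keys rather than all of them, which is why consistent hashing suits sticky-by-parameter routing.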
Cross-domain configuration
The following example shows a sample cross-domain configuration for an ALB Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: alb-ingress
annotations:
alb.ingress.kubernetes.io/enable-cors: "true"
alb.ingress.kubernetes.io/cors-expose-headers: ""
alb.ingress.kubernetes.io/cors-allow-methods: "GET,POST"
alb.ingress.kubernetes.io/cors-allow-credentials: "true"
alb.ingress.kubernetes.io/cors-max-age: "600"
alb.ingress.kubernetes.io/cors-allow-origin: "Domain name for cross-domain access"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: cloud-nodeport
port:
number: 80
Parameter | Description |
alb.ingress.kubernetes.io/cors-allow-origin | The sites that are allowed to access server resources through a browser. Separate multiple sites with commas (,). Each value must start with http:// or https://, followed by a valid domain name or a first-level wildcard domain name. IP addresses are not supported. Default value: |
alb.ingress.kubernetes.io/cors-allow-methods | The allowed cross-domain methods, which are case-insensitive. Separate multiple methods with commas (,). Default value: |
alb.ingress.kubernetes.io/cors-allow-headers | The request headers that are allowed for cross-domain propagation. Default value: |
alb.ingress.kubernetes.io/cors-expose-headers | The list of headers that are allowed to be exposed. Default value: |
alb.ingress.kubernetes.io/cors-allow-credentials | Specifies whether credentials can be carried during cross-domain access. Default value: |
alb.ingress.kubernetes.io/cors-max-age | For non-simple requests, the maximum time (in seconds) that the browser caches the result of an OPTIONS preflight request. Valid values: [0, 172800]. Default value: |
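As a rough illustration of how these settings are applied, the sketch below answers an OPTIONS preflight by checking the request's Origin and method against the configured values. This is a simplified model, not ALB's actual logic, and the origin value is a hypothetical example.

```python
# Simplified stand-ins for the Ingress annotations above (values are examples).
ALLOW_ORIGINS = {"https://demo.example.com"}   # cors-allow-origin
ALLOW_METHODS = {"GET", "POST"}                # cors-allow-methods
MAX_AGE = 600                                  # cors-max-age

def preflight(origin: str, requested_method: str) -> dict:
    """Build response headers for an OPTIONS preflight request."""
    if origin not in ALLOW_ORIGINS or requested_method.upper() not in ALLOW_METHODS:
        return {}  # No CORS headers: the browser blocks the cross-origin call.
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ",".join(sorted(ALLOW_METHODS)),
        "Access-Control-Allow-Credentials": "true",  # cors-allow-credentials
        "Access-Control-Max-Age": str(MAX_AGE),      # browser caches the preflight
    }

assert preflight("https://demo.example.com", "POST")["Access-Control-Max-Age"] == "600"
assert preflight("https://evil.example.com", "POST") == {}
```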
Backend persistent connections
Traditional load balancing uses short-lived connections to backend server groups: each request establishes a new TCP connection and tears it down afterward, which can make connection handling a bottleneck in high-performance systems. With backend persistent connections, the overhead of connection setup is greatly reduced, which significantly improves processing performance. You can enable backend persistent connections in an ALB Ingress using the alb.ingress.kubernetes.io/backend-keepalive annotation. The following example shows a sample configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: alb-ingress
annotations:
alb.ingress.kubernetes.io/backend-keepalive: "true"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: cloud-nodeport
port:
number: 80
Server groups support IPv6 attachments
After you create a dual-stack ALB instance, you can enable IPv6 for a backend server group by adding the alb.ingress.kubernetes.io/enable-ipv6: "true" annotation to an ALB Ingress. This allows the server group to attach both IPv4 and IPv6 backend servers. To attach dual-stack pods to the server group, the cluster must be configured as a dual-stack cluster, and the corresponding Service must be configured with ipFamilies and ipFamilyPolicy. In a dual-stack configuration, set ipFamilies to IPv4 or IPv6, and set ipFamilyPolicy to RequireDualStack or PreferDualStack.
The following limitations apply when you enable IPv6 attachments:
If the IPv6 feature is not enabled for the VPC where the server group is located, you cannot enable IPv6 attachments for the server group.
You cannot enable IPv6 attachments when you attach IP-type or Function Compute-type server groups using custom forwarding actions.
You cannot enable IPv6 attachments for server groups that are associated with IPv4-only ALB instances.
The following example shows a sample configuration:
apiVersion: v1
kind: Service
metadata:
name: tea-svc
spec:
# When configuring dual-stack, ipFamilies needs to be set to IPv4 or IPv6, and ipFamilyPolicy needs to be set to RequireDualStack or PreferDualStack.
ipFamilyPolicy: RequireDualStack
ipFamilies:
- IPv4
- IPv6
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app: tea
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tea
spec:
replicas: 2
selector:
matchLabels:
app: tea
template:
metadata:
labels:
app: tea
spec:
containers:
- name: tea
image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginxdemos:latest
ports:
- containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/enable-ipv6: "true"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
QPS throttling
ALB supports queries per second (QPS) throttling for forwarding rules. Valid values range from 1 to 1,000,000. To enable throttling, set the alb.ingress.kubernetes.io/traffic-limit-qps annotation in the ALB Ingress. The following example shows a sample configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
alb.ingress.kubernetes.io/traffic-limit-qps: "50"
spec:
ingressClassName: alb
rules:
- host: demo.alb.ingress.top
http:
paths:
- path: /tea
pathType: ImplementationSpecific
backend:
service:
name: tea-svc
port:
number: 80
- path: /coffee
pathType: ImplementationSpecific
backend:
service:
name: coffee-svc
port:
number: 80
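Conceptually, per-rule QPS throttling behaves like a request counter: once more requests arrive within a second than the configured limit, the excess is rejected. The sketch below uses a fixed one-second window for illustration; ALB's internal rate-limiting algorithm is not documented here.

```python
import time

class QpsLimiter:
    """Fixed-window request counter: at most `qps` requests per second."""

    def __init__(self, qps: int):
        self.qps = qps
        self.window = int(time.time())
        self.count = 0

    def allow(self, now=None) -> bool:
        now = int(now if now is not None else time.time())
        if now != self.window:           # a new one-second window begins
            self.window, self.count = now, 0
        if self.count < self.qps:
            self.count += 1
            return True
        return False                     # over the limit: reject the request

limiter = QpsLimiter(qps=50)             # mirrors traffic-limit-qps: "50"
# 60 requests arriving in the same second: 50 pass, 10 are throttled.
results = [limiter.allow(now=1000.0) for _ in range(60)]
assert results.count(True) == 50 and results.count(False) == 10
```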
Backend slow start
After a new pod is added to the backend of a Service, if an ALB Ingress immediately allocates traffic to the new pod, it may cause a transient spike in CPU or memory pressure, which can lead to access abnormalities. Using slow start, the ALB Ingress can gradually transfer traffic to the new pod, which mitigates the impact of sudden traffic bursts. The following example shows a sample configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/slow-start-enabled: "true"
alb.ingress.kubernetes.io/slow-start-duration: "100"
name: alb-ingress
spec:
ingressClassName: alb
rules:
- host: alb.ingress.alibaba.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
Parameter | Description |
alb.ingress.kubernetes.io/slow-start-enabled | Specifies whether to enable the slow start feature. Valid values: "true" and "false". Disabled by default. |
alb.ingress.kubernetes.io/slow-start-duration | The slow start duration, during which traffic to a newly added backend gradually increases. The longer the duration, the more slowly traffic ramps up. Unit: seconds (s). Valid values: 30 to 900. Default value: |
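The effect of slow start can be pictured as the new pod's effective weight ramping up over the configured duration. The linear ramp below is a simplified model for illustration; the exact curve ALB uses is not specified here.

```python
def effective_weight(seconds_since_added: float, duration: float,
                     full_weight: int = 100) -> int:
    """Ramp a new backend's weight linearly from 0 to full over `duration` seconds."""
    if seconds_since_added >= duration:
        return full_weight               # slow start finished: full traffic share
    return int(full_weight * seconds_since_added / duration)

# With slow-start-duration: "100", a pod added 25 seconds ago receives
# roughly a quarter of the traffic share it will get once slow start completes.
assert effective_weight(25, 100) == 25
assert effective_weight(100, 100) == 100
```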
Connection draining
When a pod enters the Terminating state, the ALB Ingress removes the pod from the backend. At this point, there may still be ongoing requests in the established connections of the pod. If the ALB Ingress immediately closes all connections, it may cause application errors. Using connection draining, the ALB Ingress can keep the connections open for a specific period after the pod is removed from the backend. This ensures that the application goes offline smoothly after the current requests are processed. The specific working modes of connection draining are as follows:
If connection draining is not enabled, when a pod enters the Terminating state, the ALB Ingress removes the pod from the backend and immediately closes all connections to this pod.
If connection draining is enabled, when a pod enters the Terminating state, the ALB Ingress keeps the ongoing requests open but no longer accepts new requests:
If the pod has ongoing requests, ALB Ingress closes all connections and removes the pod when the connection draining timeout is reached.
If the pod processes all requests before the timeout is reached, ALB Ingress immediately removes the pod.
Before the connection draining period ends, the ALB Ingress does not actively close the connection with the pod. However, it cannot guarantee that the pod is in a running state. You can control the availability of the pod in the Terminating state by configuring spec.terminationGracePeriodSeconds
for the pod or using a preStop Hook.
You can use the following example to configure connection draining:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/connection-drain-enabled: "true"
alb.ingress.kubernetes.io/connection-drain-timeout: "199"
name: alb-ingress
spec:
ingressClassName: alb
rules:
- host: alb.ingress.alibaba.com
http:
paths:
- path: /test
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
Parameter | Description |
alb.ingress.kubernetes.io/connection-drain-enabled | Specifies whether to enable connection draining. Valid values: "true" and "false". Disabled by default. |
alb.ingress.kubernetes.io/connection-drain-timeout | The connection draining timeout period. Unit: seconds (s). Valid values: 0 to 900. Default value: |
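The two working modes described above can be summarized in a small sketch. This is illustrative pseudologic under assumed timestamps, not the controller's actual code.

```python
def drain_deadline(in_flight_done_at: float, terminating_at: float,
                   drain_enabled: bool, drain_timeout: float) -> float:
    """Return the time at which the pod's connections are closed.

    in_flight_done_at: when the pod finishes its ongoing requests.
    terminating_at: when the pod enters the Terminating state.
    """
    if not drain_enabled:
        return terminating_at            # draining off: close everything immediately
    # Connections close as soon as requests finish, but never later than
    # the connection draining timeout.
    return min(in_flight_done_at, terminating_at + drain_timeout)

# With connection-drain-timeout: "199", a pod that finishes at t+50 is
# removed at t+50; a pod still busy at t+199 is cut off at t+199.
assert drain_deadline(50, 0, True, 199) == 50
assert drain_deadline(500, 0, True, 199) == 199
assert drain_deadline(500, 0, False, 199) == 0
```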
Disable cross-zone load balancing
By default, ALB enables cross-zone load balancing. This means that traffic is evenly distributed to backend services across different zones in the same region. If cross-zone load balancing is disabled for an ALB server group, traffic is distributed only among backend services in the same zone.
Before you disable cross-zone load balancing, make sure that ALB has available backend services configured in each zone and that these servers have sufficient resources. To avoid affecting your business, perform this operation with caution.
You can use the following example to disable cross-zone load balancing:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/cross-zone-enabled: "false"
name: alb-ingress
spec:
ingressClassName: alb
rules:
- host: alb.ingress.alibaba.com
http:
paths:
- path: /test
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
Parameter | Description |
alb.ingress.kubernetes.io/cross-zone-enabled | Specifies whether to enable cross-zone load balancing. Enabled by default; set this annotation to "false" to disable cross-zone load balancing. |