In this quickstart, you will use Simple Log Service (SLS) LoongCollector to collect and analyze NGINX access logs from an Elastic Compute Service (ECS) instance. You will learn how to:
Configure log collection with LoongCollector.
Query and analyze log data using SQL.
Set up monitoring alerts.
Before you begin
Set up account permissions
Alibaba Cloud account
An Alibaba Cloud account has full, unrestricted access to SLS by default. No action is required.
Resource Access Management (RAM) user
A RAM user has no permissions by default and must be explicitly granted access by the Alibaba Cloud account. There are two ways to do this:
Attach the following system policies:
AliyunLogFullAccess: To create and manage SLS resources, such as projects and logstores.
AliyunECSFullAccess: To install the collection agent on ECS instances.
AliyunOOSFullAccess: To automatically install the collection agent on ECS instances using CloudOps Orchestration Service (OOS).
Create and attach custom policies
For more granular control, create and attach custom policies to grant the RAM user permissions based on the principle of least privilege.
Prepare an ECS instance
If you do not have an ECS instance, refer to this document to create one. The instance's security group must have an outbound rule that allows traffic on port 80 (HTTP) and port 443 (HTTPS).
Create a project and logstore
Log on to the Simple Log Service console.
Click Create Project:
Region: Select the same region as your ECS instance. This lets you collect logs over the Alibaba Cloud internal network, which speeds up log collection.
Project Name: Enter a globally unique name within Alibaba Cloud, such as nginx-quickstart-abc.
Keep the default settings for other configurations and click Create.
On the confirmation page, click Create Logstore.
Enter a logstore name, such as nginx-access-log. Keep default settings for other parameters, and click OK.
By default, a standard logstore is created, and you are billed by the volume of ingested data.
Step 1. Generate mock logs
Create a script file named generate_nginx_logs.sh and paste the following content into the file. This script writes a standard NGINX access log entry to the /var/log/nginx/access.log file every 5 seconds.
Grant the execute permission on the file: chmod +x generate_nginx_logs.sh.
Run the script in the background: nohup ./generate_nginx_logs.sh &.
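The script body is not shown above. The following command creates a minimal sketch of such a script in one step; it emits entries matching the sample log used later in this guide. The request paths, status code mix, and byte counts are illustrative assumptions.

```shell
cat > generate_nginx_logs.sh <<'EOF'
#!/bin/bash
# Append one mock NGINX access log entry every 5 seconds until the script is killed.
# LOG_FILE can be overridden for testing; it defaults to the path used in this guide.
LOG_FILE="${LOG_FILE:-/var/log/nginx/access.log}"
mkdir -p "$(dirname "$LOG_FILE")"

paths=("/index.html" "/nginx-logo.png" "/api/data" "/favicon.ico")
statuses=(200 200 200 301 404 500)   # weighted toward 200

while true; do
  ip="192.168.$((RANDOM % 254 + 1)).$((RANDOM % 254 + 1))"
  path="${paths[RANDOM % ${#paths[@]}]}"
  status="${statuses[RANDOM % ${#statuses[@]}]}"
  ts="$(date '+%d/%b/%Y:%H:%M:%S %z')"
  rt="0.$(printf '%03d' "$((RANDOM % 500))")"
  # Field order matches the sample entry shown in the collection configuration step:
  # remote_addr - remote_user [time_local] "request" request_time request_length
  #   status body_bytes_sent "http_referer" "http_user_agent"
  echo "$ip - - [$ts] \"GET $path HTTP/1.1\" $rt $((RANDOM % 1000 + 100)) $status $((RANDOM % 2000 + 100)) \"-\" \"Mozilla/5.0\"" >> "$LOG_FILE"
  sleep 5
done
EOF
chmod +x generate_nginx_logs.sh
```

Set LOG_FILE to another path to try the script without touching /var/log/nginx.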
Step 2. Install LoongCollector
In the dialog box confirming that the logstore was created, click OK to open the Quick Data Import panel.
On the Single Line - Text Logs card, click Integrate Now.
Configure the machine group.
Scenario: Servers
Installation Environment: ECS
Click Create Machine Group. In the Create Machine Group panel, select the ECS instance.
Click Install and Create Machine Group. After the installation is complete, enter a name for the machine group, such as my-nginx-server, then click OK.
Note: If the installation fails or remains in a pending state, ensure the ECS instance and the project are in the same region.
Click Next to check the machine group's heartbeat status.
For a new machine group, if the heartbeat status is FAIL, click Automatic Retry. The status will change to OK in about two minutes.
Step 3. Create a collection configuration
After the heartbeat status is OK, click Next to open the Logtail Configurations page and configure the following parameters:
Configuration Name: Enter a name, such as nginx-access-log-config.
File Path: Enter the log collection path, /var/log/nginx/access.log.
Processing Configuration:
Log Sample: Click Add Log Sample and paste a sample log entry:
192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"
Processor Method: Select Data Parsing (NGINX Mode). In the NGINX Log Configuration field, configure the log_format. Copy and paste the following content, then click OK.
log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$request_time $request_length $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"';
In a production environment, this log_format must be consistent with the definition in your NGINX configuration file (usually located at /etc/nginx/nginx.conf).
Log parsing example:
Raw log
Structured log
192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"
body_bytes_sent: 368
http_referer: -
http_user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36
remote_addr: 192.168.*.*
remote_user: -
request_length: 514
request_method: GET
request_time: 0.000
request_uri: /nginx-logo.png
status: 200
time_local: 15/Apr/2025:16:40:00
Click Next to go to the Query and Analysis Configurations page. It takes about one minute for the collection configuration to take effect. Click Automatic Refresh. When preview data appears, the configuration is effective.
Step 4. Query and analyze logs
Click Next to proceed to the final page, then click Query Log. The system redirects you to the query and analysis page for the target logstore. Write SQL analysis statements to extract key business and operational metrics from the structured logs. Set the time range to the Last 15 Minutes.
If an error message appears, the index has not yet been configured. Close the dialog box and wait about one minute. Then view the log content from the access.log file.
Example 1: Total website page views (PV)
Count the total number of log entries within the specified time range.
* | SELECT count(*) AS pv
Example 2: Requests and error rate per minute
Calculate the total number of requests, the number of error requests (HTTP status code ≥ 400), and the error rate per minute.
* | SELECT date_trunc('minute', __time__) as time, count(1) as total_requests, count_if(status >= 400) as error_requests, round(count_if(status >= 400) * 100.0 / count(1), 2) as error_rate GROUP BY time ORDER BY time DESC LIMIT 100
Example 3: PV statistics by request method
Group and count page views by minute and request method (such as GET or POST).
* | SELECT date_format(minute, '%m-%d %H:%i') AS time, request_method, pv FROM ( SELECT date_trunc('minute', __time__) AS minute, request_method, count(*) AS pv FROM log GROUP BY minute, request_method ) ORDER BY minute ASC LIMIT 10000
Step 5. Set up monitoring alerts
Set up monitoring alerts to automatically send notifications when service anomalies occur, such as a sharp increase in errors.
In the navigation pane on the left, click Alerts.
Create an action policy:
On the Action Policy tab, click Create.
Set an ID, such as send-notification-to-admin, and a Name.
In the Primary Action Policy section, click Action Group.
Select a Notification Method, such as SMS Message, configure the Recipient, and select an Alert Template.
Click Confirm.
Create an alert rule:
On the Alert Rules tab, click Create Alert.
Enter a rule name, such as Too many server 5xx errors.
In the Query Statistics field, click Create to set query conditions.
Logstore: Select nginx-access-log.
Time Range: 15 minutes (Relative).
Query: Enter status >= 500 | SELECT *.
Click Preview to verify the data, then click OK.
Trigger Condition: Configure the rule to trigger a critical alert when the query result contains more than 100 entries.
This configuration triggers an alert when more than 100 5xx errors occur within 15 minutes.
Alert Policy: Select Simple Log Service Notification and enable it.
Action Policy: Select the action policy you created in the previous step.
Repeat Interval: Set it to 15 minutes to avoid excessive notifications.
Click OK to save the alert rule.
Verify the configuration: When alert conditions are met, an alert is sent to the configured notification channel. You can view all triggered alert records on the Alert History page.
Step 6. Clean up resources
To avoid charges, clean up all created resources after you complete the operations.
Stop the log generation script
Connect to the ECS instance and run the following command to stop the log generation script.
kill $(ps aux | grep '[g]enerate_nginx_logs.sh' | awk '{print $2}')
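To confirm that the script is no longer running, the same bracketed grep pattern (which prevents grep from matching its own process) can be reused:

```shell
# Lists any remaining generator process; prints a confirmation when none is left.
ps aux | grep '[g]enerate_nginx_logs.sh' || echo "generator stopped"
```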
Uninstall LoongCollector (Optional)
Download the uninstall script. To speed up the download, replace ${region_id} in the following command with the region ID of your ECS instance.
wget https://aliyun-observability-release-${region_id}.oss-${region_id}.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh -O loongcollector.sh;
Run the uninstall command.
chmod +x loongcollector.sh; sudo ./loongcollector.sh uninstall;
Delete the project.
Warning: Deleting a project permanently deletes all its log data and configuration information. Confirm your action carefully before deleting to avoid data loss.
On the project list page in the Simple Log Service console, find the project you created, for example, nginx-quickstart-abc.
In the Actions column, click Delete.
In the panel that appears, enter the project name and select a reason for deletion.
Click OK. This action deletes the project and all its associated resources, including logstores, collection configurations, and alert rules.
FAQ
What should I do if the displayed time is different from the original log time after collection?
By default, the time field (__time__) in SLS records the log's arrival time at the server. To use the time from the original log entry, add a time parsing plugin in the collection configuration.
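For the NGINX fields parsed in this guide, such a plugin could map the time_local field to the log time. The fragment below is a sketch based on the LoongCollector (iLogtail) native timestamp processor; verify the plugin name and parameters against your agent version before using it.

```yaml
processors:
  # Parse time_local (e.g. 15/Apr/2025:16:40:00) into the log's own timestamp.
  - Type: processor_parse_timestamp_native
    SourceKey: time_local
    SourceFormat: '%d/%b/%Y:%H:%M:%S'
    SourceTimezone: GMT+08:00   # adjust to the server's timezone
```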
Will I be charged for only creating a project and a logstore?
Yes. When you create a logstore, SLS reserves shard resources by default, which may incur active shard lease fees. For more information, see Why am I charged for active shard leases?
How do I troubleshoot log collection failures?
Log collection can fail due to abnormal heartbeats, collection errors, or incorrect LoongCollector (Logtail) configuration. See Troubleshoot Logtail collection failures.
Why can I query logs but not analyze them?
To analyze logs, you must configure a field index for the relevant fields and enable statistics. Check the index configuration of the logstore.
How do I stop being billed for SLS?
You cannot disable SLS after it is activated. If you no longer use the service, stop billing by deleting all projects under your account.