Simple Log Service lets you query and analyze logs after you configure indexes. Analysis is powered by SQL computing capabilities. This topic describes the basic syntax, limits, and SQL functions of the query and analysis feature.
Reading guide
Simple Log Service provides features for querying and analyzing logs. For more information, see Quick start for log query and analysis.
To query and analyze logs, you must first collect the logs in a Standard Logstore. For more information, see Manage Logstores. After you create indexes, you can perform queries and analysis only on incremental logs. To query and analyze historical log files, you must reindex them.
To query tens of billions of logs, see What do I do if the "Inaccurate query results" message appears in the console?.
Simple Log Service includes several reserved fields by default. To analyze these reserved fields, see Reserved fields.
Query and analysis
Simple Log Service lets you query billions to hundreds of billions of logs in seconds and run SQL statistical analysis on the results.
Basic syntax
A search statement and an analytic statement are separated by a vertical bar (|). A search statement can be used on its own, but an analytic statement must be used together with a search statement. This means that analysis is performed on search results or on all data.
Search statement|Analytic statement
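For example, assuming a Logstore with indexed fields named status and request_method (the field names are illustrative), the first statement below is a search statement used on its own, and the second combines a search statement with an analytic statement:

```sql
-- Search statement only: return logs whose status field is 200.
status: 200

-- Search statement + analytic statement: among logs with status 200,
-- count page views (PV) for each request method.
status: 200 | SELECT request_method, count(*) AS PV GROUP BY request_method
```

A search statement of * matches all logs, so a query such as * | SELECT ... analyzes the full data set.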
| Type | Description |
| --- | --- |
| Search statement | Important: Specify no more than 30 conditions in a search statement. |
| Analytic statement | Important: An analytic statement must be used together with a search statement. |
Simple Log Service provides ANTLR grammar files for query and analysis. You can use these files with the ANTLR tool to perform custom development based on SLS query statements.
Example
* | SELECT status, count(*) AS PV GROUP BY status
This query groups logs by the status field and counts the page views (PV) for each status code.
Advanced features
LiveTail: Monitor online logs in real time to simplify operations and maintenance (O&M).
LogReduce: Extract common patterns from similar logs during log collection to quickly understand the overall log structure.
Contextual query: View the contextual information of a specified log to facilitate troubleshooting and problem identification.
Field analysis: View field distribution, statistical metrics, and TOP 5 time series charts to help you understand your data.
Event configuration: Easily obtain detailed information about raw logs through event configuration.
Overview of datasets (StoreViews): Use the StoreView feature to perform joint queries across regions and Logstores.
Limits on the query feature
| Limit | Description |
| --- | --- |
| Number of keywords | The number of conditional keywords, excluding Boolean operators. A maximum of 30 keywords can be specified in each query. |
| Field value size | The maximum size of a single field value is 512 KB. The excess part is not included in queries. If a field value exceeds 512 KB, a keyword search may not find the log, but the stored data remains complete. Note: To set the maximum length of a log field value, see Why are field values truncated during query and analysis? |
| Number of concurrent operations | Each project supports a maximum of 100 concurrent query operations. For example, 100 users can run queries in different Logstores of the same project at the same time. |
| Returned results | Each query returns a maximum of 100 results per page. You can turn pages to retrieve all results. |
| Fuzzy query | In a fuzzy query, Simple Log Service finds a maximum of 100 matching terms and returns all logs that contain these terms and meet the search conditions. For more information, see Fuzzy query. |
| Sorting of query results | By default, results are sorted by time in descending order, accurate to the second. If log times include nanoseconds, results are sorted by nanosecond. |
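Because each query returns at most 100 results per page, a client typically pages through results with an increasing offset until a short page signals the end. The sketch below shows only the paging loop; query_page is a hypothetical stand-in for a real API call such as GetLogs, which accepts offset and line parameters.

```python
def fetch_all(query_page, page_size=100):
    """Collect all results by paging in steps of page_size (the per-page cap)."""
    results, offset = [], 0
    while True:
        page = query_page(offset=offset, line=page_size)
        results.extend(page)
        if len(page) < page_size:  # a short page means no more data
            break
        offset += page_size
    return results

# Stand-in for a real GetLogs call; 250 fake log entries.
logs = list(range(250))
def query_page(offset, line):
    return logs[offset:offset + line]
```

With the stand-in above, fetch_all(query_page) issues three calls (pages of 100, 100, and 50 entries) and returns all 250 entries.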
Limits of the analysis feature
| Limit | Standard instance | Dedicated SQL (SQL enhancement) | Dedicated SQL (Complete accuracy) |
| --- | --- | --- | --- |
| Concurrency | Up to 15 concurrent queries per project. | Up to 100 concurrent queries per project. | Up to 5 concurrent queries per project. |
| Data volume | A single query can scan up to 400 MB of log data (excluding cached data). Data beyond this limit is truncated, and the query results are marked as incomplete. | A single query can scan up to 2 GB of log data (excluding cached data). Data beyond this limit is truncated, and the query results are marked as incomplete. | Unlimited. |
| Method to enable | The log analysis feature is enabled by default. | Manually turn on the Dedicated SQL switch. | Manually turn on the Dedicated SQL switch. |
| Fee | Free of charge. | Charged based on the actual CPU time. | Charged based on the actual CPU time. |

The following limits apply to all instance types:

Data effectiveness mechanism: You can analyze only data written to Simple Log Service after the log analysis feature is enabled. To analyze historical data, you must reindex it.
Returned results: By default, an analysis returns up to 100 rows and 100 MB of data. If the results exceed 100 MB, an error is returned. To return more rows, use the LIMIT clause.
Maximum field length: The default maximum length of a single field value is 2,048 bytes (2 KB) and can be increased to 16,384 bytes (16 KB). Data beyond this limit is not included in query and analysis. Note: To change this limit, adjust Maximum Field Length in the index settings; the change applies only to new data. For more information, see Create indexes.
Timeout period: The maximum timeout period for an analysis operation is 55 seconds.
Number of bits for double-type field values: Double-type field values are limited to 52 bits. Values that exceed this limit lose floating-point precision.
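Because an analysis returns at most 100 rows by default, add a LIMIT clause when you need more rows, as in this variant of the earlier example (the field name status is illustrative):

```sql
-- Return up to 1,000 grouped rows instead of the default 100.
* | SELECT status, count(*) AS PV GROUP BY status LIMIT 1000
```

The 100 MB cap on the returned data volume still applies regardless of the LIMIT value.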
References
Related API operations
GetLogs - Query logs in a Logstore (results are not compressed)
GetLogsV2 - Query log data in a Logstore (results are compressed)
CreateScheduledSQL - Create a scheduled SQL task