A Logstore is a storage unit in Simple Log Service (SLS) that is used to collect, store, and query log data.
Core concepts
What is a Logstore
A Logstore is a data container in Simple Log Service. You can create multiple Logstores in a project to isolate and manage logs from different services or sources.
In addition, some cloud products and SLS features automatically create dedicated Logstores. These Logstores serve specific purposes, and you cannot write other data to them. For example:
internal-operation_log: Stores the detailed operation logs for SLS.
oss-log-store: Automatically created when you configure the storage of OSS access logs.
Logstore specifications
Simple Log Service offers two Logstore specifications: Standard and Query. They differ in features and cost.
| Type | Cost (index traffic fee comparison) | Scenarios |
| --- | --- | --- |
| Standard | USD 0.0875 per GB | Suitable for scenarios that require data analytics, real-time monitoring, and visualization capabilities, such as interactive analysis, real-time monitoring, or building observability systems. |
| Query | USD 0.0146 per GB | Does not support analytics. Suitable for archival scenarios such as log archiving, audit log storage, and troubleshooting that require fast retrieval of log content without analysis. Typical applications include long-term storage of large-scale logs with low access frequency. |
Scope and permissions
Create a basic Logstore
Console
Log on to the Simple Log Service console. In the Projects section, click the project that you want to manage.
On the Logstores tab, click the + icon.
On the Create Logstore page, configure the parameters and click OK.
Logstore Type: The default value is Standard.
Billing Mode:
Pay-as-you-go (cannot be changed after creation): You are billed for each resource that you use, such as storage, indexing, and read/write operations. A monthly free quota is provided to help you control costs in small-scale use cases.
Pay-by-ingested-data: You pay only for the raw data that you write. Storage and basic features are free for 30 days. This billing mode has a simpler and more cost-effective structure.
The pay-by-ingested-data mode is ideal when your data retention period is close to 30 days and the indexes you create are comparable in size to a full-text index.
Logstore Name: The name must be unique within the project. It serves as the unique identifier for the Logstore and cannot be changed after the Logstore is created.
Data Retention Period: The default value is 30 days.
Keep the default values for the other parameters. For a full list of parameters, see the following table.
API
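To create a Logstore programmatically, call the CreateLogStore operation. The following is a minimal sketch that assumes the aliyun-log Python SDK (aliyun-log-python-sdk); the endpoint, credentials, project name, and Logstore name are placeholders.

```python
from aliyun.log import LogClient

# Placeholders: replace with your own endpoint, credentials, project, and Logstore.
client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Create a Logstore with a 30-day retention period (ttl) and 2 shards.
client.create_logstore("your-project", "your-logstore", ttl=30, shard_count=2)
```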
Modify Logstore configuration
The following parameters can be configured when you create a Logstore. This section describes how to modify these parameters for an existing Logstore.
Click Log Storage. In the Logstores list, hover over the target Logstore and choose the option to modify its attributes.
In the Logstore Attributes panel, modify the configuration items based on the following scenarios.
Set the data retention period and delete logs
Console
In the Basic Information section, click Modify, change the data retention period, and then click Save.
SLS does not allow you to delete specific log entries. To remove logs, shorten the data retention period so that they expire. Alternatively, you can delete all logs by stopping billing or deleting the Logstore.
Specified Days: Specify an integer from 1 to 3,650. A value of 3,650 indicates permanent storage. When the retention period expires, the logs are deleted.
Permanent Storage: Permanently retains all logs in the Logstore.
The change takes effect immediately, but the deletion of expired data requires some time to complete.
API
In the UpdateLogStore operation, set the ttl parameter to adjust the log retention period.
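As an illustration, the following minimal sketch assumes the aliyun-log Python SDK and placeholder project and Logstore names; it calls UpdateLogStore through the SDK to set the retention period to 90 days.

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# ttl is the data retention period in days.
client.update_logstore("your-project", "your-logstore", ttl=90)
```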
Use tiered storage to optimize storage costs
Console
In the Basic Information section, click Modify and turn on the Intelligent Tiered Storage switch.
Configure the Storage Policy. The total retention period across all three storage tiers must match the Data Retention Period.
Hot storage: at least 7 days.
IA storage: at least 30 days.
Archive storage: at least 60 days.
Click Save. For more information, see Intelligent tiered storage.
API
In the UpdateLogStore operation, set the ttl, hot_ttl, and infrequentAccessTTL parameters to dynamically adjust the retention policy for tiered storage.
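The sketch below illustrates how the three values relate (the hot and IA periods must fit within the total retention period). It assumes the aliyun-log Python SDK; whether your SDK version exposes the hot_ttl and infrequent_access_ttl keyword arguments, and under which names, is an assumption to verify against the SDK documentation.

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Total retention of 180 days: 7 days hot, 30 days IA, and the remainder in archive.
# The keyword names below are assumptions that map to the ttl, hot_ttl, and
# infrequentAccessTTL fields of the UpdateLogStore operation.
client.update_logstore("your-project", "your-logstore",
                       ttl=180, hot_ttl=7, infrequent_access_ttl=30)
```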
Collect client-side logs
SLS provides the web tracking feature to collect logs from various clients, such as mini programs, mobile apps (iOS and Android), and web browsers.
You can use this feature in one of the following two ways:
Transmit data by using STS for authentication. This method is suitable for production scenarios. You do not need to modify the Logstore configuration.
Transmit data anonymously using OpenAPI. This method is suitable only for test scenarios. You must enable the switch in the Logstore. For more information, see the following content.
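As an illustration of the anonymous method, the following minimal sketch sends a single log entry to the web tracking GET endpoint (APIVersion 0.6.0); the project, endpoint, Logstore name, and field values are placeholders, and WebTracking must already be enabled on the Logstore.

```python
import requests

project = "your-project"                   # placeholder project name
endpoint = "cn-hangzhou.log.aliyuncs.com"  # placeholder regional endpoint
logstore = "your-logstore"                 # placeholder Logstore name

# Each query-string key/value pair (other than APIVersion) becomes a field of the log.
resp = requests.get(
    f"https://{project}.{endpoint}/logstores/{logstore}/track",
    params={"APIVersion": "0.6.0", "page": "/checkout", "status": "ok"},
    timeout=5,
)
print(resp.status_code)  # 200 means the log entry was accepted
```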
Console
In the Basic Information section, click Modify, turn on the WebTracking switch, and then click Save.
API
In the UpdateLogStore operation, set the enable_tracking parameter to true to enable the web tracking feature.
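A minimal sketch, again assuming the aliyun-log Python SDK and placeholder names:

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Allow the Logstore to accept anonymous writes from web tracking clients.
client.update_logstore("your-project", "your-logstore", enable_tracking=True)
```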
Automatically add the public IP address and arrival time to logs
After you enable this feature, the following information is automatically added to logs during data collection:
__tag__:__client_ip__: the public IP address of the device from which logs are sent.
__tag__:__receive_time__: the time when logs arrive at the SLS server. The time is a UNIX timestamp that indicates the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970.
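For illustration, a collected log entry with this feature enabled might carry fields such as the following; the field names come from this section, and all values are placeholders.

```python
# Illustrative log entry only; all values are placeholders.
log_with_meta = {
    "request_uri": "/api/v1/orders",
    "status": "200",
    "__tag__:__client_ip__": "203.0.113.10",   # public IP of the sending device
    "__tag__:__receive_time__": "1735689600",  # UNIX seconds when SLS received the log
}
```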
Console
In the Basic Information section, click Modify, turn on the Log Public IP switch, and then click Save.
API
In the UpdateLogStore operation, use the appendMeta parameter to enable this feature.
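A minimal sketch, assuming the aliyun-log Python SDK; the append_meta keyword is assumed to map to the appendMeta field of UpdateLogStore.

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# append_meta (assumed keyword name) adds __tag__:__client_ip__ and
# __tag__:__receive_time__ to logs during collection.
client.update_logstore("your-project", "your-logstore", append_meta=True)
```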
Adjust collection performance using shards
Each shard supports a write throughput of 5 MB/s or 500 writes per second and a read throughput of 10 MB/s or 100 reads per second. These are soft limits: if they are exceeded, the system makes a best effort to serve requests but does not guarantee service quality. If your read/write traffic exceeds the capacity of a shard, split the shard to increase read/write capacity.
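As a rough sizing illustration based on these per-shard limits, the sketch below estimates how many shards are needed to absorb a given peak write rate; the peak traffic figures are placeholders.

```python
import math

# Per-shard write limits described above.
WRITE_MB_PER_SEC = 5
WRITE_REQS_PER_SEC = 500

# Placeholder peak write traffic for your workload.
peak_mb_per_sec = 18
peak_reqs_per_sec = 1200

shards_needed = max(
    math.ceil(peak_mb_per_sec / WRITE_MB_PER_SEC),
    math.ceil(peak_reqs_per_sec / WRITE_REQS_PER_SEC),
)
print(shards_needed)  # 4 shards keep both write dimensions within the soft limits
```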
Console
In the Basic Information section, click Modify, turn on the Automatic Shard Splitting switch, set the split upper limit, and then click Save.
SLS lets you split and merge a specific shard.
API
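Automatic shard splitting can also be set through the UpdateLogStore operation. The following is a minimal sketch that assumes the aliyun-log Python SDK; the auto_split and max_split_shard keyword names are assumptions that map to the automatic-splitting settings.

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Enable automatic splitting with an upper limit of 16 shards.
# auto_split / max_split_shard keyword names are assumptions; verify with your SDK version.
client.update_logstore("your-project", "your-logstore",
                       auto_split=True, max_split_shard=16)
```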
Stop billing or delete a Logstore
After a Logstore is deleted, its stored log data is permanently deleted and cannot be recovered. Proceed with caution.
Console
You can perform cleanup before deletion.
Before you delete a Logstore, delete all its associated Logtail configurations.
If data shipping is enabled for the Logstore, stop writing new data to the Logstore and make sure that all existing data in the Logstore is shipped before you delete the Logstore.
Deletion procedure.
On the Logstores tab, hover over the target Logstore and choose the delete option.
In the Warning dialog box, click Confirm Deletion.
After deletion.
Storage fees are incurred on the day you delete the Logstore. No fees are generated from the following day onward.
After you delete a Logstore, the export tasks, data transformation jobs, and Scheduled SQL tasks that use the Logstore as a data source and the import tasks that use the Logstore as a destination are also deleted.
API
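Deletion maps to the DeleteLogStore operation. A minimal sketch, assuming the aliyun-log Python SDK and placeholder names; remember that this is irreversible.

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Permanently deletes the Logstore and all data in it. This cannot be undone.
client.delete_logstore("your-project", "your-logstore")
```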
Example configurations for common scenarios
Real-time monitoring and analysis for high-volume services
An online application generates a large volume of business logs in real time. When a failure occurs, you need to quickly locate error logs and monitor key metrics, such as queries per second (QPS) and response latency, with real-time alerts.
Recommended configuration: Standard Logstore + Pay-by-ingested-data + Automatic shard splitting.
Reasoning: A Standard Logstore supports analysis, real-time monitoring, and visualization. For high-volume log ingestion and analysis that may require extensive indexing, pay-by-ingested-data is recommended. Automatic shard splitting ensures sufficient performance for data ingestion and analysis.
Compliance, auditing, and security
Industry regulations require you to store user activity logs and security logs for six months or longer for auditing purposes. However, these logs are queried and analyzed infrequently.
Recommended configuration: Query Logstore + Intelligent tiered storage.
Reasoning: A Query Logstore supports queries only but has lower index traffic costs than a Standard Logstore. Intelligent tiered storage classifies log data based on its age, reducing long-term storage costs.
Related references
Comparison of Logstores in pay-as-you-go mode
The Query specification supports only the pay-as-you-go billing mode. The following table compares the Standard and Query Logstores in this mode.
| Item | Standard | Query |
| --- | --- | --- |
| Cost (index traffic) | USD 0.0875 per GB | USD 0.0146 per GB |
| Data collection (business system log scenarios only) | Supported | Does not support collecting cloud product logs. |
| | Supported | Supported |
| | Supported | Supported |
| Analysis (SQL statements) | Supported | Unsupported |
| | Supported | Supported |
| | Supported | Supported |
| | Supported | Unsupported |
| | Supported | Supported |
| | Supported | Unsupported |
| Alerting | Supported | Only supports alerts based on query statements. |
| | Supported | Unsupported |
| | Supported | Supported |
| | Supported | Supported |
| | Supported | Supported |
Limits
The pay-by-ingested-data mode supports the complete feature set of SLS. Value-added features such as query and analysis, data transformation, intelligent alerting, and data shipping and consumption do not incur additional fees, but are subject to quotas. The following table provides details.
| Quota limit | Description |
| --- | --- |
| Data transformation volume | The maximum amount of data that can be transformed for a single Logstore is 100 TB per month. |
| Scheduled SQL data volume | The maximum amount of data that can be processed by Scheduled SQL for a single Logstore is 20 TB per month. |
| Data shipping volume | The maximum amount of data that can be shipped from a single Logstore is 100 TB per month. |
| Data consumption volume | The maximum amount of data that can be consumed from a single Logstore is 100 TB per month. |
| Alerting job computation data volume | The maximum amount of data that can be computed for alerting jobs for a single Logstore is 100 TB per month. |
Billing
The cost of a Logstore is mainly determined by the selected billing mode.
Pay-as-you-go: You are billed for each resource that you use, such as storage capacity, index traffic, read/write operations, and the number of shards.
Pay-by-ingested-data: You are charged only for the amount of raw data that you write. This mode includes 30 days of free storage and multiple free features.
Key billing item prices:
Standard index traffic: USD 0.0875 per GB.
Query index traffic: USD 0.0146 per GB.
Cost optimization recommendations:
If your log retention period is close to or exceeds 30 days, the pay-by-ingested-data mode is typically more cost-effective.
For scenarios that require only archiving and retrieval, use the Query specification to reduce indexing costs.
Configure intelligent tiered storage to move infrequently accessed data to lower-cost storage tiers.
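As a rough illustration of how the specification choice affects cost, the sketch below compares monthly index-traffic spend using the two per-GB prices listed in this topic; the daily traffic figure is a placeholder, and actual bills also include storage, read/write, and other billing items.

```python
# Per-GB index traffic prices from this topic (USD).
STANDARD_PER_GB = 0.0875
QUERY_PER_GB = 0.0146

ingested_gb_per_day = 500   # placeholder daily index traffic
days_per_month = 30

standard_cost = STANDARD_PER_GB * ingested_gb_per_day * days_per_month
query_cost = QUERY_PER_GB * ingested_gb_per_day * days_per_month
print(f"Standard: {standard_cost:.2f} USD/month, Query: {query_cost:.2f} USD/month")
# Standard: 1312.50 USD/month, Query: 219.00 USD/month
```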
FAQ
Why can't I create a Logstore?
You can create up to 200 Logstores per project by default. To create more, either delete unused Logstores or request a quota increase.
Log on to the Simple Log Service console. In the Projects section, click the project that you want to manage.
On the project overview page, find Resource Quota in the basic information section and click Manage. In the Resource Quota panel, adjust the quota for the maximum number of Logstores and click Save to submit your request. The approval may take up to one hour.
Why are my logs in SLS missing?
Project or Logstore deletion
If you manually delete a project or Logstore, the logs cannot be recovered. You can use ActionTrail to query for project or Logstore deletion events within the last 90 days.
Overdue payment
If your payment is more than 7 days overdue, your SLS projects are reclaimed. All data is erased and cannot be recovered. For more information, see Overdue payments.