Sizing, performance, and cost considerations for the Splunk Add-on for AWS

Before you configure the Splunk Add-on for Amazon Web Services (AWS), review these sizing, performance, and cost considerations.

See the following table for the recommended maximum daily indexing volume on a clustered indexer for different AWS source types. This information is based on a generic Splunk hardware configuration. Adjust the number of indexers in your cluster based on your actual system performance: add indexers to a cluster to improve indexing and search retrieval performance, or remove indexers from a cluster to avoid within-cluster data replication traffic.

These sizing recommendations are based on the Splunk platform hardware configurations in the following table. You can also use the System requirements for use of Splunk Enterprise on-premises in the Splunk Enterprise Installation Manual as a reference.

Input configuration screens require data transfer from AWS to populate the services, queues, and buckets available to your accounts. If your network connection to AWS is slow, these screens might be slow to load. If you encounter timeout issues, you can manually type in resource names.

Performance for the Splunk Add-on for AWS data inputs

The rate of data ingestion for this add-on depends on several factors: deployment topology, number of keys in a bucket, file size, file compression format, number of events in a file, event size, and hardware and networking conditions.

See the following tables for measured throughput data achieved under certain operating conditions. Use this information to optimize the Splunk Add-on for AWS in your own production environment. Because performance varies based on user characteristics, application usage, server configurations, and other factors, specific performance results cannot be guaranteed. Contact Splunk Support for accurate performance tuning and sizing.

The Kinesis input for the Splunk Add-on for AWS has its own performance data. See Configure Kinesis inputs for the Splunk Add-on for AWS.

Reference hardware and software environment

Throughput data and conclusions are based on performance testing using Splunk platform instances (dedicated heavy forwarders and indexers) running in the following environment.

The following setting is configured in the outputs.conf file on the heavy forwarder:

maxQueueSize = 15MB

Measured performance data

The throughput data is the maximum performance for each single input achieved in performance testing under specific operating conditions, and is subject to change when any of the hardware or software variables changes. Throughput was measured for the following file and event formats:

- plain text, syslog, event size 250 B, S3 key size 2 MB
- gz, json, event size 720 B, S3 key size 2 MB
- gz, syslog, event size 250 B, S3 key size 2 MB

An API throttling error occurs if input streams are greater than 1,000. Consolidate AWS accounts during add-on configuration to reduce CPU usage and increase throughput performance.

The following throughput data was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment. Performance testing of the SQS-based S3 input indicates that optimal throughput is reached when running four inputs on a single heavy forwarder instance. To achieve higher throughput beyond this bottleneck, you can further scale out data collection by creating multiple heavy forwarder instances, each configured with up to four SQS-based S3 inputs, to ingest data concurrently by consuming messages from the same SQS queue.

The following input number ceiling was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment, with CPU and memory resources utilized to their fullest. It is possible to configure more inputs than the maximum number indicated in the table if you have a smaller event size, fewer keys per bucket, or more available CPU and memory resources in your environment.
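As a sketch, the forwarder output tuning mentioned in the reference environment above could appear in outputs.conf as follows. Only the maxQueueSize = 15MB value comes from the text; the stanza name and indexer server list are illustrative assumptions for a typical heavy forwarder sending to clustered indexers:

```ini
# outputs.conf on the heavy forwarder (illustrative sketch).
# Only maxQueueSize = 15MB is taken from the reference environment above;
# the group name and server list are placeholder assumptions.
[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
maxQueueSize = 15MB
```

Raising maxQueueSize gives the forwarder more in-memory buffer before it blocks its inputs, which matters when bursty S3 reads outpace indexer acknowledgment.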
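The scaling guidance above implies some simple capacity arithmetic. The sketch below uses hypothetical helper names; the four-inputs-per-forwarder ceiling comes from the performance testing described above, while the per-indexer daily volume is whatever your sizing table recommends for your source types:

```python
import math

def forwarders_needed(total_sqs_inputs: int, inputs_per_forwarder: int = 4) -> int:
    """Heavy forwarders required when each runs at most `inputs_per_forwarder`
    SQS-based S3 inputs (testing above found throughput peaks at four)."""
    return math.ceil(total_sqs_inputs / inputs_per_forwarder)

def indexers_needed(daily_volume_gb: float, per_indexer_gb: float) -> int:
    """Clustered indexers required for a target daily indexing volume,
    given the recommended maximum daily volume per indexer."""
    return math.ceil(daily_volume_gb / per_indexer_gb)

# Example: 10 SQS-based S3 inputs spread across forwarders at 4 inputs each,
# and 900 GB/day against an assumed 300 GB/day per-indexer recommendation.
print(forwarders_needed(10))      # 3
print(indexers_needed(900, 300))  # 3
```

Because every forwarder consumes from the same SQS queue, adding forwarders scales collection horizontally without re-partitioning the data source.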