diff --git a/packages/cloudflare_logpush/_dev/build/docs/README.md b/packages/cloudflare_logpush/_dev/build/docs/README.md index 6b285a8ce32..2745308bdb8 100644 --- a/packages/cloudflare_logpush/_dev/build/docs/README.md +++ b/packages/cloudflare_logpush/_dev/build/docs/README.md @@ -1,91 +1,131 @@ -# Cloudflare Logpush +# Cloudflare Logpush Integration for Elastic ## Overview -The [Cloudflare Logpush](https://www.cloudflare.com/) integration allows you to monitor Access Request, Audit, CASB, Device Posture, DNS, DNS Firewall, Firewall Event, Gateway DNS, Gateway HTTP, Gateway Network, HTTP Request, Magic IDS, NEL Report, Network Analytics, Sinkhole HTTP, Spectrum Event, Network Session and Workers Trace Events logs. Cloudflare is a content delivery network and DDoS mitigation company. Cloudflare provides a network designed to make everything you connect to the Internet secure, private, fast, and reliable; secure your websites, APIs, and Internet applications; protect corporate networks, employees, and devices; and write and deploy code that runs on the network edge. +The [Cloudflare Logpush](https://developers.cloudflare.com/logs/logpush/) integration allows you to monitor Access Request, Audit, CASB, Device Posture, DLP Forensic Copies, DNS, DNS Firewall, Email Security Alerts, Firewall Event, Gateway DNS, Gateway HTTP, Gateway Network, HTTP Request, Magic IDS, NEL Report, Network Analytics, Page Shield, Sinkhole HTTP, Spectrum Event, Zero Trust Network Session, and Workers Trace Events logs. -The Cloudflare Logpush integration can be used in the following modes to collect data: -- HTTP Endpoint mode - Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent. -- AWS S3 polling mode - Cloudflare writes data to S3 and Elastic Agent polls the S3 bucket by listing its contents and reading new files. 
-- AWS S3 SQS mode - Cloudflare writes data to S3, S3 pushes a new object notification to SQS, Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple Agents can be used in this mode. -- Azure Blob Storage polling mode - Cloudflare writes data to Azure Blob Storage and Elastic Agent polls the Azure Blob Storage containers by listing its contents and reading new files. -- Google Cloud Storage polling mode - Cloudflare writes data to Google Cloud Storage and Elastic Agent polls the GCS buckets by listing its contents and reading new files. +Cloudflare is a content delivery network and DDoS mitigation company. Cloudflare provides a network designed to make everything you connect to the Internet secure, private, fast, and reliable; secure your websites, APIs, and Internet applications; protect corporate networks, employees, and devices; and write and deploy code that runs on the network edge. -For example, you could use the data from this integration to know which websites have the highest traffic, which areas have the highest network traffic, or observe mitigation statistics. +### Compatibility -## Data streams +This integration follows the log schemas and field definitions published in the [Cloudflare Log fields reference](https://developers.cloudflare.com/logs/reference/log-fields/). -The Cloudflare Logpush integration collects logs for the following types of events. For more information on each dataset, refer to the Logs reference section at the end of this page. 
+Cloudflare Logpush supports delivering logs to the following destinations, which can all be consumed by this integration: -### Zero Trust events +- [HTTP destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/) +- [Amazon S3](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/) +- [S3-compatible endpoints](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/) (including [Cloudflare R2](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/)) +- [Google Cloud Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/) +- [Microsoft Azure Blob Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) -**Access Request**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/access_requests/). +### How it works -**Audit**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/). +Cloudflare Logpush pushes logs to the destination of your choice. Elastic Agent then reads those logs and ships them to Elasticsearch, where they are processed through each data stream's ingest pipeline. -**CASB findings**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/casb_findings/). +The integration supports the following collection modes: -**Device Posture Results**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/device_posture_results/). +- **HTTP Endpoint mode** — Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent. +- **AWS S3 polling mode** — Cloudflare writes logs to an S3 bucket and Elastic Agent polls the bucket by listing its contents and reading new files. 
+- **AWS S3 SQS mode** — Cloudflare writes logs to S3; S3 publishes object-created notifications to an SQS queue; Elastic Agent receives those notifications from SQS and reads the corresponding S3 objects. This mode supports horizontal scaling across multiple agents. +- **S3-compatible (Cloudflare R2) polling mode** — Cloudflare writes logs to an R2 or other S3-compatible bucket and Elastic Agent polls the bucket using the S3 API. +- **Azure Blob Storage polling mode** — Cloudflare writes logs to an Azure Blob Storage container and Elastic Agent polls the container by listing its contents and reading new files. +- **Google Cloud Storage polling mode** — Cloudflare writes logs to a GCS bucket and Elastic Agent polls the bucket by listing its contents and reading new files. -**DLP Forensic Copies**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/dlp_forensic_copies/). +## What data does this integration collect? -**Email Security Alerts**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/email_security_alerts/). +The Cloudflare Logpush integration collects logs for the following Cloudflare [datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/). Data streams are grouped by whether the underlying dataset is classified as a Cloudflare [Zero Trust dataset](https://developers.cloudflare.com/cloudflare-one/insights/logs/logpush/#zero-trust-datasets) or a non Zero Trust dataset. -**Gateway DNS**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/). +### Zero Trust events -**Gateway HTTP**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/). +- `access_request`: HTTP requests to sites protected by Cloudflare Access. See [Access Requests schema](https://developers.cloudflare.com/logs/reference/log-fields/account/access_requests/). 
+- `audit`: Authentication events through Cloudflare Access, plus account-level configuration and administrative actions. See [Audit Logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/). +- `casb`: Security issues detected by Cloudflare CASB in connected SaaS applications. See [CASB Findings schema](https://developers.cloudflare.com/logs/reference/log-fields/account/casb_findings/). +- `device_posture`: Device posture status from the Cloudflare One Client (WARP). See [Device Posture Results schema](https://developers.cloudflare.com/logs/reference/log-fields/account/device_posture_results/). +- `gateway_dns`: DNS queries inspected by Cloudflare Gateway. See [Gateway DNS schema](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/). +- `gateway_http`: HTTP requests inspected by Cloudflare Gateway. See [Gateway HTTP schema](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/). +- `gateway_network`: Network packets inspected by Cloudflare Gateway. See [Gateway Network schema](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/). +- `network_session`: Network session logs for traffic proxied by Cloudflare Gateway. See [Zero Trust Network Session schema](https://developers.cloudflare.com/logs/reference/log-fields/account/zero_trust_network_sessions/). -**Gateway Network**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/). +### Non Zero Trust events -**Zero Trust Network Session**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/zero_trust_network_sessions/). +- `dns`: Zone-scoped authoritative DNS query logs. See [DNS logs schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/dns_logs/). +- `dns_firewall`: Cloudflare DNS Firewall query and response logs. 
See [DNS Firewall logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/dns_firewall_logs/). +- `dlp_forensic_copies`: Data Loss Prevention forensic copies of content that matched a DLP profile. See [DLP Forensic Copies schema](https://developers.cloudflare.com/logs/reference/log-fields/account/dlp_forensic_copies/). +- `email_security_alerts`: Cloudflare Email Security alerts for phishing, malware, and other email-based threats. See [Email Security Alerts schema](https://developers.cloudflare.com/logs/reference/log-fields/account/email_security_alerts/). +- `firewall_event`: Zone-level Firewall events for requests mitigated by Cloudflare security products (WAF, Rate Limiting, Firewall Rules, etc.). See [Firewall Events schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/firewall_events/). +- `http_request`: HTTP/HTTPS request logs served at the Cloudflare edge. See [HTTP Requests schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/). +- `magic_ids`: Magic Network Monitoring IDS detection logs. See [Magic IDS Detections schema](https://developers.cloudflare.com/logs/reference/log-fields/account/magic_ids_detections/). +- `nel_report`: Network Error Logging (NEL) reports collected from end-user browsers. See [NEL Reports schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/nel_reports/). +- `network_analytics`: Network Analytics (Magic Transit / Magic WAN packet-sampled flow) logs. See [Network Analytics Logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/). +- `page_shield_events`: Page Shield events reporting changes to scripts and connections observed on protected zones. See [Page Shield Events schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/page_shield_events/). +- `sinkhole_http`: HTTP traffic captured by Cloudflare sinkholes. 
See [Sinkhole HTTP logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/sinkhole_http_logs/). +- `spectrum_event`: Cloudflare Spectrum events for TCP/UDP applications proxied through Cloudflare. See [Spectrum Events schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/spectrum_events/). +- `workers_trace`: Cloudflare Workers Trace Events with execution logs and exceptions for Workers scripts. See [Workers Trace Events schema](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/). -### Non Zero Trust events +### Supported use cases -**DNS**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/dns_logs/). +Integrating Cloudflare Logpush with Elastic provides centralized visibility across Cloudflare's edge, Zero Trust, and network-layer products. Common use cases include: -**DNS Firewall**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/dns_firewall_logs/). +- Investigating traffic, WAF, and DDoS-mitigation events from the Cloudflare edge (`http_request`, `firewall_event`, `network_analytics`). +- Monitoring Zero Trust user activity, policy decisions, and device posture (`gateway_http`, `gateway_dns`, `gateway_network`, `access_request`, `device_posture`, `network_session`). +- Detecting data exfiltration and SaaS misconfigurations (`dlp_forensic_copies`, `casb`, `email_security_alerts`). +- Auditing administrative activity on the Cloudflare account (`audit`). +- Troubleshooting DNS and client-side performance issues (`dns`, `dns_firewall`, `nel_report`, `workers_trace`). -**Firewall Event**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/firewall_events/). +## What do I need to use this integration? -**HTTP Request**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/). 
+### From Elastic -**Magic IDS**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/magic_ids_detections/). +You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. -**NEL Report**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/nel_reports/). +### From Cloudflare -**Network Analytics**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/). +To use this integration, you must be able to create and manage [Cloudflare Logpush jobs](https://developers.cloudflare.com/logs/logpush/logpush-job/) for the datasets you want to collect. -**Page Shield events**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/page_shield_events/). +**Permissions** -**Sinkhole HTTP**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/sinkhole_http_logs/). +Creating and managing Logpush jobs requires an API token or user role with the `Logs Write` permission (or a role that includes it, such as **Super Administrator**, **Administrator**, or **Log Share** with edit permissions). Refer to [Cloudflare Logpush permissions](https://developers.cloudflare.com/logs/logpush/permissions/) for details. -**Spectrum Event**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/spectrum_events/). +- **Zone-scoped datasets** (for example, `http_requests`, `firewall_events`, `dns_logs`, `spectrum_events`, `nel_reports`, `page_shield_events`) require a **zone-scoped token**. 
+- **Account-scoped datasets** (for example, `audit_logs`, `access_requests`, `casb_findings`, `device_posture_results`, `dlp_forensic_copies`, `email_security_alerts`, `gateway_*`, `dns_firewall_logs`, `magic_ids_detections`, `network_analytics_logs`, `sinkhole_http_logs`, `workers_trace_events`, `zero_trust_network_sessions`) require an **account-scoped token**. +- Zero Trust datasets (Access, Gateway, DEX) additionally require `Zero Trust: PII Read`. -**Workers Trace Events**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/). +**Destination-specific credentials** -## Requirements +Depending on the delivery destination, you also need: -You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. +- **AWS S3 / S3-compatible** — an S3 bucket (or Cloudflare R2 bucket) and credentials (Access Key ID / Secret Access Key, or an IAM role) that Elastic Agent can use to list and read objects. For SQS-based delivery, an SQS queue subscribed to S3 object-created events. +- **Google Cloud Storage** — a GCS bucket and a service account key (JSON) with read access to the bucket. +- **Azure Blob Storage** — a storage account, a blob container, and either a shared access key, a connection string, or OAuth2 client credentials with read access to the container. +- **HTTP Endpoint** — a reachable HTTPS endpoint exposed by Elastic Agent. Cloudflare requires a valid TLS certificate on the destination. + +## How do I deploy this integration? -This module has been tested against **Cloudflare version v4**. +This integration supports Elastic Agent-based installations. -**Note**: It is recommended to use AWS SQS for Cloudflare Logpush. +### Agent-based installation -## Setup +Elastic Agent must be installed. 
For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). You can install only one Elastic Agent per host. -### Collect data from AWS S3 Bucket +### Onboard and configure -- Configure [Cloudflare Logpush to Amazon S3](https://developers.cloudflare.com/logs/get-started/enable-destinations/aws-s3/) to send Cloudflare's data to an AWS S3 bucket. -- The default values of the "Bucket List Prefix" are listed below. However, users can set the parameter "Bucket List Prefix" according to their requirements. +Configure one of the following delivery pipelines before enabling the integration in Elastic. - | Data Stream Name | Bucket List Prefix | +#### Collect data from AWS S3 Bucket + +- Configure [Cloudflare Logpush to Amazon S3](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/) to send Cloudflare's data to an AWS S3 bucket. +- The default values of the **Bucket Prefix** are listed below. However, users can set the parameter **Bucket Prefix** according to their requirements. + + | Data Stream Name | Bucket Prefix | | -------------------------- | ---------------------- | | Access Request | access_request | | Audit Logs | audit_logs | | CASB findings | casb | | Device Posture Results | device_posture | + | DLP Forensic Copies | dlp_forensic_copies | | DNS | dns | | DNS Firewall | dns_firewall | + | Email Security Alerts | email_security_alerts | | Firewall Event | firewall_event | | Gateway DNS | gateway_dns | | Gateway HTTP | gateway_http | @@ -94,73 +134,76 @@ This module has been tested against **Cloudflare version v4**. 
| Magic IDS | magic_ids | | NEL Report | nel_report | | Network Analytics | network_analytics_logs | + | Page Shield Events | page_shield_events | | Zero Trust Network Session | network_session | | Sinkhole HTTP | sinkhole_http | | Spectrum Event | spectrum_event | | Workers Trace Events | workers_trace | -### Collect data from AWS SQS +#### Collect data from AWS SQS 1. If Logpush forwarding to an AWS S3 Bucket hasn't been configured, then first set up an AWS S3 Bucket as mentioned in the above documentation. 2. Follow the steps below for each Logpush data stream that has been enabled: - 1. Create an SQS queue - - To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html). - - While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket. - 2. Setup event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings: + 1. Create an SQS queue + - To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html). + - While creating the SQS queue, provide the ARN of the S3 bucket created above. + 2. Set up event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings: - Event type: `All object create events` (`s3:ObjectCreated:*`) - - Destination: SQS Queue - - Prefix (filter): enter the prefix for this Logpush data stream, e.g.
`audit_logs/` - Select the SQS queue that has been created for this data stream + - Destination: SQS Queue + - Prefix (filter): enter the prefix for this Logpush data stream, e.g. `audit_logs/` + - Select the SQS queue that has been created for this data stream - **Note**: - - A separate SQS queue and S3 bucket notification is required for each enabled data stream. - - Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2) - - Credentials for the above AWS S3 and SQS input types should be configured using the [link](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config). - - Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case. +**Note:** +- A separate SQS queue and S3 bucket notification are required for each enabled data stream. +- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2). +- Credentials for the above AWS S3 and SQS input types should be configured as described in the [AWS credentials documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config). +- The AWS S3 bucket polling and AWS SQS collection modes are mutually exclusive; enable only one per data stream. +- It is recommended to use AWS SQS for Cloudflare Logpush. -### Collect data from S3-Compatible Cloudflare R2 Buckets +#### Collect data from S3-Compatible Cloudflare R2 Buckets -- Configure the [Data Forwarder](https://developers.cloudflare.com/logs/get-started/enable-destinations/r2/) to push logs to Cloudflare R2.
+- Configure [Cloudflare Logpush to Cloudflare R2](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/) (or another [S3-compatible endpoint](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/)) to push logs into an R2 bucket. -**Note**: -- When creating the API token, make sure it has [Admin permissions](https://developers.cloudflare.com/r2/api/s3/tokens/#permissions). This is needed to list buckets and view bucket configuration. +**Note:** +- To obtain the **Access Key ID** and **Secret Access Key**, create an R2 API token by following the [R2 authentication documentation](https://developers.cloudflare.com/r2/api/tokens/). Once the token is created successfully, Cloudflare will display the **Access Key ID** and **Secret Access Key** values. Use these credentials to authenticate the integration. +- When creating the R2 API token, make sure it has [Admin permissions](https://developers.cloudflare.com/r2/api/s3/tokens/#permissions). This is needed to list buckets and view bucket configuration. When configuring the integration to read from S3-Compatible Buckets such as Cloudflare R2, the following steps are required: -- Enable the toggle `Collect logs via S3 Bucket`. -- Make sure that the Bucket Name is set. -- Although you have to create an API token, that token should not be used for authentication with the S3 API. You just have to set the Access Key ID and Secret Access Key. -- Set the endpoint URL which can be found in Bucket Details. Endpoint should be a full URI that will be used as the API endpoint of the service. For Cloudflare R2 buckets, the URI is typically in the form of `https(s)://.r2.cloudflarestorage.com`. +- Enable the **Collect logs via S3 Bucket** toggle. +- Set the **S3-Compatible Bucket Name** (shown as `[Global][S3] S3-Compatible Bucket Name` in the UI) to the R2 bucket name. +- Set the **Endpoint** field to the API endpoint shown in the bucket details. 
It must be a full URI used as the API endpoint of the service. For Cloudflare R2 buckets, the URI is typically of the form `https://.r2.cloudflarestorage.com`. - Set the **Region** field to `auto`. This is required for all non-AWS S3-compatible buckets on Elastic Agent 8.19.12 and later. For Cloudflare R2, the region is always `auto` per the [R2 S3 API documentation](https://developers.cloudflare.com/r2/api/s3/api/#bucket-region). -- Bucket Prefix is optional for each data stream. +- **Bucket Prefix** is optional for each data stream. -### Collect data from GCS Buckets +#### Collect data from GCS Buckets -- Configure the [Data Forwarder](https://developers.cloudflare.com/logs/get-started/enable-destinations/google-cloud-storage/) to ingest data into a GCS bucket. -- Configure the GCS bucket names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Google Cloud Storage" section. -- Make sure the service account and authentication being used, has proper levels of access to the GCS bucket [Manage Service Account Keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys/) +- Configure [Cloudflare Logpush to Google Cloud Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/) to ingest data into a GCS bucket. +- Configure the GCS bucket names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Google Cloud Storage" section. +- Make sure the service account and authentication being used has proper levels of access to the GCS bucket. Refer to [Manage Service Account Keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys/) for more details. -**Note**: +**Note:** - The GCS input currently does not support fetching of buckets using bucket prefixes, so the bucket names have to be configured manually for each data stream. 
- The GCS input accepts a service account JSON key or a service account JSON file for authentication. - The GCS input supports JSON/NDJSON data. -### Collect data from Azure Blob Storage +#### Collect data from Azure Blob Storage -- [Enable Microsoft Azure](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) to ingest data into Azure Blob Storage containers. -- Configure Azure Blob Storage container names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Azure Blob Storage" section. -- Make sure the storage account and authentication being used, has proper levels of access to the Azure Blob Storage Container. Please follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal) for more details. -- If you want to use RBAC for your account please follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory). +- Configure [Cloudflare Logpush to Microsoft Azure](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) to ingest data into Azure Blob Storage containers. +- Configure Azure Blob Storage container names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Azure Blob Storage" section. +- Make sure the storage account and authentication being used has proper levels of access to the Azure Blob Storage Container. Please follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal) for more details. +- If you want to use RBAC for your account, please follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory). 
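+The Azure Logpush job itself can also be created through the Cloudflare API rather than the dashboard. The sketch below only builds the request body for the job-creation endpoint; the storage account, container path, SAS query string, and job name are hypothetical placeholders, and the `azure://` destination_conf form follows the Cloudflare Azure destination guide linked above.

```python
import json

# Hypothetical placeholder values -- substitute your own storage account,
# container path, and SAS query string (Write-only, Blob-only, Object-only).
storage_account = "mystorageaccount"
container_path = f"{storage_account}.blob.core.windows.net/cloudflare-logs/audit"
sas_query = "sv=2022-11-02&ss=b&srt=o&sp=w&se=2030-01-01"

# Request body for the account-scoped POST .../logpush/jobs endpoint.
# destination_conf uses the azure:// scheme with the container path and
# the SAS token appended as the query string.
job = {
    "name": "azure-audit-logs",
    "dataset": "audit_logs",
    "enabled": True,
    "destination_conf": f"azure://{container_path}?{sas_query}",
}

print(json.dumps(job, indent=2))
```

+The resulting body would be sent as a POST to the account-scoped `logpush/jobs` endpoint using an API token that carries the `Logs Write` permission described earlier.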
-**Note**: +**Note:** - The Azure Blob Storage input does not support fetching from containers using container prefixes, so the containers' names must be configured manually for each data stream. - The Azure Blob Storage input accepts a service account key (shared credentials key), service account URI (connection string) and OAuth2 credentials for authentication. - The Azure Blob Storage input only supports JSON/NDJSON data. -### Collect data from the Cloudflare HTTP Endpoint +#### Collect data from the Cloudflare HTTP Endpoint -- Refer to [Enable HTTP destination](https://developers.cloudflare.com/logs/get-started/enable-destinations/http/) for Cloudflare Logpush. -- Add same custom header along with its value on both the side for additional security. +- Refer to [Enable HTTP destination](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/) for Cloudflare Logpush. +- Add the same custom header along with its value on both sides (Cloudflare job and Elastic Agent HTTP input) for additional security. - For example, while creating a job along with a header and value for a particular dataset: + ``` curl --location --request POST 'https://api.cloudflare.com/client/v4/zones//logpush/jobs' \ --header 'X-Auth-Key: ' \ @@ -175,231 +218,284 @@ curl --location --request POST 'https://api.cloudflare.com/client/v4/zones/ **Integrations**. -2. In the integrations search bar type **Cloudflare Logpush**. -3. Click the **Cloudflare Logpush** integration from the search results. -4. Click the **Add Cloudflare Logpush** button to add Cloudflare Logpush integration. -5. Enable the Integration with the HTTP Endpoint, AWS S3 input or GCS input. -6. Under the AWS S3 input, there are two types of inputs: using AWS S3 Bucket or using SQS. -7. Configure Cloudflare to send logs to the Elastic Agent via HTTP Endpoint, or any R2, AWS or GCS Bucket following the specific guides above. +1. In the top search bar in Kibana, search for **Integrations**. +2. 
In the search bar, type **Cloudflare Logpush**. +3. Select the **Cloudflare Logpush** integration from the search results. +4. Select **Add Cloudflare Logpush** to add the integration. +5. Enable and configure only the collection methods which you will use. + + * To **Collect Cloudflare Logpush logs via HTTP Endpoint**, you'll need to: + - Configure the **Listen Address** and, per data stream, the **Listen Port** and **URL**. + - Optionally configure **Secret Header** / **Secret Value** and **SSL Configuration** to secure the endpoint. + * To **Collect Cloudflare Logpush logs via AWS S3, AWS SQS, or S3-Compatible Buckets**, you'll need to: + - Enable the **Collect logs via S3 Bucket** toggle when polling an S3 or S3-compatible bucket directly. Leave it disabled to consume S3 object-created events from an SQS queue. + - For direct polling, configure the **[S3] Bucket ARN**, the **[S3] Access Point ARN**, or the **[Global][S3] S3-Compatible Bucket Name** (for Cloudflare R2 and other S3-compatible providers). For S3-compatible buckets also set **Endpoint** and **Region**. + - Configure credentials using any of: **Access Key ID** / **Secret Access Key** (plus an optional **Session Token**), a **Role ARN**, or a **Shared Credential File** / **Credential Profile Name**. + - For each enabled data stream, set the **[SQS] Queue URL** (when using SQS) or the **[S3] Bucket Prefix** (when polling an S3 / S3-compatible bucket). For R2 / S3-compatible buckets you may also override the bucket name per data stream using the **[][S3] S3-Compatible Bucket Name** field. + - Tune throughput with **[S3/SQS] Number of Workers** and **[S3] Interval** (polling mode) or **[SQS] Visibility Timeout** / **[SQS] API Timeout** (SQS mode). + * To **Collect Cloudflare Logpush logs via Google Cloud Storage**, you'll need to: + - Configure **Project Id** and either **JSON Credentials key** or **JSON Credentials file path**. 
+ - For each data stream, configure the **Buckets** list and optionally tune **Maximum number of workers**, **Polling**, **Polling interval**, and **Bucket Timeout**. + * To **Collect Cloudflare Logpush logs via Azure Blob Storage**, you'll need to: + - Configure **Account Name** and (optionally) **Storage URL**, and authenticate using either a **Service Account Key**, a **Service Account URI**, or by enabling **Collect logs using OAuth2 authentication** and supplying **Client ID (OAuth2)**, **Client Secret (OAuth2)**, and **Tenant ID (OAuth2)**. + - For each data stream, configure the **Containers** list and optionally tune **Maximum number of workers**, **Polling**, and **Polling interval**. + +6. Select **Save and continue** to save the integration. + +### Validation -## Logs reference +#### Dashboards populated -### access_request +1. In the top search bar in Kibana, search for **Dashboards**. +2. In the search bar, type **Cloudflare Logpush**. +3. Select a dashboard for the dataset you are collecting, and verify the dashboard information is populated. + +## Troubleshooting + +- For help with Elastic ingest tools, check [Common problems](https://www.elastic.co/docs/troubleshoot/ingest/fleet/common-problems). +- For Cloudflare-side troubleshooting and delivery status, refer to the [Logpush health dashboard](https://developers.cloudflare.com/logs/logpush/logpush-health/) and the relevant [destination-specific troubleshooting guide](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/). +- When collecting from Cloudflare R2 via the AWS S3 input, the error `failed to get AWS region for bucket: operation error S3: GetBucketLocation` usually indicates a credentials or permissions problem. Inspect the full API error response to identify the underlying issue. +- When using Azure Blob Storage, SAS tokens must have the **Write-only** permission, the service set to **Blob-only** (`ss=b`), and the resource type set to **Object-only** (`srt=o`). 
+  Set an expiration of at least five years to avoid unexpected token expiry. Refer to [Troubleshooting Azure destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/#troubleshooting-azure-destinations) for details.
+- When using the HTTP Endpoint input, ensure the Elastic Agent endpoint is reachable over HTTPS with a trusted certificate and that any `secret.header` / `secret.value` pair configured on the agent matches the `header_*` parameter defined in the Logpush job's `destination_conf`.
+
+## Performance and scaling
+
+For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation.
+
+Additional considerations:
+
+- For high-volume zones, AWS SQS mode is recommended because it distributes work across multiple Elastic Agents without requiring bucket polling.
+- Tune [`max_upload_bytes`, `max_upload_records`, and `max_upload_interval_seconds`](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#max-upload-parameters) on the Logpush job to match the throughput your agents and destination can handle.
+- For each input, adjust the worker count and polling interval to balance latency against API calls / egress costs. The relevant fields are **[S3/SQS] Number of Workers** and **[S3] Interval** for the AWS S3 input, and **Maximum number of workers** with **Polling interval** for the Google Cloud Storage and Azure Blob Storage inputs.
+- Use Cloudflare Logpush [sampling](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) and [filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) to reduce the volume of low-value events at the source.
+
+## Reference
+
+### Logs reference
+
+#### access_request
 
 This is the `access_request` dataset.
 
-#### Example
+##### Example
 
 {{event "access_request"}}
 
 {{fields "access_request"}}
 
-### audit
+#### audit
 
 This is the `audit` dataset.
-
-#### Example
+##### Example
 
 {{event "audit"}}
 
 {{fields "audit"}}
 
-### casb
+#### casb
 
 This is the `casb` dataset.
 
-#### Example
+##### Example
 
 {{event "casb"}}
 
 {{fields "casb"}}
 
-### device_posture
+#### device_posture
 
 This is the `device_posture` dataset.
 
-#### Example
+##### Example
 
 {{event "device_posture"}}
 
 {{fields "device_posture"}}
 
-### dlp_forensic_copies
+#### dlp_forensic_copies
 
 This is the `dlp_forensic_copies` dataset.
 
-#### Example
+##### Example
 
 {{event "dlp_forensic_copies"}}
 
 {{fields "dlp_forensic_copies"}}
 
-### dns
+#### dns
 
 This is the `dns` dataset.
 
-#### Example
+##### Example
 
 {{event "dns"}}
 
 {{fields "dns"}}
 
-### dns_firewall
+#### dns_firewall
 
 This is the `dns_firewall` dataset.
 
-#### Example
+##### Example
 
 {{event "dns_firewall"}}
 
 {{fields "dns_firewall"}}
 
-### email_security_alerts
+#### email_security_alerts
 
 This is the `email_security_alerts` dataset.
 
-#### Example
+##### Example
 
 {{event "email_security_alerts"}}
 
 {{fields "email_security_alerts"}}
 
-### firewall_event
+#### firewall_event
 
 This is the `firewall_event` dataset.
 
-#### Example
+##### Example
 
 {{event "firewall_event"}}
 
 {{fields "firewall_event"}}
 
-### gateway_dns
+#### gateway_dns
 
 This is the `gateway_dns` dataset.
 
-#### Example
+##### Example
 
 {{event "gateway_dns"}}
 
 {{fields "gateway_dns"}}
 
-### gateway_http
+#### gateway_http
 
 This is the `gateway_http` dataset.
 
-#### Example
+##### Example
 
 {{event "gateway_http"}}
 
 {{fields "gateway_http"}}
 
-### gateway_network
+#### gateway_network
 
 This is the `gateway_network` dataset.
 
-#### Example
+##### Example
 
 {{event "gateway_network"}}
 
 {{fields "gateway_network"}}
 
-### http_request
+#### http_request
 
 This is the `http_request` dataset.
 
-#### Example
+##### Example
 
 {{event "http_request"}}
 
 {{fields "http_request"}}
 
-### magic_ids
+#### magic_ids
 
 This is the `magic_ids` dataset.
 
-#### Example
+##### Example
 
 {{event "magic_ids"}}
 
 {{fields "magic_ids"}}
 
-### nel_report
+#### nel_report
 
 This is the `nel_report` dataset.
 
-#### Example
+##### Example
 
 {{event "nel_report"}}
 
 {{fields "nel_report"}}
 
-### network_analytics
+#### network_analytics
 
 This is the `network_analytics` dataset.
 
-#### Example
+##### Example
 
 {{event "network_analytics"}}
 
 {{fields "network_analytics"}}
 
-### network_session
+#### network_session
 
 This is the `network_session` dataset.
 
-#### Example
+##### Example
 
 {{event "network_session"}}
 
 {{fields "network_session"}}
 
-### page_shield_events
+#### page_shield_events
 
 This is the `page_shield_events` dataset.
 
-#### Example
+##### Example
 
 {{event "page_shield_events"}}
 
 {{fields "page_shield_events"}}
 
-### sinkhole_http
+#### sinkhole_http
 
 This is the `sinkhole_http` dataset.
 
-#### Example
+##### Example
 
 {{event "sinkhole_http"}}
 
 {{fields "sinkhole_http"}}
 
-### spectrum_event
+#### spectrum_event
 
 This is the `spectrum_event` dataset.
 
-#### Example
+##### Example
 
 {{event "spectrum_event"}}
 
 {{fields "spectrum_event"}}
 
-### workers_trace
+#### workers_trace
 
 This is the `workers_trace` dataset.
 
-#### Example
+##### Example
 
 {{event "workers_trace"}}
 
 {{fields "workers_trace"}}
+
+### Inputs used
+
+These inputs are used in this integration:
+
+- [http_endpoint](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-http_endpoint)
+- [aws-s3](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-aws-s3)
+- [gcs](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-gcs)
+- [azure-blob-storage](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-azure-blob-storage)
diff --git a/packages/cloudflare_logpush/changelog.yml b/packages/cloudflare_logpush/changelog.yml
index 927c4b9f3bd..649fb8510d8 100644
--- a/packages/cloudflare_logpush/changelog.yml
+++ b/packages/cloudflare_logpush/changelog.yml
@@ -1,4 +1,9 @@
 # newer versions go on top
+- version: "1.44.0"
+  changes:
+    - description: Update README documentation as per the new guidelines.
+      type: enhancement
+      link: https://github.com/elastic/integrations/pull/18498
 - version: "1.43.5"
   changes:
     - description: Update R2 setup documentation to require the `region` field instead of stating it is not needed.
diff --git a/packages/cloudflare_logpush/data_stream/access_request/manifest.yml b/packages/cloudflare_logpush/data_stream/access_request/manifest.yml
index 2480bd7065d..4bf44d3ae4d 100644
--- a/packages/cloudflare_logpush/data_stream/access_request/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/access_request/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: Access Request logs
+    title: Access Request
     description: Collect Access Request logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: Access Request logs
+    title: Access Request
     description: Collect Access Request logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -90,7 +90,7 @@
         multi: false
         required: false
         show_user: true
-        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Access Request data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
+        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is an Access Request data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
       - name: bucket_list_prefix
         type: text
         title: '[S3] Bucket Prefix'
@@ -210,7 +210,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: Access Request logs
+    title: Access Request
     description: Collect Access Request logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -290,7 +290,7 @@
           - forwarded
           - cloudflare_logpush-access_request
   - input: azure-blob-storage
-    title: Access Request logs
+    title: Access Request
     description: Collect Access Request logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -393,9 +393,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/audit/manifest.yml b/packages/cloudflare_logpush/data_stream/audit/manifest.yml
index c4a8512b410..5a1ee869830 100644
--- a/packages/cloudflare_logpush/data_stream/audit/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/audit/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: Audit logs
+    title: Audit
     description: Collect Audit logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: Audit logs
+    title: Audit
     description: Collect Audit logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -90,7 +90,7 @@
         multi: false
         required: false
         show_user: true
-        description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is an audit data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
+        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is an Audit data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
       - name: bucket_list_prefix
         type: text
         title: '[S3] Bucket Prefix'
@@ -210,7 +210,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: Audit logs
+    title: Audit
     description: Collect Audit logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -290,7 +290,7 @@
           - forwarded
           - cloudflare_logpush-audit
   - input: azure-blob-storage
-    title: Audit logs
+    title: Audit
     description: Collect Audit logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -393,9 +393,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/casb/manifest.yml b/packages/cloudflare_logpush/data_stream/casb/manifest.yml
index 5dda2ba5a07..e5090548f98 100644
--- a/packages/cloudflare_logpush/data_stream/casb/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/casb/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: CASB Findings logs
+    title: CASB Findings
     description: Collect CASB Findings logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: CASB Findings logs
+    title: CASB Findings
     description: Collect CASB Findings logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -210,7 +210,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: CASB Findings logs
+    title: CASB Findings
     description: Collect CASB Findings logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -290,7 +290,7 @@
           - forwarded
           - cloudflare_logpush-casb
   - input: azure-blob-storage
-    title: CASB Findings logs
+    title: CASB Findings
     description: Collect CASB Findings logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -393,9 +393,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/device_posture/manifest.yml b/packages/cloudflare_logpush/data_stream/device_posture/manifest.yml
index 67b6adeb2b5..efad8ad55a6 100644
--- a/packages/cloudflare_logpush/data_stream/device_posture/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/device_posture/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: Device Posture Results logs
+    title: Device Posture Results
     description: Collect Device Posture Results logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: Device Posture Results logs
+    title: Device Posture Results
     description: Collect Device Posture Results logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -196,7 +196,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: Device Posture Results logs
+    title: Device Posture Results
     description: Collect Device Posture Results logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -276,8 +276,8 @@
           - forwarded
          - cloudflare_logpush-device_posture
   - input: azure-blob-storage
-    title: Device Posture Results logs logs
-    description: Collect Device Posture Results logs logs from Cloudflare via Azure Blob Storage.
+    title: Device Posture Results
+    description: Collect Device Posture Results logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
     vars:
@@ -379,9 +379,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/dlp_forensic_copies/manifest.yml b/packages/cloudflare_logpush/data_stream/dlp_forensic_copies/manifest.yml
index 08068660519..109acad60f7 100644
--- a/packages/cloudflare_logpush/data_stream/dlp_forensic_copies/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/dlp_forensic_copies/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: DLP Forensic Copies logs
+    title: DLP Forensic Copies
     description: Collect DLP Forensic Copies logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: DLP Forensic Copies logs
+    title: DLP Forensic Copies
     description: Collect DLP Forensic Copies logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -90,7 +90,7 @@
         multi: false
         required: false
         show_user: true
-        description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a DLP Forensic Copies data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
+        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a DLP Forensic Copies data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
       - name: bucket_list_prefix
         type: text
         title: '[S3] Bucket Prefix'
@@ -196,7 +196,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: DLP Forensic Copies logs
+    title: DLP Forensic Copies
     description: Collect DLP Forensic Copies logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -276,7 +276,7 @@
           - forwarded
           - cloudflare_logpush-dlp_forensic_copies
   - input: azure-blob-storage
-    title: DLP Forensic Copies logs
+    title: DLP Forensic Copies
     description: Collect DLP Forensic Copies logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -379,9 +379,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/dns/manifest.yml b/packages/cloudflare_logpush/data_stream/dns/manifest.yml
index 4f8c3659ff5..032efaf72c9 100644
--- a/packages/cloudflare_logpush/data_stream/dns/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/dns/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: DNS logs
+    title: DNS
     description: Collect DNS logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: DNS logs
+    title: DNS
     description: Collect DNS logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -90,7 +90,7 @@
         multi: false
         required: false
         show_user: true
-        description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a dns data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
+        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a DNS data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
       - name: bucket_list_prefix
         type: text
         title: '[S3] Bucket Prefix'
@@ -210,7 +210,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: DNS logs
+    title: DNS
     description: Collect DNS logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -290,7 +290,7 @@
           - forwarded
           - cloudflare_logpush-dns
   - input: azure-blob-storage
-    title: DNS logs
+    title: DNS
     description: Collect DNS logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -393,9 +393,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/dns_firewall/manifest.yml b/packages/cloudflare_logpush/data_stream/dns_firewall/manifest.yml
index 8bc4afae743..d0de1c6d1bb 100644
--- a/packages/cloudflare_logpush/data_stream/dns_firewall/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/dns_firewall/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: DNS Firewall logs
+    title: DNS Firewall
     description: Collect DNS Firewall logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: DNS Firewall logs
+    title: DNS Firewall
     description: Collect DNS Firewall logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -196,7 +196,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: DNS Firewall logs
+    title: DNS Firewall
     description: Collect DNS Firewall logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -276,7 +276,7 @@
           - forwarded
           - cloudflare_logpush-dns_firewall
   - input: azure-blob-storage
-    title: DNS Firewall logs
+    title: DNS Firewall
     description: Collect DNS Firewall logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -379,9 +379,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/email_security_alerts/manifest.yml b/packages/cloudflare_logpush/data_stream/email_security_alerts/manifest.yml
index 33e7415f61b..be749a88e8c 100644
--- a/packages/cloudflare_logpush/data_stream/email_security_alerts/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/email_security_alerts/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: Email Security Alert logs
+    title: Email Security Alert
     description: Collect Email Security Alert logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: Email Security Alert logs
+    title: Email Security Alert
     description: Collect Email Security Alert logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -90,7 +90,7 @@
         multi: false
         required: false
         show_user: true
-        description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a Email Security Alert data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
+        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is an Email Security Alert data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
       - name: bucket_list_prefix
         type: text
         title: '[S3] Bucket Prefix'
@@ -196,7 +196,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: Email Security Alert logs
+    title: Email Security Alert
     description: Collect Email Security Alert logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -276,7 +276,7 @@
           - forwarded
           - cloudflare_logpush-email_security_alerts
   - input: azure-blob-storage
-    title: Email Security Alert logs
+    title: Email Security Alert
     description: Collect Email Security Alert logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -379,9 +379,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/firewall_event/manifest.yml b/packages/cloudflare_logpush/data_stream/firewall_event/manifest.yml
index b7c591f4da2..1f0ec0e6586 100644
--- a/packages/cloudflare_logpush/data_stream/firewall_event/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/firewall_event/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: Firewall Event logs
+    title: Firewall Event
     description: Collect Firewall Event logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: Firewall Event logs
+    title: Firewall Event
     description: Collect Firewall Event logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -90,7 +90,7 @@
         multi: false
         required: false
         show_user: true
-        description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a firewall event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
+        description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Firewall Event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream."
       - name: bucket_list_prefix
         type: text
         title: '[S3] Bucket Prefix'
@@ -196,7 +196,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: Firewall Event logs
+    title: Firewall Event
     description: Collect Firewall Event logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -276,7 +276,7 @@
           - forwarded
           - cloudflare_logpush-firewall_event
   - input: azure-blob-storage
-    title: Firewall Event logs
+    title: Firewall Event
     description: Collect Firewall Event logs from Cloudflare via Azure Blob Storage.
     enabled: false
     template_path: abs.yml.hbs
@@ -379,9 +379,9 @@
         default: false
       - name: preserve_duplicate_custom_fields
         required: true
-        show_user: false
+        show_user: true
         title: Preserve duplicate custom fields
-        description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields.
+        description: Preserve custom fields that were copied to Elastic Common Schema (ECS) fields.
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/data_stream/gateway_dns/manifest.yml b/packages/cloudflare_logpush/data_stream/gateway_dns/manifest.yml
index 2ddaf2d962f..403b5a3e96c 100644
--- a/packages/cloudflare_logpush/data_stream/gateway_dns/manifest.yml
+++ b/packages/cloudflare_logpush/data_stream/gateway_dns/manifest.yml
@@ -3,7 +3,7 @@ type: logs
 streams:
   - input: http_endpoint
     template_path: http_endpoint.yml.hbs
-    title: Gateway DNS logs
+    title: Gateway DNS
     description: Collect Gateway DNS logs from Cloudflare via HTTP endpoint.
     vars:
       - name: listen_port
@@ -80,7 +80,7 @@
         description: >
           The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging.
           See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details.
   - input: aws-s3
-    title: Gateway DNS logs
+    title: Gateway DNS
     description: Collect Gateway DNS logs from Cloudflare via S3 or SQS.
     template_path: aws-s3.yml.hbs
     vars:
@@ -196,7 +196,7 @@
         description: >-
           Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata.
           This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details.
   - input: gcs
-    title: Gateway DNS logs
+    title: Gateway DNS
     description: Collect Gateway DNS logs from Cloudflare via GCS.
     template_path: gcs.yml.hbs
     vars:
@@ -276,7 +276,7 @@
           - forwarded
           - cloudflare_logpush-gateway_dns
   - input: azure-blob-storage
-    title: Gateway DNS logs
+    title: Gateway DNS
     description: Collect Gateway DNS logs from Cloudflare via Azure Blob Storage.
enabled: false template_path: abs.yml.hbs @@ -379,9 +379,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/gateway_http/manifest.yml b/packages/cloudflare_logpush/data_stream/gateway_http/manifest.yml index b5b08b528ee..971fd600821 100644 --- a/packages/cloudflare_logpush/data_stream/gateway_http/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/gateway_http/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Gateway HTTP logs + title: Gateway HTTP description: Collect Gateway HTTP logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Gateway HTTP logs + title: Gateway HTTP description: Collect Gateway HTTP logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -196,7 +196,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. 
- input: gcs - title: Gateway HTTP logs + title: Gateway HTTP description: Collect Gateway HTTP logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -276,7 +276,7 @@ streams: - forwarded - cloudflare_logpush-gateway_http - input: azure-blob-storage - title: Gateway HTTP logs + title: Gateway HTTP description: Collect Gateway HTTP logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -379,9 +379,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/gateway_network/manifest.yml b/packages/cloudflare_logpush/data_stream/gateway_network/manifest.yml index ba4c19fdd90..22b44ece225 100644 --- a/packages/cloudflare_logpush/data_stream/gateway_network/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/gateway_network/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Gateway Network logs + title: Gateway Network description: Collect Gateway Network logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Gateway Network logs + title: Gateway Network description: Collect Gateway Network logs from Cloudflare via S3 or SQS. 
template_path: aws-s3.yml.hbs vars: @@ -196,7 +196,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Gateway Network logs + title: Gateway Network description: Collect Gateway Network logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -276,7 +276,7 @@ streams: - forwarded - cloudflare_logpush-gateway_network - input: azure-blob-storage - title: Gateway Network logs + title: Gateway Network description: Collect Gateway Network logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -379,9 +379,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/http_request/manifest.yml b/packages/cloudflare_logpush/data_stream/http_request/manifest.yml index 35f0b387974..61ef79b478b 100644 --- a/packages/cloudflare_logpush/data_stream/http_request/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/http_request/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: HTTP Request logs + title: HTTP Request description: Collect HTTP Request logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. 
Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: HTTP Request logs + title: HTTP Request description: Collect HTTP Request logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -90,7 +90,7 @@ streams: multi: false required: false show_user: true - description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a http request data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." + description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is an HTTP Request data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." - name: bucket_list_prefix type: text title: '[S3] Bucket Prefix' @@ -196,7 +196,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: HTTP Request logs + title: HTTP Request description: Collect HTTP Request logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -276,8 +276,8 @@ streams: - forwarded - cloudflare_logpush-http_request - input: azure-blob-storage - title: HTTP request logs - description: Collect HTTP request logs from Cloudflare via Azure Blob Storage. + title: HTTP Request + description: Collect HTTP Request logs from Cloudflare via Azure Blob Storage. 
enabled: false template_path: abs.yml.hbs vars: @@ -379,9 +379,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/magic_ids/manifest.yml b/packages/cloudflare_logpush/data_stream/magic_ids/manifest.yml index be5b7641a2a..5046cb86733 100644 --- a/packages/cloudflare_logpush/data_stream/magic_ids/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/magic_ids/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Magic IDS logs + title: Magic IDS description: Collect Magic IDS logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Magic IDS logs + title: Magic IDS description: Collect Magic IDS logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Magic IDS logs + title: Magic IDS description: Collect Magic IDS logs from Cloudflare via GCS. 
template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-magic_ids - input: azure-blob-storage - title: Magic IDS logs + title: Magic IDS description: Collect Magic IDS logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/nel_report/manifest.yml b/packages/cloudflare_logpush/data_stream/nel_report/manifest.yml index a83f7959e43..2c232041a87 100644 --- a/packages/cloudflare_logpush/data_stream/nel_report/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/nel_report/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: NEL Report logs + title: NEL Report description: Collect NEL Report logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: NEL Report logs + title: NEL Report description: Collect NEL Report logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -90,7 +90,7 @@ streams: multi: false required: false show_user: true - description: "URL of the AWS SQS queue that messages will be received from. 
\nThis is only required if you want to collect logs via AWS SQS.\nThis is a nel report data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." + description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is an NEL Report data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." - name: bucket_list_prefix type: text title: '[S3] Bucket Prefix' @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: NEL Report logs + title: NEL Report description: Collect NEL Report logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-nel_report - input: azure-blob-storage - title: NEL Report logs + title: NEL Report description: Collect NEL Report logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. 
type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/network_analytics/manifest.yml b/packages/cloudflare_logpush/data_stream/network_analytics/manifest.yml index 6fd78184433..368a074c82f 100644 --- a/packages/cloudflare_logpush/data_stream/network_analytics/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/network_analytics/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Network Analytics logs + title: Network Analytics description: Collect Network Analytics logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Network Analytics logs + title: Network Analytics description: Collect Network Analytics logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -90,7 +90,7 @@ streams: multi: false required: false show_user: true - description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a network analytics data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." + description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Network Analytics data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." 
- name: bucket_list_prefix type: text title: '[S3] Bucket Prefix' @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Network Analytics logs + title: Network Analytics description: Collect Network Analytics logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-network_analytics - input: azure-blob-storage - title: Network Analytics logs + title: Network Analytics description: Collect Network Analytics logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/network_session/manifest.yml b/packages/cloudflare_logpush/data_stream/network_session/manifest.yml index 9917a360e57..a7d6ca92d76 100644 --- a/packages/cloudflare_logpush/data_stream/network_session/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/network_session/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Zero Trust Network Session logs + title: Zero Trust Network Session description: Collect Zero Trust Network Session logs from Cloudflare via HTTP endpoint. 
vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Zero Trust Network Session logs + title: Zero Trust Network Session description: Collect Zero Trust Network Session logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Zero Trust Network Session logs + title: Zero Trust Network Session description: Collect Zero Trust Network Session logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-network_session - input: azure-blob-storage - title: Zero Trust Network Session logs + title: Zero Trust Network Session description: Collect Zero Trust Network Session logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. 
type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/page_shield_events/manifest.yml b/packages/cloudflare_logpush/data_stream/page_shield_events/manifest.yml index e6a02783e42..634d51abcce 100644 --- a/packages/cloudflare_logpush/data_stream/page_shield_events/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/page_shield_events/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Page Shield Event logs + title: Page Shield Event description: Collect Page Shield Event logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Page Shield Event logs + title: Page Shield Event description: Collect Page Shield Event logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -90,7 +90,7 @@ streams: multi: false required: false show_user: true - description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a Page Shield Event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." + description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Page Shield Event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." 
- name: bucket_list_prefix type: text title: '[S3] Bucket Prefix' @@ -196,7 +196,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Page Shield Event logs + title: Page Shield Event description: Collect Page Shield Event logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -276,7 +276,7 @@ streams: - forwarded - cloudflare_logpush-page_shield_events - input: azure-blob-storage - title: Page Shield Event logs + title: Page Shield Event description: Collect Page Shield Event logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -379,9 +379,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/sinkhole_http/manifest.yml b/packages/cloudflare_logpush/data_stream/sinkhole_http/manifest.yml index 3983d0dbf5d..ea49d7e2c22 100644 --- a/packages/cloudflare_logpush/data_stream/sinkhole_http/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/sinkhole_http/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Sinkhole HTTP logs + title: Sinkhole HTTP description: Collect Sinkhole HTTP logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. 
Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Sinkhole HTTP logs + title: Sinkhole HTTP description: Collect Sinkhole HTTP logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Sinkhole HTTP logs + title: Sinkhole HTTP description: Collect Sinkhole HTTP logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-sinkhole_http - input: azure-blob-storage - title: Sinkhole HTTP logs + title: Sinkhole HTTP description: Collect Sinkhole HTTP logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. 
type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/spectrum_event/manifest.yml b/packages/cloudflare_logpush/data_stream/spectrum_event/manifest.yml index 14131d87c21..62061575657 100644 --- a/packages/cloudflare_logpush/data_stream/spectrum_event/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/spectrum_event/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Spectrum Event Logs + title: Spectrum Event description: Collect Spectrum Event logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Spectrum Event Logs + title: Spectrum Event description: Collect Spectrum Event logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -90,7 +90,7 @@ streams: multi: false required: false show_user: true - description: "URL of the AWS SQS queue that messages will be received from. \nThis is only required if you want to collect logs via AWS SQS.\nThis is a spectrum event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." + description: "URL of the AWS SQS queue that messages will be received from.\nThis is only required if you want to collect logs via AWS SQS.\nThis is a Spectrum Event data stream specific queue URL. In order to avoid data loss, do not configure the same SQS queue for more than one data stream." 
- name: bucket_list_prefix type: text title: '[S3] Bucket Prefix' @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Spectrum Event logs + title: Spectrum Event description: Collect Spectrum Event logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-spectrum_event - input: azure-blob-storage - title: Spectrum Event logs + title: Spectrum Event description: Collect Spectrum Event logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. type: bool multi: false default: false diff --git a/packages/cloudflare_logpush/data_stream/workers_trace/manifest.yml b/packages/cloudflare_logpush/data_stream/workers_trace/manifest.yml index acc412413a1..86019614071 100644 --- a/packages/cloudflare_logpush/data_stream/workers_trace/manifest.yml +++ b/packages/cloudflare_logpush/data_stream/workers_trace/manifest.yml @@ -3,7 +3,7 @@ type: logs streams: - input: http_endpoint template_path: http_endpoint.yml.hbs - title: Workers Trace Event logs + title: Workers Trace Event description: Collect Workers Trace Event logs from Cloudflare via HTTP endpoint. vars: - name: listen_port @@ -80,7 +80,7 @@ streams: description: > The request tracer logs HTTP requests and responses to the agent's local file-system for debugging configurations. 
Enabling this request tracing compromises security and should only be used for debugging. See [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-http_endpoint.html#_tracer_enabled_3) for details. - input: aws-s3 - title: Workers Trace Event logs + title: Workers Trace Event description: Collect Workers Trace Event logs from Cloudflare via S3 or SQS. template_path: aws-s3.yml.hbs vars: @@ -210,7 +210,7 @@ streams: description: >- Processors are used to reduce the number of fields in the exported event or to enhance the event with metadata. This executes in the agent before the logs are parsed. See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) for details. - input: gcs - title: Workers Trace Event logs + title: Workers Trace Event description: Collect Workers Trace Event logs from Cloudflare via GCS. template_path: gcs.yml.hbs vars: @@ -290,7 +290,7 @@ streams: - forwarded - cloudflare_logpush-workers_trace - input: azure-blob-storage - title: Workers Trace Event logs + title: Workers Trace Event description: Collect Workers Trace Event logs from Cloudflare via Azure Blob Storage. enabled: false template_path: abs.yml.hbs @@ -393,9 +393,9 @@ streams: default: false - name: preserve_duplicate_custom_fields required: true - show_user: false + show_user: true title: Preserve duplicate custom fields - description: Preserve github.audit fields that were copied to Elastic Common Schema (ECS) fields. + description: Preserve custom fields for all ECS mappings. 
         type: bool
         multi: false
         default: false
diff --git a/packages/cloudflare_logpush/docs/README.md b/packages/cloudflare_logpush/docs/README.md
index 6e41ee21ec0..f413ad21d51 100644
--- a/packages/cloudflare_logpush/docs/README.md
+++ b/packages/cloudflare_logpush/docs/README.md
@@ -1,91 +1,131 @@
-# Cloudflare Logpush
+# Cloudflare Logpush Integration for Elastic
 
 ## Overview
 
-The [Cloudflare Logpush](https://www.cloudflare.com/) integration allows you to monitor Access Request, Audit, CASB, Device Posture, DNS, DNS Firewall, Firewall Event, Gateway DNS, Gateway HTTP, Gateway Network, HTTP Request, Magic IDS, NEL Report, Network Analytics, Sinkhole HTTP, Spectrum Event, Network Session and Workers Trace Events logs. Cloudflare is a content delivery network and DDoS mitigation company. Cloudflare provides a network designed to make everything you connect to the Internet secure, private, fast, and reliable; secure your websites, APIs, and Internet applications; protect corporate networks, employees, and devices; and write and deploy code that runs on the network edge.
+The [Cloudflare Logpush](https://developers.cloudflare.com/logs/logpush/) integration allows you to monitor Access Request, Audit, CASB, Device Posture, DLP Forensic Copies, DNS, DNS Firewall, Email Security Alerts, Firewall Event, Gateway DNS, Gateway HTTP, Gateway Network, HTTP Request, Magic IDS, NEL Report, Network Analytics, Page Shield, Sinkhole HTTP, Spectrum Event, Zero Trust Network Session, and Workers Trace Events logs.
 
-The Cloudflare Logpush integration can be used in the following modes to collect data:
-- HTTP Endpoint mode - Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent.
-- AWS S3 polling mode - Cloudflare writes data to S3 and Elastic Agent polls the S3 bucket by listing its contents and reading new files.
-- AWS S3 SQS mode - Cloudflare writes data to S3, S3 pushes a new object notification to SQS, Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple Agents can be used in this mode.
-- Azure Blob Storage polling mode - Cloudflare writes data to Azure Blob Storage and Elastic Agent polls the Azure Blob Storage containers by listing its contents and reading new files.
-- Google Cloud Storage polling mode - Cloudflare writes data to Google Cloud Storage and Elastic Agent polls the GCS buckets by listing its contents and reading new files.
+Cloudflare is a content delivery network and DDoS mitigation company. Cloudflare provides a network designed to make everything you connect to the Internet secure, private, fast, and reliable; secure your websites, APIs, and Internet applications; protect corporate networks, employees, and devices; and write and deploy code that runs on the network edge.
 
-For example, you could use the data from this integration to know which websites have the highest traffic, which areas have the highest network traffic, or observe mitigation statistics.
+### Compatibility
 
-## Data streams
+This integration follows the log schemas and field definitions published in the [Cloudflare Log fields reference](https://developers.cloudflare.com/logs/reference/log-fields/).
 
-The Cloudflare Logpush integration collects logs for the following types of events. For more information on each dataset, refer to the Logs reference section at the end of this page.
+Cloudflare Logpush supports delivering logs to the following destinations, which can all be consumed by this integration: -### Zero Trust events +- [HTTP destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/) +- [Amazon S3](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/) +- [S3-compatible endpoints](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/) (including [Cloudflare R2](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/)) +- [Google Cloud Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/) +- [Microsoft Azure Blob Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) -**Access Request**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/access_requests/). +### How it works -**Audit**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/). +Cloudflare Logpush pushes logs to the destination of your choice. Elastic Agent then reads those logs and ships them to Elasticsearch, where they are processed through each data stream's ingest pipeline. -**CASB findings**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/casb_findings/). +The integration supports the following collection modes: -**Device Posture Results**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/device_posture_results/). +- **HTTP Endpoint mode** — Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent. +- **AWS S3 polling mode** — Cloudflare writes logs to an S3 bucket and Elastic Agent polls the bucket by listing its contents and reading new files. 
+- **AWS S3 SQS mode** — Cloudflare writes logs to S3; S3 publishes object-created notifications to an SQS queue; Elastic Agent receives those notifications from SQS and reads the corresponding S3 objects. This mode supports horizontal scaling across multiple agents. +- **S3-compatible (Cloudflare R2) polling mode** — Cloudflare writes logs to an R2 or other S3-compatible bucket and Elastic Agent polls the bucket using the S3 API. +- **Azure Blob Storage polling mode** — Cloudflare writes logs to an Azure Blob Storage container and Elastic Agent polls the container by listing its contents and reading new files. +- **Google Cloud Storage polling mode** — Cloudflare writes logs to a GCS bucket and Elastic Agent polls the bucket by listing its contents and reading new files. -**DLP Forensic Copies**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/dlp_forensic_copies/). +## What data does this integration collect? -**Email Security Alerts**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/email_security_alerts/). +The Cloudflare Logpush integration collects logs for the following Cloudflare [datasets](https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/). Data streams are grouped by whether the underlying dataset is classified as a Cloudflare [Zero Trust dataset](https://developers.cloudflare.com/cloudflare-one/insights/logs/logpush/#zero-trust-datasets) or a non Zero Trust dataset. -**Gateway DNS**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/). +### Zero Trust events -**Gateway HTTP**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/). +- `access_request`: HTTP requests to sites protected by Cloudflare Access. See [Access Requests schema](https://developers.cloudflare.com/logs/reference/log-fields/account/access_requests/). 
+- `audit`: Authentication events through Cloudflare Access, plus account-level configuration and administrative actions. See [Audit Logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/). +- `casb`: Security issues detected by Cloudflare CASB in connected SaaS applications. See [CASB Findings schema](https://developers.cloudflare.com/logs/reference/log-fields/account/casb_findings/). +- `device_posture`: Device posture status from the Cloudflare One Client (WARP). See [Device Posture Results schema](https://developers.cloudflare.com/logs/reference/log-fields/account/device_posture_results/). +- `gateway_dns`: DNS queries inspected by Cloudflare Gateway. See [Gateway DNS schema](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/). +- `gateway_http`: HTTP requests inspected by Cloudflare Gateway. See [Gateway HTTP schema](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/). +- `gateway_network`: Network packets inspected by Cloudflare Gateway. See [Gateway Network schema](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/). +- `network_session`: Network session logs for traffic proxied by Cloudflare Gateway. See [Zero Trust Network Session schema](https://developers.cloudflare.com/logs/reference/log-fields/account/zero_trust_network_sessions/). -**Gateway Network**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/). +### Non Zero Trust events -**Zero Trust Network Session**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/zero_trust_network_sessions/). +- `dns`: Zone-scoped authoritative DNS query logs. See [DNS logs schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/dns_logs/). +- `dns_firewall`: Cloudflare DNS Firewall query and response logs. 
See [DNS Firewall logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/dns_firewall_logs/). +- `dlp_forensic_copies`: Data Loss Prevention forensic copies of content that matched a DLP profile. See [DLP Forensic Copies schema](https://developers.cloudflare.com/logs/reference/log-fields/account/dlp_forensic_copies/). +- `email_security_alerts`: Cloudflare Email Security alerts for phishing, malware, and other email-based threats. See [Email Security Alerts schema](https://developers.cloudflare.com/logs/reference/log-fields/account/email_security_alerts/). +- `firewall_event`: Zone-level Firewall events for requests mitigated by Cloudflare security products (WAF, Rate Limiting, Firewall Rules, etc.). See [Firewall Events schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/firewall_events/). +- `http_request`: HTTP/HTTPS request logs served at the Cloudflare edge. See [HTTP Requests schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/). +- `magic_ids`: Magic Network Monitoring IDS detection logs. See [Magic IDS Detections schema](https://developers.cloudflare.com/logs/reference/log-fields/account/magic_ids_detections/). +- `nel_report`: Network Error Logging (NEL) reports collected from end-user browsers. See [NEL Reports schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/nel_reports/). +- `network_analytics`: Network Analytics (Magic Transit / Magic WAN packet-sampled flow) logs. See [Network Analytics Logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/). +- `page_shield_events`: Page Shield events reporting changes to scripts and connections observed on protected zones. See [Page Shield Events schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/page_shield_events/). +- `sinkhole_http`: HTTP traffic captured by Cloudflare sinkholes. 
See [Sinkhole HTTP logs schema](https://developers.cloudflare.com/logs/reference/log-fields/account/sinkhole_http_logs/). +- `spectrum_event`: Cloudflare Spectrum events for TCP/UDP applications proxied through Cloudflare. See [Spectrum Events schema](https://developers.cloudflare.com/logs/reference/log-fields/zone/spectrum_events/). +- `workers_trace`: Cloudflare Workers Trace Events with execution logs and exceptions for Workers scripts. See [Workers Trace Events schema](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/). -### Non Zero Trust events +### Supported use cases -**DNS**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/dns_logs/). +Integrating Cloudflare Logpush with Elastic provides centralized visibility across Cloudflare's edge, Zero Trust, and network-layer products. Common use cases include: -**DNS Firewall**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/dns_firewall_logs/). +- Investigating traffic, WAF, and DDoS-mitigation events from the Cloudflare edge (`http_request`, `firewall_event`, `network_analytics`). +- Monitoring Zero Trust user activity, policy decisions, and device posture (`gateway_http`, `gateway_dns`, `gateway_network`, `access_request`, `device_posture`, `network_session`). +- Detecting data exfiltration and SaaS misconfigurations (`dlp_forensic_copies`, `casb`, `email_security_alerts`). +- Auditing administrative activity on the Cloudflare account (`audit`). +- Troubleshooting DNS and client-side performance issues (`dns`, `dns_firewall`, `nel_report`, `workers_trace`). -**Firewall Event**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/firewall_events/). +## What do I need to use this integration? -**HTTP Request**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/). 
+### From Elastic -**Magic IDS**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/magic_ids_detections/). +You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. -**NEL Report**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/nel_reports/). +### From Cloudflare -**Network Analytics**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/). +To use this integration, you must be able to create and manage [Cloudflare Logpush jobs](https://developers.cloudflare.com/logs/logpush/logpush-job/) for the datasets you want to collect. -**Page Shield events**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/page_shield_events/). +**Permissions** -**Sinkhole HTTP**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/sinkhole_http_logs/). +Creating and managing Logpush jobs requires an API token or user role with the `Logs Write` permission (or a role that includes it, such as **Super Administrator**, **Administrator**, or **Log Share** with edit permissions). Refer to [Cloudflare Logpush permissions](https://developers.cloudflare.com/logs/logpush/permissions/) for details. -**Spectrum Event**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/zone/spectrum_events/). +- **Zone-scoped datasets** (for example, `http_requests`, `firewall_events`, `dns_logs`, `spectrum_events`, `nel_reports`, `page_shield_events`) require a **zone-scoped token**. 
+- **Account-scoped datasets** (for example, `audit_logs`, `access_requests`, `casb_findings`, `device_posture_results`, `dlp_forensic_copies`, `email_security_alerts`, `gateway_*`, `dns_firewall_logs`, `magic_ids_detections`, `network_analytics_logs`, `sinkhole_http_logs`, `workers_trace_events`, `zero_trust_network_sessions`) require an **account-scoped token**. +- Zero Trust datasets (Access, Gateway, DEX) additionally require `Zero Trust: PII Read`. -**Workers Trace Events**: See Example Schema [here](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/). +**Destination-specific credentials** -## Requirements +Depending on the delivery destination, you also need: -You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. +- **AWS S3 / S3-compatible** — an S3 bucket (or Cloudflare R2 bucket) and credentials (Access Key ID / Secret Access Key, or an IAM role) that Elastic Agent can use to list and read objects. For SQS-based delivery, an SQS queue subscribed to S3 object-created events. +- **Google Cloud Storage** — a GCS bucket and a service account key (JSON) with read access to the bucket. +- **Azure Blob Storage** — a storage account, a blob container, and either a shared access key, a connection string, or OAuth2 client credentials with read access to the container. +- **HTTP Endpoint** — a reachable HTTPS endpoint exposed by Elastic Agent. Cloudflare requires a valid TLS certificate on the destination. + +## How do I deploy this integration? -This module has been tested against **Cloudflare version v4**. +This integration supports Elastic Agent-based installations. -**Note**: It is recommended to use AWS SQS for Cloudflare Logpush. +### Agent-based installation -## Setup +Elastic Agent must be installed. 
For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). You can install only one Elastic Agent per host. -### Collect data from AWS S3 Bucket +### Onboard and configure -- Configure [Cloudflare Logpush to Amazon S3](https://developers.cloudflare.com/logs/get-started/enable-destinations/aws-s3/) to send Cloudflare's data to an AWS S3 bucket. -- The default values of the "Bucket List Prefix" are listed below. However, users can set the parameter "Bucket List Prefix" according to their requirements. +Configure one of the following delivery pipelines before enabling the integration in Elastic. - | Data Stream Name | Bucket List Prefix | +#### Collect data from AWS S3 Bucket + +- Configure [Cloudflare Logpush to Amazon S3](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/aws-s3/) to send Cloudflare's data to an AWS S3 bucket. +- The default values of the **Bucket Prefix** are listed below. However, users can set the parameter **Bucket Prefix** according to their requirements. + + | Data Stream Name | Bucket Prefix | | -------------------------- | ---------------------- | | Access Request | access_request | | Audit Logs | audit_logs | | CASB findings | casb | | Device Posture Results | device_posture | + | DLP Forensic Copies | dlp_forensic_copies | | DNS | dns | | DNS Firewall | dns_firewall | + | Email Security Alerts | email_security_alerts | | Firewall Event | firewall_event | | Gateway DNS | gateway_dns | | Gateway HTTP | gateway_http | @@ -94,73 +134,76 @@ This module has been tested against **Cloudflare version v4**. 
| Magic IDS | magic_ids | | NEL Report | nel_report | | Network Analytics | network_analytics_logs | + | Page Shield Events | page_shield_events | | Zero Trust Network Session | network_session | | Sinkhole HTTP | sinkhole_http | | Spectrum Event | spectrum_event | | Workers Trace Events | workers_trace | -### Collect data from AWS SQS +#### Collect data from AWS SQS 1. If Logpush forwarding to an AWS S3 Bucket hasn't been configured, then first set up an AWS S3 Bucket as mentioned in the above documentation. 2. Follow the steps below for each Logpush data stream that has been enabled: - 1. Create an SQS queue - - To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html). - - While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket. - 2. Setup event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings: + 1. Create an SQS queue + - To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html). + - While creating the SQS queue, provide the ARN of the S3 bucket created above. + 2. Set up event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings: - Event type: `All object create events` (`s3:ObjectCreated:*`) - - Destination: SQS Queue - - Prefix (filter): enter the prefix for this Logpush data stream, e.g. 
`audit_logs/` - - Select the SQS queue that has been created for this data stream + - Destination: SQS Queue + - Prefix (filter): enter the prefix for this Logpush data stream, e.g. `audit_logs/` + - Select the SQS queue that has been created for this data stream - **Note**: - - A separate SQS queue and S3 bucket notification is required for each enabled data stream. - - Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2) - - Credentials for the above AWS S3 and SQS input types should be configured using the [link](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config). - - Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case. +**Note:** +- A separate SQS queue and S3 bucket notification are required for each enabled data stream. +- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2). +- Credentials for the above AWS S3 and SQS input types should be configured as described in the [AWS credentials documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#aws-credentials-config). +- Data collection via an AWS S3 Bucket and via AWS SQS are mutually exclusive for a given data stream. +- It is recommended to use AWS SQS for Cloudflare Logpush. -### Collect data from S3-Compatible Cloudflare R2 Buckets +#### Collect data from S3-Compatible Cloudflare R2 Buckets -- Configure the [Data Forwarder](https://developers.cloudflare.com/logs/get-started/enable-destinations/r2/) to push logs to Cloudflare R2. 
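The event-notification settings listed in the AWS SQS steps above map onto S3's standard notification-configuration document. A minimal sketch of the JSON you would attach to the bucket for one data stream follows; the queue ARN and prefix values are hypothetical examples, not values from this integration:

```python
import json

def s3_notification_config(queue_arn: str, prefix: str) -> dict:
    """Build an S3 event-notification document: all object-created events
    under one Logpush prefix, delivered to one SQS queue."""
    return {
        "QueueConfigurations": [
            {
                "QueueArn": queue_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": prefix},
                        ]
                    }
                },
            }
        ]
    }

# One queue/notification pair per enabled data stream, e.g. audit logs
# (hypothetical account ID and queue name):
cfg = s3_notification_config(
    "arn:aws:sqs:us-east-1:123456789012:cloudflare-audit-logs",
    "audit_logs/",
)
print(json.dumps(cfg, indent=2))
```

Repeating this with a different prefix and queue for each enabled data stream keeps the per-stream isolation the note above requires.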
+- Configure [Cloudflare Logpush to Cloudflare R2](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/r2/) (or another [S3-compatible endpoint](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/s3-compatible-endpoints/)) to push logs into an R2 bucket. -**Note**: -- When creating the API token, make sure it has [Admin permissions](https://developers.cloudflare.com/r2/api/s3/tokens/#permissions). This is needed to list buckets and view bucket configuration. +**Note:** +- To obtain the **Access Key ID** and **Secret Access Key**, create an R2 API token by following the [R2 authentication documentation](https://developers.cloudflare.com/r2/api/tokens/). Once the token is created successfully, Cloudflare will display the **Access Key ID** and **Secret Access Key** values. Use these credentials to authenticate the integration. +- When creating the R2 API token, make sure it has [Admin permissions](https://developers.cloudflare.com/r2/api/s3/tokens/#permissions). This is needed to list buckets and view bucket configuration. When configuring the integration to read from S3-Compatible Buckets such as Cloudflare R2, the following steps are required: -- Enable the toggle `Collect logs via S3 Bucket`. -- Make sure that the Bucket Name is set. -- Although you have to create an API token, that token should not be used for authentication with the S3 API. You just have to set the Access Key ID and Secret Access Key. -- Set the endpoint URL which can be found in Bucket Details. Endpoint should be a full URI that will be used as the API endpoint of the service. For Cloudflare R2 buckets, the URI is typically in the form of `https(s)://.r2.cloudflarestorage.com`. +- Enable the **Collect logs via S3 Bucket** toggle. +- Set the **S3-Compatible Bucket Name** (shown as `[Global][S3] S3-Compatible Bucket Name` in the UI) to the R2 bucket name. +- Set the **Endpoint** field to the API endpoint shown in the bucket details. 
It must be a full URI used as the API endpoint of the service. For Cloudflare R2 buckets, the URI is typically of the form `https://.r2.cloudflarestorage.com`. - Set the **Region** field to `auto`. This is required for all non-AWS S3-compatible buckets on Elastic Agent 8.19.12 and later. For Cloudflare R2, the region is always `auto` per the [R2 S3 API documentation](https://developers.cloudflare.com/r2/api/s3/api/#bucket-region). -- Bucket Prefix is optional for each data stream. +- **Bucket Prefix** is optional for each data stream. -### Collect data from GCS Buckets +#### Collect data from GCS Buckets -- Configure the [Data Forwarder](https://developers.cloudflare.com/logs/get-started/enable-destinations/google-cloud-storage/) to ingest data into a GCS bucket. -- Configure the GCS bucket names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Google Cloud Storage" section. -- Make sure the service account and authentication being used, has proper levels of access to the GCS bucket [Manage Service Account Keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys/) +- Configure [Cloudflare Logpush to Google Cloud Storage](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/google-cloud-storage/) to ingest data into a GCS bucket. +- Configure the GCS bucket names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Google Cloud Storage" section. +- Make sure the service account and authentication being used has proper levels of access to the GCS bucket. Refer to [Manage Service Account Keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys/) for more details. -**Note**: +**Note:** - The GCS input currently does not support fetching of buckets using bucket prefixes, so the bucket names have to be configured manually for each data stream. 
- The GCS input accepts a service account JSON key or a service account JSON file for authentication. - The GCS input supports JSON/NDJSON data. -### Collect data from Azure Blob Storage +#### Collect data from Azure Blob Storage -- [Enable Microsoft Azure](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) to ingest data into Azure Blob Storage containers. -- Configure Azure Blob Storage container names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Azure Blob Storage" section. -- Make sure the storage account and authentication being used, has proper levels of access to the Azure Blob Storage Container. Please follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal) for more details. -- If you want to use RBAC for your account please follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory). +- Configure [Cloudflare Logpush to Microsoft Azure](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/) to ingest data into Azure Blob Storage containers. +- Configure Azure Blob Storage container names and credentials along with the required configurations under the "Collect Cloudflare Logpush logs via Azure Blob Storage" section. +- Make sure the storage account and authentication being used have the proper level of access to the Azure Blob Storage container. Follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal) for more details. +- If you want to use RBAC for your account, follow the documentation [here](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory). 
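Whichever bucket input is used, each delivered object is newline-delimited JSON (NDJSON): one event per line, typically gzip-compressed. A minimal, illustrative sketch of decoding such an object; the field names are examples rather than a full Logpush schema:

```python
import gzip
import json

def parse_logpush_batch(raw: bytes) -> list[dict]:
    """Decode a (possibly gzip-compressed) Logpush batch of NDJSON events."""
    if raw[:2] == b"\x1f\x8b":  # gzip magic number
        raw = gzip.decompress(raw)
    return [json.loads(line) for line in raw.splitlines() if line.strip()]

# A tiny synthetic batch with illustrative HTTP request fields.
batch = b"\n".join([
    b'{"ClientIP": "203.0.113.7", "EdgeResponseStatus": 200}',
    b'{"ClientIP": "198.51.100.9", "EdgeResponseStatus": 403}',
])
events = parse_logpush_batch(gzip.compress(batch))
print(len(events))  # 2
```

This is the shape of data the GCS and Azure Blob Storage inputs expect when the notes above say they support only JSON/NDJSON data.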
-**Note**: +**Note:** - The Azure Blob Storage input does not support fetching from containers using container prefixes, so the containers' names must be configured manually for each data stream. - The Azure Blob Storage input accepts a service account key (shared credentials key), service account URI (connection string) and OAuth2 credentials for authentication. - The Azure Blob Storage input only supports JSON/NDJSON data. -### Collect data from the Cloudflare HTTP Endpoint +#### Collect data from the Cloudflare HTTP Endpoint -- Refer to [Enable HTTP destination](https://developers.cloudflare.com/logs/get-started/enable-destinations/http/) for Cloudflare Logpush. -- Add same custom header along with its value on both the side for additional security. +- Refer to [Enable HTTP destination](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/http/) for Cloudflare Logpush. +- Add the same custom header along with its value on both sides (Cloudflare job and Elastic Agent HTTP input) for additional security. - For example, while creating a job along with a header and value for a particular dataset: + ``` curl --location --request POST 'https://api.cloudflare.com/client/v4/zones//logpush/jobs' \ --header 'X-Auth-Key: ' \ @@ -175,29 +218,74 @@ curl --location --request POST 'https://api.cloudflare.com/client/v4/zones/ **Integrations**. -2. In the integrations search bar type **Cloudflare Logpush**. -3. Click the **Cloudflare Logpush** integration from the search results. -4. Click the **Add Cloudflare Logpush** button to add Cloudflare Logpush integration. -5. Enable the Integration with the HTTP Endpoint, AWS S3 input or GCS input. -6. Under the AWS S3 input, there are two types of inputs: using AWS S3 Bucket or using SQS. -7. Configure Cloudflare to send logs to the Elastic Agent via HTTP Endpoint, or any R2, AWS or GCS Bucket following the specific guides above. +1. In the top search bar in Kibana, search for **Integrations**. +2. 
In the search bar, type **Cloudflare Logpush**. +3. Select the **Cloudflare Logpush** integration from the search results. +4. Select **Add Cloudflare Logpush** to add the integration. +5. Enable and configure only the collection methods which you will use. + + * To **Collect Cloudflare Logpush logs via HTTP Endpoint**, you'll need to: + - Configure the **Listen Address** and, per data stream, the **Listen Port** and **URL**. + - Optionally configure **Secret Header** / **Secret Value** and **SSL Configuration** to secure the endpoint. + * To **Collect Cloudflare Logpush logs via AWS S3, AWS SQS, or S3-Compatible Buckets**, you'll need to: + - Enable the **Collect logs via S3 Bucket** toggle when polling an S3 or S3-compatible bucket directly. Leave it disabled to consume S3 object-created events from an SQS queue. + - For direct polling, configure the **[S3] Bucket ARN**, the **[S3] Access Point ARN**, or the **[Global][S3] S3-Compatible Bucket Name** (for Cloudflare R2 and other S3-compatible providers). For S3-compatible buckets also set **Endpoint** and **Region**. + - Configure credentials using any of: **Access Key ID** / **Secret Access Key** (plus an optional **Session Token**), a **Role ARN**, or a **Shared Credential File** / **Credential Profile Name**. + - For each enabled data stream, set the **[SQS] Queue URL** (when using SQS) or the **[S3] Bucket Prefix** (when polling an S3 / S3-compatible bucket). For R2 / S3-compatible buckets you may also override the bucket name per data stream using the **[][S3] S3-Compatible Bucket Name** field. + - Tune throughput with **[S3/SQS] Number of Workers** and **[S3] Interval** (polling mode) or **[SQS] Visibility Timeout** / **[SQS] API Timeout** (SQS mode). + * To **Collect Cloudflare Logpush logs via Google Cloud Storage**, you'll need to: + - Configure **Project Id** and either **JSON Credentials key** or **JSON Credentials file path**. 
+ - For each data stream, configure the **Buckets** list and optionally tune **Maximum number of workers**, **Polling**, **Polling interval**, and **Bucket Timeout**. + * To **Collect Cloudflare Logpush logs via Azure Blob Storage**, you'll need to: + - Configure **Account Name** and (optionally) **Storage URL**, and authenticate using either a **Service Account Key**, a **Service Account URI**, or by enabling **Collect logs using OAuth2 authentication** and supplying **Client ID (OAuth2)**, **Client Secret (OAuth2)**, and **Tenant ID (OAuth2)**. + - For each data stream, configure the **Containers** list and optionally tune **Maximum number of workers**, **Polling**, and **Polling interval**. + +6. Select **Save and continue** to save the integration. + +### Validation -## Logs reference +#### Dashboards populated -### access_request +1. In the top search bar in Kibana, search for **Dashboards**. +2. In the search bar, type **Cloudflare Logpush**. +3. Select a dashboard for the dataset you are collecting, and verify the dashboard information is populated. + +## Troubleshooting + +- For help with Elastic ingest tools, check [Common problems](https://www.elastic.co/docs/troubleshoot/ingest/fleet/common-problems). +- For Cloudflare-side troubleshooting and delivery status, refer to the [Logpush health dashboard](https://developers.cloudflare.com/logs/logpush/logpush-health/) and the relevant [destination-specific troubleshooting guide](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/). +- When collecting from Cloudflare R2 via the AWS S3 input, the error `failed to get AWS region for bucket: operation error S3: GetBucketLocation` usually indicates a credentials or permissions problem. Inspect the full API error response to identify the underlying issue. +- When using Azure Blob Storage, SAS tokens must have the **Write-only** permission, the service set to **Blob-only** (`ss=b`), and the resource type set to **Object-only** (`srt=o`). 
Set an expiration of at least five years to avoid unexpected token expiry. Refer to [Troubleshooting Azure destinations](https://developers.cloudflare.com/logs/logpush/logpush-job/enable-destinations/azure/#troubleshooting-azure-destinations) for details. +- When using the HTTP Endpoint input, ensure the Elastic Agent endpoint is reachable over HTTPS with a trusted certificate and that any `secret.header` / `secret.value` pair configured on the agent matches the `header_*` parameter defined in the Logpush job's `destination_conf`. + +## Performance and scaling + +For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation. + +Additional considerations: + +- For high-volume zones, AWS SQS mode is recommended because it distributes work across multiple Elastic Agents without requiring bucket polling. +- Tune [`max_upload_bytes`, `max_upload_records`, and `max_upload_interval_seconds`](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#max-upload-parameters) on the Logpush job to match the throughput your agents and destination can handle. +- For each input, adjust the worker count and polling interval to balance latency against API calls / egress costs. The relevant fields are **[S3/SQS] Number of Workers** and **[S3] Interval** for the AWS S3 input, and **Maximum number of workers** with **Polling interval** for the Google Cloud Storage and Azure Blob Storage inputs. +- Use Cloudflare Logpush [sampling](https://developers.cloudflare.com/logs/logpush/logpush-job/api-configuration/#sampling-rate) and [filters](https://developers.cloudflare.com/logs/logpush/logpush-job/filters/) to reduce the volume of low-value events at the source. + +## Reference + +### Logs reference + +#### access_request This is the `access_request` dataset. 
-#### Example +##### Example An example event for `access_request` looks as following: @@ -365,12 +453,11 @@ An example event for `access_request` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### audit +#### audit This is the `audit` dataset. - -#### Example +##### Example An example event for `audit` looks as following: @@ -528,11 +615,11 @@ An example event for `audit` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### casb +#### casb This is the `casb` dataset. -#### Example +##### Example An example event for `casb` looks as following: @@ -718,11 +805,11 @@ An example event for `casb` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### device_posture +#### device_posture This is the `device_posture` dataset. -#### Example +##### Example An example event for `device_posture` looks as following: @@ -895,11 +982,11 @@ An example event for `device_posture` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### dlp_forensic_copies +#### dlp_forensic_copies This is the `dlp_forensic_copies` dataset. -#### Example +##### Example An example event for `dlp_forensic_copies` looks as following: @@ -1026,11 +1113,11 @@ An example event for `dlp_forensic_copies` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### dns +#### dns This is the `dns` dataset. -#### Example +##### Example An example event for `dns` looks as following: @@ -1169,11 +1256,11 @@ An example event for `dns` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### dns_firewall +#### dns_firewall This is the `dns_firewall` dataset. 
-#### Example +##### Example An example event for `dns_firewall` looks as following: @@ -1349,11 +1436,11 @@ An example event for `dns_firewall` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### email_security_alerts +#### email_security_alerts This is the `email_security_alerts` dataset. -#### Example +##### Example An example event for `email_security_alerts` looks as following: @@ -1589,11 +1676,11 @@ An example event for `email_security_alerts` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### firewall_event +#### firewall_event This is the `firewall_event` dataset. -#### Example +##### Example An example event for `firewall_event` looks as following: @@ -1832,11 +1919,11 @@ An example event for `firewall_event` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### gateway_dns +#### gateway_dns This is the `gateway_dns` dataset. -#### Example +##### Example An example event for `gateway_dns` looks as following: @@ -2137,11 +2224,11 @@ An example event for `gateway_dns` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### gateway_http +#### gateway_http This is the `gateway_http` dataset. -#### Example +##### Example An example event for `gateway_http` looks as following: @@ -2416,11 +2503,11 @@ An example event for `gateway_http` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### gateway_network +#### gateway_network This is the `gateway_network` dataset. -#### Example +##### Example An example event for `gateway_network` looks as following: @@ -2639,11 +2726,11 @@ An example event for `gateway_network` looks as following: | log.source.address | Source address from which the log event was read / sent from. 
| keyword | -### http_request +#### http_request This is the `http_request` dataset. -#### Example +##### Example An example event for `http_request` looks as following: @@ -3064,11 +3151,11 @@ An example event for `http_request` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### magic_ids +#### magic_ids This is the `magic_ids` dataset. -#### Example +##### Example An example event for `magic_ids` looks as following: @@ -3241,11 +3328,11 @@ An example event for `magic_ids` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### nel_report +#### nel_report This is the `nel_report` dataset. -#### Example +##### Example An example event for `nel_report` looks as following: @@ -3372,11 +3459,11 @@ An example event for `nel_report` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### network_analytics +#### network_analytics This is the `network_analytics` dataset. -#### Example +##### Example An example event for `network_analytics` looks as following: @@ -3740,11 +3827,11 @@ An example event for `network_analytics` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### network_session +#### network_session This is the `network_session` dataset. -#### Example +##### Example An example event for `network_session` looks as following: @@ -4010,11 +4097,11 @@ An example event for `network_session` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### page_shield_events +#### page_shield_events This is the `page_shield_events` dataset. 
-#### Example +##### Example An example event for `page_shield_events` looks as following: @@ -4149,11 +4236,11 @@ An example event for `page_shield_events` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### sinkhole_http +#### sinkhole_http This is the `sinkhole_http` dataset. -#### Example +##### Example An example event for `sinkhole_http` looks as following: @@ -4360,11 +4447,11 @@ An example event for `sinkhole_http` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### spectrum_event +#### spectrum_event This is the `spectrum_event` dataset. -#### Example +##### Example An example event for `spectrum_event` looks as following: @@ -4568,11 +4655,11 @@ An example event for `spectrum_event` looks as following: | log.source.address | Source address from which the log event was read / sent from. | keyword | -### workers_trace +#### workers_trace This is the `workers_trace` dataset. -#### Example +##### Example An example event for `workers_trace` looks as following: @@ -4726,3 +4813,12 @@ An example event for `workers_trace` looks as following: | log.offset | Log offset | long | | log.source.address | Source address from which the log event was read / sent from. 
| keyword | + +### Inputs used + +These inputs are used in this integration: + +- [http_endpoint](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-http_endpoint) +- [aws-s3](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-aws-s3) +- [gcs](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-gcs) +- [azure-blob-storage](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-azure-blob-storage) diff --git a/packages/cloudflare_logpush/manifest.yml b/packages/cloudflare_logpush/manifest.yml index d2a54582cf7..0bae8d591de 100644 --- a/packages/cloudflare_logpush/manifest.yml +++ b/packages/cloudflare_logpush/manifest.yml @@ -1,8 +1,8 @@ format_version: "3.0.2" name: cloudflare_logpush title: Cloudflare Logpush -version: "1.43.5" -description: Collect and parse logs from Cloudflare API with Elastic Agent. +version: "1.44.0" +description: Collect logs from Cloudflare with Elastic Agent. type: integration categories: - security @@ -40,7 +40,7 @@ icons: policy_templates: - name: cloudflare title: Cloudflare Logpush logs - description: Collect logs from Cloudflare. + description: Collect logs from Cloudflare Logpush. inputs: - type: http_endpoint title: Collect Cloudflare Logpush logs via HTTP Endpoint @@ -194,7 +194,7 @@ policy_templates: required: false show_user: false default: "" - description: The name of the AWS region of the end point. Required when using a non-AWS S3-compatible bucket (e.g. Cloudflare R2). + description: The name of the region of the end point. Required when using a non-AWS S3-compatible bucket (e.g. Cloudflare R2). - name: default_region type: text title: Default AWS Region