# Export to data lakes

**URL:** https://heroiclabs.com/docs/satori/concepts/performance-monitoring/export-to-data-lakes/
**Summary:** Data lake integrations send your player and event data to your existing data warehouse on a continuous basis.
**Keywords:** data export, BigQuery, Snowflake, Redshift, S3, Databricks, data warehouse, BI tools, player data export, event data, data pipeline, analytics integration
**Categories:** satori, monitoring

---


# Export to data lakes

Satori provides built-in analytics, but if your studio has its own data infrastructure, you can also export your player and event data directly to your existing data warehouse on a continuous basis.

To configure an integration, open **Settings** in the Satori console and select the **Integrations** tab. In the **Data Lakes** section, each platform has its own configuration tab with the required fields and setup instructions.

Satori supports five platforms:

| Platform | Notes |
|---|---|
| BigQuery | Exports to Google BigQuery. |
| Snowflake | Exports to Snowflake. |
| Redshift | Exports to Amazon Redshift. |
| S3 | Exports to an Amazon S3 bucket. |
| Databricks | Exports to Databricks via an Amazon S3 bucket in Parquet format. Ingest the files into Databricks using its standard cloud object storage ingestion. |

## Invalid events

Invalid events are events that don't match the definitions in your taxonomy. Satori still forwards these rejected events to your data lake so that no data is discarded. Invalid events appear in data lake exports with an additional `_error` field in the event metadata that indicates the rejection reason. Use the [debug guide](../../../guides/debug-invalid-events/) to fix the errors.
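
As an illustration, the sketch below (plain Python, with illustrative field names beyond `_error`) shows one way to separate rejected events after loading an export; the exact column and metadata layout depends on the destination platform:

```python
import json

def split_events(events):
    """Split exported events into valid and invalid ones.

    An event is treated as invalid when its metadata carries the
    `_error` field that Satori adds to rejected events.
    """
    valid, invalid = [], []
    for event in events:
        # Depending on the platform, metadata may arrive as a JSON string.
        metadata = event.get("metadata") or {}
        if isinstance(metadata, str):
            metadata = json.loads(metadata)
        (invalid if "_error" in metadata else valid).append(event)
    return valid, invalid

# Illustrative rejected event as it might appear in an export.
rejected = {
    "name": "level_completed",
    "metadata": {"_error": "event name not defined in taxonomy"},
}
print(split_events([rejected]))
```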

## BigQuery

The **BigQuery** tab enables you to configure the BigQuery adaptor for Satori and displays the setup instructions.

For detailed information on connecting to BigQuery, see [Connect to BigQuery](bigquery/).

{{< screenshot src="images/pages/satori/concepts/monitoring/monitoring_datalake_bigquery_integration.png" alt="BigQuery configuration panel showing GCP Project ID, BigQuery Dataset ID, Events Table Name, GCP service account credentials, and Schema Version fields" >}}
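
Once the integration is active, a quick way to confirm events are arriving is to query the destination table. The sketch below assumes the `google-cloud-bigquery` client and uses placeholder values for the GCP Project ID, Dataset ID, and Events Table Name from the panel above:

```python
from google.cloud import bigquery

# Placeholders: use the GCP Project ID, BigQuery Dataset ID, and Events
# Table Name you entered in the Satori integration panel.
PROJECT_ID = "my-gcp-project"
DATASET_ID = "satori"
TABLE_NAME = "events"

client = bigquery.Client(project=PROJECT_ID)

# Count exported events as a quick sanity check.
query = f"""
    SELECT COUNT(*) AS event_count
    FROM `{PROJECT_ID}.{DATASET_ID}.{TABLE_NAME}`
"""
for row in client.query(query).result():
    print(f"Exported events: {row.event_count}")
```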

## Snowflake

The **Snowflake** tab enables you to configure the Snowflake adaptor for Satori and displays the setup instructions.

For detailed information on connecting to Snowflake, see [Connect to Snowflake](snowflake/).

{{< screenshot src="images/pages/satori/concepts/monitoring/monitoring_datalake_snowflake_integration.png" alt="Snowflake configuration panel showing Table Name and Snowflake URL fields" >}}
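
As a sanity check after setup, you can count the exported rows with the `snowflake-connector-python` client; the sketch below uses placeholder account details and an assumed table name matching the Table Name field above:

```python
import snowflake.connector

# Placeholders: match the account and database details behind the Snowflake
# URL and the Table Name configured in the Satori integration panel.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_warehouse",
    database="my_database",
    schema="my_schema",
)

try:
    cur = conn.cursor()
    # Count exported events as a quick sanity check.
    cur.execute("SELECT COUNT(*) FROM satori_events")
    print("Exported events:", cur.fetchone()[0])
finally:
    conn.close()
```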

## Redshift

The **Redshift** tab enables you to configure the Redshift adaptor for Satori and displays the setup instructions.

{{< screenshot src="images/pages/satori/concepts/monitoring/monitoring_datalake_redshift_integration.png" alt="Redshift configuration panel showing Table Name and Redshift URL fields" >}}
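
Because Redshift speaks the PostgreSQL wire protocol, a standard client such as `psycopg2` can verify the export; the sketch below uses a placeholder cluster endpoint and an assumed table name matching the Table Name field above:

```python
import psycopg2

# Placeholders: match the cluster endpoint behind the Redshift URL and the
# Table Name configured in the Satori integration panel.
conn = psycopg2.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="my_database",
    user="my_user",
    password="my_password",
)

try:
    with conn.cursor() as cur:
        # Count exported events as a quick sanity check.
        cur.execute("SELECT COUNT(*) FROM satori_events")
        print("Exported events:", cur.fetchone()[0])
finally:
    conn.close()
```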

## S3

The **S3** tab enables you to configure the S3 adaptor for Satori and displays the setup instructions.

{{< screenshot src="images/pages/satori/concepts/monitoring/monitoring_datalake_s3_integration.png" alt="S3 configuration panel showing Access Key ID, Secret Access Key, Region, Bucket, Event Partitioning, Real-time, Flush Interval, and Max File Size fields" >}}
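
To confirm that export files are being written, you can list recent objects in the bucket with `boto3`; the credentials and bucket name below are placeholders matching the fields in the panel above:

```python
import boto3

# Placeholders: use the Access Key ID, Secret Access Key, Region, and Bucket
# configured in the Satori integration panel.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="eu-west-1",
)

# List recently written export objects; the key layout depends on the
# Event Partitioning option you selected.
response = s3.list_objects_v2(Bucket="my-satori-exports", MaxKeys=20)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```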

## Databricks

The **Databricks (S3)** tab enables you to configure data lake exports for Databricks via the S3 adaptor. With this integration, your data is exported to your S3 bucket in Parquet format. You can then ingest the data into your Databricks data lake by following the [Databricks documentation](https://docs.databricks.com/aws/en/ingestion/cloud-object-storage/auto-loader).

{{< screenshot src="images/pages/satori/concepts/monitoring/monitoring_datalake_databrick_integration.png" alt="Databricks (S3) configuration panel showing Access Key ID, Secret Access Key, Region, Bucket, Event Partitioning, Real-time, Flush Interval, and Max File Size fields" >}}
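
A minimal ingestion sketch following the Auto Loader pattern from the linked Databricks documentation is shown below; the S3 paths and target table name are placeholders, and the code assumes it runs on Databricks where the `spark` session and `cloudFiles` source are available:

```python
# Runs inside a Databricks notebook or job, where `spark` is provided.
# Placeholders: the bucket path, schema/checkpoint location, and target
# table name are illustrative.
source_path = "s3://my-satori-exports/"
checkpoint_path = "s3://my-satori-exports-meta/checkpoints/events"

(
    spark.readStream.format("cloudFiles")          # Auto Loader source
    .option("cloudFiles.format", "parquet")        # Satori exports Parquet files
    .option("cloudFiles.schemaLocation", checkpoint_path)
    .load(source_path)
    .writeStream.option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)                    # process available files, then stop
    .toTable("satori_events")
)
```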
