
S3

This page contains the setup guide and reference information for the S3 source connector.

info

Please note that using cloud storage may incur egress costs. Egress refers to data that is transferred out of the cloud storage system, such as when you download files or access them from a different location. For detailed information on egress costs, please consult the Amazon S3 pricing guide.

  • Access to the S3 bucket containing the files to replicate.
  • For private buckets, an AWS account with the ability to grant permissions to read from the bucket.

If you are syncing from a private bucket, you need to authenticate the connection. This can be done either by using an IAM User (with AWS Access Key ID and Secret Access Key) or an IAM Role (with Role ARN). Begin by creating a policy with the necessary permissions:

  1. Log in to your Amazon AWS account and open the IAM console.
  2. In the IAM dashboard, select Policies, then click Create Policy.
  3. Select the JSON tab, then paste the following JSON into the Policy editor (be sure to substitute in your bucket name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::{your-bucket-name}/*",
        "arn:aws:s3:::{your-bucket-name}"
      ]
    }
  ]
}
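If you provision access programmatically, the same policy document can be built in code. The sketch below is purely illustrative (the `bucket_read_policy` helper is our own name, not part of any AWS or Airbyte tooling); it mirrors the JSON above, including the bucket-level `s3:ListBucket` permission that must accompany the object-level one.

```python
import json


def bucket_read_policy(bucket_name: str) -> dict:
    """Build the minimal read-only IAM policy document for a bucket.

    Includes both the object-level (GetObject) and the required
    bucket-level (ListBucket) permissions.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}/*",
                    f"arn:aws:s3:::{bucket_name}",
                ],
            }
        ],
    }


print(json.dumps(bucket_read_policy("my-data-bucket"), indent=2))
```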
note

At this time, object-level permissions alone are not sufficient to successfully authenticate the connection. Please ensure you include the bucket-level permissions as provided in the example above.

  4. Give your policy a descriptive name, then click Create policy.
  1. In the IAM dashboard, click Users. Select an existing IAM user or create a new one by clicking Add users.
  2. If you are using an existing IAM user, click the Add permissions dropdown menu and select Add permissions. If you are creating a new user, you will be taken to the Permissions screen after selecting a name.
  3. Select Attach policies directly, then find and check the box for your new policy. Click Next, then Add permissions.
  4. After successfully creating your user, select the Security credentials tab and click Create access key. You will be prompted to select a use case and add optional tags to your access key. Click Create access key to generate the keys.
caution

Your Secret Access Key will only be visible once upon creation. Be sure to copy and store it securely for future use.

For more information on managing your access keys, please refer to the official AWS documentation.

note

S3 authentication using an IAM role is not supported on the OSS platform.

note

S3 authentication using an IAM role must be enabled by a member of the Airbyte team. If you'd like to use this feature, please contact the Sales team for more information.

  1. In the IAM dashboard, click Roles, then Create role.

  2. Choose the AWS account trusted entity type.

  3. Set up a trust relationship for the role. This allows the Airbyte instance's AWS account to assume this role. You will also need to specify an external ID, which is a secret key that the trusting service (Airbyte) and the trusted role (the role you're creating) both know. This ID is used to prevent the "confused deputy" problem. The External ID should be your Airbyte workspace ID, which can be found in the URL of your workspace page. Edit the trust relationship policy to include the external ID:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::094410056844:user/delegated_access_user"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "{your-airbyte-workspace-id}"
        }
      }
    }
  ]
}
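If you template this trust policy for multiple workspaces, it can be generated in code. This is an illustrative sketch (the `airbyte_trust_policy` helper is our own name); the principal ARN is the delegated access user from the example above, and the workspace ID is substituted into the `sts:ExternalId` condition.

```python
import json


def airbyte_trust_policy(workspace_id: str) -> dict:
    """Build the trust relationship document for an Airbyte IAM role.

    The external ID condition guards against the confused deputy problem.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::094410056844:user/delegated_access_user"
                },
                "Action": "sts:AssumeRole",
                "Condition": {
                    "StringEquals": {"sts:ExternalId": workspace_id}
                },
            }
        ],
    }


print(json.dumps(airbyte_trust_policy("{your-airbyte-workspace-id}"), indent=2))
```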
  4. Complete the role creation and note the Role ARN.
  1. Log into your Airbyte Cloud account.
  2. Click Sources and then click + New source.
  3. On the Set up the source page, select S3 from the Source type dropdown.
  4. Enter a name for the S3 connector.
  5. Enter the name of the Bucket containing your files to replicate.
  6. Add a stream
    1. Choose the File Format
    2. In the Format box, use the dropdown menu to select the format of the files you'd like to replicate. The supported formats are CSV, Parquet, Avro and JSONL. Toggling the Optional fields button within the Format box will allow you to enter additional configurations based on the selected format. For a detailed breakdown of these settings, refer to the File Format section below.
    3. Give a Name to the stream
    4. (Optional) Enter the Globs that dictate which files are synced. These are glob-style patterns that allow Airbyte to match the specific files to replicate. If you are replicating all the files within your bucket, use ** as the pattern. For more precise pattern matching options, refer to the Globs section below.
    5. (Optional) Modify the Days To Sync If History Is Full value. This gives you control of the lookback window that we will use to determine which files to sync if the state history is full. Details are in the State section below.
    6. (Optional) If you want to enforce a specific schema, you can enter an Input schema. By default, this value is set to {} and the schema will be automatically inferred from the file(s) you are replicating. For details on providing a custom schema, refer to the User Schema section.
    7. (Optional) Select the Schemaless option to skip all validation of the records against a schema. If this option is selected, the schema will be {"data": "object"} and all downstream data will be nested in a "data" field. This is a good option if the schema of your records changes frequently.
    8. (Optional) Select a Validation Policy to tell Airbyte how to handle records that do not match the schema. You may choose to emit the record anyway (fields that aren't present in the schema may not arrive at the destination), skip the record altogether, or wait until the next discovery (which will happen in the next 24 hours).
  7. To authenticate your private bucket:
    • If using an IAM role, enter the AWS Role ARN.
    • If using IAM user credentials, fill the AWS Access Key ID and AWS Secret Access Key fields with the appropriate credentials.

All other fields are optional and can be left empty. Refer to the S3 Provider Settings section below for more information on each field.

  1. Navigate to the Airbyte Open Source dashboard.
  2. Click Sources and then click + New source.
  3. On the Set up the source page, select S3 from the Source type dropdown.
  4. Enter a name for the S3 connector.
info

The raw file replication feature has the following requirements and limitations:

  • Supported Airbyte Versions:
    • Cloud: All Workspaces
    • OSS / Enterprise: v1.2.0 or later
  • Max File Size: 1GB per file
  • Supported Destinations:
    • S3: v1.4.0 or later

Copy raw files without parsing their contents. Bits are copied into the destination exactly as they appeared in the source. Recommended for use with unstructured text data, non-text files, and compressed files.

Format options will not be taken into account. Instead, files will be transferred to the file-based destination without parsing underlying data.

The S3 source connector supports the following sync modes:

Feature | Supported?
Full Refresh Sync | Yes
Incremental Sync | Yes
Replicate Incremental Deletes | No
Replicate Multiple Files (pattern matching) | Yes
Replicate Multiple Streams (distinct tables) | Yes
Namespaces | No

There are no predefined streams. Streams are defined by the contents of your bucket.

Compression | Supported?
Gzip | Yes
Zip | Yes
Bzip2 | Yes
Lzma | No
Xz | No
Snappy | No

Please let us know any specific compressions you'd like to see support for next!

(tl;dr -> path pattern syntax using wcmatch.glob. GLOBSTAR and SPLIT flags are enabled.)

This connector can sync multiple files by using glob-style patterns, rather than requiring a specific path for every file. This enables:

  • Referencing many files with just one pattern, e.g. ** would indicate every file in the bucket.
  • Referencing future files that don't exist yet (and therefore don't have a specific path).

You must provide a path pattern. You can also provide many patterns split with | for more complex directory layouts.

Each path pattern is a reference from the root of the bucket, so don't include the bucket name in the pattern(s).

Some example patterns:

  • ** : match everything.
  • **/*.csv : match all files with specific extension.
  • myFolder/**/*.csv : match all csv files anywhere under myFolder.
  • */** : match everything at least one folder deep.
  • */*/*/** : match everything at least three folders deep.
  • **/file.*|**/file : match every file called "file" with any extension (or no extension).
  • x/*/y/* : match all files that sit in folder x -> any folder -> folder y.
  • **/prefix*.csv : match all csv files with specific prefix.
  • **/prefix*.parquet : match all parquet files with specific prefix.

Let's look at a specific example, matching the following bucket layout:

myBucket
-> log_files
-> some_table_files
   -> part1.csv
   -> part2.csv
-> images
-> more_table_files
   -> part3.csv
-> extras
   -> misc
      -> another_part1.csv

We want to pick up part1.csv, part2.csv and part3.csv (excluding another_part1.csv for now). We could do this a few different ways:

  • We could pick up every csv file called "partX" with the single pattern **/part*.csv.
  • To be a bit more robust, we could use the dual pattern some_table_files/*.csv|more_table_files/*.csv to pick up relevant files only from those exact folders.
  • We could achieve the above in a single pattern by using the pattern *table_files/*.csv. This could however cause problems in the future if new unexpected folders started being created.
  • We can also recursively wildcard, so adding the pattern extras/**/*.csv would pick up any csv files nested in folders below "extras", such as "extras/misc/another_part1.csv".

As you can probably tell, there are many ways to achieve the same goal with path patterns. We recommend using a pattern that ensures clarity and is robust against future additions to the directory structure.
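For illustration, the pattern semantics above can be approximated in a few lines of Python. This is a simplified sketch, not the connector's actual implementation (which uses wcmatch.glob); it ignores edge cases such as `**/` matching zero directories.

```python
import re


def glob_to_regex(pattern: str) -> str:
    """Translate one glob pattern into a regex string (simplified).

    ** crosses folder boundaries; * stays within a single folder.
    """
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            parts.append(".*")
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return "".join(parts)


def matches(patterns: str, key: str) -> bool:
    """Mimic the SPLIT flag: | separates alternative patterns."""
    return any(re.fullmatch(glob_to_regex(p), key) for p in patterns.split("|"))


layout = [
    "some_table_files/part1.csv",
    "some_table_files/part2.csv",
    "more_table_files/part3.csv",
    "extras/misc/another_part1.csv",
]
# **/part*.csv picks up the three partX files but not another_part1.csv
print([k for k in layout if matches("**/part*.csv", k)])
```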

To perform incremental syncs, Airbyte syncs files from oldest to newest. Each file that's synced (up to 10,000 files) is added as an entry in a "history" section of the connection's state message. Once the history is full, we drop the oldest entries and only read files that were last modified between the date of the newest file in history and Days To Sync If History Is Full days prior.
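The lookback computation can be sketched as follows. This is a hypothetical helper for illustration, not the connector's actual code; it assumes the history maps file paths to ISO-format last-modified timestamps.

```python
from datetime import datetime, timedelta

HISTORY_LIMIT = 10_000  # max files kept in the state's history section


def cutoff_when_history_full(history, days_to_sync):
    """Oldest last-modified time still considered once history is full.

    history: dict mapping file path -> ISO-format last-modified timestamp.
    days_to_sync: the "Days To Sync If History Is Full" setting.
    """
    newest = max(datetime.fromisoformat(ts) for ts in history.values())
    return newest - timedelta(days=days_to_sync)
```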

Providing a schema allows for more control over the output of this stream. Without a provided schema, columns and datatypes will be inferred from the first created file in the bucket matching your path pattern and suffix. This will probably be fine in most cases, but there may be situations where you want to enforce a schema instead, e.g.:

note

Without a provided schema, all columns of a CSV file will be inferred as strings.

  • You only care about a specific known subset of the columns. The other columns would all still be included, but packed into the _ab_additional_properties map.
  • Your initial dataset is quite small (in terms of number of records), and you think the automatic type inference from this sample might not be representative of the data in the future.
  • You want to purposely define types for every column.
  • You know the names of columns that will be added to future data and want to include these in the core schema as columns rather than have them appear in the _ab_additional_properties map.

Or any other reason! The schema must be provided as valid JSON as a map of {"column": "datatype"} where each datatype is one of:

  • string
  • number
  • integer
  • object
  • array
  • boolean
  • null

For example:

  • {"id": "integer", "location": "string", "longitude": "number", "latitude": "number"}
  • {"username": "string", "friends": "array", "information": "object"}
note

Please note that the S3 source connector used to infer schemas from all the available files and then merge them to create a superset schema. Starting from version 2.0.0, the schema is inferred from the first file found only. The first file we consider is the oldest one written to the prefix.
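A quick way to sanity-check a custom schema before pasting it into the connector is to validate it against the allowed datatypes listed above. This is an illustrative helper, not part of Airbyte.

```python
import json

# The datatypes the Input schema accepts, per the list above.
ALLOWED_TYPES = {"string", "number", "integer", "object", "array", "boolean", "null"}


def validate_input_schema(raw: str) -> dict:
    """Parse a user-provided Input schema string and check every datatype."""
    schema = json.loads(raw)
    if not isinstance(schema, dict):
        raise ValueError("Input schema must be a JSON object of column -> datatype")
    for column, datatype in schema.items():
        if datatype not in ALLOWED_TYPES:
            raise ValueError(f"Unsupported datatype {datatype!r} for column {column!r}")
    return schema


print(validate_input_schema('{"id": "integer", "location": "string"}'))
```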

  • AWS Access Key ID: One half of the required credentials for accessing a private bucket.
  • AWS Secret Access Key: The other half of the required credentials for accessing a private bucket.
  • Endpoint: An optional parameter that enables the use of non-Amazon S3 compatible services. If you are using the default Amazon service, leave this field blank.
  • Start Date: An optional parameter that marks a starting date and time in UTC for data replication. Any files that have not been modified since this specified date/time will not be replicated. Use the provided datepicker (recommended) or enter the desired date programmatically in the format YYYY-MM-DDTHH:mm:ssZ. Leaving this field blank will replicate data from all files that have not been excluded by the Path Pattern and Path Prefix.
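The Start Date filter can be sketched like this (hypothetical helper for illustration; the connector performs the equivalent last-modified comparison internally):

```python
from datetime import datetime, timezone
from typing import Optional

# The YYYY-MM-DDTHH:mm:ssZ format described above, as a strptime pattern.
START_DATE_FORMAT = "%Y-%m-%dT%H:%M:%SZ"


def should_replicate(last_modified: datetime, start_date: Optional[str]) -> bool:
    """Return True when a file passes the optional Start Date filter."""
    if not start_date:  # blank field: no date-based filtering
        return True
    start = datetime.strptime(start_date, START_DATE_FORMAT).replace(tzinfo=timezone.utc)
    return last_modified >= start
```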

Since CSV files are effectively plain text, providing specific reader options is often required for correct parsing of the files. These settings are applied when a CSV is created or exported so please ensure that this process happens consistently over time.

  • Header Definition: How headers are defined. User Provided assumes the CSV has no header row and uses the headers you provide. Autogenerated assumes the CSV has no header row and generates headers named f{i}, where i is the column index starting from 0. Otherwise, the default behavior is to use the header row from the CSV file. To autogenerate or provide column names for a CSV that does have a header row, set the "Skip rows before header" option to ignore the header row.
  • Delimiter: Even though CSV is an acronym for Comma Separated Values, it is used more generally as a term for flat file data that may or may not be comma separated. The delimiter field lets you specify which character acts as the separator. To use tab-delimiters, you can set this value to \t. By default, this value is set to ,.
  • Double Quote: This option determines whether two quotes in a quoted CSV value denote a single quote in the data. Set to True by default.
  • Encoding: Some data may use a different character set (typically when different alphabets are involved). See the list of allowable encodings here. By default, this is set to utf8.
  • Escape Character: An escape character can be used to prefix a reserved character and ensure correct parsing. A commonly used character is the backslash (\). For example, given the following data:
Product,Description,Price
Jeans,"Navy Blue, Bootcut, 34\"",49.99

The backslash (\) is used directly before the second double quote (") to indicate that it is not the closing quote for the field, but rather a literal double quote character that should be included in the value (in this example, denoting the size of the jeans in inches: 34" ).

Leaving this field blank (default option) will disallow escaping.

  • False Values: A set of case-sensitive strings that should be interpreted as false values.
  • Null Values: A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
  • Quote Character: In some cases, data values may contain instances of reserved characters (like a comma, if that's the delimiter). CSVs can handle this by wrapping a value in defined quote characters so that on read it can parse it correctly. By default, this is set to ".
  • Skip Rows After Header: The number of rows to skip after the header row.
  • Skip Rows Before Header: The number of rows to skip before the header row.
  • Strings Can Be Null: Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
  • True Values: A set of case-sensitive strings that should be interpreted as true values.
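To see how several of these options interact, here is a small experiment with Python's standard csv module, parsing the jeans row from the Escape Character example above. The stdlib parser is used purely for illustration; it is not the connector's parser.

```python
import csv
import io

# The jeans row from the Escape Character example: the backslash escapes
# the literal double quote inside the quoted Description field.
data = 'Product,Description,Price\nJeans,"Navy Blue, Bootcut, 34\\"",49.99\n'

reader = csv.reader(
    io.StringIO(data),
    delimiter=",",      # Delimiter
    quotechar='"',      # Quote Character
    escapechar="\\",    # Escape Character
    doublequote=False,  # Double Quote off, since an escape character is used
)
header, row = list(reader)
print(row)  # the escaped quote survives as a literal " in the value
```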

Apache Parquet is a column-oriented data storage format of the Apache Hadoop ecosystem. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. At the moment, partitioned parquet datasets are unsupported. The following settings are available:

  • Convert Decimal Fields to Floats: Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
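The precision loss is easy to demonstrate with Python's standard decimal module: a decimal with more significant digits than a 64-bit float can hold no longer round-trips once converted.

```python
from decimal import Decimal

# More significant digits than a double-precision float can represent.
exact = Decimal("123456789.123456789123456789")
as_float = float(exact)

print(exact)     # full precision preserved by Decimal
print(as_float)  # trailing digits are lost in the float representation
```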

The Avro parser uses the Fastavro library. The following settings are available:

  • Convert Double Fields to Strings: Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision, because there can be a loss of precision when handling floating point numbers.

There are currently no options for JSONL parsing.

warning

The Document File Type Format is currently an experimental feature and not subject to SLAs. Use at your own risk.

The Document File Type Format is a special format that allows you to extract text from Markdown, TXT, PDF, Word and Powerpoint documents. If selected, the connector will extract text from the documents and output it as a single field named content. The document_key field will hold a unique identifier for the processed file which can be used as a primary key. The content of the document will contain markdown formatting converted from the original file format. Each file matching the defined glob pattern needs to be a Markdown (.md), PDF (.pdf), Word (.docx) or Powerpoint (.pptx) file.

One record will be emitted for each document. Keep in mind that large files can emit large records that might not fit into every destination as each destination has different limitations for string fields.

This connector utilizes the open source Unstructured library to perform OCR and text extraction from PDFs and MS Word files, as well as from embedded tables and images. You can read more about the parsing logic in the Unstructured docs and you can learn about other Unstructured tools and services at www.unstructured.io.

Config fields reference

Type | Property name
array<object> | streams
string | bucket
string | start_date
object | delivery_method
string | aws_access_key_id
string | role_arn
string | aws_secret_access_key
string | endpoint
string | region_name
string | dataset
string | path_pattern
object | format
string | schema
object | provider
Version | Date | Pull Request | Subject
4.10.1 | 2024-11-12 | 48346 | Implement file-transfer capabilities
4.9.2 | 2024-11-04 | 48259 | Update dependencies
4.9.1 | 2024-10-29 | 47038 | Update dependencies
4.9.0 | 2024-10-17 | 46973 | Promote release candidate.
4.9.0-rc.1 | 2024-10-14 | 46298 | Migrate to CDK v5
4.8.5 | 2024-10-12 | 46511 | Update dependencies
4.8.4 | 2024-09-28 | 46131 | Update dependencies
4.8.3 | 2024-09-21 | 45757 | Update dependencies
4.8.2 | 2024-09-14 | 45504 | Update dependencies
4.8.1 | 2024-09-07 | 45257 | Update dependencies
4.8.0 | 2024-09-03 | 44908 | Migrate to CDK v3
4.7.8 | 2024-08-31 | 45009 | Update dependencies
4.7.7 | 2024-08-24 | 44732 | Update dependencies
4.7.6 | 2024-08-19 | 44380 | Update dependencies
4.7.5 | 2024-08-12 | 43868 | Update dependencies
4.7.4 | 2024-08-10 | 43667 | Update dependencies
4.7.3 | 2024-08-03 | 43083 | Update dependencies
4.7.2 | 2024-07-27 | 42814 | Update dependencies
4.7.1 | 2024-07-20 | 42205 | Update dependencies
4.7.0 | 2024-07-16 | 41934 | Update to 3.5.1 CDK
4.6.3 | 2024-07-13 | 41934 | Update dependencies
4.6.2 | 2024-07-10 | 41503 | Update dependencies
4.6.1 | 2024-07-09 | 40067 | Update dependencies
4.6.0 | 2024-06-26 | 39573 | Improve performance: update to Airbyte CDK 2.0.0
4.5.17 | 2024-06-06 | 39214 | [autopull] Upgrade base image to v1.2.2
4.5.16 | 2024-05-29 | 38674 | Avoid error on empty stream when running discover
4.5.15 | 2024-05-20 | 38252 | Replace AirbyteLogger with logging.Logger
4.5.14 | 2024-05-09 | 38090 | Bump python-cdk version to include CSV field length fix
4.5.13 | 2024-05-03 | 37776 | Update airbyte-cdk to fix the discovery command issue
4.5.12 | 2024-04-11 | 37001 | Update airbyte-cdk to flush print buffer for every message
4.5.11 | 2024-03-14 | 36160 | Bump python-cdk version to include CSV tab delimiter fix
4.5.10 | 2024-03-11 | 35955 | Pin transformers transitive dependency
4.5.9 | 2024-03-06 | 35857 | Bump poetry.lock to upgrade transitive dependency
4.5.8 | 2024-03-04 | 35808 | Use cached AWS client
4.5.7 | 2024-02-23 | 34895 | Run incremental syncs with concurrency
4.5.6 | 2024-02-21 | 35246 | Fixes bug that occurred when creating CSV streams with tab delimiter.
4.5.5 | 2024-02-18 | 35392 | Add support filtering by start date
4.5.4 | 2024-02-15 | 35055 | Temporarily revert concurrency
4.5.3 | 2024-02-12 | 35164 | Manage dependencies with Poetry.
4.5.2 | 2024-02-06 | 34930 | Bump CDK version to fix issue when SyncMode is missing from catalog
4.5.1 | 2024-02-02 | 31701 | Add region support
4.5.0 | 2024-02-01 | 34591 | Run full refresh syncs concurrently
4.4.1 | 2024-01-30 | 34665 | Pin moto & CDK version
4.4.0 | 2024-01-12 | 33818 | Add IAM Role Authentication
4.3.1 | 2024-01-04 | 33937 | Prepare for airbyte-lib
4.3.0 | 2023-12-14 | 33411 | Bump CDK version to auto-set primary key for document file streams and support raw txt files
4.2.4 | 2023-12-06 | 33187 | Bump CDK version to hide source-defined primary key
4.2.3 | 2023-11-16 | 32608 | Improve document file type parser
4.2.2 | 2023-11-20 | 32677 | Only read files with ".zip" extension as zipped files
4.2.1 | 2023-11-13 | 32357 | Improve spec schema
4.2.0 | 2023-11-02 | 32109 | Fix docs; add HTTPS validation for S3 endpoint; fix coverage
4.1.4 | 2023-10-30 | 31904 | Update CDK
4.1.3 | 2023-10-25 | 31654 | Reduce image size
4.1.2 | 2023-10-23 | 31383 | Add handling NoSuchBucket error
4.1.1 | 2023-10-19 | 31601 | Base image migration: remove Dockerfile and use the python-connector-base image
4.1.0 | 2023-10-17 | 31340 | Add reading files inside zip archive
4.0.5 | 2023-10-16 | 31209 | Add experimental Markdown/PDF/Docx file format
4.0.4 | 2023-09-18 | 30476 | Remove streams.*.file_type from source-s3 configuration
4.0.3 | 2023-09-13 | 30387 | Bump Airbyte-CDK version to improve messages for record parse errors
4.0.2 | 2023-09-07 | 28639 | Always show S3 Key fields
4.0.1 | 2023-09-06 | 30217 | Migrate inference error to config errors and avoid sentry alerts
4.0.0 | 2023-09-05 | 29757 | New version using file-based CDK
3.1.11 | 2023-08-30 | 29986 | Add config error for conversion error
3.1.10 | 2023-08-29 | 29943 | Add config error for arrow invalid error
3.1.9 | 2023-08-23 | 29753 | Feature parity update for V4 release
3.1.8 | 2023-08-17 | 29520 | Update legacy state and error handling
3.1.7 | 2023-08-17 | 29505 | v4 StreamReader and Cursor fixes
3.1.6 | 2023-08-16 | 29480 | Update Pyarrow to version 12.0.1
3.1.5 | 2023-08-15 | 29418 | Avoid duplicate syncs when migrating from v3 to v4
3.1.4 | 2023-08-15 | 29382 | Handle legacy path prefix & path pattern
3.1.3 | 2023-08-05 | 29028 | Update v3 & v4 connector to handle either state message
3.1.2 | 2023-07-29 | 28786 | Add a codepath for using the file-based CDK
3.1.1 | 2023-07-26 | 28730 | Add human readable error message and improve validation for encoding field when it is empty
3.1.0 | 2023-06-26 | 27725 | License Update: Elv2
3.0.3 | 2023-06-23 | 27651 | Handle Bucket Access Errors
3.0.2 | 2023-06-22 | 27611 | Fix start date
3.0.1 | 2023-06-22 | 27604 | Add logging for file reading
3.0.0 | 2023-05-02 | 25127 | Remove ab_additional column; Use platform-handled schema evolution
2.2.0 | 2023-05-10 | 25937 | Add support for Parquet Dataset
2.1.4 | 2023-05-01 | 25361 | Parse nested avro schemas
2.1.3 | 2023-05-01 | 25706 | Remove minimum block size for CSV check
2.1.2 | 2023-04-18 | 25067 | Handle block size related errors; fix config validator
2.1.1 | 2023-04-18 | 25010 | Refactor filter logic
2.1.0 | 2023-04-10 | 25010 | Add start_date field to filter files based on LastModified option
2.0.4 | 2023-03-23 | 24429 | Call check with a little block size to save time and memory.
2.0.3 | 2023-03-17 | 24178 | Support legacy datetime format for the period of migration, fix time-zone conversion.
2.0.2 | 2023-03-16 | 24157 | Return empty schema if discover finds no files; Do not infer extra data types when user defined schema is applied.
2.0.1 | 2023-03-06 | 23195 | Fix datetime format string
2.0.0 | 2023-03-14 | 23189 | Infer schema based on one file instead of all the files
1.0.2 | 2023-03-02 | 23669 | Made Advanced Reader Options and Advanced Options truly optional for CSV format
1.0.1 | 2023-02-27 | 23502 | Fix error handling
1.0.0 | 2023-02-17 | 23198 | Fix Avro schema discovery
0.1.32 | 2023-02-07 | 22500 | Speed up discovery
0.1.31 | 2023-02-08 | 22550 | Validate CSV read options and convert options
0.1.30 | 2023-01-25 | 21587 | Make sure spec works as expected in UI
0.1.29 | 2023-01-19 | 21604 | Handle OSError: skip unreachable keys and keep working on accessible ones. Warn a customer
0.1.28 | 2023-01-10 | 21210 | Update block size for json file format
0.1.27 | 2022-12-08 | 20262 | Check config settings for CSV file format
0.1.26 | 2022-11-08 | 19006 | Add virtual-hosted-style option
0.1.24 | 2022-10-28 | 18602 | Wrap errors into AirbyteTracedException pointing to a problem file
0.1.23 | 2022-10-10 | 17800 | Deleted use_ssl and verify_ssl_cert flags and hardcoded to True
0.1.23 | 2022-10-10 | 17991 | Fix pyarrow to JSON schema type conversion for arrays
0.1.22 | 2022-09-28 | 17304 | Migrate to per-stream state
0.1.21 | 2022-09-20 | 16921 | Upgrade pyarrow
0.1.20 | 2022-09-12 | 16607 | Fix for reading jsonl files containing nested structures
0.1.19 | 2022-09-13 | 16631 | Adjust column type to a broadest one when merging two or more json schemas
0.1.18 | 2022-08-01 | 14213 | Add support for jsonl format files.
0.1.17 | 2022-07-21 | 14911 | "decimal" type added for parquet
0.1.16 | 2022-07-13 | 14669 | Fixed bug when extra columns appeared to be non-present in master schema
0.1.15 | 2022-05-31 | 12568 | Fixed possible case of files being missed during incremental syncs
0.1.14 | 2022-05-23 | 11967 | Increase unit test coverage up to 90%
0.1.13 | 2022-05-11 | 12730 | Fixed empty options issue
0.1.12 | 2022-05-11 | 12602 | Added support for Avro file format
0.1.11 | 2022-04-30 | 12500 | Improve input configuration copy
0.1.10 | 2022-01-28 | 8252 | Refactoring of files' metadata
0.1.9 | 2022-01-06 | 9163 | Work-around for web-UI, backslash - t converts to tab for format.delimiter field.
0.1.7 | 2021-11-08 | 7499 | Remove base-python dependencies
0.1.6 | 2021-10-15 | 6615 & 7058 | Memory and performance optimisation. Advanced options for CSV parsing.
0.1.5 | 2021-09-24 | 6398 | Support custom non Amazon S3 services
0.1.4 | 2021-08-13 | 5305 | Support of Parquet format
0.1.3 | 2021-08-04 | 5197 | Fixed bug where sync could hang indefinitely on schema inference
0.1.2 | 2021-08-02 | 5135 | Fixed bug in spec so it displays in UI correctly
0.1.1 | 2021-07-30 | 4990 | Fixed documentation url in source definition
0.1.0 | 2021-07-30 | 4990 | Created S3 source connector