awswrangler.s3.read_parquet

awswrangler.s3.read_parquet(path: Union[str, List[str]], filters: Union[List[Tuple], List[List[Tuple]], None] = None, columns: Optional[List[str]] = None, validate_schema: bool = True, chunked: bool = False, dataset: bool = False, categories: List[str] = None, use_threads: bool = True, boto3_session: Optional[boto3.session.Session] = None, s3_additional_kwargs: Optional[Dict[str, str]] = None) → Union[pandas.core.frame.DataFrame, Iterator[pandas.core.frame.DataFrame]]

Read Apache Parquet file(s) from an S3 prefix or a list of S3 object paths.

The concept of Dataset goes beyond the simple idea of files and enables more complex features like partitioning and catalog integration (AWS Glue Catalog).

Note

If use_threads=True, the number of threads that will be spawned is obtained from os.cpu_count().

Parameters
  • path (Union[str, List[str]]) – S3 prefix (e.g. s3://bucket/prefix) or list of S3 objects paths (e.g. [s3://bucket/key0, s3://bucket/key1]).

  • filters (Union[List[Tuple], List[List[Tuple]]], optional) – List of filters to apply, like [[('x', '=', 0), ...], ...].

  • columns (List[str], optional) – Names of columns to read from the file(s).

  • validate_schema (bool) – Check that the individual file schemas are all the same / compatible. Schemas within a folder prefix should all be the same. Disable this check if your files have differing schemas.

  • chunked (bool) – If True, break the data into smaller DataFrames (non-deterministic number of rows). Otherwise, return a single DataFrame with all the data.

  • dataset (bool) – If True, read a Parquet dataset instead of simple file(s), loading all related partitions as columns.

  • categories (List[str], optional) – List of column names that should be returned as pandas.Categorical. Recommended for memory-restricted environments.

  • use_threads (bool) – True to enable concurrent requests, False to disable multiple threads. If enabled, os.cpu_count() will be used as the maximum number of threads.

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session will be used if boto3_session receives None.

  • s3_additional_kwargs (Dict[str, str], optional) – Forwarded to s3fs; useful for server-side encryption. See https://s3fs.readthedocs.io/en/latest/#serverside-encryption

Returns

Pandas DataFrame or, when chunked=True, a Generator of DataFrames.

Return type

Union[pandas.DataFrame, Generator[pandas.DataFrame, None, None]]

Examples

Reading all Parquet files under a prefix

>>> import awswrangler as wr
>>> df = wr.s3.read_parquet(path='s3://bucket/prefix/')

Reading all Parquet files under a prefix encrypted with a KMS key

>>> import awswrangler as wr
>>> df = wr.s3.read_parquet(
...     path='s3://bucket/prefix/',
...     s3_additional_kwargs={
...         'ServerSideEncryption': 'aws:kms',
...         'SSEKMSKeyId': 'YOUR_KMS_KEY_ARN'
...     }
... )

Reading all Parquet files from a list

>>> import awswrangler as wr
>>> df = wr.s3.read_parquet(path=['s3://bucket/filename0.parquet', 's3://bucket/filename1.parquet'])

Reading in chunks

>>> import awswrangler as wr
>>> dfs = wr.s3.read_parquet(path=['s3://bucket/filename0.parquet', 's3://bucket/filename1.parquet'], chunked=True)
>>> for df in dfs:
...     print(df)  # Smaller Pandas DataFrame
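
Reading a partitioned dataset with a partition filter (a sketch, assuming a dataset partitioned by a hypothetical column 'my_col' exists under the prefix)

>>> import awswrangler as wr
>>> df = wr.s3.read_parquet(
...     path='s3://bucket/prefix/',
...     dataset=True,  # load partition values as columns
...     filters=[('my_col', '=', 0)]  # keep only matching partitions
... )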