awswrangler.s3.read_csv

awswrangler.s3.read_csv(path: Union[str, List[str]], use_threads: bool = True, last_modified_begin: Optional[datetime.datetime] = None, last_modified_end: Optional[datetime.datetime] = None, boto3_session: Optional[boto3.session.Session] = None, s3_additional_kwargs: Optional[Dict[str, str]] = None, chunksize: Optional[int] = None, dataset: bool = False, **pandas_kwargs) → Union[pandas.core.frame.DataFrame, Iterator[pandas.core.frame.DataFrame]]

Read CSV file(s) from a received S3 prefix or list of S3 object paths.

Note

For partial and gradual reading, use the argument chunksize instead of iterator.

Note

If use_threads=True, the number of threads to spawn will be obtained from os.cpu_count().

Note

The filters last_modified_begin and last_modified_end are applied after listing all S3 files.

Parameters
  • path (Union[str, List[str]]) – S3 prefix (e.g. s3://bucket/prefix) or list of S3 object paths (e.g. [s3://bucket/key0, s3://bucket/key1]).

  • use_threads (bool) – True to enable concurrent requests, False to disable multiple threads. If enabled, os.cpu_count() will be used as the maximum number of threads.

  • last_modified_begin (datetime, optional) – Filter the S3 files by the last modified date of the object. The filter is applied only after listing all S3 files.

  • last_modified_end (datetime, optional) – Filter the S3 files by the last modified date of the object. The filter is applied only after listing all S3 files.

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session will be used if boto3_session receives None.

  • s3_additional_kwargs (Dict[str, str], optional) – Forwarded to s3fs; useful for server-side encryption. https://s3fs.readthedocs.io/en/latest/#serverside-encryption

  • chunksize (int, optional) – If specified, return a generator where chunksize is the number of rows to include in each chunk.

  • dataset (bool) – If True, read a CSV dataset instead of simple file(s), loading all the related partitions as columns.

  • pandas_kwargs – Keyword arguments forwarded to pandas.read_csv(). https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html

Returns

Pandas DataFrame, or a Generator if chunksize is not None.

Return type

Union[pandas.DataFrame, Generator[pandas.DataFrame, None, None]]

Examples

Reading all CSV files under a prefix

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path='s3://bucket/prefix/')

Reading all CSV files under a prefix encrypted with a KMS key

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(
...     path='s3://bucket/prefix/',
...     s3_additional_kwargs={
...         'ServerSideEncryption': 'aws:kms',
...         'SSEKMSKeyId': 'YOUR_KMS_KEY_ARN'
...     }
... )
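
Reading only files modified within a given time window. A minimal sketch: the timestamps below are placeholders, and timezone-aware datetimes are assumed since S3 reports timezone-aware last-modified times.

>>> import datetime
>>> import awswrangler as wr
>>> df = wr.s3.read_csv(
...     path='s3://bucket/prefix/',
...     last_modified_begin=datetime.datetime(2020, 1, 1, tzinfo=datetime.timezone.utc),  # placeholder date
...     last_modified_end=datetime.datetime(2020, 6, 1, tzinfo=datetime.timezone.utc)  # placeholder date
... )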

Reading all CSV files from a list

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path=['s3://bucket/filename0.csv', 's3://bucket/filename1.csv'])
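
Forwarding pandas keyword arguments, a sketch assuming semicolon-delimited files (sep is any valid pandas.read_csv() argument)

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path='s3://bucket/prefix/', sep=';')  # sep is forwarded to pandas.read_csv()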

Reading in chunks of 100 lines

>>> import awswrangler as wr
>>> dfs = wr.s3.read_csv(path=['s3://bucket/filename0.csv', 's3://bucket/filename1.csv'], chunksize=100)
>>> for df in dfs:
...     print(df)  # 100 lines Pandas DataFrame
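
Reading a partitioned CSV dataset. A minimal sketch: the partition layout (e.g. s3://bucket/dataset/year=2020/file.csv) is a hypothetical assumption; with dataset=True the partition values are loaded as columns.

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path='s3://bucket/dataset/', dataset=True)  # 'year' becomes a DataFrame column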