awswrangler.s3.read_fwf(path: Union[str, List[str]], use_threads: bool = True, boto3_session: Optional[boto3.session.Session] = None, s3_additional_kwargs: Optional[Dict[str, str]] = None, chunksize: Optional[int] = None, **pandas_kwargs) → Union[pandas.core.frame.DataFrame, Iterator[pandas.core.frame.DataFrame]]

Read fixed-width formatted file(s) from a received S3 prefix or list of S3 object paths.


For partial and gradual reading, use the chunksize argument instead of loading everything at once.


In case of use_threads=True, the number of threads that will be spawned will be obtained from os.cpu_count().

Parameters

  • path (Union[str, List[str]]) – S3 prefix (e.g. s3://bucket/prefix) or list of S3 object paths (e.g. [s3://bucket/key0, s3://bucket/key1]).

  • use_threads (bool) – True to enable concurrent requests, False to disable multiple threads. If enabled, os.cpu_count() will be used as the maximum number of threads.

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session will be used if boto3_session receives None.

  • s3_additional_kwargs (Dict[str, str], optional) – Forwarded to s3fs; useful for server-side encryption.

  • chunksize (int, optional) – If specified, return a generator where chunksize is the number of rows to include in each chunk.

  • pandas_kwargs – Keyword arguments forwarded to pandas.read_fwf().


Returns

Pandas DataFrame or a Generator in case of chunksize != None.

Return type

Union[pandas.DataFrame, Generator[pandas.DataFrame, None, None]]


Examples

Reading all fixed-width formatted (FWF) files under a prefix

>>> import awswrangler as wr
>>> df = wr.s3.read_fwf(path='s3://bucket/prefix/')

Reading all fixed-width formatted (FWF) files under a prefix encrypted with a KMS key

>>> import awswrangler as wr
>>> df = wr.s3.read_fwf(
...     path='s3://bucket/prefix/',
...     s3_additional_kwargs={
...         'ServerSideEncryption': 'aws:kms',
...         'SSEKMSKeyId': 'YOUR_KMS_KEY_ARN'
...     }
... )

Reading all fixed-width formatted (FWF) files from a list

>>> import awswrangler as wr
>>> df = wr.s3.read_fwf(path=['s3://bucket/filename0.txt', 's3://bucket/filename1.txt'])
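Since pandas_kwargs are passed straight through to pandas.read_fwf(), column layout arguments such as widths and names can be supplied in the same call. The effect of those keyword arguments can be previewed locally without S3 (the sample data and column names below are illustrative, not from the library's documentation):

```python
import io

import pandas as pd

# Fixed-width sample: first column is 4 characters wide, second is 6.
data = "ab  123456\ncd  789012\n"

# widths/names are exactly the kind of keyword arguments that
# wr.s3.read_fwf forwards via **pandas_kwargs to pandas.read_fwf().
df = pd.read_fwf(io.StringIO(data), widths=[4, 6], names=["code", "value"])
print(df)
```

Against S3 the equivalent call would be wr.s3.read_fwf(path='s3://bucket/prefix/', widths=[4, 6], names=['code', 'value']).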

Reading in chunks of 100 lines

>>> import awswrangler as wr
>>> dfs = wr.s3.read_fwf(path=['s3://bucket/filename0.txt', 's3://bucket/filename1.txt'], chunksize=100)
>>> for df in dfs:
...     print(df)  # 100 lines Pandas DataFrame
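The generator semantics of chunksize can also be previewed locally: pandas.read_fwf() itself accepts a chunksize argument and yields DataFrames of at most that many rows, mirroring what wr.s3.read_fwf returns when chunksize is set (the inline data below is a made-up illustration):

```python
import io

import pandas as pd

data = "a 1\nb 2\nc 3\n"

# With chunksize=2 and three data rows, iteration yields one
# 2-row DataFrame followed by one 1-row DataFrame.
chunks = list(
    pd.read_fwf(io.StringIO(data), widths=[2, 1], names=["k", "v"], chunksize=2)
)
print(len(chunks))
```

This is why the S3 example above iterates with a for loop rather than operating on a single DataFrame.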