awswrangler.s3.read_fwf(path: Union[str, List[str]], path_suffix: Optional[Union[str, List[str]]] = None, path_ignore_suffix: Optional[Union[str, List[str]]] = None, ignore_empty: bool = True, use_threads: bool = True, last_modified_begin: Optional[datetime.datetime] = None, last_modified_end: Optional[datetime.datetime] = None, boto3_session: Optional[boto3.session.Session] = None, s3_additional_kwargs: Optional[Dict[str, Any]] = None, chunksize: Optional[int] = None, dataset: bool = False, partition_filter: Optional[Callable[[Dict[str, str]], bool]] = None, **pandas_kwargs: Any) → Union[pandas.core.frame.DataFrame, Iterator[pandas.core.frame.DataFrame]]

Read fixed-width formatted file(s) from a received S3 prefix or list of S3 object paths.

This function accepts Unix shell-style wildcards in the path argument: * (matches everything), ? (matches any single character), [seq] (matches any character in seq), [!seq] (matches any character not in seq).
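
As a rough sketch of how these wildcards behave, Python's built-in fnmatch module implements the same Unix shell-style semantics; the S3 keys below are made-up for illustration:

```python
from fnmatch import fnmatchcase

# Hypothetical S3 keys used only to illustrate the wildcard semantics.
keys = [
    "s3://bucket/prefix/file_1.txt",
    "s3://bucket/prefix/file_2.txt",
    "s3://bucket/prefix/file_10.txt",
    "s3://bucket/prefix/_SUCCESS",
]

# '?' matches exactly one character, so file_10.txt is excluded here.
single = [k for k in keys if fnmatchcase(k, "s3://bucket/prefix/file_?.txt")]

# '[!_]*' matches any key whose last path segment does not start with '_'.
no_marker = [k for k in keys if fnmatchcase(k, "s3://bucket/prefix/[!_]*")]
```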


For partial and gradual reading, use the argument chunksize instead of iterator.


In case of use_threads=True, the number of threads that will be spawned is obtained from os.cpu_count().


The filter by last_modified_begin and last_modified_end is applied only after listing all S3 files.
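
A minimal sketch of that post-listing filter, assuming a hypothetical object listing of (key, LastModified) pairs as S3 reports them; the filter prunes which objects are read, not which are listed:

```python
from datetime import datetime, timezone

# Hypothetical listing results: (key, LastModified) pairs.
objects = [
    ("s3://bucket/a.txt", datetime(2021, 1, 10, tzinfo=timezone.utc)),
    ("s3://bucket/b.txt", datetime(2021, 3, 5, tzinfo=timezone.utc)),
    ("s3://bucket/c.txt", datetime(2021, 6, 1, tzinfo=timezone.utc)),
]

begin = datetime(2021, 2, 1, tzinfo=timezone.utc)
end = datetime(2021, 5, 1, tzinfo=timezone.utc)

# Keep only objects whose last-modified date falls inside [begin, end].
kept = [key for key, last_modified in objects if begin <= last_modified <= end]
```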

  • path (Union[str, List[str]]) – S3 prefix (accepts Unix shell-style wildcards) (e.g. s3://bucket/prefix) or list of S3 object paths (e.g. [s3://bucket/key0, s3://bucket/key1]).

  • path_suffix (Union[str, List[str], None]) – Suffix or list of suffixes to be read (e.g. [".txt"]). If None (default), will try to read all files.

  • path_ignore_suffix (Union[str, List[str], None]) – Suffix or list of suffixes for S3 keys to be ignored (e.g. ["_SUCCESS"]). If None (default), will try to read all files.

  • ignore_empty (bool) – Ignore files with 0 bytes.

  • use_threads (bool) – True to enable concurrent requests, False to disable multiple threads. If enabled, os.cpu_count() will be used as the max number of threads.

  • last_modified_begin (datetime, optional) – Filter the S3 files by the last-modified date of the object. The filter is applied only after listing all S3 files.

  • last_modified_end (datetime, optional) – Filter the S3 files by the last-modified date of the object. The filter is applied only after listing all S3 files.

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session will be used if boto3_session receives None.

  • s3_additional_kwargs (Optional[Dict[str, Any]]) – Forwarded to botocore requests; only the "SSECustomerAlgorithm" and "SSECustomerKey" arguments will be considered.

  • chunksize (int, optional) – If specified, return a generator where chunksize is the number of rows to include in each chunk.

  • dataset (bool) – If True, read a FWF dataset instead of simple file(s), loading all the related partitions as columns.

  • partition_filter (Optional[Callable[[Dict[str, str]], bool]]) – Callback function filter to apply on PARTITION columns (PUSH-DOWN filter). This function MUST receive a single argument (Dict[str, str]) where keys are partition names and values are partition values. Partition values will always be strings extracted from S3. This function MUST return a bool: True to read the partition, False to ignore it. Ignored if dataset=False. E.g. lambda x: x["year"] == "2020" and x["month"] == "1"

  • pandas_kwargs – KEYWORD arguments forwarded to pandas.read_fwf(). You can NOT pass pandas_kwargs explicitly; just add valid Pandas arguments in the function call and Wrangler will accept them. E.g. wr.s3.read_fwf(path='s3://bucket/prefix/', widths=[1, 3], names=["c0", "c1"])

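
A minimal sketch of the partition_filter contract, using hypothetical partition values as awswrangler would pass them (keys are partition names; values are always strings extracted from the S3 path):

```python
# Hypothetical partitions discovered under the dataset prefix.
partitions = [
    {"year": "2020", "month": "1"},
    {"year": "2020", "month": "2"},
    {"year": "2019", "month": "1"},
]

def keep(partition: dict) -> bool:
    # Values are always strings, so compare against "2020", not the int 2020.
    return partition["year"] == "2020" and partition["month"] == "1"

# awswrangler would call the filter once per partition and read only
# those for which it returns True.
selected = [p for p in partitions if keep(p)]
```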

Returns

Pandas DataFrame or a Generator in case of chunksize != None.

Return type

Union[pandas.DataFrame, Generator[pandas.DataFrame, None, None]]


Examples

Reading all fixed-width formatted (FWF) files under a prefix

>>> import awswrangler as wr
>>> df = wr.s3.read_fwf(path='s3://bucket/prefix/', widths=[1, 3], names=['c0', 'c1'])

Reading all fixed-width formatted (FWF) files from a list

>>> import awswrangler as wr
>>> df = wr.s3.read_fwf(path=['s3://bucket/0.txt', 's3://bucket/1.txt'], widths=[1, 3], names=['c0', 'c1'])

Reading in chunks of 100 lines

>>> import awswrangler as wr
>>> dfs = wr.s3.read_fwf(
...     path=['s3://bucket/0.txt', 's3://bucket/1.txt'],
...     chunksize=100,
...     widths=[1, 3],
...     names=["c0", "c1"]
... )
>>> for df in dfs:
...     print(df)  # 100 lines Pandas DataFrame

Reading FWF Dataset with PUSH-DOWN filter over partitions

>>> import awswrangler as wr
>>> my_filter = lambda x: x["city"].startswith("new")
>>> df = wr.s3.read_fwf(path, dataset=True, partition_filter=my_filter, widths=[1, 3], names=["c0", "c1"])