awswrangler.s3.merge_datasets(source_path: str, target_path: str, mode: str = 'append', use_threads: bool = True, boto3_session: Optional[boto3.session.Session] = None) → List[str]

Merge a source dataset into a target dataset.


If you are merging tables (S3 datasets + Glue Catalog metadata), remember that in some cases you will also need to update the partition metadata in the Glue Catalog. (e.g. wr.athena.repair_table(table='…', database='…'))
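For instance, a merge of a cataloged, partitioned dataset followed by a partition-metadata repair might look like this (bucket paths, table, and database names are illustrative):

>>> import awswrangler as wr
>>> wr.s3.merge_datasets(
...     source_path="s3://bucket/staging/",
...     target_path="s3://bucket/final/",
...     mode="overwrite_partitions"
... )
>>> wr.athena.repair_table(table="my_table", database="my_db")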


If use_threads=True, the number of threads that will be spawned is obtained from os.cpu_count().

Parameters

  • source_path (str) – S3 Path for the source directory.

  • target_path (str) – S3 Path for the target directory.

  • mode (str, optional) – append (Default), overwrite, overwrite_partitions.

  • use_threads (bool) – True to enable concurrent requests, False to disable multiple threads. If enabled, os.cpu_count() is used as the max number of threads.

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session is used if boto3_session receives None.


Returns

List of new object paths.

Return type

List[str]

Examples
>>> import awswrangler as wr
>>> wr.s3.merge_datasets(
...     source_path="s3://bucket0/dir0/",
...     target_path="s3://bucket1/dir1/",
...     mode="append"
... )
['s3://bucket1/dir1/key0', 's3://bucket1/dir1/key1']
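
A merge with mode="overwrite_partitions" replaces in the target only the partitions that also exist in the source, leaving other target partitions untouched (the paths below are illustrative):

>>> import awswrangler as wr
>>> wr.s3.merge_datasets(
...     source_path="s3://bucket0/dir0/",
...     target_path="s3://bucket1/dir1/",
...     mode="overwrite_partitions"
... )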