awswrangler.db.to_sql

awswrangler.db.to_sql(df: pandas.core.frame.DataFrame, con: sqlalchemy.engine.base.Engine, **pandas_kwargs: Any) → None

Write records stored in a DataFrame to a SQL database.

Supports Redshift, PostgreSQL, and MySQL.

Supports all pandas to_sql() arguments: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html

Note

Redshift: For large DataFrames (1 million+ rows) consider using wr.db.copy_to_redshift() instead.

Note

Redshift: index=False will be forced.

Parameters

df (pandas.DataFrame) – Pandas DataFrame https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html

con (sqlalchemy.engine.Engine) – SQLAlchemy Engine. Please use: wr.db.get_engine(), wr.db.get_redshift_temp_engine() or wr.catalog.get_engine()

pandas_kwargs (Any) – Keyword arguments forwarded to pandas.DataFrame.to_sql().

Returns

None.

Return type

None

Examples

Writing to Redshift with temporary credentials

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.db.to_sql(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     con=wr.db.get_redshift_temp_engine(cluster_identifier="...", user="..."),
...     name="table_name",
...     schema="schema_name"
... )

Writing to Redshift with temporary credentials and using pandas_kwargs

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.db.to_sql(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     con=wr.db.get_redshift_temp_engine(cluster_identifier="...", user="..."),
...     name="table_name",
...     schema="schema_name",
...     if_exists="replace",
...     index=False,
... )
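Because wr.db.to_sql() forwards pandas_kwargs unchanged to pandas.DataFrame.to_sql(), their behavior can be checked locally against any SQLAlchemy engine before running against a real cluster. The sketch below uses an in-memory SQLite engine purely for illustration (wr.db.to_sql() itself targets Redshift, PostgreSQL, and MySQL):

```python
import pandas as pd
import sqlalchemy

# In-memory SQLite engine, used only to illustrate the pandas to_sql()
# keyword arguments that wr.db.to_sql() forwards unchanged.
engine = sqlalchemy.create_engine("sqlite://")

df = pd.DataFrame({"col": [1, 2, 3]})

# if_exists="replace" drops and recreates the table;
# index=False skips writing the DataFrame index as a column.
df.to_sql(name="table_name", con=engine, if_exists="replace", index=False)

# Writing again with if_exists="append" adds rows instead of replacing.
df.to_sql(name="table_name", con=engine, if_exists="append", index=False)

print(pd.read_sql("SELECT COUNT(*) AS n FROM table_name", con=engine)["n"][0])
```

Remember that on Redshift index=False is forced regardless of what is passed here.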

Writing to Redshift from Glue Catalog Connections

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.db.to_sql(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     con=wr.catalog.get_engine(connection="..."),
...     name="table_name",
...     schema="schema_name"
... )