This post was originally published in December 2023 and was updated in March 2025.

These days, there’s significant industry interest in moving database systems to PostgreSQL. Often, these are greenfield projects focusing on design and architecture. However, decisions are increasingly being made to move from existing platforms, like MySQL, to PostgreSQL for compelling business and economic reasons. Sunsetting a stable, successful stack often relates more to changing realities and opportunities than performance alone.

So, if tasked to migrate a well-performing, albeit potentially aging, MySQL DBMS to a modern PostgreSQL one, how would you approach it? While both systems are robust, they have distinct differences in architecture, features, and even simple things like data type handling that can present challenges. This is where specialized tools become invaluable. This post details a practical example of migrating from MySQL to PostgreSQL using pgloader.

About pgloader: A powerful migration tool

Enter pgloader, a command-line interface (CLI) specifically designed to facilitate the conversion and migration of various DBMSs, including MySQL, into PostgreSQL. pgloader can extract schema and data from MySQL, MS SQL Server, SQLite, and even other PostgreSQL instances. It can perform ETL (Extract, Transform, Load) operations during the migration, although it is a one-time bulk loader rather than a replication tool.
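
pgloader can be driven entirely from the command line with nothing more than a source and a target connection string; the hosts, credentials, and database names below are placeholders:

    pgloader mysql://appuser:secret@mysql-host/appdb \
             postgresql://appuser:secret@pg-host/appdb

For anything beyond a trivial copy, however, a command (load) file gives much finer control, and that is the approach used in the POC below.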

Database migration POC: MySQL to PostgreSQL with pgloader

Objective

Perform a database migration from MySQL to PostgreSQL using pgloader.

Attention: For the sake of brevity, I’ve NOT included the database schema in this blog.

Migration environment setup

The environment for this Proof of Concept (POC) involves a production MySQL 5.x database with hundreds of tables, including large ones with many columns, indexes (some composite), and object names exceeding PostgreSQL’s 63-character limit.

Migration method: Preparing the source

While pgloader offers ETL capabilities, for this migration, the chosen approach was to minimize pgloader transformations by making necessary adjustments directly to the source MySQL database’s schema (column definitions, object names) beforehand.

TIP: Ensure any source database changes are compatible with the existing application stack before proceeding with the migration.

The development process involved:

  1. Performing trial migrations using pgloader to identify necessary changes.

  2. Observing and noting required updates on the MySQL database.

  3. Documenting the process to replicate for the production migration.

  4. Executing the final data migration using the pgloader CLI.


Step-by-step migration process using pgloader

1. PostgreSQL host preparation

  • Install
    • PostgreSQL server
    • MySQL client
    • pgloader CLI
  • Create the target database
  • Perform the database migration: execute pgloader with the appropriate configuration parameters
  • Post database migration (see the SQL sketch after this list)
    • move all tables, indexes, and sequences to the public schema
    • ANALYZE the database
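
The post-migration step relies on the fact that pgloader loads a MySQL database into a PostgreSQL schema named after the source database. Assuming that source database is called dev_db (a placeholder name) and the default public schema is still empty, a minimal cleanup on the PostgreSQL side looks like this:

    -- Run in psql against the target database; 'dev_db' is a placeholder schema name
    DROP SCHEMA public;                    -- the empty, default schema
    ALTER SCHEMA dev_db RENAME TO public;  -- everything pgloader created now lives in public
    ANALYZE;                               -- refresh planner statistics on the new data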

2. MySQL host preparation

  • Upload the database
  • Alter/update the database as required: refer to the Pre-migration section below for more details

pgloader configuration file

A number of iterative passes are made using different pgloader configuration parameters in order to determine what changes are needed on the MySQL database before a final database migration is possible.

The first step is to migrate the database schema without any data in order to identify failures related to data types, column and table names, and foreign key constraints.

My initial pgloader configuration file restricts this first pass to the schema. The default behavior is to execute all migration steps until the process completes; when an ERROR is encountered, it is logged, and the migration continues. Note that it is possible to halt the migration as soon as it encounters an ERROR with the parameter on error stop; refer to the reference documentation for more information.
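
A minimal sketch of such a schema-only command file, with placeholder connection strings and database names, looks like the following:

    LOAD DATABASE
         FROM      mysql://mysqluser:secret@mysql-host/dev_db
         INTO postgresql://pguser:secret@pg-host/dev_db

     WITH schema only,
          include drop,
          create tables,
          create indexes,
          foreign keys;

The schema only option is what limits this pass to table, index, and constraint creation; the remaining options stay close to pgloader’s defaults.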

pgloader invocation

Here’s a very simple execution of the migration process. Note that all messages are logged in the file DEV_migration.log:
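
Assuming the command file above was saved as dev_migration.load (a placeholder name), the run is a single command:

    pgloader --logfile DEV_migration.log dev_migration.load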

Pre-migration: MySQL database updates

The following issues are documented in the order of their discovery in this POC. Remember that your own experiences will, of course, vary:

Issue #1: Incompatible values/data types, MySQL (datetime) -> Postgres (timestamp)

The values in many columns were changed from “0000-00-00 00:00:00” to “1970-01-01 00:00:00”. The values were explicitly updated so that Postgres would accept them. MySQL DBAs will recognize this as a known issue with older MySQL versions; newer versions no longer allow such zero dates by default.
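
The fix is plain SQL on the MySQL source; a representative statement, with hypothetical table and column names, looks like this (alternatively, pgloader’s zero-dates-to-null cast rule can map such values to NULL, if that is acceptable to the application):

    -- MySQL source; 'some_table' and 'updated_at' are hypothetical names
    UPDATE some_table
       SET updated_at = '1970-01-01 00:00:00'
     WHERE updated_at = '0000-00-00 00:00:00';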

Issue #2: Incompatible value/data type, MySQL (time) -> Postgres (timestamp)

The table.column “pro_game_reports.game_time_tomorrow” was switched from data type “time” to “integer” on the source MySQL database.
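
On the MySQL side, that change is a single ALTER TABLE; depending on what the column actually stores, an intermediate UPDATE may be needed to map the old time values onto sensible integers:

    -- MySQL source; converts the column in place (review the existing values first)
    ALTER TABLE pro_game_reports
      MODIFY COLUMN game_time_tomorrow INT;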

Issue #3: MySQL table names are too long

MySQL allows longer object names than Postgres legally accepts, so pgloader must rename them to shorter ones. There is a caveat, however: when two source relations share the same first 63 characters, the shortened names collide, and the attempt to create duplicate names produces errors.
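
A quick way to spot the offenders before migrating is to query information_schema on the MySQL side; 'dev_db' below is a placeholder schema name, and anything returned should be renamed so that the first 63 characters remain unique:

    -- MySQL source; lists table names longer than PostgreSQL's 63-character limit
    SELECT table_name, CHAR_LENGTH(table_name) AS name_length
      FROM information_schema.tables
     WHERE table_schema = 'dev_db'
       AND CHAR_LENGTH(table_name) > 63;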

Issue #4: MySQL index names are too long

Similar to issue #3, pgloader automatically renames indexes as they are recreated in PostgreSQL, precisely to avoid duplicate index names, which are illegal in PostgreSQL. Assuming the original names are unique and within Postgres’s length limit, this renaming can be avoided with the option preserve index names, which keeps the original index names (it is used in the final configuration file below).

Issue #5: MySQL index names duplicated

Many indexes ended up with the same name because multiple columns were combined into composite PRIMARY KEY constraints. Unlike MySQL, PostgreSQL does not permit duplicate index names within a schema. Consequently, these indexes were manually renamed to unique ones:
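
A representative rename on the MySQL source, with hypothetical table, index, and column names, looks like this (MySQL 5.7+ also offers ALTER TABLE ... RENAME INDEX):

    -- MySQL source; table, index, and column names are hypothetical
    ALTER TABLE game_stats
      DROP INDEX ix_report,
      ADD INDEX ix_game_stats_report (report_id);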

Issue #6: Missing data detected in tables resulting in Foreign Key Constraint failures

Two values were found to be missing from table “pro_scouting_reports”, which prevented the creation of a number of foreign key constraints:
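
Rows referencing the missing values can be located on the MySQL side with an anti-join; the child table and column names below are hypothetical, while pro_scouting_reports is the parent table from this POC:

    -- MySQL source; 'game_stats', 'report_id', and the parent key 'id' are hypothetical placeholders
    SELECT DISTINCT c.report_id
      FROM game_stats AS c
      LEFT JOIN pro_scouting_reports AS p ON p.id = c.report_id
     WHERE p.id IS NULL;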

These two INSERT statements permit the creation of the FK constraints:
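
In shape, they amount to two single-row INSERTs that re-create the missing parent values; the ids, column names, and values below are placeholders only:

    -- Placeholder shape only: the real columns and values depend on the application data
    INSERT INTO pro_scouting_reports (id, created_at)
    VALUES (1001, '1970-01-01 00:00:00');

    INSERT INTO pro_scouting_reports (id, created_at)
    VALUES (1002, '1970-01-01 00:00:00');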

ATTENTION: After a successful data migration, it is suggested that all inconsistent records be deleted before adding column and table constraints.

Final pgloader configuration for data migration

After all the passes of identifying and updating the MySQL database have been performed, the final version of the loader configuration file ends up being quite simple; see below:
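
A sketch of that final command file, again with placeholder connection strings and database names, and with the preserve index names option carried over from issue #4:

    LOAD DATABASE
         FROM      mysql://mysqluser:secret@mysql-host/dev_db
         INTO postgresql://pguser:secret@pg-host/dev_db

     WITH include drop,
          create tables,
          create indexes,
          preserve index names,
          foreign keys,
          reset sequences,
          on error stop;

With the source database already cleaned up, no CAST rules or other transformations are required, and on error stop ensures any remaining problem halts the run instead of being skipped.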

Conclusion

Migrating from MySQL to PostgreSQL using pgloader is a viable and powerful approach. As demonstrated in this POC, while pgloader handles much of the heavy lifting, successful migration often requires careful preparation, iterative testing to identify incompatibilities (like data types, object naming conflicts, constraint issues), and potentially making adjustments to the source MySQL database beforehand. Understanding common pitfalls and leveraging pgloader’s configuration options allows for a smoother transition between these two popular database systems.



References

About pgloader

About Percona Monitoring and Management

About MySQL

FAQ: Migrating from MySQL to PostgreSQL using pgloader

Q1: What is pgloader?
A: pgloader is an open-source command-line tool designed to automate the migration of schema and data from various source databases (including MySQL, MS SQL, SQLite) into PostgreSQL. It handles data type casting, schema creation, data loading, index building, and constraint application.

Q2: Can pgloader migrate both schema and data from MySQL to PostgreSQL?
A: Yes, pgloader can perform a full migration, including creating tables, casting data types, loading data, creating indexes, adding foreign keys, and resetting sequences. You can also configure it to perform specific steps only (e.g., SCHEMA ONLY, DATA ONLY).

Q3: What are common problems encountered when using pgloader for MySQL to PostgreSQL migration?
A: Common issues include: incompatible data type values (like MySQL “zero dates”), object names (tables, indexes) exceeding PostgreSQL’s 63-character limit leading to naming conflicts, duplicate index names (allowed implicitly in MySQL, not in PostgreSQL), and foreign key constraint violations due to missing parent data or inconsistent child data. Careful pre-migration checks and potential source data cleanup are often needed.

Q4: Is pgloader free to use?
A: Yes, pgloader is open source software, typically released under the PostgreSQL License, making it free to download, use, and modify.

Q5: Does pgloader require downtime for migration?
A: pgloader performs a bulk load, which typically requires either significant downtime for the source application during the migration or a strategy involving initial load followed by change data capture (CDC) replication to minimize downtime, which pgloader itself doesn’t handle directly. The duration depends heavily on database size and network speed.

1 Comment

Alexey

Hi Robert, we’ve used pgloader for this kind of migration for a few years with Percona 5.x but after upgrading to Percona 8.x pgloader does not work for us anymore:

> MySQL’s utf8mb3 (formerly utf8) not supported: https://github.com/dimitri/pgloader/issues/1592

You may want to re-test your flow.