Among the many reasons to migrate to PostgreSQL, available support is a key driver for many businesses. PostgreSQL is open source software that is also extensible: you can implement missing features to suit your business needs. Migrating to PostgreSQL, however, can be a tricky decision, as it requires planning, testing, the knowledge to handle challenges, and much more.
As part of the PostgreSQL Webinar Series, 2ndQuadrant hosted a webinar on Migration to PostgreSQL, conducted by Gianni Ciolli, Head of Professional Services at 2ndQuadrant. For those who missed the live session, the recording is now available.
Some questions that were submitted after the end of the webinar have been answered by Gianni below:
Question: What would be good practice to move a lot of data with as little downtime as possible?
Answer: Ideally you should use replication software, if available; it will take care of any writes happening during the data copy.
If that’s not an option, then you must inspect your system and find out whether there is a subset of the data that will not be written to for the duration of the data copy window; copy that data first, and finally take a short maintenance window to copy only the remaining data.
You are effectively re-implementing replication, but it only has to work for the duration of your migration, so you don’t have to cover all the general use cases.
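The two-phase approach described above can be sketched as follows. This is a minimal, hypothetical illustration using in-memory dicts to stand in for the source and target databases; the function names and the use of a last-modified timestamp to detect changed rows are assumptions, not part of any particular migration tool.

```python
# Sketch of a two-phase copy: a long-running bulk copy while writes
# continue, followed by a short maintenance window that copies only
# the rows modified since the bulk copy began. Dicts simulate the
# source and target databases; "modified" is an assumed timestamp column.

def bulk_copy(source, target):
    """Phase 1: copy everything; writes to source are still allowed."""
    # Record the snapshot point before copying, so later writes are caught.
    snapshot_time = max((r["modified"] for r in source.values()), default=0)
    for key, row in source.items():
        target[key] = dict(row)
    return snapshot_time

def delta_copy(source, target, since):
    """Phase 2 (maintenance window): copy only rows changed after `since`."""
    changed = {k: r for k, r in source.items() if r["modified"] > since}
    for key, row in changed.items():
        target[key] = dict(row)
    return len(changed)

source = {
    1: {"name": "alice", "modified": 100},
    2: {"name": "bob", "modified": 200},
}
target = {}

t0 = bulk_copy(source, target)                          # long copy, no downtime
source[2] = {"name": "bob-v2", "modified": 300}         # write during the copy
source[3] = {"name": "carol", "modified": 310}          # another write
delta_copy(source, target, since=t0)                    # short downtime window
```

After the delta pass, `target` matches `source` exactly, which is the property the maintenance window exists to guarantee; a real migration would additionally have to handle deletes, which a timestamp column alone cannot detect.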
Question: How can CLOB/BLOB data be migrated to Postgres?
Answer: PostgreSQL has the bytea data type for raw binary data, while the text data type is for text data like CLOB. Both bytea and text allow up to 1GB (transparently compressed).
There is also a separate Large Objects facility, more similar to Oracle’s LOB, which allows for values of up to 4TB in size.
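When moving binary data into a bytea column, PostgreSQL accepts the "hex" input format: a literal beginning with `\x` followed by two hex digits per byte. A small sketch of producing such a literal in Python is below; the helper name is illustrative, and in practice a database driver such as psycopg2 handles this encoding for you.

```python
# Format raw bytes as a PostgreSQL bytea literal in the "hex" input
# format: '\x' followed by two hexadecimal digits per byte.
# Helper name is hypothetical; drivers normally do this automatically.

def to_bytea_hex(data: bytes) -> str:
    return "\\x" + data.hex()

png_header = b"\x89PNG\r\n"
literal = to_bytea_hex(png_header)   # "\x89504e470d0a"
# Usable in SQL as, e.g.:
#   INSERT INTO files (payload) VALUES ('\x89504e470d0a');
```

The older "escape" bytea format also exists, but the hex form is simpler to generate and is the default output format in modern PostgreSQL versions.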
For any questions or comments regarding migration to PostgreSQL, please send an email to [email protected]