I know it may appear that things have been pretty quiet since we open sourced Replicator, but that isn't the case. We have been actively working on 1.9 and fixing 1.8 Beta as bug reports have come in, including a bug in sequence replication. However, 1.9 is the true milestone release, where we have finally moved away from the original architecture. For those who don't know, the original architecture looked like this:
master-->mcp
          |
  -----------------
  |       |       |
  s0      s1      s2

The mcp would handle all communication and data transfer between the master and the slaves. The idea behind the architecture was to achieve maximum efficiency for the master, meaning that the number of slaves never affected the performance of the master. However, this architecture came at a cost: a single point of failure. If the mcp were ever to crash, replication would stop. When we started down the path of 1.9, Alvaro and Alexey made it very clear to me that this was not acceptable. In return, I made it very clear that any architectural change we made must not suffer from the Slony problem (performance degradation based on the number of subscribers). With everyone clear on the requirements, Alvaro came up with a new architecture. The new architecture calls for a "Forwarder" process and, from a topological view, doesn't look much different from the MCP design. It does, however, offer us a great deal more flexibility and stability. Here is the new architecture:
master-->forwarder0
              |
      -----------------
      |       |       |
      s0      s1      s2

How is this different? In a couple of ways. First, the forwarder is now integrated into the PostgreSQL backend. This removes the mcp binary entirely and greatly decreases code redundancy. Second, if the primary forwarder goes down, a slave can become the forwarder. This removes the single point of failure. You can read more information on the forwarder here.

Another item worth some idle thought with this new architecture is a single monitoring point. With versions of Replicator <= 1.8, you can get a clear picture of which replication transactions have been received and transferred to each slave, but you must access each slave individually to see which transactions have actually been restored.

So what else is coming with Replicator 1.9? Continuing our cleanup of the architecture, we are completely rewriting ROLE and GRANT/REVOKE replication. This is actually the first step toward the other half of the major feature of Replicator 1.9: DDL replication. We are currently investigating the opportunity to have certain DDL operations replicate automatically. The most obvious of these would be:
- CREATE TABLE
- ALTER TABLE
- CREATE DOMAIN
- etc...
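To give a rough idea of what that would mean in practice, here is a purely illustrative sketch (ordinary SQL, nothing Replicator-specific; the domain, table, and column names are made up for the example). With automatic DDL replication, statements like these, run on the master, would be applied on the slaves without having to repeat them on each node by hand:

    -- Plain SQL executed on the master; names are illustrative only.
    CREATE DOMAIN us_postal_code AS text
        CHECK (VALUE ~ '^\d{5}$');

    CREATE TABLE customers (
        id          serial PRIMARY KEY,
        name        text NOT NULL,
        postal_code us_postal_code
    );

    -- A later schema change that would also be picked up automatically.
    ALTER TABLE customers
        ADD COLUMN created_at timestamptz DEFAULT now();

Exactly which statements make the supported list is part of what we are still investigating.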
We decided to pass on replicating CREATE FUNCTION due to the complexity of dealing with externally linked libraries as well as various dependency problems. We may look at this again in the future. For more information on this feature please visit the thread.

If you are interested in testing, you can grab the 1.8 Beta or 1.9 from SVN. You can also get the 1.8 Beta from the pgsqlrpms project.

The next developer meeting is on 01.08.09 at 11:00 AM PST. We are holding it in the #replicator channel on Freenode. All are welcome.