[pgpool-general: 3214] Re: Problem creating database with replication mode (pgpool 3.1.12)

Michel Kirstein kirstein at tuhh.de
Wed Oct 8 17:40:09 JST 2014


Thanks, but that fix only works for sequence values, not for the OIDs handed out to new databases. Since I'm running three nodes, I had already implemented and tested it: it does not fix the OIDs, but it works great for sequences.
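For what it's worth, this is roughly how I compared the two directly on the backends (a sketch; 'mydb' and 'my_seq' are placeholders, the hosts and port are the ones from the pgpool.conf quoted below):

    # the sequence fix from the linked post keeps these two in sync:
    psql -h psql1 -p 5433 -d mydb -c "SELECT last_value FROM my_seq;"
    psql -h psql2 -p 5433 -d mydb -c "SELECT last_value FROM my_seq;"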

Maybe I'm missing something...

-----Original Message-----
From: pgpool-general-bounces at pgpool.net [mailto:pgpool-general-bounces at pgpool.net] On behalf of Jose Baez
Sent: Wednesday, 8 October 2014 10:01
To: pgpool-general at pgpool.net
Subject: [pgpool-general: 3213] Re: Problem creating database with replication mode (pgpool 3.1.12)

Hi, Michel!


Maybe this post fixes your problem with 3 nodes:
http://www.sraoss.jp/pipermail/pgpool-general/2013-November/002342.html



On 7 October 2014 19:15, Michel Kirstein <kirstein at tuhh.de> wrote:


	Hi Jose,
	I think I'm having the same problem:
	http://www.pgpool.net/pipermail/pgpool-general/2014-September/003172.html
	
	I haven't found a proper solution for this yet, but I do have a seemingly stable workaround.
	Here is what I have found so far:
	
	After PostgreSQL starts, you can use "pg_controldata [option] [datadir]" to find out what the "Latest checkpoint's NextOID" is. If you now create a new database and ask for its OID, it should be that same value.
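	For example (a sketch; the data directory is ours, and the OID value is only illustrative):
	
	    pg_controldata /var/lib/postgresql/9.1/main | grep "Latest checkpoint's NextOID"
	    # Latest checkpoint's NextOID:  24576    (example value)
	    createdb testdb
	    psql -d postgres -c "SELECT oid FROM pg_database WHERE datname = 'testdb';"
	    # right after startup this reports the same value (24576)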
	After some traffic (I guess), PostgreSQL pre-allocates an additional full WAL block; you can see that there are significantly more files in the xlog folder than at PostgreSQL startup. If you check "Latest checkpoint's NextOID" now, you will find that it has advanced by roughly a full block.
	If you were to take a backup at this point (read "online recovery"), that backup would start with the advanced NextOID value, while the original node keeps running with the first one.
	I guess this behaviour is meant to protect a recovered database against OID collisions.
	I found that if you shut down PostgreSQL, it clears out its internal memory and saves everything to disk. If you check "Latest checkpoint's NextOID" now, you will find that it has reverted to the first value. My workaround therefore consists of simply blocking all connections to pgpool, restarting all nodes, and then recovering the target node.
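	In script form the workaround looks roughly like this (a sketch for our Debian setup; the pcp credentials and the node id are placeholders, and Pacemaker must not interfere while this runs):
	
	    # 1. block clients: stop pgpool so no new OIDs get handed out
	    /etc/init.d/pgpool2 stop
	    # 2. restart every backend so the shutdown checkpoint writes the real NextOID to disk
	    ssh psql1 "pg_ctlcluster 9.1 main restart"
	    ssh psql2 "pg_ctlcluster 9.1 main restart"
	    # 3. bring pgpool back and recover the target node (node id 1 here)
	    /etc/init.d/pgpool2 start
	    pcp_recovery_node 10 localhost 9898 pcpuser pcppass 1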
	
	I still have to find a function that has the same effect as a full shutdown. I had hoped one of the checkpoint functions would, but no luck so far.
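	What I tried, without success (a plain CHECKPOINT seems to write the pre-allocated counter out instead of reverting it):
	
	    psql -d postgres -c "CHECKPOINT;"
	    pg_controldata /var/lib/postgresql/9.1/main | grep NextOID
	    # NextOID is still the advanced value, not the one from startup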
	
	If you happen to find a way to fix this, please share it...
	
	
	
	On 07.10.2014 at 19:27, Jose Baez wrote:
	

		Hello!
		
		I've got 2 virtual machines (Xen Server 6.2) with Debian Wheezy and:
		
		  * PostgreSQL 9.1
		  * Pgpool 3.1.12
		  * Pacemaker/Corosync with pgpool-HA 2.2
		
		I tweaked the recovery scripts to do "online recovery". It went smoothly
		both ways.
		
		However, when I "CREATE DATABASE" and connect through PgAdmin, Pgpool says:
		
		ERROR: pid 7628: read_kind_from_backend: 1 th kind C does not match
		with master or majority connection kind D
		ERROR: pid 7628: kind mismatch among backends. Possible last query was:
		"SET DateStyle=ISO; SET client_min_messages=notice; SET
		bytea_output=escape; SELECT oid, pg_encoding_to_char(encoding) AS
		encoding, datlastsysoid
		   FROM pg_database WHERE oid = 18540" kind details are: 0[D] 1[C]
		
		I checked the database OID on each node and they are different (oid 18549
		on one and 26670 on the other). Is that normal?
		I created a few tables and they indeed have different OIDs on both
		nodes...
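		(This is how I checked, with 'mydb' standing in for the real database name:
		
		    psql -h psql1 -p 5433 -d postgres -c "SELECT oid, datname FROM pg_database WHERE datname = 'mydb';"
		    psql -h psql2 -p 5433 -d postgres -c "SELECT oid, datname FROM pg_database WHERE datname = 'mydb';"
		)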
		
		If it is normal to have different OIDs on both nodes, why would pgpool
		complain?
		If it should not happen, why would both nodes create an object with a
		different OID? I checked on both nodes that all the pgpool extensions are
		installed (regclass, insert_lock, recovery, remote_start, switch_xlog).
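		(To verify the extensions, I ran something like this against template1 on
		each node; the function list is what I expect the pgpool 3.1 SQL scripts
		to install:
		
		    psql -d template1 -c "SELECT proname FROM pg_proc WHERE proname IN ('pgpool_regclass', 'pgpool_recovery', 'pgpool_remote_start', 'pgpool_switch_xlog');"
		)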
		
		What field/file defines the next OID to use?
		At first I thought the problem was the multiple-statement limitation
		(PgAdmin sends several statements in one query), but it is not: after an
		"online recovery" both nodes assign the same OID to every object, and I
		can connect with PgAdmin smoothly.
		
		(I found a few links about the same problem, but nobody answered:
		http://lists.pgfoundry.org/pipermail/pgpool-general/2009-October/002229.html
		http://lists.pgfoundry.org/pipermail/pgpool-general/2008-June/001071.html )
		
		I used PgAdmin versions 1.16 through 1.20, with the same behaviour.
		
		
		Thank you for your time!
		
		
		_pgpool.conf:_
		
		backend_hostname0 = 'psql1'
		backend_port0 = 5433
		backend_weight0 = 1
		backend_data_directory0 = '/var/lib/postgresql/9.1/main'
		backend_flag0 = 'ALLOW_TO_FAILOVER'
		
		backend_hostname1 = 'psql2'
		backend_port1 = 5433
		backend_weight1 = 1
		backend_data_directory1 = '/var/lib/postgresql/9.1/main'
		backend_flag1 = 'ALLOW_TO_FAILOVER'
		
		connection_cache = on
		
		replication_mode = on
		    replicate_select = off
		    insert_lock = on
		    lobj_lock_table = ''
		
		load_balance_mode = off
		master_slave_mode = off
		parallel_mode = off
		
		

	_______________________________________________
	pgpool-general mailing list
	pgpool-general at pgpool.net
	http://www.pgpool.net/mailman/listinfo/pgpool-general
	




