[pgpool-general: 5476] Re: how can i force selection of primary backend?
PVY
balroga3 at yandex.ru
Tue May 16 22:26:37 JST 2017
Great to hear that, thank you very much!
> I personally did not try RDS, but my guess is your setup will keep on
> letting Pgpool-II recognize node 0 as master, because Pgpool-II always
> starts searching for the primary from node 0. So as long as "SELECT
> pg_is_in_recovery()" against node 0 returns false, you are safe.
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
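[A minimal Python sketch of the search order described in the reply above; this is an illustration, not pgpool's actual code. The stubbed pg_is_in_recovery() answers mirror the three-node setup from this thread.]

```python
# Pgpool-II walks the backends starting at node 0 and treats the first
# node that answers false to "SELECT pg_is_in_recovery()" as the primary.
# The dictionaries below are hypothetical stand-ins for the live query.

def find_primary(in_recovery_by_node):
    """Return the id of the first node reporting pg_is_in_recovery() = false."""
    for node_id in sorted(in_recovery_by_node):
        if not in_recovery_by_node[node_id]:
            return node_id
    return None  # no primary found

# Node 0 (RDS master) and node 2 (Bucardo replica) both answer false,
# but node 0 wins while it is up and reachable:
print(find_primary({0: False, 1: True, 2: False}))  # -> 0

# If node 0 were detached, the Bucardo replica would be picked instead:
print(find_primary({1: True, 2: False}))  # -> 2
```

This is why the reply hedges with "as long as ... node 0 returns false": the guarantee rests on node 0 staying up, not on any explicit priority setting.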
>> Hello.
>>
>> Is there any way I can specify a particular backend to be the primary, so that write requests are routed to it?
>> In short: out of three nodes, two respond with "f" to the "select pg_is_in_recovery();" query.
>> But I want only one of them, #0, to be considered the master by pgpool.
>>
>> I'll try to explain what I mean below.
>>
>> My setup:
>> pgpool-3.6.2
>> three PgSQL servers:
>> - master on Amazon RDS
>> - streaming replica on Amazon RDS
>> - replica on an Amazon EC2 instance, created with Bucardo. It is the same server where pgpool runs.
>>
>> No (automatic) failover of the master is planned. This is the description of the backends in pgpool.conf:
>> ###################################################################################
>> #aws rds master
>> backend_hostname0 = 'aws-master-ip'
>> backend_port0 = 5432
>> backend_weight0 = 20
>> backend_flag0 = 'DISALLOW_TO_FAILOVER'
>>
>> #aws rds slave, postgresql streaming replication
>> backend_hostname1 = 'aws-slave-ip'
>> backend_port1 = 5432
>> backend_weight1 = 40
>> backend_flag1 = 'ALLOW_TO_FAILOVER'
>>
>> #aws ec2 slave, created with Bucardo
>> backend_hostname2 = '127.0.0.1'
>> backend_port2 = 5433
>> backend_weight2 = 40
>> backend_flag2 = 'ALLOW_TO_FAILOVER'
>>
>> ###################################################################################
>>
>> Since the replica created with Bucardo is not, from PgSQL's point of view, running in replication mode, I set
>> sr_check_period = 0.
>> However, the sr_check_user, _password and _database variables are set to valid values.
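[For reference, the health-check settings described above as a pgpool.conf fragment; the user, password and database values are placeholders, not taken from the thread.]

```ini
sr_check_period = 0                  # 0 disables the streaming replication check
sr_check_user = 'some_user'          # placeholder; still set to a valid value
sr_check_password = 'some_password'  # placeholder
sr_check_database = 'postgres'       # placeholder
```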
>> Right now "show pool_nodes;" shows that all is as intended - the real
>> master server is marked as primary:
>>
>> node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
>> ---------+-----------------------------------------------------------------+------+--------+-----------+---------+------------+-------------------+-------------------
>> 0 | aws-master-ip | 5432 | up | 0.200000 | primary | 83327 | false | 0
>> 1 | aws-slave-ip | 5432 | up | 0.400000 | standby | 105534 | true | 0
>> 2 | 127.0.0.1 | 5433 | up | 0.400000 | standby | 112791 | false | 0
>>
>> However, I'm worried: node #2 responds with "f" to "select pg_is_in_recovery();" because, as I stated before, this PostgreSQL server does not consider itself to be a replicating slave. There is no way to attach a streaming-replication slave to a master server running on Amazon RDS, which is why I have to use this complex scheme with Bucardo. I'm afraid that after pgpool restarts it may select node #2 as primary.
>>
>> I was thinking of leaving the sr_check_user, _password and _database values blank so that pgpool would always use node 0 as primary (as stated here: http://www.sraoss.jp/pipermail/pgpool-general/2014-February/002603.html ), but that brings other problems: after a slave failover I got a lot of pgpool zombie processes and my log was flooded with these messages:
>> pgpool[24961]: [338591-1] ERROR: failed to authenticate
>> pgpool[24961]: [338591-2] DETAIL: no PostgreSQL user name specified in startup packet
>>
>> So, to summarize: can I forcefully configure node #0 to be my primary?
--
Pavel mailto:balroga3 at yandex.ru