<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Dec 19, 2020 at 5:48 AM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp">ishii@sraoss.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Ok, let me clarify. You think that 4.2.0's watchdog code has a problem here:<br>
<a href="https://github.com/pgpool/pgpool2/blob/V4_2_0_RPM/src/watchdog/watchdog.c#L2244" rel="noreferrer" target="_blank">https://github.com/pgpool/pgpool2/blob/V4_2_0_RPM/src/watchdog/watchdog.c#L2244</a><br>
and you think it causes your issue (node 0 dead).<br>
<br>
So you wonder if commit 70e0b2b93715094823102c9b1879e83fa75c7913<br>
would solve the issue.<br></blockquote><div><br></div><div>Yes, that's correct. This commit should solve the mentioned issue.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I am not sure because according to the commit message, it was intended<br>
to fix the problem of wd_cli command (which is new in 4.2). Probably<br>
you'd better ask the commit author (Muhammad Usama).  I have added his<br>
email address in the Cc: field. </blockquote><div><br></div><div>Both wd_cli and the lifecheck mechanism use the same code path; the commit message only mentions wd_cli.</div><div>Looking at the email, I think this is a very critical issue and we should do a point release for 4.2.</div><div>The issue was caused by an oversight in the "simplifying watchdog configuration" feature introduced in 4.2, so the older versions should not have the same problem.</div>
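<div><br></div><div>Until that point release is out, the workaround Joe describes below should hold up: configure a dummy watchdog node 0 that is permanently down, and allow quorum at half the votes. A minimal sketch, reusing Joe's own addresses and values from this thread (so treat them as his, not as something verified here):</div><div><br></div><div>hostname0 = '192.168.40.71'    # dummy node, permanently down</div><div>wd_port0 = 9000</div><div>pgpool_port0 = 9999</div><div># real pgpool nodes follow as hostname1, hostname2, hostname3 ...</div><div>enable_consensus_with_half_votes = on    # 4 nodes with 1 always dead</div><div><br></div><div>Thanks</div><div>Best regards</div><div>Muhammad Usama</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">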
<br>
Best regards,<br>
--<br>
Tatsuo Ishii<br>
SRA OSS, Inc. Japan<br>
English: <a href="http://www.sraoss.co.jp/index_en.php" rel="noreferrer" target="_blank">http://www.sraoss.co.jp/index_en.php</a><br>
Japanese:<a href="http://www.sraoss.co.jp" rel="noreferrer" target="_blank">http://www.sraoss.co.jp</a><br>
<br>
> Hi Tatsuo,<br>
> <br>
> I am using the RPM version which, according to GitHub, I think still has the node 0 issue; for example:<br>
> <br>
> <a href="https://github.com/pgpool/pgpool2/blob/V4_2_0_RPM/src/watchdog/watchdog.c#L2244" rel="noreferrer" target="_blank">https://github.com/pgpool/pgpool2/blob/V4_2_0_RPM/src/watchdog/watchdog.c#L2244</a><br>
> <br>
> Do you think this could cause the issue?<br>
> <br>
> If I configure with node 0 always being dead:<br>
> <br>
> hostname0 = '192.168.40.71'<br>
> wd_port0 = 9000<br>
> pgpool_port0 = 9999<br>
> <br>
> hostname1 = '192.168.40.66'<br>
>                                     # Host name or IP address of pgpool node<br>
>                                     # for watchdog connection<br>
>                                     # (change requires restart)<br>
> wd_port1 = 9000<br>
>                                     # Port number for watchdog service<br>
>                                     # (change requires restart)<br>
> pgpool_port1 = 9999<br>
>                                     # Port number for pgpool<br>
>                                     # (change requires restart)<br>
> <br>
> <br>
> hostname2 = '192.168.40.67'<br>
> wd_port2 = 9000<br>
> pgpool_port2 = 9999<br>
> <br>
> hostname3 = '192.168.40.64'<br>
> wd_port3 = 9000<br>
> pgpool_port3 = 9999<br>
> <br>
> <br>
> I need to set: enable_consensus_with_half_votes = on<br>
> <br>
> since this is 4 nodes with 1 always dead.<br>
> <br>
> With that, it works okay and as expected.<br>
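> <br>
> (If I read the quorum rules right - an assumption on my part - the<br>
> arithmetic is: with 4 configured watchdog nodes and the default<br>
> enable_consensus_with_half_votes = off, quorum needs 4/2 + 1 = 3 live<br>
> nodes. With node 0 permanently dead we sit at exactly 3, so any further<br>
> failure would lose quorum; with the setting on, 4/2 = 2 live nodes are<br>
> still enough.)<br>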
> <br>
> Is there an RPM testing build of the commit from 4 days ago that I could try?<br>
> <br>
> <br>
> Joe Madden<br>
> Senior Systems Engineer<br>
> D 01412224666      <br>
> <a href="mailto:joe.madden@mottmac.com" target="_blank">joe.madden@mottmac.com</a><br>
> <br>
> <br>
> -----Original Message-----<br>
> From: Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp" target="_blank">ishii@sraoss.co.jp</a>> <br>
> Sent: 18 December 2020 13:19<br>
> To: Joe Madden <<a href="mailto:Joe.Madden@mottmac.com" target="_blank">Joe.Madden@mottmac.com</a>><br>
> Cc: <a href="mailto:pgpool-general@pgpool.net" target="_blank">pgpool-general@pgpool.net</a><br>
> Subject: Re: [pgpool-general: 7372] Re: Watchdog New Primary & Standby shutdown when Node 0 Fails<br>
> <br>
>> Does anyone know if this commit would cause the issue:<br>
>> <br>
>> <a href="https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fpgpool%2Fpgpool2%2Fcommit%2F70e0b2b93715094823102c9b1879e83fa75c7913&amp;data=04%7C01%7CJoe.Madden%40mottmac.com%7C9f4d36ccbdb54a49a5f708d8a3577bae%7Ca2bed0c459574f73b0c2a811407590fb%7C0%7C0%7C637438943434767785%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=cUHVth1MTeFCeJcvdSaGeQyrqEs%2FHNKsv2BflU%2B5F1A%3D&amp;reserved=0" rel="noreferrer" target="_blank">https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fpgpool%2Fpgpool2%2Fcommit%2F70e0b2b93715094823102c9b1879e83fa75c7913&amp;data=04%7C01%7CJoe.Madden%40mottmac.com%7C9f4d36ccbdb54a49a5f708d8a3577bae%7Ca2bed0c459574f73b0c2a811407590fb%7C0%7C0%7C637438943434767785%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=cUHVth1MTeFCeJcvdSaGeQyrqEs%2FHNKsv2BflU%2B5F1A%3D&amp;reserved=0</a><br>
> <br>
> Assuming you are using 4.2.0 from the log file, this commit surely does<br>
> not affect your issue because it was committed after 4.2.0 was out.<br>
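> <br>
> If you want to double-check against a clone of the repository, git can<br>
> list the release tags that contain a commit, e.g.:<br>
> <br>
> git tag --contains 70e0b2b93715094823102c9b1879e83fa75c7913<br>
> <br>
> If no 4.2.x tag shows up, the fix is not in any released build yet.<br>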
> <br>
> Best regards,<br>
> --<br>
> Tatsuo Ishii<br>
> SRA OSS, Inc. Japan<br>
> English: <a href="https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.sraoss.co.jp%2Findex_en.php&amp;data=04%7C01%7CJoe.Madden%40mottmac.com%7C9f4d36ccbdb54a49a5f708d8a3577bae%7Ca2bed0c459574f73b0c2a811407590fb%7C0%7C0%7C637438943434767785%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=fbbKXiccBv8wMdeA9wZ0MxWaYsTJu3TcNaPJGfGpE4U%3D&amp;reserved=0" rel="noreferrer" target="_blank">https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.sraoss.co.jp%2Findex_en.php&amp;data=04%7C01%7CJoe.Madden%40mottmac.com%7C9f4d36ccbdb54a49a5f708d8a3577bae%7Ca2bed0c459574f73b0c2a811407590fb%7C0%7C0%7C637438943434767785%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=fbbKXiccBv8wMdeA9wZ0MxWaYsTJu3TcNaPJGfGpE4U%3D&amp;reserved=0</a><br>
> Japanese:<a href="https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.sraoss.co.jp%2F&amp;data=04%7C01%7CJoe.Madden%40mottmac.com%7C9f4d36ccbdb54a49a5f708d8a3577bae%7Ca2bed0c459574f73b0c2a811407590fb%7C0%7C0%7C637438943434767785%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=6iqv00RyeLKQDW3FHAg3dsRXdOW75S23oT7uAnb0ufM%3D&amp;reserved=0" rel="noreferrer" target="_blank">https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.sraoss.co.jp%2F&amp;data=04%7C01%7CJoe.Madden%40mottmac.com%7C9f4d36ccbdb54a49a5f708d8a3577bae%7Ca2bed0c459574f73b0c2a811407590fb%7C0%7C0%7C637438943434767785%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=6iqv00RyeLKQDW3FHAg3dsRXdOW75S23oT7uAnb0ufM%3D&amp;reserved=0</a><br>
> <br>
>> Joe Madden<br>
>> Senior Systems Engineer<br>
>> D 01412224666<br>
>> <a href="mailto:joe.madden@mottmac.com" target="_blank">joe.madden@mottmac.com</a><mailto:<a href="mailto:joe.madden@mottmac.com" target="_blank">joe.madden@mottmac.com</a>><br>
>> <br>
>> From: Joe Madden<br>
>> Sent: 18 December 2020 09:45<br>
>> To: <a href="mailto:pgpool-general@pgpool.net" target="_blank">pgpool-general@pgpool.net</a><br>
>> Subject: RE: Watchdog New Primary & Standby shutdown when Node 0 Fails<br>
>> <br>
>> Hi All,<br>
>> <br>
>> I moved Node 0 and Node 2 around (switched the node ids and updated the pgpool config) and found the same issue on Node 2 (now 0).<br>
>> <br>
>> It's got something to do with the node id and the relevant configs; I still don't know whether it's a bug or not.<br>
>> <br>
>> Joe.<br>
>> <br>
>> Joe Madden<br>
>> Senior Systems Engineer<br>
>> D 01412224666<br>
>> <a href="mailto:joe.madden@mottmac.com" target="_blank">joe.madden@mottmac.com</a><mailto:<a href="mailto:joe.madden@mottmac.com" target="_blank">joe.madden@mottmac.com</a>><br>
>> <br>
>> From: Joe Madden<br>
>> Sent: 17 December 2020 18:57<br>
>> To: <a href="mailto:pgpool-general@pgpool.net" target="_blank">pgpool-general@pgpool.net</a><br>
>> Subject: Watchdog New Primary & Standby shutdown when Node 0 Fails<br>
>> <br>
>> Hi List,<br>
>> <br>
>> I've got a PGpool instance with three nodes:<br>
>> <br>
>> |Pg Pool Node 0 (192.168.40.66)| Pg Pool Node 1 (192.168.40.67)| Pg Pool Node 2 (192.168.40.64)|<br>
>> <br>
>> connected through a switch to the back-end databases:<br>
>> | PostgreSQL 12 Primary | PostgreSQL 12 Secondary |<br>
>> <br>
>> <br>
>> This works fine; standby nodes 1 & 2 can be shut down, restarted, etc. without an issue. When node 0 is shut down, one of the child processes fails and causes nodes 1 and 2 to shut down about 60 seconds after the failover.<br>
>> <br>
>> I feel like this could be a bug. Our configurations on all three nodes are identical bar the weight parameter and, of course, the node id.<br>
>> <br>
>> Config:<br>
>> <br>
>> # ----------------------------<br>
>> # pgPool-II configuration file<br>
>> # ----------------------------<br>
>> #<br>
>> # This file consists of lines of the form:<br>
>> #<br>
>> # name = value<br>
>> #<br>
>> # Whitespace may be used. Comments are introduced with "#" anywhere on a line.<br>
>> # The complete list of parameter names and allowed values can be found in the<br>
>> # pgPool-II documentation.<br>
>> #<br>
>> # This file is read on server startup and when the server receives a SIGHUP<br>
>> # signal. If you edit the file on a running system, you have to SIGHUP the<br>
>> # server for the changes to take effect, or use "pgpool reload". Some<br>
>> # parameters, which are marked below, require a server shutdown and restart to<br>
>> # take effect.<br>
>> #<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # BACKEND CLUSTERING MODE<br>
>> # Choose one of: 'streaming_replication', 'native_replication',<br>
>> # 'logical_replication', 'slony', 'raw' or 'snapshot_isolation'<br>
>> # (change requires restart)<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> backend_clustering_mode = 'streaming_replication'<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # CONNECTIONS<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> # - pgpool Connection Settings -<br>
>> <br>
>> listen_addresses = '*'<br>
>>                                    # Host name or IP address to listen on:<br>
>>                                    # '*' for all, '' for no TCP/IP connections<br>
>>                                    # (change requires restart)<br>
>> port = 9999<br>
>>                                    # Port number<br>
>>                                    # (change requires restart)<br>
>> socket_dir = '/tmp'<br>
>>                                    # Unix domain socket path<br>
>>                                    # The Debian package defaults to<br>
>>                                    # /var/run/postgresql<br>
>>                                    # (change requires restart)<br>
>> reserved_connections = 0<br>
>>                                    # Number of reserved connections.<br>
>>                                    # Pgpool-II does not accept connections if over<br>
>>                                    # num_init_children - reserved_connections.<br>
>> <br>
>> <br>
>> # - pgpool Communication Manager Connection Settings -<br>
>> <br>
>> pcp_listen_addresses = '*'<br>
>>                                    # Host name or IP address for pcp process to listen on:<br>
>>                                    # '*' for all, '' for no TCP/IP connections<br>
>>                                    # (change requires restart)<br>
>> pcp_port = 9898<br>
>>                                    # Port number for pcp<br>
>>                                    # (change requires restart)<br>
>> pcp_socket_dir = '/tmp'<br>
>>                                    # Unix domain socket path for pcp<br>
>>                                    # The Debian package defaults to<br>
>>                                    # /var/run/postgresql<br>
>>                                    # (change requires restart)<br>
>> listen_backlog_multiplier = 2<br>
>>                                    # Set the backlog parameter of listen(2) to<br>
>>                                    # num_init_children * listen_backlog_multiplier.<br>
>>                                    # (change requires restart)<br>
>> serialize_accept = off<br>
>>                                    # whether to serialize accept() call to avoid thundering herd problem<br>
>>                                    # (change requires restart)<br>
>> <br>
>> # - Backend Connection Settings -<br>
>> <br>
>> backend_hostname0 = '192.168.40.61'<br>
>>                                    # Host name or IP address to connect to for backend 0<br>
>> backend_port0 = 5432<br>
>>                                    # Port number for backend 0<br>
>> backend_weight0 = 1<br>
>>                                    # Weight for backend 0 (only in load balancing mode)<br>
>> backend_data_directory0 = '/var/lib/pgsql/12/data/'<br>
>>                                    # Data directory for backend 0<br>
>> backend_flag0 = 'ALLOW_TO_FAILOVER'<br>
>>                                    # Controls various backend behavior<br>
>>                                    # ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER<br>
>>                                    # or ALWAYS_PRIMARY<br>
>> backend_application_name0 = '192.168.40.61'<br>
>>                                    # walsender's application_name, used for "show pool_nodes" command<br>
>> <br>
>> # - Backend Connection Settings -<br>
>> <br>
>> backend_hostname1 = '192.168.40.60'<br>
>>                                    # Host name or IP address to connect to for backend 1<br>
>> backend_port1 = 5432<br>
>>                                    # Port number for backend 1<br>
>> backend_weight1 = 1<br>
>>                                    # Weight for backend 1 (only in load balancing mode)<br>
>> backend_data_directory1 = '/var/lib/pgsql/12/data/'<br>
>>                                    # Data directory for backend 1<br>
>> backend_flag1 = 'ALLOW_TO_FAILOVER'<br>
>>                                    # Controls various backend behavior<br>
>>                                    # ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER<br>
>>                                    # or ALWAYS_PRIMARY<br>
>> backend_application_name1 = '192.168.40.60'<br>
>>                                    # walsender's application_name, used for "show pool_nodes" command<br>
>> <br>
>> <br>
>> # - Authentication -<br>
>> <br>
>> enable_pool_hba = on<br>
>>                                    # Use pool_hba.conf for client authentication<br>
>> pool_passwd = 'pool_passwd'<br>
>>                                    # File name of pool_passwd for md5 authentication.<br>
>>                                    # "" disables pool_passwd.<br>
>>                                    # (change requires restart)<br>
>> authentication_timeout = 1min<br>
>>                                    # Delay in seconds to complete client authentication<br>
>>                                    # 0 means no timeout.<br>
>> <br>
>> allow_clear_text_frontend_auth = off<br>
>>                                    # Allow Pgpool-II to use clear text password authentication<br>
>>                                    # with clients, when pool_passwd does not<br>
>>                                    # contain the user password<br>
>> <br>
>> # - SSL Connections -<br>
>> <br>
>> ssl = off<br>
>>                                    # Enable SSL support<br>
>>                                    # (change requires restart)<br>
>> #ssl_key = 'server.key'<br>
>>                                    # SSL private key file<br>
>>                                    # (change requires restart)<br>
>> #ssl_cert = 'server.crt'<br>
>>                                    # SSL public certificate file<br>
>>                                    # (change requires restart)<br>
>> #ssl_ca_cert = ''<br>
>>                                    # Single PEM format file containing<br>
>>                                    # CA root certificate(s)<br>
>>                                    # (change requires restart)<br>
>> #ssl_ca_cert_dir = ''<br>
>>                                    # Directory containing CA root certificate(s)<br>
>>                                    # (change requires restart)<br>
>> #ssl_crl_file = ''<br>
>>                                    # SSL certificate revocation list file<br>
>>                                    # (change requires restart)<br>
>> <br>
>> ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'<br>
>>                                    # Allowed SSL ciphers<br>
>>                                    # (change requires restart)<br>
>> ssl_prefer_server_ciphers = off<br>
>>                                    # Use server's SSL cipher preferences,<br>
>>                                    # rather than the client's<br>
>>                                    # (change requires restart)<br>
>> ssl_ecdh_curve = 'prime256v1'<br>
>>                                    # Name of the curve to use in ECDH key exchange<br>
>> ssl_dh_params_file = ''<br>
>>                                    # Name of the file containing Diffie-Hellman parameters used<br>
>>                                    # for so-called ephemeral DH family of SSL cipher.<br>
>> #ssl_passphrase_command=''<br>
>>                                    # Sets an external command to be invoked when a passphrase<br>
>>                                    # for decrypting an SSL file needs to be obtained<br>
>>                                    # (change requires restart)<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # POOLS<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> # - Concurrent session and pool size -<br>
>> <br>
>> num_init_children = 32<br>
>>                                    # Number of concurrent sessions allowed<br>
>>                                    # (change requires restart)<br>
>> max_pool = 4<br>
>>                                    # Number of connection pool caches per connection<br>
>>                                    # (change requires restart)<br>
>> <br>
>> # - Life time -<br>
>> <br>
>> child_life_time = 5min<br>
>>                                    # Pool exits after being idle for this many seconds<br>
>> child_max_connections = 0<br>
>>                                    # Pool exits after receiving that many connections<br>
>>                                    # 0 means no exit<br>
>> connection_life_time = 0<br>
>>                                    # Connection to backend closes after being idle for this many seconds<br>
>>                                    # 0 means no close<br>
>> client_idle_limit = 0<br>
>>                                    # Client is disconnected after being idle for that many seconds<br>
>>                                    # (even inside an explicit transaction!)<br>
>>                                    # 0 means no disconnection<br>
>> <br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # LOGS<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> # - Where to log -<br>
>> <br>
>> log_destination = 'stderr'<br>
>>                                    # Where to log<br>
>>                                    # Valid values are combinations of stderr,<br>
>>                                    # and syslog. Default to stderr.<br>
>> <br>
>> # - What to log -<br>
>> <br>
>> log_line_prefix = '%t: pid %p: ' # printf-style string to output at beginning of each log line.<br>
>> <br>
>> log_connections = off<br>
>>                                    # Log connections<br>
>> log_disconnections = off<br>
>>                                    # Log disconnections<br>
>> log_hostname = off<br>
>>                                    # Hostname will be shown in ps status<br>
>>                                    # and in logs if connections are logged<br>
>> log_statement = off<br>
>>                                    # Log all statements<br>
>> log_per_node_statement = off<br>
>>                                    # Log all statements<br>
>>                                    # with node and backend information<br>
>> log_client_messages = off<br>
>>                                    # Log any client messages<br>
>> log_standby_delay = 'if_over_threshold'<br>
>>                                    # Log standby delay<br>
>>                                    # Valid values are combinations of always,<br>
>>                                    # if_over_threshold, none<br>
>> <br>
>> # - Syslog specific -<br>
>> <br>
>> syslog_facility = 'LOCAL0'<br>
>>                                    # Syslog local facility. Default to LOCAL0<br>
>> syslog_ident = 'pgpool'<br>
>>                                    # Syslog program identification string<br>
>>                                    # Default to 'pgpool'<br>
>> <br>
>> # - Debug -<br>
>> <br>
>> #log_error_verbosity = default # terse, default, or verbose messages<br>
>> <br>
>> #client_min_messages = notice # values in order of decreasing detail:<br>
>>                                         # debug5<br>
>>                                         # debug4<br>
>>                                         # debug3<br>
>>                                         # debug2<br>
>>                                         # debug1<br>
>>                                         # log<br>
>>                                         # notice<br>
>>                                         # warning<br>
>>                                         # error<br>
>> <br>
>> log_min_messages = debug5 # values in order of decreasing detail:<br>
>>                                         # debug5<br>
>>                                         # debug4<br>
>>                                         # debug3<br>
>>                                         # debug2<br>
>>                                         # debug1<br>
>>                                         # info<br>
>>                                         # notice<br>
>>                                         # warning<br>
>>                                         # error<br>
>>                                         # log<br>
>>                                         # fatal<br>
>>                                         # panic<br>
>> <br>
>> # This is used when logging to stderr:<br>
>> #logging_collector = off # Enable capturing of stderr<br>
>>                                         # into log files.<br>
>>                                         # (change requires restart)<br>
>> <br>
>> # -- Only used if logging_collector is on ---<br>
>> <br>
>> #log_directory = '/tmp/pgpool_log' # directory where log files are written,<br>
>>                                         # can be absolute<br>
>> #log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'<br>
>>                                         # log file name pattern,<br>
>>                                         # can include strftime() escapes<br>
>> <br>
>> #log_file_mode = 0600 # creation mode for log files,<br>
>>                                         # begin with 0 to use octal notation<br>
>> <br>
>> #log_truncate_on_rotation = off # If on, an existing log file with the<br>
>>                                         # same name as the new log file will be<br>
>>                                         # truncated rather than appended to.<br>
>>                                         # But such truncation only occurs on<br>
>>                                         # time-driven rotation, not on restarts<br>
>>                                         # or size-driven rotation. Default is<br>
>>                                         # off, meaning append to existing files<br>
>>                                         # in all cases.<br>
>> <br>
>> #log_rotation_age = 1d # Automatic rotation of logfiles will<br>
>>                                         # happen after that much (minutes) time.<br>
>>                                         # 0 disables time based rotation.<br>
>> #log_rotation_size = 10MB # Automatic rotation of logfiles will<br>
>>                                         # happen after that much (KB) log output.<br>
>>                                         # 0 disables size based rotation.<br>
>> #------------------------------------------------------------------------------<br>
>> # FILE LOCATIONS<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> pid_file_name = '/var/run/pgpool/pgpool.pid'<br>
>>                                    # PID file name<br>
>>                                    # Can be specified as relative to the<br>
>>                                    # location of pgpool.conf file or<br>
>>                                    # as an absolute path<br>
>>                                    # (change requires restart)<br>
>> logdir = '/tmp'<br>
>>                                    # Directory of pgPool status file<br>
>>                                    # (change requires restart)<br>
>> <br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # CONNECTION POOLING<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> connection_cache = on<br>
>>                                    # Activate connection pools<br>
>>                                    # (change requires restart)<br>
>> <br>
>>                                    # Semicolon separated list of queries<br>
>>                                    # to be issued at the end of a session<br>
>>                                    # The default is for 8.3 and later<br>
>> reset_query_list = 'ABORT; DISCARD ALL'<br>
>>                                    # The following one is for 8.2 and before<br>
>> #reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'<br>
>> <br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # REPLICATION MODE<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> replicate_select = off<br>
>>                                    # Replicate SELECT statements<br>
>>                                    # when in replication mode<br>
>>                                    # replicate_select is higher priority than<br>
>>                                    # load_balance_mode.<br>
>> <br>
>> insert_lock = off<br>
>>                                    # Automatically locks a dummy row or a table<br>
>>                                    # with INSERT statements to keep SERIAL data<br>
>>                                    # consistency<br>
>>                                    # Without SERIAL, no lock will be issued<br>
>> lobj_lock_table = ''<br>
>>                                    # When rewriting lo_creat command in<br>
>>                                    # replication mode, specify table name to<br>
>>                                    # lock<br>
>> <br>
>> # - Degenerate handling -<br>
>> <br>
>> replication_stop_on_mismatch = off<br>
>>                                    # On disagreement with the packet kind<br>
>>                                    # sent from backend, degenerate the node<br>
>>                                    # which is most likely "minority"<br>
>>                                    # If off, just force to exit this session<br>
>> <br>
>> failover_if_affected_tuples_mismatch = off<br>
>>                                    # On disagreement with the number of affected<br>
>>                                    # tuples in UPDATE/DELETE queries, then<br>
>>                                    # degenerate the node which is most likely<br>
>>                                    # "minority".<br>
>>                                    # If off, just abort the transaction to<br>
>>                                    # keep the consistency<br>
>> <br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # LOAD BALANCING MODE<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> load_balance_mode = on<br>
>>                                    # Activate load balancing mode<br>
>>                                    # (change requires restart)<br>
>> ignore_leading_white_space = on<br>
>>                                    # Ignore leading white spaces of each query<br>
>> read_only_function_list = ''<br>
>>                                    # Comma separated list of function names<br>
>>                                    # that don't write to database<br>
>>                                    # Regexp are accepted<br>
>> write_function_list = ''<br>
>>                                    # Comma separated list of function names<br>
>>                                    # that write to database<br>
>>                                    # Regexp are accepted<br>
>>                                    # If both read_only_function_list and write_function_list<br>
>>                                    # is empty, function's volatile property is checked.<br>
>>                                    # If it's volatile, the function is regarded as a<br>
>>                                    # writing function.<br>
>> <br>
>> primary_routing_query_pattern_list = ''<br>
>>                                    # Semicolon separated list of query patterns<br>
>>                                    # that should be sent to primary node<br>
>>                                    # Regexp are accepted<br>
>>                                    # valid for streaming replication mode only.<br>
>> <br>
>> database_redirect_preference_list = ''<br>
>>                                    # comma separated list of pairs of database and node id.<br>
>>                                    # example: postgres:primary,mydb[0-4]:1,mydb[5-9]:2'<br>
>>                                    # valid for streaming replication mode only.<br>
>> <br>
>> app_name_redirect_preference_list = ''<br>
>>                                    # comma separated list of pairs of app name and node id.<br>
>>                                    # example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'<br>
>>                                    # valid for streaming replication mode only.<br>
>> allow_sql_comments = off<br>
>>                                    # if on, ignore SQL comments when judging if load balance or<br>
>>                                    # query cache is possible.<br>
>>                                    # If off, SQL comments effectively prevent the judgment<br>
>>                                    # (pre 3.4 behavior).<br>
>> <br>
>> disable_load_balance_on_write = 'transaction'<br>
>>                                    # Load balance behavior when write query is issued<br>
>>                                    # in an explicit transaction.<br>
>>                                    #<br>
>>                                    # Valid values:<br>
>>                                    #<br>
>>                                    # 'transaction' (default):<br>
>>                                    # if a write query is issued, subsequent<br>
>>                                    # read queries will not be load balanced<br>
>>                                    # until the transaction ends.<br>
>>                                    #<br>
>>                                    # 'trans_transaction':<br>
>>                                    # if a write query is issued, subsequent<br>
>>                                    # read queries in an explicit transaction<br>
>>                                    # will not be load balanced until the session ends.<br>
>>                                    #<br>
>>                                    # 'dml_adaptive':<br>
>>                                    # Queries on the tables that have already been<br>
>>                                    # modified within the current explicit transaction will<br>
>>                                    # not be load balanced until the end of the transaction.<br>
>>                                    #<br>
>>                                    # 'always':<br>
>>                                    # if a write query is issued, read queries will<br>
>>                                    # not be load balanced until the session ends.<br>
>>                                    #<br>
>>                                    # Note that any query not in an explicit transaction<br>
>>                                    # is not affected by the parameter.<br>
>> <br>
>> dml_adaptive_object_relationship_list= ''<br>
>>                                    # comma separated list of object pairs<br>
>>                                    # [object]:[dependent-object], to disable load balancing<br>
>>                                    # of dependent objects within the explicit transaction<br>
>>                                    # after WRITE statement is issued on (depending-on) object.<br>
>>                                    #<br>
>>                                    # example: 'tb_t1:tb_t2,insert_tb_f_func():tb_f,tb_v:my_view'<br>
>>                                    # Note: function name in this list must also be present in<br>
>>                                    # the write_function_list<br>
>>                                    # only valid for disable_load_balance_on_write = 'dml_adaptive'.<br>
>> <br>
>> statement_level_load_balance = off<br>
>>                                    # Enables statement level load balancing<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # NATIVE REPLICATION MODE<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> # - Streaming -<br>
>> <br>
>> sr_check_period = 10<br>
>>                                    # Streaming replication check period<br>
>>                                    # Disabled (0) by default<br>
>> sr_check_user = 'repmgr'<br>
>>                                    # Streaming replication check user<br>
>>                                    # This is necessary even if you disable streaming<br>
>>                                    # replication delay check by sr_check_period = 0<br>
>> sr_check_password = '###################'<br>
>>                                    # Password for streaming replication check user<br>
>>                                    # Leaving it empty will make Pgpool-II to first look for the<br>
>>                                    # Password in pool_passwd file before using the empty password<br>
>> <br>
>> sr_check_database = 'repmgr'<br>
>>                                    # Database name for streaming replication check<br>
>> delay_threshold = 10000000<br>
>>                                    # Threshold before not dispatching query to standby node<br>
>>                                    # Unit is in bytes<br>
>>                                    # Disabled (0) by default<br>
>> <br>
>> # - Special commands -<br>
>> <br>
>> follow_primary_command = ''<br>
>>                                    # Executes this command after main node failover<br>
>>                                    # Special values:<br>
>>                                    # %d = failed node id<br>
>>                                    # %h = failed node host name<br>
>>                                    # %p = failed node port number<br>
>>                                    # %D = failed node database cluster path<br>
>>                                    # %m = new main node id<br>
>>                                    # %H = new main node hostname<br>
>>                                    # %M = old main node id<br>
>>                                    # %P = old primary node id<br>
>>                                    # %r = new main port number<br>
>>                                    # %R = new main database cluster path<br>
>>                                    # %N = old primary node hostname<br>
>>                                    # %S = old primary node port number<br>
>>                                    # %% = '%' character<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # HEALTH CHECK GLOBAL PARAMETERS<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> health_check_period = 5<br>
>>                                    # Health check period<br>
>>                                    # Disabled (0) by default<br>
>> health_check_timeout = 20<br>
>>                                    # Health check timeout<br>
>>                                    # 0 means no timeout<br>
>> health_check_user = 'pgpool'<br>
>>                                    # Health check user<br>
>> health_check_password = '#############################'<br>
>>                                    # Password for health check user<br>
>>                                    # Leaving it empty will make Pgpool-II to first look for the<br>
>>                                    # Password in pool_passwd file before using the empty password<br>
>> <br>
>> health_check_database = 'postgres'<br>
>>                                    # Database name for health check. If '', tries 'postgres' first, then 'template1'.<br>
>> health_check_max_retries = 3<br>
>>                                    # Maximum number of times to retry a failed health check before giving up.<br>
>> health_check_retry_delay = 1<br>
>>                                    # Amount of time to wait (in seconds) between retries.<br>
>> connect_timeout = 10000<br>
>>                                    # Timeout value in milliseconds before giving up to connect to backend.<br>
>>                                    # Default is 10000 ms (10 second). Flaky network user may want to increase<br>
>>                                    # the value. 0 means no timeout.<br>
>>                                    # Note that this value is not only used for health check,<br>
>>                                    # but also for ordinary connection to backend.<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)<br>
>> #------------------------------------------------------------------------------<br>
>> #health_check_period0 = 0<br>
>> #health_check_timeout0 = 20<br>
>> #health_check_user0 = 'nobody'<br>
>> #health_check_password0 = ''<br>
>> #health_check_database0 = ''<br>
>> #health_check_max_retries0 = 0<br>
>> #health_check_retry_delay0 = 1<br>
>> #connect_timeout0 = 10000<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # FAILOVER AND FAILBACK<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> #failover_command = '/opt/pgpool/scripts/failover.sh %d %h %p %D %m %H %M %P %r %R'<br>
>> failover_command = '/etc/pgpool-II/failover.sh %d %H %h %p %D %m %M %P %r %R %N %S'<br>
>>                                    # Executes this command at failover<br>
>>                                    # Special values:<br>
>>                                    # %d = failed node id<br>
>>                                    # %h = failed node host name<br>
>>                                    # %p = failed node port number<br>
>>                                    # %D = failed node database cluster path<br>
>>                                    # %m = new main node id<br>
>>                                    # %H = new main node hostname<br>
>>                                    # %M = old main node id<br>
>>                                    # %P = old primary node id<br>
>>                                    # %r = new main port number<br>
>>                                    # %R = new main database cluster path<br>
>>                                    # %N = old primary node hostname<br>
>>                                    # %S = old primary node port number<br>
>>                                    # %% = '%' character<br>
>> failback_command = ''<br>
>>                                    # Executes this command at failback.<br>
>>                                    # Special values:<br>
>>                                    # %d = failed node id<br>
>>                                    # %h = failed node host name<br>
>>                                    # %p = failed node port number<br>
>>                                    # %D = failed node database cluster path<br>
>>                                    # %m = new main node id<br>
>>                                    # %H = new main node hostname<br>
>>                                    # %M = old main node id<br>
>>                                    # %P = old primary node id<br>
>>                                    # %r = new main port number<br>
>>                                    # %R = new main database cluster path<br>
>>                                    # %N = old primary node hostname<br>
>>                                    # %S = old primary node port number<br>
>>                                    # %% = '%' character<br>
>> <br>
>> failover_on_backend_error = on<br>
>>                                    # Initiates failover when reading/writing to the<br>
>>                                    # backend communication socket fails<br>
>>                                    # If set to off, pgpool will report an<br>
>>                                    # error and disconnect the session.<br>
>> <br>
>> detach_false_primary = on<br>
>>                                    # Detach false primary if on. Only<br>
>>                                    # valid in streaming replication<br>
>>                                    # mode and with PostgreSQL 9.6 or<br>
>>                                    # after.<br>
>> <br>
>> search_primary_node_timeout = 5min<br>
>>                                    # Timeout in seconds to search for the<br>
>>                                    # primary node when a failover occurs.<br>
>>                                    # 0 means no timeout, keep searching<br>
>>                                    # for a primary node forever.<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # ONLINE RECOVERY<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> recovery_user = 'nobody'<br>
>>                                    # Online recovery user<br>
>> recovery_password = ''<br>
>>                                    # Online recovery password<br>
>>                                    # Leaving it empty will make Pgpool-II to first look for the<br>
>>                                    # Password in pool_passwd file before using the empty password<br>
>> <br>
>> recovery_1st_stage_command = ''<br>
>>                                    # Executes a command in first stage<br>
>> recovery_2nd_stage_command = ''<br>
>>                                    # Executes a command in second stage<br>
>> recovery_timeout = 90<br>
>>                                    # Timeout in seconds to wait for the<br>
>>                                    # recovering node's postmaster to start up<br>
>>                                    # 0 means no wait<br>
>> client_idle_limit_in_recovery = 0<br>
>>                                    # Client is disconnected after being idle<br>
>>                                    # for that many seconds in the second stage<br>
>>                                    # of online recovery<br>
>>                                    # 0 means no disconnection<br>
>>                                    # -1 means immediate disconnection<br>
>> <br>
>> auto_failback = on<br>
>>                                    # Detached backend nodes reattach automatically<br>
>>                                    # if replication_state is 'streaming'.<br>
>> auto_failback_interval = 1min<br>
>>                                    # Min interval of executing auto_failback in<br>
>>                                    # seconds.<br>
>> <br>
>> #------------------------------------------------------------------------------<br>
>> # WATCHDOG<br>
>> #------------------------------------------------------------------------------<br>
>> <br>
>> # - Enabling -<br>
>> <br>
>> use_watchdog = on<br>
>>                                     # Activates watchdog<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # -Connection to up stream servers -<br>
>> <br>
>> trusted_servers = ''<br>
>>                                     # trusted server list which are used<br>
>>                                     # to confirm network connection<br>
>>                                     # (hostA,hostB,hostC,...)<br>
>>                                     # (change requires restart)<br>
>> ping_path = '/bin'<br>
>>                                     # ping command path<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # - Watchdog communication Settings -<br>
>> <br>
>> hostname0 = '192.168.40.66'<br>
>>                                     # Host name or IP address of pgpool node<br>
>>                                     # for watchdog connection<br>
>>                                     # (change requires restart)<br>
>> wd_port0 = 9000<br>
>>                                     # Port number for watchdog service<br>
>>                                     # (change requires restart)<br>
>> pgpool_port0 = 9999<br>
>>                                     # Port number for pgpool<br>
>>                                     # (change requires restart)<br>
>> <br>
>> <br>
>> hostname1 = '192.168.40.67'<br>
>> wd_port1 = 9000<br>
>> pgpool_port1 = 9999<br>
>> <br>
>> hostname2 = '192.168.40.64'<br>
>> wd_port2 = 9000<br>
>> pgpool_port2 = 9999<br>
>> <br>
>> <br>
>> wd_priority = 90<br>
>>                                     # priority of this watchdog in leader election<br>
>>                                     # (change requires restart)<br>
>> <br>
>> wd_authkey = '###################################'<br>
>>                                     # Authentication key for watchdog communication<br>
>>                                     # (change requires restart)<br>
>> <br>
>> wd_ipc_socket_dir = '/tmp'<br>
>>                                     # Unix domain socket path for watchdog IPC socket<br>
>>                                     # The Debian package defaults to<br>
>>                                     # /var/run/postgresql<br>
>>                                     # (change requires restart)<br>
>> <br>
>> <br>
>> # - Virtual IP control Setting -<br>
>> <br>
>> delegate_IP = '192.168.40.70'<br>
>>                                     # delegate IP address<br>
>>                                     # If this is empty, the virtual IP is never brought up.<br>
>>                                     # (change requires restart)<br>
>> if_cmd_path = '/sbin'<br>
>>                                     # path to the directory where if_up/down_cmd exists<br>
>>                                     # If if_up/down_cmd starts with "/", if_cmd_path will be ignored.<br>
>>                                     # (change requires restart)<br>
>> if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'<br>
>>                                     # startup delegate IP command<br>
>>                                     # (change requires restart)<br>
>> if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth0'<br>
>>                                     # shutdown delegate IP command<br>
>>                                     # (change requires restart)<br>
>> arping_path = '/usr/sbin'<br>
>>                                     # arping command path<br>
>>                                     # If arping_cmd starts with "/", if_cmd_path will be ignored.<br>
>>                                     # (change requires restart)<br>
>> arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth0'<br>
>>                                     # arping command<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # - Behavior on escalation Setting -<br>
>> <br>
>> clear_memqcache_on_escalation = on<br>
>>                                     # Clear all the query cache on shared memory<br>
>>                                     # when standby pgpool escalates to active pgpool<br>
>>                                     # (= virtual IP holder).<br>
>>                                     # This should be off if client connects to pgpool<br>
>>                                     # not using virtual IP.<br>
>>                                     # (change requires restart)<br>
>> wd_escalation_command = ''<br>
>>                                     # Executes this command at escalation on new active pgpool.<br>
>>                                     # (change requires restart)<br>
>> wd_de_escalation_command = ''<br>
>>                                     # Executes this command when leader pgpool resigns from being leader.<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # - Watchdog consensus settings for failover -<br>
>> <br>
>> failover_when_quorum_exists = on<br>
>>                                     # Only perform backend node failover<br>
>>                                     # when the watchdog cluster holds the quorum<br>
>>                                     # (change requires restart)<br>
>> <br>
>> failover_require_consensus = on<br>
>>                                     # Perform failover when majority of Pgpool-II nodes<br>
>>                                     # agrees on the backend node status change<br>
>>                                     # (change requires restart)<br>
>> <br>
>> allow_multiple_failover_requests_from_node = off<br>
>>                                     # A Pgpool-II node can cast multiple votes<br>
>>                                     # for building the consensus on failover<br>
>>                                     # (change requires restart)<br>
>> <br>
>> <br>
>> enable_consensus_with_half_votes = off<br>
>>                                     # apply majority rule for consensus and quorum computation<br>
>>                                     # at 50% of votes in a cluster with even number of nodes.<br>
>>                                     # when enabled the existence of quorum and consensus<br>
>>                                     # on failover is resolved after receiving half of the<br>
>>                                     # total votes in the cluster, otherwise both these<br>
>>                                     # decisions require at least one more vote than<br>
>>                                     # half of the total votes.<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # - Lifecheck Setting -<br>
>> <br>
>> # -- common --<br>
>> <br>
>> wd_monitoring_interfaces_list = 'any' # Comma separated list of interfaces names to monitor.<br>
>>                                     # if any interface from the list is active the watchdog will<br>
>>                                     # consider the network is fine<br>
>>                                     # 'any' to enable monitoring on all interfaces except loopback<br>
>>                                     # '' to disable monitoring<br>
>>                                     # (change requires restart)<br>
>> <br>
>> wd_lifecheck_method = 'heartbeat'<br>
>>                                     # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')<br>
>>                                     # (change requires restart)<br>
>> wd_interval = 10<br>
>>                                     # lifecheck interval (sec) > 0<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # -- heartbeat mode --<br>
>> <br>
>> heartbeat_hostname0 = '192.168.40.66'<br>
>>                                     # Host name or IP address used<br>
>>                                     # for sending heartbeat signal.<br>
>>                                     # (change requires restart)<br>
>> heartbeat_port0 = 9694<br>
>>                                     # Port number used for receiving/sending heartbeat signal<br>
>>                                     # Usually this is the same as heartbeat_portX.<br>
>>                                     # (change requires restart)<br>
>> heartbeat_device0 = 'eth0'<br>
>>                                     # Name of NIC device (such like 'eth0')<br>
>>                                     # used for sending/receiving heartbeat<br>
>>                                     # signal to/from destination 0.<br>
>>                                     # This works only when this is not empty<br>
>>                                     # and pgpool has root privilege.<br>
>>                                     # (change requires restart)<br>
>> <br>
>> heartbeat_hostname1 = '192.168.40.67'<br>
>> heartbeat_port1 = 9694<br>
>> heartbeat_device1 = 'eth0'<br>
>> <br>
>> heartbeat_hostname2 = '192.168.40.64'<br>
>> heartbeat_port2 = 9694<br>
>> heartbeat_device2 = 'eth0'<br>
>> <br>
>> wd_heartbeat_keepalive = 2<br>
>>                                     # Interval time of sending heartbeat signal (sec)<br>
>>                                     # (change requires restart)<br>
>> wd_heartbeat_deadtime = 30<br>
>>                                     # Deadtime interval for heartbeat signal (sec)<br>
>>                                     # (change requires restart)<br>
>> <br>
>> # -- query mode --<br>
>> <br>
>> wd_life_point = 3<br>
>>                                     # lifecheck retry times<br>
>>                                     # (change requires restart)<br>
>> wd_lifecheck_query = 'SELECT 1'<br>
>>                                     # lifecheck query to pgpool from watchdog<br>
>>                                     # (change requires restart)<br>
>> wd_lifecheck_dbname = 'template1'<br>
>>                                     # Database name connected for lifecheck<br>
>>                                     # (change requires restart)<br>
>> wd_lifecheck_user = 'nobody'<br>
>>                                     # watchdog user monitoring pgpools in lifecheck<br>
>>                                     # (change requires restart)<br>
>> wd_lifecheck_password = ''<br>
>>                                     # Password for watchdog user in lifecheck<br>
>>                                     # Leaving it empty makes Pgpool-II first look for the<br>
>>                                     # password in the pool_passwd file before using the empty password.<br>
>>                                     # (change requires restart)<br>
>> <br>
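>> What 'query' mode boils down to can be reproduced by hand: connect to each pgpool and run the lifecheck query. A sketch using the settings above (assuming psycopg2 is installed and the 'nobody' user is allowed to connect):<br>
>> <br>
>> import psycopg2<br>
>> <br>
>> # Run the configured lifecheck query against one pgpool node by hand.<br>
>> conn = psycopg2.connect(host='192.168.40.66', port=9999,<br>
>>                         dbname='template1', user='nobody')<br>
>> cur = conn.cursor()<br>
>> cur.execute('SELECT 1')   # wd_lifecheck_query<br>
>> print(cur.fetchone())     # (1,) means the node answered<br>
>> conn.close()<br>
>> <br>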
>> #------------------------------------------------------------------------------<br>
>> # OTHERS<br>
>> #------------------------------------------------------------------------------<br>
>> relcache_expire = 0<br>
>>                                    # Life time of the relation cache in seconds.<br>
>>                                    # 0 means no cache expiration (the default).<br>
>>                                    # The relation cache is used to cache the results<br>
>>                                    # of queries against the PostgreSQL system<br>
>>                                    # catalog that obtain various information,<br>
>>                                    # including table structures and whether a<br>
>>                                    # table is temporary. The cache is maintained<br>
>>                                    # in the local memory of each pgpool child<br>
>>                                    # process and kept as long as the process survives.<br>
>>                                    # If someone modifies a table with ALTER TABLE<br>
>>                                    # or the like, the relcache becomes inconsistent.<br>
>>                                    # For this case, relcache_expire controls<br>
>>                                    # the life time of the cache.<br>
>> relcache_size = 256<br>
>>                                    # Number of relation cache entries. If you frequently see:<br>
>>                                    # "pool_search_relcache: cache replacement happend"<br>
>>                                    # in the pgpool log, you might want to increase this number.<br>
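>> <br>
>> For illustration, the kind of catalog lookup such a cache would memoize (an example query only, not necessarily the exact one pgpool issues) checks a relation's persistence in pg_class:<br>
>> <br>
>> import psycopg2<br>
>> <br>
>> # 'my_table' is a placeholder name; relpersistence is 'p' (permanent),<br>
>> # 'u' (unlogged) or 't' (temporary).<br>
>> conn = psycopg2.connect(host='192.168.40.66', port=9999,<br>
>>                         dbname='template1', user='nobody')<br>
>> cur = conn.cursor()<br>
>> cur.execute("SELECT relpersistence FROM pg_class WHERE relname = %s",<br>
>>             ('my_table',))<br>
>> print(cur.fetchone())<br>
>> conn.close()<br>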
>> <br>
>> check_temp_table = catalog<br>
>>                                    # Temporary table check method. catalog, trace or none.<br>
>>                                    # Default is catalog.<br>
>> <br>
>> check_unlogged_table = on<br>
>>                                    # If on, enable unlogged table check in SELECT statements.<br>
>>                                    # This initiates queries against the system catalog of the primary/main<br>
>>                                    # node and thus increases the load on the primary.<br>
>>                                    # If you are absolutely sure that your system never uses unlogged tables<br>
>>                                    # and you want to save access to primary/main, you could turn this off.<br>
>>                                    # Default is on.<br>
>> enable_shared_relcache = on<br>
>>                                    # If on, the relation cache is stored in the memory cache<br>
>>                                    # and shared among child processes.<br>
>>                                    # Default is on.<br>
>>                                    # (change requires restart)<br>
>> <br>
>> relcache_query_target = primary    # Target node for relcache queries. Default is the primary node.<br>
>>                                    # If set to load_balance_node, queries are sent to the load-balance node.<br>
>> #------------------------------------------------------------------------------<br>
>> # IN MEMORY QUERY MEMORY CACHE<br>
>> #------------------------------------------------------------------------------<br>
>> memory_cache_enabled = off<br>
>>                                    # If on, use the memory cache functionality, off by default<br>
>>                                    # (change requires restart)<br>
>> memqcache_method = 'shmem'<br>
>>                                    # Cache storage method: either 'shmem' (shared memory) or<br>
>>                                    # 'memcached'. 'shmem' by default.<br>
>>                                    # (change requires restart)<br>
>> memqcache_memcached_host = 'localhost'<br>
>>                                    # Memcached host name or IP address. Mandatory if<br>
>>                                    # memqcache_method = 'memcached'.<br>
>>                                    # Defaults to localhost.<br>
>>                                    # (change requires restart)<br>
>> memqcache_memcached_port = 11211<br>
>>                                    # Memcached port number. Mandatory if memqcache_method = 'memcached'.<br>
>>                                    # Defaults to 11211.<br>
>>                                    # (change requires restart)<br>
>> memqcache_total_size = 64MB<br>
>>                                    # Total memory size in bytes for storing memory cache.<br>
>>                                    # Mandatory if memqcache_method = 'shmem'.<br>
>>                                    # Defaults to 64MB.<br>
>>                                    # (change requires restart)<br>
>> memqcache_max_num_cache = 1000000<br>
>>                                    # Total number of cache entries. Mandatory<br>
>>                                    # if memqcache_method = 'shmem'.<br>
>>                                    # Each cache entry consumes 48 bytes on shared memory.<br>
>>                                    # Defaults to 1,000,000 (45.8MB).<br>
>>                                    # (change requires restart)<br>
>> memqcache_expire = 0<br>
>>                                    # Memory cache entry life time specified in seconds.<br>
>>                                    # 0 means infinite life time. 0 by default.<br>
>>                                    # (change requires restart)<br>
>> memqcache_auto_cache_invalidation = on<br>
>>                                    # If on, invalidation of query cache is triggered by corresponding<br>
>>                                    # DDL/DML/DCL(and memqcache_expire). If off, it is only triggered<br>
>>                                    # by memqcache_expire. on by default.<br>
>>                                    # (change requires restart)<br>
>> memqcache_maxcache = 400kB<br>
>>                                    # Maximum SELECT result size in bytes.<br>
>>                                    # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.<br>
>>                                    # (change requires restart)<br>
>> memqcache_cache_block_size = 1MB<br>
>>                                    # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.<br>
>>                                    # Defaults to 1MB.<br>
>>                                    # (change requires restart)<br>
>> memqcache_oiddir = '/var/log/pgpool/oiddir'<br>
>>                                    # Temporary work directory to record table oids<br>
>>                                    # (change requires restart)<br>
>> cache_safe_memqcache_table_list = ''<br>
>>                                    # Comma separated list of table names whose SELECT<br>
>>                                    # results are safe to cache because the tables are<br>
>>                                    # never written to. Regexps are accepted.<br>
>> cache_unsafe_memqcache_table_list = ''<br>
>>                                    # Comma separated list of table names whose SELECT<br>
>>                                    # results must not be cached. Regexps are accepted.<br>
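>> <br>
>> If memory_cache_enabled is turned on, cache behaviour can be inspected through pgpool itself; a sketch (assuming this pgpool version supports the SHOW POOL_CACHE command):<br>
>> <br>
>> import psycopg2<br>
>> <br>
>> # SHOW POOL_CACHE is answered by pgpool itself, not PostgreSQL, and<br>
>> # reports query cache statistics such as the cache hit ratio.<br>
>> conn = psycopg2.connect(host='192.168.40.66', port=9999,<br>
>>                         dbname='template1', user='nobody')<br>
>> cur = conn.cursor()<br>
>> cur.execute('SHOW POOL_CACHE')<br>
>> print(cur.fetchall())<br>
>> conn.close()<br>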
>> <br>
>> Error output:<br>
>> <br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  setting the local watchdog node name to "<a href="http://192.168.40.67:9999" rel="noreferrer" target="_blank">192.168.40.67:9999</a> Linux SVD-SLB02"<br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  watchdog cluster is configured with 2 remote nodes<br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  watchdog remote node:0 on <a href="http://192.168.40.66:9000" rel="noreferrer" target="_blank">192.168.40.66:9000</a><br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  watchdog remote node:1 on <a href="http://192.168.40.64:9000" rel="noreferrer" target="_blank">192.168.40.64:9000</a><br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  ensure availibility on any interface<br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  watchdog node state changed from [DEAD] to [LOADING]<br>
>> Dec 17 18:39:49 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:49: pid 1332675: LOG:  new outbound connection to <a href="http://192.168.40.64:9000" rel="noreferrer" target="_blank">192.168.40.64:9000</a><br>
>> Dec 17 18:39:50 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:50: pid 1332675: LOG:  new watchdog node connection is received from "<a href="http://192.168.40.66:62151" rel="noreferrer" target="_blank">192.168.40.66:62151</a>"<br>
>> Dec 17 18:39:50 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:50: pid 1332675: LOG:  new node joined the cluster hostname:"192.168.40.66" port:9000 pgpool_port:9999<br>
>> Dec 17 18:39:50 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:50: pid 1332675: DETAIL:  Pgpool-II version:"4.2.0" watchdog messaging version: 1.2<br>
>> Dec 17 18:39:53 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:53: pid 1332675: LOG:  watchdog node state changed from [LOADING] to [INITIALIZING]<br>
>> Dec 17 18:39:54 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:54: pid 1332675: LOG:  watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]<br>
>> Dec 17 18:39:54 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:54: pid 1332675: LOG:  watchdog node state changed from [STANDING FOR LEADER] to [PARTICIPATING IN ELECTION]<br>
>> Dec 17 18:39:54 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:54: pid 1332675: LOG:  watchdog node state changed from [PARTICIPATING IN ELECTION] to [INITIALIZING]<br>
>> Dec 17 18:39:54 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:54: pid 1332675: LOG:  setting the remote node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" as watchdog cluster leader<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  watchdog node state changed from [INITIALIZING] to [STANDBY]<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  successfully joined the watchdog cluster as standby node<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: DETAIL:  our join coordinator request is accepted by cluster leader node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01"<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  received the get data request from local pgpool-II on IPC interface<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: LOG:  get data request from local pgpool-II node received on IPC interface is forwarded to leader watchdog node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01"<br>
>> Dec 17 18:39:55 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:55: pid 1332675: DETAIL:  waiting for the reply...<br>
>> Dec 17 18:39:59 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:59: pid 1332675: LOG:  new watchdog node connection is received from "<a href="http://192.168.40.64:15019" rel="noreferrer" target="_blank">192.168.40.64:15019</a>"<br>
>> Dec 17 18:39:59 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:59: pid 1332675: LOG:  new node joined the cluster hostname:"192.168.40.64" port:9000 pgpool_port:9999<br>
>> Dec 17 18:39:59 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:39:59: pid 1332675: DETAIL:  Pgpool-II version:"4.2.0" watchdog messaging version: 1.2<br>
>> Dec 17 18:40:00 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:00: pid 1332675: LOG:  new outbound connection to <a href="http://192.168.40.66:9000" rel="noreferrer" target="_blank">192.168.40.66:9000</a><br>
>> Dec 17 18:40:44 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:44: pid 1332675: LOG:  remote node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" is shutting down<br>
>> Dec 17 18:40:44 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:44: pid 1332675: LOG:  watchdog cluster has lost the coordinator node<br>
>> Dec 17 18:40:44 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:44: pid 1332675: LOG:  removing the remote node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" from watchdog cluster leader<br>
>> Dec 17 18:40:44 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:44: pid 1332675: LOG:  We have lost the cluster leader node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01"<br>
>> Dec 17 18:40:44 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:44: pid 1332675: LOG:  watchdog node state changed from [STANDBY] to [JOINING]<br>
>> Dec 17 18:40:44 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:44: pid 1332675: LOG:  watchdog node state changed from [JOINING] to [INITIALIZING]<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  watchdog node state changed from [INITIALIZING] to [STANDING FOR LEADER]<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  watchdog node state changed from [STANDING FOR LEADER] to [LEADER]<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  I am announcing my self as leader/coordinator watchdog node<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  I am the cluster leader node<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: DETAIL:  our declare coordinator message is accepted by all nodes<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  setting the local node "<a href="http://192.168.40.67:9999" rel="noreferrer" target="_blank">192.168.40.67:9999</a> Linux SVD-SLB02" as watchdog cluster leader<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  I am the cluster leader node but we do not have enough nodes in cluster<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: DETAIL:  waiting for the quorum to start escalation process<br>
>> Dec 17 18:40:45 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:45: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:40:46 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:46: pid 1332675: LOG:  adding watchdog node "<a href="http://192.168.40.64:9999" rel="noreferrer" target="_blank">192.168.40.64:9999</a> Linux SVD-WEB01" to the standby list<br>
>> Dec 17 18:40:46 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:46: pid 1332675: LOG:  quorum found<br>
>> Dec 17 18:40:46 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:46: pid 1332675: DETAIL:  starting escalation process<br>
>> Dec 17 18:40:46 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:46: pid 1332675: LOG:  escalation process started with PID:1332782<br>
>> Dec 17 18:40:46 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:46: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:40:46 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:46: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:40:50 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:40:50: pid 1332675: LOG:  watchdog escalation process with pid: 1332782 exit with SUCCESS.<br>
>> Dec 17 18:41:03 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:41:03: pid 1332675: LOG:  new watchdog node connection is received from "<a href="http://192.168.40.66:55496" rel="noreferrer" target="_blank">192.168.40.66:55496</a>"<br>
>> Dec 17 18:41:03 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:41:03: pid 1332675: LOG:  new node joined the cluster hostname:"192.168.40.66" port:9000 pgpool_port:9999<br>
>> Dec 17 18:41:03 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:41:03: pid 1332675: DETAIL:  Pgpool-II version:"4.2.0" watchdog messaging version: 1.2<br>
>> Dec 17 18:41:03 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:41:03: pid 1332675: LOG:  The newly joined node:"<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" had left the cluster because it was shutdown<br>
>> Dec 17 18:41:03 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:41:03: pid 1332675: LOG:  new outbound connection to <a href="http://192.168.40.66:9000" rel="noreferrer" target="_blank">192.168.40.66:9000</a><br>
>> Dec 17 18:41:04 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:41:04: pid 1332675: LOG:  adding watchdog node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" to the standby list<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  read from socket failed, remote end closed the connection<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  client socket of <a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01 is closed<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  read from socket failed, remote end closed the connection<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  outbound socket of <a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01 is closed<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  remote node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" is not reachable<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: DETAIL:  marking the node as lost<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  remote node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" is lost<br>
>> Dec 17 18:42:51 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:42:51: pid 1332675: LOG:  removing watchdog node "<a href="http://192.168.40.66:9999" rel="noreferrer" target="_blank">192.168.40.66:9999</a> Linux SVD-SLB01" from the standby list<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  new IPC connection received<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  read from socket failed, remote end closed the connection<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  client socket of <a href="http://192.168.40.64:9999" rel="noreferrer" target="_blank">192.168.40.64:9999</a> Linux SVD-WEB01 is closed<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  remote node "<a href="http://192.168.40.64:9999" rel="noreferrer" target="_blank">192.168.40.64:9999</a> Linux SVD-WEB01" is shutting down<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  removing watchdog node "<a href="http://192.168.40.64:9999" rel="noreferrer" target="_blank">192.168.40.64:9999</a> Linux SVD-WEB01" from the standby list<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  We have lost the quorum<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  received node status change ipc message<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: DETAIL:  No heartbeat signal from node<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: WARNING:  watchdog life-check reported, we are disconnected from the network<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: DETAIL:  changing the state to LOST<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  watchdog node state changed from [LEADER] to [LOST]<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: FATAL:  system has lost the network<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332675: LOG:  Watchdog is shutting down<br>
>> Dec 17 18:43:25 SVD-SLB02 pgpool[1332673]: 2020-12-17 18:43:25: pid 1332673: LOG:  watchdog child process with pid: 1332675 exits with status 768<br>
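>> <br>
>> The watchdog state transitions buried in output like the above tell most of the story; a small sketch to pull them out of a saved log (assuming the journal was captured to a file, here hypothetically named pgpool.log, in the syslog format shown above):<br>
>> <br>
>> import re<br>
>> <br>
>> # Match lines like: "watchdog node state changed from [LEADER] to [LOST]"<br>
>> pattern = re.compile(r'watchdog node state changed from \[([A-Z ]+)\] to \[([A-Z ]+)\]')<br>
>> with open('pgpool.log') as f:<br>
>>     for line in f:<br>
>>         m = pattern.search(line)<br>
>>         if m:<br>
>>             print(m.group(1), '->', m.group(2))<br>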
>> <br>
>> Does anyone have any suggestions as to what this could be?<br>
>> <br>
>> Note: if I play around with the weights I can get the other node to hold the VIP, but it still shuts down when node 0 is shut down.<br>
>> <br>
>> It does not shut down when any of the other nodes are shut down, only when node 0 is.<br>
>> <br>
>> Thanks,<br>
>> <br>
>> Joe<br>
>> <br>
</blockquote></div></div>