[pgpool-general: 4511] Forked New PCP Worker
Burton, Chuck
Chuck.Burton at arris.com
Wed Mar  2 04:52:10 JST 2016

We are running pgpool 3.5.0.1 in our lab in a virtualized, Dockerized environment, with pgpool managing two streaming-replication Postgres nodes.
I am using pgbench to connect through pgpool:
/usr/pgsql-9.3/bin/pgbench -c 300 -r -j 5 -t 3334 -h pgpool -p 9999 dbname
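(For scale: that is 300 client connections across 5 pgbench threads, each client running 3334 transactions, so roughly 300 * 3334 = 1,000,200 transactions total through pgpool on port 9999.)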
What is happening is that I'm getting frequent disconnects of the PCP worker:
2016-03-01 19:36:48: pid 5369: LOG:  forked new pcp worker, pid=7377 socket=7
2016-03-01 19:36:48: pid 5369: LOG:  PCP process with pid: 7377 exit with SUCCESS.
2016-03-01 19:36:48: pid 5369: LOG:  PCP process with pid: 7377 exits with status 0
2016-03-01 19:36:48: pid 244: LOG:  reload config files.
2016-03-01 19:36:48: pid 5370: LOG:  reloading config file
2016-03-01 19:37:00: pid 5369: LOG:  forked new pcp worker, pid=7433 socket=7
2016-03-01 19:37:00: pid 5369: LOG:  PCP process with pid: 7433 exit with SUCCESS.
2016-03-01 19:37:00: pid 5369: LOG:  PCP process with pid: 7433 exits with status 0
2016-03-01 19:37:00: pid 244: LOG:  reload config files.
2016-03-01 19:37:03: pid 5370: LOG:  reloading config file
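If I understand the PCP architecture correctly, pgpool forks one PCP worker per PCP client connection, and that worker exits once the command completes, so a periodic monitoring probe would produce exactly this fork/exit pair on each run. A hypothetical probe like the following (the port and user are placeholders for our setup) would log one such pair per invocation:
pcp_node_count -h pgpool -p 9898 -U pcpuser -w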
Also, occasionally:
2016-03-01 19:36:18: pid 4497: FATAL:  Backend throw an error message
2016-03-01 19:36:18: pid 4497: DETAIL:  Exiting current session because of an error from backend
2016-03-01 19:36:18: pid 4497: HINT:  BACKEND Error: "function pgpool_regclass(unknown) does not exist"
This happened six times during the five-minute pgbench test.
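On the pgpool_regclass error: my understanding is that pgpool calls pgpool_regclass() on the backends to resolve table OIDs safely, so this error suggests the extension is not installed in the database pgbench is hitting. If that is the cause, something like the following on each backend might address it (the hostnames are placeholders for our two nodes):
psql -h backend0 -p 5432 -d dbname -c "CREATE EXTENSION pgpool_regclass;"
psql -h backend1 -p 5432 -d dbname -c "CREATE EXTENSION pgpool_regclass;"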
I am running with:
num_init_children = 1000
                                   # Number of pools
                                   # (change requires restart)
max_pool = 1
                                   # Number of connections per pool
                                   # (change requires restart)
listen_backlog_multiplier = 3
serialize_accept = on
child_life_time = 0
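If I have the sizing math right, these settings let pgpool open up to num_init_children * max_pool = 1000 * 1 = 1000 backend connections per Postgres node, so each node's max_connections needs headroom above 1000 even though pgbench only drives 300 clients.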
Is this a problem with the PCP worker getting overwhelmed?
Looking at the new metrics in "show pool_nodes", I see that reads are being distributed.