[pgpool-general: 186] Re: Getting a better understanding of loadbalancing

Tatsuo Ishii ishii at postgresql.org
Fri Jan 20 17:57:00 JST 2012


Let me answer the question that I can answer...

> Hello,
>  First of all, thanks for creating such an awesome product, which is much
> needed. Also, I am very new to pgpool (I started working with it about a
> day ago), so please do excuse my ignorance.
> 
> Let me first describe our environment and how we wish to use pgpool2.
> 
> We have a master-slave PostgreSQL 9.1 setup (using the streaming replication
> that comes with PostgreSQL). We want to run pgpool2 (without the admin tool)
> on each webserver. The application on each webserver will connect locally to
> pgpool2, which will be used for connection pooling and load balancing (reads)
> only.
> 
> Now, I was successful in setting up pgpool2 (3.1.1) on Ubuntu 10.04.3 LTS
> and ran some tests. The pgpool.conf is at the end of this mail. Somehow I
> haven't been able to get a real grasp of the way pgpool does load balancing,
> so that I can extract the best from the setup. These are some of the tests I
> ran (using pgbench):
> 
> Test 1: To test pgpool connection pooling and load balancing.
> 
> Run against master db server directly:  pgbench -h <masterserver-IP>  -p
> 5432 -U postgres -T 30 -j 4 -S -c 12 pgbench. For this I get the results:
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 12
> number of threads: 4
> duration: 30 s
> number of transactions actually processed: 476867
> tps = 15894.130897 (including connections establishing)
> tps = 15935.429409 (excluding connections establishing)
> 
> Now I run it through pgpool: pgbench -h 127.0.0.1 -p 5432 -U postgres -T 30
> -j 4 -S -c 12 pgbench. For this I get the results:
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 12
> number of threads: 4
> duration: 30 s
> number of transactions actually processed: 381812
> tps = 12726.511367 (including connections establishing)
> tps = 12740.921357 (excluding connections establishing)
> 
> So the TPS for SELECT-only has definitely dropped quite a bit. If I enable
> "log_per_node_statement = true", I can see that load balancing is working,
> as demonstrated here:
> 
> Jan 18 10:41:23 localhost pgpool[4686]: DB node id: 0 backend pid: 12227
> statement: SELECT abalance FROM pgbench_accounts WHERE aid = 989975;
> Jan 18 10:41:23 localhost pgpool[4684]: DB node id: 1 backend pid: 20430
> statement: SELECT abalance FROM pgbench_accounts WHERE aid = 2896056;
> Jan 18 10:41:23 localhost pgpool[4679]: DB node id: 0 backend pid: 12222
> statement: SELECT abalance FROM pgbench_accounts WHERE aid = 721763;
> Jan 18 10:41:23 localhost pgpool[4617]: DB node id: 1 backend pid: 20426
> statement: SELECT abalance FROM pgbench_accounts WHERE aid = 35721;
> 
> Question: I do understand that the load balancing is done by pgpool per
> connection rather than per transaction... so maybe that is why pgpool shows
> a lower TPS, i.e. pgpool is, in a sense, adding overhead here. Am I
> mistaken, or is there something more to this?
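
If I understand pgpool-II's behaviour correctly, the load-balance node is
chosen when a session starts, so all SELECTs issued over one pooled
connection normally go to the same backend. A quick way to see this,
reusing the log_per_node_statement setting from the config below (the aid
values here are just placeholders):

  psql -h 127.0.0.1 -p 5432 -U postgres pgbench
  pgbench=# SELECT abalance FROM pgbench_accounts WHERE aid = 1;
  pgbench=# SELECT abalance FROM pgbench_accounts WHERE aid = 2;

Both statements should be logged with the same "DB node id"; a new psql
session may be assigned the other node.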
> 
> Test 2: Connects.
> 
> OK, since pgpool2 is load balancing per connection, I tried the -C option
> of pgbench (which establishes a new connection for each transaction).
> 
> Direct connection to master: pgbench -h <master_ip>  -p 5432 -U postgres -t
> 20 -S -C -c 10 pgbench
> The max_connections on the master and the slave is set to 200 each, so the
> above parameters are set up to accommodate that.
> 
> tps = 41.827791 (including connections establishing)
> tps = 3048.315806 (excluding connections establishing)
> 
> Now running via pgpool: pgbench -h  127.0.0.1  -p 5432 -U postgres -t 20 -S
> -C -c 10 pgbench
> 
> tps = 139.176424 (including connections establishing)
> tps = 2759.381898 (excluding connections establishing)
> 
> (now that is showing some scaling).
> 
> Now let us increase the number of transactions per client (-t 30).
> pgbench -h <master_ip>  -p 5432 -U postgres -t 30 -S -C -c 10 pgbench
> 
> tps = 42.555908 (including connections establishing)
> tps = 3098.405354 (excluding connections establishing)
> 
> pgpool:
> pgbench -h 127.0.0.1  -p 5432 -U postgres -t 30 -S -C -c 10 pgbench
> tps = 162.134643 (including connections establishing)
> tps = 3500.828529 (excluding connections establishing)
> 
> (cool, still some more improvement)
> 
> OK, let's drive it to a critical state:
> pgbench -h <master_ip> -p 5432 -U postgres -t 30 -S -C -c 200 pgbench
> FATAL:  sorry, too many clients already
> That's obvious.
> 
> But with pgpool it gives the same error. I was under the impression that,
> since there are two servers for reads, the number of available connections
> would be doubled. The observation I made is that the moment connections are
> sent via pgpool, the same number of connections is created on the master as
> well as on the slave, instantaneously. So when the maximum number of
> connections was used, it actually opened up all of them on both servers and
> hence hit the same limit. Is this expected behavior, or have I messed up the
> configuration?

Yes, if you connect to pgpool with 200 connections, pgpool tries to
open 200 connections to *each* PostgreSQL server. So if max_connections
of PostgreSQL is lower than 200, you'll get the error message.
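
As a rough sizing sketch based on the config quoted below (num_init_children
= 256, max_pool = 4) and max_connections = 200 on the backends -- if I
recall the pgpool-II manual's notes on num_init_children correctly, the
guideline is:

  max_pool * num_init_children <= max_connections - superuser_reserved_connections
  4 * 256 = 1024  >  200 - 3 = 197   (assuming the default
                                      superuser_reserved_connections = 3)

In other words, pgpool as configured can ask for far more backend
connections than each PostgreSQL server allows, and because every pgpool
child connects to the master *and* the standby, both servers hit the limit
at the same time. Either raise max_connections on both backends or lower
num_init_children (and/or max_pool) so that the product stays within the
limit.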
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> ==============================================
> pgpool.conf
> ==============================================
> #------------------------------------------------------------------------------
> # CONNECTIONS
> #------------------------------------------------------------------------------
> 
> # - pgpool Connection Settings -
> listen_addresses = 'localhost'
> port = 5432
> socket_dir = '/var/run/postgresql'
> 
> # - pgpool Communication Manager Connection Settings -
> pcp_port = 9898
> pcp_socket_dir = '/var/run/postgresql'
> pcp_timeout = 10
> 
> # - Backend Connection Settings -
> backend_hostname0 = 'master_ip'
> backend_port0 = 5432
> backend_weight0 = 1
> 
> backend_hostname1 = 'slave_ip'
> backend_port1 = 5432
> backend_weight1 = 1
> 
> # - Authentication -
> enable_pool_hba = true
> authentication_timeout = 60
> 
> # - SSL Connections -
> ssl = false
> 
> #------------------------------------------------------------------------------
> # POOLS
> #------------------------------------------------------------------------------
> # - Pool size -
> num_init_children = 256
> max_pool = 4
> 
> # - Life time -
> child_life_time = 120
> child_max_connections = 0
> connection_life_time = 120
> client_idle_limit = 120
> 
> #------------------------------------------------------------------------------
> # LOGS
> #------------------------------------------------------------------------------
> # - Where to log -
> log_destination = 'syslog'
> 
> # - What to log -
> print_timestamp = true
> log_connections = false
> log_hostname = false
> log_statement = false
> log_per_node_statement = true
> log_standby_delay = 'if_over_threshold'
> 
> # - Syslog specific -
> syslog_facility = 'LOCAL0'
> syslog_ident = 'pgpool'
> 
> # - Debug -
> debug_level = 0
> 
> #------------------------------------------------------------------------------
> # FILE LOCATIONS
> #------------------------------------------------------------------------------
> pid_file_name = '/var/run/postgresql/pgpool.pid'
> logdir = '/var/log/postgresql'
> 
> #------------------------------------------------------------------------------
> # CONNECTION POOLING
> #------------------------------------------------------------------------------
> connection_cache =  true
> reset_query_list = 'ABORT; DISCARD ALL'
> 
> #------------------------------------------------------------------------------
> # REPLICATION MODE
> #------------------------------------------------------------------------------
> replication_mode = false
> replicate_select = false
> insert_lock = true
> lobj_lock_table = ''
> 
> # - Degenerate handling -
> replication_stop_on_mismatch = false
> failover_if_affected_tuples_mismatch = false
> 
> #------------------------------------------------------------------------------
> # LOAD BALANCING MODE
> #------------------------------------------------------------------------------
> load_balance_mode = true
> ignore_leading_white_space = true
> white_function_list = ''
> black_function_list = 'nextval,setval,lastval,currval'
> 
> #------------------------------------------------------------------------------
> # MASTER/SLAVE MODE
> #------------------------------------------------------------------------------
> master_slave_mode = true
> master_slave_sub_mode = 'stream'
> 
> # - Streaming -
> sr_check_period = 0
> sr_check_user = 'postgres'
> sr_check_password = ''
> delay_threshold = 10000000
> 
> # - Special commands -
> #follow_master_command = ''
>                                    #   %m = new master node id
>                                    #   %H = hostname of the new master node
>                                    #   %M = old master node id
>                                    #   %P = old primary node id
>                                    #   %% = '%' character
> 
> 
> #------------------------------------------------------------------------------
> # PARALLEL MODE AND QUERY CACHE
> #------------------------------------------------------------------------------
> parallel_mode = false
> enable_query_cache = false
> pgpool2_hostname = ''
> 
> # - System DB info -
> system_db_hostname  = 'localhost'
> system_db_port = 5432
> system_db_dbname = 'pgpool'
> system_db_schema = 'pgpool_catalog'
> system_db_user = 'pgpool'
> system_db_password = ''
> 
> #------------------------------------------------------------------------------
> # HEALTH CHECK
> #------------------------------------------------------------------------------
> health_check_period = 0
> health_check_timeout = 20
> health_check_user = 'postgres'
> health_check_password = ''
> 
> #------------------------------------------------------------------------------
> # FAILOVER AND FAILBACK
> #------------------------------------------------------------------------------
> failover_command = '%d%h%p'
> failback_command = '%d%h%p'
> fail_over_on_backend_error = false
> 
> 
> #------------------------------------------------------------------------------
> # ONLINE RECOVERY
> #------------------------------------------------------------------------------
> recovery_user = 'postgres'
> recovery_password = ''
> recovery_1st_stage_command = ''
> recovery_2nd_stage_command = ''
> recovery_timeout = 90
> client_idle_limit_in_recovery = 0
> 
> #------------------------------------------------------------------------------
> # OTHERS
> #------------------------------------------------------------------------------
> relcache_expire = 0

