1,2c1,2
< Last modified: Thu Aug 11 09:10:34 JST 2011
< Welcome to the pgpool-II page
---
> Last modified: Tue May 8 16:42:49 JST 2012
> Welcome to pgpool-II page
5c5
< Platforms
---
> Supported Platforms
11a12,16
> Streaming Replication
> Parallel Mode
> Client authentication
> Setting Query cache method
> On memory query Cache
15a21,22
> Backup
> Deploying pgpool-II
23d29
<
46a53,54
>
> back to top
53a62,63
>
> back to top
69,80c79,84
<
< --prefix=path
< pgpool-II binaries and docs will be installed in this directory. Default value is /usr/local
< --with-pgsql=path
< The top directory where PostgreSQL's client libraries are installed. Default value is provided by pg_config
< --with-openssl
< pgpool-II binaries will be built with OpenSSL support. OpenSSL support is disabled by default.
< --enable-sequence-lock
< Use insert_lock compatible with pgpool-II 3.0 series (until 3.0.4). pgpool-II locks against a row in the sequence table. PostgreSQL 8.2 or later which was released after June 2011 cannot use this lock method.
< --enable-table-lock
< Use insert_lock compatible with pgpool-II 2.2 and 2.3 series. pgpool-II locks against the insert target table. This lock method is deprecated because it causes a lock conflict with VACUUM.
<
---
> --prefix=path pgpool-II binaries and docs will be installed in this directory. Default value is /usr/local
> --with-pgsql=path The top directory where PostgreSQL's client libraries are installed. Default value is provided by pg_config
> --with-openssl pgpool-II binaries will be built with OpenSSL support. OpenSSL support is disabled by default.
> --enable-sequence-lock Use insert_lock compatible with pgpool-II 3.0 series (until 3.0.4). pgpool-II locks against a row in the sequence table. PostgreSQL 8.2 or later which was released after June 2011 cannot use this lock method.
> --enable-table-lock Use insert_lock compatible with pgpool-II 2.2 and 2.3 series. pgpool-II locks against the insert target table. This lock method is deprecated because it causes a lock conflict with VACUUM.
> --with-memcached=path pgpool-II binaries will use memcached for on memory query cache. You have to install libmemcached.
87d90
<
107a111,112
>
> back to top
154d158
<
166c170,172
< Specifies the hostname or IP address on which pgpool-II will accept TCP/IP connections. '*' accepts all incoming connections. '' disables TCP/IP connections. Default is 'localhost'. Connections via UNIX domain socket are always accepted. This parameter can only be set at server start.
---
> Specifies the hostname or IP address on which pgpool-II will accept TCP/IP connections. '*' accepts all incoming connections. '' disables TCP/IP connections. Default is 'localhost'. Connections via UNIX domain socket are always accepted.
>
> This parameter can only be set at server start.
169c175,177
< The port number used by pgpool-II to listen for connections. Default is 9999. This parameter can only be set at server start.
---
> The port number used by pgpool-II to listen for connections. Default is 9999.
>
> This parameter can only be set at server start.
172c180,182
< The directory where the UNIX domain socket accepting connections for pgpool-II will be created. Default is '/tmp'. Be aware that this socket might be deleted by a cron job. We recommend setting this value to '/var/run' or a similar directory. This parameter can only be set at server start.
---
> The directory where the UNIX domain socket accepting connections for pgpool-II will be created. Default is '/tmp'. Be aware that this socket might be deleted by a cron job. We recommend setting this value to '/var/run' or a similar directory.
>
> This parameter can only be set at server start.
175c185,187
< The port number where PCP process accepts connections. Default is 9898. This parameter can only be set at server start.
---
> The port number where PCP process accepts connections. Default is 9898.
>
> This parameter can only be set at server start.
178c190,192
< The directory path of the UNIX domain socket accepting connections for the PCP process. Default is '/tmp'. Be aware that the socket might be deleted by cron. We recommend setting this value to '/var/run' or a similar directory. This parameter can only be set at server start.
---
> The directory path of the UNIX domain socket accepting connections for the PCP process. Default is '/tmp'. Be aware that the socket might be deleted by cron. We recommend setting this value to '/var/run' or a similar directory.
>
> This parameter can only be set at server start.
183c197,199
< This parameter was defining the PostgreSQL server's UNIX domain socket directory. Default is '/tmp'. This parameter can only be set at server start.
---
> This parameter was defining the PostgreSQL server's UNIX domain socket directory. Default is '/tmp'.
>
> This parameter can only be set at server start.
186c202,204
< PCP connection's timeout value in seconds. If a client does not respond within the set seconds, PCP closes the connection with the client. Default is 10 seconds. 0 means no timeout. This can be changed with a reload.
---
> PCP connection's timeout value in seconds. If a client does not respond within the set seconds, PCP closes the connection with the client. Default is 10 seconds. 0 means no timeout.
>
> This can be changed with a reload.
201d218
<
205c222,224
< A pgpool-II child process' life time in seconds. When a child is idle for that many seconds, it is terminated and a new child is created. This parameter is a measure to prevent memory leaks and other unexpected errors. Default value is 300 (5 minutes). 0 disables this feature. Note that this doesn't apply for processes that have not accepted any connection yet. You need to reload pgpool.conf if you change this value.
---
> A pgpool-II child process' life time in seconds. When a child is idle for that many seconds, it is terminated and a new child is created. This parameter is a measure to prevent memory leaks and other unexpected errors. Default value is 300 (5 minutes). 0 disables this feature. Note that this doesn't apply for processes that have not accepted any connection yet.
>
> You need to reload pgpool.conf if you change this value.
208c227,229
< A pgpool-II child process will be terminated after this many connections from clients. This parameter is useful on a server if it is so busy that child_life_time and connection_life_time are never triggered. Thus this is also useful to prevent PostgreSQL servers from getting too big. You need to reload pgpool.conf if you change this value.
---
> A pgpool-II child process will be terminated after this many connections from clients. This parameter is useful on a server if it is so busy that child_life_time and connection_life_time are never triggered. Thus this is also useful to prevent PostgreSQL servers from getting too big.
>
> You need to reload pgpool.conf if you change this value.
211c232,234
< Disconnect a client if it has been idle for client_idle_limit seconds after the last query has completed. This is useful to prevent pgpool children from being occupied by a lazy client or a broken TCP/IP connection between client and pgpool. The default value for client_idle_limit is 0, which means the feature is turned off. This parameter is ignored in the second stage of online recovery. You need to reload pgpool.conf if you change client_idle_limit.
---
> Disconnect a client if it has been idle for client_idle_limit seconds after the last query has completed. This is useful to prevent pgpool children from being occupied by a lazy client or a broken TCP/IP connection between client and pgpool. The default value for client_idle_limit is 0, which means the feature is turned off. This parameter is ignored in the second stage of online recovery.
>
> You need to reload pgpool.conf if you change client_idle_limit.
214c237,239
< Specify the timeout for pgpool authentication. 0 disables the timeout. Default value is 60. You need to restart pgpool-II if you change authentication_timeout.
---
> Specify the timeout for pgpool authentication. 0 disables the timeout. Default value is 60.
>
> You need to restart pgpool-II if you change authentication_timeout.
226d250
<
236c260,262
< Full path to a file which contains pgpool's process id. Default is "/var/run/pgpool/pgpool.pid". You need to restart pgpool-II if you change this value.
---
> Full path to a file which contains pgpool's process id. Default is "/var/run/pgpool/pgpool.pid".
>
> You need to restart pgpool-II if you change this value.
239c265,267
< Add timestamps to the logs when set to true. Default is true. You need to reload pgpool.conf if you change print_timestamp.
---
> Add timestamps to the logs when set to true. Default is true.
>
> You need to reload pgpool.conf if you change print_timestamp.
242c270,272
< Caches connections to backends when set to true. Default is true. You need to restart pgpool-II if you change this value.
---
> Caches connections to backends when set to true. Default is true.
>
> You need to restart pgpool-II if you change this value.
245c275,281
< pgpool-II periodically tries to connect to the backends to detect any error on the servers or networks. This error check procedure is called "health check". If an error is detected, pgpool-II tries to perform failover or degeneration. This parameter serves to prevent the health check from waiting for a long time in a case such as an unplugged network cable. The timeout value is in seconds. Default value is 20. 0 disables timeout (waits until TCP/IP timeout). This health check requires one extra connection to each backend, so max_connections in the postgresql.conf needs to be incremented as needed. You need to reload pgpool.conf if you change this value.
---
> pgpool-II periodically tries to connect to the backends to detect any error on the servers or networks. This error check procedure is called "health check". If an error is detected, pgpool-II tries to perform failover or degeneration.
>
> This parameter serves to prevent the health check from waiting for a long time in a case such as an unplugged network cable. The timeout value is in seconds. Default value is 20. 0 disables timeout (waits until TCP/IP timeout).
>
> This health check requires one extra connection to each backend, so max_connections in the postgresql.conf needs to be incremented as needed.
>
> You need to reload pgpool.conf if you change this value.
248c284,286
< This parameter specifies the interval between the health checks in seconds. Default is 0, which means health check is disabled. You need to reload pgpool.conf if you change health_check_period.
---
> This parameter specifies the interval between the health checks in seconds. Default is 0, which means health check is disabled.
>
> You need to reload pgpool.conf if you change health_check_period.
251c289,291
< The user name to perform health check. This user must exist in all the PostgreSQL backends. Otherwise, health check causes an error. You need to reload pgpool.conf if you change health_check_user.
---
> The user name to perform health check. This user must exist in all the PostgreSQL backends. Otherwise, health check causes an error.
>
> You need to reload pgpool.conf if you change health_check_user.
254c294,306
< The password of the user to perform health check. You need to reload pgpool.conf if you change health_check_password.
---
> The password of the user to perform health check.
>
> You need to reload pgpool.conf if you change health_check_password.
> health_check_max_retries
>
> The maximum number of times to retry a failed health check before giving up and initiating failover. This setting can be useful in spotty networks, when it is expected that health checks will fail occasionally even when the master is fine. Default is 0, which means do not retry. It is advised that you disable fail_over_on_backend_error if you want to enable health_check_max_retries.
>
> You need to reload pgpool.conf if you change health_check_max_retries.
> health_check_retry_delay
>
> The amount of time (in seconds) to sleep between failed health check retries (not used unless health_check_max_retries is > 0). If 0, then retries are immediate (no delay).
>
> You need to reload pgpool.conf if you change health_check_retry_delay.
258d309
<
276d326
<
292d341
<
309c358,360
< If true, and an error occurs when writing to the backend communication, pgpool-II will trigger the fail over procedure. This is the same behavior as of pgpool-II 2.2.x or earlier. If set to false, pgpool will report an error and disconnect the session. If you set this parameter to off, it is recommended that you turn on health checking. Please note that even if this parameter is set to off, however, pgpool will also do the fail over when connecting to a backend fails or pgpool detects the administrative shutdown of postmaster. You need to reload pgpool.conf if you change this value.
---
> If true, and an error occurs when reading/writing to the backend communication, pgpool-II will trigger the fail over procedure. If set to false, pgpool will report an error and disconnect the session. If you set this parameter to off, it is recommended that you turn on health checking. Please note that even if this parameter is set to off, however, pgpool will also do the fail over when pgpool detects the administrative shutdown of postmaster.
>
> You need to reload pgpool.conf if you change this value.
312c363,365
< pgpool-II ignores white spaces at the beginning of SQL queries while in the load balance mode. It is useful if used with APIs like DBI/DBD:Pg which adds white spaces against the user's will. You need to reload pgpool.conf if you change this value.
---
> pgpool-II ignores white spaces at the beginning of SQL queries while in the load balance mode. It is useful if used with APIs like DBI/DBD:Pg which adds white spaces against the user's will.
>
> You need to reload pgpool.conf if you change this value.
315c368,370
< Produces SQL log messages when true. This is similar to the log_statement parameter in PostgreSQL. It produces logs even if the debug option was not passed to pgpool-II at startup. You need to reload pgpool.conf if you change this value.
---
> Produces SQL log messages when true. This is similar to the log_statement parameter in PostgreSQL. It produces logs even if the debug option was not passed to pgpool-II at startup.
>
> You need to reload pgpool.conf if you change this value.
318c373,375
< Similar to log_statement, except that it prints logs for each DB node separately. It can be useful to make sure that replication is working, for example. You need to reload pgpool.conf if you change this value.
---
> Similar to log_statement, except that it prints logs for each DB node separately. It can be useful to make sure that replication is working, for example.
>
> You need to reload pgpool.conf if you change this value.
321c378,380
< If true, ps command status will show the client's hostname instead of an IP address. Also, if log_connections is enabled, hostname will be logged. You need to reload pgpool.conf if you change this value.
---
> If true, ps command status will show the client's hostname instead of an IP address. Also, if log_connections is enabled, hostname will be logged.
>
> You need to reload pgpool.conf if you change this value.
324c383,385
< If true, all incoming connections will be printed to the log. You need to reload pgpool.conf if you change this value.
---
> If true, all incoming connections will be printed to the log.
>
> You need to reload pgpool.conf if you change this value.
327c388,390
< If true, use pool_hba.conf for client authentication. See setting up pool_hba.conf for client authentication. You need to restart pgpool-II if you change this value.
---
> If true, use pool_hba.conf for client authentication. See setting up pool_hba.conf for client authentication.
>
> You need to restart pgpool-II if you change this value.
363,368c426,427
<
< ALLOW_TO_FAILOVER
< Allow to failover or detaching backend. This is the default. You cannot specify this with DISALLOW_TO_FAILOVER at the same time.
< DISALLOW_TO_FAILOVER
< Disallow to failover or detaching backend. This is useful when you protect a backend by using HA (High Availability) software such as Heartbeat or Pacemaker. You cannot specify this with ALLOW_TO_FAILOVER at the same time.
<
---
> ALLOW_TO_FAILOVER Allow to failover or detaching backend. This is the default. You cannot specify this with DISALLOW_TO_FAILOVER at the same time.
> DISALLOW_TO_FAILOVER Disallow to failover or detaching backend. This is useful when you protect a backend by using HA (High Availability) software such as Heartbeat or Pacemaker. You cannot specify this with ALLOW_TO_FAILOVER at the same time.
398a458,459
>
> back to top
405c466,470
< The maximum number of cached connections in pgpool-II children processes. pgpool-II reuses the cached connection if an incoming connection is connecting to the same database with the same username. If not, pgpool-II creates a new connection to the backend. If the number of cached connections exceeds max_pool, the oldest connection will be discarded, and that slot will be used for the new connection. Default value is 4. Please be aware that the number of connections from pgpool-II processes to the backends may reach num_init_children * max_pool. This parameter can only be set at server start.
---
> The maximum number of cached connections in pgpool-II children processes. pgpool-II reuses the cached connection if an incoming connection is connecting to the same database with the same username. If not, pgpool-II creates a new connection to the backend. If the number of cached connections exceeds max_pool, the oldest connection will be discarded, and that slot will be used for the new connection.
>
> Default value is 4. Please be aware that the number of connections from pgpool-II processes to the backends may reach num_init_children * max_pool.
>
> This parameter can only be set at server start.
415d479
<
417d480
<
429a493,494
>
> back to top
449c514,518
< When set to true, if all backends don't return the same packet kind, the backends that differ from the most frequent result set are degenerated. A typical use case is a SELECT statement being part of a transaction, replicate_select set to true, and SELECT returning a different number of rows among backends. Non-SELECT statements might trigger this though. For example, a backend succeeded in an UPDATE, while others failed. Note that pgpool does NOT examine the content of records returned by SELECT. If set to false, the session is terminated and the backends are not degenerated. Default is false.
---
> When set to true, if all backends don't return the same packet kind, the backends that differ from the most frequent result set are degenerated.
>
> A typical use case is a SELECT statement being part of a transaction, replicate_select set to true, and SELECT returning a different number of rows among backends. Non-SELECT statements might trigger this though. For example, a backend succeeded in an UPDATE, while others failed. Note that pgpool does NOT examine the content of records returned by SELECT.
>
> If set to false, the session is terminated and the backends are not degenerated. Default is false.
479d547
<
490d557
<
498d564
<
517d582
<
576c641
< SELECT lo_creat
---
> SELECT lo_create
593d657
< it's not SELECT nextval or SELECT setval
596c660
< it starts with "SELECT" or one of COPY TO STDOUT, EXPLAIN, EXPLAIN ANALYZE SELECT... ignore_leading_white_space = true will ignore leading white space.
---
> it starts with "SELECT" or one of COPY TO STDOUT, EXPLAIN, EXPLAIN ANALYZE SELECT... ignore_leading_white_space = true will ignore leading white space. (Except for writing queries in black_list or white_list)
605d668
<
609d671
<
623a686
> back to top
626c689,691
< This mode is used to couple pgpool-II with another master/slave replication software (like Slony-I and Streaming replication), which is responsible for doing the actual data replication. DB nodes' information (backend_hostname, backend_port, backend_weight, backend_flag and backend_data_directory if you need the online recovery functionality) must be set, in the same way as in the replication mode. In addition to that, set master_slave_mode and load_balance_mode to true. pgpool-II will then send queries that need to be replicated to the Master DB, and other queries will be load balanced if possible. Queries sent to the Master DB because they cannot be balanced are of course accounted for in the load balancing algorithm.
---
> This mode is used to couple pgpool-II with another master/slave replication software (like Slony-I and Streaming replication), which is responsible for doing the actual data replication. DB nodes' information (backend_hostname, backend_port, backend_weight, backend_flag and backend_data_directory if you need the online recovery functionality) must be set, in the same way as in the replication mode. In addition to that, set master_slave_mode and load_balance_mode to true.
>
> pgpool-II will then send queries that need to be replicated to the Master DB, and other queries will be load balanced if possible. Queries sent to the Master DB because they cannot be balanced are of course accounted for in the load balancing algorithm.
636a702,703
>
> back to top
643c710,712
< Specifies the maximum tolerated replication delay of the standby against the primary server in WAL bytes. If the delay exceeds delay_threshold, pgpool-II does not send SELECT queries to the standby server anymore. Everything is sent to the primary server even if load balance mode is enabled, until the standby has caught up. If delay_threshold is 0 or sr checking is disabled, the delay checking is not performed. This check is performed every 'sr_check_period'. The default value for delay_threshold is 0. You need to reload pgpool.conf if you change this directive.
---
> Specifies the maximum tolerated replication delay of the standby against the primary server in WAL bytes. If the delay exceeds delay_threshold, pgpool-II does not send SELECT queries to the standby server anymore. Everything is sent to the primary server even if load balance mode is enabled, until the standby has caught up. If delay_threshold is 0 or sr checking is disabled, the delay checking is not performed. This check is performed every 'sr_check_period'. The default value for delay_threshold is 0.
>
> You need to reload pgpool.conf if you change this directive.
646c715,717
< This parameter specifies the interval between the streaming replication delay checks in seconds. Default is 0, which means the check is disabled. You need to reload pgpool.conf if you change sr_check_period.
---
> This parameter specifies the interval between the streaming replication delay checks in seconds. Default is 0, which means the check is disabled.
>
> You need to reload pgpool.conf if you change sr_check_period.
649c720,722
< The user name to perform streaming replication check. This user must exist in all the PostgreSQL backends. Otherwise, the check causes an error. Note that sr_check_user and sr_check_password are used even if sr_check_period is 0. To identify the primary server, pgpool-II sends a function call request to each backend. sr_check_user and sr_check_password are used for this session. You need to reload pgpool.conf if you change sr_check_user.
---
> The user name to perform streaming replication check. This user must exist in all the PostgreSQL backends. Otherwise, the check causes an error. Note that sr_check_user and sr_check_password are used even if sr_check_period is 0. To identify the primary server, pgpool-II sends a function call request to each backend. sr_check_user and sr_check_password are used for this session.
>
> You need to reload pgpool.conf if you change sr_check_user.
652c725,727
< The password of the user to perform streaming replication check. If no password is required, specify the empty string (''). You need to reload pgpool.conf if you change sr_check_password.
---
> The password of the user to perform streaming replication check. If no password is required, specify the empty string ('').
>
> You need to reload pgpool.conf if you change sr_check_password.
753d827
<
777c851
< Leave recovery_2nd_stage_command be empty.
---
> Leave rrecovery_2nd_stage_command be empty.
801a876
> back to top
805d879
<
901a976
> back to top
941a1017,1018
>
> back to top
961a1039,1202
>
> back to top
> On memory query Cache
>
> You can use on memory query cache in any mode. It is different from the query cache above in that on memory query cache is faster because the cache storage is in memory. Moreover, you don't need to restart pgpool-II when the cache is outdated because the underlying table gets updated.
>
> On memory cache saves pairs of SELECT statements and their results (with the Bind parameters if the SELECT is an extended query). If the same SELECT comes in, it returns the value from cache. Since no SQL parsing nor access to PostgreSQL is involved, it's extremely fast.
>
> On the other hand, it might be slower than the normal path because it adds some overhead to store the cache. Moreover, when a table is updated, pgpool automatically deletes all the caches related to the table. So performance will be degraded on a system with a lot of updates. If the cache_hit_ratio is lower than 70%, you might want to disable on memory cache.
> Restrictions
>
> On memory query cache automatically deletes all the caches of an updated table by monitoring whether the executed query is UPDATE, INSERT, ALTER TABLE and so on. But pgpool-II isn't able to recognize implicit updates due to triggers, foreign keys and DROP TABLE CASCADE. You can avoid this problem with memqcache_expire, by which pgpool deletes old cache after a fixed time automatically, or with white_memqcache_table_list or black_memqcache_table_list, by which pgpool's memory cache flow ignores those tables.
> If you want to use multiple instances of pgpool-II with on memory cache which uses shared memory, it could happen that one pgpool deletes the cache and the other one doesn't, and thus finds an old cached result when a table gets updated. Memcached is the better cache storage in this case.
>
> Enabling on memory query cache
>
> To enable the memory cache functionality, set this to on (default is off).
>
> memory_cache_enabled = on
>
> Choosing cache storage
>
> You can choose a cache storage: shared memory or memcached (you can't use both). Query cache with shared memory is fast and easy because you don't have to install and configure memcached, but the maximum size of the cache is restricted by that of the shared memory. Query cache with memcached incurs network access overhead, but you can set the size as you like.
>
> Memory cache behavior can be specified by the memqcache_method directive. Either "shmem" (shared memory) or "memcached". Default is shmem.
>
> memqcache_method = 'shmem'
>
> When on memory query cache is prohibited
>
> Not all SELECTs and WITH queries can be cached. In some cases, including the following, caching is avoided to keep consistency between caches and databases.
>
> SELECT FOR SHARE / UPDATE
> SELECT including non-immutable functions
> SELECT including TEMP TABLE
> SELECT including system catalogs
> SELECT including VIEWs
> SELECT in an aborted explicit transaction
> SELECT including tables in white_memqcache_table_list or black_memqcache_table_list
> SELECT with the result larger than memqcache_maxcache
>
> When cache is not used
>
> It can happen that even if a matched query cache exists, pgpool doesn't return it.
>
> If an updating query is executed in an explicit transaction, pgpool doesn't use any query cache during the transaction.
> The matched query cache was made by another user (for security reasons)
> The matched query cache has to be deleted due to memqcache_expire.
>
> Configuring
>
> These are the parameters used with both shmem and memcached.
>
> memqcache_expire
>
> Life time of query cache in seconds. Default is 60. 0 means no cache expiration, and the cache remains valid until a table is updated. This parameter and memqcache_auto_cache_invalidation are orthogonal.
> memqcache_auto_cache_invalidation
>
> If on, automatically deletes the cache related to the updated tables. If off, does not delete caches. Default is on. This parameter and memqcache_expire are orthogonal.
> memqcache_maxcache
>
> If the size of a SELECT result is larger than memqcache_maxcache bytes, it is not cached and the following message is shown:
>
> 2012-05-02 15:08:17 LOG: pid 13756: pool_add_temp_query_cache: data size exceeds memqcache_maxcache. current:4095 requested:111 memq_maxcache:4096
>
> To avoid this problem, you have to set memqcache_maxcache larger. But if you use shared memory as the cache storage, it must be lower than memqcache_cache_block_size. If memcached is used, it must be lower than the size of a slab (default is 1 MB).
> white_memqcache_table_list
>
> Specify a comma separated list of table names whose SELECT results are to be cached. You can use regular expressions.
>
> If both white_memqcache_table_list and black_memqcache_table_list are '' (the default), SELECT results of any tables are to be cached.
> black_memqcache_table_list
>
> Specify a comma separated list of table names whose SELECT results are NOT to be cached. You can use regular expressions.
>
> If both white_memqcache_table_list and black_memqcache_table_list are '' (the default), SELECT results of any tables are to be cached.
> memqcache_oiddir
>
> Full path to the directory where the OIDs of tables used by SELECTs are stored. Under memqcache_oiddir there are directories named after database OIDs, and under each of them there are files named after the OIDs of the tables used by SELECTs. Pointers to the query cache are stored in these files. They are used as keys to delete caches.
>
> Directories and files under memqcache_oiddir are deleted whenever pgpool-II restarts.
>
> Monitoring caches
>
> This explains how to monitor on memory query cache. To know if a SELECT result comes from query cache or not, enable log_per_node_statement.
>
> 2012-05-01 15:42:09 LOG: pid 20181: query result fetched from cache. statement: select * from t1;
>
> The pool_status command shows the cache hit ratio.
>
> memqcache_stats_start_time | Tue May 1 15:41:59 2012 | Start time of query cache stats
> memqcache_no_cache_hits | 80471 | Number of SELECTs not hitting query cache
> memqcache_cache_hits | 36717 | Number of SELECTs hitting query cache
>
> In this example, you can calculate it as below:
>
> (memqcache_cache_hits) / (memqcache_no_cache_hits+memqcache_cache_hits) = 36717 / (36717 + 80471) = 31.3%
>
> The show pool_cache command shows the same information.
> Configuring to use shared memory
>
> These are the parameters used with shared memory as the cache storage.
>
> memqcache_total_size
>
> Specify the size of the shared memory cache storage in bytes.
> memqcache_max_num_cache
>
> Specify the number of cache entries. This is used to define the size of the cache management space (you need this in addition to memqcache_total_size): it is memqcache_max_num_cache * 48 bytes. Too small a number will cause an error while registering cache; too large a number is just a waste of space.
> memqcache_cache_block_size
>
> Specify the size of a cache block in bytes when the cache storage is shared memory. SELECT results are stored in blocks of this size, so a result larger than memqcache_cache_block_size cannot be cached (see memqcache_maxcache above).
>
> Configuring to use memcached
>
> These are the parameters used with memcached as the cache storage.
>
> memqcache_memcached_host
>
> Specify the host name or the IP address on which memcached works. If it is on the same host as pgpool-II, set 'localhost'.
> memqcache_memcached_port
>
> Specify the port number of memcached. Default is 11211.
>
> memcached Installation
>
> To use memcached as cache storage, pgpool-II needs a working memcached and the client library: libmemcached. It is easy to install them from RPMs. This explains how to install them from source code.
>
> memcached's source code can be downloaded from: memcached development page
>
> configure
>
> After extracting the source tarball, execute the configure script.
>
> ./configure
>
> make
>
> make
> make install
>
> libmemcached Installation
>
> Libmemcached is a client library for memcached. You need to install libmemcached after installing memcached.
>
> libmemcached's source code can be downloaded from: libmemcached development page
>
> configure
>
> After extracting the source tarball, execute the configure script.
>
> ./configure
>
> If you want non-default values, some options can be set:
>
> --with-memcached=path
> The top directory where memcached is installed.
>
> make
>
> make
> make install
>
> back to top
962a1204
> Start pgpool-II
968,981c1210,1218
< -c
< --clear-cache deletes query cache
< -f config_file
< --config-file config-file specifies pgpool.conf
< -a hba_file
< --hba-file hba_file specifies pool_hba.conf
< -F pcp_config_file
< --pcp-password-file specifies pcp.conf
< -n
< --no-daemon no daemon mode (terminal is not detached)
< -D
< --discard-status Discard pgpool_status file and do not restore previous status
< -d
< --debug debug mode
---
> -c --clear-cache deletes query cache
> -f config_file --config-file config-file specifies pgpool.conf
> -a hba_file --hba-file hba_file specifies pool_hba.conf
> -F pcp_config_file --pcp-password-file specifies pcp.conf
> -n --no-daemon no daemon mode (terminal is not detached)
> -D --discard-status Discard pgpool_status file and do not restore previous status
> -d --debug debug mode
> Stop pgpool-II
>
986,991c1223,1225
< -m s[mart]
< --mode s[mart] waits for clients to disconnect, and shutdown (default)
< -m f[ast]
< --mode f[ast] does not wait for clients; shutdown immediately
< -m i[mmediate]
< --mode i[mmediate] the same as '-m f'
---
> -m s[mart] --mode s[mart] waits for clients to disconnect, and shutdown (default)
> -m f[ast] --mode f[ast] does not wait for clients; shutdown immediately
> -m i[mmediate] --mode i[mmediate] the same as '-m f'
1001a1236,1237
>
> back to top
1008,1013c1244,1246
< -f config_file
< --config-file config-file specifies pgpool.conf
< -a hba_file
< --hba-file hba_file specifies pool_hba.conf
< -F pcp_config_file
< --pcp-password-file specifies pcp.conf
---
> -f config_file --config-file config-file specifies pgpool.conf
> -a hba_file --hba-file hba_file specifies pool_hba.conf
> -F pcp_config_file --pcp-password-file specifies pcp.conf
1015a1249,1250
>
> back to top
1047c1282
< "SHOW pool_nodes" sends back a list of all configured nodes. It displays the node id, the hostname, the port, the status, and the weight (only meaningful if you use the load balancing mode). The possible values in the status column are explained in the pcp_node_info reference.
---
> "SHOW pool_nodes" sends back a list of all configured nodes. It displays the node id, the hostname, the port, the status, the weight (only meaningful if you use the load balancing mode) and the role. The possible values in the status column are explained in the pcp_node_info reference.
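
As a reading aid for the on memory query cache section added earlier in this diff, the directives it introduces can be collected into a single pgpool.conf sketch. The values below are illustrative only: memqcache_expire, memqcache_maxcache and the memcached port reuse numbers quoted in that section, while the sizing values and the oiddir path are placeholders, not taken from either version of the page; shared memory is assumed as the cache storage.

memory_cache_enabled = on
memqcache_method = 'shmem'
memqcache_expire = 60                          # default quoted in the section above
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 4096                      # matches the example log message above
memqcache_total_size = 67108864                # placeholder: 64MB of shared memory
memqcache_max_num_cache = 1000000              # placeholder: management space is this * 48 bytes
memqcache_cache_block_size = 8192              # placeholder: must not be smaller than memqcache_maxcache
memqcache_oiddir = '/var/log/pgpool/oiddir'    # placeholder path
white_memqcache_table_list = ''
black_memqcache_table_list = ''

With memqcache_method = 'memcached', memqcache_memcached_host and memqcache_memcached_port would be set instead of the shared memory sizing directives.
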
1050,1053c1285,1288
<  id   | hostname    | port | status | lb_weight
< ------+-------------+------+--------+-----------
<  0    | 127.0.0.1   | 5432 | 2      | 0.5
<  1    | 192.168.1.7 | 5432 | 3      | 0.5
---
>  id   | hostname    | port | status | lb_weight | role
> ------+-------------+------+--------+-----------+---------
>  0    | 127.0.0.1   | 5432 | 2      | 0.5       | primary
>  1    | 192.168.1.7 | 5432 | 3      | 0.5       | standby
1134a1370,1401
> pool_cache
>
> "SHOW pool_cache" displays cache storage statistics if on memory query cache is enabled. Here is an example of it:
>
> test=# \x
> \x
> Expanded display is on.
> test=# show pool_cache;
> show pool_cache;
> -[ RECORD 1 ]---------------+---------
> num_cache_hits              | 891703
> num_selects                 | 99995
> cache_hit_ratio             | 0.90
> num_hash_entries            | 131072
> used_hash_entries           | 99992
> num_cache_entries           | 99992
> used_cache_entries_size     | 12482600
> free_cache_entries_size     | 54626264
> fragment_cache_entries_size | 0
>
> num_cache_hits means the number of SELECTs which hit cache.
> num_selects means the number of SELECTs which do not hit cache.
> cache_hit_ratio means the cache hit ratio, calculated from num_cache_hits/(num_cache_hits+num_selects). Anything below num_hash_entries is valid only when cache storage is on shared memory.
> num_hash_entries means the number of entries in the hash table, which is used as an index to the cache storage and should be equal to memqcache_max_num_cache in pgpool.conf. This is the upper limit for the number of cache entries.
> used_hash_entries means the number of already used entries in num_hash_entries.
> num_cache_entries means the number of valid cache entries in the cache storage and should be equal to used_hash_entries.
> used_cache_entries_size means the total size of cache storage in bytes which is already used.
> free_cache_entries_size means the total size of cache storage in bytes which is not used yet or can be reused.
> fragment_cache_entries_size means the total size of cache storage in bytes which cannot be used because of fragmentation.
> The fragmented area can be reused later if free_cache_entries_size becomes 0 (or there's not enough space for the SELECT result).
>
> back to top
1164c1431,1433
< Data synchronization is finalized during what is called "second stage". Before entering the second stage, pgpool-II waits until all clients have disconnected. It blocks any new incoming connection until the second stage is over. After all connections have terminated, pgpool-II merges updated data between the first stage and the second stage. This is the final data synchronization step.
---
> Data synchronization is finalized during what is called "second stage". Before entering the second stage, pgpool-II waits until all clients have disconnected. It blocks any new incoming connection until the second stage is over.
>
> After all connections have terminated, pgpool-II merges updated data between the first stage and the second stage. This is the final data synchronization step.
1198a1468
> 1st stage
1217a1488,1490
> 2nd stage
>
> The second stage of the procedure is a script to force an XLOG file switch. This script is named "pgpool_recovery_pitr" here. It enforces a switch of the transaction log. For this purpose, pg_switch_xlog could be used.
1219c1492
< The second stage of the procedure is a script to force an XLOG file switch. This script is named "pgpool_recovery_pitr" here. It enforces a switch of the transaction log. For this purpose, pg_switch_xlog could be used. However it may return before the switch is done and this might lead to failure of the online recovery procedure. Pgpool-II provides a safer function called "pgpool_switch_xlog" which will wait until the transaction log switching is actually finished. pgpool_switch_xlog is installed during the procedure performed in the Installing C functions section.
---
> However it may return before the switch is done and this might lead to failure of the online recovery procedure. Pgpool-II provides a safer function called "pgpool_switch_xlog" which will wait until the transaction log switching is actually finished. pgpool_switch_xlog is installed during the procedure performed in the Installing C functions section.
1243c1516,1518
< This flushing of sequences is only useful in replication mode: in this case, sequences have to have the same starting point on all nodes. It's not useful in master-slave mode. The loop in the script forces PostgreSQL to emit the current value of all sequences in all databases in the master node to the transaction log so that it is propagated to the recovery target node.
---
> This flushing of sequences is only useful in replication mode: in this case, sequences have to have the same starting point on all nodes. It's not useful in master-slave mode.
>
> The loop in the script forces PostgreSQL to emit the current value of all sequences in all databases in the master node to the transaction log so that it is propagated to the recovery target node.
1253d1527
<
1349c1623
< Update primary PostgreSQl
---
> Update primary PostgreSQL
1352a1627
> back to top
1372a1648,1649
>
> back to top
1382a1660
>
1392a1671,1672
>
> back to top
1424d1703
<
1436a1716
> back to top
1449a1730
> After changing an md5 password (in both pool_passwd and PostgreSQL, of course), you need to execute "pgpool reload" or restart pgpool (if your pgpool is 3.1 or before).
1639d1919
<
1651a1932,1933
>
> back to top
1657,1667c1939,1947
<
< * pcp_node_count - retrieves the number of nodes
< * pcp_node_info - retrieves the node information
< * pcp_proc_count - retrieves the process list
< * pcp_proc_info - retrieves the process information
< * pcp_systemdb_info - retrieves the System DB information
< * pcp_detach_node - detaches a node from pgpool-II
< * pcp_attach_node - attaches a node to pgpool-II
< * pcp_promote_node - promotes a new master node to pgpool-II
< * pcp_stop_pgpool - stops pgpool-II
<
---
> pcp_node_count retrieves the number of nodes
> pcp_node_info retrieves the node information
> pcp_proc_count retrieves the process list
> pcp_proc_info retrieves the process information
> pcp_systemdb_info retrieves the System DB information
> pcp_detach_node detaches a node from pgpool-II
> pcp_attach_node attaches a node to pgpool-II
> pcp_promote_node promotes a new master node to pgpool-II
> pcp_stop_pgpool stops pgpool-II
1847a2128,2133
> pcp_recovery_node
>
> pcp_recovery_node _timeout_ _host_ _port_ _userid_ _passwd_ _nodeid_
>
> Attaches the given backend node with recovery.
>
1851d2136
<
1865a2151
> back to top
1883d2168
<
1886c2171,2175
< The retrieval query submitted by the client goes through the SQL parser. It is then analyzed using information stored in the system DB. Execution status of each part of this query is updated using this information. This execution status records where this node can be processed. For instance, if a table's data is distributed on several nodes (as declared in the catalog's pgpool_catalog.dist_def table), it has to be retrieved from all nodes. On the other hand, data from a table registered in pgpool_catalog.replicate_def is replicated, and can therefore be retrieved from any node. These states are 'P' when data has to be processed by all nodes, 'L' when it should be processed by one node. The 'S' status is a special case: it means that there is another step to be performed on the data after retrieving it from all nodes. For example, sorting data coming from a table registered in the pgpool_catalog.dist_def table.
---
> The retrieval query submitted by the client goes through the SQL parser. It is then analyzed using information stored in the system DB. Execution status of each part of this query is updated using this information.
>
> This execution status records where this node can be processed. For instance, if a table's data is distributed on several nodes (as declared in the catalog's pgpool_catalog.dist_def table), it has to be retrieved from all nodes. On the other hand, data from a table registered in pgpool_catalog.replicate_def is replicated, and can therefore be retrieved from any node.
>
> These states are 'P' when data has to be processed by all nodes, 'L' when it should be processed by one node. The 'S' status is a special case: it means that there is another step to be performed on the data after retrieving it from all nodes. For example, sorting data coming from a table registered in the pgpool_catalog.dist_def table.
1912d2200
<
1928d2215
<
1950d2236
<
1954d2239
<
1958d2242
<
1985c2269
< In this example, the FROM and WHERE clause and the TARGETLIST are in the L status. Because the subquery is in the P status, the query itself is in the S status. The rewriting is, as a consequence, performed as follows.
---
> In this example, the FROM and WHERE clause and the TARGETLIST are in the L status. Because the subquery is in the P status, the query itself is in the S status. The rewriting is, as a consequence, performed as follows.
2032d2315
<
2036a2320,2321
>
> back to top
2040c2325,2327
< Copyright © 2003 – 2011 pgpool Global Development Group
---
>
> back to top
> Copyright © 2003 – 2012 pgpool Global Development Group