<html><body><div style="color:#000; background-color:#fff; font-family:HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif;font-size:12pt">Hi,<br><div><br><span></span></div><div style="color: rgb(0, 0, 0); font-size: 16px; font-family: HelveticaNeue,Helvetica Neue,Helvetica,Arial,Lucida Grande,sans-serif; background-color: transparent; font-style: normal;"><span>I have just followed the tutorial, and in it the pgpool_remote_start script looks like this:</span><br></div><pre>#! /bin/sh
DEST=$1
DESTDIR=$2
PGCTL=/usr/local/pgsql/bin/pg_ctl
# Deploy a base backup
ssh -T $DEST 'cd /data/; <span style="color: rgb(205, 35, 44);"><span>tar zxf pgsql.tar.gz</span></span>' 2>/dev/null 1>/dev/null < /dev/null
# Startup PostgreSQL server
ssh -T $DEST $PGCTL -w -D $DESTDIR start 2>/dev/null 1>/dev/null < /dev/null &</pre><pre>so the first line decompresses pgsql.tar.gz and the second starts the postgres server.<br><br><br><br>I have updated my scripts:<br><br>basebackup.sh<br><br>#!/bin/bash -x<br><br>DATA=$1<br>RECOVERY_TARGET=$2<br>RECOVERY_DATA=$3<br><br>psql -c "select pg_start_backup('pgpool-recovery')" postgres<br>echo "restore_command = 'scp -P 2022 $HOSTNAME:/var/lib/postgresql/9.2/archive/%f %p'" > /var/lib/postgresql/9.2/data/recovery.conf<br>tar -C /var/lib/postgresql/9.2/data -zcf pgsql.tar.gz . --exclude='postmaster.log' --exclude='postmaster.pid' --exclude='postmaster.opts'<br>psql -c 'select pg_stop_backup()' postgres<br>scp -P 2022 pgsql.tar.gz $RECOVERY_TARGET:$RECOVERY_DATA<br><span style="color: rgb(205, 35, 44);"><span style="font-weight: bold;">ssh -T -p 2022 root@$RECOVERY_TARGET 'cd /var/lib/postgresql/9.2/data; tar zxf
pgsql.tar.gz'</span></span><br><br><br># the pgpool_recovery_pitr script is unchanged<br><br><br>pgpool_remote_start<br><br>#!/bin/bash -x<br>#<br># Start PostgreSQL on the recovery target node<br>#<br>DEST=$1<br>DESTDIR=$2<br>ssh -T -p 2022 root@$DEST '/etc/init.d/postgresql-9.2 start'<br>rm -rf pgsql.tar.gz<br>rm -rf recovery.conf<br><br></pre><div> but the problem persists.<br>Is this what you are trying to say, or have I misunderstood?<br><br><br></div><div>Regards,<br>Adrian Videanu</div><div><br></div> <div style="font-family: HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif; font-size: 12pt;"> <div style="font-family: HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif; font-size: 12pt;"> <div dir="ltr"> <hr size="1"> <font face="Arial" size="2"> <b><span style="font-weight:bold;">From:</span></b> Yugo Nagata <nagata@sraoss.co.jp><br> <b><span style="font-weight:
To:</span></b>">
bold;">To:</span></b> Videanu Adrian <videanuadrian@yahoo.com> <br><b><span style="font-weight: bold;">Cc:</span></b> "pgpool-general@pgpool.net" <pgpool-general@pgpool.net> <br> <b><span style="font-weight: bold;">Sent:</span></b> Thursday, November 7, 2013 1:58 PM<br> <b><span style="font-weight: bold;">Subject:</span></b> Re: [pgpool-general: 2237] pgpool 3.3.1 replicate setup with 3 nodes<br> </font> </div> <div class="y_msg_container"><br>Hi,<br><br>In your basebackup.sh, the backed-up data (pgsql.tar.gz) is sent to the recovery target<br>host but isn't extracted. It should be extracted in the 1st stage by basebackup.sh<br>and brought up to the latest state in the 2nd stage by pgpool_recovery_pitr.sh.<br>So you should not overwrite the data in pgpool_remote_start; that script only has<br>to restart PostgreSQL.<br><br><br><br>On Thu, 7 Nov 2013 03:15:44 -0800 (PST)<br>Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com"
href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> wrote:<br><br>> Hi,<br>> <br>> Here are my scripts: <br>> (<br>> There must be something related to this line <br>> <br>> psql -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br>> )<br>> <br>> <br>> <br>> basebackup.sh<br>> <br>> #!/bin/bash -x<br>> <br>> DATA=$1<br>> RECOVERY_TARGET=$2<br>> RECOVERY_DATA=$3<br>> <br>> psql -c "select pg_start_backup('pgpool-recovery')" postgres<br>> echo "restore_command = 'scp -P 2022 $HOSTNAME:/var/lib/postgresql/9.2/archive/%f %p'" > /var/lib/postgresql/9.2/data/recovery.conf<br>> tar -C /var/lib/postgresql/9.2/data -zcf pgsql.tar.gz . --exclude='postmaster.log' --exclude='postmaster.pid' --exclude='postmaster.opts'<br>> psql -c 'select pg_stop_backup()' postgres<br>> scp
-P 2022 pgsql.tar.gz $RECOVERY_TARGET:$RECOVERY_DATA<br>> <br>> <br>> pgpool_recovery_pitr.sh<br>> <br>> #!/bin/bash<br>> <br>> # Online recovery 2nd stage script<br>> #<br>> datadir=$1 # master database cluster<br>> DEST=$2 # hostname of the DB node to be recovered<br>> DESTDIR=$3 # database cluster of the DB node to be recovered<br>> port=5432 # PostgreSQL port number<br>> archdir=/var/lib/postgresql/9.2/archive # archive log directory<br>> <br>> # Force to flush current value of sequences to xlog<br>> psql -p $port -t -c 'SELECT datname FROM pg_database WHERE NOT datistemplate AND datallowconn' template1|<br>> while read i<br>> do<br>> if [ "$i" != "" ];then<br>> psql -p $port -c "SELECT
setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i<br>> fi<br>> done<br>> <br>> psql -p $port -c "SELECT pgpool_switch_xlog('$archdir')" template1<br>> <br>> <br>> pgpool_remote_start<br>> <br>> #!/bin/bash -x<br>> #<br>> # Start PostgreSQL on the recovery target node<br>> #<br>> DEST=$1<br>> DESTDIR=$2<br>> # Deploy a base backup<br>> ssh -T -p 2022 root@$DEST 'cd /var/lib/postgresql/9.2/data; tar zxf pgsql.tar.gz'<br>> ssh -T -p 2022 root@$DEST '/etc/init.d/postgresql-9.2 start'<br>> rm -rf pgsql.tar.gz<br>> rm -rf recovery.conf<br>> <br>> <br>> <br>> These are my scripts used in recovery process. I will put here the config file also:<br>> <br>> # ----------------------------<br>> # pgPool-II configuration file<br>> # ----------------------------<br>> #<br>> # This file consists of lines of the form:<br>> #<br>> # name =
value<br>> #<br>> # Whitespace may be used. Comments are introduced with "#" anywhere on a line.<br>> # The complete list of parameter names and allowed values can be found in the<br>> # pgPool-II documentation.<br>> #<br>> # This file is read on server startup and when the server receives a SIGHUP<br>> # signal. If you edit the file on a running system, you have to SIGHUP the<br>> # server for the changes to take effect, or use "pgpool reload". Some<br>> # parameters, which are marked below, require a server shutdown and restart to<br>> # take effect.<br>> #<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # CONNECTIONS<br>> #------------------------------------------------------------------------------<br>> <br>> # - pgpool Connection Settings -<br>> <br>> listen_addresses = '*'<br>>
# Host name or IP address to listen on:<br>> # '*' for all, '' for no TCP/IP connections<br>> # (change requires restart)<br>> port = 9999<br>>
# Port number<br>> # (change requires restart)<br>> socket_dir = '/tmp'<br>> # Unix domain socket path<br>> # The Debian package
defaults to<br>> # /var/run/postgresql<br>> # (change requires restart)<br>> <br>> <br>> # - pgpool Communication Manager Connection Settings -<br>> <br>> pcp_port = 9898<br>> # Port number for pcp<br>>
# (change requires restart)<br>> pcp_socket_dir = '/tmp'<br>> # Unix domain socket path for pcp<br>> # The Debian package defaults to<br>>
# /var/run/postgresql<br>> # (change requires restart)<br>> <br>> # - Backend Connection Settings -<br>> <br>> backend_hostname0 = '192.168.91.33'<br>> # Host name or IP address to connect to for backend 0<br>> backend_port0 = 5432<br>>
# Port number for backend 0<br>> backend_weight0 = 1<br>> # Weight for backend 0 (only in load balancing mode)<br>> backend_data_directory0 = '/var/lib/postgresql/9.2/data'<br>> # Data directory for backend 0<br>> backend_flag0 = 'ALLOW_TO_FAILOVER'<br>>
# Controls various backend behavior<br>> # ALLOW_TO_FAILOVER or DISALLOW_TO_FAILOVER<br>> <br>> backend_hostname1 = '192.168.91.34'<br>> backend_port1 = 5432<br>> backend_weight1 = 1<br>> backend_data_directory1 = '/var/lib/postgresql/9.2/data'<br>> backend_flag1 = 'ALLOW_TO_FAILOVER'<br>> <br>> <br>> backend_hostname2 = '192.168.91.35'<br>> backend_port2 = 5432<br>> backend_weight2 = 1<br>> backend_data_directory2 = '/var/lib/postgresql/9.2/data'<br>> backend_flag2 = 'ALLOW_TO_FAILOVER'<br>> <br>>
<br>> # - Authentication -<br>> <br>> enable_pool_hba = on<br>> # Use pool_hba.conf for client authentication<br>> pool_passwd = ''<br>> # File name of pool_passwd for md5 authentication.<br>> # "" disables pool_passwd.<br>>
# (change requires restart)<br>> authentication_timeout = 60<br>> # Delay in seconds to complete client authentication<br>> # 0 means no timeout.<br>> <br>> # - SSL Connections -<br>> <br>> ssl = off<br>>
# Enable SSL support<br>> # (change requires restart)<br>> #ssl_key = './server.key'<br>> # Path to the SSL private key file<br>> #
(change requires restart)<br>> #ssl_cert = './server.cert'<br>> # Path to the SSL public certificate file<br>> # (change requires restart)<br>> #ssl_ca_cert = ''<br>> # Path to a single PEM format file<br>>
# containing CA root certificate(s)<br>> # (change requires restart)<br>> #ssl_ca_cert_dir = ''<br>> # Directory containing CA root certificate(s)<br>>
# (change requires restart)<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # POOLS<br>> #------------------------------------------------------------------------------<br>> <br>> # - Pool size -<br>> <br>> num_init_children = 100<br>> # Number of pools<br>> #
(change requires restart)<br>> max_pool = 10<br>> # Number of connections per pool<br>> # (change requires restart)<br>> <br>> # - Life time -<br>> <br>> child_life_time = 300<br>> # Pool exits after being idle for this many seconds<br>> child_max_connections = 0<br>>
# Pool exits after receiving that many connections<br>> # 0 means no exit<br>> connection_life_time = 0<br>> # Connection to backend closes after being idle for this many seconds<br>>
# 0 means no close<br>> client_idle_limit = 0<br>> # Client is disconnected after being idle for that many seconds<br>> # (even inside an explicit transactions!)<br>>
# 0 means no disconnection<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # LOGS<br>> #------------------------------------------------------------------------------<br>> <br>> # - Where to log -<br>> <br>> log_destination = 'syslog'<br>> # Where to log<br>> #
Valid values are combinations of stderr,<br>> # and syslog. Default to stderr.<br>> <br>> # - What to log -<br>> <br>> print_timestamp = on<br>> # Print timestamp on each line<br>> # (change requires restart)<br>> <br>> log_connections = on<br>>
# Log connections<br>> log_hostname = on<br>> # Hostname will be shown in ps status<br>> # and in logs if connections are logged<br>> log_statement = off<br>>
# Log all statements<br>> log_per_node_statement = on<br>> # Log all statements<br>> # with node and backend information<br>> log_standby_delay = 'none'<br>>
# Log standby delay<br>> # Valid values are combinations of always,<br>> # if_over_threshold, none<br>> <br>> # - Syslog specific -<br>> <br>> syslog_facility = 'LOCAL0'<br>>
# Syslog local facility. Default to LOCAL0<br>> syslog_ident = 'pgpool'<br>> # Syslog program identification string<br>> # Default to 'pgpool'<br>> <br>> # - Debug -<br>> <br>> debug_level = 0<br>>
# Debug message verbosity level<br>> # 0 means no message, 1 or more mean verbose<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # FILE LOCATIONS<br>> #------------------------------------------------------------------------------<br>> <br>> pid_file_name = '/var/run/pgpool.pid'<br>>
# PID file name<br>> # (change requires restart)<br>> logdir = '/tmp'<br>> # Directory of pgPool status file<br>> # (change requires
restart)<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # CONNECTION POOLING<br>> #------------------------------------------------------------------------------<br>> <br>> connection_cache = on<br>> # Activate connection pools<br>> # (change requires restart)<br>> <br>>
# Semicolon separated list of queries<br>> # to be issued at the end of a session<br>> # The default is for 8.3 and later<br>> reset_query_list = 'ABORT; DISCARD ALL'<br>>
# The following one is for 8.2 and before<br>> #reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # REPLICATION MODE<br>> #------------------------------------------------------------------------------<br>> <br>> replication_mode = on<br>> # Activate replication mode<br>>
# (change requires restart)<br>> replicate_select = off<br>> # Replicate SELECT statements<br>> # when in replication or parallel mode<br>>
# replicate_select is higher priority than<br>> # load_balance_mode.<br>> <br>> insert_lock = on<br>> # Automatically locks a dummy row or a table<br>>
# with INSERT statements to keep SERIAL data<br>> # consistency<br>> # Without SERIAL, no lock will be issued<br>> lobj_lock_table = ''<br>>
# When rewriting lo_creat command in<br>> # replication mode, specify table name to<br>> # lock<br>> <br>> # - Degenerate handling -<br>> <br>> replication_stop_on_mismatch = off<br>>
# On disagreement with the packet kind<br>> # sent from backend, degenerate the node<br>> # which is most likely "minority"<br>> # If off,
just force to exit this session<br>> <br>> failover_if_affected_tuples_mismatch = off<br>> # On disagreement with the number of affected<br>> # tuples in UPDATE/DELETE queries, then<br>> # degenerate the node which is most likely<br>>
# "minority".<br>> # If off, just abort the transaction to<br>> # keep the consistency<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # LOAD BALANCING MODE<br>> #------------------------------------------------------------------------------<br>> <br>> load_balance_mode
= on<br>> # Activate load balancing mode<br>> # (change requires restart)<br>> ignore_leading_white_space = on<br>> # Ignore leading white spaces of each query<br>> white_function_list = ''<br>>
# Comma separated list of function names<br>> # that don't write to database<br>> # Regexp are accepted<br>> black_function_list = 'nextval,setval'<br>>
# Comma separated list of function names<br>> # that write to database<br>> # Regexp are accepted<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # MASTER/SLAVE MODE<br>> #------------------------------------------------------------------------------<br>> <br>>
master_slave_mode = off<br>> # Activate master/slave mode<br>> # (change requires restart)<br>> master_slave_sub_mode = 'slony'<br>> # Master/slave sub mode<br>>
# Valid values are combinations slony or<br>> # stream. Default is slony.<br>> # (change requires restart)<br>> <br>> # - Streaming -<br>> <br>> sr_check_period = 0<br>>
# Streaming replication check period<br>> # Disabled (0) by default<br>> sr_check_user = 'postgres'<br>> # Streaming replication check user<br>>
# This is necessary even if you disable<br>> # streaming replication delay check with<br>> # sr_check_period = 0<br>> sr_check_password = ''<br>>
# Password for streaming replication check user<br>> delay_threshold = 0<br>> # Threshold before not dispatching query to standby node<br>> # Unit is in bytes<br>>
# Disabled (0) by default<br>> <br>> # - Special commands -<br>> <br>> follow_master_command = ''<br>> # Executes this command after master failover<br>> # Special values:<br>>
# %d = node id<br>> # %h = host name<br>> # %p = port number<br>> # %D = database cluster
path<br>> # %m = new master node id<br>> # %H = hostname of the new master node<br>> # %M = old master node id<br>>
# %P = old primary node id<br>> # %r = new master port number<br>> # %R = new master database cluster path<br>>
# %% = '%' character<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # PARALLEL MODE<br>> #------------------------------------------------------------------------------<br>> <br>> parallel_mode = off<br>> # Activates parallel query mode<br>> # (change
requires restart)<br>> pgpool2_hostname = ''<br>> # Set pgpool2 hostname<br>> # (change requires restart)<br>> <br>> # - System DB info -<br>> <br>> system_db_hostname = 'localhost'<br>> # (change requires restart)<br>> system_db_port = 5432<br>>
# (change requires restart)<br>> system_db_dbname = 'pgpool'<br>> # (change requires restart)<br>> system_db_schema = 'pgpool_catalog'<br>> # (change requires restart)<br>> system_db_user = 'pgpool'<br>>
# (change requires restart)<br>> system_db_password = ''<br>> # (change requires restart)<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # HEALTH CHECK<br>> #------------------------------------------------------------------------------<br>> <br>> health_check_period = 2<br>>
# Health check period<br>> # Disabled (0) by default<br>> health_check_timeout = 20<br>> # Health check timeout<br>> # 0 means no
timeout<br>> health_check_user = 'postgres'<br>> # Health check user<br>> health_check_password = ''<br>> # Password for health check user<br>> health_check_max_retries = 0<br>> # Maximum number of times to retry a failed health check before giving up.<br>> health_check_retry_delay = 1<br>>
# Amount of time to wait (in seconds) between retries.<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # FAILOVER AND FAILBACK<br>> #------------------------------------------------------------------------------<br>> <br>> failover_command = ''<br>> # Executes this command at failover<br>>
# Special values:<br>> # %d = node id<br>> # %h = host name<br>> # %p = port number<br>>
# %D = database cluster path<br>> # %m = new master node id<br>> # %H = hostname of the new master node<br>>
# %M = old master node id<br>> # %P = old primary node id<br>> # %r = new master port number<br>>
# %R = new master database cluster path<br>> # %% = '%' character<br>> failback_command = ''<br>> # Executes this command at failback.<br>> # Special values:<br>>
# %d = node id<br>> # %h = host name<br>> # %p = port number<br>> # %D = database cluster
path<br>> # %m = new master node id<br>> # %H = hostname of the new master node<br>> # %M = old master node id<br>>
# %P = old primary node id<br>> # %r = new master port number<br>> # %R = new master database cluster path<br>>
# %% = '%' character<br>> <br>> fail_over_on_backend_error = on<br>> # Initiates failover when reading/writing to the<br>> # backend communication socket fails<br>>
# If set to off, pgpool will report an<br>> # error and disconnect the session.<br>> <br>> search_primary_node_timeout = 10<br>> # Timeout in seconds to search for the<br>>
# primary node when a failover occurs.<br>> # 0 means no timeout, keep searching<br>> # for a primary node forever.<br>> <br>> #------------------------------------------------------------------------------<br>> # ONLINE RECOVERY<br>> #------------------------------------------------------------------------------<br>> <br>>
recovery_user = 'postgres'<br>> # Online recovery user<br>> recovery_password = ''<br>> # Online recovery password<br>> recovery_1st_stage_command = 'basebackup.sh'<br>> # Executes a command in first stage<br>> recovery_2nd_stage_command = 'pgpool_recovery_pitr.sh'<br>>
# Executes a command in second stage<br>> recovery_timeout = 900<br>> # Timeout in seconds to wait for the<br>> # recovering node's postmaster to start up<br>>
# 0 means no wait<br>> client_idle_limit_in_recovery = 0<br>> # Client is disconnected after being idle<br>> # for that many seconds in the second stage<br>>
# of online recovery<br>> # 0 means no disconnection<br>> # -1 means immediate disconnection<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # WATCHDOG<br>> #------------------------------------------------------------------------------<br>> <br>> # - Enabling -<br>>
<br>> use_watchdog = on<br>> # Activates watchdog<br>> # (change requires restart)<br>> <br>> # -Connection to up stream servers -<br>> <br>> trusted_servers = ''<br>> # trusted server list which are used<br>>
# to confirm network connection<br>> # (hostA,hostB,hostC,...)<br>> # (change requires restart)<br>> ping_path = '/bin'<br>>
# ping command path<br>> # (change requires restart)<br>> <br>> # - Watchdog communication Settings -<br>> <br>> wd_hostname = '192.168.91.31'<br>> # Host name or IP address of this watchdog<br>>
# (change requires restart)<br>> wd_port = 9000<br>> # port number for watchdog service<br>> # (change requires restart)<br>> wd_authkey = ''<br>>
# Authentication key for watchdog communication<br>> # (change requires restart)<br>> <br>> # - Virtual IP control Setting -<br>> <br>> delegate_IP = '192.168.91.30'<br>> # delegate IP address<br>>
# If this is empty, the virtual IP is never brought up. <br>> # (change requires restart)<br>> ifconfig_path = '/bin'<br>> # ifconfig command path<br>>
# (change requires restart)<br>> if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0'<br>> # startup delegate IP command<br>> # (change requires restart)<br>> if_down_cmd = 'ifconfig eth0:0 down'<br>>
# shutdown delegate IP command<br>> # (change requires restart)<br>> <br>> arping_path = '/sbin' # arping command path<br>> # (change requires restart)<br>> <br>> arping_cmd = 'arping -U $_IP_$ -w 1'<br>>
# arping command<br>> # (change requires restart)<br>> <br>> # - Behavior on escalation Setting -<br>> <br>> clear_memqcache_on_escalation = on<br>> # Clear all the query cache on shared memory<br>>
# when standby pgpool escalate to active pgpool<br>> # (= virtual IP holder).<br>> # This should be off if client connects to pgpool<br>>
# not using virtual IP.<br>> # (change requires restart)<br>> wd_escalation_command = ''<br>> # Executes this command at escalation on new active pgpool.<br>>
# (change requires restart)<br>> <br>> # - Lifecheck Setting - <br>> <br>> # -- common --<br>> <br>> #wd_lifecheck_method = 'heartbeat'<br>> # Method of watchdog lifecheck ('heartbeat' or 'query')<br>> # (change requires restart)<br>> wd_lifecheck_method = 'heartbeat'<br>> wd_interval = 2<br>>
# lifecheck interval (sec) > 0<br>> # (change requires restart)<br>> <br>> # -- heartbeat mode --<br>> <br>> wd_heartbeat_port = 9694<br>> # Port number for receiving heartbeat signal<br>>
# (change requires restart)<br>> wd_heartbeat_keepalive = 2<br>> # Interval time of sending heartbeat signal (sec)<br>> # (change requires restart)<br>> wd_heartbeat_deadtime = 30<br>>
# Deadtime interval for heartbeat signal (sec)<br>> # (change requires restart)<br>> <br>> heartbeat_destination0 = '192.168.91.32'<br>> # Host name or IP address of destination 0<br>>
# for sending heartbeat signal.<br>> # (change requires restart)<br>> heartbeat_destination_port0 = 9694 <br>> # Port number of destination 0 for sending<br>>
# heartbeat signal. Usually this is the<br>> # same as wd_heartbeat_port.<br>> # (change requires restart)<br>> heartbeat_device0 = 'eth0'<br>>
# Name of NIC device (such like 'eth0')<br>> # used for sending/receiving heartbeat<br>> # signal to/from destination 0.<br>>
# This works only when this is not empty<br>> # and pgpool has root privilege.<br>> # (change requires restart)<br>> <br>> #heartbeat_destination1 = 'host0_ip2'<br>> #heartbeat_destination_port1 = 9694<br>> #heartbeat_device1 = ''<br>> <br>> # -- query mode --<br>> <br>> wd_life_point = 3<br>>
# lifecheck retry times<br>> # (change requires restart)<br>> wd_lifecheck_query = 'SELECT 1'<br>> # lifecheck query to pgpool from watchdog<br>>
# (change requires restart)<br>> wd_lifecheck_dbname = 'template1'<br>> # Database name connected for lifecheck<br>> # (change requires restart)<br>> wd_lifecheck_user = 'postgres'<br>>
# watchdog user monitoring pgpools in lifecheck<br>> # (change requires restart)<br>> wd_lifecheck_password = ''<br>> # Password for watchdog user in lifecheck<br>>
# (change requires restart)<br>> <br>> # - Other pgpool Connection Settings -<br>> <br>> other_pgpool_hostname0 = '192.168.91.32'<br>> # Host name or IP address to connect to for other pgpool 0<br>> # (change requires restart)<br>> other_pgpool_port0 = 9999<br>>
# Port number for other pgpool 0<br>> # (change requires restart)<br>> other_wd_port0 = 9000<br>> # Port number for other watchdog 0<br>>
# (change requires restart)<br>> #other_pgpool_hostname1 = 'host1'<br>> #other_pgpool_port1 = 5432<br>> #other_wd_port1 = 9000<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # OTHERS<br>> #------------------------------------------------------------------------------<br>> relcache_expire = 0<br>> # Life time of relation cache in seconds.<br>>
# 0 means no cache expiration(the default).<br>> # The relation cache is used for cache the<br>> # query result against PostgreSQL system<br>>
# catalog to obtain various information<br>> # including table structures or if it's a<br>> # temporary table or not. The cache is<br>> #
maintained in a pgpool child local memory<br>> # and being kept as long as it survives.<br>> # If someone modify the table by using<br>> # ALTER TABLE or some such, the relcache is<br>>
# not consistent anymore.<br>> # For this purpose, cache_expiration<br>> # controls the life time of the cache.<br>> <br>> relcache_size = 256<br>>
# Number of relation cache<br>> # entry. If you see frequently:<br>> # "pool_search_relcache: cache replacement happend"<br>> # in the
pgpool log, you might want to increase this number.<br>> <br>> check_temp_table = on<br>> # If on, enable temporary table check in SELECT statements.<br>> # This initiates queries against system catalog of primary/master<br>> # thus increases load of master.<br>>
# If you are absolutely sure that your system never uses temporary tables<br>> # and you want to save access to primary/master, you could turn this off.<br>> # Default is on.<br>> <br>> <br>> #------------------------------------------------------------------------------<br>> # ON MEMORY QUERY MEMORY CACHE<br>>
#------------------------------------------------------------------------------<br>> memory_cache_enabled = off<br>> # If on, use the memory cache functionality, off by default<br>> memqcache_method = 'shmem'<br>> # Cache storage method. either 'shmem'(shared memory) or<br>> # 'memcached'. 'shmem' by default<br>>
# (change requires restart)<br>> memqcache_memcached_host = 'localhost'<br>> # Memcached host name or IP address. Mandatory if<br>> # memqcache_method = 'memcached'.<br>> # Defaults to localhost.<br>>
# (change requires restart)<br>> memqcache_memcached_port = 11211<br>> # Memcached port number. Mandatory if memqcache_method = 'memcached'.<br>> # Defaults to 11211.<br>> # (change requires restart)<br>>
memqcache_total_size = 67108864<br>> # Total memory size in bytes for storing memory cache.<br>> # Mandatory if memqcache_method = 'shmem'.<br>> # Defaults to 64MB.<br>> # (change requires restart)<br>> memqcache_max_num_cache = 1000000<br>>
# Total number of cache entries. Mandatory<br>> # if memqcache_method = 'shmem'.<br>> # Each cache entry consumes 48 bytes on shared memory.<br>> # Defaults to 1,000,000(45.8MB).<br>>
# (change requires restart)<br>> memqcache_expire = 0<br>> # Memory cache entry life time specified in seconds.<br>> # 0 means infinite life time. 0 by default.<br>> # (change requires restart)<br>>
memqcache_auto_cache_invalidation = on<br>> # If on, invalidation of query cache is triggered by corresponding<br>> # DDL/DML/DCL(and memqcache_expire). If off, it is only triggered<br>> # by memqcache_expire. on by default.<br>> # (change
requires restart)<br>> memqcache_maxcache = 409600<br>> # Maximum SELECT result size in bytes.<br>> # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.<br>> # (change requires restart)<br>> memqcache_cache_block_size = 1048576<br>>
# Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.<br>> # Defaults to 1MB.<br>> # (change requires restart)<br>> memqcache_oiddir = '/var/log/pgpool/oiddir'<br>> # Temporary work directory to record table oids<br>>
# (change requires restart)<br>> white_memqcache_table_list = ''<br>> # Comma separated list of table names to memcache<br>> # that don't write to database<br>>
# Regexp are accepted<br>> black_memqcache_table_list = ''<br>> # Comma separated list of table names not to memcache<br>> # that don't write to database<br>>
# Regexp are accepted<br>> <br>> <br>> <br>> <br>> Also, when I use load balance mode, the load balancing is done per connection and not per query, right?<br>> I mean that if I have one script that opens one connection and performs 1000 SELECTs, all of those SELECTs will be performed on one node.<br>> <br>> <br>> <br>> <br>> Regards,<br>> Adrian Videanu<br>> <br>> <br>> ________________________________<br>> From: Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br>> To: Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> <br>> Cc: <a
ymailto="mailto:pgpool-general@pgpool.net" href="mailto:pgpool-general@pgpool.net">pgpool-general@pgpool.net</a> <br>> Sent: Thursday, November 7, 2013 12:44 PM<br>> Subject: Re: [pgpool-general: 2237] pgpool 3.3.1 replicate setup with 3 nodes<br>> <br>> <br>> Hi,<br>> <br>> Are you setting recovery_2nd_stage_command properly?<br>> (<a href="http://www.pgpool.net/docs/latest/pgpool-en.html#online-recovery" target="_blank">http://www.pgpool.net/docs/latest/pgpool-en.html#online-recovery</a><br>> <a href="http://www.pgpool.net/docs/latest/pgpool-en.html#RECOVERY_2ND_STAGE_COMMAND" target="_blank">http://www.pgpool.net/docs/latest/pgpool-en.html#RECOVERY_2ND_STAGE_COMMAND</a>)<br>> <br>> If this is wrong, recovery might fail and the backends might become inconsistent.<br>> For analysis, could you please provide your pgpool.conf and all scripts? <br>> <br>> <br>> On Wed, 6 Nov 2013 06:08:05 -0800 (PST)<br>>
Videanu Adrian <<a ymailto="mailto:videanuadrian@yahoo.com" href="mailto:videanuadrian@yahoo.com">videanuadrian@yahoo.com</a>> wrote:<br>> <br>> > Hi all,<br>> > <br>> > I have a pgpool 3.3.1 setup with 3 postgresql 9.2 backends.<br>> > The pgpool is configured as: [ mode ]<br>> > Replication Mode / Load Balance Mode / Watchdog On<br>> > <br>> > I have a problem when I use online recovery for one of the nodes.<br>> > <br>> > Scenario: <br>> > 1. The 3rd node failed.<br>> > 2. Pgpool detects the failure and presents the node as down. (everything is ok)<br>> > 3. I press recovery from pgpadmin and the recovery script starts.<br>> > 4. The 3rd node is back online, but the sequences on this node and the 1st node are incremented by 1 relative to the 2nd node.<br>> > Because of this nothing works, because I get a kind mismatch between the nodes.<br>> > <br>> > Does the replication setup work for more than 2 nodes, and if yes, how should I perform online recovery?<br>> > <br>> > PS. I use PITR.<br>> > <br>> > <br>> > <br>> > Regards,<br>> > Adrian Videanu<br>> <br>> <br>> -- <br>> Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br><br><br>-- <br>Yugo Nagata <<a ymailto="mailto:nagata@sraoss.co.jp" href="mailto:nagata@sraoss.co.jp">nagata@sraoss.co.jp</a>><br><br></div> </div> </div> </div></body></html>