<div dir="ltr">Hi Tatsuo,<div><br><div>I have been trying to reproduce this compilation issue but have been unable to do so. I compiled the patched code with GCC 4.6.3 and also with </div><div>LLVM 5.0, and in both cases the code compiles successfully. </div>
<div>Could you please guide me on reproducing the issue at my end, since the Oid definition that conflicts at your end is not changed by the patch?</div><div><br></div><div><div>Here is the gcc log of the compilation:</div>
</div><div><br></div><div><div><span class="" style="white-space:pre">        </span>gcc -DHAVE_CONFIG_H -DDEFAULT_CONFIGDIR=\"/home/usama/EDB/pgpool/elog_work/mypgpool/test/comanddel/installed/etc\" -I. -I../src/include -D_GNU_SOURCE -I /home/usama/EDB/PG/installed/include -O0 -g3 -Wall -Wmissing-prototypes -Wmissing-declarations -MT main/main.o -MD -MP -MF $depbase.Tpo -c -o main/main.o main/main.c &&\</div>
<div><span class="" style="white-space:pre">        </span>mv -f $depbase.Tpo $depbase.Po</div><div>main/main.c: In function ‘daemonize’:</div><div>main/main.c:384:17: warning: variable ‘rc_chdir’ set but not used [-Wunused-but-set-variable]</div>
<div>depbase=`echo main/pgpool_main.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\</div><div><span class="" style="white-space:pre">        </span>gcc -DHAVE_CONFIG_H -DDEFAULT_CONFIGDIR=\"/home/usama/EDB/pgpool/elog_work/mypgpool/test/comanddel/installed/etc\" -I. -I../src/include -D_GNU_SOURCE -I /home/usama/EDB/PG/installed/include -O0 -g3 -Wall -Wmissing-prototypes -Wmissing-declarations -MT main/pgpool_main.o -MD -MP -MF $depbase.Tpo -c -o main/pgpool_main.o main/pgpool_main.c &&\</div>
<div><span class="" style="white-space:pre">        </span>mv -f $depbase.Tpo $depbase.Po</div><div>main/pgpool_main.c: In function ‘find_primary_node’:</div><div>main/pgpool_main.c:1981:14: warning: variable ‘status’ set but not used [-Wunused-but-set-variable]</div>
</div><div>..</div><div><br></div><div>pgpool2$ gcc --version<br></div><div><div>gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3</div><div>Copyright (C) 2011 Free Software Foundation, Inc.</div><div><br></div><div>And on OS X the compilation is also successful. </div>
<div><div>gcc --version</div><div>Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1</div><div>Apple LLVM version 5.0 (clang-500.2.76) (based on LLVM 3.3svn)</div>
<div>Target: x86_64-apple-darwin12.5.0</div></div><div><br></div><div>Thanks</div><div>Usama</div><div><br></div></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sat, Oct 12, 2013 at 5:41 AM, Tatsuo Ishii <span dir="ltr"><<a href="mailto:ishii@postgresql.org" target="_blank">ishii@postgresql.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I have freshly checkout git master and applied your patch.<br>
I got this:<br>
<br>
gcc -DHAVE_CONFIG_H -DDEFAULT_CONFIGDIR=\"/home/t-ishii/work/<a href="http://git.postgresql.org/exception/pgpool2/src/test/regression/temp/installed/etc\" target="_blank">git.postgresql.org/exception/pgpool2/src/test/regression/temp/installed/etc\</a>" -I. -I../src/include -D_GNU_SOURCE -I /usr/local/pgsql/include -g -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -MT main/pgpool_main.o -MD -MP -MF $depbase.Tpo -c -o main/pgpool_main.o main/pgpool_main.c &&\<br>
mv -f $depbase.Tpo $depbase.Po<br>
In file included from /usr/local/pgsql/include/libpq-fe.h:29,<br>
from ../src/include/pool.h:34,<br>
from main/pgpool_main.c:51:<br>
/usr/local/pgsql/include/postgres_ext.h:31: error: redefinition of typedef ‘PoolOid’<br>
../src/include/parser/pool_parser.h:43: note: previous declaration of ‘PoolOid’ was here<br>
<br>
This is caused by the conflict in postgres_ext.h:<br>
typedef unsigned int Oid;<br>
<br>
and<br>
<br>
./include/parser/pool_parser.h:typedef unsigned int PoolOid;<br>
./include/parser/pool_parser.h:#define Oid PoolOid<br>
<br>
This is gcc 4.4.5.<br>
<div class="HOEnZb"><div class="h5">--<br>
Tatsuo Ishii<br>
SRA OSS, Inc. Japan<br>
English: <a href="http://www.sraoss.co.jp/index_en.php" target="_blank">http://www.sraoss.co.jp/index_en.php</a><br>
Japanese: <a href="http://www.sraoss.co.jp" target="_blank">http://www.sraoss.co.jp</a><br>
<br>
> Hi<br>
><br>
> It seems the repository was changed after I generated the patch.<br>
> Please find the updated patch, rebased against the current state of git.<br>
><br>
><br>
> Thanks<br>
> Usama<br>
><br>
><br>
><br>
> On Fri, Oct 11, 2013 at 9:47 AM, Ahsan Hadi <<a href="mailto:ahsan.hadi@enterprisedb.com">ahsan.hadi@enterprisedb.com</a>>wrote:<br>
><br>
>> Usama,<br>
>> Please take a look.<br>
>><br>
>> Tatsuo,<br>
>> I was able to apply this patch cleanly on pgpool master and test it<br>
>> before Usama sent the patch. It is likely that we have made some changes<br>
>> to the master branch since then.<br>
>><br>
>><br>
>> On Fri, Oct 11, 2013 at 4:03 AM, Tatsuo Ishii <<a href="mailto:ishii@postgresql.org">ishii@postgresql.org</a>>wrote:<br>
>><br>
>>> Usama,<br>
>>><br>
>>> I have applied your patch and got the following error:<br>
>>><br>
>>> Hunk #12 FAILED at 499.<br>
>>> 1 out of 12 hunks FAILED -- saving rejects to file src/main/main.c.rej<br>
>>><br>
>>> I also attached main.c.rej.<br>
>>> --<br>
>>> Tatsuo Ishii<br>
>>> SRA OSS, Inc. Japan<br>
>>> English: <a href="http://www.sraoss.co.jp/index_en.php" target="_blank">http://www.sraoss.co.jp/index_en.php</a><br>
>>> Japanese: <a href="http://www.sraoss.co.jp" target="_blank">http://www.sraoss.co.jp</a><br>
>>><br>
>>> > Muhammad,<br>
>>> ><br>
>>> > Thank you for your great work! I'll look into this.<br>
>>> > --<br>
>>> > Tatsuo Ishii<br>
>>> > SRA OSS, Inc. Japan<br>
>>> > English: <a href="http://www.sraoss.co.jp/index_en.php" target="_blank">http://www.sraoss.co.jp/index_en.php</a><br>
>>> > Japanese: <a href="http://www.sraoss.co.jp" target="_blank">http://www.sraoss.co.jp</a><br>
>>> ><br>
>>> >> Hi<br>
>>> >><br>
>>> >> I am working on adding an exception manager in pgpool, and my plan of<br>
>>> >> action for this is to use the Postgres exception manager API (elog and<br>
>>> >> friends). Since the exception manager in Postgres uses a long jump,<br>
>>> >> importing this API into pgpool will affect all existing pgpool code<br>
>>> >> flows, especially in the case of an error, and a lot of care will be<br>
>>> >> required for this integration. Secondly, the elog API along with its<br>
>>> >> friends will touch almost all parts of the pgpool source code, which<br>
>>> >> would add up to a very large patch.<br>
>>> >> So instead of throwing one very large patch at the community, my plan<br>
>>> >> is to divide this task into multiple smaller sub-tasks, so that the<br>
>>> >> patches are easier to maintain and review.<br>
>>> >><br>
>>> >> To cut to the chase, attached is the first of a series of related<br>
>>> >> patches to come. This is the first-cut patch for implementing the<br>
>>> >> exception manager in pgpool. As described above, the exception manager<br>
>>> >> and related code is borrowed from the PostgreSQL source code. The<br>
>>> >> exception manager (elog API) is very closely tied to the memory<br>
>>> >> manager in PostgreSQL (the palloc API), so the patch also borrows<br>
>>> >> PG's memory manager.<br>
>>> >><br>
>>> >> Below is a brief description of the things included in this patch.<br>
>>> >><br>
>>> >> -- The exception manager API of Postgres is added to pgpool. The API<br>
>>> >> consists of the elog.c and elog.h files. Since this API is very<br>
>>> >> extensive and is designed for PostgreSQL, I have modified it a little<br>
>>> >> to fit it properly into pgpool; most of the modifications are removals<br>
>>> >> of code that is not required for pgpool.<br>
>>> >><br>
>>> >> -- Added the on_proc_exit callback mechanism of Postgres, to<br>
>>> >> facilitate cleanup at exit time.<br>
>>> >><br>
>>> >> -- Added PostgreSQL's memory manager (the palloc API). This includes<br>
>>> >> the client-side palloc functions placed in the 'src/tools' directory<br>
>>> >> (fe_memutils).<br>
>>> >><br>
>>> >> -- Removed the existing memory manager, which was very minimalistic<br>
>>> >> and was not integrated into all parts of the code.<br>
>>> >><br>
>>> >> -- I have also tried to refactor some portions of the code to make it<br>
>>> >> more readable at first glance. This includes:<br>
>>> >><br>
>>> >> - dividing the main.c file into two files, main.c and pgpool_main.c.<br>
>>> >> Now main.c contains only the code related to early initialisation of<br>
>>> >> pgpool and parsing of the command-line options; the actual logic of<br>
>>> >> the pgpool main process is moved to the new pgpool_main.c file.<br>
>>> >> - breaking up some large functions in child.c into smaller functions.<br>
>>> >> - rewriting pgpool's main loop logic to make the code more readable.<br>
>>> >><br>
>>> >><br>
>>> >> Remaining TODOs on this front.<br>
>>> >><br>
>>> >> -- The current patch only integrates the memory and exception managers<br>
>>> >> in the main process and the connection-creation segment of the pgpool<br>
>>> >> child process. Integration of the newly added APIs into the pcp and<br>
>>> >> worker child process code will be done in the next patch.<br>
>>> >><br>
>>> >> -- Integration of the newly added API into the query-processor logic<br>
>>> >> in the child process (this will be the toughest part).<br>
>>> >><br>
>>> >> -- The elog.c and elog.h files need some cleanup and changes (to<br>
>>> >> remove unwanted functions and data members of the ErrorData<br>
>>> >> structure), but this will be done at the end, when we are fully sure<br>
>>> >> whether each piece in there is required or not.<br>
>>> >><br>
>>> >><br>
>>> >> Thanks<br>
>>> >> Muhammad Usama<br>
>>> > _______________________________________________<br>
>>> > pgpool-hackers mailing list<br>
>>> > <a href="mailto:pgpool-hackers@pgpool.net">pgpool-hackers@pgpool.net</a><br>
>>> > <a href="http://www.pgpool.net/mailman/listinfo/pgpool-hackers" target="_blank">http://www.pgpool.net/mailman/listinfo/pgpool-hackers</a><br>
>>><br>
>>> --- src/main/main.c<br>
>>> +++ src/main/main.c<br>
>>> @@ -499,1915 +-23,60 @@<br>
>>> fd = open(pool_config->pid_file_name, O_CREAT|O_WRONLY,<br>
>>> S_IRUSR|S_IWUSR);<br>
>>> if (fd == -1)<br>
>>> {<br>
>>> - pool_error("could not open pid file as %s. reason: %s",<br>
>>> - pool_config->pid_file_name,<br>
>>> strerror(errno));<br>
>>> - pool_shmem_exit(1);<br>
>>> - exit(1);<br>
>>> + ereport(FATAL,<br>
>>> + (errmsg("could not open pid file as %s. reason:<br>
>>> %s",<br>
>>> + pool_config->pid_file_name,<br>
>>> strerror(errno))));<br>
>>> }<br>
>>> snprintf(pidbuf, sizeof(pidbuf), "%d", (int)getpid());<br>
>>> if (write(fd, pidbuf, strlen(pidbuf)+1) == -1)<br>
>>> {<br>
>>> - pool_error("could not write pid file as %s. reason: %s",<br>
>>> - pool_config->pid_file_name,<br>
>>> strerror(errno));<br>
>>> close(fd);<br>
>>> - pool_shmem_exit(1);<br>
>>> - exit(1);<br>
>>> + ereport(FATAL,<br>
>>> + (errmsg("could not write pid file as %s. reason:<br>
>>> %s",<br>
>>> + pool_config->pid_file_name,<br>
>>> strerror(errno))));<br>
>>> }<br>
>>> if (fsync(fd) == -1)<br>
>>> {<br>
>>> - pool_error("could not fsync pid file as %s. reason: %s",<br>
>>> - pool_config->pid_file_name,<br>
>>> strerror(errno));<br>
>>> close(fd);<br>
>>> - pool_shmem_exit(1);<br>
>>> - exit(1);<br>
>>> + ereport(FATAL,<br>
>>> + (errmsg("could not fsync pid file as %s. reason:<br>
>>> %s",<br>
>>> + pool_config->pid_file_name,<br>
>>> strerror(errno))));<br>
>>> }<br>
>>> if (close(fd) == -1)<br>
>>> {<br>
>>> - pool_error("could not close pid file as %s. reason: %s",<br>
>>> - pool_config->pid_file_name,<br>
>>> strerror(errno));<br>
>>> - pool_shmem_exit(1);<br>
>>> - exit(1);<br>
>>> + ereport(FATAL,<br>
>>> + (errmsg("could not close pid file as %s. reason:<br>
>>> %s",<br>
>>> + pool_config->pid_file_name,<br>
>>> strerror(errno))));<br>
>>> }<br>
>>> + /* register the call back to delete the pid file at system exit */<br>
>>> + on_proc_exit(FileUnlink, (Datum) pool_config->pid_file_name);<br>
>>> }<br>
>>><br>
>>> /*<br>
>>> -* Read the status file<br>
>>> -*/<br>
>>> -static int read_status_file(bool discard_status)<br>
>>> + * get_config_file_name: return full path of pgpool.conf.<br>
>>> + */<br>
>>> +char *get_config_file_name(void)<br>
>>> {<br>
>>> - FILE *fd;<br>
>>> - char fnamebuf[POOLMAXPATHLEN];<br>
>>> - int i;<br>
>>> - bool someone_wakeup = false;<br>
>>> -<br>
>>> - snprintf(fnamebuf, sizeof(fnamebuf), "%s/%s",<br>
>>> pool_config->logdir, STATUS_FILE_NAME);<br>
>>> - fd = fopen(fnamebuf, "r");<br>
>>> - if (!fd)<br>
>>> - {<br>
>>> - pool_log("Backend status file %s does not exist",<br>
>>> fnamebuf);<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * If discard_status is true, unlink pgpool_status and<br>
>>> - * do not restore previous status.<br>
>>> - */<br>
>>> - if (discard_status)<br>
>>> - {<br>
>>> - fclose(fd);<br>
>>> - if (unlink(fnamebuf) == 0)<br>
>>> - {<br>
>>> - pool_log("Backend status file %s discarded",<br>
>>> fnamebuf);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_error("Failed to discard backend status file<br>
>>> %s reason:%s", fnamebuf, strerror(errno));<br>
>>> - }<br>
>>> - return 0;<br>
>>> - }<br>
>>> -<br>
>>> - if (fread(&backend_rec, 1, sizeof(backend_rec), fd) !=<br>
>>> sizeof(backend_rec))<br>
>>> - {<br>
>>> - pool_error("Could not read backend status file as %s.<br>
>>> reason: %s",<br>
>>> - fnamebuf, strerror(errno));<br>
>>> - fclose(fd);<br>
>>> - return -1;<br>
>>> - }<br>
>>> - fclose(fd);<br>
>>> -<br>
>>> - for (i=0;i< pool_config->backend_desc->num_backends;i++)<br>
>>> - {<br>
>>> - if (backend_rec.status[i] == CON_DOWN)<br>
>>> - {<br>
>>> - BACKEND_INFO(i).backend_status = CON_DOWN;<br>
>>> - pool_log("read_status_file: %d th backend is set<br>
>>> to down status", i);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - BACKEND_INFO(i).backend_status = CON_CONNECT_WAIT;<br>
>>> - someone_wakeup = true;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * If no one woke up, we regard the status file bogus<br>
>>> - */<br>
>>> - if (someone_wakeup == false)<br>
>>> - {<br>
>>> - for (i=0;i< pool_config->backend_desc->num_backends;i++)<br>
>>> - {<br>
>>> - BACKEND_INFO(i).backend_status = CON_CONNECT_WAIT;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - return 0;<br>
>>> + return conf_file;<br>
>>> }<br>
>>><br>
>>> /*<br>
>>> -* Write the pid file<br>
>>> -*/<br>
>>> -static int write_status_file(void)<br>
>>> -{<br>
>>> - FILE *fd;<br>
>>> - char fnamebuf[POOLMAXPATHLEN];<br>
>>> - int i;<br>
>>> -<br>
>>> - snprintf(fnamebuf, sizeof(fnamebuf), "%s/%s",<br>
>>> pool_config->logdir, STATUS_FILE_NAME);<br>
>>> - fd = fopen(fnamebuf, "w");<br>
>>> - if (!fd)<br>
>>> - {<br>
>>> - pool_error("Could not open status file %s", fnamebuf);<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - memset(&backend_rec, 0, sizeof(backend_rec));<br>
>>> -<br>
>>> - for (i=0;i< pool_config->backend_desc->num_backends;i++)<br>
>>> - {<br>
>>> - backend_rec.status[i] = BACKEND_INFO(i).backend_status;<br>
>>> - }<br>
>>> -<br>
>>> - if (fwrite(&backend_rec, 1, sizeof(backend_rec), fd) !=<br>
>>> sizeof(backend_rec))<br>
>>> - {<br>
>>> - pool_error("Could not write backend status file as %s.<br>
>>> reason: %s",<br>
>>> - fnamebuf, strerror(errno));<br>
>>> - fclose(fd);<br>
>>> - return -1;<br>
>>> - }<br>
>>> - fclose(fd);<br>
>>> - return 0;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * fork a child for PCP<br>
>>> - */<br>
>>> -pid_t pcp_fork_a_child(int unix_fd, int inet_fd, char *pcp_conf_file)<br>
>>> -{<br>
>>> - pid_t pid;<br>
>>> -<br>
>>> - pid = fork();<br>
>>> -<br>
>>> - if (pid == 0)<br>
>>> - {<br>
>>> - close(pipe_fds[0]);<br>
>>> - close(pipe_fds[1]);<br>
>>> -<br>
>>> - myargv = save_ps_display_args(myargc, myargv);<br>
>>> -<br>
>>> - /* call PCP child main */<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - health_check_timer_expired = 0;<br>
>>> - reload_config_request = 0;<br>
>>> - run_as_pcp_child = true;<br>
>>> - pcp_do_child(unix_fd, inet_fd, pcp_conf_file);<br>
>>> - }<br>
>>> - else if (pid == -1)<br>
>>> - {<br>
>>> - pool_error("fork() failed. reason: %s", strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - return pid;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> -* fork a child<br>
>>> -*/<br>
>>> -pid_t fork_a_child(int unix_fd, int inet_fd, int id)<br>
>>> -{<br>
>>> - pid_t pid;<br>
>>> -<br>
>>> - pid = fork();<br>
>>> -<br>
>>> - if (pid == 0)<br>
>>> - {<br>
>>> - /* Before we unconditionally closed pipe_fds[0] and<br>
>>> pipe_fds[1]<br>
>>> - * here, which is apparently wrong since in the start up<br>
>>> of<br>
>>> - * pgpool, pipe(2) is not called yet and it mistakenly<br>
>>> closes<br>
>>> - * fd 0. Now we check the fd > 0 before close(), expecting<br>
>>> - * pipe returns fds greater than 0. Note that we cannot<br>
>>> - * unconditionally remove close(2) calls since<br>
>>> fork_a_child()<br>
>>> - * may be called *after* pgpool starting up.<br>
>>> - */<br>
>>> - if (pipe_fds[0] > 0)<br>
>>> - {<br>
>>> - close(pipe_fds[0]);<br>
>>> - close(pipe_fds[1]);<br>
>>> - }<br>
>>> -<br>
>>> - myargv = save_ps_display_args(myargc, myargv);<br>
>>> -<br>
>>> - /* call child main */<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - health_check_timer_expired = 0;<br>
>>> - reload_config_request = 0;<br>
>>> - my_proc_id = id;<br>
>>> - run_as_pcp_child = false;<br>
>>> - do_child(unix_fd, inet_fd);<br>
>>> - }<br>
>>> - else if (pid == -1)<br>
>>> - {<br>
>>> - pool_error("fork() failed. reason: %s", strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - return pid;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> -* fork worker child process<br>
>>> -*/<br>
>>> -pid_t worker_fork_a_child()<br>
>>> -{<br>
>>> - pid_t pid;<br>
>>> -<br>
>>> - pid = fork();<br>
>>> -<br>
>>> - if (pid == 0)<br>
>>> - {<br>
>>> - /* Before we unconditionally closed pipe_fds[0] and<br>
>>> pipe_fds[1]<br>
>>> - * here, which is apparently wrong since in the start up<br>
>>> of<br>
>>> - * pgpool, pipe(2) is not called yet and it mistakenly<br>
>>> closes<br>
>>> - * fd 0. Now we check the fd > 0 before close(), expecting<br>
>>> - * pipe returns fds greater than 0. Note that we cannot<br>
>>> - * unconditionally remove close(2) calls since<br>
>>> fork_a_child()<br>
>>> - * may be called *after* pgpool starting up.<br>
>>> - */<br>
>>> - if (pipe_fds[0] > 0)<br>
>>> - {<br>
>>> - close(pipe_fds[0]);<br>
>>> - close(pipe_fds[1]);<br>
>>> - }<br>
>>> -<br>
>>> - myargv = save_ps_display_args(myargc, myargv);<br>
>>> -<br>
>>> - /* call child main */<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - health_check_timer_expired = 0;<br>
>>> - reload_config_request = 0;<br>
>>> - do_worker_child();<br>
>>> - }<br>
>>> - else if (pid == -1)<br>
>>> - {<br>
>>> - pool_error("fork() failed. reason: %s", strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - return pid;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> -* create inet domain socket<br>
>>> -*/<br>
>>> -static int create_inet_domain_socket(const char *hostname, const int<br>
>>> port)<br>
>>> -{<br>
>>> - struct sockaddr_in addr;<br>
>>> - int fd;<br>
>>> - int status;<br>
>>> - int one = 1;<br>
>>> - int len;<br>
>>> - int backlog;<br>
>>> -<br>
>>> - fd = socket(AF_INET, SOCK_STREAM, 0);<br>
>>> - if (fd == -1)<br>
>>> - {<br>
>>> - pool_error("Failed to create INET domain socket. reason:<br>
>>> %s", strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - if ((setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (char *) &one,<br>
>>> - sizeof(one))) == -1)<br>
>>> - {<br>
>>> - pool_error("setsockopt() failed. reason: %s",<br>
>>> strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> -<br>
>>> - memset((char *) &addr, 0, sizeof(addr));<br>
>>> - addr.sin_family = AF_INET;<br>
>>> -<br>
>>> - if (strcmp(hostname, "*")==0)<br>
>>> - {<br>
>>> - addr.sin_addr.s_addr = htonl(INADDR_ANY);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - struct hostent *hostinfo;<br>
>>> -<br>
>>> - hostinfo = gethostbyname(hostname);<br>
>>> - if (!hostinfo)<br>
>>> - {<br>
>>> - pool_error("could not resolve host name \"%s\":<br>
>>> %s", hostname, hstrerror(h_errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - addr.sin_addr = *(struct in_addr *) hostinfo->h_addr;<br>
>>> - }<br>
>>> -<br>
>>> - addr.sin_port = htons(port);<br>
>>> - len = sizeof(struct sockaddr_in);<br>
>>> - status = bind(fd, (struct sockaddr *)&addr, len);<br>
>>> - if (status == -1)<br>
>>> - {<br>
>>> - char *host = "", *serv = "";<br>
>>> - char hostname[NI_MAXHOST], servname[NI_MAXSERV];<br>
>>> - if (getnameinfo((struct sockaddr *) &addr, len, hostname,<br>
>>> sizeof(hostname), servname, sizeof(servname), 0) == 0) {<br>
>>> - host = hostname;<br>
>>> - serv = servname;<br>
>>> - }<br>
>>> - pool_error("bind(%s:%s) failed. reason: %s", host, serv,<br>
>>> strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> -<br>
>>> - backlog = pool_config->num_init_children * 2;<br>
>>> - if (backlog > PGPOOLMAXLITSENQUEUELENGTH)<br>
>>> - backlog = PGPOOLMAXLITSENQUEUELENGTH;<br>
>>> -<br>
>>> - status = listen(fd, backlog);<br>
>>> - if (status < 0)<br>
>>> - {<br>
>>> - pool_error("listen() failed. reason: %s",<br>
>>> strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - return fd;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> -* create UNIX domain socket<br>
>>> -*/<br>
>>> -static int create_unix_domain_socket(struct sockaddr_un un_addr_tmp)<br>
>>> -{<br>
>>> - struct sockaddr_un addr;<br>
>>> - int fd;<br>
>>> - int status;<br>
>>> - int len;<br>
>>> -<br>
>>> - fd = socket(AF_UNIX, SOCK_STREAM, 0);<br>
>>> - if (fd == -1)<br>
>>> - {<br>
>>> - pool_error("Failed to create UNIX domain socket. reason:<br>
>>> %s", strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - memset((char *) &addr, 0, sizeof(addr));<br>
>>> - addr.sun_family = AF_UNIX;<br>
>>> - snprintf(addr.sun_path, sizeof(addr.sun_path), "%s",<br>
>>> un_addr_tmp.sun_path);<br>
>>> - len = sizeof(struct sockaddr_un);<br>
>>> - status = bind(fd, (struct sockaddr *)&addr, len);<br>
>>> - if (status == -1)<br>
>>> - {<br>
>>> - pool_error("bind(%s) failed. reason: %s", addr.sun_path,<br>
>>> strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> -<br>
>>> - if (chmod(un_addr_tmp.sun_path, 0777) == -1)<br>
>>> - {<br>
>>> - pool_error("chmod() failed. reason: %s", strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> -<br>
>>> - status = listen(fd, PGPOOLMAXLITSENQUEUELENGTH);<br>
>>> - if (status < 0)<br>
>>> - {<br>
>>> - pool_error("listen() failed. reason: %s",<br>
>>> strerror(errno));<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - return fd;<br>
>>> -}<br>
>>> -<br>
>>> -static void myunlink(const char* path)<br>
>>> -{<br>
>>> - if (unlink(path) == 0) return;<br>
>>> - pool_error("unlink(%s) failed: %s", path, strerror(errno));<br>
>>> -}<br>
>>> -<br>
>>> -static void myexit(int code)<br>
>>> -{<br>
>>> - int i;<br>
>>> -<br>
>>> - if (getpid() != mypid)<br>
>>> - return;<br>
>>> -<br>
>>> - if (process_info != NULL) {<br>
>>> - POOL_SETMASK(&AuthBlockSig);<br>
>>> - exiting = 1;<br>
>>> - for (i = 0; i < pool_config->num_init_children; i++)<br>
>>> - {<br>
>>> - pid_t pid = process_info[i].pid;<br>
>>> - if (pid)<br>
>>> - {<br>
>>> - kill(pid, SIGTERM);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - /* wait for all children to exit */<br>
>>> - while (wait(NULL) > 0)<br>
>>> - ;<br>
>>> - if (errno != ECHILD)<br>
>>> - pool_error("wait() failed. reason:%s",<br>
>>> strerror(errno));<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - }<br>
>>> -<br>
>>> - myunlink(un_addr.sun_path);<br>
>>> - myunlink(pcp_un_addr.sun_path);<br>
>>> - myunlink(pool_config->pid_file_name);<br>
>>> -<br>
>>> - write_status_file();<br>
>>> -<br>
>>> - pool_shmem_exit(code);<br>
>>> - exit(code);<br>
>>> -}<br>
>>> -<br>
>>> -void notice_backend_error(int node_id)<br>
>>> -{<br>
>>> - int n = node_id;<br>
>>> -<br>
>>> - if (getpid() == mypid)<br>
>>> - {<br>
>>> - pool_log("notice_backend_error: called from pgpool main.<br>
>>> ignored.");<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - degenerate_backend_set(&n, 1);<br>
>>> - }<br>
>>> -}<br>
>>> -<br>
>>> -/* notice backend connection error using SIGUSR1 */<br>
>>> -void degenerate_backend_set(int *node_id_set, int count)<br>
>>> -{<br>
>>> - pid_t parent = getppid();<br>
>>> - int i;<br>
>>> - bool need_signal = false;<br>
>>> -#ifdef HAVE_SIGPROCMASK<br>
>>> - sigset_t oldmask;<br>
>>> -#else<br>
>>> - int oldmask;<br>
>>> -#endif<br>
>>> -<br>
>>> - if (pool_config->parallel_mode)<br>
>>> - {<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - POOL_SETMASK2(&BlockSig, &oldmask);<br>
>>> - pool_semaphore_lock(REQUEST_INFO_SEM);<br>
>>> - Req_info->kind = NODE_DOWN_REQUEST;<br>
>>> - for (i = 0; i < count; i++)<br>
>>> - {<br>
>>> - if (node_id_set[i] < 0 || node_id_set[i] >=<br>
>>> MAX_NUM_BACKENDS ||<br>
>>> - !VALID_BACKEND(node_id_set[i]))<br>
>>> - {<br>
>>> - pool_log("degenerate_backend_set: node %d is not<br>
>>> valid backend.", i);<br>
>>> - continue;<br>
>>> - }<br>
>>> -<br>
>>> - if<br>
>>> (POOL_DISALLOW_TO_FAILOVER(BACKEND_INFO(node_id_set[i]).flag))<br>
>>> - {<br>
>>> - pool_log("degenerate_backend_set: %d failover<br>
>>> request from pid %d is canceled because failover is disallowed",<br>
>>> node_id_set[i], getpid());<br>
>>> - continue;<br>
>>> - }<br>
>>> -<br>
>>> - pool_log("degenerate_backend_set: %d fail over request<br>
>>> from pid %d", node_id_set[i], getpid());<br>
>>> - Req_info->node_id[i] = node_id_set[i];<br>
>>> - need_signal = true;<br>
>>> - }<br>
>>> -<br>
>>> - if (need_signal)<br>
>>> - {<br>
>>> - if (!pool_config->use_watchdog || WD_OK ==<br>
>>> wd_degenerate_backend_set(node_id_set, count))<br>
>>> - {<br>
>>> - kill(parent, SIGUSR1);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_log("degenerate_backend_set: failover<br>
>>> request from pid %d is canceled by other pgpool", getpid());<br>
>>> - memset(Req_info->node_id, -1, sizeof(int) *<br>
>>> MAX_NUM_BACKENDS);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> - POOL_SETMASK(&oldmask);<br>
>>> -}<br>
>>> -<br>
>>> -/* send promote node request using SIGUSR1 */<br>
>>> -void promote_backend(int node_id)<br>
>>> -{<br>
>>> - pid_t parent = getppid();<br>
>>> -<br>
>>> - if (!MASTER_SLAVE || strcmp(pool_config->master_slave_sub_mode,<br>
>>> MODE_STREAMREP))<br>
>>> - {<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - if (node_id < 0 || node_id >= MAX_NUM_BACKENDS ||<br>
>>> !VALID_BACKEND(node_id))<br>
>>> - {<br>
>>> - pool_error("promote_backend: node %d is not valid<br>
>>> backend.", node_id);<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - pool_semaphore_lock(REQUEST_INFO_SEM);<br>
>>> - Req_info->kind = PROMOTE_NODE_REQUEST;<br>
>>> - Req_info->node_id[0] = node_id;<br>
>>> - pool_log("promote_backend: %d promote node request from pid %d",<br>
>>> node_id, getpid());<br>
>>> -<br>
>>> - if (!pool_config->use_watchdog || WD_OK ==<br>
>>> wd_promote_backend(node_id))<br>
>>> - {<br>
>>> - kill(parent, SIGUSR1);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_log("promote_backend: promote request from pid %d is<br>
>>> canceled by other pgpool", getpid());<br>
>>> - Req_info->node_id[0] = -1;<br>
>>> - }<br>
>>> -<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> -}<br>
>>> -<br>
>>> -/* send failback request using SIGUSR1 */<br>
>>> -void send_failback_request(int node_id)<br>
>>> -{<br>
>>> - pid_t parent = getppid();<br>
>>> -<br>
>>> - pool_log("send_failback_request: fail back %d th node request<br>
>>> from pid %d", node_id, getpid());<br>
>>> - Req_info->kind = NODE_UP_REQUEST;<br>
>>> - Req_info->node_id[0] = node_id;<br>
>>> -<br>
>>> - if (node_id < 0 || node_id >= MAX_NUM_BACKENDS ||<br>
>>> - (RAW_MODE && BACKEND_INFO(node_id).backend_status !=<br>
>>> CON_DOWN && VALID_BACKEND(node_id)))<br>
>>> - {<br>
>>> - pool_error("send_failback_request: node %d is alive.",<br>
>>> node_id);<br>
>>> - Req_info->node_id[0] = -1;<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - if (pool_config->use_watchdog && WD_OK !=<br>
>>> wd_send_failback_request(node_id))<br>
>>> - {<br>
>>> - pool_log("send_failback_request: failback request from<br>
>>> pid %d is canceled by other pgpool", getpid());<br>
>>> - Req_info->node_id[0] = -1;<br>
>>> - return;<br>
>>> - }<br>
>>> - kill(parent, SIGUSR1);<br>
>>> -}<br>
>>> -<br>
>>> -static RETSIGTYPE exit_handler(int sig)<br>
>>> -{<br>
>>> - int i;<br>
>>> -<br>
>>> - POOL_SETMASK(&AuthBlockSig);<br>
>>> -<br>
>>> - /*<br>
>>> - * this could happen in a child process if a signal has been sent<br>
>>> - * before resetting signal handler<br>
>>> - */<br>
>>> - if (getpid() != mypid)<br>
>>> - {<br>
>>> - pool_debug("exit_handler: I am not parent");<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - pool_shmem_exit(0);<br>
>>> - exit(0);<br>
>>> - }<br>
>>> -<br>
>>> - if (sig == SIGTERM)<br>
>>> - pool_log("received smart shutdown request");<br>
>>> - else if (sig == SIGINT)<br>
>>> - pool_log("received fast shutdown request");<br>
>>> - else if (sig == SIGQUIT)<br>
>>> - pool_log("received immediate shutdown request");<br>
>>> - else<br>
>>> - {<br>
>>> - pool_error("exit_handler: unknown signal received %d",<br>
>>> sig);<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - exiting = 1;<br>
>>> -<br>
>>> - for (i = 0; i < pool_config->num_init_children; i++)<br>
>>> - {<br>
>>> - pid_t pid = process_info[i].pid;<br>
>>> - if (pid)<br>
>>> - {<br>
>>> - kill(pid, sig);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - kill(pcp_pid, sig);<br>
>>> - kill(worker_pid, sig);<br>
>>> -<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - {<br>
>>> - wd_kill_watchdog(sig);<br>
>>> - }<br>
>>> -<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> -<br>
>>> - while (wait(NULL) > 0)<br>
>>> - ;<br>
>>> -<br>
>>> - if (errno != ECHILD)<br>
>>> - pool_error("wait() failed. reason:%s", strerror(errno));<br>
>>> -<br>
>>> - process_info = NULL;<br>
>>> - myexit(0);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * Calculate next valid master node id.<br>
>>> - * If no valid node found, returns -1.<br>
>>> - */<br>
>>> -static int get_next_master_node(void)<br>
>>> -{<br>
>>> - int i;<br>
>>> -<br>
>>> - for (i=0;i<pool_config->backend_desc->num_backends;i++)<br>
>>> - {<br>
>>> - /*<br>
>>> - * Do not use VALID_BACKEND macro in raw mode.<br>
>>> - * VALID_BACKEND return true only if the argument is<br>
>>> master<br>
>>> - * node id. In other words, standby nodes are false. So<br>
>>> need<br>
>>> - * to check backend status with VALID_BACKEND_RAW.<br>
>>> - */<br>
>>> - if (RAW_MODE)<br>
>>> - {<br>
>>> - if (VALID_BACKEND_RAW(i))<br>
>>> - break;<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - if (VALID_BACKEND(i))<br>
>>> - break;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - if (i == pool_config->backend_desc->num_backends)<br>
>>> - i = -1;<br>
>>> -<br>
>>> - return i;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * handle SIGUSR1<br>
>>> - *<br>
>>> - */<br>
>>> -static RETSIGTYPE failover_handler(int sig)<br>
>>> -{<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> - failover_request = 1;<br>
>>> - write(pipe_fds[1], "\0", 1);<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * backend connection error, failover/failback request, if possible<br>
>>> - * failover() must be called under protecting signals.<br>
>>> - */<br>
>>> -static void failover(void)<br>
>>> -{<br>
>>> - int i;<br>
>>> - int node_id;<br>
>>> - bool by_health_check;<br>
>>> - int new_master;<br>
>>> - int new_primary;<br>
>>> - int nodes[MAX_NUM_BACKENDS];<br>
>>> - bool need_to_restart_children;<br>
>>> - int status;<br>
>>> - int sts;<br>
>>> -<br>
>>> - pool_debug("failover_handler called");<br>
>>> -<br>
>>> - memset(nodes, 0, sizeof(int) * MAX_NUM_BACKENDS);<br>
>>> -<br>
>>> - /*<br>
>>> - * this could happen in a child process if a signal has been sent<br>
>>> - * before resetting signal handler<br>
>>> - */<br>
>>> - if (getpid() != mypid)<br>
>>> - {<br>
>>> - pool_debug("failover_handler: I am not parent");<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * processing SIGTERM, SIGINT or SIGQUIT<br>
>>> - */<br>
>>> - if (exiting)<br>
>>> - {<br>
>>> - pool_debug("failover_handler called while exiting");<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * processing fail over or switch over<br>
>>> - */<br>
>>> - if (switching)<br>
>>> - {<br>
>>> - pool_debug("failover_handler called while switching");<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - pool_semaphore_lock(REQUEST_INFO_SEM);<br>
>>> -<br>
>>> - if (Req_info->kind == CLOSE_IDLE_REQUEST)<br>
>>> - {<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> - kill_all_children(SIGUSR1);<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * if not in replication mode/master slave mode, we treat this a restart request.<br>
>>> - * otherwise we need to check if we have already failovered.<br>
>>> - */<br>
>>> - pool_debug("failover_handler: starting to select new master node");<br>
>>> - switching = 1;<br>
>>> - Req_info->switching = true;<br>
>>> - node_id = Req_info->node_id[0];<br>
>>> -<br>
>>> - /* start of command inter-lock with watchdog */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - {<br>
>>> - by_health_check = (!failover_request && Req_info->kind==NODE_DOWN_REQUEST);<br>
>>> - wd_start_interlock(by_health_check);<br>
>>> - }<br>
>>> -<br>
>>> - /* failback request? */<br>
>>> - if (Req_info->kind == NODE_UP_REQUEST)<br>
>>> - {<br>
>>> - if (node_id >= MAX_NUM_BACKENDS ||<br>
>>> - (Req_info->kind == NODE_UP_REQUEST && !(RAW_MODE &&<br>
>>> - BACKEND_INFO(node_id).backend_status == CON_DOWN) && VALID_BACKEND(node_id)) ||<br>
>>> - (Req_info->kind == NODE_DOWN_REQUEST && !VALID_BACKEND(node_id)))<br>
>>> - {<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> - pool_error("failover_handler: invalid node_id %d status:%d MAX_NUM_BACKENDS: %d", node_id,<br>
>>> - BACKEND_INFO(node_id).backend_status, MAX_NUM_BACKENDS);<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - switching = 0;<br>
>>> - Req_info->switching = false;<br>
>>> -<br>
>>> - /* end of command inter-lock */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_leave_interlock();<br>
>>> -<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - pool_log("starting fail back. reconnect host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - BACKEND_INFO(node_id).backend_status = CON_CONNECT_WAIT; /* unset down status */<br>
>>> -<br>
>>> - /* wait for failback command lock or to be lock holder */<br>
>>> - if (pool_config->use_watchdog && !wd_am_I_lock_holder())<br>
>>> - {<br>
>>> - wd_wait_for_lock(WD_FAILBACK_COMMAND_LOCK);<br>
>>> - }<br>
>>> - /* execute failback command if lock holder */<br>
>>> - if (!pool_config->use_watchdog || wd_am_I_lock_holder())<br>
>>> - {<br>
>>> - trigger_failover_command(node_id, pool_config->failback_command,<br>
>>> - MASTER_NODE_ID, get_next_master_node(), PRIMARY_NODE_ID);<br>
>>> -<br>
>>> - /* unlock failback command */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_unlock(WD_FAILBACK_COMMAND_LOCK);<br>
>>> - }<br>
>>> - }<br>
>>> - else if (Req_info->kind == PROMOTE_NODE_REQUEST)<br>
>>> - {<br>
>>> - if (node_id != -1 && VALID_BACKEND(node_id))<br>
>>> - {<br>
>>> - pool_log("starting promotion. promote host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_log("failover: no backends are promoted");<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - switching = 0;<br>
>>> - Req_info->switching = false;<br>
>>> -<br>
>>> - /* end of command inter-lock */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_leave_interlock();<br>
>>> -<br>
>>> - return;<br>
>>> - }<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - int cnt = 0;<br>
>>> -<br>
>>> - for (i = 0; i < MAX_NUM_BACKENDS; i++)<br>
>>> - {<br>
>>> - if (Req_info->node_id[i] != -1 &&<br>
>>> - ((RAW_MODE && VALID_BACKEND_RAW(Req_info->node_id[i])) ||<br>
>>> - VALID_BACKEND(Req_info->node_id[i])))<br>
>>> - {<br>
>>> - pool_log("starting degeneration. shutdown host %s(%d)",<br>
>>> - BACKEND_INFO(Req_info->node_id[i]).backend_hostname,<br>
>>> - BACKEND_INFO(Req_info->node_id[i]).backend_port);<br>
>>> -<br>
>>> - BACKEND_INFO(Req_info->node_id[i]).backend_status = CON_DOWN; /* set down status */<br>
>>> - /* save down node */<br>
>>> - nodes[Req_info->node_id[i]] = 1;<br>
>>> - cnt++;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - if (cnt == 0)<br>
>>> - {<br>
>>> - pool_log("failover: no backends are degenerated");<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - switching = 0;<br>
>>> - Req_info->switching = false;<br>
>>> -<br>
>>> - /* end of command inter-lock */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_leave_interlock();<br>
>>> -<br>
>>> - return;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - new_master = get_next_master_node();<br>
>>> -<br>
>>> - if (new_master < 0)<br>
>>> - {<br>
>>> - pool_error("failover_handler: no valid DB node found");<br>
>>> - }<br>
>>> -<br>
>>> -/*<br>
>>> - * Before we tried to minimize restarting pgpool to protect existing<br>
>>> - * connections from clients to pgpool children. What we did here was,<br>
>>> - * if children other than master went down, we did not fail over.<br>
>>> - * This is wrong. Think about following scenario. If someone<br>
>>> - * accidentally plugs out the network cable, the TCP/IP stack keeps<br>
>>> - * retrying for long time (typically 2 hours). The only way to stop<br>
>>> - * the retry is restarting the process. Bottom line is, we need to<br>
>>> - * restart all children in any case. See pgpool-general list posting<br>
>>> - * "TCP connections are *not* closed when a backend timeout" on Jul 13<br>
>>> - * 2008 for more details.<br>
>>> - */<br>
>>> -#ifdef NOT_USED<br>
>>> - else<br>
>>> - {<br>
>>> - if (Req_info->master_node_id == new_master && *InRecovery == RECOVERY_INIT)<br>
>>> - {<br>
>>> - pool_log("failover_handler: do not restart pgpool. same master node %d was selected", new_master);<br>
>>> - if (Req_info->kind == NODE_UP_REQUEST)<br>
>>> - {<br>
>>> - pool_log("failback done. reconnect host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_log("failover done. shutdown host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - }<br>
>>> -<br>
>>> - /* exec failover_command */<br>
>>> - for (i = 0; i < pool_config->backend_desc->num_backends; i++)<br>
>>> - {<br>
>>> - if (nodes[i])<br>
>>> - trigger_failover_command(i, pool_config->failover_command);<br>
>>> - }<br>
>>> -<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> - switching = 0;<br>
>>> - Req_info->switching = false;<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> - switching = 0;<br>
>>> - Req_info->switching = false;<br>
>>> - return;<br>
>>> - }<br>
>>> - }<br>
>>> -#endif<br>
>>> -<br>
>>> -<br>
>>> - /* On 2011/5/2 Tatsuo Ishii says: if mode is streaming replication<br>
>>> - * and request is NODE_UP_REQUEST(failback case) we don't need to<br>
>>> - * restart all children. Existing session will not use newly<br>
>>> - * attached node, but load balanced node is not changed until this<br>
>>> - * session ends, so it's harmless anyway.<br>
>>> - */<br>
>>> - if (MASTER_SLAVE && !strcmp(pool_config->master_slave_sub_mode, MODE_STREAMREP) &&<br>
>>> - Req_info->kind == NODE_UP_REQUEST)<br>
>>> - {<br>
>>> - pool_log("Do not restart children because we are failbacking node id %d host%s port:%d and we are in streaming replication mode", node_id,<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> -<br>
>>> - need_to_restart_children = false;<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_log("Restart all children");<br>
>>> -<br>
>>> - /* kill all children */<br>
>>> - for (i = 0; i < pool_config->num_init_children; i++)<br>
>>> - {<br>
>>> - pid_t pid = process_info[i].pid;<br>
>>> - if (pid)<br>
>>> - {<br>
>>> - kill(pid, SIGQUIT);<br>
>>> - pool_debug("failover_handler: kill %d", pid);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - need_to_restart_children = true;<br>
>>> - }<br>
>>> -<br>
>>> - /* wait for failover command lock or to be lock holder*/<br>
>>> - if (pool_config->use_watchdog && !wd_am_I_lock_holder())<br>
>>> - {<br>
>>> - wd_wait_for_lock(WD_FAILOVER_COMMAND_LOCK);<br>
>>> - }<br>
>>> -<br>
>>> - /* execute failover command if lock holder */<br>
>>> - if (!pool_config->use_watchdog || wd_am_I_lock_holder())<br>
>>> - {<br>
>>> - /* Exec failover_command if needed */<br>
>>> - for (i = 0; i < pool_config->backend_desc->num_backends; i++)<br>
>>> - {<br>
>>> - if (nodes[i])<br>
>>> - trigger_failover_command(i, pool_config->failover_command,<br>
>>> - MASTER_NODE_ID, new_master, PRIMARY_NODE_ID);<br>
>>> - }<br>
>>> -<br>
>>> - /* unlock failover command */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_unlock(WD_FAILOVER_COMMAND_LOCK);<br>
>>> - }<br>
>>> -<br>
>>> -<br>
>>> -/* no need to wait since it will be done in reap_handler */<br>
>>> -#ifdef NOT_USED<br>
>>> - while (wait(NULL) > 0)<br>
>>> - ;<br>
>>> -<br>
>>> - if (errno != ECHILD)<br>
>>> - pool_error("failover_handler: wait() failed. reason:%s", strerror(errno));<br>
>>> -#endif<br>
>>> -<br>
>>> - if (Req_info->kind == PROMOTE_NODE_REQUEST && VALID_BACKEND(node_id))<br>
>>> - new_primary = node_id;<br>
>>> - else<br>
>>> - new_primary = find_primary_node_repeatedly();<br>
>>> -<br>
>>> - /*<br>
>>> - * If follow_master_command is provided and in master/slave<br>
>>> - * streaming replication mode, we start degenerating all backends<br>
>>> - * as they are not replicated anymore.<br>
>>> - */<br>
>>> - int follow_cnt = 0;<br>
>>> - if (MASTER_SLAVE && !strcmp(pool_config->master_slave_sub_mode, MODE_STREAMREP))<br>
>>> - {<br>
>>> - if (*pool_config->follow_master_command != '\0' ||<br>
>>> - Req_info->kind == PROMOTE_NODE_REQUEST)<br>
>>> - {<br>
>>> - /* only if the failover is against the current primary */<br>
>>> - if (((Req_info->kind == NODE_DOWN_REQUEST) &&<br>
>>> - (nodes[Req_info->primary_node_id])) ||<br>
>>> - ((Req_info->kind == PROMOTE_NODE_REQUEST) &&<br>
>>> - (VALID_BACKEND(node_id)))) {<br>
>>> -<br>
>>> - for (i = 0; i < pool_config->backend_desc->num_backends; i++)<br>
>>> - {<br>
>>> - /* do not degenerate the new primary */<br>
>>> - if ((new_primary >= 0) && (i != new_primary)) {<br>
>>> - BackendInfo *bkinfo;<br>
>>> - bkinfo = pool_get_node_info(i);<br>
>>> - pool_log("starting follow degeneration. shutdown host %s(%d)",<br>
>>> - bkinfo->backend_hostname,<br>
>>> - bkinfo->backend_port);<br>
>>> - bkinfo->backend_status = CON_DOWN; /* set down status */<br>
>>> - follow_cnt++;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - if (follow_cnt == 0)<br>
>>> - {<br>
>>> - pool_log("failover: no follow backends are degenerated");<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - /* update new master node */<br>
>>> - new_master = get_next_master_node();<br>
>>> - pool_log("failover: %d follow backends have been degenerated", follow_cnt);<br>
>>> - }<br>
>>> - }<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - memset(Req_info->node_id, -1, sizeof(int) * MAX_NUM_BACKENDS);<br>
>>> - pool_semaphore_unlock(REQUEST_INFO_SEM);<br>
>>> -<br>
>>> - /* wait for follow_master_command lock or to be lock holder */<br>
>>> - if (pool_config->use_watchdog && !wd_am_I_lock_holder())<br>
>>> - {<br>
>>> - wd_wait_for_lock(WD_FOLLOW_MASTER_COMMAND_LOCK);<br>
>>> - }<br>
>>> -<br>
>>> - /* execute follow_master_command */<br>
>>> - if (!pool_config->use_watchdog || wd_am_I_lock_holder())<br>
>>> - {<br>
>>> - if ((follow_cnt > 0) && (*pool_config->follow_master_command != '\0'))<br>
>>> - {<br>
>>> - follow_pid = fork_follow_child(Req_info->master_node_id, new_primary,<br>
>>> - Req_info->primary_node_id);<br>
>>> - }<br>
>>> -<br>
>>> - /* unlock follow_master_command */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_unlock(WD_FOLLOW_MASTER_COMMAND_LOCK);<br>
>>> - }<br>
>>> -<br>
>>> - /* end of command inter-lock */<br>
>>> - if (pool_config->use_watchdog)<br>
>>> - wd_end_interlock();<br>
>>> -<br>
>>> - /* Save primary node id */<br>
>>> - Req_info->primary_node_id = new_primary;<br>
>>> - pool_log("failover: set new primary node: %d", Req_info->primary_node_id);<br>
>>> -<br>
>>> - if (new_master >= 0)<br>
>>> - {<br>
>>> - Req_info->master_node_id = new_master;<br>
>>> - pool_log("failover: set new master node: %d", Req_info->master_node_id);<br>
>>> - }<br>
>>> -<br>
>>> -<br>
>>> - /* Fork the children if needed */<br>
>>> - if (need_to_restart_children)<br>
>>> - {<br>
>>> - for (i=0;i<pool_config->num_init_children;i++)<br>
>>> - {<br>
>>> -<br>
>>> - /*<br>
>>> - * Try to kill pgpool child because previous kill signal<br>
>>> - * may not be received by pgpool child. This could happen<br>
>>> - * if multiple PostgreSQL are going down (or even starting<br>
>>> - * pgpool, without starting PostgreSQL can trigger this).<br>
>>> - * Child calls degenerate_backend() and it tries to acquire<br>
>>> - * semaphore to write a failover request. In this case the<br>
>>> - * signal mask is set as well, thus signals are never<br>
>>> - * received.<br>
>>> - */<br>
>>> - kill(process_info[i].pid, SIGQUIT);<br>
>>> -<br>
>>> - process_info[i].pid = fork_a_child(unix_fd, inet_fd, i);<br>
>>> - process_info[i].start_time = time(NULL);<br>
>>> - }<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - /* Set restart request to each child. Children will exit(1)<br>
>>> - * whenever they are idle to restart.<br>
>>> - */<br>
>>> - for (i=0;i<pool_config->num_init_children;i++)<br>
>>> - {<br>
>>> - process_info[i].need_to_restart = 1;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * Send restart request to worker child.<br>
>>> - */<br>
>>> - kill(worker_pid, SIGUSR1);<br>
>>> -<br>
>>> - if (Req_info->kind == NODE_UP_REQUEST)<br>
>>> - {<br>
>>> - pool_log("failback done. reconnect host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - }<br>
>>> - else if (Req_info->kind == PROMOTE_NODE_REQUEST)<br>
>>> - {<br>
>>> - pool_log("promotion done. promoted host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_log("failover done. shutdown host %s(%d)",<br>
>>> - BACKEND_INFO(node_id).backend_hostname,<br>
>>> - BACKEND_INFO(node_id).backend_port);<br>
>>> - }<br>
>>> -<br>
>>> - switching = 0;<br>
>>> - Req_info->switching = false;<br>
>>> -<br>
>>> - /* kick wakeup_handler in pcp_child to notice that<br>
>>> - * failover/failback done<br>
>>> - */<br>
>>> - kill(pcp_pid, SIGUSR2);<br>
>>> -<br>
>>> - sleep(1);<br>
>>> -<br>
>>> - /*<br>
>>> - * Send restart request to pcp child.<br>
>>> - */<br>
>>> - kill(pcp_pid, SIGUSR1);<br>
>>> - for (;;)<br>
>>> - {<br>
>>> - sts = waitpid(pcp_pid, &status, 0);<br>
>>> - if (sts != -1)<br>
>>> - break;<br>
>>> - if (sts == -1)<br>
>>> - {<br>
>>> - if (errno == EINTR)<br>
>>> - continue;<br>
>>> - else<br>
>>> - {<br>
>>> - pool_error("failover: waitpid failed. reason: %s", strerror(errno));<br>
>>> - return;<br>
>>> - }<br>
>>> - }<br>
>>> - }<br>
>>> - if (WIFSIGNALED(status))<br>
>>> - pool_log("PCP child %d exits with status %d by signal %d in failover()", pcp_pid, status, WTERMSIG(status));<br>
>>> - else<br>
>>> - pool_log("PCP child %d exits with status %d in failover()", pcp_pid, status);<br>
>>> -<br>
>>> - pcp_pid = pcp_fork_a_child(pcp_unix_fd, pcp_inet_fd, pcp_conf_file);<br>
>>> - pool_log("fork a new PCP child pid %d in failover()", pcp_pid);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * health check timer handler<br>
>>> - */<br>
>>> -static RETSIGTYPE health_check_timer_handler(int sig)<br>
>>> -{<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> - health_check_timer_expired = 1;<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> -}<br>
>>> -<br>
>>> -<br>
>>> -/*<br>
>>> - * Check if we can connect to the backend<br>
>>> - * returns 0 for OK. otherwise returns backend id + 1<br>
>>> - */<br>
>>> -static int health_check(void)<br>
>>> -{<br>
>>> - POOL_CONNECTION_POOL_SLOT *slot;<br>
>>> - BackendInfo *bkinfo;<br>
>>> - static bool is_first = true;<br>
>>> - static char *dbname;<br>
>>> - int i;<br>
>>> -<br>
>>> - /* Do not execute health check during recovery */<br>
>>> - if (*InRecovery)<br>
>>> - return 0;<br>
>>> -<br>
>>> - Retry:<br>
>>> - /*<br>
>>> - * First we try with "postgres" database.<br>
>>> - */<br>
>>> - if (is_first)<br>
>>> - dbname = "postgres";<br>
>>> -<br>
>>> - for (i=0;i<pool_config->backend_desc->num_backends;i++)<br>
>>> - {<br>
>>> - /*<br>
>>> - * Make sure that health check timer has not been expired.<br>
>>> - * Before called health_check(), health_check_timer_expired is<br>
>>> - * set to 0. However it is possible that while processing DB<br>
>>> - * nodes health check timer expired.<br>
>>> - */<br>
>>> - if (health_check_timer_expired)<br>
>>> - {<br>
>>> - pool_log("health_check: health check timer has been already expired before attempting to connect to %d th backend", i);<br>
>>> - return i+1;<br>
>>> - }<br>
>>> -<br>
>>> - bkinfo = pool_get_node_info(i);<br>
>>> -<br>
>>> - pool_debug("health_check: %d th DB node status: %d", i, bkinfo->backend_status);<br>
>>> -<br>
>>> - if (bkinfo->backend_status == CON_UNUSED ||<br>
>>> - bkinfo->backend_status == CON_DOWN)<br>
>>> - continue;<br>
>>> -<br>
>>> - slot = make_persistent_db_connection(bkinfo->backend_hostname,<br>
>>> - bkinfo->backend_port,<br>
>>> - dbname,<br>
>>> - pool_config->health_check_user,<br>
>>> - pool_config->health_check_password, false);<br>
>>> -<br>
>>> - if (is_first)<br>
>>> - is_first = false;<br>
>>> -<br>
>>> - if (!slot)<br>
>>> - {<br>
>>> - /*<br>
>>> - * Retry with template1 unless health check timer is expired.<br>
>>> - */<br>
>>> - if (!strcmp(dbname, "postgres") && health_check_timer_expired == 0)<br>
>>> - {<br>
>>> - dbname = "template1";<br>
>>> - goto Retry;<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - pool_error("health check failed. %d th host %s at port %d is down",<br>
>>> - i,<br>
>>> - bkinfo->backend_hostname,<br>
>>> - bkinfo->backend_port);<br>
>>> - return i+1;<br>
>>> - }<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - discard_persistent_db_connection(slot);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - return 0;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * check if we can connect to the SystemDB<br>
>>> - * returns 0 for OK. otherwise returns -1<br>
>>> - */<br>
>>> -static int<br>
>>> -system_db_health_check(void)<br>
>>> -{<br>
>>> - int fd;<br>
>>> -<br>
>>> - /* V2 startup packet */<br>
>>> - typedef struct {<br>
>>> - int len; /* startup packet length */<br>
>>> - StartupPacket_v2 sp;<br>
>>> - } MySp;<br>
>>> - MySp mysp;<br>
>>> - char kind;<br>
>>> -<br>
>>> - memset(&mysp, 0, sizeof(mysp));<br>
>>> - mysp.len = htonl(296);<br>
>>> - mysp.sp.protoVersion = htonl(PROTO_MAJOR_V2 << 16);<br>
>>> - strcpy(mysp.sp.database, "template1");<br>
>>> - strncpy(mysp.sp.user, SYSDB_INFO->user, sizeof(mysp.sp.user) - 1);<br>
>>> - *mysp.sp.options = '\0';<br>
>>> - *mysp.sp.unused = '\0';<br>
>>> - *mysp.sp.tty = '\0';<br>
>>> -<br>
>>> - pool_debug("health_check: SystemDB status: %d", SYSDB_STATUS);<br>
>>> -<br>
>>> - /* if SystemDB is already down, ignore */<br>
>>> - if (SYSDB_STATUS == CON_UNUSED || SYSDB_STATUS == CON_DOWN)<br>
>>> - return 0;<br>
>>> -<br>
>>> - if (*SYSDB_INFO->hostname == '/')<br>
>>> - fd = connect_unix_domain_socket_by_port(SYSDB_INFO->port, SYSDB_INFO->hostname, FALSE);<br>
>>> - else<br>
>>> - fd = connect_inet_domain_socket_by_port(SYSDB_INFO->hostname, SYSDB_INFO->port, FALSE);<br>
>>> -<br>
>>> - if (fd < 0)<br>
>>> - {<br>
>>> - pool_error("health check failed. SystemDB host %s at port %d is down",<br>
>>> - SYSDB_INFO->hostname,<br>
>>> - SYSDB_INFO->port);<br>
>>> -<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - if (write(fd, &mysp, sizeof(mysp)) < 0)<br>
>>> - {<br>
>>> - pool_error("health check failed during write. SystemDB host %s at port %d is down",<br>
>>> - SYSDB_INFO->hostname,<br>
>>> - SYSDB_INFO->port);<br>
>>> - close(fd);<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - read(fd, &kind, 1);<br>
>>> -<br>
>>> - if (write(fd, "X", 1) < 0)<br>
>>> - {<br>
>>> - pool_error("health check failed during write. SystemDB host %s at port %d is down",<br>
>>> - SYSDB_INFO->hostname,<br>
>>> - SYSDB_INFO->port);<br>
>>> - close(fd);<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - close(fd);<br>
>>> - return 0;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * handle SIGCHLD<br>
>>> - */<br>
>>> -static RETSIGTYPE reap_handler(int sig)<br>
>>> -{<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> - sigchld_request = 1;<br>
>>> - write(pipe_fds[1], "\0", 1);<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * Attach zombie processes and restart child processes.<br>
>>> - * reaper() must be called protected from signals.<br>
>>> - */<br>
>>> -static void reaper(void)<br>
>>> -{<br>
>>> - pid_t pid;<br>
>>> - int status;<br>
>>> - int i;<br>
>>> -<br>
>>> - pool_debug("reap_handler called");<br>
>>> -<br>
>>> - if (exiting)<br>
>>> - {<br>
>>> - pool_debug("reap_handler: exited due to exiting");<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - if (switching)<br>
>>> - {<br>
>>> - pool_debug("reap_handler: exited due to switching");<br>
>>> - return;<br>
>>> - }<br>
>>> -<br>
>>> - /* clear SIGCHLD request */<br>
>>> - sigchld_request = 0;<br>
>>> -<br>
>>> -#ifdef HAVE_WAITPID<br>
>>> - pool_debug("reap_handler: call waitpid");<br>
>>> - while ((pid = waitpid(-1, &status, WNOHANG)) > 0)<br>
>>> -#else<br>
>>> - pool_debug("reap_handler: call wait3");<br>
>>> - while ((pid = wait3(&status, WNOHANG, NULL)) > 0)<br>
>>> -#endif<br>
>>> - {<br>
>>> - if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV)<br>
>>> - {<br>
>>> - /* Child terminated by segmentation fault. Report it */<br>
>>> - pool_error("Child process %d was terminated by segmentation fault", pid);<br>
>>> - }<br>
>>> -<br>
>>> - /* if exiting child process was PCP handler */<br>
>>> - if (pid == pcp_pid)<br>
>>> - {<br>
>>> - if (WIFSIGNALED(status))<br>
>>> - pool_log("PCP child %d exits with status %d by signal %d", pid, status, WTERMSIG(status));<br>
>>> - else<br>
>>> - pool_log("PCP child %d exits with status %d", pid, status);<br>
>>> -<br>
>>> - pcp_pid = pcp_fork_a_child(pcp_unix_fd, pcp_inet_fd, pcp_conf_file);<br>
>>> - pool_log("fork a new PCP child pid %d", pcp_pid);<br>
>>> - }<br>
>>> -<br>
>>> - /* exiting process was worker process */<br>
>>> - else if (pid == worker_pid)<br>
>>> - {<br>
>>> - if (WIFSIGNALED(status))<br>
>>> - pool_log("worker child %d exits with status %d by signal %d", pid, status, WTERMSIG(status));<br>
>>> - else<br>
>>> - pool_log("worker child %d exits with status %d", pid, status);<br>
>>> -<br>
>>> - if (status)<br>
>>> - worker_pid = worker_fork_a_child();<br>
>>> -<br>
>>> - pool_log("fork a new worker child pid %d", worker_pid);<br>
>>> - }<br>
>>> -<br>
>>> - /* exiting process was watchdog process */<br>
>>> - else if (pool_config->use_watchdog && wd_is_watchdog_pid(pid))<br>
>>> - {<br>
>>> - if (!wd_reaper_watchdog(pid, status))<br>
>>> - {<br>
>>> - pool_error("wd_reaper failed");<br>
>>> - myexit(1);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - else<br>
>>> - {<br>
>>> - if (WIFSIGNALED(status))<br>
>>> - pool_debug("child %d exits with status %d by signal %d", pid, status, WTERMSIG(status));<br>
>>> - else<br>
>>> - pool_debug("child %d exits with status %d", pid, status);<br>
>>> -<br>
>>> - /* look for exiting child's pid */<br>
>>> - for (i=0;i<pool_config->num_init_children;i++)<br>
>>> - {<br>
>>> - if (pid == process_info[i].pid)<br>
>>> - {<br>
>>> - /* if found, fork a new child */<br>
>>> - if (!switching && !exiting && status)<br>
>>> - {<br>
>>> - process_info[i].pid = fork_a_child(unix_fd, inet_fd, i);<br>
>>> - process_info[i].start_time = time(NULL);<br>
>>> - pool_debug("fork a new child pid %d", process_info[i].pid);<br>
>>> - break;<br>
>>> - }<br>
>>> - }<br>
>>> - }<br>
>>> - }<br>
>>> - }<br>
>>> - pool_debug("reap_handler: normally exited");<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get node information specified by node_number<br>
>>> - */<br>
>>> -BackendInfo *<br>
>>> -pool_get_node_info(int node_number)<br>
>>> -{<br>
>>> - if (node_number < 0 || node_number >= NUM_BACKENDS)<br>
>>> - return NULL;<br>
>>> -<br>
>>> - return &BACKEND_INFO(node_number);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get number of nodes<br>
>>> - */<br>
>>> -int<br>
>>> -pool_get_node_count(void)<br>
>>> -{<br>
>>> - return NUM_BACKENDS;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get process ids<br>
>>> - */<br>
>>> -int *<br>
>>> -pool_get_process_list(int *array_size)<br>
>>> -{<br>
>>> - int *array;<br>
>>> - int i;<br>
>>> -<br>
>>> - *array_size = pool_config->num_init_children;<br>
>>> - array = calloc(*array_size, sizeof(int));<br>
>>> - for (i = 0; i < *array_size; i++)<br>
>>> - array[i] = process_info[i].pid;<br>
>>> -<br>
>>> - return array;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get process information specified by pid<br>
>>> - */<br>
>>> -ProcessInfo *<br>
>>> -pool_get_process_info(pid_t pid)<br>
>>> -{<br>
>>> - int i;<br>
>>> -<br>
>>> - for (i = 0; i < pool_config->num_init_children; i++)<br>
>>> - if (process_info[i].pid == pid)<br>
>>> - return &process_info[i];<br>
>>> -<br>
>>> - return NULL;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get System DB information<br>
>>> - */<br>
>>> -SystemDBInfo *<br>
>>> -pool_get_system_db_info(void)<br>
>>> -{<br>
>>> - if (system_db_info == NULL)<br>
>>> - return NULL;<br>
>>> -<br>
>>> - return system_db_info->info;<br>
>>> -}<br>
>>> -<br>
>>> -<br>
>>> -/*<br>
>>> - * handle SIGUSR2<br>
>>> - * Wakeup all processes<br>
>>> - */<br>
>>> -static void wakeup_children(void)<br>
>>> -{<br>
>>> - kill_all_children(SIGUSR2);<br>
>>> -}<br>
>>> -<br>
>>> -<br>
>>> -static RETSIGTYPE wakeup_handler(int sig)<br>
>>> -{<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> - wakeup_request = 1;<br>
>>> - write(pipe_fds[1], "\0", 1);<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * handle SIGHUP<br>
>>> - *<br>
>>> - */<br>
>>> -static RETSIGTYPE reload_config_handler(int sig)<br>
>>> -{<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> - reload_config_request = 1;<br>
>>> - write(pipe_fds[1], "\0", 1);<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> -}<br>
>>> -<br>
>>> -static void reload_config(void)<br>
>>> -{<br>
>>> - pool_log("reload config files.");<br>
>>> - pool_get_config(conf_file, RELOAD_CONFIG);<br>
>>> - if (pool_config->enable_pool_hba)<br>
>>> - load_hba(hba_file);<br>
>>> - if (pool_config->parallel_mode)<br>
>>> - pool_memset_system_db_info(system_db_info->info);<br>
>>> - kill_all_children(SIGHUP);<br>
>>> -<br>
>>> - if (worker_pid)<br>
>>> - kill(worker_pid, SIGHUP);<br>
>>> -}<br>
>>> -<br>
>>> -static void kill_all_children(int sig)<br>
>>> -{<br>
>>> - int i;<br>
>>> -<br>
>>> - /* kill all children */<br>
>>> - for (i = 0; i < pool_config->num_init_children; i++)<br>
>>> - {<br>
>>> - pid_t pid = process_info[i].pid;<br>
>>> - if (pid)<br>
>>> - {<br>
>>> - kill(pid, sig);<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - /* make PCP process reload as well */<br>
>>> - if (sig == SIGHUP)<br>
>>> - kill(pcp_pid, sig);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * pause in a period specified by timeout. If any data is coming<br>
>>> - * through pipe_fds[0], that means one of: failover request(SIGUSR1),<br>
>>> - * SIGCHLD received, children wake up request(SIGUSR2 used in on line<br>
>>> - * recovery processing) or config file reload request(SIGHUP) has been<br>
>>> - * occurred. In this case this function returns 1.<br>
>>> - * otherwise 0: (no signal event occurred), -1: (error)<br>
>>> - * XXX: is it OK that select(2) error is ignored here?<br>
>>> - */<br>
>>> -static int pool_pause(struct timeval *timeout)<br>
>>> -{<br>
>>> - fd_set rfds;<br>
>>> - int n;<br>
>>> - char dummy;<br>
>>> -<br>
>>> - FD_ZERO(&rfds);<br>
>>> - FD_SET(pipe_fds[0], &rfds);<br>
>>> - n = select(pipe_fds[0]+1, &rfds, NULL, NULL, timeout);<br>
>>> - if (n == 1)<br>
>>> - read(pipe_fds[0], &dummy, 1);<br>
>>> - return n;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * sleep for seconds specified by "second". Unlike pool_pause(), this<br>
>>> - * function guarantees that it will sleep for specified seconds. This<br>
>>> - * function uses pool_pause() internally. If it informs that there is<br>
>>> - * a pending signal event, they are processed using CHECK_REQUEST<br>
>>> - * macro. Note that most of these processes are done while all signals<br>
>>> - * are blocked.<br>
>>> - */<br>
>>> -void pool_sleep(unsigned int second)<br>
>>> -{<br>
>>> - struct timeval current_time, sleep_time;<br>
>>> -<br>
>>> - gettimeofday(&current_time, NULL);<br>
>>> - sleep_time.tv_sec = second + current_time.tv_sec;<br>
>>> - sleep_time.tv_usec = current_time.tv_usec;<br>
>>> -<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - while (sleep_time.tv_sec > current_time.tv_sec)<br>
>>> - {<br>
>>> - struct timeval timeout;<br>
>>> - int r;<br>
>>> -<br>
>>> - timeout.tv_sec = sleep_time.tv_sec - current_time.tv_sec;<br>
>>> - timeout.tv_usec = sleep_time.tv_usec - current_time.tv_usec;<br>
>>> - if (timeout.tv_usec < 0)<br>
>>> - {<br>
>>> - timeout.tv_sec--;<br>
>>> - timeout.tv_usec += 1000000;<br>
>>> - }<br>
>>> -<br>
>>> - r = pool_pause(&timeout);<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> - if (r > 0)<br>
>>> - CHECK_REQUEST;<br>
>>> - POOL_SETMASK(&UnBlockSig);<br>
>>> - gettimeofday(&current_time, NULL);<br>
>>> - }<br>
>>> - POOL_SETMASK(&BlockSig);<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get_config_file_name: return full path of pgpool.conf.<br>
>>> - */<br>
>>> -char *get_config_file_name(void)<br>
>>> -{<br>
>>> - return conf_file;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * get_config_file_name: return full path of pool_hba.conf.<br>
>>> - */<br>
>>> -char *get_hba_file_name(void)<br>
>>> + * get_config_file_name: return full path of pool_hba.conf.<br>
>>> + */<br>
>>> +char *get_hba_file_name(void)<br>
>>> {<br>
>>> return hba_file;<br>
>>> }<br>
>>> -<br>
>>> -/*<br>
>>> - * trigger_failover_command: execute specified command at failover.<br>
>>> - * command_line is null-terminated string.<br>
>>> - */<br>
>>> -static int trigger_failover_command(int node, const char *command_line,<br>
>>> - int old_master, int new_master, int old_primary)<br>
>>> -{<br>
>>> - int r = 0;<br>
>>> - String *exec_cmd;<br>
>>> - char port_buf[6];<br>
>>> - char buf[2];<br>
>>> - BackendInfo *info;<br>
>>> - BackendInfo *newmaster;<br>
>>> -<br>
>>> - if (command_line == NULL || (strlen(command_line) == 0))<br>
>>> - return 0;<br>
>>> -<br>
>>> - /* check failed nodeID */<br>
>>> - if (node < 0 || node > NUM_BACKENDS)<br>
>>> - return -1;<br>
>>> -<br>
>>> - info = pool_get_node_info(node);<br>
>>> - if (!info)<br>
>>> - return -1;<br>
>>> -<br>
>>> - buf[1] = '\0';<br>
>>> - pool_memory = pool_memory_create(PREPARE_BLOCK_SIZE);<br>
>>> - if (!pool_memory)<br>
>>> - {<br>
>>> - pool_error("trigger_failover_command: pool_memory_create() failed");<br>
>>> - return -1;<br>
>>> - }<br>
>>> - exec_cmd = init_string("");<br>
>>> -<br>
>>> - while (*command_line)<br>
>>> - {<br>
>>> - if (*command_line == '%')<br>
>>> - {<br>
>>> - if (*(command_line + 1))<br>
>>> - {<br>
>>> - char val = *(command_line + 1);<br>
>>> - switch (val)<br>
>>> - {<br>
>>> - case 'p': /* failed node port */<br>
>>> - snprintf(port_buf, sizeof(port_buf), "%d", info->backend_port);<br>
>>> - string_append_char(exec_cmd, port_buf);<br>
>>> - break;<br>
>>> -<br>
>>> - case 'D': /* failed node database directory */<br>
>>> - string_append_char(exec_cmd, info->backend_data_directory);<br>
>>> - break;<br>
>>> -<br>
>>> - case 'd': /* failed node id */<br>
>>> - snprintf(port_buf, sizeof(port_buf), "%d", node);<br>
>>> - string_append_char(exec_cmd, port_buf);<br>
>>> - break;<br>
>>> -<br>
>>> - case 'h': /* failed host name */<br>
>>> - string_append_char(exec_cmd, info->backend_hostname);<br>
>>> - break;<br>
>>> -<br>
>>> - case 'H': /* new master host name */<br>
>>> - newmaster = pool_get_node_info(new_master);<br>
>>> - if (newmaster)<br>
>>> - string_append_char(exec_cmd, newmaster->backend_hostname);<br>
>>> - else<br>
>>> - /* no valid new master */<br>
>>> - string_append_char(exec_cmd, "");<br>
>>> - break;<br>
>>> -<br>
>>> - case 'm': /* new master node id */<br>
>>> - snprintf(port_buf, sizeof(port_buf), "%d", new_master);<br>
>>> - string_append_char(exec_cmd, port_buf);<br>
>>> - break;<br>
>>> -<br>
>>> - case 'r': /* new master port */<br>
>>> - newmaster = pool_get_node_info(get_next_master_node());<br>
>>> - if (newmaster)<br>
>>> - {<br>
>>> - snprintf(port_buf, sizeof(port_buf), "%d", newmaster->backend_port);<br>
>>> - string_append_char(exec_cmd, port_buf);<br>
>>> - }<br>
>>> - else<br>
>>> - /* no valid new master */<br>
>>> - string_append_char(exec_cmd, "");<br>
>>> - break;<br>
>>> -<br>
>>> - case 'R': /* new master database directory */<br>
>>> - newmaster = pool_get_node_info(get_next_master_node());<br>
>>> - if (newmaster)<br>
>>> - string_append_char(exec_cmd, newmaster->backend_data_directory);<br>
>>> - else<br>
>>> - /* no valid new master */<br>
>>> - string_append_char(exec_cmd, "");<br>
>>> - break;<br>
>>> -<br>
>>> - case 'M': /* old master node id */<br>
>>> - snprintf(port_buf, sizeof(port_buf), "%d", old_master);<br>
>>> - string_append_char(exec_cmd, port_buf);<br>
>>> - break;<br>
>>> -<br>
>>> - case 'P': /* old primary node id */<br>
>>> - snprintf(port_buf, sizeof(port_buf), "%d", old_primary);<br>
>>> - string_append_char(exec_cmd, port_buf);<br>
>>> - break;<br>
>>> -<br>
>>> - case '%': /* escape */<br>
>>> - string_append_char(exec_cmd, "%");<br>
>>> - break;<br>
>>> -<br>
>>> - default: /* ignore */<br>
>>> - break;<br>
>>> - }<br>
>>> - command_line++;<br>
>>> - }<br>
>>> - } else {<br>
>>> - buf[0] = *command_line;<br>
>>> - string_append_char(exec_cmd, buf);<br>
>>> - }<br>
>>> - command_line++;<br>
>>> - }<br>
>>> -<br>
>>> - if (strlen(exec_cmd->data) != 0)<br>
>>> - {<br>
>>> - pool_log("execute command: %s", exec_cmd->data);<br>
>>> - r = system(exec_cmd->data);<br>
>>> - }<br>
>>> -<br>
>>> - pool_memory_delete(pool_memory, 0);<br>
>>> - pool_memory = NULL;<br>
>>> -<br>
>>> - return r;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> - * Find the primary node (i.e. not standby node) and returns its node<br>
>>> - * id. If no primary node is found, returns -1.<br>
>>> - */<br>
>>> -static int find_primary_node(void)<br>
>>> -{<br>
>>> - BackendInfo *bkinfo;<br>
>>> - POOL_CONNECTION_POOL_SLOT *s;<br>
>>> - POOL_CONNECTION *con;<br>
>>> - POOL_STATUS status;<br>
>>> - POOL_SELECT_RESULT *res;<br>
>>> - bool is_standby;<br>
>>> - int i;<br>
>>> -<br>
>>> - /* Streaming replication mode? */<br>
>>> - if (pool_config->master_slave_mode == 0 ||<br>
>>> - strcmp(pool_config->master_slave_sub_mode, MODE_STREAMREP))<br>
>>> - {<br>
>>> - /* No point to look for primary node if not in streaming<br>
>>> - * replication mode.<br>
>>> - */<br>
>>> - pool_debug("find_primary_node: not in streaming replication mode");<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - for(i=0;i<NUM_BACKENDS;i++)<br>
>>> - {<br>
>>> - if (!VALID_BACKEND(i))<br>
>>> - continue;<br>
>>> -<br>
>>> - /*<br>
>>> - * Check to see if this is a standby node or not.<br>
>>> - */<br>
>>> - is_standby = false;<br>
>>> -<br>
>>> - bkinfo = pool_get_node_info(i);<br>
>>> - s = make_persistent_db_connection(bkinfo->backend_hostname,<br>
>>> - bkinfo->backend_port,<br>
>>> - "postgres",<br>
>>> - pool_config->sr_check_user,<br>
>>> - pool_config->sr_check_password, true);<br>
>>> - if (!s)<br>
>>> - {<br>
>>> - pool_error("find_primary_node: make_persistent_connection failed");<br>
>>> - return -1;<br>
>>> - }<br>
>>> - con = s->con;<br>
>>> - status = do_query(con, "SELECT pg_is_in_recovery()",<br>
>>> - &res, PROTO_MAJOR_V3);<br>
>>> - if (res->numrows <= 0)<br>
>>> - {<br>
>>> - pool_log("find_primary_node: do_query returns no rows");<br>
>>> - }<br>
>>> - if (res->data[0] == NULL)<br>
>>> - {<br>
>>> - pool_log("find_primary_node: do_query returns no data");<br>
>>> - }<br>
>>> - if (res->nullflags[0] == -1)<br>
>>> - {<br>
>>> - pool_log("find_primary_node: do_query returns NULL");<br>
>>> - }<br>
>>> - if (res->data[0] && !strcmp(res->data[0], "t"))<br>
>>> - {<br>
>>> - is_standby = true;<br>
>>> - }<br>
>>> - free_select_result(res);<br>
>>> - discard_persistent_db_connection(s);<br>
>>> -<br>
>>> - /*<br>
>>> - * If this is a standby, we continue to look for primary node.<br>
>>> - */<br>
>>> - if (is_standby)<br>
>>> - {<br>
>>> - pool_debug("find_primary_node: %d node is standby", i);<br>
>>> - }<br>
>>> - else<br>
>>> - {<br>
>>> - break;<br>
>>> - }<br>
>>> - }<br>
>>> -<br>
>>> - if (i == NUM_BACKENDS)<br>
>>> - {<br>
>>> - pool_debug("find_primary_node: no primary node found");<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - pool_log("find_primary_node: primary node id is %d", i);<br>
>>> - return i;<br>
>>> -}<br>
>>> -<br>
>>> -static int find_primary_node_repeatedly(void)<br>
>>> +/* Call back function to unlink the file */<br>
>>> +static void FileUnlink(int code, Datum path)<br>
>>> {<br>
>>> - int sec;<br>
>>> - int node_id = -1;<br>
>>> -<br>
>>> - /* Streaming replication mode? */<br>
>>> - if (pool_config->master_slave_mode == 0 ||<br>
>>> - strcmp(pool_config->master_slave_sub_mode, MODE_STREAMREP))<br>
>>> - {<br>
>>> - /* No point to look for primary node if not in streaming<br>
>>> - * replication mode.<br>
>>> - */<br>
>>> - pool_debug("find_primary_node: not in streaming replication mode");<br>
>>> - return -1;<br>
>>> - }<br>
>>> -<br>
>>> - /*<br>
>>> - * Try to find the new primary node and keep trying for<br>
>>> - * search_primary_node_timeout seconds.<br>
>>> - * search_primary_node_timeout = 0 means never timeout and keep searching<br>
>>> - * indefinitely<br>
>>> + char* filePath = (char*)path;<br>
>>> + if (unlink(filePath) == 0) return;<br>
>>> + /*<br>
>>> + * We are already exiting the system; just produce a log entry to report the error<br>
>>> */<br>
>>> - pool_log("find_primary_node_repeatedly: waiting for finding a primary node");<br>
>>> - for (sec = 0; (pool_config->search_primary_node_timeout == 0 ||<br>
>>> - sec < pool_config->search_primary_node_timeout); sec++)<br>
>>> - {<br>
>>> - node_id = find_primary_node();<br>
>>> - if (node_id != -1)<br>
>>> - break;<br>
>>> - pool_sleep(1);<br>
>>> - }<br>
>>> - return node_id;<br>
>>> -}<br>
>>> -<br>
>>> -/*<br>
>>> -* fork a follow child<br>
>>> -*/<br>
>>> -pid_t fork_follow_child(int old_master, int new_primary, int old_primary)<br>
>>> -{<br>
>>> - pid_t pid;<br>
>>> - int i;<br>
>>> -<br>
>>> - pid = fork();<br>
>>> -<br>
>>> - if (pid == 0)<br>
>>> - {<br>
>>> - pool_log("start triggering follow command.");<br>
>>> - for (i = 0; i < pool_config->backend_desc->num_backends; i++)<br>
>>> - {<br>
>>> - BackendInfo *bkinfo;<br>
>>> - bkinfo = pool_get_node_info(i);<br>
>>> - if (bkinfo->backend_status == CON_DOWN)<br>
>>> - trigger_failover_command(i, pool_config->follow_master_command,<br>
>>> - old_master, new_primary, old_primary);<br>
>>> - }<br>
>>> - exit(0);<br>
>>> - }<br>
>>> - else if (pid == -1)<br>
>>> - {<br>
>>> - pool_error("follow fork() failed. reason: %s", strerror(errno));<br>
>>> - exit(1);<br>
>>> - }<br>
>>> - return pid;<br>
>>> + ereport(LOG,<br>
>>> + (errmsg("unlink failed for file at path \"%s\"", filePath),<br>
>>> + errdetail("\"%s\"", strerror(errno))));<br>
>>> }<br>
>>><br>
>>> ...<br>
>>><br>
>>> [Message clipped]<br>
>><br>
>><br>
>><br>
>><br>
>> --<br>
>> Ahsan Hadi<br>
>> Snr Director Product Development<br>
>> EnterpriseDB Corporation<br>
>> The Enterprise Postgres Company<br>
>><br>
>> Phone: +92-51-8358874<br>
>> Mobile: +92-333-5162114<br>
>><br>
>> Website: <a href="http://www.enterprisedb.com" target="_blank">www.enterprisedb.com</a><br>
>> EnterpriseDB Blog: <a href="http://blogs.enterprisedb.com/" target="_blank">http://blogs.enterprisedb.com/</a><br>
>> Follow us on Twitter: <a href="http://www.twitter.com/enterprisedb" target="_blank">http://www.twitter.com/enterprisedb</a><br>
>><br>
>><br>
</div></div></blockquote></div><br></div>