<div dir="ltr">Hi Ishii-San<div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 11, 2017 at 11:01 AM, Tatsuo Ishii <span dir="ltr"><<a href="mailto:ishii@sraoss.co.jp" target="_blank">ishii@sraoss.co.jp</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Usama,<br>
<br>
I have modified watchdog regression script to test out your quorum<br>
aware failover patch. Here are differences from the existing script.<br>
<br>
- Install streaming replication primary and standby DB node (before<br>
raw mode + only 1 node). This is necessary to test an ordinary<br>
failover scenario. Current script creates only one DB node. So if we<br>
get the node down, the cluster goes into "all db node down"<br>
status. I already reported possible problem with the status up<br>
thread and waiting for the answer from Usama.<br>
<br>
- Add one more pgpool-II node "standby2". For this purpose, new<br>
configuration file "standby2.conf" added.<br>
<br>
- Add new test scenario: "fake" failover. By using the infrastructure<br>
I have created to simulate the communication path between standby2<br>
and DB node 1. The test checks if such a error raised a failover<br>
request from standby2, but it is safely ignored.<br>
<br>
- Add new test scenario: "real" failover. Shutting down DB node 1,<br>
which should raise a failover request.<br>
<br>
- Modify test.sh to agree with the changes above.<br></blockquote><div><br></div><div><br class="gmail-Apple-interchange-newline">Thanks for testing and test script patch. I think the changes and test scenario is spot on but I think we should add a new test case</div><div>with these modifications and keep the 004_watchdog test case in the same shape as-well, This will help us to identify watchdog issues more</div><div>swiftly.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">

> Since the modification requires your quorum patch to be committed, I
> haven't pushed the change yet.

I was worried about the scenario mentioned above and have now identified it
as an existing issue, so I will commit the patch tomorrow.

Thanks
Best regards
Muhammad Usama

<div class="gmail-HOEnZb"><div class="gmail-h5"><br>
Best regards,<br>
--<br>
Tatsuo Ishii<br>
SRA OSS, Inc. Japan<br>
English: <a href="http://www.sraoss.co.jp/index_en.php" rel="noreferrer" target="_blank">http://www.sraoss.co.jp/index_<wbr>en.php</a><br>
Japanese:<a href="http://www.sraoss.co.jp" rel="noreferrer" target="_blank">http://www.sraoss.co.<wbr>jp</a><br>
<br>
> I have tested the patch a little bit using the 004.watchdog regression
> test. After the test ended, I manually started the master and standby
> Pgpool-II.
>
> 1) Stop the master PostgreSQL. Since only one PostgreSQL node is
> configured, I expected:
>
> psql: ERROR: pgpool is not accepting any new connections
> DETAIL: all backend nodes are down, pgpool requires at least one valid node
> HINT: repair the backend nodes and restart pgpool
>
> but the master Pgpool-II replies:
>
> psql: FATAL: failed to create a backend connection
> DETAIL: executing failover on backend
>
> Is this normal?
>
> 2) I shut down the master node to see if the standby escalates.
>
> After shutting down the master, I see this using pcp_watchdog_info:
>
> pcp_watchdog_info -p 11105
> localhost:11100 Linux tishii-CF-SX3HE4BP localhost 11100 21104 4 MASTER
> localhost:11000 Linux tishii-CF-SX3HE4BP localhost 11000 21004 10 SHUTDOWN
>
> Seems OK, but I want to confirm.
>
> Master and standby pgpool logs are attached.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
>
>> On Fri, Aug 25, 2017 at 5:05 PM, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
>>
>>> > On Fri, Aug 25, 2017 at 12:53 PM, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
>>> >
>>> >> Usama,
>>> >>
>>> >> With the new patch, the regression tests all passed.
>>> >>
>>> >
>>> > Glad to hear that :-)
>>> > Did you have a chance to look at the node quarantine state I added?
>>> > What are your thoughts on that?
>>>
>>> I'm going to look into the patch this weekend.
>>>
>>
>> Many thanks
>>
>> Best Regards
>> Muhammad Usama
>>
>>>
>>> Best regards,
>>> --
>>> Tatsuo Ishii
>>> SRA OSS, Inc. Japan
>>> English: http://www.sraoss.co.jp/index_en.php
>>> Japanese: http://www.sraoss.co.jp
>>>
>>> >> > Hi Ishii-San
>>> >> >
>>> >> > Please find the updated patch. It fixes the regression issue you were
>>> >> > facing and also another bug which I encountered during my testing.
>>> >> >
>>> >> > -- Adding Yugo to the thread.
>>> >> > Hi Yugo,
>>> >> >
>>> >> > Since you are an expert on the watchdog feature, I thought you might
>>> >> > have something to say, especially regarding the discussion points
>>> >> > mentioned in the initial mail.
>>> >> >
>>> >> >
>>> >> > Thanks
>>> >> > Best Regards
>>> >> > Muhammad Usama
>>> >> >
>>> >> >
>>> >> > On Thu, Aug 24, 2017 at 11:25 AM, Muhammad Usama <m.usama@gmail.com> wrote:
>>> >> >
>>> >> >>
>>> >> >>
>>> >> >> On Thu, Aug 24, 2017 at 4:34 AM, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
>>> >> >>
>>> >> >>> After applying the patch, many of the regression tests fail. It seems
>>> >> >>> pgpool.conf.sample has a bogus comment which causes the pgpool.conf
>>> >> >>> parser to complain with a parse error.
>>> >> >>>
>>> >> >>> 2017-08-24 08:22:36: pid 6017: FATAL: syntex error in configuration file
>>> >> >>> "/home/t-ishii/work/pgpool-II/current/pgpool2/src/test/regression/tests/004.watchdog/standby/etc/pgpool.conf"
>>> >> >>> 2017-08-24 08:22:36: pid 6017: DETAIL: parse error at line 568 '*' token = 8
>>> >> >>>
>>> >> >>
>>> >> >> Really sorry, somehow I overlooked the sample config file changes I made
>>> >> >> at the last minute.
>>> >> >> Will send you the updated version.
>>> >> >>
>>> >> >> Thanks
>>> >> >> Best Regards
>>> >> >> Muhammad Usama
>>> >> >>
>>> >> >>>
>>> >> >>> Best regards,
>>> >> >>> --
>>> >> >>> Tatsuo Ishii
>>> >> >>> SRA OSS, Inc. Japan
>>> >> >>> English: http://www.sraoss.co.jp/index_en.php
>>> >> >>> Japanese: http://www.sraoss.co.jp
>>> >> >>>
>>> >> >>> > Usama,
>>> >> >>> >
>>> >> >>> > Thanks for the patch. I am going to review it.
>>> >> >>> >
>>> >> >>> > In the meantime, when I applied your patch, I got some trailing
>>> >> >>> > whitespace errors. Can you please fix them?
>>> >> >>> >
>>> >> >>> > /home/t-ishii/quorum_aware_failover.diff:470: trailing whitespace.
>>> >> >>> >
>>> >> >>> > /home/t-ishii/quorum_aware_failover.diff:485: trailing whitespace.
>>> >> >>> >
>>> >> >>> > /home/t-ishii/quorum_aware_failover.diff:564: trailing whitespace.
>>> >> >>> >
>>> >> >>> > /home/t-ishii/quorum_aware_failover.diff:1428: trailing whitespace.
>>> >> >>> >
>>> >> >>> > /home/t-ishii/quorum_aware_failover.diff:1450: trailing whitespace.
>>> >> >>> >
>>> >> >>> > warning: squelched 3 whitespace errors
>>> >> >>> > warning: 8 lines add whitespace errors.
>>> >> >>> >
>>> >> >>> > Best regards,
>>> >> >>> > --
>>> >> >>> > Tatsuo Ishii
>>> >> >>> > SRA OSS, Inc. Japan
>>> >> >>> > English: http://www.sraoss.co.jp/index_en.php
>>> >> >>> > Japanese: http://www.sraoss.co.jp
>>> >> >>> >
>>> >> >>> >> Hi
>>> >> >>> >>
>>> >> >>> >> I was working on a new feature to make the backend node failover
>>> >> >>> >> quorum aware, and halfway through the implementation I also added
>>> >> >>> >> a majority consensus feature for the same.
>>> >> >>> >>
>>> >> >>> >> So please find the first version of the patch for review. It
>>> >> >>> >> makes the backend node failover consider the watchdog cluster
>>> >> >>> >> quorum status and seek majority consensus before performing a
>>> >> >>> >> failover.
>>> >> >>> >>
>>> >> >>> >> *Changes in the failover mechanism with watchdog.*
>>> >> >>> >> For this new feature I have modified Pgpool-II's existing
>>> >> >>> >> failover mechanism with watchdog.
>>> >> >>> >> Previously, as you know, when Pgpool-II needed to perform a node
>>> >> >>> >> operation (failover, failback, promote-node) with the watchdog,
>>> >> >>> >> the watchdog propagated the failover request to all the
>>> >> >>> >> Pgpool-II nodes in the watchdog cluster. As soon as the request
>>> >> >>> >> was received by a node, it initiated the local failover, and
>>> >> >>> >> that failover was synchronised on all nodes using distributed
>>> >> >>> >> locks.
>>> >> >>> >>
>>> >> >>> >> *Now only the master node performs the failover.*
>>> >> >>> >> The attached patch changes the mechanism of synchronised
>>> >> >>> >> failover: now only the Pgpool-II of the master watchdog node
>>> >> >>> >> performs the failover, and all other standby nodes sync the
>>> >> >>> >> backend statuses after the master Pgpool-II is finished with the
>>> >> >>> >> failover.
>>> >> >>> >>
>>> >> >>> >> *Overview of the new failover mechanism.*
>>> >> >>> >> -- If a failover request is received by a standby watchdog node
>>> >> >>> >> (from the local Pgpool-II), that request is forwarded to the
>>> >> >>> >> master watchdog, and the Pgpool-II main process is returned the
>>> >> >>> >> FAILOVER_RES_WILL_BE_DONE return code. Upon receiving
>>> >> >>> >> FAILOVER_RES_WILL_BE_DONE from the watchdog for the failover
>>> >> >>> >> request, the requesting Pgpool-II moves forward without doing
>>> >> >>> >> anything further for that particular failover command.
>>> >> >>> >>
>>> >> >>> >> -- When the failover request from a standby node is received by
>>> >> >>> >> the master watchdog, after performing the validation and
>>> >> >>> >> applying the consensus rules, the failover request is triggered
>>> >> >>> >> on the local Pgpool-II.
>>> >> >>> >>
>>> >> >>> >> -- When the failover request is received by the master watchdog
>>> >> >>> >> node from the local Pgpool-II (on the IPC channel), the watchdog
>>> >> >>> >> process informs the requesting Pgpool-II process to proceed with
>>> >> >>> >> the failover (provided all failover rules are satisfied).
>>> >> >>> >>
>>> >> >>> >> -- After the failover is finished on the master Pgpool-II, the
>>> >> >>> >> failover function calls *wd_failover_end*(), which sends the
>>> >> >>> >> backend-sync-required message to all standby watchdogs.
>>> >> >>> >>
>>> >> >>> >> -- Upon receiving the sync-required message from the master
>>> >> >>> >> watchdog node, each Pgpool-II syncs the new status of each
>>> >> >>> >> backend node from the master watchdog. A rough sketch of this
>>> >> >>> >> flow follows.
>>> >> >>> >>
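>>> >> >>> >> To illustrate the idea, here is a rough C sketch of that
>>> >> >>> >> decision flow. All names in it (handle_failover_request,
>>> >> >>> >> cluster_has_quorum, and so on) are illustrative stand-ins, not
>>> >> >>> >> the identifiers used in the actual patch:
>>> >> >>> >>
>>> >> >>> >> #include <stdbool.h>
>>> >> >>> >> #include <stdio.h>
>>> >> >>> >>
>>> >> >>> >> typedef enum { WD_STANDBY, WD_MASTER } wd_role;
>>> >> >>> >>
>>> >> >>> >> typedef struct
>>> >> >>> >> {
>>> >> >>> >>     int backend_node_id;    /* node the failover is requested for */
>>> >> >>> >> } failover_request;
>>> >> >>> >>
>>> >> >>> >> /* Assumed helpers, stubbed out for the sketch. */
>>> >> >>> >> static bool cluster_has_quorum(void) { return true; }
>>> >> >>> >> static bool have_majority_votes(failover_request *req) { (void) req; return true; }
>>> >> >>> >> static void execute_local_failover(int node_id)
>>> >> >>> >> {
>>> >> >>> >>     printf("failing over backend node %d\n", node_id);
>>> >> >>> >> }
>>> >> >>> >> static void broadcast_backend_sync_required(void)
>>> >> >>> >> {
>>> >> >>> >>     printf("telling standbys to sync backend statuses\n");
>>> >> >>> >> }
>>> >> >>> >> static void forward_to_master(failover_request *req)
>>> >> >>> >> {
>>> >> >>> >>     (void) req;
>>> >> >>> >>     printf("forwarded; local Pgpool-II gets FAILOVER_RES_WILL_BE_DONE\n");
>>> >> >>> >> }
>>> >> >>> >>
>>> >> >>> >> /* What a watchdog node might do when a failover request arrives. */
>>> >> >>> >> static void handle_failover_request(wd_role my_role, failover_request *req)
>>> >> >>> >> {
>>> >> >>> >>     if (my_role != WD_MASTER)
>>> >> >>> >>     {
>>> >> >>> >>         forward_to_master(req);  /* a standby never fails over itself */
>>> >> >>> >>         return;
>>> >> >>> >>     }
>>> >> >>> >>     if (!cluster_has_quorum())
>>> >> >>> >>         return;                  /* node gets quarantined instead */
>>> >> >>> >>     if (!have_majority_votes(req))
>>> >> >>> >>         return;                  /* wait for more votes */
>>> >> >>> >>
>>> >> >>> >>     execute_local_failover(req->backend_node_id);
>>> >> >>> >>     broadcast_backend_sync_required();
>>> >> >>> >> }
>>> >> >>> >>
>>> >> >>> >> int main(void)
>>> >> >>> >> {
>>> >> >>> >>     failover_request req = { .backend_node_id = 1 };
>>> >> >>> >>     handle_failover_request(WD_MASTER, &req);
>>> >> >>> >>     return 0;
>>> >> >>> >> }
>>> >> >>> >>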
>>> >> >>> >> *No more failover locks*
>>> >> >>> >> Since with this new failover mechanism we no longer require any
>>> >> >>> >> synchronisation or guards against the execution of
>>> >> >>> >> failover_commands by multiple Pgpool-II nodes, the patch removes
>>> >> >>> >> all the distributed locks from the failover function. This makes
>>> >> >>> >> the failover simpler and faster.
>>> >> >>> >>
>>> >> >>> >> *New kind of failover operation: NODE_QUARANTINE_REQUEST*
>>> >> >>> >> The patch adds a new kind of backend node operation,
>>> >> >>> >> NODE_QUARANTINE, which is effectively the same as NODE_DOWN,
>>> >> >>> >> except that with node quarantine the failover_command is not
>>> >> >>> >> triggered.
>>> >> >>> >> A NODE_DOWN_REQUEST is automatically converted to a
>>> >> >>> >> NODE_QUARANTINE_REQUEST when a failover is requested on a
>>> >> >>> >> backend node but the watchdog cluster does not hold the quorum.
>>> >> >>> >> This means that in the absence of quorum the failed backend
>>> >> >>> >> nodes are quarantined, and when the quorum becomes available
>>> >> >>> >> again Pgpool-II performs the failback operation on all
>>> >> >>> >> quarantined nodes. Likewise, when the failback is performed on a
>>> >> >>> >> quarantined backend node, the failover function does not trigger
>>> >> >>> >> the failback_command.
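>>> >> >>> >>
>>> >> >>> >> For instance, the conversion could look roughly like this (a
>>> >> >>> >> hypothetical sketch; the identifiers are stand-ins, not the
>>> >> >>> >> patch's actual symbols):
>>> >> >>> >>
>>> >> >>> >> typedef enum
>>> >> >>> >> {
>>> >> >>> >>     NODE_UP_REQUEST,
>>> >> >>> >>     NODE_DOWN_REQUEST,
>>> >> >>> >>     NODE_QUARANTINE_REQUEST,
>>> >> >>> >>     PROMOTE_NODE_REQUEST
>>> >> >>> >> } node_request_kind;
>>> >> >>> >>
>>> >> >>> >> /* Stub standing in for the real quorum probe. */
>>> >> >>> >> static int wd_cluster_has_quorum(void) { return 0; }
>>> >> >>> >>
>>> >> >>> >> /* Demote a NODE_DOWN request to quarantine when there is no
>>> >> >>> >>  * quorum, so the failover_command never runs without quorum. */
>>> >> >>> >> static node_request_kind classify_request(node_request_kind kind)
>>> >> >>> >> {
>>> >> >>> >>     if (kind == NODE_DOWN_REQUEST && !wd_cluster_has_quorum())
>>> >> >>> >>         return NODE_QUARANTINE_REQUEST;  /* down, no failover_command */
>>> >> >>> >>     return kind;
>>> >> >>> >> }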
>>> >> >>> >>
>>> >> >>> >> *Controlling the failover behaviour.*
>>> >> >>> >> The patch adds three new configuration parameters to configure
>>> >> >>> >> the failover behaviour from the user side; a combined example
>>> >> >>> >> follows the descriptions below.
>>> >> >>> >>
>>> >> >>> >> *failover_when_quorum_exists*
>>> >> >>> >> When enabled, the failover command will only be executed when
>>> >> >>> >> the watchdog cluster holds the quorum. When the quorum is absent
>>> >> >>> >> and failover_when_quorum_exists is enabled, the failed backend
>>> >> >>> >> nodes are quarantined until the quorum becomes available again.
>>> >> >>> >> Disabling it restores the old behaviour of failover commands.
>>> >> >>> >>
>>> >> >>> >> *failover_require_consensus*
>>> >> >>> >> This new configuration parameter can be used to make sure we get
>>> >> >>> >> a majority vote before performing a failover on a node. When
>>> >> >>> >> *failover_require_consensus* is enabled, the failover is only
>>> >> >>> >> performed after receiving the failover request from the majority
>>> >> >>> >> of Pgpool-II nodes.
>>> >> >>> >> For example, in a three-node cluster a failover will not be
>>> >> >>> >> performed until at least two nodes ask for a failover on the
>>> >> >>> >> particular backend node.
>>> >> >>> >>
>>> >> >>> >> It is also worthwhile to mention here that
>>> >> >>> >> *failover_require_consensus* only works when
>>> >> >>> >> failover_when_quorum_exists is enabled.
>>> >> >>> >>
>>> >> >>> >> *enable_multiple_failover_requests_from_node*
>>> >> >>> >> This parameter works in connection with
>>> >> >>> >> *failover_require_consensus*. When enabled, a single Pgpool-II
>>> >> >>> >> node can vote for failover multiple times.
>>> >> >>> >> For example, in a three-node cluster, if one Pgpool-II node
>>> >> >>> >> sends the failover request for a particular node twice, that is
>>> >> >>> >> counted as two votes in favour of the failover, and the failover
>>> >> >>> >> will be performed even if we do not get a vote from the other
>>> >> >>> >> two nodes.
>>> >> >>> >>
>>> >> >>> >> When *enable_multiple_failover_requests_from_node* is disabled,
>>> >> >>> >> only the first vote from each Pgpool-II will be accepted and all
>>> >> >>> >> subsequent votes will be marked duplicate and rejected.
>>> >> >>> >> So in that case we require a majority of votes from distinct
>>> >> >>> >> nodes to execute the failover.
>>> >> >>> >> Again, *enable_multiple_failover_requests_from_node* only
>>> >> >>> >> becomes effective when both *failover_when_quorum_exists* and
>>> >> >>> >> *failover_require_consensus* are enabled.
>>> >> >>> >>
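>>> >> >>> >> For illustration, a cautious three-node setup might combine the
>>> >> >>> >> three parameters like this in pgpool.conf (the values are one
>>> >> >>> >> plausible combination, not defaults taken from the patch):
>>> >> >>> >>
>>> >> >>> >> # only run failover_command while the watchdog cluster holds
>>> >> >>> >> # quorum; failed nodes are quarantined while quorum is lost
>>> >> >>> >> failover_when_quorum_exists = on
>>> >> >>> >>
>>> >> >>> >> # additionally require failover requests from a majority
>>> >> >>> >> # (2 of 3 Pgpool-II nodes) before acting
>>> >> >>> >> failover_require_consensus = on
>>> >> >>> >>
>>> >> >>> >> # count at most one vote per Pgpool-II node, so the majority
>>> >> >>> >> # must come from distinct nodes
>>> >> >>> >> enable_multiple_failover_requests_from_node = off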
>>> >> >>> >>
>>> >> >>> >> *Controlling the failover: the coding perspective.*
>>> >> >>> >> Although the failover functions are made quorum and consensus
>>> >> >>> >> aware, there is still a way to bypass the quorum conditions and
>>> >> >>> >> the requirement of consensus.
>>> >> >>> >>
>>> >> >>> >> For this the patch uses the existing request_details flags in
>>> >> >>> >> POOL_REQUEST_NODE to control the behaviour of the failover.
>>> >> >>> >>
>>> >> >>> >> Here are the newly added flag values.
>>> >> >>> >>
>>> >> >>> >> *REQ_DETAIL_WATCHDOG*:
>>> >> >>> >> Setting this flag while issuing the failover command will not
>>> >> >>> >> send the failover request to the watchdog. This flag may not be
>>> >> >>> >> useful anywhere other than where it is already used: mostly it
>>> >> >>> >> keeps a failover command that already originated from the
>>> >> >>> >> watchdog from going back to the watchdog, since otherwise we
>>> >> >>> >> could end up in an infinite loop.
>>> >> >>> >>
>>> >> >>> >> *REQ_DETAIL_CONFIRMED*:
>>> >> >>> >> Setting this flag will bypass the *failover_require_consensus*
>>> >> >>> >> configuration and immediately perform the failover if quorum is
>>> >> >>> >> present. This flag can be used for failover requests originating
>>> >> >>> >> from a PCP command.
>>> >> >>> >>
>>> >> >>> >> *REQ_DETAIL_UPDATE*:
>>> >> >>> >> This flag is used for the command where we are failing back
>>> >> >>> >> quarantined nodes. Setting this flag will not trigger the
>>> >> >>> >> failback_command. A sketch of how these flags might combine is
>>> >> >>> >> below.
>>> >> >>> >>
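>>> >> >>> >> A rough sketch of how such request_details bits might combine
>>> >> >>> >> (the numeric values and the issue_failover_request() helper are
>>> >> >>> >> invented for illustration; the real definitions live in the
>>> >> >>> >> patch):
>>> >> >>> >>
>>> >> >>> >> #include <stdio.h>
>>> >> >>> >>
>>> >> >>> >> /* Illustrative bit values; the patch defines the real ones. */
>>> >> >>> >> #define REQ_DETAIL_WATCHDOG   0x01 /* don't send request to watchdog  */
>>> >> >>> >> #define REQ_DETAIL_CONFIRMED  0x02 /* skip failover_require_consensus */
>>> >> >>> >> #define REQ_DETAIL_UPDATE     0x04 /* quarantine failback: no         */
>>> >> >>> >>                                    /* failback_command                */
>>> >> >>> >>
>>> >> >>> >> /* Hypothetical stand-in for the real request-issuing code path. */
>>> >> >>> >> static void issue_failover_request(int node_id, unsigned char flags)
>>> >> >>> >> {
>>> >> >>> >>     if (!(flags & REQ_DETAIL_WATCHDOG))
>>> >> >>> >>         printf("node %d: request goes through the watchdog\n", node_id);
>>> >> >>> >>     if (flags & REQ_DETAIL_CONFIRMED)
>>> >> >>> >>         printf("node %d: consensus requirement bypassed\n", node_id);
>>> >> >>> >>     if (flags & REQ_DETAIL_UPDATE)
>>> >> >>> >>         printf("node %d: status update only, no failback_command\n", node_id);
>>> >> >>> >> }
>>> >> >>> >>
>>> >> >>> >> int main(void)
>>> >> >>> >> {
>>> >> >>> >>     /* e.g. a PCP-originated detach: confirmed, still watchdog-aware */
>>> >> >>> >>     issue_failover_request(1, REQ_DETAIL_CONFIRMED);
>>> >> >>> >>     return 0;
>>> >> >>> >> }
>>> >> >>> >>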
>>> >> >>> >> *Some conditional flags used:*
>>> >> >>> >> I was not sure about the configuration of each type of failover
>>> >> >>> >> operation. We have three main failover operations:
>>> >> >>> >> NODE_UP_REQUEST, NODE_DOWN_REQUEST, and PROMOTE_NODE_REQUEST.
>>> >> >>> >> So I was thinking, do we need to give users a configuration
>>> >> >>> >> option to enable/disable quorum checking and consensus for each
>>> >> >>> >> individual failover operation type?
>>> >> >>> >> For example: is it a practical configuration where a user would
>>> >> >>> >> want to ensure quorum while performing a NODE_DOWN operation but
>>> >> >>> >> would not want it for NODE_UP?
>>> >> >>> >> So in this patch I use three compile-time defines to enable or
>>> >> >>> >> disable the individual failover operations (see the sketch after
>>> >> >>> >> this list), while we decide on the best solution.
>>> >> >>> >>
>>> >> >>> >> NODE_UP_REQUIRE_CONSENSUS: defining it will enable the quorum
>>> >> >>> >> checking feature for NODE_UP_REQUESTs
>>> >> >>> >>
>>> >> >>> >> NODE_DOWN_REQUIRE_CONSENSUS: defining it will enable the quorum
>>> >> >>> >> checking feature for NODE_DOWN_REQUESTs
>>> >> >>> >>
>>> >> >>> >> NODE_PROMOTE_REQUIRE_CONSENSUS: defining it will enable the
>>> >> >>> >> quorum checking feature for PROMOTE_NODE_REQUESTs
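>>> >> >>> >>
>>> >> >>> >> Such compile-time gating could look roughly like this (a sketch;
>>> >> >>> >> requires_consensus() is an invented wrapper, and only the three
>>> >> >>> >> defines come from the description above):
>>> >> >>> >>
>>> >> >>> >> #define NODE_DOWN_REQUIRE_CONSENSUS /* comment out to disable */
>>> >> >>> >>
>>> >> >>> >> typedef enum { NODE_UP, NODE_DOWN, NODE_PROMOTE } failover_op;
>>> >> >>> >>
>>> >> >>> >> static int requires_consensus(failover_op op)
>>> >> >>> >> {
>>> >> >>> >>     switch (op)
>>> >> >>> >>     {
>>> >> >>> >> #ifdef NODE_UP_REQUIRE_CONSENSUS
>>> >> >>> >>         case NODE_UP:      return 1;
>>> >> >>> >> #endif
>>> >> >>> >> #ifdef NODE_DOWN_REQUIRE_CONSENSUS
>>> >> >>> >>         case NODE_DOWN:    return 1;
>>> >> >>> >> #endif
>>> >> >>> >> #ifdef NODE_PROMOTE_REQUIRE_CONSENSUS
>>> >> >>> >>         case NODE_PROMOTE: return 1;
>>> >> >>> >> #endif
>>> >> >>> >>         default:           return 0; /* old, unchecked behaviour */
>>> >> >>> >>     }
>>> >> >>> >> }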
>>> >> >>> >>
>>> >> >>> >> *Some points for discussion:*
>>> >> >>> >>
>>> >> >>> >> *Do we really need to check the Req_info->switching flag before
>>> >> >>> >> enqueuing a failover request?*
>>> >> >>> >> While working on the patch I was wondering why we disallow
>>> >> >>> >> enqueuing a failover command when a failover is already in
>>> >> >>> >> progress. For example, in the *pcp_process_command*() function,
>>> >> >>> >> if we see the *Req_info->switching* flag set, we bail out with
>>> >> >>> >> an error instead of enqueuing the command. Is that really
>>> >> >>> >> necessary?
>>> >> >>> >>
>>> >> >>> >> *Do we need more granular control over each failover operation?*
>>> >> >>> >> As described in the section "Some conditional flags used", I
>>> >> >>> >> would like opinions on whether we need configuration parameters
>>> >> >>> >> in pgpool.conf to enable/disable quorum and consensus checking
>>> >> >>> >> for individual failover types.
>>> >> >>> >>
>>> >> >>> >> *Which failovers should be marked as confirmed?*
>>> >> >>> >> As defined in the section on REQ_DETAIL_CONFIRMED above, we can
>>> >> >>> >> mark a failover request as not needing consensus; currently the
>>> >> >>> >> requests from PCP commands are fired with this flag. But I was
>>> >> >>> >> wondering whether there may be more places where we need to use
>>> >> >>> >> the flag.
>>> >> >>> >> For example, I currently use the same confirmed flag when a
>>> >> >>> >> failover is triggered because of *replication_stop_on_mismatch*.
>>> >> >>> >>
>>> >> >>> >> I think we should consider this flag for each source of
>>> >> >>> >> failover, i.e. when the failover is triggered
>>> >> >>> >> because of a health_check failure,
>>> >> >>> >> because of a replication mismatch,
>>> >> >>> >> because of a backend error,
>>> >> >>> >> etc.
>>> >> >>> >>
>>> >> >>> >> *Node quarantine behaviour.*
>>> >> >>> >> What do you think about the node quarantine used by this patch?
>>> >> >>> >> Can you think of any problem which could be caused by it?
>>> >> >>> >>
>>> >> >>> >> *What should be the default values for each newly added config
>>> >> >>> >> parameter?*
>>> >> >>> >>
>>> >> >>> >> *TODOs*
>>> >> >>> >>
>>> >> >>> >> -- Updating the documentation is still a todo. Will do that once
>>> >> >>> >> every aspect of the feature is finalised.
>>> >> >>> >> -- Some code warnings and cleanups are still not done.
>>> >> >>> >> -- I am still a little short on testing.
>>> >> >>> >> -- Regression test cases for the feature.
>>> >> >>> >>
>>> >> >>> >> Thoughts and suggestions are most welcome.
>>> >> >>> >>
>>> >> >>> >> Thanks
>>> >> >>> >> Best regards
>>> >> >>> >> Muhammad Usama
>>> >> >>> > _______________________________________________
>>> >> >>> > pgpool-hackers mailing list
>>> >> >>> > pgpool-hackers@pgpool.net
>>> >> >>> > http://www.pgpool.net/mailman/listinfo/pgpool-hackers