<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 16, 2019 at 12:49 PM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp">ishii@sraoss.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> On Tue, Apr 16, 2019 at 12:14 PM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp" target="_blank">ishii@sraoss.co.jp</a>> wrote:<br>
> <br>
>> > On Tue, Apr 16, 2019 at 7:55 AM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp" target="_blank">ishii@sraoss.co.jp</a>> wrote:<br>
>> ><br>
>> >> Hi Usama,<br>
>> >><br>
>> >> > Hi TAKATSUKA Haruka,<br>
>> >> ><br>
>> >> > Thanks for the patch, but your patch effectively disables the node<br>
>> >> > quarantine, which doesn't seem like the right way.<br>
>> >> > Since a backend node that was quarantined because of the absence of<br>
>> >> > quorum and/or consensus is already unreachable<br>
>> >> > from the Pgpool-II node, we don't want to select it as the<br>
>> >> > load-balance node (in case the node was a secondary), or, by not<br>
>> >> > marking it as quarantined, consider it available when it is not.<br>
>> >> ><br>
>> >> > In my opinion the right way to tackle the issue is to keep setting<br>
>> >> > the quarantine state as is done currently, but<br>
>> >> > also keep the health check working on quarantined nodes, so that as<br>
>> >> > soon as the connectivity to the<br>
>> >> > quarantined node resumes, it becomes part of the cluster<br>
>> >> > automatically.<br>
>> >><br>
>> >> What if the connection failure between the primary PostgreSQL and one<br>
>> >> of the Pgpool-II servers is permanent? Doesn't health checking continue<br>
>> >> forever?<br>
>> >><br>
>> ><br>
>> > Yes, only for the quarantined PostgreSQL nodes. But I don't think<br>
>> > there is a problem<br>
>> > with that. Conceptually the quarantined nodes are not failed nodes<br>
>> > (they are just unusable at that moment),<br>
>> > and taking a node out of the quarantine zone shouldn't require manual<br>
>> > intervention. So I think it is correct<br>
>> > to continue the health checking on quarantined nodes.<br>
>> ><br>
>> > Do you see an issue with this approach?<br>
>><br>
>> Yes. Think about the case when the PostgreSQL node is the primary. Users<br>
>> cannot issue write queries while the retrying goes on. The network failure<br>
>> could persist for days, and the whole database cluster would be unusable<br>
>> for that period.<br>
>><br>
> <br>
> Yes, that's true, but not allowing the node to go into the quarantine<br>
> state still does not solve it,<br>
> because the primary would be unavailable anyway whether or not we set<br>
> the quarantine state.<br>
> So the whole idea of this patch is to recover from the quarantine state<br>
> automatically as soon as<br>
> the connectivity resumes.<br>
> Similarly, failing over that node is again not an option if the user wants<br>
> to do failover only when the<br>
> network consensus exists; otherwise they should just disable<br>
> failover_require_consensus.<br>
<br>
The question is, why can't we automatically recover from the detached state as<br>
well as from the quarantine state?<br></blockquote><div><br></div><div>Well, ideally we should automatically recover from the detached state too, but the problem</div><div>is that when a node is detached, specifically the primary node, the failover procedure</div><div>promotes another standby to make it the new master, and follow_master adjusts the standby</div><div>nodes to point to the new master. Now even when the old primary that was detached becomes</div><div>reachable again, attaching it back automatically could lead to a variety of problems, including split-brain.</div><div>I think it is possible to implement a mechanism that verifies the detached PostgreSQL node's status when it</div><div>becomes reachable again and, after taking the appropriate actions, attaches it back automatically, but currently</div><div>we don't have anything like that in Pgpool. So instead we rely on user intervention to do the re-attach</div><div>using pcp_attach_node or the online recovery mechanism.</div><div><br></div><div>Now if we look at the quarantined nodes, they are just as good as alive nodes (only unreachable by Pgpool-II at the moment),</div><div>because when a node is quarantined, Pgpool-II never executes any failover and/or follow_master commands</div><div>and does not interfere with the PostgreSQL backend in any way that would alter its timeline or recovery state.</div><div>So when a quarantined node becomes reachable again, it is safe to automatically attach it back to Pgpool-II.</div><div><br></div>
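<div>(For reference, a minimal sketch of that manual re-attach with the pcp tools; the host name "pgpool-vip", PCP port 9898, PCP user "pcp_admin", and node id 0 are example values only, not taken from the reporter's setup:)</div><div><br></div><pre># check the current status of the detached node (example values)
pcp_node_info -h pgpool-vip -p 9898 -U pcp_admin -n 0

# after verifying/re-syncing the old primary, attach it back
pcp_attach_node -h pgpool-vip -p 9898 -U pcp_admin -n 0</pre><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">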
<br>
>> BTW,<br>
>><br>
>> > > When the communication between the master/coordinator pgpool and<br>
>> > > the primary PostgreSQL node is down for a short period<br>
>> ><br>
>> > I wonder why you don't set appropriate health check retry parameters<br>
>> > to avoid such a temporary communication failure in the first place. A<br>
>> > brain surgery to ignore the error reports from Pgpool-II does not seem<br>
>> > to be a sane choice.<br>
>><br>
>> The original reporter didn't answer my question. I think it is likely a<br>
>> problem of misconfiguration (a longer health check retry should be used).<br>
>><br>
>> In summary, I think that for a short-period communication failure, just<br>
>> increasing the health check parameters is enough. However, for a<br>
>> long-period communication failure, the watchdog node should decline the<br>
>> role.<br>
>><br>
> <br>
> I am sorry, I didn't totally get what you mean here.<br>
> Do you mean that the Pgpool-II node that has the primary node in the<br>
> quarantine state should resign from being the master/coordinator<br>
> Pgpool-II node (if it was the master/coordinator) in that case?<br>
<br>
Yes, exactly. Note that if the PostgreSQL node is one of the standbys,<br>
keeping the quarantine state is fine because users' queries can still be<br>
processed.<br></blockquote><div><br></div><div>Yes, that makes total sense. I will make that change as a separate patch.</div><div><br></div><div>Thanks</div><div>Best Regards</div><div>Muhammad Usama</div><div><br></div>
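<div>P.S. On the health check retry settings discussed above: a sketch of a retry configuration that should ride out a short network blip before any failover/quarantine request is raised at all. The values are purely illustrative, not a recommendation for any particular environment:</div><div><br></div><pre># pgpool.conf -- illustrative values only
health_check_period      = 10   # seconds between health check rounds
health_check_timeout     = 20   # seconds before a single check gives up
health_check_max_retries = 6    # failed checks tolerated before the node is reported down
health_check_retry_delay = 10   # seconds between retries
# => roughly 6 x (20 + 10) = 180 seconds of outage tolerated before
#    any degenerate/quarantine request is raised

failover_require_consensus = on # keep consensus-based failover; without
                                # consensus the node is quarantined, not failed over</pre><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">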
<br>
> Thanks<br>
> Best Regards<br>
> Muhammad Usama<br>
> <br>
> <br>
>> >> > Can you please try out the attached patch, to see if the solution<br>
>> >> > works for the situation?<br>
>> >> > The patch is generated against the current master branch.<br>
>> >> ><br>
>> >> > Thanks<br>
>> >> > Best Regards<br>
>> >> > Muhammad Usama<br>
>> >> ><br>
>> >> > On Wed, Apr 10, 2019 at 2:04 PM TAKATSUKA Haruka<br>
>> >> > <<a href="mailto:harukat@sraoss.co.jp" target="_blank">harukat@sraoss.co.jp</a>><br>
>> >> > wrote:<br>
>> >> ><br>
>> >> >> Hello, Pgpool developers<br>
>> >> >><br>
>> >> >><br>
>> >> >> I found that the Pgpool-II watchdog is too strict about duplicate<br>
>> >> >> failover requests with the<br>
>> >> >> allow_multiple_failover_requests_from_node=off setting.<br>
>> >> >><br>
>> >> >> For example, take a watchdog cluster with 3 pgpool instances.<br>
>> >> >> Their backends are PostgreSQL servers using streaming replication.<br>
>> >> >><br>
>> >> >> When the communication between the master/coordinator pgpool and<br>
>> >> >> the primary PostgreSQL node is down for a short period<br>
>> >> >> (or pgpool makes a false-positive judgement for various reasons),<br>
>> >> >> the pgpool tries to fail over but cannot get the consensus,<br>
>> >> >> so it puts the primary node into quarantine status. This cannot<br>
>> >> >> be reset automatically. As a result, the service becomes unavailable.<br>
>> >> >><br>
>> >> >> This case generates logs like the following:<br>
>> >> >><br>
>> >> >> pid 1234: LOG: new IPC connection received<br>
>> >> >> pid 1234: LOG: watchdog received the failover command from local<br>
>> >> >> pgpool-II on IPC interface<br>
>> >> >> pid 1234: LOG: watchdog is processing the failover command<br>
>> >> >> [DEGENERATE_BACKEND_REQUEST] received from local pgpool-II on IPC<br>
>> >> >> interface<br>
>> >> >> pid 1234: LOG: Duplicate failover request from "pg1:5432 Linux pg1"<br>
>> >> >> node<br>
>> >> >> pid 1234: DETAIL: request ignored<br>
>> >> >> pid 1234: LOG: failover requires the majority vote, waiting for<br>
>> >> >> consensus<br>
>> >> >> pid 1234: DETAIL: failover request noted<br>
>> >> >> pid 4321: LOG: degenerate backend request for 1 node(s) from pid<br>
>> >> >> [4321],<br>
>> >> >> is changed to quarantine node request by watchdog<br>
>> >> >> pid 4321: DETAIL: watchdog is taking time to build consensus<br>
>> >> >><br>
>> >> >> Note that this case doesn't have any communication trouble among<br>
>> >> >> the Pgpool watchdog nodes.<br>
>> >> >> You can reproduce it by changing one PostgreSQL's pg_hba.conf to<br>
>> >> >> reject the health check access from one pgpool node for a short period.<br>
>> >> >><br>
>> >> >> The documentation doesn't say that duplicate failover requests make<br>
>> >> >> the node quarantined immediately. I think the request should just be<br>
>> >> >> ignored.<br>
>> >> >><br>
>> >> >> A patch file against the head of V3_7_STABLE is attached.<br>
>> >> >> Pgpool with this patch still prevents failover driven by a single<br>
>> >> >> pgpool's repeated failover requests, but it can recover when the<br>
>> >> >> connection trouble is gone.<br>
>> >> >><br>
>> >> >> Does this change have any problems?<br>
>> >> >><br>
>> >> >><br>
>> >> >> with best regards,<br>
>> >> >> TAKATSUKA Haruka <<a href="mailto:harukat@sraoss.co.jp" target="_blank">harukat@sraoss.co.jp</a>><br>
>> >> >> _______________________________________________<br>
>> >> >> pgpool-hackers mailing list<br>
>> >> >> <a href="mailto:pgpool-hackers@pgpool.net" target="_blank">pgpool-hackers@pgpool.net</a><br>
>> >> >> <a href="http://www.pgpool.net/mailman/listinfo/pgpool-hackers" rel="noreferrer" target="_blank">http://www.pgpool.net/mailman/listinfo/pgpool-hackers</a><br>
>> >> >><br>
>> >><br>
>><br>
</blockquote></div></div>