<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 16, 2019 at 1:27 PM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp">ishii@sraoss.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">>> Question is, why can't we automatically recover from detached state as<br>
>> well as quarantine state?<br>
>><br>
> <br>
> Well, ideally we should automatically recover from the detached state as<br>
> well, but the problem is that when a node is detached, specifically the<br>
> primary node, the failover procedure promotes a standby to make it the new<br>
> master and follow_master adjusts the standby nodes to point to that new<br>
> master. Now even when the old primary that was detached becomes reachable<br>
> again, attaching it back automatically could lead to a variety of problems,<br>
> including split-brain.<br>
> I think it is possible to implement a mechanism that verifies the status of<br>
> the detached PostgreSQL node when it becomes reachable again and, after<br>
> taking the appropriate actions, attaches it back automatically, but<br>
> currently we don't have anything like that in Pgpool. So instead we rely on<br>
> user intervention to do the re-attach using pcp_attach_node or the online<br>
> recovery mechanism.<br>
> <br>
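For reference, the manual re-attach mentioned above is done through the pcp interface; a minimal sketch, in which the host, pcp port, user name, and node id are illustrative placeholders, not values from this thread:

```
# Re-attach backend node 0 through the pcp interface, after its state has
# been verified manually. -h/-p/-U/-n values are illustrative placeholders.
pcp_attach_node -h localhost -p 9898 -U pgpool -n 0
```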
> Now if we look at quarantined nodes, they are just as good as alive nodes<br>
> (only unreachable by pgpool at the moment), because when the node was<br>
> quarantined, Pgpool-II never executed any failover and/or follow_master<br>
> commands and did not interfere with the PostgreSQL backend in any way that<br>
> would alter its timeline or recovery state. So when a quarantined node<br>
> becomes reachable again, it is safe to automatically connect it back to<br>
> Pgpool-II.<br>
<br>
Ok, that makes sense.<br>
<br>
>> >> BTW,<br>
>> >><br>
>> >> > > When the communication between master/coordinator pgpool and<br>
>> >> > > primary PostgreSQL node is down during a short period<br>
>> >> ><br>
>> >> > I wonder why you don't set appropriate health check retry parameters<br>
>> >> > to avoid such a temporary communication failure in the first place. A<br>
>> >> > brain surgery to ignore the error reports from Pgpool-II does not seem<br>
>> >> > to be a sane choice.<br>
>> >><br>
>> >> The original reporter didn't answer my question. I think it is likely<br>
>> >> a problem of misconfiguration (a longer health check retry should be used).<br>
>> >><br>
>> >> In summary I think for shorter period communication failure just<br>
>> >> increasing health check parameters is enough. However for longer<br>
>> >> period communication failure, the watchdog node should decline the<br>
>> >> role.<br>
>> >><br>
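As a sketch of the retry tuning discussed above, the relevant pgpool.conf parameters look roughly like this (the values are illustrative, not recommendations). With such settings, a backend has to stay unreachable for roughly health_check_max_retries × (health_check_timeout + health_check_retry_delay) seconds before a failover is even considered, which is what absorbs short communication failures:

```
# pgpool.conf -- health check tuning (illustrative values)
health_check_period = 10        # seconds between health checks
health_check_timeout = 20      # timeout for a single health check attempt
health_check_max_retries = 3   # retries before the node is considered down
health_check_retry_delay = 5   # seconds to wait between retries
```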
>> ><br>
>> > I am sorry, I didn't totally get what you mean here.<br>
>> > Do you mean that the pgpool-II node that has the primary node in<br>
>> > quarantine state should resign as the master/coordinator<br>
>> > (if it was the master/coordinator) in that case?<br>
>><br>
>> Yes, exactly. Note that if the PostgreSQL node is one of the standbys,<br>
>> keeping the quarantine state is fine because users' queries can still be<br>
>> processed.<br>
>><br>
> <br>
> Yes, that makes total sense. I will make that change as a separate patch.<br>
<br>
Thanks. However this will change existing behavior. Probably we should<br>
make the change against the master branch only?<br></blockquote><div><br></div><div>Probably yes, because the fix I currently have in mind for this involves a configurable timeout parameter</div><div>to make the master pgpool resign. Let me come up with the patch, and then we can work out which parts of it</div><div>need to be back-ported.</div><div>And regarding the patch I shared upthread to continue the health check on quarantined nodes, do you think we should</div><div>also back-patch it to older versions as well?</div><div><br></div><div>Thanks</div><div>Best Regards</div><div>Muhammad Usama</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> Thanks<br>
> Best Regards<br>
> Muhammad Usama<br>
> <br>
> <br>
>> > Thanks<br>
>> > Best Regards<br>
>> > Muhammad Usama<br>
>> ><br>
>> ><br>
>> >> >> > Can you please try out the attached patch, to see if the solution<br>
>> >> >> > works for the situation?<br>
>> >> >> > The patch is generated against the current master branch.<br>
>> >> >> ><br>
>> >> >> > Thanks<br>
>> >> >> > Best Regards<br>
>> >> >> > Muhammad Usama<br>
>> >> >> ><br>
>> >> >> > On Wed, Apr 10, 2019 at 2:04 PM TAKATSUKA Haruka <<br>
>> >> <a href="mailto:harukat@sraoss.co.jp" target="_blank">harukat@sraoss.co.jp</a>><br>
>> >> >> > wrote:<br>
>> >> >> ><br>
>> >> >> >> Hello, Pgpool developers<br>
>> >> >> >><br>
>> >> >> >><br>
>> >> >> >> I found the Pgpool-II watchdog is too strict about duplicate failover<br>
>> >> >> >> requests with the allow_multiple_failover_requests_from_node=off setting.<br>
>> >> >> >><br>
>> >> >> >> For example, A watchdog cluster with 3 pgpool instances is here.<br>
>> >> >> >> Their backends are PostgreSQL servers using streaming replication.<br>
>> >> >> >><br>
>> >> >> >> When the communication between the master/coordinator pgpool and the<br>
>> >> >> >> primary PostgreSQL node is down for a short period<br>
>> >> >> >> (or pgpool makes a false-positive judgement for various reasons),<br>
>> >> >> >> the pgpool tries to fail over but cannot get consensus,<br>
>> >> >> >> so it puts the primary node into quarantine status. That status cannot<br>
>> >> >> >> be reset automatically. As a result, the service becomes unavailable.<br>
>> >> >> >><br>
>> >> >> >> This case generates logs like the following:<br>
>> >> >> >><br>
>> >> >> >> pid 1234: LOG: new IPC connection received<br>
>> >> >> >> pid 1234: LOG: watchdog received the failover command from local<br>
>> >> >> >> pgpool-II on IPC interface<br>
>> >> >> >> pid 1234: LOG: watchdog is processing the failover command<br>
>> >> >> >> [DEGENERATE_BACKEND_REQUEST] received from local pgpool-II on IPC<br>
>> >> >> interface<br>
>> >> >> >> pid 1234: LOG: Duplicate failover request from "pg1:5432 Linux<br>
>> pg1"<br>
>> >> >> node<br>
>> >> >> >> pid 1234: DETAIL: request ignored<br>
>> >> >> >> pid 1234: LOG: failover requires the majority vote, waiting for<br>
>> >> >> consensus<br>
>> >> >> >> pid 1234: DETAIL: failover request noted<br>
>> >> >> >> pid 4321: LOG: degenerate backend request for 1 node(s) from pid<br>
>> >> >> [4321],<br>
>> >> >> >> is changed to quarantine node request by watchdog<br>
>> >> >> >> pid 4321: DETAIL: watchdog is taking time to build consensus<br>
>> >> >> >><br>
>> >> >> >> Note that this case doesn't involve any communication trouble among<br>
>> >> >> >> the Pgpool watchdog nodes.<br>
>> >> >> >> You can reproduce it by changing one PostgreSQL server's pg_hba.conf to<br>
>> >> >> >> reject health check access from one pgpool node for a short period.<br>
>> >> >> >><br>
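The reproduction step above amounts to a temporary pg_hba.conf entry on one PostgreSQL server, placed before the matching allow rules; the user name and address below are placeholders for the health check user and one pgpool node, not values from this thread:

```
# pg_hba.conf -- temporarily reject health check connections from one pgpool
# node; "pgpool" and 192.168.10.20/32 are illustrative placeholders
host  all  pgpool  192.168.10.20/32  reject
```

Reload the configuration (e.g. `SELECT pg_reload_conf();`) to apply it, and remove the line again after a short period.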
>> >> >> >> The documentation doesn't say that duplicate failover requests make<br>
>> >> >> >> the node quarantined immediately. I think it should just ignore the<br>
>> >> >> >> request.<br>
>> >> >> >><br>
>> >> >> >> A patch file for the head of V3_7_STABLE is attached.<br>
>> >> >> >> Pgpool with this patch still blocks failover driven by a single<br>
>> >> >> >> pgpool's repeated failover requests, but it can recover when the<br>
>> >> >> >> connection trouble is gone.<br>
>> >> >> >><br>
>> >> >> >> Does this change have any problem?<br>
>> >> >> >><br>
>> >> >> >><br>
>> >> >> >> with best regards,<br>
>> >> >> >> TAKATSUKA Haruka <<a href="mailto:harukat@sraoss.co.jp" target="_blank">harukat@sraoss.co.jp</a>><br>
>> >> >> >> _______________________________________________<br>
>> >> >> >> pgpool-hackers mailing list<br>
>> >> >> >> <a href="mailto:pgpool-hackers@pgpool.net" target="_blank">pgpool-hackers@pgpool.net</a><br>
>> >> >> >> <a href="http://www.pgpool.net/mailman/listinfo/pgpool-hackers" rel="noreferrer" target="_blank">http://www.pgpool.net/mailman/listinfo/pgpool-hackers</a><br>
>> >> >> >><br>
>> >> >><br>
>> >><br>
>><br>
</blockquote></div></div>