<div dir="ltr">Hi Ishii-San<div><br></div><div>The parch works perfect. Many thanks for looking into it.<br><br>Kind regards</div><div>Muhammad Usama</div><div><br><div class="gmail_quote"><div dir="ltr">On Fri, Dec 7, 2018 at 8:20 PM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp">ishii@sraoss.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> Hi Usama,<br>

The essential reason for the hang is that when Pgpool-II tries to read
a command complete message, the query-in-progress flag has already been
reset by the preceding close complete message. Usually resetting the
flag is fine, but in this case, since we had already seen a sync
message, the flag should not have been reset. So my idea is to check
the doing-extended-query flag while receiving a close complete message:
if the doing-extended-query flag is off, do not reset the
query-in-progress flag.
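
In code terms the idea is roughly this (a minimal sketch of the
CloseComplete handling, not the patch itself, using the existing
session context helpers):

    /* Sketch: once a Sync has been processed we are no longer inside
     * an extended query message sequence, so a late close complete
     * message must not clear the query-in-progress flag. */
    if (pool_is_doing_extended_query_message())
        pool_unset_query_in_progress();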

The attached patch works for me. Can you please check?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> Yeah, this is tough. Let me think about how Pgpool-II can deal with it.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
>
>> Hi Ishii-San
>>
>> On Fri, Dec 7, 2018 at 6:30 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
>>
>>> Hi Usama,
>>>
>>> I think I have found the cause. The transaction in question basically
>>> does this:
>>>
>>> BEGIN
>>> :
>>> (lots of extended query messages)
>>> :
>>> parse(COMMIT)
>>> bind(COMMIT)
>>> execute(COMMIT)
>>> Sync
>>> Close prepared statement (S_14) [1]
>>> Close prepared statement (S_11) [2]
>>> BEGIN <-- hang
>>>
>>> This message sequence seems unusual, because the two close
>>> messages [1][2] are issued *after* a sync message (usually an
>>> extended query protocol message sequence, including close messages,
>>> is finished by a sync message). Pgpool-II expects a command complete
>>> message after the last BEGIN, but unfortunately the two close
>>> complete messages, which are the responses to [1][2], arrived before
>>> it. While processing a close complete message, Pgpool-II turned off
>>> the query-in-progress flag, as shown in the session_context. So
>>> read_kind_from_backend tried to read from node 1 and hung.
>>>
>>> I was able to reproduce the hang by using the attached pgproto data
>>> (pgproto.data).
>>>
>>> If the two close messages were issued *before* the sync message,
>>> Pgpool-II does not hang (see the attached pgproto2.data).
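>>>
>>> For reference, the problematic tail of the sequence would look roughly
>>> like this in pgproto's tab-separated data-file notation (a from-memory
>>> sketch only; the attached pgproto.data is the authoritative
>>> reproduction):
>>>
>>> # COMMIT via extended query protocol, finished by Sync
>>> 'P'	""	"COMMIT"	0
>>> 'B'	""	""	0	0	0
>>> 'E'	""	0
>>> 'S'
>>> # close prepared statements *after* the Sync
>>> 'C'	'S'	"S_14"
>>> 'C'	'S'	"S_11"
>>> # next BEGIN; this is where Pgpool-II hangs
>>> 'P'	""	"BEGIN"	0
>>> 'B'	""	""	0	0	0
>>> 'E'	""	0
>>> 'S'
>>> 'Y'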
>>>
>>>
>> Thanks for looking into this. This one seems to be a tough one.
>> Apparently the application hitting the hang is using the JDBC driver,
>> which sends this unusual message sequence.
>>
>> Thanks
>> Kind Regards
>> Muhammad Usama
>>
>>
>>
>>> > Hi Ishii-San
>>> >
>>> > Please find attached the debug log and pgpool.conf.
>>> >
>>> > The output of the stack trace, session context and query context is
>>> > pasted below.
>>> >
>>> > *--STACK-TRACE--*
>>> > #0  0x00007f3d5d218c53 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:81
>>> > #1  0x000000000042e086 in pool_check_fd (cp=cp@entry=0x7f3d5efb43a0) at protocol/pool_process_query.c:696
>>> > #2  0x000000000042e356 in pool_check_fd (cp=cp@entry=0x7f3d5efb43a0) at protocol/pool_process_query.c:728
>>> > #3  0x000000000045fafc in pool_read (cp=0x7f3d5efb43a0, buf=buf@entry=0x7fffbc4830b5, len=len@entry=1) at utils/pool_stream.c:166
>>> > #4  0x000000000043303b in read_kind_from_backend (frontend=frontend@entry=0x2314e98, backend=backend@entry=0x7f3d5efaf750, decided_kind=decided_kind@entry=0x7fffbc4834ba "D") at protocol/pool_process_query.c:3379
>>> > #5  0x000000000043f402 in ProcessBackendResponse (frontend=frontend@entry=0x2314e98, backend=backend@entry=0x7f3d5efaf750, state=state@entry=0x7fffbc483524, num_fields=num_fields@entry=0x7fffbc483522) at protocol/pool_proto_modules.c:2663
>>> > #6  0x0000000000431ea7 in pool_process_query (frontend=0x2314e98, backend=0x7f3d5efaf750, reset_request=reset_request@entry=0) at protocol/pool_process_query.c:320
>>> > #7  0x000000000042cd49 in do_child (fds=fds@entry=0x23134b0) at protocol/child.c:388
>>> > #8  0x00000000004090a5 in fork_a_child (fds=0x23134b0, id=0) at main/pgpool_main.c:659
>>> > #9  0x000000000040fb73 in PgpoolMain (discard_status=discard_status@entry=0 '\000', clear_memcache_oidmaps=clear_memcache_oidmaps@entry=0 '\000') at main/pgpool_main.c:378
>>> > #10 0x0000000000407777 in main (argc=<optimized out>, argv=<optimized out>) at main/main.c:349
>>> >
>>> > *--SESSION CONTEXT--*
>>> > (gdb) print *session_context
>>> > $4 = {process_context = 0x77c260 <process_context_d>, frontend = 0x2314e98,
>>> >   backend = 0x7f3d5efaf750, in_progress = 0 '\000',
>>> >   doing_extended_query_message = 0 '\000',
>>> >   need_to_restore_where_to_send = 0 '\000',
>>> >   where_to_send_save = "\001", '\000' <repeats 126 times>,
>>> >   command_success = 1 '\001', writing_transaction = 0 '\000',
>>> >   failed_transaction = 0 '\000', skip_reading_from_backends = 0 '\000',
>>> >   ignore_till_sync = 0 '\000', transaction_isolation = POOL_UNKNOWN,
>>> >   query_context = 0x248d418, memory_context = 0x22f2558,
>>> >   uncompleted_message = 0x231aeb8,
>>> >   message_list = {capacity = 64, size = 63, sent_messages = 0x2351f70},
>>> >   load_balance_node_id = 0, mismatch_ntuples = 0 '\000',
>>> >   ntuples = {0, -1, 0 <repeats 126 times>}, reset_context = 0 '\000',
>>> >   query_cache_array = 0x0, num_selects = 0, pending_messages = 0x231b020,
>>> >   previous_message_exists = 0 '\000',
>>> >   previous_message = {type = POOL_EXECUTE, contents = 0x24704f0 "SS_13",
>>> >     contents_len = 5, query = "COMMIT", '\000' <repeats 1017 times>,
>>> >     statement = "\000_46", '\000' <repeats 123 times>,
>>> >     portal = '\000' <repeats 127 times>, is_rows_returned = 0 '\000',
>>> >     not_forward_to_frontend = 0 '\000', node_ids = {0, -1},
>>> >     query_context = 0x242ff68}, major = 3, minor = 0}
>>> >
>>> > *--QUERY-TRACE--*
>>> > (gdb) print *session_context->query_context
>>> > $5 = {original_query = 0x248e0e8 "BEGIN", rewritten_query = 0x0,
>>> >   original_length = 6, rewritten_length = -1, parse_tree = 0x248e038,
>>> >   rewritten_parse_tree = 0x0,
>>> >   where_to_send = "\001", '\000' <repeats 126 times>,
>>> >   virtual_master_node_id = 0,
>>> >   query_state = {POOL_UNPARSED <repeats 128 times>},
>>> >   is_cache_safe = 0 '\000', temp_cache = 0x0,
>>> >   is_multi_statement = 0 '\000', dboid = 0, query_w_hex = 0x0,
>>> >   is_parse_error = 0 '\000', num_original_params = -1,
>>> >   pg_terminate_backend_conn = 0x0, skip_cache_commit = 0 '\000',
>>> >   memory_context = 0x7f3d5efbff20}
>>> >
>>> > Thanks
>>> > Best Regards
>>> > Muhammad Usama
>>> >
>>> >
>>> > On Fri, Dec 7, 2018 at 3:49 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
>>> >
>>> >> Hi Usama,
>>> >>
>>> >> I need:
>>> >>
>>> >> - exact Pgpool-II version
>>> >> - pgpool.conf
>>> >> - gdb print command result of *pool_session_context when Pgpool-II hangs
>>> >>
>>> >> Also a full debug log will be helpful.
>>> >>
>>> >> Best regards,
>>> >> --
>>> >> Tatsuo Ishii
>>> >> SRA OSS, Inc. Japan
>>> >> English: http://www.sraoss.co.jp/index_en.php
>>> >> Japanese: http://www.sraoss.co.jp
>>> >>
>>> >> > Hi Ishii San
>>> >> >
>>> >> > Thanks for looking into this. As discussed over the call, please
>>> >> > find attached the pgpool log and stack trace for the stuck issue.
>>> >> > I am currently not able to trim down the test case, so sorry for
>>> >> > throwing a huge log file at you.
>>> >> > There is no special configuration on the PostgreSQL server side,
>>> >> > and this issue only happens when the load balance node and the
>>> >> > primary node are the same. As you can see from the stack trace, the
>>> >> > hang happens in the read_kind_from_backend() function, and the
>>> >> > reason is that the function is trying to read a response from
>>> >> > backend node 1 (standby) while it has only sent the query to
>>> >> > backend node 0 (primary).
>>> >> >
>>> >> > --stack-trace--
>>> >> > #0  0x00007f1775216c53 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:81
>>> >> > #1  0x000000000042def6 in pool_check_fd (cp=cp@entry=0x7f1776fb2328) at protocol/pool_process_query.c:696
>>> >> > #2  0x000000000042e1c6 in pool_check_fd (cp=cp@entry=0x7f1776fb2328) at protocol/pool_process_query.c:728
>>> >> > #3  0x000000000045f92c in pool_read (cp=0x7f1776fb2328, buf=buf@entry=0x7ffebb5f2345, len=len@entry=1) at utils/pool_stream.c:166
>>> >> > #4  0x0000000000432eab in read_kind_from_backend (frontend=frontend@entry=0x2292e88, backend=backend@entry=0x7f1776fad6d8, decided_kind=decided_kind@entry=0x7ffebb5f274a "D") at protocol/pool_process_query.c:3379
>>> >> > #5  0x000000000043f232 in ProcessBackendResponse (frontend=frontend@entry=0x2292e88, backend=backend@entry=0x7f1776fad6d8, state=state@entry=0x7ffebb5f27b4, num_fields=num_fields@entry=0x7ffebb5f27b2) at protocol/pool_proto_modules.c:2653
>>> >> > #6  0x0000000000431d17 in pool_process_query (frontend=0x2292e88, backend=0x7f1776fad6d8, reset_request=reset_request@entry=0) at protocol/pool_process_query.c:320
>>> >> > #7  0x000000000042cbb9 in do_child (fds=fds@entry=0x2291370) at protocol/child.c:387
>>> >> > #8  0x00000000004090a5 in fork_a_child (fds=0x2291370, id=2) at main/pgpool_main.c:659
>>> >> > #9  0x000000000040fb73 in PgpoolMain (discard_status=discard_status@entry=0 '\000', clear_memcache_oidmaps=clear_memcache_oidmaps@entry=0 '\000') at main/pgpool_main.c:378
>>> >> > #10 0x0000000000407777 in main (argc=<optimized out>, argv=<optimized out>) at main/main.c:361
>>> >> >
>>> >> > Thanks
>>> >> > Best Regards
>>> >> > Muhammad Usama
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Thu, Dec 6, 2018 at 3:52 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
>>> >> >
>>> >> >> > Hi Ishii San
>>> >> >> >
>>> >> >> > I was investigating an issue faced by one of the Pgpool-II users,
>>> >> >> > who reported that after executing a certain transaction (a large
>>> >> >> > amount of DELETE / INSERT / UPDATE, then COMMIT), the next
>>> >> >> > execution of the BEGIN statement hangs (using the extended query
>>> >> >> > protocol).
>>> >> >> >
>>> >> >> > I think the problem is in the
>>> >> >> > pool_is_node_to_be_sent_in_current_query() function: in streaming
>>> >> >> > replication mode, when the query-in-progress flag is not set, the
>>> >> >> > function always returns true. Now consider that we have two
>>> >> >> > backend servers, one primary and one standby, and that for a
>>> >> >> > particular session the primary node gets selected as the load
>>> >> >> > balance node. Effectively that session should be sending queries
>>> >> >> > to, and reading responses from, the primary server alone. But with
>>> >> >> > the current implementation of
>>> >> >> > pool_is_node_to_be_sent_in_current_query(), the VALID_BACKEND
>>> >> >> > macro will return true for the standby node as well when the
>>> >> >> > query-in-progress flag is not set (which is wrong when the standby
>>> >> >> > node is not selected for load balancing). If this happens in, say,
>>> >> >> > the read_kind_from_backend() function, we get stuck, since
>>> >> >> > read_kind_from_backend() will be trying to read a response from
>>> >> >> > the standby node while we have never sent any query to that node.
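>>> >> >> >
>>> >> >> > The relevant behavior, as I read it, is roughly the following
>>> >> >> > (a simplified paraphrase for illustration, not the verbatim
>>> >> >> > source):
>>> >> >> >
>>> >> >> > static bool
>>> >> >> > node_is_target_in_current_query(int node_id)
>>> >> >> > {
>>> >> >> >     if (pool_is_query_in_progress() && pool_get_query_context())
>>> >> >> >         return pool_is_node_to_be_sent(pool_get_query_context(),
>>> >> >> >                                        node_id);
>>> >> >> >
>>> >> >> >     /* No query marked in progress: every node is treated as a
>>> >> >> >      * target, including a standby the query was never sent to. */
>>> >> >> >     return true;
>>> >> >> > }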
>>> >> >> >
>>> >> >> > Can you please take a look at the attached patch? I am not sure
>>> >> >> > whether it will have side effects or not.
>>> >> >>
>>> >> >> Unfortunately the patch would not help, and it would probably break
>>> >> >> many cases which are working today.
>>> >> >>
>>> >> >> The scenario you mentioned is already handled by the
>>> >> >> pool_pending_message_* functions defined in
>>> >> >> context/pool_query_context.c. When a message, for example parse, is
>>> >> >> sent from the frontend and forwarded to the backends, the info is
>>> >> >> recorded in a FIFO queue by pool_pending_message_add(), along with
>>> >> >> which backends the message was forwarded to. When the parse message
>>> >> >> is replied to with a "parse complete" message from the backend,
>>> >> >> read_kind_from_backend() can tell which backend it should read a
>>> >> >> message from by looking at the head of the queue. Once that message
>>> >> >> is retrieved (in this example the parse complete message), the
>>> >> >> corresponding entry in the queue (in this example the parse message)
>>> >> >> is pulled out and removed.
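>>> >> >>
>>> >> >> A self-contained sketch of that bookkeeping (hypothetical, simplified
>>> >> >> types and names for illustration; not pgpool's actual implementation):
>>> >> >>
>>> >> >> #include <stdbool.h>
>>> >> >> #include <stdlib.h>
>>> >> >> #include <string.h>
>>> >> >>
>>> >> >> #define NUM_BACKENDS 2
>>> >> >>
>>> >> >> /* One FIFO entry per frontend message forwarded to the backends. */
>>> >> >> typedef struct PendingMessage
>>> >> >> {
>>> >> >>     char kind;                   /* 'P' = parse, 'C' = close, ... */
>>> >> >>     bool sent_to[NUM_BACKENDS];  /* which backends received it */
>>> >> >>     struct PendingMessage *next;
>>> >> >> } PendingMessage;
>>> >> >>
>>> >> >> static PendingMessage *head, *tail;
>>> >> >>
>>> >> >> /* On forwarding a message: record it at the tail of the queue. */
>>> >> >> static void
>>> >> >> pending_add(char kind, const bool sent_to[NUM_BACKENDS])
>>> >> >> {
>>> >> >>     PendingMessage *m = calloc(1, sizeof(PendingMessage));
>>> >> >>
>>> >> >>     m->kind = kind;
>>> >> >>     memcpy(m->sent_to, sent_to, sizeof(m->sent_to));
>>> >> >>     if (tail)
>>> >> >>         tail->next = m;
>>> >> >>     else
>>> >> >>         head = m;
>>> >> >>     tail = m;
>>> >> >> }
>>> >> >>
>>> >> >> /* While reading replies: the head entry tells the reader which
>>> >> >>  * backend(s) the next response must come from. */
>>> >> >> static PendingMessage *
>>> >> >> pending_head(void)
>>> >> >> {
>>> >> >>     return head;
>>> >> >> }
>>> >> >>
>>> >> >> /* Once the matching reply has been consumed: drop the head entry. */
>>> >> >> static void
>>> >> >> pending_pull_out(void)
>>> >> >> {
>>> >> >>     PendingMessage *m = head;
>>> >> >>
>>> >> >>     if (m == NULL)
>>> >> >>         return;
>>> >> >>     head = m->next;
>>> >> >>     if (head == NULL)
>>> >> >>         tail = NULL;
>>> >> >>     free(m);
>>> >> >> }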
>>> >> >>
>>> >> >> This mechanism usually works well unless a backend sends an
>>> >> >> unpredicted message. Recently one such case was reported:
>>> >> >> https://www.pgpool.net/mantisbt/view.php?id=448
>>> >> >>
>>> >> >> In that report, PostgreSQL sends an error message caused by the
>>> >> >> idle_in_transaction_session_timeout setting, which Pgpool-II does
>>> >> >> not expect.
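>>> >> >>
>>> >> >> For example, a postgresql.conf setting like the following (the value
>>> >> >> is illustrative) makes the backend emit an unsolicited error once a
>>> >> >> transaction sits idle past the limit:
>>> >> >>
>>> >> >>     idle_in_transaction_session_timeout = '60s'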
>>> >> >>
>>> >> >> Does the user have any uncommon settings in postgresql.conf?
>>> >> >>
>>> >> >> Best regards,
>>> >> >> --
>>> >> >> Tatsuo Ishii
>>> >> >> SRA OSS, Inc. Japan
>>> >> >> English: http://www.sraoss.co.jp/index_en.php
>>> >> >> Japanese: http://www.sraoss.co.jp
>>> >> >>
>>> >>
>>>
> _______________________________________________
> pgpool-hackers mailing list
> pgpool-hackers@pgpool.net
> http://www.pgpool.net/mailman/listinfo/pgpool-hackers