<div dir="ltr"><div>Hi Ishii-San</div><div><br></div><div>While rebasing the multiple-parser patch after the PostgreSQL parser import commit, I was thinking about ways to minimize the maintenance and merging overhead for the new minimal parser, as you suggested upthread.</div><div><br></div><div>So in the new attached version of the patch, instead of maintaining two separate gram files for the standard and minimal parsers, I have renamed the original gram.y file to gram_template.y and added a new make target in the src/parser/ Makefile, <b><i>'make generate_parsers'</i></b>, to generate the grammar files (gram_minimal.y and gram_standard.y) for the minimal and standard parsers.</div><div><br></div><div>The Makefile uses the sunifdef (<a href="http://www.linuxcertif.com/man/1/sunifdef/">http://www.linuxcertif.com/man/1/sunifdef/</a>) utility to generate the grammar files from the gram_template.y file, so I have also made the appropriate changes to the Pgpool-II configure script so that the sunifdef utility path can be configured. With the patch, we can also give an explicit sunifdef path to configure using the <i><b>--with-sunifdef</b></i> switch if the utility is not present at the standard location.</div><div><br></div><div>The patch also contains a README file in the src/parser/ directory with all the information about how to generate the grammar files and how to import the PostgreSQL parser.</div><div><br></div><div>Also, note that sunifdef will only be required when we want to generate the grammar files, which would be only after importing a new PostgreSQL parser or when we want to make explicit grammar changes.
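To make the idea concrete, here is a rough sketch (in Python, purely for illustration; the real work is done by sunifdef) of the transformation the new target applies to gram_template.y. The grammar fragment and the MINIMAL_PARSER symbol below are my assumptions for illustration, not taken from the patch:

```python
# Minimal sketch (not the actual sunifdef implementation) of what the
# generation step does to gram_template.y: resolve #ifdef SYMBOL / #else /
# #endif markers, keeping one branch and dropping the other together with
# the markers themselves.  bison cannot process these markers on its own,
# which is why an external tool resolves them before bison runs.
def resolve_ifdefs(template: str, symbol: str, defined: bool) -> str:
    out = []
    stack = []  # one bool per open #ifdef block: is its current branch active?
    for line in template.splitlines():
        stripped = line.strip()
        if stripped == f"#ifdef {symbol}":
            stack.append(defined)
        elif stripped == "#else" and stack:
            stack[-1] = not stack[-1]
        elif stripped == "#endif" and stack:
            stack.pop()
        elif all(stack):  # emit only when every enclosing branch is active
            out.append(line)
    return "\n".join(out)

# Hypothetical template fragment; the MINIMAL_PARSER symbol and rule names
# are assumptions for illustration only.
template = """\
InsertStmt:
#ifdef MINIMAL_PARSER
\tINSERT INTO insert_target { short_circuit(); }
#else
\topt_with_clause INSERT INTO insert_target insert_rest
#endif
"""

gram_minimal = resolve_ifdefs(template, "MINIMAL_PARSER", True)
gram_standard = resolve_ifdefs(template, "MINIMAL_PARSER", False)
```

Running the real generation step twice, once with the symbol defined and once with it undefined, yields gram_minimal.y and gram_standard.y from the single template.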
For a normal build of Pgpool-II, sunifdef is not needed.</div><div><br></div><div>Thoughts and comments?</div><div><br></div><div><div>P.S. Apologies for the very large patch; most of it is autogenerated files.</div><div><br></div></div><div>Thanks</div><div>Best regards</div><div>Muhammad Usama</div><div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, May 2, 2019 at 4:11 PM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp">ishii@sraoss.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Usama,<br>
<br>
> Hi Ishii-San<br>
> <br>
> Thanks for the feedback. I have been exploring options to generate two<br>
> parsers from one gram.y file, but unfortunately I can't figure out any<br>
> clean and easy way to do that. The problem is that the bison parser<br>
> generator does not support preprocessor directives out of the box, and<br>
> I am not sure how we could achieve that with scripts alone, without a<br>
> preprocessor.<br>
> <br>
> However, after analysing the end result of using one gram.y file to<br>
> generate two parsers, I think it would be harder to maintain and,<br>
> especially, to merge the gram.y file in that case. Adding the code to<br>
> the existing gram.y file would increase the number of differences<br>
> between our gram.y file and PostgreSQL's, which effectively means a<br>
> bigger merging effort at every PostgreSQL release.<br>
> <br>
> And if we go down the path of having two gram files, as in the original<br>
> patch I shared, then despite the downside of having another parser file<br>
> to maintain, the approach has its own benefits as well. For example, it<br>
> would be easier to track down errors and problems.<br>
> <br>
> Also, the idea of having the minimal parser for master-slave mode is to<br>
> speed up queries by quickly determining whether we need to parse the<br>
> complete query or can make do with minimal information. My motivation<br>
> for adding the minimal parser was that, in a perfect scenario, we<br>
> should only fully parse the read statements and short-circuit the<br>
> parsing of almost all write queries. Although the patch currently does<br>
> that short-circuiting only for INSERT statements, it would be easier to<br>
> enhance the minimal parser in the future if we go with the<br>
> two-gram-files approach.<br>
> <br>
> Another thing is that the minimal parser is based on the philosophy<br>
> that only queries that can be load-balanced must be parsed completely;<br>
> the rest of the statements need to be routed to the primary anyway and<br>
> can be treated as write statements by default, whatever the actual<br>
> statement is. So effectively we might not need to merge the minimal<br>
> parser with PostgreSQL's parser in most cases, and the minimal parser<br>
> would not add too much maintenance overhead.<br>
<br>
But the SET statement needs special treatment and needs the full parser<br>
for it. See is_set_transaction_serializable (in<br>
src/context/pool_query_context.c) for an example. The same can be said<br>
of LOCK, SAVEPOINT, BEGIN (START TRANSACTION), and PREPARE.<br>
<br>
Also for DML, at least the target table info needs to be analyzed,<br>
because the in-memory query cache needs that info.<br>
<br>
> What are your thoughts and suggestions on this?<br>
<br>
If we go for the two-gram.y approach, we probably want to minimize the<br>
diff between the minimal parser and the full parser, in order to save<br>
the work of creating the minimal parser, no?<br>
<br>
> Thanks<br>
> Best Regards<br>
> Muhammad Usama<br>
> <br>
> <br>
> On Fri, Mar 8, 2019 at 7:40 AM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp" target="_blank">ishii@sraoss.co.jp</a>> wrote:<br>
> <br>
>> Hi Usama,<br>
>><br>
>> Thank you for the new patch. The essential idea to have two parsers<br>
>> seems to be good. However, I have a concern about the maintenance cost<br>
>> for those two distinct parsers. It seems the difference between the<br>
>> two parsers is subtle. So why don't you have just one original parser<br>
>> file (gram.y) and then generate the other automatically using a<br>
>> script? Or you could have one gram.y file with some ifdef's to<br>
>> differentiate the two parsers. What do you think?<br>
>><br>
>> Best regards,<br>
>> --<br>
>> Tatsuo Ishii<br>
>> SRA OSS, Inc. Japan<br>
>> English: <a href="http://www.sraoss.co.jp/index_en.php" rel="noreferrer" target="_blank">http://www.sraoss.co.jp/index_en.php</a><br>
>> Japanese:<a href="http://www.sraoss.co.jp" rel="noreferrer" target="_blank">http://www.sraoss.co.jp</a><br>
>><br>
>> > Hi Ishii-San<br>
>> ><br>
>> > Thanks for looking into the patch. Your observation is correct: the<br>
>> > patch I sent would break the timestamp-rewriting functionality.<br>
>> ><br>
>> > So here is another go at this. See the newly attached patch, which<br>
>> > uses two parsers:<br>
>> > 1- src/parser/gram.y, the standard parser brought in from the PG code<br>
>> > 2- src/parser/gram_minimal.y, a modified parser that short-circuits<br>
>> > INSERT and UPDATE statements<br>
>> ><br>
>> > The idea here is that when replication mode is enabled we use the<br>
>> > standard parser and parse everything, since we need the information<br>
>> > for timestamps and other things. But when we are operating in<br>
>> > master-slave mode we don't need any extra information for WRITE<br>
>> > queries, since we will always be sending them to the primary anyway.<br>
>> > So the newly added minimal parser short-circuits INSERT and UPDATE<br>
>> > queries (and can also be extended to short-circuit all the write<br>
>> > queries, like CREATE and ALTER, wherever we can) to enhance the<br>
>> > write-query performance of Pgpool-II.<br>
>> ><br>
>> > The parser selection is made in the raw_parser() function and is<br>
>> > controlled by the "bool use_minimal" argument; currently in the patch<br>
>> > we always use the minimal parser whenever replication mode is<br>
>> > disabled in pgpool.conf.<br>
>> ><br>
>> > This is a very radical change in the parsing function of Pgpool-II,<br>
>> > but it can provide us with the platform to minimise the parsing<br>
>> > overhead of Pgpool-II in master-slave mode.<br>
>> ><br>
>> > Your thoughts and comments..<br>
>> ><br>
>> > Thanks<br>
>> > Best Regards<br>
>> > Muhammad Usama<br>
>> ><br>
>> > On Wed, Feb 27, 2019 at 3:58 AM Tatsuo Ishii <<a href="mailto:ishii@sraoss.co.jp" target="_blank">ishii@sraoss.co.jp</a>> wrote:<br>
>> ><br>
>> >> Hi Usama,<br>
>> >><br>
>> >> I think this patch breaks replication mode because it throws away<br>
>> >> information needed for time stamp rewriting. What do you think?<br>
>> >><br>
>> >> Best regards,<br>
>> >> --<br>
>> >> Tatsuo Ishii<br>
>> >> SRA OSS, Inc. Japan<br>
>> >> English: <a href="http://www.sraoss.co.jp/index_en.php" rel="noreferrer" target="_blank">http://www.sraoss.co.jp/index_en.php</a><br>
>> >> Japanese:<a href="http://www.sraoss.co.jp" rel="noreferrer" target="_blank">http://www.sraoss.co.jp</a><br>
>> >><br>
>> >> > Hi Ishii San<br>
>> >> ><br>
>> >> > Can you have a look at the attached patch, which tries to extract<br>
>> >> > some performance in the area of query parsing and query analysis<br>
>> >> > for routing decisions? Most of the performance gain from the<br>
>> >> > changes in the patch can be observed with INSERT statements<br>
>> >> > carrying large data.<br>
>> >> ><br>
>> >> > The patch contains the following changes:<br>
>> >> > ==========<br>
>> >> > 1-- The idea here is that Pgpool-II needs only very little<br>
>> >> > information about a query, especially an INSERT query, to decide<br>
>> >> > where it needs to send it. For example, for INSERT queries we only<br>
>> >> > need the type of the query and the relation name. But the parser we<br>
>> >> > use in Pgpool-II is taken from the PostgreSQL source, which parses<br>
>> >> > the complete query including the value lists (which are not<br>
>> >> > required by Pgpool). Parsing the values part seems quite harmless<br>
>> >> > for small statements, but for INSERTs with lots of column values<br>
>> >> > and large data in each value item it becomes significant. So this<br>
>> >> > patch adds a smaller bison grammar rule to short-circuit INSERT<br>
>> >> > statement parsing once it has enough of the information required by<br>
>> >> > Pgpool-II.<br>
>> >> ><br>
>> >> > 2-- The patch also re-arranges some of the if statements in the<br>
>> >> > pool_where_to_send() function and tries to make sure the<br>
>> >> > pattern_compare and pool_has_function_call calls are only made when<br>
>> >> > they are absolutely necessary.<br>
>> >> ><br>
>> >> > 3-- Another thing this patch does is try to save raw_parser() calls<br>
>> >> > for unrecognised queries. Instead of invoking the parser on the<br>
>> >> > "dummy read" and "dummy write" queries when there is a syntax error<br>
>> >> > in the original query, the patch adds functions that return<br>
>> >> > pre-built parse trees for these dummy queries.<br>
>> >> ><br>
>> >> > 4-- The strlen() call is removed from the scanner_init() function<br>
>> >> > and the length is passed in as an argument instead; we already have<br>
>> >> > the query length in most cases before invoking the parser, so why<br>
>> >> > waste CPU cycles on it. Again, this becomes significant for large<br>
>> >> > query strings.<br>
>> >> ><br>
>> >> > Finally, the patch removes unnecessary calls to<br>
>> >> > pool_is_likely_select().<br>
>> >> ><br>
>> >> > As mentioned above, the areas of improvement in this patch are<br>
>> >> > mostly around write queries. For testing I used an INSERT query<br>
>> >> > with large binary data, and I am seeing very large performance<br>
>> >> > gains with this patch:<br>
>> >> ><br>
>> >> > *Current Master Branch*<br>
>> >> > ================<br>
>> >> > usama=# \i sample.sql<br>
>> >> > id<br>
>> >> > -----<br>
>> >> > 104<br>
>> >> > (1 row)<br>
>> >> ><br>
>> >> > INSERT 0 1<br>
>> >> > Time: *2059.807* ms (00:02.060)<br>
>> >> ><br>
>> >> > *WITH PATCH*<br>
>> >> > ===============<br>
>> >> > usama=# \i sample.sql<br>
>> >> > id<br>
>> >> > -----<br>
>> >> > 102<br>
>> >> > (1 row)<br>
>> >> ><br>
>> >> > INSERT 0 1<br>
>> >> > Time: *314.237* ms<br>
>> >> ><br>
>> >> ><br>
>> >> > Performance gain: *655.50 %* (roughly a 6.5x speedup)<br>
>> >> ><br>
>> >> ><br>
>> >> > Comments and suggestions?<br>
>> >> ><br>
>> >> > Please let me know if you also want the test data I used for the<br>
>> >> > INSERT test.<br>
>> >> ><br>
>> >> > Thanks<br>
>> >> > Best regards<br>
>> >> > Muhammad Usama<br>
>> >><br>
>><br>
</blockquote></div></div>