Analyzing the value of sqltoy-orm through real scenarios (where sqltoy goes far beyond mybatis)

sqltoy open source address: https://github.com/sagframe/sagacity-sqltoy

sqltoy gitee address: https://gitee.com/sagacity/sagacity-sqltoy

Before this, I wrote an article listing more than a dozen features where sqltoy is far stronger than mybatis, each backed by a fairly rigorous comparison.

For details, see: Why sqltoy-orm is far stronger than mybatis: https://www.oschina.net/news/114671/sqltoy-orm-vs-mybatis#comments

Many people say I am just bashing mybatis, but that is beside the point. Writing these articles is not about forcing anyone's choice; it is about sharing, and about the pride developers take in their own work!

The development of sqltoy was originally driven not by technology but by project-management needs. So sqltoy is not pushing technology for its own sake; it embodies a distinct way of thinking. Its goal is to give you a reliable, full-featured ORM framework that eventually lets you stop thinking about the ORM at all and free your energy for other things, instead of being endlessly drained and beaten down by it!

Behind almost every innovation in sqltoy there is an interesting, sometimes painful story. You may well ask: what kind of experience does it take to distill so many functions that fit real scenarios so well?

  • How sql is written: Many people misunderstand this point. They assume writing queries in Java code, jooq-style, is simpler. That holds for trivial cases, but once a query joins more than two tables you end up debugging it from a database client anyway, and later maintenance becomes extremely difficult. Plain sql, by contrast, is familiar to product, testing, and operations people alike, which makes optimization, debugging, change, maintenance, and communication far easier! Coming back a year later to maintain a pile of jooq method chains full of and/or calls is a nightmare, whereas sql is something everyone can read. As for mybatis, its xml logic tags break up the original structure of the sql, which also makes it hard to read and maintain.

     sqltoy started precisely from this pursuit of elegant sql. The original project was query-heavy: each query function had many conditions, and both the conditions and the result columns needed constant adjustment. Development was miserable! Every change meant extracting the sql from the code, verifying it in a database client, then writing it back. I kept wondering: is there a way to keep the sql in the code essentially identical to what runs in the client, comments included? Then development, debugging, and maintenance would be unified. By a stroke of luck I arrived at sqltoy's current sql style, and the effect was dramatic. Doesn't it look essentially the same as what you would run in a client?
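As an illustration of that sql style, here is a sketch of a sqltoy sql file, based on its documented `#[]` optional-fragment and `:named`-parameter syntax (the table and parameter names here are made up for the example):

```xml
<sql id="staff_search">
  <value>
    <![CDATA[
    -- comments survive right here in the sql, exactly as in a client
    select t.staff_id, t.staff_name, t.status
      from staff_info t
     where t.status = 1
       -- each #[..] fragment drops out automatically when its parameter is null
       #[and t.staff_name like :staffName]
       #[and t.create_time >= :beginDate]
    ]]>
  </value>
</sql>
```

Strip the `#[` and `]` markers and the statement runs unchanged in any database client, which is the whole point: one text serves development, debugging, and maintenance.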

  • Uniqueness validation: customer name or code, mobile number, email — classic unique-check scenarios, right?
  • Fetch random records: sampling, exams, questionnaires and similar scenarios, right?
  • Fetch top records: rankings, right?
  • updateFetch: indispensable in scenarios with strict transactional requirements such as fund accounts and inventory ledgers. Previously you had to load(entity, LockMode) to fetch the row with a lock and then update it — two database round trips.
  • treeTableWrapper: tree-structured tables come up in a great many scenarios, right?
  • Cached translation: when indexes, partitioned tables, and even sharding have all been exhausted, cached translation can lift performance dramatically once more!

    A bank in Shanghai: we took over a system rebuild. At the time, the statistical reports were kicked off after work in the evening, and the results came out only the next morning! The whole reason for the rebuild was that it was too slow.

    There were large numbers of join queries, all running directly against raw business tables, with no data prepared for statistics when the business data was written. The contract signed with the customer specified: 2 years of data at scale, 50 concurrent users, and the slowest query under 20 seconds! After racking our brains, with data ETL plus cached translation we finally got it down to roughly 3~5 seconds.
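The cached-translation idea above — replacing join-based code-to-name lookups with an in-memory dictionary — can be sketched in plain Java. This is only an illustration of the technique, not sqltoy's actual API; the class and cache contents are invented:

```java
import java.util.*;

public class CacheTranslateDemo {
    // Dictionary cache: code -> display name. In a real system this would be
    // loaded once from the dictionary table and refreshed periodically.
    static final Map<String, String> ORG_CACHE = Map.of(
            "C01", "Shanghai Branch",
            "C02", "Beijing Branch");

    // Translate the orgCode column of each result row in memory,
    // instead of joining the organization table inside the SQL.
    static List<Map<String, Object>> translate(List<Map<String, Object>> rows) {
        for (Map<String, Object> row : rows) {
            String code = (String) row.get("orgCode");
            // Unknown codes fall back to the raw code value.
            row.put("orgName", ORG_CACHE.getOrDefault(code, code));
        }
        return rows;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("orgCode", "C01");
        List<Map<String, Object>> rows = new ArrayList<>();
        rows.add(row);
        translate(rows);
        System.out.println(row.get("orgName")); // prints: Shanghai Branch
    }
}
```

The statistics sql then only touches the fact tables; the expensive dictionary joins disappear entirely, which is where the large speedup comes from.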

  • Paging optimization: traditional paging merely implements paging; it does nothing to optimize it.

    An ERP order-query report at one company: a single page showed the execution status of orders, including inbound/outbound and receipt/payment states. It was slow because the related statistics were recomputed in full, under the query conditions, on every request. A single query took minutes, and the customer complained loudly, saying outright that our company's development quality was terrible!

    I went on site to sort it out, and the customer looked at me with open distrust: your company is just too weak! That night I came up with fast paging and worked on it until 5:00 in the morning. It went live the next day, and a query came back in about 6 seconds. The customer finally showed approval!
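One core trick of this kind of paging optimization is not to re-run the expensive count query on every page flip: the total for a given sql-plus-parameters combination is cached for a short window and reused. The sketch below shows the idea in plain Java (the class and method names are invented for illustration, not sqltoy's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class PageCountCache {
    // key -> {recordCount, expiryTimeMillis}
    private final Map<String, long[]> cache = new ConcurrentHashMap<>();
    private final long aliveMillis;

    public PageCountCache(long aliveMillis) {
        this.aliveMillis = aliveMillis;
    }

    // Reuse a recent total for the same sql+params instead of re-running
    // the expensive count query while the user flips through pages.
    public long getCount(String sqlKey, Supplier<Long> countQuery) {
        long now = System.currentTimeMillis();
        long[] hit = cache.get(sqlKey);
        if (hit != null && hit[1] > now) {
            return hit[0]; // cached total still fresh
        }
        long total = countQuery.get(); // run the real count query once
        cache.put(sqlKey, new long[]{total, now + aliveMillis});
        return total;
    }

    public static void main(String[] args) {
        PageCountCache cache = new PageCountCache(60_000);
        final int[] calls = {0};
        Supplier<Long> count = () -> { calls[0]++; return 1234L; };
        long a = cache.getCount("orderQuery|status=OPEN", count);
        long b = cache.getCount("orderQuery|status=OPEN", count);
        System.out.println(a + " " + b + " calls=" + calls[0]); // prints: 1234 1234 calls=1
    }
}
```

With the count out of the way, each page flip costs only the page query itself, which is why response time can drop from minutes to seconds on heavy statistical pages.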

  • Row and column transformation: needed for cross-tab reports, right?

    1. The earliest trigger: a project needed to display data by a user-selected month range, and a developer built a separate query for every month combination. When I criticized him, he just looked at me with innocent, aggrieved eyes!

    2. A bank project in Shandong: the multi-dimensional cross-tab selection was so complicated that the developer wanted nothing more than to go home every day.

    Of course, this row-to-column capability has since been built into our own reporting platform, precisely to avoid such bitter scenes!
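The row-to-column transformation itself is simple to state: rows of (category, month, amount) become one row per category with one column per month. A minimal plain-Java sketch of the operation (illustrative only, not sqltoy's API):

```java
import java.util.*;

public class PivotDemo {
    // Pivot rows of [category, month, amount] into one map per category,
    // keyed by month (the row-to-column transformation).
    static Map<String, Map<String, Object>> pivot(List<Object[]> rows) {
        Map<String, Map<String, Object>> result = new LinkedHashMap<>();
        for (Object[] row : rows) {
            String category = (String) row[0];
            String month = (String) row[1];
            result.computeIfAbsent(category, k -> new LinkedHashMap<>())
                  .put(month, row[2]);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[]{"Sales", "2020-01", 100});
        rows.add(new Object[]{"Sales", "2020-02", 120});
        rows.add(new Object[]{"R&D", "2020-01", 80});
        Map<String, Map<String, Object>> pivoted = pivot(rows);
        System.out.println(pivoted.get("Sales").get("2020-02")); // prints: 120
    }
}
```

Doing this as a framework-level post-processing step means the sql stays a plain group-by, and the selected month range can vary freely without any per-month query code.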

  • Group summary and averages: need statistics? Sure, you can write the sql yourself — but multi-level summaries? Is that monster sql still pleasant to write?
  • Chain (period-over-period) calculations: tired of grinding them out painfully in application code?
  • Sharding (multiple databases and tables): for ultra-large-scale data scenarios, sharding is a solid choice.
  • elasticsearch\clickhouse: solving ultra-large-scale data problems with a traditional database is a huge challenge; in practice, es can query billions of rows in about 20 milliseconds!
  • sql file update detection: during development you don't want a hot restart just to change a sql statement, right?
  • Timeout sql printing: sql that runs past a threshold gets logged, making slow queries easy to track down.
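To make the group-summary point concrete, here is a plain-Java sketch of the kind of output it produces: a subtotal row appended after each group and a grand-total row at the end. This is only an illustration of the result shape, not sqltoy's API:

```java
import java.util.*;

public class GroupSummaryDemo {
    // rows are [groupName, value]; append a subtotal row after each group
    // and a grand-total row at the end, report-style.
    static List<Object[]> summarize(List<Object[]> rows) {
        List<Object[]> out = new ArrayList<>();
        String currentGroup = null;
        double groupSum = 0, grandSum = 0;
        for (Object[] row : rows) {
            String group = (String) row[0];
            if (currentGroup != null && !currentGroup.equals(group)) {
                out.add(new Object[]{currentGroup + " subtotal", groupSum});
                groupSum = 0;
            }
            currentGroup = group;
            double v = ((Number) row[1]).doubleValue();
            groupSum += v;
            grandSum += v;
            out.add(row);
        }
        if (currentGroup != null) {
            out.add(new Object[]{currentGroup + " subtotal", groupSum});
        }
        out.add(new Object[]{"total", grandSum});
        return out;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[]{"East", 10});
        rows.add(new Object[]{"East", 20});
        rows.add(new Object[]{"West", 5});
        for (Object[] r : summarize(rows)) {
            System.out.println(r[0] + "\t" + r[1]);
        }
    }
}
```

Writing this as nested union/rollup sql for every report, especially multi-level, is exactly the "is it still delicious?" pain the bullet refers to; handling it as a result post-processing step keeps the sql simple.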

Written at the end: don't let work and life get you down too much. sqltoy has carried me through countless hard moments and plenty of bright, exciting ones. I hope it can help you too.

Nothing in this world lasts forever: mybatis will become the past, and of course sqltoy will eventually become the past too!
