Xgame server-merge tool technical design document

how to use

  • Install a PHP 5.5+ environment;

  • Install the php-pdo and php-mysql extensions;

  • Edit the combine_tool/etc/dbConfig.php file to configure the wolf-server and sheep-server databases. For the specific configuration content, refer to the dbConfig.php.template file;

  • Go back to the combine_tool directory and execute:

php App_Combine.php
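For orientation, a dbConfig.php might look roughly like the sketch below. The key names here are purely illustrative assumptions; the authoritative field names are in the project's dbConfig.php.template file.

```php
<?php
// Hypothetical sketch of combine_tool/etc/dbConfig.php.
// The real key names come from dbConfig.php.template, not from this document.
$dbConfig = array(
    // wolf server: the database that survives the merge
    'wolf' => array(
        'host' => '127.0.0.1',
        'port' => 3306,
        'db'   => 'game_s1',
        'user' => 'root',
        'pass' => '',
    ),
    // sheep servers: the databases that the wolf server eats
    'sheep' => array(
        array('host' => '127.0.0.1', 'port' => 3306, 'db' => 'game_s2', 'user' => 'root', 'pass' => ''),
    ),
);
```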


foreword

  1. This is a technical document introducing the server-merge tool;

  2. You can refer to the merge tool's code: https://git.oschina.net/afrxprojs/xgame-php_tool

  3. Note: the merge tool is developed in PHP! I will briefly explain why PHP was chosen later;

  4. The main purpose of this document is to illustrate the thought process behind developing the merge tool, not merely to serve as usage instructions! The value of technical documentation lies in the process rather than the conclusion;

  5. I assume you have some knowledge of technical work in the game industry. If you have personally worked on game servers, this document will be easier to read;

  6. This is a non-standard document; nobody enjoys reading documents that stick strictly to the standard... This document instead takes "relaxed, readable, practical" as its standard;

Before starting the content, let me briefly explain the background...

background

  1. Suppose our game server adopts a single server architecture, that is, each game server has its own database;

  2. The database we use is MySQL;

  3. Our merge tool must be able to merge 10 game servers at a time, i.e. 1 server eats the other 9;

  4. But the operations team gives us at most 6 hours; that is the hard limit. Usually the merge should be completed within 4 hours;

  5. During a merge, the servers are divided into a "wolf server" and "sheep servers". For example, if the S2 server's data is merged into the S1 server, i.e. S1 eats S2, then S1 is called the wolf server and S2 the sheep server;


The earliest server-merge tool I used could only merge 2 game servers at a time (1 wolf server eats 1 sheep server), and the longest merge took nearly 8 hours... With the rapid expansion of new projects and rising operational requirements, the early merge tool was no longer adequate, so a new tool had to be developed. When the old merge tool merged data, it read single rows from the sheep server and wrote them one by one into the wolf server's database. This method is intuitive, but far too slow, and once the merge process was interrupted it could not simply be restarted and resumed, or the data would end up in a mess.

We must change the original thinking and find a faster and more reliable way!

core implementation

After repeated thinking, I decided to use MySQL's "insert into ... select ..." syntax to insert data in batches. It is very efficient and can basically cope with the time constraints. From it we can also distill a core idea:
use batch insertion! If you encounter problems, try to fall back on this idea...

"Use batch insertion!" is easy to understand. But how should we read "If you encounter problems, try to fall back on this idea..."? I once played billiards with a colleague, and he shared a little trick: if you don't have a clean shot to pocket a ball, nudge it slowly toward the pocket; once it sits right next to the pocket, you are sure to make it. Isn't technical implementation the same? If we aren't confident we can solve a large system in one go, we find a way to break it into several small systems we can solve, and knock them down one by one... Technology comes from life, but rises above it.

In the following text, we will experience this process a little bit.

analysis

In the previous section we settled on the core implementation. Next we analyze several concrete business modules. For reasons of space we only look at 3: character data, the single-player dungeon, and the arena. These three were chosen because they are fairly representative and can basically explain the design ideas of the whole merge process.

Next, we analyze the three business modules separately, moving from the concrete to the abstract, to extract the design ideas of the entire merge process.

character data

During a server merge, character data is likely to go through the following process:


(Figure 1) The process of merging character data

  • Cleaning up useless data means deleting the data of characters who have not logged in for 30 days, whose level is below the specified level, who have never recharged, and who are not legion commanders;

  • Merging data means merging the sheep server's data into the wolf server, completing the wolf-eats-sheep process. In this step we can use the insert into ... select ... syntax to perform batch inserts;


One thing to note here: we clean up the useless data first and then merge, rather than picking out the useful data and merging it piece by piece... This follows the idea just mentioned: use batch insertion!
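As a sketch, the two steps for character data reduce to two SQL statements that the PHP side only has to assemble and hand to the database. The table and column names here (`role`, `last_login`, `level`, `recharge`, `is_legion_leader`) are hypothetical, not the project's real schema, and the sketch assumes the wolf and sheep databases live on the same MySQL instance so that a cross-database insert ... select works.

```php
<?php
// Sketch of the two character-data steps with hypothetical table/column names.

// Step 1: clean up useless data inside the sheep database
// (the four conditions from the bullet list above).
function buildCleanupSql($sheepDb, $minLevel)
{
    return "DELETE FROM `{$sheepDb}`.`role` "
         . "WHERE `last_login` < DATE_SUB(NOW(), INTERVAL 30 DAY) "
         . "AND `level` < {$minLevel} "
         . "AND `recharge` = 0 "
         . "AND `is_legion_leader` = 0";
}

// Step 2: merge everything that survived into the wolf database,
// one batch insert per table instead of row-by-row copying.
function buildMergeSql($wolfDb, $sheepDb)
{
    return "INSERT INTO `{$wolfDb}`.`role` SELECT * FROM `{$sheepDb}`.`role`";
}
```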

single-player dungeon

For the single-player dungeon module, the merge process is roughly as follows:

 
(Figure 2) Server-merge process of the single-player dungeon

The merge process of the single-player dungeon is basically the same as that of character data, even exactly the same! The single-player dungeon module also needs to clean up useless data first and then merge. At this point we have found some commonality between the two modules' merge processes, and the flow chart can be generalized as follows:

 
(Figure 3) Overview of the server merge process

But there are some problems here that deserve a moment's thought.

When the single-player dungeon performs the "clean up useless data" step, what does "useless data" refer to? It is the data that was cleaned up in the character-data step! If a character no longer exists, the single-player dungeon data belonging to that character is meaningless and must be deleted. But by the time the cleanup reaches the single-player dungeon step, how do we know which characters were deleted? We could use an array variable to record all deleted character IDs while cleaning up the character data; when cleaning the single-player dungeon, we read each character ID from the array and use it to decide whether a dungeon row must be deleted. That would work...

This is indeed correct, but there are two problems:

  1. It is inefficient and requires extra memory;

  2. Even if you don't fully agree with point 1: if the program is interrupted by an exception midway, the array variable holding the deleted character IDs no longer exists. The program's reliability cannot be guaranteed;


In fact, technical design is not demand-driven but question-driven. A fundamental method of technical design is to raise counterexamples: once you have produced a design, it must withstand questioning within a certain range. We can exhaustively search for counterexamples to a scheme; no matter how subtle the idea, as long as a counterexample exists within that range, the design is overturned. Being knocked down in the design stage is not terrible. If it happens in the coding stage, everyone's labor is wasted... In practice, it is rare for code to be completely overturned and redone because of design flaws; far more often, maintaining or fixing a seemingly simple feature or bug eats up days of your time.

Design, like science, should be falsifiable.

To solve the first problem, we can use MySQL's where ... in ... query. As for the second problem: isn't the program afraid of interruption? Then write the variable to disk! Even after a crash or power failure, I still know which characters were deleted, because the data is stored in a file... But think again: we already have a MySQL database at hand, and storing it there is even more convenient! Why write a file at all? Isn't the MySQL database itself just a "file"? And this way the in query becomes easier to use.

To this end, we can create a temporary table in the sheep server's database to record the IDs of the characters to be deleted. The table has only one field: the character ID. To make the whole process more reliable, we can also move the cleanup of character data to the end; that is, first run the data cleanup of the single-player dungeon module, and only then clean up the character data.

 
(Figure 4) Cleaning up useless data: the execution order of the single-player dungeon and the character data is swapped
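The reordered cleanup can be sketched as an ordered list of SQL statements. The table names (`deleted_role`, `single_dungeon`, `role`) are hypothetical and the cleanup condition is simplified to the login check only; the point is the order of the steps.

```php
<?php
// Sketch of the reordered cleanup inside the sheep database, with
// hypothetical table names. The doomed character IDs are recorded first,
// each business module deletes via WHERE ... IN ..., and the character
// table itself is cleaned last, so an interruption midway loses nothing.
function buildCleanupPlan()
{
    return array(
        // 1. remember which characters will be deleted
        //    (condition simplified to the login check only)
        "CREATE TABLE `deleted_role` (`role_id` BIGINT PRIMARY KEY)",
        "INSERT INTO `deleted_role` (`role_id`)
         SELECT `role_id` FROM `role`
          WHERE `last_login` < DATE_SUB(NOW(), INTERVAL 30 DAY)",
        // 2. business modules delete against the recorded IDs
        "DELETE FROM `single_dungeon`
          WHERE `role_id` IN (SELECT `role_id` FROM `deleted_role`)",
        // 3. character data itself is cleaned last
        "DELETE FROM `role`
          WHERE `role_id` IN (SELECT `role_id` FROM `deleted_role`)",
    );
}
```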

Our core idea has passed a small test without major problems, which is fortunate. The next module, the arena, is not so lucky...

arena

The merge process of the arena module can also be summarized as "delete useless data" and "merge data"; the flow chart is the same, so I won't draw it again. But the arena has a ranking problem, and it is tricky. Let me explain in detail:

Suppose, the arena data in the S1 server is as follows:


(Table 1) S1 server arena data

Suppose, the arena data in the S2 server is as follows:


(Table 2) S2 server arena data

The names are a bit vulgar, but they serve to explain the problem. When servers are merged, we cannot simply wipe the arena data and let players start over; such an operation is unacceptable. The arena hands out daily rewards according to ranking, so players will never agree to a reset. A reasonable result looks like this:

  • The 1st place in the S1 server stays in 1st place;

  • The 1st place in the S2 server becomes 2nd;

  • The 2nd place in the S1 server becomes 3rd;

  • The 2nd place in the S2 server becomes 4th;

The merged result looks like this:


(Table 3) Result of merging S1 and S2; S1 is the wolf server, S2 the sheep server

This is a plan everyone can accept. To achieve it, we would need to write an algorithm: read out all the arena data in S1, read out all the arena data in S2, sort them, and finally write the result back to the S1 server. Doable, but far too cumbersome...

Going through an algorithm does not fit our original core idea: use batch inserts! So at this point we invoke the second half of the idea: if you encounter problems, try to fall back on this idea...

Let's try this: multiply the S1 server's ranking values by 1.5 and round up, multiply the S2 server's ranking values by 2, and finally merge the data with one batch insert. How about it? It seems like a really clever idea!

But this idea is too naive! Here is a counterexample. Suppose the ranking values in S2 are actually discontinuous, like this:


(Table 4) S2 server arena data, with wooden stakes in the rankings

In the S2 rankings, the 4th to 8th places are all wooden stakes (dummies). These stakes exist to ensure that when the arena module first opens, players cannot rush straight to 1st place; the stakes' data must not take part in the server merge. That is, the final merged result should look like this:


(Table 5) Result of merging S1 and S2; S1 is the wolf server, S2 the sheep server; the stake data from S2 is not merged

So the trick of rewriting ranking values and then merging will not work. Characters that were cleaned up also leave holes in the ranking. If we tried to solve this with an algorithm, the algorithm would be quite difficult.

It's over. We have hit a serious problem; the framework process is stuck here and cannot continue...

Reverse Thinking

If solving a problem in the forward direction is too hard, try solving it in reverse. In fact, the earlier line of thought already made real progress. For the arena data, why not merge the data first and only then fix up the ranking values? We still have the original data anyway; worst case, we roll back. Once the data has been merged it all sits in the same DB, so the sorting step is more efficient, and even a complex algorithm should not perform too badly.

As for the ranking-value algorithm: in essence it just reads out each row in order, updates its ranking value, and writes it back to the database. It is nothing more than a relabeling pass; how easy is that...
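In pure PHP the relabeling pass might look like the sketch below (field names such as 'rank', 'server' and 'dummy' are illustrative): skip the stakes, sort the merged rows by old rank and then by server name, and hand out consecutive new ranks.

```php
<?php
// Sketch of the relabeling pass over merged arena rows.
// Field names ('rank', 'server', 'dummy') are illustrative.
function relabelRanks(array $rows)
{
    // wooden stakes (dummies) do not take part in the merge
    $rows = array_values(array_filter($rows, function ($r) {
        return empty($r['dummy']);
    }));

    // old rank first; on ties the wolf server (smaller name) comes first
    usort($rows, function ($a, $b) {
        if ($a['rank'] !== $b['rank']) {
            return $a['rank'] - $b['rank'];
        }
        return strcmp($a['server'], $b['server']);
    });

    // hand out consecutive new ranks: 1, 2, 3, ...
    foreach ($rows as $i => &$row) {
        $row['rank'] = $i + 1;
    }
    unset($row);
    return $rows;
}
```

Run on the Table 1/Table 2 data this yields exactly Table 3: S1's 1st place keeps rank 1, S2's 1st place gets rank 2, S1's 2nd place gets rank 3, and so on.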

Simple as it is, looking closer, you still need to write some code; at the very least a for loop that updates every row. Is there an even simpler, more efficient way? What cannot be done in one step, we do in two. Can we create a temporary table to store the new ranking data? For example, a table like this:

create table `临时表`  -- "temporary table"
(
    `角色 Id` bigint,  -- "character Id"
    `排名` int not null auto_increment,  -- "rank"
    primary key ( `角色 Id` ),
    unique key ( `排名` )
);


Then we insert the character Ids into this temporary table with a SQL statement:

insert into `临时表` ( `角色 Id` )
select X.`角色 Id`
  from `排行榜` as X  -- the merged ranking table
 order by X.`排名` asc, X.`服务器名` asc;  -- by old rank, then by server name


Because the rank field (`排名`) in the temporary table is auto-incrementing, a continuous ranking naturally forms as the rows are inserted. We then write the temporary table's data back to the arena with one more SQL statement:

update `竞技场` as A, `临时表` as B
   set A.`排名` = B.`排名`
 where A.`角色 Id` = B.`角色 Id`;


Thanks to MySQL's support for multi-table update syntax, this saves us a lot of work. So far, our core idea has once again stood the test.
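For the tool's PHP side (which already requires php-pdo/php-mysql), the three statements can be bundled as an ordered list and executed one by one; this is only a sketch. One deviation of this example from the text: it uses create temporary table, which avoids MySQL's implicit commit on ordinary DDL and lets the table vanish by itself when the connection closes.

```php
<?php
// The three ranking statements from the text as an ordered list, e.g. for
//   foreach (arenaRankSql() as $sql) { $pdo->exec($sql); }
// Sketch only; TEMPORARY is an addition of this example.
function arenaRankSql()
{
    return array(
        "CREATE TEMPORARY TABLE `临时表` (
            `角色 Id` BIGINT,
            `排名` INT NOT NULL AUTO_INCREMENT,
            PRIMARY KEY ( `角色 Id` ),
            UNIQUE KEY ( `排名` )
        )",
        "INSERT INTO `临时表` ( `角色 Id` )
         SELECT X.`角色 Id`
           FROM `排行榜` AS X
          ORDER BY X.`排名` ASC, X.`服务器名` ASC",
        "UPDATE `竞技场` AS A, `临时表` AS B
            SET A.`排名` = B.`排名`
          WHERE A.`角色 Id` = B.`角色 Id`",
    );
}
```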


Summary

Our entire framework can be summarized into three steps:

 
(Figure 5)

Every functional module can complete the merge through these three steps. We could look for more system modules to verify this and search for counterexamples; for reasons of space we will not continue the list here. We can therefore design the framework as these three big steps, with an upper control module that decides which big step executes next. The upper control logic does not care how each business module operates; it only needs to know the work flow. Each business module only needs to know how to do its job, not when to do it. When new functional modules appear in the future, they just follow these three big steps and do their own work. Think of the line workers on Foxconn's mobile-phone production line.

This is the classic separation of execution process from concrete implementation. It is, in fact, a Work Flow!

Imagine the running system as a torrent. The torrent flows to the "clean up useless data" level, and each business module cleans its own data in the predetermined order. After the last module has cleaned up, the torrent flows down to the "merge data" level; each module gets busy again, merging its own data in order. The torrent then continues down to the "data sorting" level, and every module keeps working until it is all over...

Implementation

We can make the following class definitions:

 
(Figure 6) Class definitions of the server-merge tool

The master control class of the merge tool fixes the calling sequence: first call the "clean up useless data" class, then "merge data", and finally "organize data". Note: these three classes are abstract and require concrete implementations! The concrete implementations are defined as follows:

 
(Figure 7)

The concrete implementation classes are also easy to understand. We can go one step further and tidy up, for example by renaming the functions of the three steps "clean up useless data", "merge data" and "organize data" to one and the same name! As shown in Figure 8:

 
(Figure 8) Rename the functions of all three steps to the same name

This renaming is purposeful: afterwards we can extract the "work" function into an interface. As shown in Figure 9:

 
(Figure 9) Extract the "worker node" interface

The master control class of the merge tool now no longer depends on the three abstract classes "clean up useless data", "merge data" and "organize data"; it depends directly on the "worker node" interface! This interface offers only one function: "work". The point of this is the high level of abstraction: the master control class is completely decoupled from the concrete implementations. We can even drop the three abstract classes and let the concrete classes implement the "worker node" interface directly. As shown in Figure 10:


 
(Figure 10) Let the concrete implementation classes implement the "worker node" interface directly

This way, the master control class does not need to know how many concrete implementations there are, let alone how they are implemented. Finally, we organize them into different directories, as shown in Figure 11:


 
(Figure 11) Concrete implementation classes organized into different directories

The master control class only knows to call the "worker node" implementation classes in the three directories Clear, Combine and Order, and these implementation classes expose a single uniformly named method: "work". This spares the master control program a lot of worry, and each concrete implementation class only has to make sure its own work is correct...
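A minimal PHP sketch of Figures 9 through 11 (class names are illustrative, not the project's real ones): a one-method worker-node interface, concrete workers grouped by stage, and a master control class that knows only the Clear, Combine, Order order.

```php
<?php
// Sketch of the structure in Figures 9-11; names are illustrative.
interface WorkerNode
{
    // the single, uniformly named method every worker exposes
    public function work();
}

// a concrete worker that would live in the Clear/ directory
class ClearRoleData implements WorkerNode
{
    public function work() { /* DELETE useless character data ... */ }
}

// a concrete worker that would live in the Combine/ directory
class CombineRoleData implements WorkerNode
{
    public function work() { /* INSERT INTO ... SELECT ... */ }
}

// the master control class: knows the stage order, nothing else
class MasterControl
{
    private $stages;

    /** @param array $stages map of stage name => array of WorkerNode */
    public function __construct(array $stages)
    {
        $this->stages = $stages;
    }

    public function run()
    {
        // the torrent flows through the three levels in a fixed order;
        // the controller never cares what each node actually does
        foreach (array('Clear', 'Combine', 'Order') as $stage) {
            if (!isset($this->stages[$stage])) {
                continue;
            }
            foreach ($this->stages[$stage] as $node) {
                $node->work();
            }
        }
    }
}
```

Inserting a hypothetical "Fixed" stage would only mean adding its name to the stage order and dropping new worker classes into a Fixed/ directory, which is exactly the flexibility the text goes on to describe.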

Looking at the picture and recalling the Work Flow mentioned earlier: this is its concrete realization! It is also the Chain of Responsibility pattern. A great advantage of the Chain of Responsibility pattern is that worker nodes can be added, removed, or reordered without touching much code. For example, to insert a "Fixed" step between "Clear" and "Combine" to repair some broken data, we only need to add a new directory and a new "worker node" implementation class; no other code has to move at all... well, apart from a slight modification to the master control class.

PHP

Finally, a word on PHP. The merge tool is developed in PHP not because PHP is the best language in the world (I'll toss that phrase out; who knows whether it will stir up some hatred and attract some traffic), but because PHP's dynamic nature satisfies the need for easy extension. Unlike a compiled language, there is no compile-and-package step for every update; you simply edit the code. Combining PHP's dynamism with the file system's directory structure, the merge framework's code is easy to implement.
