graph coloring learning record

  

  First, a brief explanation: VCP (Vertex Coloring Problem) is the most basic graph coloring problem. We color each node of an undirected graph so that adjacent nodes receive different colors, and the total number of colors used is minimized.
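As a concrete illustration of the VCP definition, here is a minimal greedy coloring sketch (a heuristic that gives an upper bound, not an optimal solver; the adjacency-dict format is my own choice for illustration):

```python
# Greedy coloring sketch for VCP: give each vertex the smallest color
# not already used by a colored neighbor. A heuristic upper bound only.
def greedy_coloring(adj):
    color = {}
    for v in adj:                      # visiting order affects quality
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 4-cycle is 2-colorable, and greedy finds that here:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring = greedy_coloring(adj)
```

Note that greedy is order-sensitive: on some graphs a bad visiting order uses far more colors than the optimum.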

  PCP (Partition Coloring Problem) extends VCP. Here the whole graph has already been divided into k parts, and we must select one vertex from each part so that the induced subgraph formed by the selected vertices, viewed as a VCP instance, needs as few colors as possible.

  There are currently two exact algorithms for the PCP: one is branch-and-cut and the other is branch-and-price.

  But first, let's describe PCP with an integer linear programming formulation.

  Obviously the number of colors is bounded above by the number of partitions k given in PCP, since in the worst case every selected node gets a distinct color. So define y_c ∈ {0, 1} to indicate whether color c is used, and x_{vc} ∈ {0, 1} to indicate whether vertex v is given color c. The objective is then clearly to minimize sum_{c=1}^k y_c, subject to sum_{c=1}^k sum_{v∈P_i} x_{vc} = 1 for each partition P_i (exactly one vertex per partition is chosen and colored), and x_{vc} + x_{uc} ≤ y_c for each edge (u, v) and each color c (two adjacent vertices cannot share a color, and any color that is used must be counted).
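The semantics of this formulation can be checked with a tiny brute-force sketch: pick one vertex per partition, then find the chromatic number of the induced subgraph by trying 1, 2, ... colors. This is exponential and only for toy instances; the data layout is my own, not the paper's.

```python
from itertools import product

# Chromatic number of a small induced subgraph by exhaustive search.
def chromatic_number(vertices, edges):
    vertices = list(vertices)
    for k in range(1, len(vertices) + 1):
        for assign in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, assign))
            if all(col[u] != col[v] for u, v in edges):
                return k
    return len(vertices)

# PCP by brute force: try every choice of one vertex per partition.
def solve_pcp(partitions, edges):
    best = None
    for choice in product(*partitions):
        sub = [(u, v) for u, v in edges if u in choice and v in choice]
        k = chromatic_number(choice, sub)
        if best is None or k < best:
            best = k
    return best

# Two partitions {0,1} and {2,3} with a single edge (0, 2):
# picking 0 and 3 (or 1 and anything) leaves no edge, so 1 color suffices.
best = solve_pcp([[0, 1], [2, 3]], [(0, 2)])
```

This matches the ILP: the partition constraint picks one vertex per part, and the edge constraints force the induced subgraph to be properly colored.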

  But this formulation suffers from symmetry (not very clear on this; presumably because the colors are interchangeable, so the same coloring appears many times with the color indices permuted, and the paper does not elaborate), making it usable only on smaller graphs.

  So a second formulation is used instead.

  Next, define S as a family of subsets of V: each member of S is a subset of V whose intersection with every partition has size at most 1, and which contains no edge between any two of its vertices (a stable set). You will notice that such a set is exactly a candidate color class in PCP. Then define ξ_S ∈ {0, 1} to indicate whether all the vertices of S are given the same color, i.e. whether S is used as a color class.
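The family S can be made concrete with a short enumeration sketch: generate every subset of V that is stable and meets each partition in at most one vertex. The count is exponential in |V|, which is exactly why the paper resorts to column generation later; the data layout here is illustrative.

```python
from itertools import combinations

# Enumerate the family S: stable subsets of V that intersect each
# partition in at most one vertex (candidate color classes for PCP).
def enumerate_S(vertices, edges, part_of):
    edge_set = {frozenset(e) for e in edges}
    family = []
    for r in range(1, len(vertices) + 1):
        for sub in combinations(vertices, r):
            parts = [part_of[v] for v in sub]
            if len(set(parts)) != len(parts):        # ≤ 1 per partition
                continue
            if any(frozenset(p) in edge_set
                   for p in combinations(sub, 2)):   # must be stable
                continue
            family.append(set(sub))
    return family

# 4 vertices, partitions {0,1} and {2,3}, one edge (0, 2):
part_of = {0: 0, 1: 0, 2: 1, 3: 1}
family = enumerate_S([0, 1, 2, 3], [(0, 2)], part_of)
```

On this toy instance the family has the 4 singletons plus the 3 cross-partition pairs without an edge, 7 sets in all.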

  So the objective becomes minimizing sum_S ξ_S, and constraint (10) presumably says that for each partition, exactly one vertex must be chosen and given a color (though its wording feels slightly off to me); ξ_S is then relaxed to a non-negative real number.

  upd: the constraints above are probably built by first restricting ξ_S to {0, 1} and then relaxing it to non-negative reals; that is, once several color-class choices cover partition i, ξ_S can become fractional.

  But the solution of the second formulation turns out to be no better than that of the first.

  upd: it's not that it isn't optimal, but that the minimized value is larger... because the two relaxations don't maintain the same thing... But on reflection, doesn't that make the second formulation better than the first? It seems the first one runs into problems once its variables are relaxed to non-negative reals...

  /* Define p(v) as the index of the partition containing v; then each color class S can be identified by c(S), the smallest partition index appearing in it. This guarantees no color class is counted twice in the answer. */

  The naive idea for solving the second formulation is to enumerate the members of S, whose number is exponential. The paper proposes a column generation algorithm to solve it instead.

  The branch-and-price algorithm consists of a column generation procedure and a branching scheme. Roughly: start from an initial restricted set of columns, evaluate the current solution with an evaluation (pricing) function, pick out the columns that do not meet the requirements, re-solve with them added, and repeat until there is no conflict.

  The question is how this evaluation function is defined (the text defines an optimal dual vector?); the paper converts it into an MWSSP (Maximum Weight Stable Set Problem)... (this paragraph is rather hard to follow, isn't it?)
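The MWSSP itself is simple to state, whatever the pricing details: among all stable sets, find one maximizing the total weight of its vertices (in column generation the weights would come from the dual values). A brute-force sketch with made-up weights:

```python
from itertools import combinations

# Brute-force Maximum Weight Stable Set: try every subset, keep the
# heaviest one with no internal edge. Exponential; toy instances only.
def max_weight_stable_set(vertices, edges, weight):
    edge_set = {frozenset(e) for e in edges}
    best, best_w = set(), 0.0
    for r in range(1, len(vertices) + 1):
        for sub in combinations(vertices, r):
            if any(frozenset(p) in edge_set
                   for p in combinations(sub, 2)):
                continue
            w = sum(weight[v] for v in sub)
            if w > best_w:
                best, best_w = set(sub), w
    return best, best_w

# Path 0-1-2: the endpoints {0, 2} together outweigh the middle vertex.
s, w = max_weight_stable_set([0, 1, 2], [(0, 1), (1, 2)],
                             {0: 1.0, 1: 1.5, 2: 1.0})
```

In a real branch-and-price code this subproblem is solved with a dedicated MWSSP algorithm, not enumeration; the weights here are illustrative, not actual duals.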

  After that comes the introduction of the branching scheme, which is not too hard to understand: mainly two branching (search?) conditions.

  It turns out the core of the paper is actually the description of the search method that follows... and it's not too hard to understand... Mainly the concrete operations of the LS (local search), plus how to define the quality of a search, and so on...

 

  This paper should really be read second... It wasn't until I read the second one that I realized this part is a concrete description of one of its operations... (The Mac keyboard sounds so loud... no matter how hard I try to type softly...)

  

  Next is the second paper, which applies a hybrid evolutionary algorithm to graph coloring. The core idea is to use a genetic algorithm as the framework of the whole hybrid algorithm. When selecting the two parents for crossover, unlike the purely random selection of a plain genetic algorithm, there is a selection mechanism here (I forget what it is; the article doesn't seem very specific about it either...)

  After crossover, the hybrid algorithm does not insert the offspring into the population directly, but first improves it with a greedy procedure and LS. The LS algorithm here is specifically TS (tabu search).

  Is it like hill climbing? (I don't know hill climbing very well, I just mention it because it's very visual... upd: I just checked, and TS is basically a hill-climbing-style method? (questioning, uncertain tone)) To prevent the algorithm from getting stuck in a local optimum, a tabu table records the previous moves, and a move cannot be reversed within the next n LS iterations. The n here is given by a formula whose parameters need tuning.
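A minimal sketch of this TS step, in the TabuCol style: with a fixed number of colors k, repeatedly recolor one endpoint of a conflicting edge, and forbid moving that vertex back to its old color for a while. The tenure formula and the graph below are made up for illustration, not the paper's tuned parameters.

```python
import random

# Tabu-search sketch for k-coloring: resolve conflicts by recoloring,
# with a tabu table forbidding the reverse move for `tenure` iterations.
def tabu_coloring(adj, k, iters=2000, seed=0):
    rng = random.Random(seed)
    verts = list(adj)
    color = {v: rng.randrange(k) for v in verts}
    tabu = {}                              # (vertex, color) -> expiry step

    def conflicts():
        return [(u, v) for u in verts for v in adj[u]
                if u < v and color[u] == color[v]]

    for it in range(iters):
        conf = conflicts()
        if not conf:
            return color                   # proper k-coloring found
        u, v = rng.choice(conf)            # pick a conflicting edge
        w = rng.choice((u, v))
        old = color[w]
        choices = [c for c in range(k)
                   if c != old and tabu.get((w, c), -1) < it]
        if not choices:
            continue                       # everything tabu; wait it out
        color[w] = rng.choice(choices)
        tenure = 7 + len(conf)             # made-up tenure formula
        tabu[(w, old)] = it + tenure       # forbid undoing this move
    return None                            # gave up

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle, k = 2
result = tabu_coloring(adj, 2)
```

The tabu table is what distinguishes this from plain hill climbing: even a worsening or sideways move is allowed, but immediately undoing a recent move is not, which is how the search escapes local optima.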

  Probably that's it? This paper feels very easy to understand...

 

 

  I'll write some code in a few days and find out...
