Linux patch files: creating and applying code patches

What is a Patch file and its purpose

【Definition】Patch: a small piece of code (instructions that a computer can understand) that can be added to a computer program to improve it or to correct a fault. Patches are generally used (1) to correct known errors, or (2) as a debugging aid: apply a change to problematic code, locate the problem, and verify the result of the fix.

Patch operation and syntax

Working with patches involves two main operations; see: https://www.shellhacks.com/create-patch-diff-command-linux/

  • diff compares the old and new code and generates the patch file: $ diff -u OriginalFile UpdatedFile > PatchFile
  • patch merges the patch file into the original code: $ patch OriginalFile < PatchFile

View the help: $ diff --help
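The two operations above can be tried end to end. A minimal sketch (all file names here are illustrative):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"   # work in a scratch directory

# Two versions of the same file
printf 'hello\nworld\n' > original.txt
printf 'hello\nlinux\n' > updated.txt

# 1) diff: compare old and new, write a unified-format patch
#    (diff exits with status 1 when the files differ, so mask it under set -e)
diff -u original.txt updated.txt > fix.patch || true

cat fix.patch        # shows the ---/+++ header and one @@ hunk

# 2) patch: merge the patch into the original file
patch original.txt < fix.patch

cmp original.txt updated.txt && echo 'patched successfully'
```

After the second step, original.txt has the same contents as updated.txt.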

Patch file structure

Patch header

The patch header consists of two lines starting with --- and +++, which indicate the files being patched: the line beginning with --- names the old file, and the line beginning with +++ names the new file. A single patch file can contain many patch headers (one per patched file).

--- unet.py	2022-08-20 12:22:39.713834077 +0200
+++ baseline_UNET3D.py	2022-08-20 12:22:03.482141847 +0200

Patch block (hunk)

A patch block (also called a hunk) describes one place to be modified. It usually starts and ends with a few unchanged context lines, which only serve to locate the modification. Each hunk starts with a line beginning with @@ and ends at the start of the next hunk or the next patch header.

Every line in a hunk carries a one-character prefix (the first column) that indicates whether the line is added or deleted:
A + sign means this line is to be added.
A - sign means this line is to be deleted.
A line with neither sign is unchanged context; it needs no modification and is used only for positioning.
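Putting the header and a hunk together, a complete minimal patch file looks like this (file names, timestamps, and contents are illustrative; it changes `world` to `linux` in a three-line file):

```diff
--- greeting.txt	2022-08-20 12:00:00.000000000 +0200
+++ greeting.txt	2022-08-20 12:01:00.000000000 +0200
@@ -1,3 +1,3 @@
 hello
-world
+linux
 goodbye
```

The context lines `hello` and `goodbye` carry no change themselves; they only anchor the hunk at the right place in the file.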

@@ -204,12 +201,13 @@   # old file: 12 lines starting at line 204; new file: 13 lines starting at line 201
     A helper Module that performs 2 convolutions and 1 MaxPool.
     A ReLU activation follows each convolution.
     """
-    def __init__(self, in_channels, out_channels, pooling=True, planar=False, activation='relu',
+    def __init__(self, in_channels, out_channels, dropout_rate, pooling=True, planar=False, activation='relu',  # added a dropout_rate parameter
                  normalization=None, full_norm=True, dim=3, conv_mode='same'):
         super().__init__()
 
         self.in_channels = in_channels
         self.out_channels = out_channels
+        self.dropout_rate = dropout_rate   # store the dropout_rate passed in
         self.pooling = pooling
         self.normalization = normalization
         self.dim = dim
@@ -232,21 +230,28 @@    # old file: 21 lines starting at line 232; new file: 28 lines starting at line 230
             self.pool = nn.Identity()
             self.pool_ks = -123  # Bogus value, will never be read. Only to satisfy TorchScript's static type system
 
+        self.dropout = nn.Dropout3d(dropout_rate)   # build a dropout layer from dropout_rate
+        
         self.act1 = get_activation(activation)
         self.act2 = get_activation(activation)
 
         if full_norm:
             self.norm0 = get_normalization(normalization, self.out_channels, dim=dim)
+            if VERBOSE: print("DownConv, full_norm, norm0 =", normalization) 
         else:
             self.norm0 = nn.Identity()
+            if VERBOSE: print("DownConv, no full_norm")
         self.norm1 = get_normalization(normalization, self.out_channels, dim=dim)
+        if VERBOSE: print("DownConv, norm1 =", normalization)
 
     def forward(self, x):
         y = self.conv1(x)
         y = self.norm0(y)
+        y = self.dropout(y)  # apply the dropout layer
         y = self.act1(y)
         y = self.conv2(y)
         y = self.norm1(y)
+        y = self.dropout(y)   # apply the dropout layer
         y = self.act2(y)
         before_pool = y
         y = self.pool(y)
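In practice, a patch like the one above is applied with patch from the directory containing the target file. GNU patch's --dry-run option previews a patch without changing anything, and -R reverses an applied patch. A minimal sketch (file names are illustrative stand-ins, not the unet.py files above):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Tiny stand-ins for the old and new source files
printf 'a\nb\n' > model.py
printf 'a\nc\n' > model_new.py
diff -u model.py model_new.py > change.patch || true  # diff exits 1 when files differ

patch --dry-run model.py < change.patch  # preview only: reports what would change
patch model.py < change.patch            # apply the hunk to model.py
patch -R model.py < change.patch         # -R reverses the patch, restoring the original
```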

Reference links

  1. Rookie Tutorial (Runoob): linux-comm-patch
  2. CSDN: patch file syntax
  3. Weather4cast 2022 models: unet.patch example

Origin blog.csdn.net/yohangzhang/article/details/127649078