【408】Operating System - Unforgettable Self-test Question 1 (Part 1)

OS practice questions

Part 1:

1:

(Single choice) Which of the following statements is true about the advantages of processes over threads?
a. Process creation is faster and lighter
b. Inter-process isolation is more thorough and safe
c. Inter-process communication is easier to achieve
d. Inter-process switching is faster
  • ❌A, ❌D:
    A process is the basic unit of resource allocation: it is the entity that owns the address space, files, and other resources.

    A thread is the basic unit of CPU scheduling and execution; it is lighter-weight than a process, so process creation and switching are slower, not faster.

    The process control block is the PCB, the thread control block is the TCB, and a kernel-visible user thread entity is called an LWP (light-weight process).

  • ✅B:
    Each process has an independent address space and is logically isolated. As long as the OS itself is sound, an error in one process will not affect other processes.

    The resources a process acquires, such as its logical address space, are shared among all threads of that process, so a single thread running out of bounds can bring the whole process down.

    Because processes are already well isolated, we usually only emphasize thread safety when writing code.

  • ❌C:
    Processes are strongly encapsulated, so inter-process communication has to go through a third party, e.g.: anonymous pipes, named pipes, mapped shared memory, message queues, sockets, and semaphores.

    Because threads share the process's resources, communication between them is much easier: it only takes mutual exclusion on data visible to all threads. Mechanisms include global variables, mutexes, condition variables, and semaphores.

  • Shared and exclusive resources of threads under the same process:

    • shared:
      1. file descriptor table
      2. signal dispositions (handlers)
      3. current working directory
      4. user id and group id
    • Exclusive:
      1. thread ID
      2. stack (each thread keeps its own local variables)
      3. context registers (saved when the thread is switched out)
      4. scheduling priority (PRI and NICE)
      5. signal mask (settable per thread via pthread_sigmask())
      6. errno
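
A minimal sketch of the shared/exclusive split above (the output format is illustrative): the global counter is visible to both threads and needs atomicity, while the stack variable and errno are private to each thread. Compile with g++ file.cpp -lpthread.

#include <cerrno>
#include <cstdio>
#include <pthread.h>

int shared_counter = 0;                        // globals are shared by all threads

void *worker(void *argv) {
    long id = (long)argv;
    int local = 0;                             // stack variable: one copy per thread
    ++local;
    __sync_fetch_and_add(&shared_counter, 1);  // shared data needs atomicity
    errno = (int)id;                           // errno is per-thread: no interference
    printf("thread %ld: local=%d errno=%d\n", id, local, errno);
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, (void *)1);
    pthread_create(&t2, nullptr, worker, (void *)2);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    printf("shared_counter = %d\n", shared_counter);  // always 2
    return 0;
}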

2:

Ways to speed up a matrix operation include:
a. Using multithreading on a single-core processor
b. Using multithreading on a multi-core processor
c. Improving the cache hit rate by optimizing the order of operations
  • ❌A:
    Multithreading on a single core makes multiple tasks appear to run at once, but because of scheduling overhead the total throughput is no better, and often worse, than one thread on that core. It is still useful for letting a single-core server respond to multiple user requests.

  • ✅B:
    Multiplying an a×b matrix by a b×c matrix can be handed to a·c cores: each core performs b multiplications and b-1 additions for one output element. With the cores running in parallel, this is up to a·c times faster than a single thread on a single core (see the sketch after this question).

  • ✅C:
    How the CPU accesses memory:

    1. A CPU core issues a VA (virtual address) request. The TLB (translation lookaside buffer) inside the MMU, inside the CPU, is checked first for a PA (physical address) matching the top 20 bits of the VA. On a hit, no Translation Table Walk is needed.
    2. On a TLB miss, the MMU uses the page-table base register, CR3, to find the physical address of the first-level page table (the page directory), looks up the entry selected by the top 10 bits of the VA (the PA of the second-level page table), then looks up the second-level page table (usually just called the page table) entry selected by the middle 10 bits (the PA of the page frame). The final PA is that frame address plus the low 12-bit offset. The lookup passes through the MMU, the first-level page table, the second-level page table, and memory in turn, like someone walking step by step, hence the name Translation Table Walk.
    3. Because the TLB sits inside the CPU, getting the PA from the TLB is much faster than getting it from the two-level page tables in memory.
    4. TLB shootdown: when any core of a multi-core processor changes a VA→PA mapping cached in the TLB, the other cores must update and synchronize accordingly.
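
A sketch of option B under stated assumptions (matrix sizes and thread count are illustrative): the output rows of C = A×B are split into bands, one band per core, and each output element costs b multiplications and b-1 additions. Compile with g++ -std=c++11 file.cpp -lpthread.

#include <cstdio>
#include <thread>
#include <vector>
using namespace std;

const int A_ROWS = 256, INNER = 256, B_COLS = 256;
vector<vector<int>> A(A_ROWS, vector<int>(INNER, 1)),
                    B(INNER, vector<int>(B_COLS, 1)),
                    C(A_ROWS, vector<int>(B_COLS, 0));

// Each thread computes a disjoint band of rows of C, so no locking is needed.
void band(int row_begin, int row_end) {
    for (int i = row_begin; i < row_end; i++)
        for (int j = 0; j < B_COLS; j++)
            for (int k = 0; k < INNER; k++)
                C[i][j] += A[i][k] * B[k][j];   // INNER multiplies, INNER-1 adds
}

int main() {
    unsigned n = thread::hardware_concurrency();  // number of cores (may report 0)
    if (n == 0) n = 4;
    vector<thread> pool;
    for (unsigned t = 0; t < n; t++)
        pool.emplace_back(band, A_ROWS * t / n, A_ROWS * (t + 1) / n);
    for (auto &th : pool) th.join();
    printf("C[0][0] = %d (expected %d)\n", C[0][0], INNER);
    return 0;
}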

3:

(Single-choice) Which item will not be used in the process of translating virtual addresses to physical addresses
a. MMU in CPU
b. TLB in CPU
c. Secondary storage (disk)
d. Register pointing to the page directory ( CR3)
e. Page table in memory
- ✅A B D E:
  1. If the TLB contains the PA for the top 20 bits of the VA, memory is accessed at that PA plus the 12-bit offset directly, with no Translation Table Walk.
  2. If the TLB does not contain it, a Translation Table Walk is needed: read the CR3 register for the PA of the page directory, access the page directory, access the page table, and finally access memory at the frame PA plus the 12-bit offset.
  3. The TLB lives inside the MMU, and CR3 is a CPU control register the MMU consults.
  • ❌C:

    1. The CPU never interacts with files on disk directly. Files are brought into memory before use, and written back to disk after modification.
    2. File: a collection of information stored on a computer, with the hard disk as its carrier.
    3. File system: the software mechanism in the OS responsible for managing and storing files.
    4. Opening and closing files exists to avoid repeatedly searching directories:
      the OS maintains a table of all open files and numbers each entry. When a file is opened, its data is brought into memory and the entry records its in-memory location; when the file is closed, it is written back from memory to disk and the entry's index and number are removed.
    5. Six hierarchical structures of the file system:
      user call interface
      file directory system
      storage control verification module
      logical file system and file information buffer
      physical file system
      auxiliary allocation module + device management program module
    6. Disk block allocation for file storage:
      sequential allocation
      linked allocation:
      1. Explicit linking: the next-block pointer (block number) of every physical block of the file is stored centrally in the File Allocation Table (FAT).
      There is one FAT for the whole disk. An entry of -1 means that block is the last block of its file, and -2 means the block is free. The FAT is loaded into memory when the OS starts, which reduces disk I/O.
      2. Implicit linking: every disk block except the last stores a pointer to the next disk block.
      indexed allocation
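
A toy sketch of explicit linking (the FAT contents here are made up): reaching the x-th block of a file means walking the in-memory FAT chain x hops from the start block, which is exactly why random reads in large FAT files are slow.

#include <cstdio>

// Toy FAT: fat[i] is the block after block i; -1 = end of file, -2 = free.
// A file occupies the chain 1 -> 9 -> 4 -> 14.
int fat[16] = {-2, 9, -2, -2, 14, -2, -2, -2, -2, 4, -2, -2, -2, -2, -1, -2};

int nth_block(int start, int x) {   // find the x-th block (0-based) of a file
    int b = start;
    for (int i = 0; i < x && b != -1; i++)
        b = fat[b];                 // one table lookup per hop: O(x) overall
    return b;
}

int main() {
    for (int i = 0; i < 4; i++)
        printf("block %d of file = %d\n", i, nth_block(1, i));
    return 0;
}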

4:

Threads A and B share the integer x and each execute two lines of code, A{x=0; x=1;} and B{x=0; x=2;}. When the program ends, the value of x may be:
a. 0
b. 1
c. 2
d. 3
  • ✅B, ✅C:
    In theory the final value of x depends on whichever assignment runs last, which may be x=1 or x=2; thread scheduling is fairly random.
  • In actual tests, because the two assignments are so lightweight, a time-slice-driven reschedule essentially never fires between them, so in practice the order of the pthread_create() calls determines the final value of x.
  • Attached is the Linux code; compile with: g++ file.cpp -lpthread
#include <iostream>
#include <pthread.h>
using namespace std;

int x;  // shared by both threads

void *func1(void *argv) {
    // pthread_detach(pthread_self());
    *((int *)argv) = 0;
    cout << "1  " << *((int *)argv) << endl;
    *((int *)argv) = 1;
    cout << "1  " << *((int *)argv) << endl;
    return nullptr;
}

void *func2(void *argv) {
    // pthread_detach(pthread_self());
    *((int *)argv) = 0;
    cout << "2  " << *((int *)argv) << endl;
    *((int *)argv) = 2;
    cout << "2  " << *((int *)argv) << endl;
    return nullptr;
}

int main() {
    pthread_t tid1, tid2;
    pthread_create(&tid2, nullptr, func2, &x);  // created first
    pthread_create(&tid1, nullptr, func1, &x);
    pthread_join(tid2, nullptr);
    pthread_join(tid1, nullptr);
    cout << "x = " << x << endl;  // usually set by the thread that runs last
    return 0;
}


5:

(Single choice) In which of the following situations, do you not need to use a synchronization mechanism (lock, semaphore, etc.)?
a. No shared resources between threads
b. Unlimited resources
c. No concurrent programs
d. None of the above is required
  • ✅D:
    A: With no shared resources there are no critical resources, hence no mutual exclusion and no need to synchronize.
    B: We add mutual exclusion around resource access precisely because resources are limited; with unlimited resources, most cases need no mutual exclusion. (Thread communication may still be wanted; processes can use anonymous pipes, named pipes, shared memory, sockets, or message queues.)
    C: With no concurrency, execution is serial: all resources are controlled by one flow of execution at a time, so no mutual exclusion is required.

6:

(Single choice) What factors do not need to be considered when designing an operating system?
a. System performance
b. System reliability
c. System security
d. All of the above are required
  • ✅D

7:

(Single Choice) Which of the following is not a task/function of a (common) operating system?
a. Manage hardware resources
b. Isolate the address space of a process
c. Handle system calls, interrupts, exceptions
d. Prevent user processes from entering a deadlock state
  • ✅D:
    • Necessary conditions for deadlock:
      1. mutual exclusion
      2. no preemption
      3. hold and wait
      4. circular wait
    • Deadlock prevention strategies: break one of the four necessary conditions
    • Deadlock avoidance algorithm:
      1. the banker's algorithm
      2. find a safe sequence of system states
    • Deadlock detection algorithm:
      1. check whether the resource-allocation graph contains a cycle
    • Deadlock recovery algorithms (see the sketch after this list):
      1. Resource preemption: suspend some deadlocked processes and preempt their resources
      2. Process termination: forcibly kill some or all deadlocked processes and reclaim their resources
      3. Process rollback: roll one or more processes back far enough to break the deadlock; the processes release resources voluntarily
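
A sketch of why deadlock is the program's problem, not the OS's: the code below breaks the circular-wait condition itself by taking both mutexes atomically through std::scoped_lock; acquiring m1 then m2 in one thread and m2 then m1 in the other could deadlock, and the kernel would not step in. Compile with g++ -std=c++17 file.cpp -lpthread.

#include <iostream>
#include <mutex>
#include <thread>

std::mutex m1, m2;

void worker(const char *name) {
    // scoped_lock acquires both mutexes with a deadlock-avoidance algorithm,
    // so circular wait can never form between the two threads
    std::scoped_lock lk(m1, m2);
    std::cout << name << " holds both locks\n";
}

int main() {
    std::thread a(worker, "A"), b(worker, "B");
    a.join();
    b.join();
    return 0;
}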

8:

Which of the following statements about operating systems are true?
a. BIOS is a part of the operating system
b. User mode and kernel mode refer to the state of CPU operation
c. Interrupt handler (including system call) is the only entry program for user mode to enter kernel mode.
d. Some interrupts can be masked; this masking operation is a privileged instruction.
  • ❌A:

    • After the motherboard powers on, the CPU bus connects to the ROM and the RAM sticks on the motherboard, and the CPU starts in real mode with segmented address mapping (see the sketch after this answer). It then does the following:

      1. The cs (code segment) register in the CPU is set to 0xffff
      2. The ip (instruction pointer) register in the CPU is set to 0x0000
      3. The first instruction the CPU fetches is at (cs segment register value << 4) + ip offset = 0xffff0
      4. Real-mode address translation is (segment register << 4) + offset; there is no segment table
    • 0xffff0 is the entry address of the BIOS program in ROM. The BIOS program is not loaded into RAM. Its main tasks are:

      1. Check the hardware environment
      2. Set up the interrupt vector table and interrupt service routines, displaying progress on screen as it goes
      3. Load the 440-byte BootLoader (bootstrap) from the boot sector, the first sector the OS can reach (cylinder 0, head 0, sector 1 of disk 0), into memory
    • The BootLoader program in memory then does the following:

      1. Turn off interrupts and focus on the steps below
      2. Enable the A20 line
      3. Set up the segment selector values in the segment registers.
        In protected mode the segment registers hold 16-bit selectors:
        the upper 13 bits of a selector index a segment descriptor,
        the lowest 2 bits are the requested privilege level, and the bit in between selects GDT or LDT.
        The real-mode scheme of (segment base << 4) + offset is no longer used to form any PA;
        instead the segment descriptor supplies the segmentation and segment base used in protected mode
      4. Check the 16-bit GDT limit in the low half of the 48-bit GDTR register for out-of-bounds accesses;
        the upper 32 bits locate the global descriptor table in memory, which is initialized here.
        Each segment descriptor records the segment base address and the segment limit
      5. Set bit 0 (the PE bit) of the CR0 register to 1 to enter protected mode, then enable paging
      6. Enabling paging takes three steps:
        a. Initialize the page directory and page tables
        b. Write the page directory address into the CR3 register
        c. Set the PG bit of the CR0 register to 1
      7. Set up the stack
      8. Load the OS kernel from disk into memory and hand control of the machine to the OS kernel
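
A tiny sketch of the real-mode address arithmetic from the list above: physical address = (segment << 4) + offset, so CS=0xFFFF with IP=0x0000 yields the BIOS entry 0xFFFF0.

#include <cstdio>

// Real-mode 8086 address translation: no segment table, pure arithmetic.
unsigned real_mode_pa(unsigned segment, unsigned offset) {
    return (segment << 4) + offset;
}

int main() {
    printf("reset vector: 0x%X\n", real_mode_pa(0xFFFF, 0x0000));  // 0xFFFF0
    return 0;
}
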
  • ✅B:

    • To keep privileged instructions from being misused, the notions of user mode and kernel mode were introduced.
      The CPU decides which state it is in from the low 2 bits of the CS segment register:
      ring 0 is kernel mode, where the CPU may access any memory and execute every instruction;
      ring 3 is user mode, where the CPU may only touch memory the user requested and execute unprivileged instructions.
    • Common privileged instructions:
      1. start-device instructions
      2. halt instructions
      3. set-clock instructions
      4. I/O instructions
      5. storage protection
      6. clear-memory instructions
      7. interrupt operations
    • Things that run, or originate, in user mode:
      1. the command interpreter (shell)
      2. an external interrupt arriving
      3. a page fault occurring
      4. preparing the arguments of a system call
    • Things that run in kernel mode:
      1. process scheduling
      2. every interrupt handler in the interrupt vector table
      3. page fault handling
      4. clock interrupt handling
      5. executing system calls
    • How user mode enters kernel mode:
      1. interrupts, e.g. via a supervisor-call (trap) instruction
      2. exceptions (also called synchronous interrupts)
      3. soft interrupt, i.e. the system call (the concrete implementation of the trap; "trap" is the vivid name, since repeatedly crossing between kernel and user is like stepping into a trap)
  • ✅D:

    • The sti/cli instructions that unmask/mask interrupts are privileged. When a maskable interrupt arrives from outside the CPU,
      it is serviced if the interrupt-enable bit (IF) in the flags register is 1,
      and ignored while IF is 0.
    • Interrupt classification:
      1. Internal interrupts (non-maskable):
        generated by the currently executing instruction and detected inside the CPU.
        For traps such as the supervisor call, execution resumes at the next instruction once handling completes;
        for faults, handling the exception may allow the faulting instruction to continue
        1. traps, e.g. the supervisor-call (access-management) instruction
        2. software-detected faults, e.g. divide by 0, out-of-bounds access, an interrupt instruction embedded in the program code
      2. External interrupts (almost all maskable):
        1. peripheral requests, e.g. keyboard input, the printer
        2. human intervention, e.g. the console, the clock interrupt
    • A second classification of interrupts:
      1. Soft interrupts are produced by executing an interrupt instruction.
        The interrupt number is named directly by the instruction;
        no interrupt controller is involved, and soft interrupts cannot be masked.
      2. Hard interrupts are raised by peripherals;
        the interrupt number is supplied by the interrupt controller, and
        hard interrupts are maskable
    • Interrupt processing flow:
      1. Disable interrupts
      2. Hardware saves the breakpoint via the implicit interrupt sequence (the PC, or PC + PSW)
      3. The interrupt vector is delivered from the vector table to the CPU over the bus (done by hardware; after this step the CPU is in kernel mode)
      4. Software assists the hardware, saving the context and interrupt mask on a system-wide stack or on the process's own kernel stack
      5. Execute the interrupt service routine
      6. Restore the context and mask
      7. Enable interrupts
      8. Return from the interrupt
    • The saved context ("the scene"):
      the general-purpose registers (e.g. 32 of them) plus the PC
  • ✅C:

    • The ways from user mode into kernel mode:
      1. interrupts
      2. exceptions (synchronous interrupts)
      3. soft interrupt, i.e. system calls (the concrete implementation of the trap)
    • All system calls enter the kernel through this interrupt/trap mechanism, so interrupt handlers (including the system-call entry) really are the only doorway from user mode into kernel mode

9:

Which of the following statements about the stack is correct?
a. Push/pop operations by a user-mode process on its user stack trap into the operating system kernel
b. Different threads of the same process share one user-mode stack
c. Different threads of the same process share one interrupt stack
d. After a user-mode process traps into the kernel, the user-mode stack pointer is saved on the interrupt stack
e. The heap allocated by malloc may not be fully contiguous in physical memory
  • ❌A:

    • User mode uses the low 0~3G of the address space, which contains the user stack;
      kernel mode uses the high 3G~4G, which contains the kernel stack.
      Ordinary push/pop on the user stack is plain memory access and never traps into the kernel.
    • The interrupt stack may simply be the kernel stack (many nested interrupts make it easy to overflow),
      or a single dedicated interrupt stack shared by every process and thread in the OS (harder to overflow)
      1. If the kernel stack is used, interrupts can nest:
        when an interrupt arrives, the kernel stack starts out empty, so
        1. first push the user-mode stack pointer onto the kernel stack,
        2. then point the stack register at the kernel stack that was just empty.
        When the interrupt finishes,
        1. first write the user stack pointer at the top of the kernel stack back into the stack register,
        2. then pop the top of the kernel stack
      2. If an independent interrupt stack is used, interrupts cannot nest:
        the interrupt stack does not record which thread/process the interrupt occurred in, so control must return to that thread/process as soon as handling completes
  • ❌B:

    • A process is the basic unit of resource ownership. Threads of the same process share most of the address space, but not the stacks
    • By default a thread's stack is carved out of the process's address space, and every thread gets its own stack area.
      To keep threads from trampling each other's stacks there is a small guard area (guardsize) between thread stacks to isolate and protect them; a thread stepping into this guard area triggers a segmentation fault.
    • The main process stack grows dynamically at run time, and its eventual size has nothing to do with compiling and linking.
      The process stack is usually larger than a thread stack.
      A thread stack has a fixed size; view it with ulimit -a and change it with ulimit -s
  • ❓C:

    • If the kernel stack does not double as the interrupt stack, all threads share the single interrupt stack in the OS
    • If the kernel stack does double as the interrupt stack, the question becomes whether threads have separate kernel stacks
      1. Kernel stacks start out empty; a thread's kernel stack might in principle be shared with its process, or created per thread over time
      2. In practice, threads on Linux have independent kernel stacks.
  • ✅D: See the explanation of A

  • ✅E:

    • Memory from malloc is contiguous in VA, but each page of VA maps to a PA through the page table.
      So malloc'd memory need not be physically contiguous, though it must be logically contiguous.
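
A Linux/glibc sketch that reads the fixed thread-stack size and the guard area mentioned above, using the GNU extension pthread_getattr_np(); the stack size matches what ulimit -s reports. Compile with g++ file.cpp -lpthread.

#include <cstdio>
#include <pthread.h>

void *report(void *) {
    pthread_attr_t attr;
    size_t stack = 0, guard = 0;
    pthread_getattr_np(pthread_self(), &attr);   // GNU extension: attrs of a live thread
    pthread_attr_getstacksize(&attr, &stack);    // the fixed per-thread stack size
    pthread_attr_getguardsize(&attr, &guard);    // the guard area between stacks
    printf("stack size: %zu bytes, guard size: %zu bytes\n", stack, guard);
    pthread_attr_destroy(&attr);
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, report, nullptr);
    pthread_join(t, nullptr);
    return 0;
}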

10:

(Single-choice) Which of the following statements about threads is incorrect:
a. Concurrency cannot be achieved on a single-core processor
b. A thread is the smallest independently schedulable unit of the operating system
c. If the same pointer value is passed to different threads, dereferencing it refers to the same variable
d. Switching between threads requires saving some register values and the stack pointer
  • ❌A:
    • Concurrency means two or more events happen within the same period of time: simultaneous macroscopically, alternating microscopically
    • Parallelism means two or more events happen at the same instant
    • IBM launched the world's first multi-core processor, the POWER4, only in 2001,
      yet people could already play Minesweeper and listen to music at the same time on single-core machines, so concurrency does not require multiple cores
  • ✅B:
    • This is the textbook definition
    • Threads divide into kernel-level threads (KST) and user-level threads (ULT)
      1. KSTs have high concurrency (their only advantage), high overhead, low efficiency, and require switching into the kernel
      2. ULTs have low concurrency (their only disadvantage), low overhead, high efficiency, and need no kernel switch (by the way, ULT scheduling happens in user mode)
      3. Mappings can be one ULT to many KSTs, many ULTs to one KST, or many ULTs to many KSTs
      4. Go's goroutines and Python's coroutines are ULT-style
  • ✅C:
    • The pointer's value names a variable that already exists in the process and is passed to the different threads
    • Only variables on each thread's own stack are private; the rest of the address space is shared process-wide.
      So whether a process variable lives on the heap or on the process's stack, any thread handed its address reaches the same object
  • ✅D:
    • A thread switch must save the current thread's ID, state, stack, and register state, e.g.:
      1. SP: stack pointer, pointing at the top of the current stack
      2. PC: program counter, holding the next instruction to execute
      3. EAX: accumulator register, the default register for addition and multiplication

11:

For a 32-bit operating system, when the page size changes from 4KB to 8KB, it will cause
a. The page table becomes larger
b. The number of page-table-entry bits available for page information (dirty, r/w, valid/present, etc.) increases
c. The number of bits representing the offset in the page increases
d. The physical address that can be addressed becomes larger
  • A 32-bit operating system can address 2^32 bytes = 4 GB of memory
  • ❌A:
    Qualitatively: page table entries are 4 B each, so the fewer entries there are, the less memory the page table consumes.
    The number of entries equals the number of pages; with 8KB pages there are fewer pages, hence fewer page-table pages and less memory consumed.
    • With 4KB pages:
      2^20 pages in total.
      One entry needs a 20-bit frame number plus flag bits, rounded up to 4 bytes.
      One 4KB page holds 4KB / 4B = 1K entries,
      so 2^20 / 2^10 = 2^10 = 1K page-table pages are needed,
      each 4KB, for a total of 4MB.
    • With 8KB pages:
      2^19 pages in total.
      One entry needs only a 19-bit frame number plus flag bits, still 4 bytes.
      2^19 entries × 4 B = 2MB of page table in total,
      i.e. half as much: the page table gets smaller, not larger.
  • ✅B:
    • With 4KB pages:
      2^20 pages in total, and the in-page offset of a virtual address takes 12 bits;
      a page table entry needs a 20-bit frame number, leaving 12 bits for page information.
      With 8KB pages: 2^19 pages in total, and the in-page offset takes 13 bits;
      an entry needs only a 19-bit frame number, leaving 13 bits for page information (one extra available/AVL bit)
    • [Figure: page-information bits in a page directory entry]
    • Two ways of writing to memory after a Cache hit:
      1. write-through:
        when the CPU writes data into the cache, it writes a copy to memory at the same time, keeping the two consistent

      2. write-back:
        when the CPU writes data into the cache, it marks the updated cache area as dirty, and only writes it back to memory when that area is about to be replaced

      3. post-write (less common):
        when the CPU writes data into the cache, it puts the written data in a write buffer, and the buffer writes the data back to memory in due course

    • Two ways of handling a Cache write miss:
      1. write-allocate:
        read the target location into the cache, then perform a write-hit operation; a write miss thus behaves like a read miss
      2. no-write-allocate:
        do not read the target location into the cache; write the data straight to memory, so only read misses ever fill the cache
  • ✅C:
    1. With 4KB (2^12) pages there are 2^20 pages: the top 20 bits index the page tables and the low 12 bits are the in-page offset
    2. With 8KB (2^13) pages there are 2^19 pages: the top 19 bits index the page tables and the low 13 bits are the in-page offset
    3. The in-page offset of the logical address passes through unchanged; only the page-number bits go through the page directory and second-level page table
  • ❌D:
    • The addressable physical memory is 2^(number of address bus lines) bytes; it is fixed by the bus width, not by the page size
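
A small sketch of the arithmetic in this question: for a 32-bit address space it derives, per page size, the offset bits, the page count, and the total page-table size at 4 bytes per entry.

#include <cstdio>

int main() {
    const unsigned long long VA_SPACE = 1ULL << 32;  // 32-bit address space
    for (unsigned page : {4096u, 8192u}) {
        int offset_bits = 0;
        while ((1u << offset_bits) < page) offset_bits++;
        unsigned long long pages = VA_SPACE / page;  // number of page table entries
        unsigned long long table_bytes = pages * 4;  // 4 B per entry
        printf("page=%uKB offset_bits=%d pages=2^%d table=%lluMB\n",
               page / 1024, offset_bits, 32 - offset_bits, table_bytes >> 20);
    }
    return 0;
}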

12:

Which of the following statements about page faults is correct?
a. A page fault raises an exception that crashes the program
b. A page fault may occur because the accessed virtual address has no corresponding physical page mapped
c. A page fault presupposes a TLB miss
d. The page fault handler may modify page tables
  • ❌A:

    • When a page fault, an internal interrupt, occurs in user mode, the registers and stack pointer are saved and execution
      traps into kernel mode for handling; the program does not crash.
    • Page fault handling, step by step:
      1. The hardware traps into the kernel, saving the program counter on the stack.
        Most machines keep assorted state about the current instruction in special CPU registers.
      2. Context switches may happen while pages are swapped in and out,
        so the volatile state in the general-purpose registers must be saved
      3. Check the validity and protection bits of the virtual page address. If it is a protection fault, kill the process.
      4. If the VA simply has no PA yet, the OS finds a free page frame to map the VA to
      5. If there is no free frame, the page-replacement algorithm picks a victim page to evict.
        If the victim has been modified, its contents must be written back to disk first:
        mark the frame busy, issue the disk write, and context-switch (let other processes run while the write completes).
        Once the frame is clean, issue the disk read for the needed page, context-switching again while the page's contents are copied into the frame
      6. When the disk finishes filling the frame, it interrupts the operating system.
        The OS updates the page table entry in memory, mapping the virtual page to the freshly filled frame,
        and marks the frame as normal.
      7. Restore the pre-fault state and point the program counter back at the instruction that caused the page fault.
      8. The faulting process is scheduled again, and the operating system returns to the assembly routine.
      9. The assembly routine restores the scene, reloading the previously saved general-purpose registers.
  • ✅B:

    • How the CPU writes data to memory:
      1. The CPU checks the TLB in the MMU to see whether the PA for the VA is available directly.
      2. If not, a Translation Table Walk queries the page directory and finds the PA in the second-level page table
      3. With the PA in hand, check whether the Cache already holds that PA and its value; if not, fetch it
      4. After modifying memory, the PA and value may be loaded into the Cache
    • How a page fault arises:
      1. The TLB has no PA for the VA, and
      2. the page directory has no page table for the top 10 bits of the VA, or
      3. the page directory has a page table for the top 10 bits, but that table has no entry for the middle 10 bits
    • (The three classic kinds of page fault: minor, when the page is in memory but unmapped; major, when the page must be loaded from disk; and invalid, when the access itself is illegal)
  • ✅C:

    • A page fault means the page tables hold no mapping for the top 20 bits of the VA, so the starting PA of the page cannot be found.
    • The fact that we went to the page tables at all means the TLB had no PA for this VA, i.e. a TLB miss happened first
  • ✅D:

    • If a page table entry exists for the top 20 bits of the VA but the permissions are wrong, the process is killed outright
    • If no entry exists for the top 20 bits, the physical page is not in memory:
      1. If memory is not full, load the page from disk into a free frame directly
      2. If memory is full, the page-replacement algorithm writes a victim page back to disk first, then brings the needed page in from disk
      3. Once the new page arrives, a page table entry must be filled in to record the PA now backing this VA; so the handler does modify page tables
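
A Linux-only sketch making option D observable: first touches of freshly mmap'd anonymous pages fault, and the handler resolves each fault by filling in a page-table entry; getrusage() shows the minor-fault counter growing by about one per page.

#include <cstdio>
#include <sys/mman.h>
#include <sys/resource.h>

long minor_faults() {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;   // page faults served without disk I/O
}

int main() {
    const size_t len = 64 * 4096;   // 64 pages
    char *p = (char *)mmap(nullptr, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    long before = minor_faults();
    for (size_t i = 0; i < len; i += 4096)
        p[i] = 1;                   // first touch of each page -> page fault
    long after = minor_faults();
    printf("minor page faults while touching 64 pages: %ld\n", after - before);
    munmap(p, len);
    return 0;
}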

13:

Which of the following statements about caching Cache (memory cache, not TLB) is correct?
a. Find the corresponding data block based entirely on the virtual address
b. There are multiple levels of cache in the computer, the closer to the CPU, the smaller the cache
c. When the process is switched, the cache needs to be cleared
d. The cache hit rate is related to the program's memory-access order
  • ☀️ Background: the cache was already covered in computer organization
    1. The Cache is a high-speed buffer inside the CPU that stores PAs and the corresponding main-memory data; L2 and L3 lookups use the PA only.
      • Early ARM9 L1 caches used the virtual address as both index and tag (VIVT), which suffers badly from cache aliasing/ambiguity
      • Later, from ARM11 on, the virtual address is used as the index and the physical address as the tag (VIPT), which resolves the aliasing/ambiguity problem
      • VIPT workflow:
        1. While the VA is sent to the MMU/TLB for translation, it is also sent to the Cache to pick the set (set-associative mapping in practice). Once the MMU finishes translating and produces the PA, the PA selects the specific cacheline within the set
        2. That is, VI means the VA finds the cache set
        3. PT means the PA finds the specific cacheline within that set
    2. Why the Cache speeds things up: the locality principle
      1. Temporal locality: a recently accessed instruction or datum is likely to be accessed again soon, e.g. because of loops
      2. Spatial locality: the next instruction or datum accessed is likely to be adjacent in storage to what is in use now
    3. Partitioning memory:
      • Memory consists of 2^n byte addresses; every 2^b bytes form one block, so the in-block offset takes b bits
      • If there are m blocks, the top log2(m) bits of an address give the block number
      • Memory blocks are smaller than memory pages
        Cache blocks:
      • A cache address splits in two: the high bits give the cache block (line) number, the low bits the offset within the block (line length)
      • The cache is much smaller than main memory, so it has far fewer lines than memory has blocks
      • By spatial locality, the Cache caches main memory in units of blocks, not bytes
    4. The famous array-access-speed problem (see the sketch after this background block):
    • C/C++ arrays are laid out row-major in main memory: the last dimension is contiguous, then the next dimension
    • Spatial locality means that when arr[x][y] is cached, its neighbors come along into the same line, so traversing
      arr[][] row by row can be tens of times faster than traversing it column by column
    5. The basic structure of a Cache:
      • Cache storage body: holds instruction and data blocks brought in from main memory; exchanges information with main memory in units of blocks
      • Main-memory→Cache address translation: converts a main-memory address (PA) into a cache address (line number + in-block offset) by table lookup (the cache has far fewer lines than memory has blocks). To tell an all-zero entry meaning "nothing cached" apart from a cached PA that happens to be all zeros, each entry carries a valid bit: 1 means the mapping is live, 0 means it is not.
      • Replacement control unit (hardware): when the cache is full, replaces blocks according to the replacement algorithm and updates the translation table
      • and other parts
    6. Mapping rules between Cache lines and memory blocks:
      • Fully associative: a main-memory block may map to any cache line
      • Direct-mapped: a main-memory block may only map to one fixed cache line
      • Set-associative: a main-memory block may only map to the few lines of one particular set
    7. A worked Cache example:
      • Suppose a machine has a 256MB byte-addressed main memory and a data cache of 8 lines, each 64B long.
        Analysis: 256MB means 28-bit addresses; 8 lines means the translation table has 8 entries; a 64B line means a 6-bit in-block offset
      • Fully associative: lookup is O(n), but cachelines are replaced rarely:
        • The CPU issues address 1111 1000 0100 1010 1010 1010 1010: the last 6 bits are the in-block offset, and the first 22 bits are the tag recorded in the table entries
        • Using the first 22 bits (1111 1000 0100 1010 1010 10), search the translation table:
          [Figure: fully associative Cache address translation]
        • If the matching entry's valid bit is 1, return the data; otherwise fetch from memory and replace a cache line
    • Direct-mapped: lookup is O(1), but cachelines are replaced frequently:
      • For the same address, the last 6 bits are the in-block offset, the middle log(lines) = log 8 = 3 bits pick the line, and the first 19 bits are the tag stored in that line's entry
      • First lock onto the line with the middle 3 bits, then compare the first 19 bits against the entry's tag; on a match, access directly, otherwise fetch
        [Figure: direct-mapped lookup table]
    • x-way set-associative: lookup is O(x), and cachelines are replaced at a moderate rate
      • Suppose 2-way set-associative with the same 8 lines of 64 bytes, i.e. 4 sets
      • For the same address, the last 6 bits are the offset, the middle log(sets) = log 4 = 2 bits pick the set, and the first 20 bits are the tag stored in the set's entries
      • First lock onto the set with the middle 2 bits, then compare the first 20 bits against that set's entries; on a match, access directly, otherwise fetch
        [Figure: 2-way set associative lookup]
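
A sketch of the array-traversal point from the background block above: C/C++ arrays are row-major, so the row-by-row loop touches consecutive bytes and exploits spatial locality, while the column-by-column loop strides a whole row per step (the exact speedup is machine-dependent).

#include <chrono>
#include <cstdio>
#include <vector>
using namespace std;

const int N = 4096;
vector<int> a((size_t)N * N, 1);   // one N x N array, row-major layout

template <class F> double ms(F f) {
    auto t0 = chrono::steady_clock::now();
    f();
    return chrono::duration<double, milli>(chrono::steady_clock::now() - t0).count();
}

int main() {
    long long s1 = 0, s2 = 0;
    double row = ms([&] {          // row by row: consecutive addresses
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) s1 += a[(size_t)i * N + j];
    });
    double col = ms([&] {          // column by column: stride of N ints per step
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++) s2 += a[(size_t)i * N + j];
    });
    printf("row-major: %.1f ms, column-major: %.1f ms (sums %lld %lld)\n",
           row, col, s1, s2);
    return 0;
}
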
  • ❌A:
    • The L1 Cache locates data using both the VA and the PA (VIPT), so lookup is not based entirely on the virtual address
    • The L2 and L3 caches locate data by PA only
  • ✅B:
    • The L1 Cache sits closest to the CPU core; L2 and L3 sit farther away
    • The faster the storage device, the smaller its capacity and the higher its cost
  • ❓C:
    • On a process switch, the TLB that caches VA→PA mappings must be flushed (refreshed on misses and on permission downgrades)
    • On a process switch, the PAs in use change almost entirely, so the cached lines will very likely turn over
    • Cache ambiguity is the problem caused by leaving the Cache unchanged across a process switch;
      to prevent it, some designs also clear the Cache after a process switch, after which it takes a long stretch of misses to warm the cache back up
  • ✅D:
    • The Cache caches content according to temporal and spatial locality
    • If the instruction stream's accesses repeat in time and cluster in space, most memory accesses are served from the Cache, so the hit rate depends on the program's access order

14:

There is the following code in a multithreaded task, where x is a shared variable:
----------------------begin----------------------
static int x = 0;
lock.acquire();   // time T1
x = 1;            // time T2
lock.release();   // time T3
                  // time T4
-----------------------end-----------------------
Which of the following statements is correct?
a. At time T1, the value of x must be 0
b. At time T2, the value of x must be 0
c. At time T3, the value of x must be 1
d. At time T4, the value of x must be 1
e. None of the above statements are correct

  • ❌A:
    • For the first thread to enter the critical section, x has not been modified yet and is 0
    • For threads entering afterwards, x has already been set to 1
  • ❌B:
    • For the first thread into the critical section, x is still 0 just before x=1 executes
    • For later threads, x is already 1 before their x=1 executes
  • ✅C:
    • Only the current thread is inside the critical section; x=1 has just executed, and no other thread can modify x before the lock is released
  • ✅D:
    • By T4, at least one thread has executed x=1 and no thread ever sets x back to 0 (the 0 comes only from the static initializer), so once x becomes 1 it stays 1
  • ❌E

15:

Which of the following statements about condition variables (Condition Variable, cv) is correct?
a. The Wait(&lock) function must be called while holding the lock
b. The Signal() function will release the lock
c. When the Signal() function returns, the lock may not be held and the program needs to acquire the lock again
d. The Broadcast() function will successfully wake up at least one waiting thread
  • ❌A:
    • In this (semaphore-style) reading, wait() both applies for the lock and decrements the condition-variable count
    • A thread tries to take the lock while not holding it; if the count is > 0 it can acquire, and after acquiring, the count is decremented
    • If the count is 0 the lock cannot be taken, but the count is still decremented; -1 means one thread is blocked waiting for the lock
  • ✅B:
    • In the same reading, signal() releases the lock and increments the count
    • If the count after ++ is > 0, the lock is available to a wait()-er; otherwise |count| threads remain blocked waiting
  • ✅C:
    • After signal() releases the lock, the current thread rejoins the scramble for it and may well lose the next round, so the lock must be acquired again
  • ❌D:
    • pthread_cond_signal() wakes at least one thread waiting for the lock, which then competes for it
    • pthread_cond_broadcast() wakes all waiting threads to compete for the lock; if no thread is waiting, neither call wakes anyone, so "at least one" is not guaranteed
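
For comparison with the semaphore-style reading above, the standard POSIX condition-variable pattern: pthread_cond_wait() must be called with the mutex held; it atomically releases the mutex while sleeping and re-acquires it before returning, which is why the predicate is rechecked in a while loop. Compile with g++ file.cpp -lpthread.

#include <cstdio>
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
bool ready = false;

void *waiter(void *) {
    pthread_mutex_lock(&m);          // the mutex MUST be held before waiting
    while (!ready)                   // recheck: wakeups may be spurious
        pthread_cond_wait(&cv, &m);  // atomically: unlock m, sleep, relock m
    printf("waiter: ready\n");
    pthread_mutex_unlock(&m);
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, waiter, nullptr);
    pthread_mutex_lock(&m);
    ready = true;                    // change the predicate under the lock
    pthread_cond_signal(&cv);        // signal does NOT release the mutex
    pthread_mutex_unlock(&m);
    pthread_join(t, nullptr);
    return 0;
}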

16:

Which of the following statements about the three common types of file systems: FAT, FFS, and NTFS is correct?
a. The FAT file system has poor random-read performance on large files
b. The FFS file system uses an asymmetric (varying-depth) tree index, supporting efficient storage and lookup of both small and large files
c. The NTFS file system is friendliest to small files, because it can store their data directly in the MFT, while the other two file systems require indexing
d. The FAT file system can represent sparse files
e. The FAT file system uses the next-fit allocation algorithm, FFS uses first fit, and NTFS uses best fit
  • ✅A:

    • Finding a file in FAT, an explicit-link file system:
      1. Look up <file name : start block number> in the in-memory FCB table
      2. Read some data from the start block on the disk
      3. Look up the next block number for the start block in the FAT; it is not the end-of-chain mark
      4. Read some data from that block on the disk
      5. Look up the next block number in the FAT; it is the end-of-chain mark
      6. The search ends: all data has been found
    • So the x-th block can only be reached by walking the chain x hops,
      which is why FAT has poor random-read performance on large files.
    • To avoid repeated head movement, the FAT file system allocates file blocks by the next-fit rule
  • ✅B:

    • FFS addresses data by LBA rather than CHS
    • LBA divides the disk into a BOOT block and Block Groups.
      A Block Group is a large array whose entries contain 6 parts:
      the Block Bitmap and inode Bitmap are bitmaps, which is why FFS allocates first-fit.
      Files are stored in the Block Groups: the inode holds the file's metadata, while the data blocks hold the actual file contents. Every block has a fixed size, and file metadata lives in the inode rather than in the directory file that names it.
      FFS's index structure is a fixed-size, asymmetric (varying-depth) multi-level index tree; the file number indexes the inode table, which suits both small and large files.
      [Figure: FFS block-group and inode layout]
  • ✅C:

    • NTFS uses the best-fit algorithm; it does not scan the whole bitmap, only a cached portion.
      One NTFS feature, SetEndOfFile(), lets a file's expected size be declared at creation time
      [Figure: NTFS allocation]

    • All NTFS file metadata (per-file information, analogous to FFS inodes) is stored in the MFT area.
      Each MFT record is about 1KB, with the format: record header + attribute 1 + attribute 2 + attribute 3 + ...
      Attributes hold the file name, file size, modification time, and so on, and can be extended in length.
      The DATA attribute can hold a small file's complete contents directly, or pointers to a large file's extents
      [Figure: NTFS MFT record layout]

    • NTFS directories are organized as B-trees or B+ trees

      1. MFT file number 0 is the MFT itself
      2. MFT file number 5 is the root directory /
      3. MFT file number 6 is the free-space bitmap
      4. MFT file number 8 is the volume's bad-block list
      5. MFT file number 9 is $Secure, the security access-control information
  • ❌D:

    • Of the three, only FAT does not support sparse files. NTFS's advantages over FAT include:
      1. file encryption & file and folder permissions
      2. sparse files & disk compression
      3. larger single-file and partition size limits
      4. SetEndOfFile(), declaring a file's expected size at creation (quota-style reservation)
      5. support for Active Directory & domains
    • FFS supports sparse files:
      one or more runs of empty space sit between stretches of file data, and the empty runs occupy no disk space
    • NTFS also supports sparse files:
      runs of useless zero bytes are recorded compactly and no longer take real space
    • ls shows a sparse file's logical size, which can be far larger than the disk usage du reports (see the sketch after this question)
  • ✅E:

    • When they were invented:
      1. FAT, the File Allocation Table, was invented by Microsoft in the 1970s and is still used today in flash drives and digital cameras
      2. FFS was invented in the 1980s with good spatial locality; EXT2 and EXT3 are descended from it
      3. NTFS, the New Technology File System, was invented by Microsoft in the 1990s; it is Microsoft's mainstream file system and belongs to the same generation as EXT4, XFS, and Apple's file systems
    • Allocation algorithms:
      1. FAT uses next fit
      2. FFS uses first fit
      3. NTFS uses best fit
    • Logical structure:
      1. FAT is a singly linked list
      2. FFS is an asymmetric tree with good spatial locality
      3. NTFS uses B/B+ trees, a more flexible tree structure
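
A Linux sketch of the sparse-file point above (the file name is illustrative): seeking far past the end before writing leaves a hole, so stat() reports a large logical size (what ls shows) but few allocated blocks (what du shows).

#include <cstdio>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    int fd = open("sparse.bin", O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0) return 1;
    lseek(fd, 100 * 1024 * 1024, SEEK_SET);  // jump 100 MB ahead: creates a hole
    write(fd, "x", 1);                       // one real byte at the end
    close(fd);

    struct stat st;
    stat("sparse.bin", &st);
    printf("logical size: %lld bytes, allocated: %lld bytes\n",
           (long long)st.st_size,
           (long long)st.st_blocks * 512);   // st_blocks counts 512-byte units
    return 0;
}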

17:

Which of the following statements about the virtual file system (vfs) is correct?
a. It is merely the definition of a standard API
b. Its purpose is to better support different types of I/O hardware devices
c. Like traditional file systems, vfs also has the concepts of inode and dentry
d. I/O system calls are received by vfs first and then passed on to the corresponding file system
  • ❌A:
    • Early OSes were tailor-made for their hardware. After network file systems appeared, people became interested in supporting multiple file system types within a single system.
    • A modern VFS supports dozens of file systems, lets new features and designs stay transparent to applications, and adds a layer independent of the backing store, supporting in-memory file systems, configurable pseudo-filesystems, and network file systems
    • So the VFS is not just an API wrapper; it is a substantial body of code in its own right
  • ✅B:
    • The VFS hides hardware differences from the user's point of view, exposing only:
      1. a single programming interface: the unified POSIX functions that wrap the system calls
      2. a single file-system tree, onto which remote file systems can be mounted transparently
      3. optional per-filesystem libraries
  • ✅C:
    • The VFS also manages disks with block groups, each containing 6 parts
      1. The superblock reflects the real file system: its type, size, and status
      2. There are two kinds of inode: the VFS inode and the concrete file system's inode.
        The former lives in memory, the latter on disk; each use copies the disk inode into the in-memory inode, through which the disk file is then accessed.
      3. The directory entry, dentry, describes a file's logical attributes. Every file has one,
        containing a pointer to its inode. The dentries of all files form one huge directory tree, and
        dentries exist only in memory, with no direct on-disk counterpart.
        A valid dentry is a structure whose internal inode pointer must be valid
  • ✅D:
    • What the Linux system actually presents is the VFS; its layering is as follows:
      [Figure: VFS layering]

18:

Which of the following statements about the RAID 5 disk redundancy technology implemented using the parity method is correct?
a. Compared with RAID 1 full mirroring, it saves disk space
b. When more than 1 disk is damaged, data cannot be recovered
c. When only 1 disk is damaged, but it is unknown which disk, data cannot be recovered
d. The parity value is not placed on the same disk mainly to prevent the disk from becoming an I/O bottleneck
  • RAID 0: data striped across multiple disks, no redundancy at all

  • RAID 1: every disk has a mirror disk

  • RAID 10: data striped across multiple disks, each with a mirror

  • RAID 5: with n > 2 disks, data is split into (n-1) strips and one parity strip is computed over them.
    Writing proceeds in rounds: in each round one disk is chosen to hold the parity strip and the rest hold data,
    and the disk chosen for parity rotates from round to round.

  • ✅A:
    - RAID 1 consumes the most disk space; RAID 5 spends only one disk's worth on parity

  • ✅B:
    - The parity is an XOR: it can reconstruct at most one missing disk, so two or more failures are unrecoverable

  • ❌C:
    - You do not need to know in advance which disk failed; the parity plus the surviving disks' contents are enough to rebuild the lost one (see the sketch after this question)

  • ✅D:
    - When a strip is read or written:
    1. If parity is spread over every disk, parity traffic is shared by all heads
    2. If parity lived on one dedicated disk, every write would force that disk's head to move, making it an I/O bottleneck
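
A sketch of the parity idea behind options B and C: parity = d0 ^ d1 ^ d2, and XOR-ing the parity with the surviving blocks reconstructs any single lost block, with no need to know anything beyond which disk died.

#include <cstdio>

int main() {
    unsigned char d[3] = {0x12, 0x34, 0x56};    // data blocks on 3 disks
    unsigned char parity = d[0] ^ d[1] ^ d[2];  // parity block on a 4th disk

    // disk 1 is lost: XOR the parity with the surviving blocks to rebuild it
    unsigned char rebuilt = parity ^ d[0] ^ d[2];
    printf("lost 0x%02X, rebuilt 0x%02X\n", d[1], rebuilt);
    return 0;
}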


19:

Which of the following statements about virtual machines is correct?
a. The operating system kernel inside the virtual machine (the guest kernel) runs in kernel mode and can execute privileged instructions
b. When an interrupt occurs, the hardware decides whether to deliver it to the host kernel or the guest kernel
c. When the guest kernel returns to a guest user program via the iret instruction, it traps into the host kernel
d. With shadow page tables implementing the virtual machine's memory mapping, the host kernel must track the guest kernel's page-table modifications
  • A summary of virtual machine fundamentals:

  • Three conditions of virtualization:

    1. Equivalence: VMM needs to simulate the same environment for the virtual machine on the host machine as it runs on the physical machine
    2. Efficiency: The execution performance of virtual machine instructions needs to have no significant loss compared with the performance directly executed on a physical machine
    3. Resource control: VMM can fully control system resources, and vmm coordinates and controls the resource allocation of host machines to virtual machines
  • The trap-and-emulate model:

    1. "Privilege-level compression": both the virtual machine's kernel mode and its user mode run in the physical machine's user mode.
    2. When the virtual machine executes unprivileged instructions, the CPU executes them directly
    3. When the virtual machine executes a privileged instruction, it is really executing it in user mode, which raises a processor exception and traps into the VMM; the VMM's exception handler completes the access to system resources on the virtual machine's behalf, i.e. the VMM emulates kernel mode.
    4. "Privilege-level compression" also satisfies the resource-control requirement: the virtual machine cannot run privileged instructions directly, so it cannot tamper with the host's resources and wreck the host's environment.
  • The obstacle to virtualizing the x86 architecture:

    1. By the privilege level declared in the CPU's CS segment register, instructions divide into privileged ones that touch system resources and unprivileged ones that do not.
    2. On x86, however, some sensitive yet unprivileged instructions can also reach system resources, and the virtual machine executing them is never intercepted by the VMM
    3. Two workarounds: one is to modify the guest code, which violates the transparency requirement of virtualization; the other is binary translation, which rewrites sensitive instructions into equivalent privileged ones. Dynamic translation beats static translation, and no new binaries need to be produced.
  • Memory virtualization:

    1. Guest and host each have their own address spaces, and each space is split by processes into virtual and physical addresses. The question is how the virtual machine reaches the physical memory of the physical machine
    2. Basic terms:
      HPA: Host Physical Address
      HVA: Host Virtual Address
      GPA: Guest Physical Address
      GVA: Guest Virtual Address
      PDBR: page-directory base register, called CR3 on x86
      EPT: Extended Page Tables
    3. A traditional MMU only handles VA -> PA; under virtualization the path is GVA -> GPA -> HVA -> HPA. Before hardware-assisted memory virtualization appeared, this was done in software by the VMM; the classic implementation is the shadow page table.
    4. Shadow page table:
      the VMM folds the page tables of guest and host into a single table, the shadow page table, realizing the GVA -> HPA mapping directly.
      [Figure: shadow page table]
    5. How the shadow page table realizes GVA -> HPA:
      • GVA -> GPA: the VMM marks the host physical pages backing the guest's page tables as write-protected. When the guest writes a GVA -> GPA entry, the write to a read-only page causes a VM exit and traps into the VMM. (The VM exit process is detailed under CPU virtualization.)
      • GPA -> HVA: done by VMM software; this is essentially ordinary allocation, like a malloc
      • HVA -> HPA: the familiar step, done by the physical MMU translating the VMM process's virtual memory to physical memory
      • The end-to-end GVA -> HPA mapping along this path is what gets recorded in the shadow page table
  • Intel's hardware handling of sensitive instructions:

    1. Intel did not turn unprivileged sensitive instructions into privileged ones; after all, not every privileged instruction needs intercepting.
      For example, a kernel-mode process switch must load CR3 with the page directory of the new current process
    2. Originally the VMM had to trap every guest-kernel write to the CR3 register so it could point CR3 at the shadow page table.
      With hardware EPT support, CR3 no longer needs to point at a shadow page table; it can point at the guest kernel's own page table,
      so the VMM need not intercept guest operations on CR3, and this sensitive instruction no longer traps into the VMM
    3. The host's operating mode is called VMX Root Mode;
      the virtual machine's is called VMX Non-Root Mode.
      The CR3 register identifies whether guest or host is running.
      Switching from host to virtual machine is called VM entry;
      switching from virtual machine back to host is called VM exit
    4. The VMM runs in VMX Root Mode.
      With EPT hardware support, the virtual machine no longer needs privilege-level compression: it runs its own kernel mode and user mode directly inside Non-Root Mode.
    5. The VMM in Root Mode executes the CPU virtualization instruction VMLAUNCH to switch into Non-Root Mode, a VM entry.
      When a sensitive instruction appears, the CPU switches from Non-Root Mode back to Root Mode, a VM exit, and the VMM then emulates the sensitive operation with privileged instructions
    6. A hardware-supported VMX-mode CPU differs in three ways:
      • Guest user mode can enter guest kernel mode directly, without passing through the host's kernel mode
      • When the guest CPU receives an interrupt, the CPU exits from guest to host mode, the host kernel handles the interrupt, and execution returns to guest mode afterwards; I/O can also be handled directly in guest mode
      • Originally every privileged instruction caused a VM exit into the host kernel for the VMM to handle. Now privileged instructions that need no VMM intervention run directly in the guest; sensitive instructions still VM-exit to the host kernel, where the VMM emulates them with privileged instructions.
    7. VMX defines a context-saving data structure, the VMCS: one part saves the running state of host and guest, another controls the guest's behavior.
  • ❌A:

    • Under privilege-level compression, both the guest's kernel and its user programs run in the host's user mode, so the guest kernel cannot execute privileged instructions directly
  • ❌B:

    • After an interrupt occurs, the host kernel receives it first, saves the stack pointer and register state, and only then does the guest kernel's handler run; the host kernel, not the hardware, decides
  • ✅C:

    • When the guest kernel finishes interrupt handling and tries to iret straight back to the guest user program, a CPU exception occurs, control returns to the host kernel, and the host kernel then resumes the guest user program
  • ✅D:

    • When the host kernel modifies a page table itself, it naturally knows about the change
    • When the guest kernel tries to modify its page table, the write-protection fault returns control to the host kernel, and the VMM rewrites the shadow page table and the OS page table

20:

(Single choice) Which of the following operations may not cause the user mode to switch to the kernel mode?
a. Page fault exception
b. Call a string function in libc
c. Divide by zero
d. User program opens a file on disk
  • ❌A:
    • A page fault triggers the page-fault handler, which is interrupt handling and must enter kernel mode
  • ✅B:
    • libc is the system-call library, and many of its APIs wrap one system call; e.g. the open() used by applications corresponds to the system call of the same name.
    • But library APIs and system calls are not one-to-one. Applications obtain kernel services through system calls, yet functions such as string operations need no help from the kernel and are not tied to any system call (see the sketch after this question).
    • User mode enters kernel mode via interrupts, exceptions, and system calls; every system call enters kernel mode, but a pure string function makes none
  • ❌C:
    • Dividing by 0 raises a CPU exception, which must be handled in kernel mode
  • ❌D:
    • Opening a disk file is file I/O, involving the open(), close(), and read()/write() system calls, which enter kernel mode
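
A sketch of option B (the file path is illustrative): strlen() runs entirely in user mode, while open() wraps a system call; running this under strace would show syscalls for open/close but none for the string work.

#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main() {
    char buf[] = "hello, kernel";
    size_t n = strlen(buf);        // pure user-mode library code: no trap
    printf("len=%zu\n", n);

    int fd = open("/etc/hostname", O_RDONLY);  // traps into the kernel
    if (fd >= 0) close(fd);                    // another system call
    return 0;
}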

21:

Which of the following operations will immediately change the TLB content?
a. Adding a mapping to a page table entry
b. Modifying the physical page address mapped by a page table entry
c. Thread A switching to thread B within the same process
d. User-mode process A switching to user-mode process B
  • ☀️ The TLB is an address cache and must stay consistent with the page-table PTEs
    • Common situations that flush the TLB:
      1. a TLB miss
      2. a process switch (not merely entering the kernel; only switching between different processes counts)
      3. a change in page-table-entry permissions
      4. less commonly, TLB shootdowns from other cores
    • Flushes tied to page-table consistency:
      1. When a PTE changes, e.g. a page is evicted during a page fault, only the corresponding TLB entry is cleared
  • ✅A:
    • Since we cannot be sure whether the TLB currently caches a stale VA→PA mapping, adding the new mapping means the corresponding TLB entry must be flushed
  • ✅B:
    • Changing the mapped physical page address modifies the page table entry, which must be kept consistent with the TLB, so that TLB entry is flushed immediately
  • ❌C:
    • Threads of the same process share one address space and one page table, so switching between them changes no mappings and the TLB can stay as it is
  • ✅D:
    • Different processes share almost no physical pages, so the user-portion TLB entries must be flushed on a process switch
    • The TLB entries for the kernel portion, however, change little across the switch
    • Note that merely entering the kernel does not flush the TLB: if it did, the kernel would face an empty TLB and its instruction and data accesses would crawl, and the same would happen in user mode after every return from the kernel

Postscript:

  • Four months have passed since the last post, and we have entered 2023. My four-month internship is about to end, and I will start preparing for the postgraduate entrance exam. I hope to pass on my first attempt.

Origin blog.csdn.net/buptsd/article/details/128869779