Lock granularity and the space-time tradeoff

1 Cases where space is exchanged for time

1.1 Redis user-group rate limiting and per-user rate limiting

Redis user-group rate limiting and per-user rate limiting: Using Redis for rate limiting means maintaining users' access limits in the Redis database. The limit can be implemented with algorithms such as counters, sliding windows, or token buckets. Group rate limiting sets a shared limit for a specific group of users, while per-user rate limiting sets a custom limit for each individual user.

User group rate limiting and per-user rate limiting: Maintaining each user's rate-limit state in Redis costs extra memory, but it makes checking whether a user has reached the threshold very fast.
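To make the tradeoff concrete, here is a minimal fixed-window counter sketch, assuming the Jedis client; the key prefix rate: and the RedisRateLimiter class are hypothetical names. Each user gets their own counter key in Redis (the extra space), so checking the limit costs a single INCR (the time saved).

```java
import redis.clients.jedis.Jedis;

public class RedisRateLimiter {
    private final Jedis jedis;
    private final int limit;          // max requests allowed per window
    private final int windowSeconds;  // length of the fixed window

    public RedisRateLimiter(Jedis jedis, int limit, int windowSeconds) {
        this.jedis = jedis;
        this.limit = limit;
        this.windowSeconds = windowSeconds;
    }

    /** Returns true if this request is allowed, false if the user has hit the limit. */
    public boolean allow(String userId) {
        String key = "rate:" + userId;         // one counter key per user: space traded for a fast check
        long count = jedis.incr(key);
        if (count == 1) {
            jedis.expire(key, windowSeconds);  // the first hit starts a new window
        }
        return count <= limit;
    }
}
```

A group limiter works the same way, just keyed by a group id instead of the user id.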

1.2 Flash-sale scenario: multiple sub-inventories vs. a single total inventory

Database row locks in the flash-sale scenario: In a high-concurrency flash sale, using database row locks to keep inventory data consistent is very common. However, requests can only deduct from the single locked inventory row one at a time, so concurrent performance is poor.

Therefore, you can add several sub-inventory rows to the table whose initial sum equals the total inventory. When a request arrives, a load-balancing strategy picks one sub-inventory to deduct from; once a sub-inventory is deducted to 0, routing to it is closed.

This is also the practice of exchanging space for time.
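As a sketch of the sub-inventory idea, the snippet below assumes plain JDBC and a hypothetical sub_inventory table with product_id, shard_id and stock columns; the random shard pick stands in for a real load-balancing strategy, and a production version would also retry another shard or close routing once a shard hits 0.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.ThreadLocalRandom;

public class ShardedInventory {
    private static final int SHARDS = 8;  // number of sub-inventory rows per product (hypothetical)

    /** Tries to deduct one unit from a randomly chosen sub-inventory row. */
    public boolean deduct(Connection conn, long productId) throws SQLException {
        int shard = ThreadLocalRandom.current().nextInt(SHARDS);  // simple load balancing: pick a random shard
        String sql = "UPDATE sub_inventory SET stock = stock - 1 " +
                     "WHERE product_id = ? AND shard_id = ? AND stock > 0";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, productId);
            ps.setInt(2, shard);
            return ps.executeUpdate() == 1;  // 0 rows updated means this shard is already empty
        }
    }
}
```

Because different requests usually hit different rows, they take different row locks and no longer queue up behind one another.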

1.3 Using multiple message queues or a single message queue across services

Multiple message queues vs. a single message queue across services: In a multi-service system, using multiple message queues helps isolate different business flows and improves scalability and maintainability. A single message queue is better suited to scenarios where the business is relatively simple or the global ordering of messages must be guaranteed.

  • Multiple message queues vs. a single message queue across services: Using multiple message queues requires more memory and storage to maintain the queues, but it isolates different business flows better and increases the speed of concurrent consumption.
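As an illustration of the multi-queue variant, here is a hedged sketch assuming Kafka's Java producer; the OrderEventPublisher class and the per-business topic names are hypothetical. Each business flow publishes to its own topic, so its consumers can be scaled and fail independently of the others.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class OrderEventPublisher {
    private final KafkaProducer<String, String> producer;

    public OrderEventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    /** Each business flow gets its own topic (e.g. "order-paid", "order-shipped"), consumed independently. */
    public void publish(String businessTopic, String orderId, String payload) {
        producer.send(new ProducerRecord<>(businessTopic, orderId, payload));
    }
}
```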

1.4 ConcurrentHashMap segment locks in JDK 1.7 and slot locks in JDK 1.8

ConcurrentHashMap segment locks in JDK 1.7 and slot locks in JDK 1.8: In JDK 1.7, ConcurrentHashMap uses segment locking, dividing the data into multiple segments that are each locked independently, so different threads can operate on different segments at the same time and concurrency improves. In JDK 1.8, ConcurrentHashMap no longer uses segment locks; instead it locks individual slots (buckets) with synchronized plus CAS, and introduces red-black trees to optimize lookups in long bucket chains.

  • ConcurrentHashMap segment locks in JDK 1.7 and slot locks in JDK 1.8: Using segment locks or slot locks requires extra memory to maintain lock state, but it improves ConcurrentHashMap's concurrency and lookup performance.
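To make the lock-granularity point concrete, here is a minimal lock-striping sketch in the spirit of JDK 1.7's segments; it is not the JDK's actual implementation, and the StripedMap class is hypothetical. The extra space is the array of lock objects and per-stripe maps; the time saved is that threads touching different stripes never block each other.

```java
import java.util.HashMap;
import java.util.Map;

public class StripedMap<K, V> {
    private static final int STRIPES = 16;   // comparable to JDK 1.7's default segment count
    private final Object[] locks;
    private final Map<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    public StripedMap() {
        locks = new Object[STRIPES];
        buckets = (Map<K, V>[]) new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
            buckets[i] = new HashMap<>();
        }
    }

    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;  // map the key to one stripe
    }

    public V put(K key, V value) {
        int i = stripeFor(key);
        synchronized (locks[i]) {   // only this stripe is locked; the other stripes stay available
            return buckets[i].put(key, value);
        }
    }

    public V get(K key) {
        int i = stripeFor(key);
        synchronized (locks[i]) {
            return buckets[i].get(key);
        }
    }
}
```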

1.5 Redis sharded counters

Method: Set up multiple sub-counters across multiple Redis instances. Each request increments one of the counters chosen by a load-balancing strategy, and all the counters are summed when a total is needed.

Advantages: Distributing counters on multiple Redis instances can effectively reduce the pressure on a single Redis instance and improve the overall concurrent processing capability. Each Redis instance can process requests in parallel, increasing throughput.
Disadvantages: Multiple counters take up more memory, summing them requires more network round trips, and there may be consistency issues between the shards.
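A minimal sketch of the sharded counter, again assuming the Jedis client; the key name counter:pv and the ShardedCounter class are hypothetical, and each Jedis in the list is assumed to point at a different Redis instance.

```java
import redis.clients.jedis.Jedis;

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class ShardedCounter {
    private final List<Jedis> shards;  // one client per Redis instance

    public ShardedCounter(List<Jedis> shards) {
        this.shards = shards;
    }

    /** Increment one randomly chosen shard, so writes spread across instances. */
    public void increment() {
        int i = ThreadLocalRandom.current().nextInt(shards.size());
        shards.get(i).incr("counter:pv");
    }

    /** Sum all shards; reading the total costs one round trip per instance. */
    public long total() {
        long sum = 0;
        for (Jedis jedis : shards) {
            String value = jedis.get("counter:pv");
            if (value != null) {
                sum += Long.parseLong(value);
            }
        }
        return sum;
    }
}
```

The total() call shows the disadvantage directly: one extra round trip per shard, and the shards are read at slightly different moments, which is the consistency caveat mentioned above.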

2 Cases where time is exchanged for space

Time-for-space is a common optimization strategy: save space by spending more time. It is often used in resource-constrained environments such as embedded systems and mobile devices. Here are some common usage scenarios:

  1. Data Compression: Save storage space by compressing data, at the cost of the extra time needed for compression and decompression.

  2. Algorithm Optimization: Use algorithms that need less space, possibly at the cost of more time. For example, choose an in-place sorting algorithm such as bubble sort rather than one that needs extra space such as merge sort.

  3. On-demand computation (lazy loading in the singleton pattern): Compute a value when it is needed rather than precomputing and storing the result. For example, Fibonacci numbers can be computed on demand instead of precomputing and storing the whole sequence (see the singleton sketch after this list).

  4. Use simple data structures: Use simple structures such as arrays and linked lists to save space, although lookups and other operations may take more time.

  5. Database normalization: Reduce data redundancy and save storage space through normalization, although it may increase query complexity and query time.

  6. Online processing: When processing data, read directly from the input stream, output results as soon as they are processed, and do not store intermediate results. This saves space, but reprocessing the data means starting over from the beginning.
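As a concrete example of item 3 (on-demand computation via lazy loading), here is the standard initialization-on-demand-holder singleton in Java; the LazyConfig class and its expensive constructor are hypothetical.

```java
/** Lazy-initialized singleton: the instance is built only on first use. */
public class LazyConfig {

    // The holder class is not loaded until getInstance() is called,
    // so no memory is spent on the instance up front.
    private static class Holder {
        static final LazyConfig INSTANCE = new LazyConfig();
    }

    private LazyConfig() {
        // imagine an expensive load of configuration data here
    }

    public static LazyConfig getInstance() {
        return Holder.INSTANCE;  // thread-safe lazy initialization via class loading
    }
}
```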

It should be noted that trading time for space is not always appropriate: in some situations space is the scarcer resource, in others time is. Whether to use the time-for-space strategy should therefore be weighed against the specific application scenario and its requirements.

Origin blog.csdn.net/yxg520s/article/details/132301663