Java Interview Questions: Redis (Non-Relational Databases)

1. Common Redis data structures

String, Hash, List, Set, ZSet (sorted set)

2. Underlying implementation of ZSet

ziplist (when the set is small) / skiplist plus a hash table (when it grows large)
Reference

3. Persistence options

  • RDB: the default; periodically saves point-in-time snapshots of the dataset.
  • AOF: logs every write command by appending it to a log file (append-only).

Reference

4. Rehash

Rehashing refers to expanding or shrinking a hash table. To keep the hash table's load factor within a reasonable range, when the table holds too many or too few keys, the program expands or shrinks the table accordingly.
Reference
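The resize rule above can be sketched in Java with a toy chained hash table. This is a simplification for illustration: real Redis rehashes incrementally between two tables, and the initial capacity and load-factor threshold here are assumptions, not Redis's actual values.

```java
import java.util.LinkedList;

// Toy chained hash table that rehashes (doubles capacity) when the load
// factor (entries / buckets) reaches 1.0. Illustrative only: Redis performs
// this migration incrementally across two tables, not in one pass.
public class RehashTable {
    private LinkedList<String>[] buckets;
    private int size = 0;

    @SuppressWarnings("unchecked")
    public RehashTable() { buckets = new LinkedList[4]; }

    public void add(String key) {
        // Expand before inserting once the load factor reaches 1.0.
        if ((double) size / buckets.length >= 1.0) rehash(buckets.length * 2);
        int i = Math.floorMod(key.hashCode(), buckets.length);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        if (!buckets[i].contains(key)) { buckets[i].add(key); size++; }
    }

    public boolean contains(String key) {
        int i = Math.floorMod(key.hashCode(), buckets.length);
        return buckets[i] != null && buckets[i].contains(key);
    }

    @SuppressWarnings("unchecked")
    private void rehash(int newCapacity) {
        LinkedList<String>[] old = buckets;
        buckets = new LinkedList[newCapacity];
        size = 0;
        // Re-insert every key so it lands in its new bucket.
        for (LinkedList<String> bucket : old)
            if (bucket != null) for (String k : bucket) add(k);
    }

    public int capacity() { return buckets.length; }
}
```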

5. Characteristics of Redis transactions

Redis Learning II: Transaction

6. Why can single-threaded Redis reach 100,000+ QPS?

(1) Pure in-memory operations
(2) Single-threaded execution, which avoids frequent context switches
(3) A non-blocking I/O multiplexing mechanism

7. Redis threading model

(Classic figure of the Redis threading model — image not included)
Reference

8. Redis expiration policies and memory eviction

Expiration policy:
Periodic deletion + lazy deletion.
Periodic deletion: by default, every 100 ms Redis randomly samples some of the keys that have an expiration time set, checks whether they have expired, and deletes the expired ones.
Lazy deletion: when you fetch a key, Redis checks whether that key has an expiration time set and whether it has expired; if it has expired, Redis deletes it and returns nothing.
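The two deletion strategies can be sketched in-process. This is a simplification under stated assumptions: the clock is passed in explicitly to keep it deterministic, and the sweep is invoked manually rather than on Redis's ~100 ms cron.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of Redis-style expiry: lazy deletion on read, plus a periodic
// pass that samples random keys carrying a TTL and evicts expired ones.
public class ExpiringStore {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Long> expireAt = new HashMap<>();
    private final Random rnd = new Random();

    public void set(String key, String value, long ttlMillis, long now) {
        data.put(key, value);
        if (ttlMillis > 0) expireAt.put(key, now + ttlMillis);
    }

    // Lazy deletion: expiry is checked only when the key is accessed.
    public String get(String key, long now) {
        Long deadline = expireAt.get(key);
        if (deadline != null && now >= deadline) {
            data.remove(key);
            expireAt.remove(key);
            return null;  // expired key behaves as if absent
        }
        return data.get(key);
    }

    // Periodic deletion: sample up to `sample` random keys that have a TTL
    // set and evict the expired ones (Redis runs this roughly every 100 ms).
    public void periodicSweep(int sample, long now) {
        List<String> keys = new ArrayList<>(expireAt.keySet());
        for (int i = 0; i < sample && !keys.isEmpty(); i++) {
            String key = keys.remove(rnd.nextInt(keys.size()));
            if (now >= expireAt.get(key)) {
                data.remove(key);
                expireAt.remove(key);
            }
        }
    }

    public int size() { return data.size(); }
}
```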

Memory eviction mechanism:
Why is memory eviction needed?
If periodic deletion misses an expired key and that key is never requested again (so lazy deletion never fires), the key stays in memory and Redis memory usage keeps growing. That is when the memory eviction mechanism steps in.
Reference

9. Redis hash slots

Reference 1
Reference 2
Redis cluster description
consistent hashing and hash slot
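For illustration, here is a hash-slot sketch using the CRC-16/XMODEM polynomial that Redis Cluster uses, including `{hash tag}` handling so related keys land in the same slot. Treat it as a sketch, not the reference implementation.

```java
import java.nio.charset.StandardCharsets;

// Sketch of Redis Cluster slot assignment: slot = CRC16(key) mod 16384,
// where only the {hash tag} substring is hashed if one is present.
public class HashSlotDemo {
    public static int slot(String key) {
        int start = key.indexOf('{');
        if (start != -1) {
            int end = key.indexOf('}', start + 1);
            // A non-empty {tag} means only the tag is hashed.
            if (end != -1 && end != start + 1) key = key.substring(start + 1, end);
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    // CRC-16/XMODEM: polynomial 0x1021, initial value 0, no reflection.
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                if ((crc & 0x8000) != 0) crc = (crc << 1) ^ 0x1021;
                else crc <<= 1;
                crc &= 0xFFFF;
            }
        }
        return crc;
    }
}
```

Because `{user}.following` and `{user}.followers` share the tag `user`, they map to the same slot, which is what makes multi-key operations on them possible in a cluster.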

10. How does Redis implement a delay queue?

Use a ZSet with the timestamp as the score: producers add tasks with ZADD, and consumers fetch due tasks with ZRANGEBYSCORE.
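An in-process sketch of that pattern, with a TreeMap standing in for the sorted set (an assumption for illustration; a real implementation would issue the same ZADD / ZRANGEBYSCORE operations against Redis through a client such as Jedis):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Delay-queue sketch: the "score" is the task's due timestamp. Producing
// mirrors ZADD; consuming mirrors ZRANGEBYSCORE 0..now followed by ZREM.
public class DelayQueueSketch {
    private final TreeMap<Long, List<String>> zset = new TreeMap<>();

    // ZADD delayq <dueAt> <task>
    public void produce(String task, long dueAt) {
        zset.computeIfAbsent(dueAt, k -> new ArrayList<>()).add(task);
    }

    // ZRANGEBYSCORE delayq 0 <now>, then remove: pop every task that is due.
    public List<String> consumeDue(long now) {
        List<String> due = new ArrayList<>();
        SortedMap<Long, List<String>> head = zset.headMap(now, true);
        for (List<String> tasks : head.values()) due.addAll(tasks);
        head.clear();  // clearing the view removes the entries from zset
        return due;
    }
}
```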

11. Cache avalanche, cache penetration, cache breakdown, cache warming, cache updates, cache downgrade, and related issues

(1) Cache avalanche: at a certain moment a large number of cache entries are updated or expire at once, so requests miss the cache and all of them hit the database, driving database CPU and memory load up and even bringing it down.
solution:

  • Set different expiration times to stagger cache expiry and avoid a concentrated burst of misses; this spreads cache expirations evenly along the time axis so that no single point in time sees a mass of entries expire and need updating.
  • From the application-architecture perspective, we can reduce the impact of such a disaster through rate limiting, circuit breaking, and similar means, and avoid it through multi-level caching.
    Rate-limiting approach:
    A user sends a request; system A receives it and first checks the local ehcache, then checks Redis on a miss. If neither ehcache nor Redis has the value, it queries the database and writes the result into both ehcache and Redis.
    A rate-limiting component caps the number of requests per second. What about the requests over the cap? Degrade them: return a default value, a hint message, or an empty value.
    Benefits:
    The database never dies, because the rate limiter guarantees only a bounded number of requests per second reach it.
    As long as the database stays alive, some fraction of requests — say 2 out of 5 — can still be handled. That means the system is not down: a user may need a few extra refreshes before the page loads, but will eventually get it.
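The first countermeasure, staggered expiration, boils down to adding a random jitter to each TTL. A minimal sketch (the base TTL and jitter window below are illustrative, not recommended values):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of staggered cache expiry: each entry gets its base TTL plus a
// random offset, so keys written together do not all expire together.
public class TtlJitter {
    // Returns baseTtlSeconds plus a random offset in [0, jitterSeconds).
    public static long jitteredTtl(long baseTtlSeconds, long jitterSeconds) {
        return baseTtlSeconds + ThreadLocalRandom.current().nextLong(jitterSeconds);
    }
}
```

The returned value would then be passed as the TTL when writing the cache entry (e.g. as the EXPIRE/SETEX argument).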

(2) Cache penetration:
In high-concurrency scenarios, if a heavily accessed key is never hit in the cache, requests fall through to the backend database for fault-tolerance reasons. When the data for that key does not exist at all, the database ends up carrying a large number of unnecessary lookups under heavy concurrency, causing enormous pressure. This is also easy for attackers to exploit.
Solution:
Store all keys whose data may be empty in a unified structure, and intercept requests against it before they reach the backend, so that penetrating requests never hit the database.

A Bloom filter is a set-like data structure whose main operation is membership testing (determining whether a particular element is contained in a set of elements). Its advantages are space efficiency and query time far beyond general algorithms; its disadvantage is a certain false-positive rate. So a Bloom filter is unsuitable for "zero-error" applications and fits only applications that can tolerate a low error rate.
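A minimal Bloom-filter sketch using double hashing (the bit-array size and hash count are illustrative assumptions, not tuned parameters):

```java
import java.util.BitSet;

// Bloom filter sketch: k bit positions are derived from two hashes of the
// key. False positives are possible; false negatives are not.
public class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // i-th position via double hashing: h1 + i * h2 (h2 forced odd).
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | 1;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(index(key, i));
    }

    // May return true for a key never added (false positive), but never
    // returns false for a key that was added.
    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(key, i))) return false;
        return true;
    }
}
```

Before querying the cache or database, a request for a key that `mightContain` rejects can be turned away immediately, which is the interception step described above.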
(3) Cache breakdown:
A very hot key, accessed very frequently, suddenly expires at the moment of concentrated high-concurrency access; the flood of requests punches through the cache and goes straight to the database, like drilling a hole in a barrier.
Solution:
Hot data can be set to never expire; or implement a mutex with Redis or ZooKeeper, so that the first request builds the cache and then releases the lock while the other requests wait, after which they read the data through the key.
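The mutex idea can be sketched in-process with double-checked locking. This is an analogy under stated assumptions: the `Supplier` stands in for the database query, and with Redis itself the lock would typically be taken with `SET key value NX` plus an expiry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Breakdown protection sketch: on a cache miss, only one caller rebuilds
// the value while the others wait on the lock and then reuse it.
public class SingleFlightCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Object lock = new Object();

    public String get(String key, Supplier<String> loadFromDb) {
        String v = cache.get(key);
        if (v != null) return v;           // fast path: cache hit
        synchronized (lock) {              // mutex: one rebuilder at a time
            v = cache.get(key);            // double-check after acquiring
            if (v == null) {
                v = loadFromDb.get();      // only one DB hit per miss
                cache.put(key, v);
            }
            return v;
        }
    }
}
```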
Reference

12. How does Redis resolve key conflicts?

Separate chaining (linked lists within each hash bucket).

13. Does Redis have thread-safety issues?

Of course. Redis only guarantees the atomicity of a single command — it either succeeds or fails as a whole — but cannot guarantee atomicity across multiple commands. Even with transactions, Redis only guarantees ordering and isolation of the commands: if one command in a transaction fails, the remaining commands still execute.
Solutions:
(1) Redis also has a CAS (check-and-set) mechanism: use the WATCH command as an optimistic spin lock.
(2) Use locks.
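An in-process analogy of the WATCH-based spin lock, using `compareAndSet` in place of WATCH/MULTI/EXEC (a sketch of the optimistic-retry idea, not Redis client code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic-retry sketch: read the value, compute the new one, and commit
// only if nobody changed it in between; otherwise spin and retry. This
// mirrors what WATCH/MULTI/EXEC does on the Redis side.
public class OptimisticIncrement {
    public static int incrementWithCas(AtomicInteger counter) {
        while (true) {
            int seen = counter.get();       // WATCH: remember what we read
            int next = seen + 1;            // compute inside the "transaction"
            if (counter.compareAndSet(seen, next)) return next;  // EXEC ok
            // EXEC aborted (value changed under us) -> retry the loop
        }
    }
}
```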

14. Consistent hashing in Redis clusters

Reference
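For contrast with hash slots, a minimal consistent-hash ring with virtual nodes (the FNV-1a hash and the node names are illustrative assumptions; Redis Cluster itself uses hash slots rather than such a ring):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Consistent-hash ring sketch: each node is placed at several virtual
// positions; a key is owned by the first node clockwise from its hash.
public class ConsistentHashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) { this.virtualNodes = virtualNodes; }

    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++)
            ring.put(hash(node + "#" + i), node);
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++)
            ring.remove(hash(node + "#" + i));
    }

    // Walk clockwise from the key's position to the first node; wrap around
    // to the ring's first entry if nothing lies ahead.
    public String nodeFor(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // FNV-1a, 32-bit, kept non-negative so TreeMap ordering is simple.
    private static int hash(String s) {
        int h = 0x811C9DC5;
        for (int i = 0; i < s.length(); i++) { h ^= s.charAt(i); h *= 0x01000193; }
        return h & 0x7FFFFFFF;
    }
}
```

Removing one node only remaps the keys that node owned; all other keys keep their owner, which is the property that motivates consistent hashing.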

15. How to keep data consistent when Redis is used as a cache

When Redis is used as a cache, an update must modify both the database and Redis; if one of the two updates fails while the other succeeds, the data can become inconsistent.
Strong-consistency solutions:
(1) Put the database update under transaction control and, after the data is updated successfully, delete the cache entry.
Simple, but after the cache entry is deleted, multiple concurrent queries may all find the cache empty and fall on the database, causing an instantaneous spike in database pressure.
(2) Put the database update under transaction control and, after the data is updated successfully, synchronously update the cache.
An improvement over deletion, but it has drawbacks: it needs an extra query before the write, and in some scenarios it cannot be used — for example, when one data change affects multiple cache entries, they cannot all be actively rewritten, so the caches can only be invalidated.
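The delete-after-update pattern can be sketched with two maps standing in for the database and Redis (an illustration of the read/update flow under that assumption, not production code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: update the database first, then delete the cache
// entry; the next read misses the cache and repopulates it from the DB.
public class CacheAside {
    final Map<String, String> db = new ConcurrentHashMap<>();
    final Map<String, String> cache = new ConcurrentHashMap<>();

    public void update(String key, String value) {
        db.put(key, value);   // 1. commit the database update
        cache.remove(key);    // 2. invalidate the cache on success
    }

    public String read(String key) {
        String v = cache.get(key);
        if (v == null) {      // cache miss: reload from DB and repopulate
            v = db.get(key);
            if (v != null) cache.put(key, v);
        }
        return v;
    }
}
```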

Eventual-consistency solutions:
(1) Asynchronous refresh via MQ, plus a timed refresh.
Use MQ messages to refresh the cache asynchronously, with a compensation mechanism in case an update fails.
Store all objects that need updating in a task list, and have a scheduled task scan the list and update them asynchronously.
Neither mechanism guarantees that the cache is always consistent with the DB, but both guarantee eventual consistency.
(2) Automatic expiration.
Set cache expiration times sensibly per business scenario: the higher the consistency requirement, the shorter the natural expiration time should be.



Origin blog.csdn.net/weixin_36142042/article/details/104909966