"Hanging the Interviewer" series-Redis common interview questions

The more you know, the more you realize you don't know.
This series is open source on GitHub at https://github.com/AobingJava/JavaFamily — it collects interview knowledge points, and Stars and improvements are welcome.

Preface

Redis is used everywhere as a storage layer in Internet systems, and almost every back-end interviewer will grill candidates from every angle on how Redis is used and how it works under the hood.

As an "interview conqueror" who once walked away with an offer from every Internet company I faced, I have defeated countless competitors — and every time I watched those lonely figures leave in disappointment, I felt a little guilty (please allow me some exaggerated rhetoric).

So one lonely night I learned from the pain and decided to start writing the "Hanging the Interviewer" series, hoping it helps readers crush their future interviews, counterattack the interviewer from every angle, and leave both the interviewer and the other candidates dumbfounded while you frantically harvest offers from the big companies!

Chit-chat

The last issue was finished while staying up all night during the Double Eleven rush, so honestly its quality was not up to my usual standard. Now that I've caught up on sleep I'm working overtime to make it up to everyone with some solid material. (Staying up late makes it far too easy to catch a cold — so please don't just freeload this time!)

By the way, I drew a mind map of what I plan to write. I can't embed the full-size picture in every future article, so I've put it on my GitHub; if you're interested, feel free to improve it and give it a Star.

I'm including it in this article for everyone to take a first look.
[Mind map: Redis common interview questions]

Look back

In the previous issues of this series we covered a lot of Redis knowledge. Those who haven't read them can review them here:

  • "Hanging the Interviewer" Series-Redis Basics
  • "Hanging the Interviewer" series-cache avalanche, breakdown, penetration
  • "Hanging the Interviewer" series-Redis sentinel, persistence, master-slave, hand tearing LRU
  • "Hanging the Interviewer" series-Redis final chapter-Winter is coming, FPX-the new king ascended the throne

Cache knowledge points


What are the types of caches?

Caching is an effective means to improve the performance of hot data access in high-concurrency scenarios, and it is often used when developing projects.

Caches fall into three types: local cache, distributed cache, and multi-level cache.

Local cache:

Local caching means caching in the process's own memory — for example in our JVM heap — and can be implemented with an LRUMap or a tool such as Ehcache.

A local cache is pure memory access with no remote-call overhead, so it has the best performance; but it is limited by single-machine capacity — it is generally small and cannot be scaled out.

Distributed cache:

A distributed cache solves exactly this problem.

Distributed caches generally scale out well and can handle much larger data volumes. The trade-off is that every access is a remote request, so performance is worse than a local cache.

Multi-level cache:

To balance the two, real businesses generally use a multi-level cache: the local cache holds only the most frequently accessed hotspot data, and the rest of the hot data lives in the distributed cache.

This is also the most commonly used cache solution among the current first-tier manufacturers. A single cache solution is often difficult to support many high-concurrency scenarios.

Eviction strategies

Regardless of whether it is a local cache or a distributed cache, memory is used to store the data for performance. Because memory is limited and costly, when the stored data exceeds the cache capacity, some cached data must be evicted.

Common eviction strategies include FIFO (evict the oldest data), LRU (evict the least recently used data), and LFU (evict the least frequently used data). Redis's concrete eviction policies are:

  • noeviction: return an error when the memory limit is reached and the client tries to execute a command that would use more memory (most write commands; DEL and a few others are exceptions).

  • allkeys-lru: try to evict the least recently used (LRU) keys so that newly added data has room to be stored.

  • volatile-lru: try to evict the least recently used (LRU) keys, but only among keys that have an expire set, so that newly added data has room to be stored.

  • allkeys-random: evict random keys to make room for newly added data.

  • volatile-random: evict random keys to make room for newly added data, but only among keys that have an expire set.

  • volatile-ttl: evict only keys that have an expire set, preferring keys with a shorter remaining time to live (TTL), so that newly added data has room to be stored.

If no key satisfies the precondition for eviction, the volatile-lru, volatile-random, and volatile-ttl policies behave almost the same as noeviction.

In fact, the LRU algorithm is also implemented in the familiar LinkedHashMap:
[Figure: LRU implementation based on LinkedHashMap]

When the capacity exceeds 100, the LRU policy kicks in: the least recently used TimeoutInfoHolder object is evicted.

In a real interview you may well be asked to hand-write the LRU algorithm. Don't attempt the original Redis version — it's far too long, you'd never finish. Either build on LinkedHashMap as above, or pick a suitable data structure and implement a Java version of LRU, like the one below; the important thing is to know the principle.
[Figure: hand-rolled LRU implementation]
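Since interviewers love this one, here is a minimal Java sketch of the LinkedHashMap approach described above (a hedged illustration — the class name and capacity are mine, not the exact code from the original figure):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on LinkedHashMap: accessOrder=true moves every
// accessed entry to the tail, and removeEldestEntry evicts the head
// (the least recently used entry) once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With capacity 2, putting a and b, touching a, then putting c evicts b — exactly the behavior an interviewer wants you to reproduce in a few lines.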

Memcache

Note that Memcache will be referred to as MC later.

Let's take a look at the characteristics of MC:

  • MC processes requests with multi-threaded asynchronous IO, which makes good use of multi-core CPUs and gives excellent performance;
  • MC's feature set is simple, and it stores data in memory;
  • I won't go into MC's memory structure and slab calcification here; check the official site if you want the details;
  • MC can set an expiration time on cached data, and expired data is cleared;
  • Invalidation is lazy: whether a key has expired is only checked when the data is accessed again;
  • When capacity is full, cached data is evicted: besides clearing expired keys, data is also evicted according to the LRU policy.

In addition, MC has some limitations that are fatal in today's Internet scenarios, and they are an important reason why people choose Redis or MongoDB instead:

  • The key cannot exceed 250 bytes;
  • The value cannot exceed 1M bytes;
  • The maximum expiration time of the key is 30 days;
  • Only supports KV structure, does not provide persistence and master-slave synchronization functions.

Redis

Let me briefly talk about the characteristics of Redis, which is convenient for comparison with MC.

  • Unlike MC, Redis handles requests in a single-threaded model, for two reasons: first, its non-blocking asynchronous event-handling mechanism; second, cached data is operated on in memory, so the IO time per request is short, and a single thread avoids the cost of thread context switching.
  • Redis supports persistence, so Redis can be used not only as a cache, but also as a NoSQL database.
  • Compared with MC, Redis has a very big advantage, that is, in addition to KV, it also supports multiple data formats, such as list, set, sorted set, hash, etc.
  • Redis provides a master-slave synchronization mechanism and Cluster deployment capabilities to provide highly available services.

Detailed Redis

The knowledge point structure of Redis is shown in the figure below.

[Figure: Redis knowledge point map]

Features

Let's see what features Redis provides!

We first look at the basic types:

String:

The String type is the most commonly used type in Redis; internally it is stored as an SDS (Simple Dynamic String). SDS is similar to Java's ArrayList: it pre-allocates redundant space to reduce frequent memory allocation.

This is the simplest type, that is, ordinary set and get, which do simple KV caching.

However, in real development many people cram more complex structures into Strings: for example, some like to serialize objects or Lists to a JSON string for storage and deserialize them when reading.

I won't debate whether that's right or wrong here, but I still hope everyone uses the most suitable data structure in the most suitable scenario. Even when the object can't be mapped perfectly, you can still pick the most suitable type — so that whoever takes over your code sees something standardized and thinks "hey, this guy knows his stuff", instead of "I see you use String for everything — rubbish!"

Well, these are all digressions. I still hope everyone keeps this in mind: habits become second nature, and small habits make you.

The actual application scenarios of String are more extensive:

  • Cache function: String is the most commonly used data type — not just in Redis, it is the most basic type in every language. Using Redis as the cache layer in front of another database as the storage layer lets Redis absorb the high concurrency, greatly speeding up reads and writes and reducing the pressure on the back-end database.

  • Counter: Many systems will use Redis as the system's real-time counter, which can quickly implement counting and query functions. And the final data results can be stored in a database or other storage media for permanent storage at a specific time.

  • Shared user session: normally, when a user refreshes a page, the session might have to be rebuilt by logging in again or by reading cookie-backed page cache; instead, user sessions can be centrally managed in Redis. In this model you only need to guarantee Redis's high availability, and every session update and lookup completes quickly, greatly improving efficiency.

Hash:

This is a Map-like structure. It suits structured data: for example, an object (provided it doesn't nest other objects) can be cached in Redis as a Hash, and then each read or write of the cache can manipulate a single field of that Hash.

But this scenario is actually somewhat limited, because many objects today are fairly complex — your product object, for instance, may contain many attributes, some of which are themselves objects. I don't use this in that many scenarios myself.

List:

List is an ordered list, this can still play a lot of tricks.

For example, you can store some list-type data structures through List, such as fan lists and article comment lists.

For example, you can read the elements of a closed interval with the lrange command and build paginated queries on top of List. This is a great feature: simple, high-performance paging based on Redis can support Weibo-style drop-down, continuous (infinite-scroll) pagination, fetching one page at a time with high performance.

For example, you can set up a simple message queue, go in from the top of the List, and get it out of the bottom of the List.

List itself is a commonly used data structure in the development process, let alone hot data.

  • Message queue: Redis's linked-list structure makes it easy to implement a blocking queue with left-in, right-out commands. For example, producers insert data on the left with LPUSH, and multiple consumers block on the tail of the list with BRPOP, competing to consume the data.

  • Displaying article lists or paginating data.

For example, the article lists on a typical blog site: the user count keeps growing, each user has their own article list, and once there are many articles they must be shown in pages. Redis lists fit well here — they are ordered and support range reads, which solves the paginated-query requirement perfectly and greatly improves query efficiency.
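The left-in, right-out queue pattern above can be illustrated with an in-process Java analogy (a sketch using a blocking deque, not a Redis client; the lpush/brpop method names merely mimic the commands):

```java
import java.util.concurrent.LinkedBlockingDeque;

// In-process analogy of a Redis list used as a message queue:
// the producer pushes on the left (like LPUSH), the consumer pops
// from the right (like BRPOP), giving FIFO delivery.
class ListQueueDemo {
    static final LinkedBlockingDeque<String> queue = new LinkedBlockingDeque<>();

    static void lpush(String msg) {
        queue.addFirst(msg); // like: LPUSH mylist msg
    }

    static String brpop() {
        try {
            return queue.takeLast(); // like: BRPOP mylist 0 (blocks when empty)
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

Messages come out in the order they went in, which is exactly why LPUSH + BRPOP gives you a simple FIFO queue.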

Set:

Set is an unordered collection, which will automatically remove duplicates.

Data that needs deduplication can be thrown straight into a Set, and it deduplicates automatically. If you need to deduplicate some data quickly and globally, you could of course use a HashSet in JVM memory — but what if your system is deployed on multiple machines? Then you need Redis-based global Set deduplication.

Sets also support intersection, union, and difference operations. With intersection, for example, we can intersect two people's friend lists to find their mutual friends. Right?

Anyway, there are plenty of such scenarios, because the comparisons are fast and the operations simple.

Sorted Set:

Sorted set is a sorted Set, which is deduplicated but can be sorted. When it is written, a score is given and it is automatically sorted according to the score.

Sorted Set usage scenarios are similar to Set, but a plain Set is unordered, whereas a Sorted Set orders its members by score and keeps them sorted as they are inserted. So whenever you need an ordered, duplicate-free collection, Sorted Set is the data structure to reach for.

  • Ranking: An orderly collection of classic usage scenarios. For example, a video website needs to make a ranking list for videos uploaded by users, and the list maintenance may be in many aspects: according to time, according to the number of views, according to the number of likes received, etc.

  • Weighted queue: for example, give ordinary messages a score of 1 and important messages a score of 2; worker threads can then fetch tasks in descending score order, handling important tasks first.

The Weibo hot-search list works the same way: the heat value at the back is the score, and the topic name in front is the member.
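A hedged in-memory sketch of that leaderboard idea — emulating ZADD plus a top-N read with a score-ordered TreeMap (the names are illustrative, and unlike a real Sorted Set this toy assumes unique scores and doesn't handle per-member score updates):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Tiny leaderboard analogy for a Sorted Set: members are kept
// sorted by score, highest score first when reading the top N.
class Leaderboard {
    // score -> member, in descending score order.
    private final TreeMap<Double, String> byScore =
            new TreeMap<>((a, b) -> Double.compare(b, a));

    void zadd(double score, String member) {
        byScore.put(score, member); // simplification: scores assumed unique
    }

    List<String> topN(int n) {
        List<String> top = new ArrayList<>();
        for (String m : byScore.values()) {
            if (top.size() == n) break;
            top.add(m);
        }
        return top;
    }
}
```

A real Redis ZADD/ZREVRANGE does the same thing server-side, shared across all your application machines.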

Advanced usage:

Bitmap:

Bitmap supports storing information by bit, which can be used to implement BloomFilter;

HyperLogLog:

Provides approximate deduplicated counting, well suited to deduplication statistics over large-scale data — for example counting UV (unique visitors);

Geospatial:

It can store geographic coordinates and compute the distance between locations or find locations within a radius. Ever thought of implementing "people nearby" with Redis? Or computing an optimal route on a map?

These three can each count as a data structure in their own right. I wonder how many friends remember this: back in the Redis basics article where the dream started, I said that if you only know the five basic types you get at most 60 points, but if you can talk about the advanced usage, then I think you've really got something.

pub/sub:

The function is a subscription publishing function, which can be used as a simple message queue.

Pipeline:

A group of instructions can be executed in batches, and all results can be returned at one time, which can reduce frequent request responses.

Lua:

Redis supports submitting Lua scripts to perform a series of functions.

When I was at my former e-commerce company, I often used Lua scripts in flash-sale (seckill) scenarios, mainly taking advantage of their atomicity.

By the way, do you want to see a flash-sale system design? I remember being asked about it in practically every interview. If you want it, like and comment to let me know.

Transactions:

The last feature is transactions — but Redis does not provide strict transactions. Redis only guarantees that the commands in a transaction are executed serially and that all of them get executed; if a command fails mid-way, it does not roll back but simply keeps executing the rest.

Persistence

Redis provides two persistence methods: RDB and AOF. RDB writes the in-memory data set to disk as a snapshot; the actual work is done by a forked child process, and the file is stored with binary compression. AOF records every write and delete operation Redis processes, in the form of a text log.

RDB saves the entire Redis data set in a single file, which is well suited to disaster recovery. The drawback is that if Redis goes down before the next snapshot, the data written since the last snapshot is lost; in addition, taking a snapshot can cause a brief service pause.

AOF appends write operations to its log file and has a flexible sync policy: per-second sync, per-modification sync, or fully asynchronous. The drawbacks are that for the same data set the AOF file is larger than the RDB file, and in terms of runtime efficiency AOF tends to be slower than RDB.

High availability

Now look at Redis's high availability. Redis supports master-slave synchronization, provides a Cluster deployment mode, and uses Sentinel to monitor the state of the Redis master server. When the master goes down, a new master is chosen from the slave nodes according to a certain strategy, and the other slaves are repointed at the new master.

There are three simple strategies for selecting the master:

  • The lower a slave's priority value is set, the higher its priority for promotion;
  • With the same priority, the slave that has replicated more data has higher priority;
  • All else being equal, the slave with the smaller runid is more likely to be chosen.

In a Redis cluster, Sentinel itself is also deployed with multiple instances, and the sentinels use the Raft protocol to guarantee their own high availability.

Redis Cluster uses a sharding mechanism: internally it is divided into 16384 slots, distributed across all the master nodes, with each master responsible for part of the slots. When a key is operated on, CRC16 of the key is taken to work out which slot — and therefore which master — handles it. Data redundancy is provided by the slave nodes.
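The slot routing just described can be sketched directly. Redis Cluster hashes the key with the CRC16 (XMODEM variant) and takes the result modulo 16384; the code below is a plain re-implementation for illustration, not the Redis source (real clients also honor "{hash tag}" substrings in keys, omitted here):

```java
// Sketch of Redis Cluster's slot routing: slot = CRC16(key) mod 16384.
// The CRC16 used is the XMODEM variant (polynomial 0x1021, initial value 0).
class ClusterSlot {
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF; // keep it a 16-bit value
            }
        }
        return crc;
    }

    static int slot(String key) {
        return crc16(key.getBytes()) % 16384;
    }
}
```

Every client computes the same slot for the same key, so any node (or the client itself) can route the request to the master that owns that slot.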

Sentinel

Sentinel needs at least three instances to be robust. Sentinel plus master-slave replication cannot guarantee zero data loss, but it can guarantee the high availability of the cluster.

Why exactly three instances? Let's first see what goes wrong with only two sentinels.
[Figure: two-sentinel deployment]

Say the master goes down. As long as one of the two sentinels, S1 and S2, thinks it is down, a switchover is triggered, and one sentinel is elected to perform the failover — but performing the failover requires a majority of the sentinels to be up and running.

So what's the problem? If only M1 goes down and S1 is still alive, everything is fine. But what if the whole machine goes down? That leaves the lone S2, and without a majority of sentinels no failover can be authorized — even though R1 on the other machine is still there, the failover is never executed.

The classic sentinel cluster looks like this:
[Figure: classic three-sentinel deployment]

If the machine hosting M1 goes down, there are still two sentinels left; if both of them agree that the master is down, they can elect one of themselves to perform the failover.

Being the helpful guy I am, let me summarize the main functions of the Sentinel component:

  • Cluster monitoring: responsible for monitoring whether Redis master and slave processes are working properly.
  • Message notification: If a Redis instance fails, the sentry is responsible for sending a message as an alarm notification to the administrator.
  • Failover: If the master node fails, it will automatically be transferred to the slave node.
  • Configuration Center: If a failover occurs, notify the client of the new master address.

Master-slave

This topic ties in closely with the RDB and AOF persistence I covered earlier.

First, why use a master-slave architecture at all? I mentioned before that a single machine has a QPS ceiling, and Redis in particular must support high read concurrency. If a single machine has to handle all reads and writes, how is it supposed to cope? But if that master machine only handles writes and synchronizes the data to other slave machines, which then serve the bulk of the read requests, things get much better — and horizontal scaling becomes easy when you need to expand.
[Figure: master handles writes, slaves handle reads]

When a slave starts, it sends a PSYNC command to the master. If this slave is connecting to the master for the first time, a full resynchronization is triggered: the master forks a child process to generate an RDB snapshot, while buffering all new write commands in memory. Once the RDB file is generated, the master sends it to the slave; the first thing the slave does is write it to local disk and load it into memory. Finally, the master sends the write commands buffered in memory over to the slave.

After I published this, a netizen on CSDN, Jian_Shen_Zer, asked:

"During master-slave synchronization, a newly joined slave gets the data via RDB. What about the data after that? How are new writes on the master synchronized to the slaves?"

Ao Bing: Silly — incrementally, AOF-style, like MySQL's binlog: just ship the incremental log of writes to the slave servers.

Key expiration mechanism

A Redis key can be given an expiration time. Once a key expires, Redis uses a combination of passive and active invalidation: like MC, it deletes lazily when the key is accessed, and it also actively deletes expired keys on a periodic schedule.
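The passive half of that mechanism can be sketched in a few lines of Java — a toy model in which the expiry deadline is only checked when the key is read (real Redis additionally runs the active periodic sweep; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of lazy key expiration: each entry stores an absolute
// deadline, and a read that finds an expired entry deletes it.
class LazyExpireCache {
    private static final class Entry {
        final String value;
        final long expireAtMillis;
        Entry(String value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }
    }

    private final Map<String, Entry> map = new HashMap<>();

    void setex(String key, String value, long ttlMillis) {
        map.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    String get(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expireAtMillis) {
            map.remove(key); // passive delete, triggered by this access
            return null;
        }
        return e.value;
    }
}
```

Without the periodic active sweep, a key that is never read again would linger in memory forever — which is exactly why Redis combines both mechanisms.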

Cache FAQ

Cache update method

This is a question that should be considered when deciding to use the cache.

Cached data needs to be updated when the data source changes; the source may be a DB or a remote service. One option is active update: when the source is a DB, you can update the cache directly after updating the DB.

When the data source is not a DB but other remote services, it may not be able to actively perceive data changes in time. In this case, it is generally chosen to set an expiration date for cached data, that is, the maximum tolerance time for data inconsistency.

In this scenario, you can choose to invalidate the update. When the key does not exist or becomes invalid, the data source is first requested to obtain the latest data, and then cached again, and the expiration date is updated.

But there is a problem: if the dependent remote service is abnormal at update time, the data becomes unavailable. The improved approach is asynchronous update — on failure, don't clear the old data; keep serving it while an asynchronous thread carries out the update task, avoiding the window of unavailability at the moment of expiry. There is also a purely asynchronous approach that refreshes the data in batches on a schedule. In practice, choose the update method according to the business scenario.

Inconsistent data

The second problem is data inconsistency. It's fair to say that whenever you use a cache, you must think about how to face this problem. Inconsistency is generally caused by an active update failing — for example, after the DB is updated, the Redis update request times out because of the network; or an asynchronous update fails.

The remedies: if the business is not particularly latency-sensitive, add retries; if it is latency-sensitive, handle failed updates with asynchronous compensation tasks — or accept that short-lived inconsistency won't hurt the business, as long as the next update succeeds and eventual consistency is guaranteed.

Cache penetration

Cache penetration is often caused by malicious outsiders. For example, user information is cached, but an attacker keeps hitting the interface with user IDs that don't exist: the query misses the cache, penetrates through to the DB, and misses there too. At that point, large volumes of requests punch straight through the cache and land on the DB.

The solution is as follows.

  1. For non-existent users, store an empty object in the cache as a marker, so the same ID doesn't hit the DB again. However, this doesn't always solve the problem well and may fill the cache with a large amount of useless data.
  2. Use a BloomFilter. A BloomFilter's defining property is existence testing: if the BloomFilter says a key does not exist, the data definitely does not exist; if it says the key exists, the data may still not exist. That makes it a very good fit for this problem.
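To make that "definitely absent / possibly present" property concrete, here is a hedged, minimal Bloom filter over a plain bit array (the hash mixing, bit-array size, and probe count are all illustrative; production code would use properly tuned hash functions, or Redis bitmaps for a shared filter):

```java
import java.util.BitSet;

// Minimal Bloom filter: k hash probes per key into one bit array.
// mightContain == false => the key was definitely never added;
// mightContain == true  => probably added (false positives possible).
class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    SimpleBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    private int probe(String key, int i) {
        int h = key.hashCode() * 31 + i * 0x9E3779B9; // illustrative mixing
        return Math.floorMod(h, size);
    }

    void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(probe(key, i));
    }

    boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(probe(key, i))) return false;
        }
        return true;
    }
}
```

Placed in front of the cache, a "definitely absent" answer lets you reject a malicious ID without touching either the cache or the DB.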

Cache breakdown

Cache breakdown means that when a certain hot data fails, a large number of requests for this data will penetrate the data source.

There are the following solutions to solve this problem.

  1. Use a mutex lock around the update, so that within one process the same piece of data is never requested from the DB concurrently, reducing DB pressure.
  2. Use random backoff: on a miss, sleep a random short interval, query again, and perform the update if the data is still missing.
  3. For the problem of many hot keys expiring at once, add a small random number to the fixed expiry time when caching, so a large batch of hot keys won't become invalid at the same moment.
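Point 1 — the per-key mutex — can be sketched with ConcurrentHashMap.computeIfAbsent, which guarantees that concurrent misses on the same key trigger the loader only once per process (the loader and names below are illustrative, not a specific library API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Cache-breakdown guard: within one process, concurrent misses on the
// same hot key collapse into a single DB load via computeIfAbsent,
// which locks per key internally.
class GuardedCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger dbCalls = new AtomicInteger(); // counts real loads

    String get(String key, Function<String, String> dbLoader) {
        return cache.computeIfAbsent(key, k -> {
            dbCalls.incrementAndGet(); // simulate one DB round-trip
            return dbLoader.apply(k);
        });
    }
}
```

In a multi-machine deployment you would extend the same idea with a distributed lock (for example a Redis SETNX-style lock), so only one machine rebuilds the hot key.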

Cache avalanche

A cache avalanche occurs when the cache as a whole goes down, and all requests then hit the DB directly.

Solution:

  1. Use a fail-fast circuit-breaking strategy to reduce the instantaneous pressure on the DB;
  2. Use master-slave mode and cluster mode to try to ensure the high availability of cache services.

In actual scenarios, these two methods will be used in combination.

Old friends know why I didn't spend a lot of space on these points: I covered them at length in the earlier articles — which are too good not to like — so I won't duplicate them here.

  • "Hanging the Interviewer" Series-Redis Basics
  • "Hanging the Interviewer" series-cache avalanche, breakdown, penetration
  • "Hanging the Interviewer" series-Redis sentinel, persistence, master-slave, hand tearing LRU
  • "Hanging the Interviewer" series-Redis final chapter-Winter is coming, FPX-the new king ascended the throne

Exam points and bonus points

Take a note!

Exam points

When I ask about caching in an interview, I am mainly probing your understanding of caching characteristics and your grasp of the features and usage of MC and Redis.

  • To know the usage scenarios of the cache, the usage of different types of caches, for example:

1. Cache DB hotspot data to reduce DB pressure; cache dependent services to improve concurrent performance;

2. Pure KV caching scenarios can use MC, while caching special data formats such as list and set calls for Redis;

3. If you need to cache a list of videos a user recently played, you can save it with a Redis list; when leaderboard data needs to be computed, save it with a Redis zset structure.

  • To understand the common commands of MC and Redis, such as atomic increase and decrease, commands for operating on different data structures, etc.

Understand the storage structure of MC and Redis in memory, which will be very helpful for evaluating the used capacity.

  • Understand the data invalidation methods and eviction strategies of MC and Redis, such as actively triggered periodic deletion and passively triggered lazy deletion

  • It is necessary to understand the principles of Redis persistence, master-slave synchronization, and Cluster deployment, such as the implementation and differences between RDB and AOF.

  • You need to know the similarities and differences between cache penetration, breakdown, and avalanche, and solutions.

  • Regardless of whether you have e-commerce experience, I think you should know the concrete implementation and details of a flash sale (seckill).

  • ……..

Welcome to GitHub to add

Bonus points

If you want to get better performance in the interview, you should also understand the following bonus points.

  • It is to introduce the use of cache in combination with actual application scenarios. For example, when calling the back-end service interface to obtain information, you can use a local + remote multi-level cache; for a dynamic leaderboard scenario, you can consider implementing it through Redis's Sorted set, and so on.

  • It is best if you have experience in the design and use of distributed caches, such as what scenarios have you used Redis in the project, what data structures have been used, and what types of problems have been solved; when using MC, adjust the McSlab allocation parameters according to the estimated value, etc.

  • It is best to understand the problems that can arise when using a cache. For example: Redis processes requests on a single thread, so avoid time-consuming single-request tasks that would block everything else; avoid deploying Redis on the same machine as other CPU-intensive processes; disable swap so that Redis's cached data is never swapped to disk, which would hurt performance. Another example is the MC slab calcification mentioned earlier.

  • To understand the typical application scenarios of Redis, for example, use Redis to implement distributed locks; use Bitmap to implement BloomFilter, use HyperLogLog to perform UV statistics, and so on.

  • Know the new features in Redis 4.0 and 5.0, such as Stream, a persistent message queue that supports multicast; and the Module system for extending Redis with custom functionality, and so on.

  • ……..

Again, welcome to GitHub to add.

Summary

This wraps up my Redis series — it should be the last Redis-related article. Actually, many friends who read all four installments went from knowing a little to being completely dumbfounded, haha, just kidding.

I think my approach has been fine, and most friends found the series quite understandable. After this article I won't write more Redis pieces for a while (it depends on how popular they are with you all). If you have questions, reach me on WeChat. So — what should the next series be?

Don't worry, everyone — before the next series I will post a fun article: the piece that won my company's code-creativity competition. I think it has something to it and I can't help sharing it. I'll also open voting for the next topic in that issue, haha.

I saw many friends commenting on what else they want to read, so I collected the requests; I haven't replied to all of them this issue.

Nuggets

Yu Xin: I want to see computer basics — networks and operating systems (FPX is awesome)

Mr. Cherish: Talk about the interview questions Dubbo often gets — too many interviewers like to ask about Dubbo

Java architecture development notes: It's really good stuff. Next issue I'll cover Dubbo (with a focus on SPI), and after that MQ.

CSDN

Your Highness: After reading all the Redis articles, I hope you can publish SSM next

Blog garden

Cheng Ran: Dubbo Dubbo

Open Source China

linshi2019: This issue is obviously a rush work

Ao Bing: Let me reply to this one — thanks for spurring me on, I really do appreciate it. But honestly, I hope you'll understand: I stayed up three nights in a row over Double Eleven, and I was on duty until about 2 a.m. when I wrote that issue for you. Meals and working hours are fixed, so anything I write has to be squeezed out of sleep time — and output written like that is never going to match the quality of what I write on weekends.

In fact, friends who saw the first issue know how much I put into typesetting and copywriting. I keep polishing the pictures too — each one takes a long time, because I'm afraid plain black-and-white would look dull.

I really do put my heart into this, and I hope for your understanding as well as your support.

For some reason there are hardly any readers on Jianshu, SegmentFault, and MOOC notes — no idea why; if any of you old hands understand those platforms, please tell me.

I just want to say: whatever you want to read is almost certainly already in the mind map at the start and on my GitHub — no big deal, I'll get to it in later articles.

Now that Double Eleven is over, the ads on my server aren't earning anything anymore, so the only income from writing may be the ad slot under my WeChat official-account articles. You can tap it — WeChat pays me per click. It isn't much money, but it's a bit of support for me, haha! Freeloading is fine too, though.

An article earns me only about 3 yuan, and not many people follow my official account yet — anyway, I'm counting on you.

Just tap the ad below the article; tapping it is like tossing me a coin.

Thanks

Finally, thanks to Zhang Lei, a technical expert on Sina Weibo.

He joined Sina Weibo in 2013 and, as a core technologist, took part in several key projects such as Weibo servitization and its hybrid cloud. He is the technical lead of Motan, Weibo's open-source RPC framework, and is also responsible for the development and adoption of Weibo's Service Mesh solution; he focuses on high-availability architecture and service middleware.

The Motan framework he is responsible for carries trillions of requests every day and is the cornerstone of Weibo's servitization. Every breaking hot topic and every Spring Festival Gala traffic peak relies on the support and assurance of Motan. He has also been invited many times to share at the ArchSummit, WOT, and GIAC technology summits.

Thank him for his support and ideas for part of the copywriting of the article.

END

Alright, everyone, the above is the entire content of this article. The people who can see here are all talents.

Going forward, I will publish a few articles every week in the "Hanging the Interviewer" series and on common Internet tech stacks. If there's anything you want to know, leave me a message and I'll write it up as soon as I have time — let's improve together.

Thank you very much, all you talented people who read this far. If you think this article is well written — if you think "Ao Bing" has something to him — a like, a follow, a share, a comment: any of it helps me enormously!!!

Your support and recognition is the biggest motivation for my creation. See you in the next article!

Ao Bing | Text [Original] [Reprint, please contact me]

The "Hanging the Interviewer" series is updated continuously every week. Follow my official account JavaFamily to read it first (the account gets articles one or two days before the blog). My personal WeChat is there too — feel free to ping me directly, and we'll improve together.



Origin blog.51cto.com/14689292/2546073