Edge computing, from getting started to giving up

Edge computing is an underlying technology architecture in which data is collected, analyzed, and stored on-site, at the production facility or on the equipment that generates it, instead of being sent first to slower, centralized cloud systems. Keeping the data local saves time and helps keep operations running.

Edge computing vendors keep building solutions for this demand, which is driven by factors such as latency, bandwidth, privacy, and autonomy.

Further reading: Edge Computing Development Prospects

Further reading: Introduction to MEC (Multi-access Edge Computing) series

2. Technology 

2.1 Using logs in Java

slf4j: a logging facade, i.e. an abstraction layer over logging. It defines a specification that concrete logging frameworks implement; application code writes logs only through the facade.

Alibaba coding guideline: application code must not use the API of a logging implementation (Log4j, Logback) directly; it should depend on the SLF4J API instead. Using a facade-style logging API helps keep the logging of every class uniform and easy to maintain.

Actual use: in real project development, follow this guideline and combine SLF4J with a concrete logging framework (for example, Logback).
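
A minimal sketch of what this looks like in code, assuming slf4j-api plus a binding such as Logback on the classpath (the class name OrderService is just an example):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // Code depends only on the SLF4J facade; the binding on the classpath
    // (e.g. Logback) supplies the actual implementation.
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Parameterized messages avoid string concatenation when the level is disabled.
        log.info("placing order {}", orderId);
        try {
            // ... business logic ...
        } catch (Exception e) {
            log.error("failed to place order {}", orderId, e);
        }
    }
}
```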

Reference: This article teaches you how to use logs in project development (CSDN blog).

2.2 Redis

Redis is typically used as a cache in front of the database. Data that the application reads from MySQL is also written into Redis, and later reads go to Redis first; only on a cache miss does the application fall back to MySQL and write the result back into the cache.
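
A minimal cache-aside read sketch, assuming the Jedis client; queryUserFromMySql is a hypothetical database lookup:

```java
import redis.clients.jedis.Jedis;

public class UserCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getUser(long id) {
        String key = "user:" + id;
        String cached = jedis.get(key);          // 1. ask Redis first
        if (cached != null) {
            return cached;                       // cache hit
        }
        String fromDb = queryUserFromMySql(id);  // 2. miss: fall back to MySQL
        if (fromDb != null) {
            jedis.setex(key, 3600, fromDb);      // 3. write back with a TTL
        }
        return fromDb;
    }

    private String queryUserFromMySql(long id) {
        return null; // hypothetical DAO call, omitted here
    }
}
```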

2.2.1 Common Redis caching problems

[Cache expiration && cache eviction]: when Redis runs out of memory it removes keys according to the configured eviction policy, one of the following (a configuration sketch follows the list):

* noeviction: evict nothing; writes that would exceed the memory limit return an error

* allkeys-lru: use the LRU algorithm to evict the least recently used keys from all keys

* volatile-lru: use the LRU algorithm to evict the least recently used keys among those with an expiration time set

* allkeys-random: evict keys at random from all keys

* volatile-random: evict keys at random among those with an expiration time set

* volatile-ttl: evict the keys with the shortest remaining time to live among those with an expiration time set

* volatile-lfu: evict the least frequently used keys among those with an expiration time set

* allkeys-lfu: evict the least frequently used keys from all keys
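
The policy is normally set in redis.conf via the maxmemory and maxmemory-policy directives; as a minimal sketch, it can also be selected at runtime, assuming the Jedis client:

```java
import redis.clients.jedis.Jedis;

public class EvictionPolicyDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Equivalent to the maxmemory / maxmemory-policy settings in redis.conf.
            jedis.configSet("maxmemory", "256mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
        }
    }
}
```
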
[Cache penetration && Bloom filter]: cache penetration happens when requests keep asking for keys that exist neither in Redis nor in MySQL, so every such request falls through to the database. A Bloom filter holding all keys known to exist lets these requests be rejected up front, before they reach Redis or MySQL.
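
One way to implement this on the application side is an in-process Bloom filter, for example Guava's; this sketch assumes the set of existing keys is loaded from MySQL at startup:

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class PenetrationGuard {
    // Sized for an assumed one million existing keys with a 1% false-positive rate.
    private final BloomFilter<String> existingKeys = BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    public void register(String key) {
        existingKeys.put(key); // called for every key that really exists in MySQL
    }

    public boolean mayExist(String key) {
        // false means "definitely not present": reject before touching Redis or MySQL.
        return existingKeys.mightContain(key);
    }
}
```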

[Cache breakdown && cache avalanche]: cache breakdown is when a single hot key expires and is removed by Redis, and a burst of concurrent requests for that same data all miss the cache and hit MySQL at once. A cache avalanche is the same effect at scale: many keys expire at the same time (or Redis itself goes down), and a large share of traffic falls straight through to the database.
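
Two common mitigations, sketched below with the Jedis client (loadFromMySql is a hypothetical loader): add random jitter to TTLs so keys do not expire together, and let only one caller rebuild an expired hot key by taking a short-lived lock with SET NX EX:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;
import java.util.concurrent.ThreadLocalRandom;

public class HotKeyCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Avalanche mitigation: spread expirations out with random jitter.
    public void cacheWithJitter(String key, String value, long baseTtlSeconds) {
        long jitter = ThreadLocalRandom.current().nextLong(0, 300);
        jedis.setex(key, baseTtlSeconds + jitter, value);
    }

    // Breakdown mitigation: only the caller that wins the lock reloads the key.
    public String rebuildHotKey(String key) {
        String lockKey = "lock:" + key;
        String ok = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
        if ("OK".equals(ok)) {
            try {
                String fresh = loadFromMySql(key);
                cacheWithJitter(key, fresh, 3600);
                return fresh;
            } finally {
                jedis.del(lockKey);
            }
        }
        return null; // someone else is rebuilding; the caller decides whether to wait
    }

    private String loadFromMySql(String key) {
        return ""; // hypothetical database lookup, omitted here
    }
}
```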

2.2.2 Data consistency between database and cache

When data is updated, both the database and the cache have to be modified. These two operations are not atomic, so concurrent requests can leave them out of sync, and we need a way to keep the data consistent.

Two common approaches: a retry mechanism, or subscribing to the MySQL binlog and then operating on the cache.

Retry mechanism:

We can introduce a message queue: after the database has been updated, the key whose cache entry must be deleted (the second operation) is put on the queue, and a consumer performs the deletion.

If the application fails to delete the cache entry, it can read the message from the queue again and retry the deletion; this is the retry mechanism. If the deletion still fails after a certain number of retries, we need to report an error to the business layer.

If the deletion succeeds, the message must be removed from the queue to avoid repeating the operation; otherwise the retries continue.
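
A hedged sketch of the consumer side; the queue itself and reportToBusinessLayer are hypothetical, and only the retry-then-acknowledge logic mirrors the text:

```java
import redis.clients.jedis.Jedis;

public class CacheDeleteConsumer {
    private static final int MAX_RETRIES = 3;
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Called for each message taken from the queue after the database update.
    // Returning true tells the queue to remove the message (acknowledge);
    // returning false leaves it there so it will be delivered again.
    public boolean onMessage(String cacheKey) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                jedis.del(cacheKey);
                return true;
            } catch (Exception e) {
                // transient failure: fall through and retry
            }
        }
        reportToBusinessLayer(cacheKey); // give up after MAX_RETRIES and alert
        return false;
    }

    private void reportToBusinessLayer(String cacheKey) {
        // hypothetical alerting hook, omitted here
    }
}
```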
 

Subscribe to the MySQL binlog, then operate on the cache:

The first step of the "update the database first, then delete the cache" strategy is updating the database. When the update succeeds, a change record is written to the binlog.

So we can find out exactly which data changed by subscribing to the binlog, and then delete the corresponding cache entries. Alibaba's open-source Canal middleware is built on this idea.

Canal emulates the MySQL master-slave replication protocol: it presents itself to the MySQL master as a slave node and sends it a dump request. The master then starts pushing the binlog to Canal, which parses the binlog byte stream into structured data that downstream programs can subscribe to and consume.
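
A sketch of consuming the binlog and invalidating cache entries, loosely following the Canal Java client examples; class and method names come from the com.alibaba.otter.canal.client API and may differ between versions, and the entry parsing is left as a comment:

```java
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;
import java.net.InetSocketAddress;

public class BinlogCacheInvalidator {
    public void run() {
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        connector.connect();
        connector.subscribe("mydb\\.user");                 // tables we care about
        while (true) {
            Message message = connector.getWithoutAck(100); // a batch of binlog entries
            long batchId = message.getId();
            if (batchId != -1 && !message.getEntries().isEmpty()) {
                // Parse the entries to find the changed rows, then delete the
                // corresponding keys from Redis (e.g. jedis.del("user:" + id)).
            }
            connector.ack(batchId);                         // confirm the batch was handled
        }
    }
}
```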
 

Therefore, to make sure the second operation of the "update the database first, then delete the cache" strategy eventually succeeds, we can either retry the cache deletion through a message queue or subscribe to the MySQL binlog and delete the cache from there. What these approaches have in common is that they both operate on the cache asynchronously.

2.2.3 Redis Persistence

Redis supports two persistence mechanisms: RDB, which periodically writes a point-in-time snapshot of the dataset to disk, and AOF, which appends every write command to a log file that can be replayed on restart. The two can also be combined.
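
Persistence is normally configured in redis.conf (the save and appendonly directives); as a minimal sketch, the related admin commands can also be issued from a client such as Jedis:

```java
import redis.clients.jedis.Jedis;

public class PersistenceDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.configSet("appendonly", "yes"); // enable AOF at runtime
            jedis.bgsave();                       // write an RDB snapshot in the background
            jedis.bgrewriteaof();                 // compact the AOF file in the background
        }
    }
}
```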

2.2.4 Redis API

Reference: Detailed explanation of Redis (Ferao's blog, CSDN).
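
A few basic commands as seen from Java, assuming the Jedis client (key names are just examples):

```java
import redis.clients.jedis.Jedis;
import java.util.List;

public class RedisApiDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("greeting", "hello");           // string value
            jedis.expire("greeting", 60);             // set a TTL in seconds
            String value = jedis.get("greeting");

            jedis.hset("user:1", "name", "alice");    // hash field
            String name = jedis.hget("user:1", "name");

            jedis.lpush("queue", "task-1", "task-2"); // list used as a queue
            List<String> tasks = jedis.lrange("queue", 0, -1);

            System.out.println(value + " " + name + " " + tasks);
        }
    }
}
```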

2.3 Collection operations in Java
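
A brief illustrative sketch of a few everyday collection operations (creation, filtering and mapping with streams, grouping into a map), not tied to any particular project:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectionOpsDemo {
    public static void main(String[] args) {
        List<String> words = List.of("redis", "mysql", "canal", "binlog");

        // Filter and transform with the Stream API.
        List<String> shortWords = words.stream()
                .filter(w -> w.length() <= 5)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        // Group elements into a Map keyed by their length.
        Map<Integer, List<String>> byLength = words.stream()
                .collect(Collectors.groupingBy(String::length));

        System.out.println(shortWords); // [REDIS, MYSQL, CANAL]
        System.out.println(byLength);   // {5=[redis, mysql, canal], 6=[binlog]}
    }
}
```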

 

 

Source: blog.csdn.net/qq_43681154/article/details/125438843