Ten Years of Java: The Technology Stack

The Java technology stack below refers to a lot of material; I won't list it all in detail here, so you can search for it yourself.

1 Java foundation

1.1 Algorithms

1.1 Sorting algorithms: straight insertion sort, Shell sort, bubble sort, quick sort, selection sort, heap sort, merge sort, radix sort
1.2 Binary search tree, red-black tree, B-tree, B+ tree, LSM tree (with their typical applications: relational databases and HBase, respectively)
1.3 BitSet for problems such as deduplication and existence checks (see the sketch after this list)
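
As a small illustration of the BitSet item above, a minimal sketch of duplicate detection over non-negative, bounded int ids (the ids array is made up for illustration):

```java
import java.util.BitSet;

public class DedupDemo {
    public static void main(String[] args) {
        int[] ids = {3, 7, 3, 42, 7};
        BitSet seen = new BitSet();          // one bit per possible id value

        for (int id : ids) {
            if (seen.get(id)) {
                System.out.println(id + " is a duplicate");
            } else {
                seen.set(id);                // mark this id as seen
            }
        }
    }
}
```
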
1.2 Basics

2.1 The string constant pool and its migration
2.2 String matching: the KMP algorithm
2.3 equals and hashCode
2.4 Generics, exceptions, reflection
2.5 String's hash algorithm
2.6 Hash collision resolution: separate chaining
2.7 How the for-each loop works under the hood
2.8 The static, final, transient, and other keywords
2.9 The underlying implementation of the volatile keyword
2.10 Which sorting algorithm Collections.sort uses
2.11 The Future interface and the FutureTask implementation used by common thread pools
2.12 The internals of String.intern: the changes between JDK 1.6 and JDK 1.7, and the underlying C++ StringTable implementation (see the sketch after this list)
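
A small sketch for item 2.12 showing the observable difference of String.intern between JDK versions; the behavior noted in the comments is the commonly cited one, so verify it on your target JVM:

```java
public class InternDemo {
    public static void main(String[] args) {
        // Built at runtime, so "ab" is not yet in the string table at this point
        String s = new String("a") + "b";
        String interned = s.intern();
        // JDK 1.7+: the string table can reference the heap object, so this prints true.
        // JDK 1.6: the table lives in PermGen and stores a copy, so this prints false.
        System.out.println(interned == s);
    }
}
```
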
1.3 Design patterns

Singleton pattern
Factory pattern
Decorator pattern
Observer design pattern
ThreadLocal design pattern
. . .
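
Illustrating the first item above, one common singleton variant: double-checked locking, where the volatile field is what makes it safe under the Java memory model discussed later (a minimal sketch, not the only correct form):

```java
public final class Config {
    private static volatile Config instance;

    private Config() { }

    public static Config getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Config.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Config();
                }
            }
        }
        return instance;
    }
}
```
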
1.4 Regular expressions

4.1 Capturing groups and non-capturing groups
4.2 Greedy, reluctant, and possessive quantifier modes (see the sketch after this list)
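
A minimal sketch of items 4.1 and 4.2 with java.util.regex (the HTML snippet is made up for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        String html = "<b>bold</b> and <i>italic</i>";

        // Greedy: .* grabs as much as possible, so the match spans both tags
        Matcher greedy = Pattern.compile("<.*>").matcher(html);
        greedy.find();
        System.out.println(greedy.group());     // <b>bold</b> and <i>italic</i>

        // Reluctant: .*? stops at the first closing '>'
        Matcher reluctant = Pattern.compile("<.*?>").matcher(html);
        reluctant.find();
        System.out.println(reluctant.group());  // <b>

        // Capturing group (\w+) vs. non-capturing group (?:/)
        Matcher tags = Pattern.compile("<(?:/)?(\\w+)>").matcher(html);
        while (tags.find()) {
            System.out.println(tags.group(1));  // b, b, i, i
        }
    }
}
```
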
1.5 Java memory model and garbage collection algorithm

5.1 The class loading mechanism, i.e. the parent delegation model
5.2 Runtime memory areas: thread-shared: heap, method area (permanent generation); thread-private: virtual machine stack, native method stack, program counter
5.3 The memory allocation mechanism: young generation (Eden and two Survivor spaces), old generation, permanent generation, and how allocation flows through them
5.4 Strong references, soft references, weak references, phantom references, and GC
5.5 The happens-before rules
5.6 Instruction reordering, memory fences
5.7 The Java 8 changes to the memory generations
5.8 Garbage collection algorithms: mark-sweep (drawbacks: inefficiency and memory fragmentation); the copying algorithm (solves those problems but can only use half the memory at a time; for scenes where most objects die young, the default 8:1:1 Eden/Survivor split improves on this, with the drawback that it still needs the old generation as an allocation guarantee); mark-compact
Common garbage collectors: young generation: Serial collector, ParNew collector, Parallel Scavenge collector
Old generation: Serial Old collector, Parallel Old collector, CMS (Concurrent Mark Sweep) collector; G1 collector (spans both the young and old generations)

5.9 Common GC parameters: -Xmn, -Xms, -Xmx, -XX:MaxPermSize, -XX:SurvivorRatio, -XX:+PrintGCDetails

5.10 Common tools: jps, jstat, jmap, jstack; graphical tools: JConsole, VisualVM, MAT

1.6 Locks and the source code of the concurrent containers

6.1 Understanding synchronized and volatile
6.2 The principle of the Unsafe class and how it is used to implement CAS, which underpins the AtomicInteger family and others
6.3 The ABA problem that CAS can run into, and solutions such as adding a modification count or version number
6.4 The implementation of the AQS synchronizer (AbstractQueuedSynchronizer); see the sketch after this list
6.5 Exclusive locks and shared locks; the implementation of the reentrant exclusive lock ReentrantLock and of shared locks
6.6 Fair lock and unfair lock
6.7 Implementation principle of read-write lock ReentrantReadWriteLock
6.8 LockSupport tool
6.9 Condition interface and its implementation principle
6.10 Implementation principle of HashMap, HashSet, ArrayList, LinkedList, HashTable, ConcurrentHashMap, TreeMap
6.11 The concurrency problems of HashMap
6.12 The implementation of ConcurrentLinkedQueue
6.13 Fork/Join framework
6.14 CountDownLatch and CyclicBarrier
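
To make item 6.4 concrete, a minimal sketch of an exclusive lock built on AQS, closely following the style of the example in the AbstractQueuedSynchronizer javadoc (not production code):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal non-reentrant mutex on top of AQS: state 0 = free, 1 = held.
public class SimpleMutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);   // CAS the state from free to held
        }
        @Override
        protected boolean tryRelease(int arg) {
            setState(0);                       // mark free; AQS then wakes a queued waiter
            return true;
        }
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // parks the thread in AQS's wait queue if busy
    public void unlock() { sync.release(1); }
}
```
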
1.7 Thread pool source code

7.1 How task execution works internally
7.2 The differences between the various thread pools (see the sketch after this list)
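
For item 7.2, a sketch of how two of the common Executors factory methods differ mainly in the ThreadPoolExecutor parameters they pass, which is roughly what the JDK implementations do; the queue choice drives most of the behavioral difference:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        // Roughly what Executors.newFixedThreadPool(4) builds:
        ExecutorService fixed = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());        // unbounded queue, fixed thread count

        // Roughly what Executors.newCachedThreadPool() builds:
        ExecutorService cached = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<>());           // no queueing, threads grow on demand

        fixed.submit(() -> System.out.println("ran on " + Thread.currentThread().getName()));
        fixed.shutdown();
        cached.shutdown();
    }
}
```
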
2 Web development:
2.1 Architecture design of SpringMVC

1.1 Problems in raw servlet development: request mapping, parameter extraction, format conversion, return-value handling, and view rendering
1.4 The overall flow of how SpringMVC processes a request
1.5 SpringBoot

2.2 SpringAOP source code

2.1 Classification of AOP implementations: AOP done at compile time, before bytecode loading, or after bytecode loading
2.2 Understanding the players: the AOP Alliance, AspectJ, JBoss AOP, Spring's own AOP implementation, and Spring's embedded AspectJ support, in particular how to tell the latter two apart in code
2.3 Interface design:

Concepts or interfaces defined by the AOP Alliance: Pointcut (a concept only, with no corresponding interface defined), Joinpoint, Advice, MethodInterceptor, MethodInvocation

Interfaces and implementation classes Spring AOP defines for the Advice concept above: BeforeAdvice, AfterAdvice, MethodBeforeAdvice, AfterReturningAdvice; the AspectJ-backed implementations of these interfaces: AspectJMethodBeforeAdvice, AspectJAfterReturningAdvice, AspectJAfterThrowingAdvice, and AspectJAfterAdvice

The AdvisorAdapter interface defined by Spring AOP: adapts the Advice types above into MethodInterceptors

The Pointcut interface defined by Spring AOP: holds two parts, a ClassFilter (filters classes) and a MethodMatcher (filters methods)

The ExpressionPointcut interface defined by Spring AOP: its implementations introduce AspectJ pointcut expressions

The PointcutAdvisor interface defined by Spring AOP (combines the Advice and Pointcut interfaces above)

2.4 SpringAOP's calling process

2.5 Distinguishing Spring AOP's own implementation (ProxyFactoryBean being representative) from the AspectJ-based usage (see the sketch below)
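
A minimal sketch of item 2.5's first half: Spring's own programmatic AOP, an AOP Alliance MethodInterceptor plugged into a ProxyFactory (ProxyFactoryBean is the FactoryBean form of the same machinery). The Greeter/DefaultGreeter classes are invented for illustration:

```java
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.aop.framework.ProxyFactory;

public class AopSketch {
    public interface Greeter { String greet(String name); }

    public static class DefaultGreeter implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    public static void main(String[] args) {
        // An AOP Alliance MethodInterceptor: the unit the interceptor chain is built from
        MethodInterceptor timing = (MethodInvocation inv) -> {
            long start = System.nanoTime();
            try {
                return inv.proceed();                        // invoke the target method
            } finally {
                System.out.println(inv.getMethod().getName()
                        + " took " + (System.nanoTime() - start) + " ns");
            }
        };

        ProxyFactory factory = new ProxyFactory(new DefaultGreeter());
        factory.addAdvice(timing);                           // Advice -> interceptor chain
        Greeter proxy = (Greeter) factory.getProxy();        // JDK dynamic proxy over the interface
        System.out.println(proxy.greet("world"));
    }
}
```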

2.3 The source code of Spring's transaction system, and the Jotm and Atomikos distributed transaction implementations

3.1 The problems with raw JDBC transactions
3.2 Hibernate's improvements to transaction handling
3.3 How Spring defines its transaction abstraction over these various transaction types, and how it integrates JDBC and Hibernate transactions
3.4 The roles involved in the three transaction models and their respective responsibilities
3.5 Separating transaction code from business code (AOP + ...)
3.6 A full picture of Spring's transaction interceptor, TransactionInterceptor
3.7 X/Open DTP model, two-phase commit, JTA interface definition
3.8 How Jotm and Atomikos are implemented
3.9 The propagation attributes of transactions; how suspension and resumption work

2.4 Database isolation levels

4.1 Read uncommitted
4.2 Read committed
4.3 Repeatable read
4.4 Serializable

2.5 Databases

5.1 Database performance optimization
5.2 An in-depth look at MySQL's Record Locks, Gap Locks, and Next-Key Locks, for example under what circumstances the following can deadlock:
start transaction; DELETE FROM t WHERE id = 6; INSERT INTO t VALUES(6); commit;

5.3 The locking behavior of INSERT INTO ... SELECT statements

5.4 The ACID properties of transactions

5.5 Understanding InnoDB's MVCC

5.6 undo log, redo log, binlog

1 Both undo and redo could be used to achieve durability; what would each flow look like, and why is redo the one chosen for durability?
2 undo and redo together provide atomicity and durability; why is the undo log persisted before the redo log?
3 Why does undo depend on redo?
4 Log contents can be physical or logical; what are the pros and cons of each?
5 The redo log ends up using physical plus logical logging: physical down to the page, logical within the page. What problem does that leave, how is it handled, and where does Double Write come in?
6 Why does the undo log use logical logging instead of physical logging?
7 Why introduce Checkpoint?
8 After introducing Checkpoint, user operations have to be blocked for a while to guarantee consistency; how is that solved? (This problem is common: Redis and ZooKeeper face similar situations with different strategies.) Hence synchronous and asynchronous Checkpoints.
9 With binlog enabled, the rough flow of the 2PC inside a transaction (covering the two persistence steps: redo log persistence and binlog persistence)
10 In the flow above, why must binlog persistence happen after the redo log and before the storage engine commit?
11 Why must the order of binlog writes and storage engine commits be kept consistent across transactions? (That is, the transaction whose binlog is written first must also be committed first.)
12 To guarantee that order, the old approach was to hold prepare_commit_mutex, which badly hurts transaction throughput; how is binlog group commit implemented?
13 How is group commit implemented for redo log persistence? With that, both persistence steps in the transaction's internal 2PC can be group committed, which greatly improves efficiency.
2.6 ORM frameworks: MyBatis, Hibernate

6.1 The evolution path from raw JDBC to Spring's JdbcTemplate to Hibernate to JPA to Spring Data JPA (see the sketch below)
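
A small sketch contrasting the first two steps of that path, raw JDBC versus Spring's JdbcTemplate; the users table and name column are made up for illustration, and a DataSource is assumed to be available:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserDao {
    private final DataSource dataSource;
    private final JdbcTemplate jdbcTemplate;

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Raw JDBC: manual resource handling and checked exceptions everywhere
    public String findNameRawJdbc(long id) throws Exception {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    // JdbcTemplate: connection handling and exception translation are done for you
    public String findNameJdbcTemplate(long id) {
        return jdbcTemplate.queryForObject(
                "SELECT name FROM users WHERE id = ?", String.class, id);
    }
}
```
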
2.7 SpringSecurity, shiro, SSO (single sign-on)

7.1 The differences and relationship between Session and Cookie, and how Sessions are implemented
7.2 The authentication process of SpringSecurity and the relationship with Session
7.3 Implementing SSO with CAS (see "Cas (01): Introduction")

2.8 Log

8.1 JDK’s own logging, log4j, log4j2, logback
8.2 Facade commons-logging, slf4j
8.3 How log calls get bridged among the six logging libraries above when they are mixed together
2.9 datasource

9.1 c3p0
9.2 druid
9.3 How JdbcTemplate obtains and manages Connections while executing SQL statements
2.10 HTTPS implementation principle

3 Distributed systems, Java middleware, web servers, etc.:
3.1 ZooKeeper source code

1.1 Client architecture
1.2 The server side in standalone and cluster modes, and the corresponding request processor chains
1.3 Cluster version session establishment and activation process
1.4 Leader election process
1.5 Detailed analysis of the transaction log and snapshot files
1.6 Implementing distributed locks and distributed ID generators (see the sketch after this list)
1.7 Implementing leader election
1.8 How the ZAB protocol achieves consistency
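
A minimal sketch of item 1.6 using Apache Curator's lock recipe rather than the raw ZooKeeper API; the connection string and lock path are placeholders:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkLockDemo {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // The recipe creates ephemeral sequential nodes under the lock path and
        // watches the predecessor node, the classic ZooKeeper lock pattern.
        InterProcessMutex lock = new InterProcessMutex(client, "/locks/demo");
        lock.acquire();
        try {
            System.out.println("holding the lock, doing critical work");
        } finally {
            lock.release();
        }
        client.close();
    }
}
```
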
3.2 Serialization and deserialization framework

2.1 Avro research
2.2 Thrift research
2.3 Protobuf research
2.4 Protostuff research
2.5 Hessian
3.3 RPC framework dubbo source code

3.1 The implementation of Dubbo's extension mechanism, compared with the JDK SPI mechanism
3.2 Service publishing process
3.3 Service subscription process
3.4 RPC communication design
3.4 The NIO module and the corresponding Netty, Mina, and Thrift source code

4.1 The TCP handshake and teardown, and the TCP finite state machine
4.2 backlog
4.3 BIO NIO
4.4 The difference between blocking/non-blocking, synchronous/asynchronous
4.5 Blocking IO, non-blocking IO, multiplexing IO, asynchronous IO
4.6 Reactor thread model
4.7 How the JDK's poll and epoll selectors relate to the underlying poll and epoll system calls
4.8 Netty's own epoll implementation
4.9 A rough look at how poll and epoll are implemented at the kernel level
4.10 epoll's edge-triggered and level-triggered modes
4.11 Netty's EventLoopGroup design
4.12 Netty's ByteBuf Design
4.13 Netty's ChannelHandler
4.14 Netty's zero-copy
4.15 Netty's thread model, in particular business threads and resource release (see the echo-server sketch after this list)
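
Tying items 4.6, 4.11, and 4.13 together, a minimal sketch of a Netty echo server (port 8080 is arbitrary): the boss EventLoopGroup accepts connections and the worker group drives the ChannelHandler pipeline.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class EchoServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);     // accepts connections
        EventLoopGroup worker = new NioEventLoopGroup();    // runs the handlers
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg);        // echo the ByteBuf back; the write releases it
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}
```
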
3.5 Message queue kafka, RocketMQ, Notify, Hermes

5.1 Kafka's file storage design
5.2 Kafka's replica replication process
5.3
5.4 Kafka's message loss problem
5.5 Kafka's message sequence problem
5.6 Kafka's ISR design compared with majority-quorum replication (see the producer sketch after this list)
5.7 To stay lightweight and efficient, Kafka does without many advanced features: transactions, message priorities, message filtering, and, more importantly, mature service governance, so when something goes wrong it is hard to see directly. It is not suited to enterprise systems with strict data requirements, but fits high-concurrency scenarios such as log collection where a small amount of message loss or duplication is tolerable.
5.8 Transaction design of Notify and RocketMQ
5.9 File-based Kafka and RocketMQ versus database-based Notify and Hermes
5.10 What aspects should be considered when designing a message system
5.11 Topics such as lost messages, message duplication, high availability, etc.
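
Touching on items 5.4 and 5.6, a sketch of the producer-side settings usually discussed for avoiding message loss; the broker address and topic are placeholders, and broker-side settings such as min.insync.replicas matter just as much:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SafeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");       // wait for all in-sync replicas, not just the leader
        props.put("retries", 3);        // retry transient failures instead of dropping the send

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();            // the send ultimately failed
                        } else {
                            System.out.println("stored at offset " + metadata.offset());
                        }
                    });
        }
    }
}
```
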
3.6 Database sharding (splitting databases and tables): MyCat

3.7 NoSQL database: MongoDB

3.8 Key-value stores: Memcached, Redis

8.1 How Redis maintains and manages clients and their read/write buffers
8.2 Implementation of redis transaction
8.3 Implementation of Jedis client
8.4 The implementation of JedisPool and ShardedJedisPool (see the usage sketch after this list)
8.5 The implementation of epoll, file events, and time events in Redis's event loop
8.6 Redis's RDB persistence
8.7 Redis's AOF: command appending, file writing, and syncing the file to disk
8.8 Redis AOF rewriting, measures to reduce blocking time
8.9 Redis's LRU memory eviction algorithm
8.10 Redis master slave replication
8.11 Redis Sentinel's high-availability solution
8.12 Redis Cluster's sharding solution
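
For items 8.3 and 8.4, a minimal usage sketch of Jedis borrowed from a JedisPool; the host, port, and key are placeholders:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class JedisPoolDemo {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(16);                          // cap the number of pooled connections

        try (JedisPool pool = new JedisPool(config, "localhost", 6379)) {
            // getResource() borrows a connection; try-with-resources returns it to the pool
            try (Jedis jedis = pool.getResource()) {
                jedis.set("greeting", "hello");
                System.out.println(jedis.get("greeting"));
            }
        }
    }
}
```
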
3.9 Design principles of the web servers Tomcat and Nginx

9.1 Tomcat's overall architecture design
9.2 Tomcat's concurrency control for communication
9.3 The full processing flow of an HTTP request once it reaches Tomcat
3.10 ELK log real-time processing and query system

10.1 Elasticsearch, Logstash, Kibana
3.11 Service aspects

11.1 SOA and microservices
11.2 Co-deployment of services, automatic fast switching between versions and rollback; for details, see "Java container-based multi-application deployment technology practice"

11.3 Service governance: rate limiting and degradation; for details, see Zhang Kaitao's architecture series

Service rate limiting: token bucket, leaky bucket (a small token-bucket sketch follows the Spring Cloud items below)
Linear scaling of stateless services: e.g. typical web applications, which can simply sit behind hardware or software load balancing with a plain round-robin scheme
Linear scaling of stateful services: e.g. scaling Redis: consistent hashing, migration tools
11.5 Service call-chain monitoring and alerting: CAT, Dapper, Pinpoint

3.12 Spring Cloud

12.1 Spring Cloud Zookeeper: service registration and discovery
12.2 Spring Cloud Config: distributed configuration
12.2 Spring Cloud Netflix Eureka: REST service registration and discovery
12.3 Spring Cloud Netflix Hystrix: service isolation, circuit breaking, and degradation
12.4 Spring Cloud Netflix Zuul: Dynamic Routing, API Gateway
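
As promised under 11.3, a minimal, illustrative token-bucket rate limiter; this is a sketch, not production code, and Guava's RateLimiter is a hardened alternative:

```java
public class TokenBucket {
    private final long capacity;          // maximum number of tokens held in the bucket
    private final double refillPerNano;   // token refill rate, converted to per-nanosecond
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // Returns true if a token was taken; false means the caller should reject or degrade.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) * refillPerNano);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```
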
3.13 Distributed Transaction

13.1 JTA Distributed Transaction Interface Definition, Integration with Spring Transaction System
13.2 The TCC distributed transaction concept (an illustrative interface sketch follows this section)
13.3 TCC Distributed Transaction Implementation Framework Case 1: tcc-transaction
13.3.1 The TccCompensableAspect aspect intercepts and creates a ROOT transaction
13.3.2 The TccTransactionContextAspect aspect makes the remote RPC call resource join the above transaction as a participant
13.3.3 The TccCompensableAspect aspect creates a branch transaction according to the mark of the TransactionContext passed by the remote RPC
13.3.4 After all RPC calls complete, the ROOT transaction begins to commit or roll back, which drives the commit or rollback of all participants
13.3.5 The participants' commit or rollback again goes through remote RPC calls: the provider side executes the confirm or cancel method of its corresponding branch transaction
13.3.6 Transaction storage, and whether it is shared across the cluster
13.3.7 Transaction recovery, and avoiding duplicate recovery across the cluster
13.4 TCC distributed transaction implementation framework Case 2: ByteTCC
13.4.1 A JTA transaction management implementation, analogous to JTA implementations such as Jotm and Atomikos
13.4.2 Storage and recovery of transactions, and whether they are shared across the cluster; the caller creates a CompensableTransaction and enlists resources in it
13.4.3 The CompensableMethodInterceptor injects CompensableInvocation resources into the Spring transaction
13.4.4 Spring's distributed transaction manager creates a CompensableTransaction as the coordinator, binds it to the current thread, and creates a JTA transaction
13.4.5 SQL and other operations are executed
13.4.6 Before a Dubbo RPC call, CompensableDubboServiceFilter creates a proxy XAResource and enlists it in the CompensableTransaction above; the TransactionContext of that CompensableTransaction is passed along with the RPC call so the participant can create a branch transaction and enlist resources, after which the JTA transaction is committed
13.4.7 When the RPC call reaches the provider side, CompensableDubboServiceFilter creates the corresponding CompensableTransaction from the TransactionContext passed in
13.4.8 During execution the provider side hits @Transactional and @Compensable and, as a participant, starts the try-phase transaction, i.e. a JTA transaction is created
13.4.9 After the provider's try phase finishes, it prepares to commit the try: it simply commits that JTA transaction and returns the result to the RPC caller; the caller decides whether to commit or roll back
13.4.10 Once everything has executed, the transaction is committed or rolled back. On commit, the JTA transaction is committed first (including XAResource resources such as JDBC); only after that succeeds is the CompensableTransaction committed, and if the JTA commit fails the CompensableTransaction must be rolled back
13.4.11 Committing the CompensableTransaction means committing its CompensableInvocation resources and RPC resources: calling confirm on each CompensableInvocation resource and committing each RPC resource
13.4.12 At this point the confirm of each CompensableInvocation resource prepares to open a new transaction; since the current thread already has a CompensableTransaction bound, opening a transaction here only creates a new JTA transaction
13.4.13 The transaction opened by each CompensableInvocation resource's confirm then repeats the process above: JDBC-style resources are enlisted in the newly created JTA transaction, while RPC resources and CompensableInvocation resources are still enlisted in the CompensableTransaction bound to the current thread
13.4.14 When the transaction opened by the current CompensableInvocation resource's confirm finishes, it is committed, which again is just a JTA transaction commit; once that completes, this resource's confirm is done and the next CompensableInvocation resource's confirm proceeds in the same way, i.e. another new JTA transaction is committed. Next come the RPC resources, i.e. committing the participants' CompensableTransaction transactions
13.4.15 Once the confirm of all CompensableInvocation resources has completed, the RPC resources are committed: a remote call is made to commit the remote provider's branch transaction, and the transaction id is passed along with the call
13.4.16 The provider side looks up the corresponding CompensableTransaction by the transaction id, performs the commit, and returns a response when the commit completes
13.4.17 After receiving the response, the coordinator moves on to commit the next RPC resource; once all RPC resources have completed their commits, the coordinator considers the whole transaction finished
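
As a framework-agnostic illustration of the TCC idea in 13.2 (the AccountTccService name and its methods are invented for illustration; frameworks such as tcc-transaction and ByteTCC wire these phases together via annotations and AOP as described above):

```java
// The three TCC phases for a hypothetical account-debit resource.
public interface AccountTccService {

    // Try: reserve the resource, e.g. move the amount into a "frozen" column.
    void tryDebit(String accountId, long amount);

    // Confirm: make the reservation permanent; must be idempotent, because the
    // coordinator may retry it during recovery.
    void confirmDebit(String accountId, long amount);

    // Cancel: release the reservation; must also be idempotent, and must cope
    // with being called when Try never ran (the "empty rollback" case).
    void cancelDebit(String accountId, long amount);
}
```
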
3.14 Consistency Algorithm

14.1 raft (see Raft Algorithm Appreciation for details)

14.1.1 The leader election process and its constraints: the new leader must contain all committed entries, i.e. its log must be at least as up-to-date as the logs of a majority of the servers
14.1.2 The log replication process: the leader sends AppendEntries RPC requests to all followers; once a majority of followers acknowledge, the entry can be committed, and then the client is answered with OK
14.1.3 If the leader crashes after an entry has reached a majority, a subsequent leader cannot directly commit that previous-term entry just because it sits on a majority (there is a detailed counter-example that shows the root cause). The practice is to create an empty (no-op) entry in the current term; once that new entry is replicated to a majority, the previous-term entries that reached a majority can be committed along with it
14.1.4 Once the leader decides an entry can be committed, it updates its own commitIndex and applies the entry to its state machine; in the next heartbeat to the followers it carries its commitIndex so they update theirs and apply the entry to their own state machines
14.1.5 From the flow above, a client can face this situation: the leader decides a request can be committed (its entry has been replicated to a majority) but crashes before replying, so the request appears to fail even though its entry was durably committed; at other times the request genuinely fails (no majority replication) and nothing is persisted. In other words, when a request fails, the server side may or may not have applied it. The client therefore has to help: on retry it must resend the same request data rather than new data, and the server side must be idempotent
14.1.6 Cluster membership changes
14.2 ZAB protocol used by ZooKeeper (see the appreciation of ZooKeeper's consensus algorithm for details)

14.2.1 The leader election process. Key points: how votes are collected from servers in different states, and that the election must choose as leader a server that contains all the (committed) logs
14.2.2 The data synchronization process between leader and followers: full sync, diff sync, and log correction/truncation, to keep followers consistent with the leader. When a follower joins a cluster whose election has already completed, the key point is that the leader blocks the processing of write requests, finishes the diff synchronization of the logs as well as the synchronization of requests already in flight, and then unblocks
14.2.3 In the broadcast phase, client requests are processed normally; once a majority of acknowledgements is received, the client can be answered
14.2.4 Log persistence and recovery. Persistence: the transaction log is persisted every so many transactions, and once before leader election. Recovery: any transaction request already written to the log is simply treated as committed (regardless of whether it had previously reached a majority) and is committed during recovery; concretely, the snapshot is restored first and then the corresponding transaction log is replayed
14.3 paxos (see the proof process of the paxos algorithm for details)

14.3.1 The operation process of paxos:

Phase 1 (a): A proposer selects a proposal number n and sends a prepare request with that number to all acceptors.

Phase 1 (b): If n is greater than the number of any prepare request the acceptor has already responded to, the acceptor promises not to respond to any prepare or accept request with a number less than n, and returns to the proposer the value of the highest-numbered proposal it has accepted, if any. If it has already responded to a request numbered greater than n, it simply ignores this prepare request.

Phase 2 (a): If the proposer receives responses from a majority of acceptors, it issues a proposal (n, v), where v is the value of the highest-numbered accepted proposal among those responses, or the proposer's own value if none was reported. It then sends the proposal to all acceptors; this is called the accept request. This step is the actual "send the proposal" step, whereas the earlier prepare request is better seen as the process of constructing the final proposal (n, v).

Phase 2 (b): When an acceptor receives the proposal numbered n, it accepts the proposal unless it has already responded to a prepare request with a number greater than n, in which case it rejects it.

14.3.2 The proof process of Paxos:

1 Posing the problem

2 The initial accept of the acceptor

3 P2 - Requirement for the result

4 P2a - The accept requirement for the acceptor

5 P2b - The requirement for the proposer to propose a proposal (result requirement)

6 P2c - The requirement for the proposer to propose a proposal (practice requirement)

7 Introduce the prepare process and P1a

8 Optimizing the prepare phase

14.3.3 Basic Paxos and Multi-Paxos

4 Big data direction
4.1 Hadoop

1.1 A UserGroupInformation source-code walkthrough: JAAS authentication, maintaining user and group relationships
1.2 Implementation of RPC communication
1.3 The process of proxying users
1.4 Kerberos authentication
4.2 MapReduce

2.1 MapReduce theory and its corresponding interface definitions
... optimization process
6.3 HiveServer2 authentication and authorization
6.4 Metastore authentication and authorization
6.5 How HiveServer2 passes users through to the metastore

4.7 HBase

7.1 HBase's overall architecture diagram
7.2 HBase's WAL and MVCC design
7.3 The client-side asynchronous batch flush and how the RegionServer is located
7.4 / 7.5 HBase's nodes on ZooKeeper, explained
7.9 rowKey design
7.10 MemStore and the LSM tree

4.8 Spark

8.1 The different deployment modes of Spark
8.2 The implementation of Spark SQL

