---
title: Summary of the Beauty of Design Patterns (Open Source Practice)
date: 2023-01-10 17:13:05
tags:
  - Design pattern
categories:
  - Design pattern
cover: https://cover.png
feature: false
---



For detailed knowledge about design patterns, see the following three articles:

1. Design patterns applied by Java JDK

1.1 Application of factory pattern in Calendar class

When we discussed the factory pattern earlier, most of the factory classes were named with Factory as a suffix, and a factory class was mainly responsible for creating objects. In real project development, however, the design of factory classes is more flexible. Take an application of the factory pattern in the Java JDK: java.util.Calendar. From the name alone, you cannot tell that it is a factory class.

The Calendar class provides a large amount of date-related functionality and, at the same time, provides a getInstance() factory method to create different Calendar subclass objects according to the TimeZone and Locale. That is, the functional code and the factory method are coupled in one class. Even reading its source code, you could easily miss that it uses the factory pattern, and because it is not just a factory class, it is not named with the Factory suffix.

The relevant code of the Calendar class is shown below; most of it has been omitted, leaving only the implementation of the getInstance() factory method. As the code shows, getInstance() can create different Calendar subclass objects, such as BuddhistCalendar, JapaneseImperialCalendar, and GregorianCalendar, according to the TimeZone and Locale. These details are completely encapsulated in the factory method: the user only needs to pass in the current time zone and locale to obtain a Calendar object, without caring which subclass it actually is.

```java
public abstract class Calendar implements Serializable, Cloneable, Comparable<Calendar> {
	//...
	public static Calendar getInstance(TimeZone zone, Locale aLocale) {
		return createCalendar(zone, aLocale);
	}

	private static Calendar createCalendar(TimeZone zone, Locale aLocale) {
		CalendarProvider provider = LocaleProviderAdapter.getAdapter(
		                                CalendarProvider.class, aLocale).getCalendarProvider();
		if (provider != null) {
			try {
				return provider.getInstance(zone, aLocale);
			} catch (IllegalArgumentException iae) {
				// fall back to the default instantiation
			}
		}
		Calendar cal = null;
		if (aLocale.hasExtensions()) {
			String caltype = aLocale.getUnicodeLocaleType("ca");
			if (caltype != null) {
				switch (caltype) {
				case "buddhist":
					cal = new BuddhistCalendar(zone, aLocale);
					break;
				case "japanese":
					cal = new JapaneseImperialCalendar(zone, aLocale);
					break;
				case "gregory":
					cal = new GregorianCalendar(zone, aLocale);
					break;
				}
			}
		}
		if (cal == null) {
			if (aLocale.getLanguage() == "th" && aLocale.getCountry() == "TH") {
				cal = new BuddhistCalendar(zone, aLocale);
			} else if (aLocale.getVariant() == "JP" && aLocale.getLanguage() == "ja") {
				cal = new JapaneseImperialCalendar(zone, aLocale);
			} else {
				cal = new GregorianCalendar(zone, aLocale);
			}
		}
		return cal;
	}
	//...
}
```
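A minimal usage sketch: client code only touches the abstract Calendar type; which subclass it actually receives depends on the locale passed in (the printed class names may vary by JDK version):

```java
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

public class CalendarFactoryDemo {
    public static void main(String[] args) {
        // The factory method hides which subclass is instantiated
        Calendar thai = Calendar.getInstance(
                TimeZone.getDefault(), new Locale("th", "TH"));
        Calendar gregorian = Calendar.getInstance(
                TimeZone.getDefault(), Locale.US);

        System.out.println(thai.getClass().getName());      // typically a BuddhistCalendar
        System.out.println(gregorian.getClass().getName()); // java.util.GregorianCalendar
    }
}
```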

1.2 Application of builder pattern in Calendar class

Staying with the Calendar class: it uses not only the factory pattern but also the builder pattern. There are two ways to implement the builder pattern: define a separate Builder class, or implement the Builder as an inner class of the original class. Calendar adopts the second approach.

```java
public abstract class Calendar implements Serializable, Cloneable, Comparable<Calendar> {
	//...
	public static class Builder {
		private static final int NFIELDS = FIELD_COUNT + 1;
		private static final int WEEK_YEAR = FIELD_COUNT;
		private long instant;
		private int[] fields;
		private int nextStamp;
		private int maxFieldIndex;
		private String type;
		private TimeZone zone;
		private boolean lenient = true;
		private Locale locale;
		private int firstDayOfWeek, minimalDaysInFirstWeek;

		public Builder() {
		}

		public Builder setInstant(long instant) {
			if (fields != null) {
				throw new IllegalStateException();
			}
			this.instant = instant;
			nextStamp = COMPUTED;
			return this;
		}

		//... many set() methods omitted

		public Calendar build() {
			if (locale == null) {
				locale = Locale.getDefault();
			}
			if (zone == null) {
				zone = TimeZone.getDefault();
			}
			Calendar cal;
			if (type == null) {
				type = locale.getUnicodeLocaleType("ca");
			}
			if (type == null) {
				if (locale.getCountry() == "TH" && locale.getLanguage() == "th") {
					type = "buddhist";
				} else {
					type = "gregory";
				}
			}
			switch (type) {
			case "gregory":
				cal = new GregorianCalendar(zone, locale, true);
				break;
			case "iso8601":
				GregorianCalendar gcal = new GregorianCalendar(zone, locale, true);
				// make gcal a proleptic Gregorian
				gcal.setGregorianChange(new Date(Long.MIN_VALUE));
				// and week definition to be compatible with ISO 8601
				setWeekDefinition(MONDAY, 4);
				cal = gcal;
				break;
			case "buddhist":
				cal = new BuddhistCalendar(zone, locale);
				cal.clear();
				break;
			case "japanese":
				cal = new JapaneseImperialCalendar(zone, locale, true);
				break;
			default:
				throw new IllegalArgumentException("unknown calendar type: " + type);
			}
			cal.setLenient(lenient);
			if (firstDayOfWeek != 0) {
				cal.setFirstDayOfWeek(firstDayOfWeek);
				cal.setMinimalDaysInFirstWeek(minimalDaysInFirstWeek);
			}
			if (isInstantSet()) {
				cal.setTimeInMillis(instant);
				cal.complete();
				return cal;
			}
			if (fields != null) {
				boolean weekDate = isSet(WEEK_YEAR) && fields[WEEK_YEAR] > fields[YEAR];
				if (weekDate && !cal.isWeekDateSupported()) {
					throw new IllegalArgumentException("week date is unsupported by " + type);
				}
				for (int stamp = MINIMUM_USER_STAMP; stamp < nextStamp; stamp++) {
					for (int index = 0; index <= maxFieldIndex; index++) {
						if (fields[index] == stamp) {
							cal.set(index, fields[NFIELDS + index]);
							break;
						}
					}
				}
				if (weekDate) {
					int weekOfYear = isSet(WEEK_OF_YEAR) ? fields[NFIELDS + WEEK_OF_YEAR] : 1;
					int dayOfWeek = isSet(DAY_OF_WEEK)
					                ? fields[NFIELDS + DAY_OF_WEEK] : cal.getFirstDayOfWeek();
					cal.setWeekDate(fields[NFIELDS + WEEK_YEAR], weekOfYear, dayOfWeek);
				}
				cal.complete();
			}
			return cal;
		}
	}
}
```

After reading the above code, a question is worth thinking about: since there is already a getInstance() factory method to create Calendar objects, why also use a Builder to create them? What is the difference between the two?

In fact, when these two patterns were discussed earlier, their differences were compared in detail. The factory pattern is used to create different but related types of objects (a group of subclasses inheriting the same parent class or interface), with the given parameters determining which type to create. The builder pattern is used to create one kind of complex object, "customizing" different objects by setting different optional parameters.

Looking at the build() method, you may feel it resembles a factory method. The first half of the code is indeed similar to the getInstance() factory method, creating different Calendar subclasses according to the type. The second half, however, is the standard builder pattern: it customizes the just-created Calendar subclass object according to the parameters set by the setXXX() methods.
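A minimal usage sketch of Calendar.Builder (available since Java 8) makes the contrast concrete: optional parameters are supplied step by step through set methods, and build() assembles the final object:

```java
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

public class CalendarBuilderDemo {
    public static void main(String[] args) {
        Calendar cal = new Calendar.Builder()
                .setCalendarType("iso8601")               // decides which subclass to build
                .setTimeZone(TimeZone.getTimeZone("UTC"))
                .setLocale(Locale.US)
                .setDate(2023, Calendar.JANUARY, 10)      // optional "customization" parameters
                .build();
        System.out.println(cal.getTime());
    }
}
```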

At this point you may ask: can this still be regarded as the builder pattern? To quote a passage mentioned earlier:

We don't need to be academic and insist on a sharp line between the factory pattern and the builder pattern. What we need to know is why each pattern is designed the way it is and what problems it solves. Only by understanding these essentials can we apply patterns flexibly rather than mechanically, and even mix patterns to create new ones that solve problems in specific scenarios.

The Calendar example teaches the same lesson: don't apply the principles and implementations of patterns too rigidly, and don't be afraid to make changes. The patterns are fixed, but the people using them are not. In real project development, not only can patterns be mixed together, the concrete code implementation can also be adjusted flexibly according to the specific functional requirements.

1.3 Application of decorator pattern in Collections class

As mentioned earlier, the Java I/O class library is a classic application of the decorator pattern. In fact, Java's Collections class also uses the decorator pattern.

The Collections class is a utility class for collection containers, providing many static methods for creating various containers, for example creating an UnmodifiableCollection object through the unmodifiableCollection() static method. Among these container classes, UnmodifiableCollection, CheckedCollection, and SynchronizedCollection are decorator classes for the Collection interface.

Because the three decorator classes just mentioned are almost identical in code structure, we take only UnmodifiableCollection as an example. The UnmodifiableCollection class is an inner class of Collections.

```java
public class Collections {
	private Collections() {
	}

	public static <T> Collection<T> unmodifiableCollection(Collection<? extends T> c) {
		return new UnmodifiableCollection<>(c);
	}

	/**
	 * @serial include
	 */
	static class UnmodifiableCollection<E> implements Collection<E>, Serializable {
		private static final long serialVersionUID = 1820017752578914078L;

		final Collection<? extends E> c;

		UnmodifiableCollection(Collection<? extends E> c) {
			if (c == null)
				throw new NullPointerException();
			this.c = c;
		}

		public int size()                 { return c.size(); }
		public boolean isEmpty()          { return c.isEmpty(); }
		public boolean contains(Object o) { return c.contains(o); }
		public Object[] toArray()         { return c.toArray(); }
		public <T> T[] toArray(T[] a)     { return c.toArray(a); }
		public String toString()          { return c.toString(); }

		public Iterator<E> iterator() {
			return new Iterator<E>() {
				private final Iterator<? extends E> i = c.iterator();

				public boolean hasNext() { return i.hasNext(); }
				public E next()          { return i.next(); }
				public void remove() {
					throw new UnsupportedOperationException();
				}
				@Override
				public void forEachRemaining(Consumer<? super E> action) {
					// Use backing collection version
					i.forEachRemaining(action);
				}
			};
		}

		public boolean add(E e) {
			throw new UnsupportedOperationException();
		}
		public boolean remove(Object o) {
			throw new UnsupportedOperationException();
		}
		public boolean containsAll(Collection<?> coll) {
			return c.containsAll(coll);
		}
		public boolean addAll(Collection<? extends E> coll) {
			throw new UnsupportedOperationException();
		}
		public boolean removeAll(Collection<?> coll) {
			throw new UnsupportedOperationException();
		}
		public boolean retainAll(Collection<?> coll) {
			throw new UnsupportedOperationException();
		}
		public void clear() {
			throw new UnsupportedOperationException();
		}

		// Override default methods in Collection
		@Override
		public void forEach(Consumer<? super E> action) {
			c.forEach(action);
		}
		@Override
		public boolean removeIf(Predicate<? super E> filter) {
			throw new UnsupportedOperationException();
		}
		@SuppressWarnings("unchecked")
		@Override
		public Spliterator<E> spliterator() {
			return (Spliterator<E>) c.spliterator();
		}
		@SuppressWarnings("unchecked")
		@Override
		public Stream<E> stream() {
			return (Stream<E>) c.stream();
		}
		@SuppressWarnings("unchecked")
		@Override
		public Stream<E> parallelStream() {
			return (Stream<E>) c.parallelStream();
		}
	}
}
```

After reading the above code, think about it: why is UnmodifiableCollection a decorator class for Collection? Couldn't the two simply be regarded as an interface implementation or class inheritance relationship?

As mentioned earlier, a decorator class in the decorator pattern enhances the functionality of the original class. Although UnmodifiableCollection can be regarded as a functional enhancement of Collection, that alone is not convincing enough to conclude that it is a decorator class for Collection.

In fact, the key point is that the constructor of UnmodifiableCollection receives a Collection object and then wraps all of its methods, either reimplementing them (such as add()) or simply delegating to them (such as stream()). A plain interface implementation or inheritance would not be structured this way. Therefore, from the perspective of code implementation, UnmodifiableCollection is a typical decorator class.
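A minimal usage sketch: the decorator wraps an existing collection, delegates read operations to it, and rejects mutating operations:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class UnmodifiableDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("a", "b"));
        Collection<String> readOnly = Collections.unmodifiableCollection(list);

        System.out.println(readOnly.size());     // delegated: prints 2
        try {
            readOnly.add("c");                   // enhanced: mutation is rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("unmodifiable view");
        }
    }
}
```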

1.4 Application of the adapter pattern in the Collections class

When the adapter pattern was discussed earlier, we said it can be used to stay compatible with old interfaces, and I gave a JDK example. Let's take a closer look at it here.

Older versions of the JDK provided the Enumeration interface to traverse containers; newer versions use Iterator instead. To stay compatible with old client code (code written against the old JDK), the Enumeration interface was retained, and the Collections class keeps the enumeration() static method, which is used to create an Enumeration object for a container.

However, keeping the Enumeration interface and the enumeration() function is only about compatibility and, by itself, has nothing to do with adapters. So which part is the adapter?

In newer versions of the JDK, the Enumeration object returned here is an adapter: it adapts the new Iterator to client code written against Enumeration. From the code's perspective, though, this implementation differs slightly from the classic adapter pattern: the logic of the enumeration() static method is coupled with the Enumeration adapter itself, which the method creates directly as an anonymous class via new. The specific code is as follows:

```java
/**
 * Returns an enumeration over the specified collection. This provides
 * interoperability with legacy APIs that require an enumeration
 * as input.
 *
 * @param <T> the class of the objects in the collection
 * @param c the collection for which an enumeration is to be returned.
 * @return an enumeration over the specified collection.
 * @see Enumeration
 */
public static <T> Enumeration<T> enumeration(final Collection<T> c) {
	return new Enumeration<T>() {
		private final Iterator<T> i = c.iterator();

		public boolean hasMoreElements() {
			return i.hasNext();
		}
		public T nextElement() {
			return i.next();
		}
	};
}
```
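A minimal usage sketch: legacy code written against Enumeration keeps working, while the actual traversal is delegated to the collection's Iterator:

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

public class EnumerationAdapterDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Alice", "Peter", "Leo");
        // Adapts the collection's Iterator to the legacy Enumeration interface
        Enumeration<String> e = Collections.enumeration(names);
        while (e.hasMoreElements()) {
            System.out.println(e.nextElement());
        }
    }
}
```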

1.5 Application of template pattern in Collections class

As mentioned earlier, the strategy, template, and chain of responsibility patterns are commonly used in framework design: they provide extension points that allow framework users to customize the framework's behavior without modifying its source code. The sort() function of Java's Collections class takes advantage of this extension feature of the template pattern.

First, let's see how Collections.sort() is used. The sample code below sorts the students list in different ways: by age from youngest to oldest, by name alphabetically, and by score from highest to lowest.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class Demo {
	public static void main(String[] args) {
		// Student is a simple POJO with name, age, and score fields
		List<Student> students = new ArrayList<>();
		students.add(new Student("Alice", 19, 89.0f));
		students.add(new Student("Peter", 20, 78.0f));
		students.add(new Student("Leo", 18, 99.0f));
		Collections.sort(students, new AgeAscComparator());
		print(students);
		Collections.sort(students, new NameAscComparator());
		print(students);
		Collections.sort(students, new ScoreDescComparator());
		print(students);
	}

	public static void print(List<Student> students) {
		for (Student s : students) {
			System.out.println(s.getName() + " " + s.getAge() + " " + s.getScore());
		}
	}

	public static class AgeAscComparator implements Comparator<Student> {
		@Override
		public int compare(Student o1, Student o2) {
			return o1.getAge() - o2.getAge();
		}
	}

	public static class NameAscComparator implements Comparator<Student> {
		@Override
		public int compare(Student o1, Student o2) {
			return o1.getName().compareTo(o2.getName());
		}
	}

	public static class ScoreDescComparator implements Comparator<Student> {
		@Override
		public int compare(Student o1, Student o2) {
			if (Math.abs(o1.getScore() - o2.getScore()) < 0.001) {
				return 0;
			} else if (o1.getScore() < o2.getScore()) {
				return 1;
			} else {
				return -1;
			}
		}
	}
}
```

Combined with this example, why do we say that Collections.sort() uses the template pattern? Collections.sort() implements the sorting of collections, but for extensibility it delegates the "compare two elements" logic to the user. If you regard the comparison logic as one step of the whole sorting algorithm, you can see it as the template pattern. From the code implementation perspective, however, it looks more like the JdbcTemplate mentioned earlier: not a classic template pattern implementation, but one based on the callback mechanism.

However, other materials say that Collections.sort() uses the strategy pattern, and that statement is not unreasonable either. If you regard "comparing two elements" not as a step of the sorting logic but as an algorithm or strategy, you can see it as an application of the strategy pattern. Still, it is not a typical one. As mentioned earlier, a typical strategy pattern has three parts: definition, creation, and use of strategies. Strategies are created through a factory, and which strategy to use is determined dynamically at runtime by uncertain factors such as configuration, user input, or computed results. In Collections.sort(), strategies are not created through a factory, nor is their use determined dynamically.
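As a side sketch, assuming the Student class from the demo above: since Java 8 the same extension point can be supplied more concisely with lambdas and the Comparator factory methods, which makes the "pluggable comparison step" even more apparent:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LambdaSortDemo {
    // Assumes the Student class from the Demo above
    public static void main(String[] args) {
        List<Student> students = new ArrayList<>();
        students.add(new Student("Alice", 19, 89.0f));
        students.add(new Student("Peter", 20, 78.0f));
        students.add(new Student("Leo", 18, 99.0f));

        students.sort(Comparator.comparingInt(Student::getAge));    // by age, ascending
        students.sort(Comparator.comparing(Student::getName));      // by name, ascending
        students.sort(Comparator.comparingDouble(Student::getScore)
                                .reversed());                       // by score, descending
    }
}
```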

1.6 Application of the observer pattern in the JDK

When discussing the observer pattern, I focused on Google Guava's EventBus framework, which provides the skeleton code of the observer pattern so that we don't have to develop it from scratch. In fact, the Java JDK also ships a simple framework implementation of the observer pattern. If you don't want to introduce the Guava library, you can use these classes provided by the Java language itself.

However, it is much simpler than EventBus, consisting of only two types: java.util.Observable and java.util.Observer. The former is the observable (subject) and the latter is the observer. Their implementations are also very simple (note that both were deprecated in Java 9, but they remain a clear illustration of the pattern):

```java
public interface Observer {
	void update(Observable o, Object arg);
}

public class Observable {
	private boolean changed = false;
	private Vector<Observer> obs;

	public Observable() {
		obs = new Vector<>();
	}

	public synchronized void addObserver(Observer o) {
		if (o == null)
			throw new NullPointerException();
		if (!obs.contains(o)) {
			obs.addElement(o);
		}
	}

	public synchronized void deleteObserver(Observer o) {
		obs.removeElement(o);
	}

	public void notifyObservers() {
		notifyObservers(null);
	}

	public void notifyObservers(Object arg) {
		Object[] arrLocal;
		synchronized (this) {
			if (!changed)
				return;
			arrLocal = obs.toArray();
			clearChanged();
		}
		for (int i = arrLocal.length - 1; i >= 0; i--)
			((Observer) arrLocal[i]).update(this, arg);
	}

	public synchronized void deleteObservers() {
		obs.removeAllElements();
	}

	protected synchronized void setChanged() {
		changed = true;
	}

	protected synchronized void clearChanged() {
		changed = false;
	}
}
```

Most of the implementation of Observable and Observer is easy to understand; we focus on two parts: the changed member variable and the notifyObservers() function.

First look at the changed member variable

It indicates whether the Observable has had a state update. When the state is updated, setChanged() must be called to set changed to true; only then will notifyObservers() trigger the observers' update() functions. Otherwise, even if notifyObservers() is called, the observers' update() functions will not execute. In other words, to notify observers of a state update, setChanged() and notifyObservers() must be called in sequence; calling notifyObservers() alone has no effect.
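A minimal usage sketch of this calling sequence, with a hypothetical StockPrice subject:

```java
import java.util.Observable;
import java.util.Observer;

// A hypothetical subject: setChanged() must precede notifyObservers()
class StockPrice extends Observable {
    public void setPrice(double price) {
        setChanged();            // mark the state as updated
        notifyObservers(price);  // without setChanged(), this call is a no-op
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        StockPrice stock = new StockPrice();
        Observer logger = (observable, arg) -> System.out.println("price: " + arg);
        stock.addObserver(logger);
        stock.setPrice(99.9);    // prints "price: 99.9"
    }
}
```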

Now look at the notifyObservers() function

To avoid conflicts among the three operations of adding, removing, and notifying observers in a multi-threaded environment, most functions in the Observable class are locked with synchronized. But there is an exception: notifyObservers() does not take a synchronized lock on the whole function. Why? How does the JDK implementation of notifyObservers() ensure it does not conflict with the other operations? Is there a problem with this locking approach, and if so, what is it?

The reason notifyObservers() does not put one big lock on the entire function, as the other functions do, is mainly performance.

notifyObservers() executes each observer's update() function in turn, and the logic each update() executes is unknown in advance and may be time-consuming. If notifyObservers() were synchronized, it might hold the lock for a long time, preventing other threads from acquiring it and hurting the concurrency of the whole Observable class.

The Vector class is not thread-safe: adding, deleting, and traversing elements of a Vector concurrently leads to unpredictable results. Therefore, to avoid the performance problem of locking notifyObservers() directly, the JDK adopts a compromise, somewhat similar to the "snapshot" iterators mentioned before.

In notifyObservers(), the observer list is first copied into a local variable of the function. Local variables are private to the thread and not shared, so this copied observer list is effectively a snapshot. The snapshot is then traversed, executing each observer's update() function. Since this traversal operates on the thread-private snapshot, there is no thread-safety problem and no lock is needed. Only the copying that creates the snapshot needs locking, so the scope of locking is greatly reduced and concurrency improves.

Why call this a compromise? Because this locking approach does have problems: after the snapshot is created, adding or deleting observers does not update it, so newly added observers will not be notified while just-deleted observers still will be. Whether this trade-off is acceptable depends entirely on your business scenario. In fact, this technique of reducing lock granularity to improve concurrency is common in multi-threaded programming.

1.7 Application of singleton pattern in Runtime class

Each Java application starts a JVM process at runtime, and each JVM process corresponds to exactly one Runtime instance, which is used to inspect the JVM state and control its behavior. Since it is unique within the process, it is well suited to being a singleton. We cannot instantiate a Runtime object ourselves; it can only be obtained through the getRuntime() static method.

The code of the Runtime class is shown below (only the relevant parts; the rest is omitted). As the code shows, it uses the simplest eagerly initialized ("hungry style") singleton implementation.

```java
/**
 * Every Java application has a single instance of class
 * <code>Runtime</code> that allows the application to interface with
 * the environment in which the application is running. The current
 * runtime can be obtained from the <code>getRuntime</code> method.
 * <p>
 * An application cannot create its own instance of this class.
 *
 * @author unascribed
 * @see java.lang.Runtime#getRuntime()
 * @since JDK1.0
 */
public class Runtime {
	private static Runtime currentRuntime = new Runtime();

	public static Runtime getRuntime() {
		return currentRuntime;
	}

	/** Don't let anyone else instantiate this class */
	private Runtime() {
	}

	//...
	public void addShutdownHook(Thread hook) {
		SecurityManager sm = System.getSecurityManager();
		if (sm != null) {
			sm.checkPermission(new RuntimePermission("shutdownHooks"));
		}
		ApplicationShutdownHooks.add(hook);
	}
	//...
}
```
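A minimal usage sketch: client code always goes through the static accessor and can, for example, register a shutdown hook on the single instance:

```java
public class RuntimeDemo {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();   // the only way to obtain the instance
        System.out.println("processors: " + runtime.availableProcessors());
        System.out.println("free memory: " + runtime.freeMemory());

        runtime.addShutdownHook(new Thread(
                () -> System.out.println("JVM shutting down")));
    }
}
```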

1.8 Application summary of other modes in JDK

In fact, when explaining the theory (see the three articles referenced at the beginning), I already covered the application of many patterns in the Java JDK. Here is a recap:

  • When discussing the template pattern, four examples — Java Servlet, JUnit TestCase, Java InputStream, and Java AbstractList — were used to explain its two roles: extensibility and reusability
  • When discussing the flyweight pattern, we saw that Integer objects between -128 and 127 are reused from the Integer cache, and that constant strings of the String type are also reused; these are classic applications of the flyweight pattern (see the sketch after this list)
  • When discussing the chain of responsibility pattern, we saw that the Filter in Java Servlet is implemented through a chain of responsibility, and compared it with interceptors in Spring; in fact, most interceptor and filter functionality is implemented with the chain of responsibility pattern
  • When discussing the iterator pattern, the focus was on the implementation of Java's Iterator
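A minimal sketch of the flyweight behavior mentioned in the second item (autoboxing goes through Integer.valueOf(), whose cache is guaranteed for the range [-128, 127]):

```java
public class FlyweightDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;     // autoboxing uses Integer.valueOf()
        System.out.println(a == b);   // true: same cached (flyweight) object

        Integer c = 128, d = 128;     // outside the cache range
        System.out.println(c == d);   // false: two distinct objects

        // String literals are interned and reused as well
        String s1 = "design pattern";
        String s2 = "design pattern";
        System.out.println(s1 == s2); // true: same interned string
    }
}
```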

2. Learn from Unix to deal with large and complex project development

The difficulty of software development comes down to two points. The first is technical difficulty: the amount of code is not necessarily large, but the problem to be solved is hard and requires deep technical solutions or algorithms that only a few people can handle, such as autonomous driving, image recognition, or high-performance message queues. The second is complexity: the technology is not hard, but the project is huge, the business is complicated, the amount of code is large, and many people participate in the development, such as logistics systems or financial systems. The first point involves specialized domain knowledge; here we focus on the second: how to deal with the complexity of software development.

Anyone can write a simple "hello world" program, and anyone can maintain a few thousand lines of code. But once the code grows to tens of thousands, hundreds of thousands, or even millions of lines, the complexity of the software increases exponentially. At that point it is not enough for the program to run and run correctly; the code must also be understandable and maintainable. In fact, complexity is reflected not only in the code itself but also in collaboration: how to manage a huge team for orderly, collaborative development is itself a very complicated problem.

How should we deal with complex software development? The Unix open source project is an example worth learning from.

Unix was born in 1969 and has been evolving ever since; its code base runs to several million lines. That such a huge project could be developed so well, and maintained over such a long time with sufficient code quality, offers many successful experiences worth borrowing. The following three topics therefore use the development of the Unix open source project as an introduction and explain in detail the methodology for dealing with complex software development:

  • From the perspective of design principles and ideas, how to deal with the development of large and complex projects?
  • From the perspective of R&D management and development skills, how to deal with the development of large and complex projects?
  • Focusing on Code Review, how to maintain the code quality of the project through Code Review?

2.1 Design principles and ideas

2.1.1 Encapsulation and abstraction

In Unix and Linux systems there is a classic saying: "Everything is a file." It means that in Unix and Linux many things are abstracted into the concept of a "file": sockets, drivers, hard disks, system information, and so on. They use file system paths as a unified namespace and are accessed through the same standard read and write functions.

For example, to view CPU information on a Linux system, you only need to open /proc/cpuinfo with an editor such as Vim or Gedit, or with the cat command, just like any other file. Similarly, you can read /proc/uptime to see how long the system has been running, or /proc/version to see the kernel version.
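A minimal sketch of the same idea from Java, assuming a Linux machine where these /proc paths exist: kernel information is read with exactly the same file API used for ordinary files:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// "Everything is a file": kernel data is accessed like any other file
public class ProcDemo {
    public static void main(String[] args) throws IOException {
        System.out.println(Files.readString(Path.of("/proc/version")));
        System.out.println(Files.readString(Path.of("/proc/uptime")));
    }
}
```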

In fact, "everything is a file" embodies the design idea of ​​encapsulation and abstraction

The access details of different types of devices are encapsulated and abstracted into a unified file access method, so higher-level code can access different kinds of underlying devices through that single method. The benefit is that the complexity of underlying device access is isolated.

A unified access method simplifies the writing of upper-level code and makes the code easier to reuse. In addition, abstraction and encapsulation effectively control the spread of complexity: complexity is confined to local code, the variability of the implementation is isolated, and a simple, unified access interface is provided for other modules to use. Other modules program against the abstract interface rather than the concrete implementation, so the code is more stable.

2.1.2 Layering and Modularization

Modularity is a common means of building complex systems

With a complex system like Unix, no one person can control all the details. The main reason why such a complex system can be developed and maintained is to divide the system into independent modules, such as process scheduling, process communication, memory management, virtual file system, network interface and other modules. Different modules communicate through interfaces, and the coupling between modules is very small. Each small team focuses on an independent high-cohesion module for development, and finally assembles each module like building blocks to build a super complex system

In addition, the reason why large-scale systems such as Unix and Linux can achieve orderly collaborative development by hundreds or thousands of people is also due to the good modularity. Different teams are responsible for the development of different modules, so that even without knowing all the details, managers can coordinate various modules to make the whole system work effectively

In fact, in addition to modularization, layering is also a common method for architecting complex systems

We often say that any problem in computing can be solved by adding a layer of indirection, which itself reflects the importance of layering. The Unix system, for example, is developed in layers: roughly the kernel, the system call layer, and the application layer. Each layer hides its implementation details and exposes abstract interfaces for the layer above to call; moreover, any layer can be reimplemented without affecting the code of the other layers.

When facing complex system development, we must be good at applying layering: move code that is easy to reuse and unrelated to the specific business down to the lower layers as much as possible, and move code that changes easily and is strongly tied to the specific business up to the upper layers.

2.1.3 Interface-based communication

We talked about layering and modularization earlier; so how do different layers and modules communicate? Generally, through interfaces. When designing the interface a module or layer exposes, learn to hide the implementation: the interface should be abstract, from its naming to its definition, and involve as few concrete implementation details as possible.

For example, the underlying implementation of the open() function is very complicated, involving permission control, concurrency control, and physical storage, yet it is very simple to use. Moreover, because open() is defined against an abstraction rather than a concrete implementation, its underlying implementation can change without changing the upper-level code that depends on it.
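A minimal sketch of this idea with hypothetical names: the caller depends only on the abstract interface, so the storage implementation can be swapped without touching the caller:

```java
// Hypothetical interface: abstract from naming to definition
interface FileSystem {
    byte[] read(String path);
}

class LocalFileSystem implements FileSystem {
    public byte[] read(String path) { /* local disk access */ return new byte[0]; }
}

class NetworkFileSystem implements FileSystem {
    public byte[] read(String path) { /* remote access */ return new byte[0]; }
}

class ReportService {
    private final FileSystem fs;

    ReportService(FileSystem fs) { this.fs = fs; }            // programs to the interface

    int sizeOf(String path) { return fs.read(path).length; }  // unaffected by the impl
}
```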

2.1.4 High cohesion and loose coupling

High cohesion and loose coupling is a fairly general design idea. Code with high cohesion and low coupling lets us stay within a small set of modules or classes when modifying or reading code, without having to understand too much code in other modules or classes; our attention doesn't diverge, which lowers the difficulty of reading and modifying code. Moreover, because dependencies are simple and coupling is low, changes do not ripple through the whole system: modifications stay relatively concentrated, and the risk of introducing bugs drops greatly.

In fact, many of the methods mentioned above, such as encapsulation, abstraction, layering, modularization, and interface-based communication, can effectively achieve high cohesion and loose coupling. Conversely, high cohesion and loose coupling mean that abstraction and encapsulation are in place, the code structure is clear, the layering and modularization are reasonable, and dependencies are simple, so the overall quality of the code will not be too bad. Even if a specific class or module is poorly designed, the scope of its influence is limited: you can focus on that module or class and do a correspondingly small refactoring, which, with its concentrated scope of change, is much less difficult than adjusting the overall code structure.

2.1.5 Designing for Extension

The more complicated the project, the more time should be spent on up-front design. Think ahead about which functions may need to be extended in the future, and reserve extension points in advance, so that when requirements change, new functionality can be added without changing the overall structure of the code.

To make code extensible, it needs to satisfy the open-closed principle. This matters especially for open source projects like Unix, where countless people participate and anyone can submit code to the code base. Code that satisfies the open-closed principle adds new functionality through extension rather than modification, which minimizes and concentrates code changes, avoids new code affecting old code, and reduces the risk of introducing bugs.

Besides the open-closed principle, many of the methods mentioned above, such as encapsulation and abstraction and interface-based programming, also improve extensibility. Identify the variable and the immutable parts of the code, encapsulate the variable parts to isolate change, and provide an abstract, immutable interface for the upper layers. When the concrete implementation changes, we only need to extend a new implementation based on the same abstract interface and swap out the old one; the upstream code hardly needs to be modified.

2.1.6 KISS as the first principle

Simple, clear, and readable: this is the first principle to follow in any large-scale software development. As long as readability is good, even if extensibility is poor, at worst it costs a bit more time and a few more changed lines of code. But if the readability is bad and the code can hardly be understood at all, no amount of extra time will fix that; if you only half-understand the logic of the existing code and modify it on a trial-and-error basis, the chance of introducing bugs is very high.

Whether as an individual or as a team, when participating in large-scale project development, try to avoid over-design and premature optimization. When extensibility conflicts with readability, or the trade-off between them is ambiguous, follow the KISS principle and prefer readability.

2.1.7 Principle of Least Surprise

The book "The Art of Unix Programming" mentions a classic Unix design principle called "The Least Surprise Principle", which is "The Least Surprise Principle" in English. In fact, this principle is equivalent to "obey the development specification", which means that when designing or coding, you must abide by the unified development specification and avoid counter-intuitive design

When everyone follows a unified coding standard, all the code reads as if written by one person, which effectively reduces reading friction. In large-scale development, many people are involved; if everyone writes code according to their own habits, the style of the entire project becomes erratic: one class in this style, another class in that. Readers have to keep switching styles to adapt, and readability deteriorates. Therefore, large projects should pay special attention to following unified development conventions.

2.2 R&D management and development skills

The more complex the project, the larger the code base, the more developers involved, and the longer the development and maintenance period, the more attention code quality deserves. Declining code quality leads to many difficulties in project development, for example: low development efficiency, where lots of people are hired and work overtime every day yet little gets done; frequent online bugs that are hard to locate; angry leaders, helpless middle managers, and constantly complaining engineers.

There are many causes of low code quality, for example: no comments, no documentation, poor naming, unclear hierarchy, confusing call relationships, hardcoding everywhere, temporary workarounds all over, and so on. So how do we keep code quality consistently high? The most important thing, of course, is that the team's technical competence is solid and it can properly apply design principles, ideas, and patterns to write high-quality code. Beyond that, there are some external practices to follow.

2.2.1 Strictly enforce coding standards

Strictly enforced coding standards can give the code of a project, or even a whole company, a completely unified style, as if written by one person. Well-named variables, functions, and classes, together with good comments, improve code readability. Coding standards are not hard to master; the key is strict enforcement: during Code Review, be strict, point out code that does not meet the standard, and require it to be fixed.

In practice, however, things often fall short. Everyone knows what good coding style looks like, yet while actually writing code it is executed poorly, mainly because it isn't taken seriously. Many people think the name of a variable or function doesn't matter, so they name things without scrutiny and skip comments; during Code Review they adopt the same indifferent attitude, feeling details don't deserve attention. Over time the project's code gets worse and worse. So let me stress again: details determine success or failure, and strictly enforcing coding standards is critical.

2.2.2 Writing high-quality unit tests

Unit testing is one of the easiest to implement and most effective ways to improve code quality. High-quality unit testing requires not only high coverage but also comprehensiveness: besides testing normal execution paths, focus on and thoroughly test exceptional paths, since most problems in code occur at exceptions and boundary conditions.

For large, complex projects, integration testing and black-box testing can hardly be comprehensive: because of combinatorial explosion, exhausting all test cases is prohibitively expensive and practically impossible. Unit testing is a great complement. It verifies that the code runs correctly at the fine-grained level of classes and functions; with fewer bugs in the low-level, fine-grained code, the whole system composed from it has fewer bugs accordingly.
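A minimal sketch of such a test with JUnit 5 and a hypothetical divide() helper, covering the normal path, a boundary value, and the exceptional path:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class MathUtilsTest {
    // Hypothetical helper under test
    static int divide(int a, int b) {
        if (b == 0) throw new IllegalArgumentException("division by zero");
        return a / b;
    }

    @Test
    void normalCase() { assertEquals(2, divide(6, 3)); }

    @Test
    void boundaryCase() { assertEquals(0, divide(0, 5)); }

    @Test
    void exceptionalCase() {
        assertThrows(IllegalArgumentException.class, () -> divide(1, 0));
    }
}
```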

2.2.3 Code Review that is more than a formality

Just as many engineers don't value unit testing, many don't really accept Code Review either. When I chatted with colleagues about Code Review, many reacted that it cannot be implemented well, that the form outweighs the effect, and that it wastes time. Indeed, even a smooth Code Review takes time, so when business development tasks are heavy, Code Review tends to become superficial and fizzle out, and the results really aren't great.

But this does not negate the value of Code Review itself. In companies such as Google and Facebook, Code Review has been applied very successfully and has become an indispensable part of the development process. To truly benefit from Code Review, the key is to implement it properly and not let it become a mere formality.

2.2.4 Documentation first, development second

For most engineers, writing technical documentation is an unpleasant chore. Generally speaking, before developing a system, or an important module or function, you should first write a technical document and send it to your group or related colleagues for review; develop only after the review finds no problems. This ensures consensus is reached in advance and the implementation doesn't drift from the intent. Moreover, when development is complete and Code Review takes place, the reviewer can quickly understand the code by reading the design document.

Beyond that, documentation is an important asset for teams and companies. Technical documentation is very helpful for newcomers to familiarize themselves with the code or the handover of tasks. Moreover, as a standardized technical team, technical documentation is an effective way to abandon workshop-style development and individual heroism, and is a way to ensure effective team collaboration

2.2.5 Continuous Refactoring, Refactoring, Refactoring

I am personally opposed to ignoring code quality, piling up bad code, and then refactoring or even rewriting when it can no longer be maintained. Sometimes, because the project's code base is too large to refactor thoroughly, the result is an unrecognizable hybrid, which is even more troublesome!

Excellent code or architecture cannot be fully designed at the start; like excellent companies or products, it is iterated into existence. We cannot foresee future needs with certainty, nor do we have the energy, time, and resources to pay up front for a distant future. So as the system evolves, refactoring is inevitable.

Although I just said I don't endorse drastic, tear-it-down-and-rebuild refactorings, I do advocate continuous small refactorings. They are an effective means of keeping code quality up at all times and preventing code rot. In other words, don't wait until the problems pile up before solving them: someone should own the overall quality of the code at all times and improve the code even when nothing is visibly wrong.

Business development teams especially, in order to finish a requirement quickly, sometimes pursue speed alone: hardcoding everywhere, piling up bad code, with no thought for non-functional requirements or code quality. This situation is fairly common. That is tolerable, but when you have time, remember to refactor; otherwise the bad code accumulates until one day the code base becomes unmaintainable.

2.2.6 Split the project and team

When a team is relatively small, say a dozen or so people, and the code base is modest, under 100,000 lines, almost any way of developing and managing works: everyone understands what everyone else does, and even if the code quality is terrible, it can be rewritten at worst. But for a large project, with many developers, hundreds of thousands or even millions of lines of code, and dozens or hundreds of people developing and maintaining it at the same time, how you develop and manage becomes extremely important.

In the face of large, complex projects, we need to split not only the code but also the R&D team. I have mentioned code-splitting methods such as modularization and layering; in the same way, a large team can be split into several small teams, each responsible for a small project (a module, a microservice, and so on), so that the code each team owns stays modest in size and never degrades to the point of being unmaintainable.

2.3 Code Review

2.3.1 Advantages

1. Code Review puts "there is always someone to learn from" into practice

You may sometimes feel that the senior engineers or technical leaders on the team are more skilled and write good code, so their code needs no review, and review effort should focus on the junior engineers' code. That perception is wrong.

The average engineering level at Google is very high, yet even there, no matter who submits code for review, including Jeff Dean, they receive many review comments (suggestions for modification). As the saying goes, there is always someone to learn from: even if you feel your code is already well written, keep scrutinizing it and there is always room for improvement.

So never think you are so good that your code needs no one else's review; never think your level is too average to review others' code; and don't treat a review request from a technical expert as if they merely lacked an "approve", giving it only a cursory glance.

2. Code Review helps abandon "personal heroism"

In a mature company, all architectural design and implementation should be the output of a team. Although this process may be led by a certain person, it should be the crystallization of the collective wisdom of the entire team

If one person writes code silently and submits it without team review, the code contains only that person's wisdom, and its quality depends entirely on that person's skill, which makes quality uneven. If it is reviewed and polished by several people on the team, the code embodies the whole team's wisdom and can be brought up to the team's highest standard.

3. Code Review can effectively improve code readability

It has been repeatedly emphasized that, in most cases, readability matters more than any other property of code (such as extensibility). Good readability means low maintenance cost, easier troubleshooting of online bugs, easier onboarding of newcomers, and easier handover when someone leaves. Readable code is also simpler code, which means fewer opportunities for error and fewer bugs.

However, your own code always looks readable to you; another person reading it may not agree. After all, you are intimately familiar with the business and technology in your own code, and others may not be. Since one's own judgment of readability is prone to illusion, Code Review is a good test of it: if the reviewer struggles to understand your code, its readability needs improving.

4. Code Review is an effective way to pass on technology

A good team needs technical and business mentoring. Business knowledge may be passed on through documents or word of mouth, but what about technology? How do you cultivate the skills of junior engineers? Code Review is a good way: every Code Review is a walkthrough of a real case. Passing on technology to junior engineers through practice is more efficient than leaving them to learn and explore on their own!

5. Code Review ensures that more than one person is familiar with the code

If only one person is familiar with a piece of code and that colleague goes on vacation or leaves, handing over the code becomes difficult; sometimes, if reading the code alone isn't enough to understand it, you have to go back and forth with the PM, the business team, or other technical teams, which annoys everyone. Code Review ensures that at least two colleagues are familiar with any piece of code at the same time and back each other up, so the team is prepared, unless both leave at once...

6. Code Review can create a good technical atmosphere

Those who submit code for review want their code to be good; after all, it is embarrassing to have colleagues find many problems in it. And the reviewers want to offer constructive comments and show their skill. Code Review therefore enhances technical exchange, activates the technical atmosphere, and cultivates a geek spirit and the pursuit of code quality.

A good technical atmosphere can make the team have a strong self-driving force. Without the technical leader repeatedly emphasizing how important code quality is, members of the team will take the initiative to pay attention to code quality issues. This is more effective than formulating various rules and regulations and supervising their implementation every day. In fact, a good technical atmosphere can also reduce the turnover rate of the team

7. Code Review is a way of technical communication

Talk is cheap, show me the code. How do you "show" it? Through a Code Review tool, which also makes it convenient for others to give feedback. Especially for communication across offices and time zones, Code Review is a good channel: code written during your day has been reviewed by colleagues across time zones by the time you come to work the next morning; you address the comments, submit, and continue writing new code. Such collaboration is highly efficient.

8. Code Review can improve the self-discipline of the team

In the development process, some people inevitably lack self-discipline and bank on luck: no one will read this code anyway, so write it carelessly and submit it. Code Review is like broadcasting the code live, exposing dirty code, which has a certain deterrent effect. With it, no one dares to write carelessly and just submit.

2.3.2 How to implement Code Review in the team?

1. Some people think that the Code Review process is too long and wastes time, especially when the construction schedule is tight. The code changed today will be uploaded tomorrow. If you have to wait for a colleague to review, the colleague may not have time, so it will be too late. What should we do at this time?

None of the projects I have experienced skipped Code Review because the schedule was tight. Schedules are set by people; relaxing them a little is enough. The key is how well the whole company accepts Code Review. Moreover, once you are proficient, Code Review does not take much time. At the beginning you may need a checklist to work through, which can be time-consuming, but once proficient, Code Review is like touch typing: you no longer remember which finger presses which key, and scanning the code surfaces most of the problems.

2. Some people think the business keeps changing: the code written today may be changed tomorrow and may not be maintained for long, so writing it too well is useless. In that case, is Code Review unnecessary?

This phenomenon is common in game development, some early startups, or project validation stages. Such projects emphasize speed: validate the product first, optimize the technology later. If you truly face a survival problem, code quality really is not the first priority; under those special circumstances, skipping Code Review is defensible!

3. Some people say the team members' technical level is not high and they have no prior Code Review experience: "I barely understand my own code, I don't know what good or bad code looks like, let alone how to review others' code." During Code Review the team stares blankly and can only review syntax; form exceeds effect. What should be done?

This situation is also quite common, but it does not matter: a team's technical level can be cultivated. Start by having senior colleagues, technically strong colleagues, or the technical leader review everyone else's code; the review process itself is a form of mentoring. Gradually the whole team will learn how to review. Although this can be a lengthy process, if you really want to practice Code Review in your team, it is a viable roundabout route.

4. Some people say that when Code Review first started everyone was quite serious, but over time people felt it had nothing to do with KPIs, and reading other people's code and understanding the business behind it felt like a waste of time. Gradually Code Review became a mere formality: someone submits code, grabs anyone for a review, and the reviewer glances at it and clicks "approve". How should this be handled?

First, state clearly how important Code Review is and enforce it strictly so that nobody slacks off, making an example of offenders when appropriate. Second, like Google, you can indirectly link Code Review to KPIs and promotions: senior engineers are obliged to do Code Review, just as they are obliged to conduct technical interviews. Third, find ways to energize the team's technical atmosphere, treat Code Review as an opportunity to show one's skills, stimulate everyone's enthusiasm for it, and build a shared sense of ownership of the practice.

3. Google Guava

Google Guava is the open source version of Google's internal Java development toolkit; many Java projects inside Google use it. It provides functionality the JDK does not, as well as enhancements to functionality the JDK already has, including collections, caching, primitives support, concurrency libraries, common annotations, string processing, math utilities, I/O, EventBus, and more.


JDK stands for Java Development Kit, itself a tool library shipped with Java. Given that the JDK already exists, why did Google develop a new class library, Google Guava? Is it reinventing the wheel? What is the difference between the two?

3.1 How to find common functional modules?

Many people think business development offers no challenge. In fact, business development also involves building many non-business functions, such as the ID generator, performance counter, EventBus, and DI container mentioned earlier, and the rate-limiting framework, idempotence framework, and gray-release component discussed later. The key is the ability to discover and abstract: with solid design and development skills, you can identify these non-business, reusable function points, decouple them from the business logic, and develop them into independent functional modules.

In my opinion, in business development there are generally three types of business-independent, common functional modules: class libraries, frameworks, and functional components.

Among them, Google Guava is a class library, providing a set of API interfaces. EventBus and DI containers are frameworks: they provide skeleton code with reserved extension points, letting business developers focus on the business logic that fills them in. ID generators and performance counters are functional components: they provide a set of APIs around one specific function. They resemble class libraries but are more focused and heavier weight; an ID generator, for instance, may depend on external systems such as Redis, and is not as simple as a class library.

Whether the rate limiting, idempotence, and gray release mentioned above count as frameworks or functional components depends on the situation. If business code is developed nested inside them, they can be called frameworks; if they merely expose APIs for the business system to call, they can be called components. But the name hardly matters, and there is no need to split hairs over the concepts.

So how do you find these common functional modules in a project?

In fact, whether class libraries, frameworks, or functional components, these general-purpose modules share two defining characteristics: they are reusable and business-independent. Google Guava is a typical example.

If there is no reuse scenario, there is no need to extract something and design it as an independent module. If something is business-related yet reusable, it will in most cases be designed as an independent system (such as a microservice) rather than a class library, framework, or functional component. Therefore, if the code you are responsible for has nothing to do with the business and may be reused, consider separating it out and developing it into a general-purpose module: a class library, framework, or functional component.

That is how to discover common functional modules in business development. Besides business development teams, many companies also have infrastructure and architecture teams. In addition to class libraries, frameworks, and functional components, they develop general-purpose systems and middleware, such as Google MapReduce, the Kafka message middleware, monitoring systems, and distributed tracing systems.

3.2 How to develop general functional modules?

Having discovered the need for a general functional module, how do you design and develop it into an excellent class library, framework, or functional component? Rather than specific coding techniques, let's discuss some more general development ideas.

For a general-purpose class library, framework, or functional component, we hope that once developed it can be used not only in our own project but also in other teams' projects, and can even be open sourced for more people to use, so that it delivers greater value and builds our influence.

Therefore, these class libraries, frameworks, and functional components cannot be developed behind closed doors; they should be developed as "products". Such a product is a "technical product" whose target users are programmers and which solves their development pain points. Put yourself in the users' shoes and think about what functionality they want.

For a technical product, technical indicators such as few bugs and good performance matter, but product qualities matter just as much and can even be decisive: whether it is easy to use, easy to integrate, easy to plug in and out, and whether the documentation is comprehensive. It is precisely these easily overlooked, lightly regarded qualities that determine whether a technical product stands out from the crowd.

Specific to Google Guava: it is a development class library whose target users are Java engineers. Its main selling point over the JDK is the extra utility classes that simplify code writing, for example the Preconditions class for null and argument checking; the Splitter, Joiner, and CharMatcher string-processing classes; and richer collections such as Multisets, Multimaps, and Tables.
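As a quick taste of these utilities, here is a minimal sketch using real Guava APIs (the data is made up for illustration):

import com.google.common.base.Joiner;
import com.google.common.base.Preconditions;
import com.google.common.base.Splitter;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;

import java.util.List;

public class GuavaUtilitiesDemo {
	public static void main(String[] args) {
		// Preconditions: fail fast with a clear message instead of a bare NPE
		String csv = Preconditions.checkNotNull("a, b ,c", "csv must not be null");

		// Splitter: trimming and empty-string handling that String.split lacks
		List<String> parts = Splitter.on(',').trimResults().omitEmptyStrings().splitToList(csv);

		// Joiner: the reverse operation
		String joined = Joiner.on('|').join(parts); // "a|b|c"
		System.out.println(joined);

		// Multimap: one key mapped to multiple values, without manual List juggling
		Multimap<String, String> tags = ArrayListMultimap.create();
		tags.put("fruit", "apple");
		tags.put("fruit", "pear");
		System.out.println(tags.get("fruit")); // [apple, pear]
	}
}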

Its advantages are as follows. First, it is managed and maintained long-term by Google and thoroughly unit tested, so code quality is guaranteed. Second, it is reliable, performant, and highly optimized; for example, the Immutable Collections provided by Guava outperform the JDK's unmodifiableCollection. Third, the documentation is comprehensive and complete, it is easy to use, and the learning cost is low; see its GitHub Wiki.

The previous point was about "product awareness"; next comes "service awareness". If what you develop is provided to other teams, you must have service awareness, something programmers may lack even more than product awareness.

First, in terms of mindset, we should be grateful that other teams use the technical products we have developed. This matters: with a different mindset, the way you do things subtly changes. Second, beyond writing code, be mentally prepared to spend considerable time answering questions and acting as customer support; with that preparation, questions from people on other teams will not feel like an annoyance.

Compared with business code, general-purpose code reused in many places demands higher quality, because such projects have wider impact: one bug can drag down many systems or other projects, and if the project is open sourced, the impact is even greater. Consequently the code quality of such projects is generally very good, and developing them is excellent exercise for one's coding ability.

Specific to Google Guava: it is developed by Google engineers, the unit tests are thorough, the comments are well written, and the code itself is excellent. It is first-hand material for learning Google's engineering practices, and if you have time, it is worth reading its code seriously.

Although developing these general-purpose modules is technically rewarding, do not reinvent the wheel: reuse whatever can be reused. Moreover, insisting on developing every common function in a project as an independent class library, framework, or functional component is overkill and may not win the leadership's support; after all, developing a general function independently of the project takes more time than developing it as part of the project.

On balance, therefore, it is advisable to develop these common functions as part of the project in the early stage, while keeping them well modularized: draw clear boundaries between them and other modules, and interact with other modules only through loosely coupled means such as interfaces and extension points. When the time is right, spin them out of the project; because the modularization was done well and the coupling is low, the cost of extraction will not be high.

3.3 Several design patterns used in Google Guava

3.3.1 Builder mode

Caching is used in almost every project; it improves access speed very effectively. Common cache systems include Redis and Memcache. However, if the amount of data to cache is small, there is no need to deploy a separate cache system for the project. After all, every system has some probability of failure: the more systems a project comprises, the higher the overall probability of failure and the lower the availability. And every additional system is one more system to maintain, raising the project's maintenance cost.

Instead, an in-memory cache can be built inside the system itself and developed and deployed together with it. How do you build one? You could develop it from scratch on top of JDK classes such as HashMap, but a from-scratch cache involves considerable work, such as cache eviction policies. To simplify development, you can use the ready-made caching tools Google Guava provides under com.google.common.cache.*, as in the following example:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.TimeUnit;

public class CacheDemo {
	public static void main(String[] args) {
		Cache<String, String> cache = CacheBuilder.newBuilder()
		                              .initialCapacity(100)
		                              .maximumSize(1000)
		                              .expireAfterWrite(10, TimeUnit.MINUTES)
		                              .build();
		cache.put("key1", "value1");
		String value = cache.getIfPresent("key1");
		System.out.println(value);
	}
}

From the above code, the Cache object is created through a Builder class, CacheBuilder. Why must the Cache object be created by a Builder class?

To build a cache, many parameters need to be configured, such as the expiration time, eviction policy, and maximum cache size. Correspondingly, the Cache class would contain many member variables whose values would have to be set in constructors; but not all of them are required, and which ones to set is up to the user. To satisfy this, multiple constructors with different parameter lists would have to be defined.

To avoid an overly long constructor parameter list and too many constructor overloads, there are generally two solutions: one is the Builder pattern; the other is to create the object with a no-argument constructor first and then set the required member variables one by one via setXXX() methods.

Why did Guava choose the first solution rather than the second? Would the second work here at all? The answer is no. First look at the source code:

public <K1 extends K, V1 extends V> Cache<K1, V1> build() {
	this.checkWeightWithWeigher();
	this.checkNonLoadingCache();
	return new LocalManualCache(this);
}

private void checkNonLoadingCache() {
	Preconditions.checkState(this.refreshNanos == -1L, "refreshAfterWrite requires a LoadingCache");
}

private void checkWeightWithWeigher() {
	if (this.weigher == null) {
		Preconditions.checkState(this.maximumWeight == -1L, "maximumWeight requires weigher");
	} else if (this.strictParsing) {
		Preconditions.checkState(this.maximumWeight != -1L, "weigher requires maximumWeight");
	} else if (this.maximumWeight == -1L) {
		logger.log(Level.WARNING, "ignoring weigher specified without maximumWeight");
	}
}

The main reason the Builder pattern is required is that constructing the Cache object involves some necessary parameter validation, which is exactly what the first two lines of the build() method do. With the no-argument-constructor-plus-setXXX() scheme, these two checks would have nowhere to live, and without validation, the created Cache object might be invalid and unusable.
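To make the point concrete, here is a minimal, generic sketch (not Guava's code; the config class and its fields are hypothetical) of how a builder centralizes cross-field validation in build(), which a setter-based API cannot do:

// A hypothetical config object, shown only to illustrate validation in build()
public class RateLimiterConfig {
	private final int permitsPerSecond;
	private final int burstSize;

	private RateLimiterConfig(Builder builder) {
		this.permitsPerSecond = builder.permitsPerSecond;
		this.burstSize = builder.burstSize;
	}

	public static class Builder {
		private int permitsPerSecond = 100; // defaults stand in for optional parameters
		private int burstSize = 100;

		public Builder permitsPerSecond(int p) { this.permitsPerSecond = p; return this; }
		public Builder burstSize(int b) { this.burstSize = b; return this; }

		public RateLimiterConfig build() {
			// Validation happens exactly once, before the object exists; with
			// setters, the object would be visible in a half-configured state
			if (permitsPerSecond <= 0) {
				throw new IllegalStateException("permitsPerSecond must be positive");
			}
			if (burstSize < permitsPerSecond) {
				throw new IllegalStateException("burstSize must be >= permitsPerSecond");
			}
			return new RateLimiterConfig(this);
		}
	}
}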

3.3.2 Wrapper mode

Under the collection package of Google Guava (com.google.common.collect), there is a set of classes whose names start with Forwarding:

(Figure: the Forwarding classes under Guava's collect package)

These Forwarding classes are numerous, but their implementations are all similar. Here is an excerpt from ForwardingCollection:

@GwtCompatible
public abstract class ForwardingCollection<E> extends ForwardingObject implements Collection<E> {
	protected ForwardingCollection() {
	}

	protected abstract Collection<E> delegate();

	public Iterator<E> iterator() {
		return this.delegate().iterator();
	}

	public int size() {
		return this.delegate().size();
	}

	@CanIgnoreReturnValue
	public boolean removeAll(Collection<?> collection) {
		return this.delegate().removeAll(collection);
	}

	public boolean isEmpty() {
		return this.delegate().isEmpty();
	}

	public boolean contains(Object object) {
		return this.delegate().contains(object);
	}

	@CanIgnoreReturnValue
	public boolean add(E element) {
		return this.delegate().add(element);
	}

	@CanIgnoreReturnValue
	public boolean remove(Object object) {
		return this.delegate().remove(object);
	}

	public boolean containsAll(Collection<?> collection) {
		return this.delegate().containsAll(collection);
	}

	@CanIgnoreReturnValue
	public boolean addAll(Collection<? extends E> collection) {
		return this.delegate().addAll(collection);
	}

	@CanIgnoreReturnValue
	public boolean retainAll(Collection<?> collection) {
		return this.delegate().retainAll(collection);
	}

	public void clear() {
		this.delegate().clear();
	}

	public Object[] toArray() {
		return this.delegate().toArray();
	}
	// ... remaining methods omitted ...
}

Here is an example of its usage:

public class AddLoggingCollection<E> extends ForwardingCollection<E> {
	private static final Logger logger = LoggerFactory.getLogger(AddLoggingCollection.class);
	private Collection<E> originalCollection;

	public AddLoggingCollection(Collection<E> originalCollection) {
		this.originalCollection = originalCollection;
	}

	@Override
	protected Collection<E> delegate() {
		return this.originalCollection;
	}

	@Override
	public boolean add(E element) {
		logger.info("Add element: " + element);
		return this.delegate().add(element);
	}

	@Override
	public boolean addAll(Collection<? extends E> collection) {
		logger.info("Size of elements to add: " + collection.size());
		return this.delegate().addAll(collection);
	}
}

In the above code, AddLoggingCollection is a wrapper class in the style of the proxy pattern: on top of the original Collection, it adds logging for the "add"-related operations.

As mentioned earlier, the proxy, decorator, and adapter patterns can be collectively referred to as the Wrapper pattern: the original class is wrapped by a Wrapper class. Their code implementations are very similar; all of them delegate, through composition, the Wrapper class's methods to the methods of the original class.

public interface Interf {
	void f1();
	void f2();
}

public class OriginalClass implements Interf {
	@Override
	public void f1() {
		//...
	}
	@Override
	public void f2() {
		//...
	}
}

public class WrapperClass implements Interf {
	private OriginalClass oc;

	public WrapperClass(OriginalClass oc) {
		this.oc = oc;
	}

	@Override
	public void f1() {
		// ... additional functionality ...
		this.oc.f1();
		// ... additional functionality ...
	}

	@Override
	public void f2() {
		this.oc.f2();
	}
}

In fact, this ForwardingCollection class is a "default Wrapper class", playing the same role as the FilterInputStream default decorator class in Java I/O.

If you did not use this ForwardingCollection class and instead had the AddLoggingCollection wrapper implement the Collection interface directly, you would have to implement every method of the Collection interface in AddLoggingCollection, even though only add() and addAll() genuinely need the logging; all the other methods would, like f2() in the example above, simply be delegated to the corresponding methods of the original collection object.

To simplify implementations of the Wrapper pattern, Guava provides this series of default Forwarding classes. When users implement their own Wrapper classes by extending a default Forwarding class, they only need to implement the methods they care about; all the methods they do not care about keep the default Forwarding implementations, just as in the AddLoggingCollection class.

3.3.3 Immutable mode

The Immutable pattern, known as the invariant pattern in Chinese sources, is not one of the classic 23 design patterns, but as a commonly used design idea it is worth summarizing and studying as a pattern.

In the immutable pattern, the state of an object never changes after the object is created. The class involved is called an immutable class (Immutable Class) and the object an immutable object (Immutable Object). In Java, the most commonly used immutable class is String: once a String object is created, it cannot be changed.

Immutable patterns fall into two categories: the ordinary immutable pattern and the deeply immutable pattern (Deeply Immutable Pattern). In the ordinary immutable pattern, objects referenced by the immutable object may themselves still change; unless otherwise specified, "immutable pattern" refers to this ordinary form. In the deeply immutable pattern, the referenced objects are immutable as well. The relationship between the two resembles that between shallow copy and deep copy mentioned earlier. For example:

// Ordinary immutable pattern
public class User {
	private String name;
	private int age;
	private Address addr;
	public User(String name, int age, Address addr) {
		this.name = name;
		this.age = age;
		this.addr = addr;
	}
	// getter methods only, no setters...
}
public class Address {
	private String province;
	private String city;
	public Address(String province, String city) {
		this.province = province;
		this.city = city;
	}
	// both getter and setter methods...
}

// Deeply immutable pattern
public class User {
	private String name;
	private int age;
	private Address addr;
	public User(String name, int age, Address addr) {
		this.name = name;
		this.age = age;
		this.addr = addr;
	}
	// getter methods only, no setters...
}
public class Address {
	private String province;
	private String city;
	public Address(String province, String city) {
		this.province = province;
		this.city = city;
	}
	// getter methods only, no setters...
}

In a business scenario where an object will not be modified after creation, it can be designed as an immutable class, explicitly enforcing immutability so it cannot be modified accidentally. How do you write an immutable class? Very simply, the class must satisfy one rule: all member variables are set once, through the constructor, and no modifying methods such as setters are exposed. In addition, because the data never changes there are no concurrent read-write problems, so the immutable pattern is often used in multithreaded environments to avoid locking; for this reason it is frequently classified as a multithreading design pattern.

Next, let's look at a special kind of immutable class: immutable collections. Google Guava provides immutable counterparts (ImmutableCollection, ImmutableList, ImmutableSet, ImmutableMap...) for the standard collection classes (Collection, List, Set, Map...). As mentioned above, the immutable pattern comes in two flavors, ordinary and deep. Guava's immutable collections are of the former kind: no objects can be added to or removed from the collection, but the member variables (attribute values) of the objects inside it can still change.

In fact, the JDK also provides unmodifiable collection classes (UnmodifiableCollection, UnmodifiableList, UnmodifiableSet, UnmodifiableMap...). How do they differ from Guava's immutable collections? For example:

import com.google.common.collect.ImmutableList;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableDemo {
	public static void main(String[] args) {
		List<String> originalList = new ArrayList<>();
		originalList.add("a");
		originalList.add("b");
		originalList.add("c");

		// The JDK version wraps the original list; the Guava version copies it
		List<String> jdkUnmodifiableList = Collections.unmodifiableList(originalList);
		List<String> guavaImmutableList = ImmutableList.copyOf(originalList);

		//jdkUnmodifiableList.add("d"); // throws UnsupportedOperationException
		//guavaImmutableList.add("d"); // throws UnsupportedOperationException
		originalList.add("d");

		print(originalList); // a b c d
		print(jdkUnmodifiableList); // a b c d: changes to the original show through
		print(guavaImmutableList); // a b c: a true snapshot
	}

	private static void print(List<String> list) {
		for (String s : list) {
			System.out.print(s + " ");
		}
		System.out.println();
	}
}

3.4 Functional programming

There are currently three mainstream programming paradigms: procedural, object-oriented, and functional. Functional programming is not new; it appeared more than fifty years ago. In recent years it has attracted growing attention: many new functional languages have emerged, such as Clojure, Scala, and Erlang, and many non-functional languages have added features, syntax, and class libraries to support functional programming, such as Java, Python, Ruby, and JavaScript. Google Guava, too, has enhancements for functional programming.

Because of its particular style, functional programming shows its strengths mainly in fields such as scientific computing, data processing, and statistical analysis. I therefore feel it cannot fully replace the more general object-oriented paradigm; but as a complement, it clearly has its place and is well worth learning.

3.4.1 Concept

So what exactly is functional programming?

As mentioned earlier, procedural and object-oriented programming have no strict official definitions. The same is true of functional programming. So let's describe functional programming in terms of its characteristics.

Strictly speaking, the "function" in functional programming refers not to the "function" construct of a programming language but to a mathematical function or expression (for example, y = f(x)). When programming, however, mathematical functions or expressions are customarily implemented as language functions, so if you do not dig too deep, the "function" in functional programming can also be understood as the "function" of a programming language.

Every programming paradigm has something unique; that is why it is abstracted as a paradigm. The biggest features of object-oriented programming are using classes and objects as the unit of code organization, plus its four major characteristics. The biggest features of procedural programming are using functions as the unit of code organization and separating data from methods. What, then, is most distinctive about functional programming?

In fact, the most distinctive feature of functional programming lies in its way of thinking: it holds that a program can be represented as a series of mathematical functions or a combination of expressions. Functional programming is a lower-level, mathematics-oriented abstraction that describes a computation as an expression. But can any program really be expressed as a set of mathematical expressions?

In theory, yes; in practice, not every program suits this style. Functional programming has its own natural application scenarios, such as the scientific computing, data processing, and statistical analysis mentioned at the start of this section. In these fields programs are easier to express with mathematical expressions, and achieving the same functionality takes far less code than with non-functional programming. But for large, strongly business-oriented systems that are hard to abstract into mathematical expressions, insisting on functional programming is asking for trouble; in such scenarios object-oriented programming fits better and yields more readable, maintainable code.

In concrete implementation terms, functional programming, like procedural programming, uses functions as the unit of code organization. The difference is that its functions are stateless. What does stateless mean? Simply put, the variables a function uses are all local; it shares no class member variables as in object-oriented programming and no global variables as in procedural programming. The result of a function depends only on its arguments and on no other external variable: given the same arguments, it always returns the same result, no matter how many times it runs. This is precisely the basic property of a mathematical function or expression, as in the following example:

// Stateful function: the result depends on the value of b; even with the same
// argument, repeated calls may return different values
int b;
int increase(int a) {
    return a + b;
}

// Stateless function: the result depends on no external state; the same
// arguments always produce the same return value, however many times it runs
int increase(int a, int b) {
    return a + b;
}

Different paradigms are not sharply separated; they share some common programming rules. For example, procedural, object-oriented, and functional programming all have variables and functions, and at the top level there must always be a main entry point that assembles the programming units (classes, functions, and so on). The difference is that the programming unit of object-oriented programming is the class or object, that of procedural programming is the function, and that of functional programming is the stateless function.

3.4.2 Java support for functional programming

You do not have to use an object-oriented language to program in an object-oriented way, nor a functional language to program functionally. Nowadays many object-oriented languages also provide syntax and class libraries to support functional programming.

Java introduced three new syntactic concepts for functional programming: the Stream class, lambda expressions, and functional interfaces. The Stream class supports writing multiple function operations cascaded with "."; lambda expressions exist to simplify code writing; and functional interfaces wrap functions so they can be passed as parameters (Java has no C-style function pointers, so a function cannot be passed directly as an argument).
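A minimal sketch of the three concepts working together, using only the standard JDK (the numbers are made up):

import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class JavaFpDemo {
	public static void main(String[] args) {
		// Functional interface: a Predicate wraps a function so it can be passed around
		Predicate<Integer> isEven = n -> n % 2 == 0; // lambda expression

		// Stream: cascade filter/map operations with "." instead of writing loops
		List<Integer> result = Arrays.asList(1, 2, 3, 4, 5, 6).stream()
				.filter(isEven)
				.map(n -> n * n)
				.collect(Collectors.toList());

		System.out.println(result); // [4, 16, 36]
	}
}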

The detailed mechanics are left for you to explore on your own.

3.4.3 Guava's Enhancements to Functional Programming

If you were the designer of Google Guava, what more could Guava do for functional programming in Java?

Disruptive innovation is hard, but supplements are possible: on the one hand, adding more operations on streams (terminal and intermediate operations like map, filter, and max); on the other, adding more functional interfaces (like Function, Predicate, and so on). One could even design new classes supporting cascading operations similar to the Stream class. Java plus Guava would then be even more convenient for functional programming.

Contrary to expectations, however, Google Guava does not provide much support for functional programming; it merely encapsulates a handful of interfaces for transforming and filtering collections. The code is as follows:

Iterables.transform(Iterable, Function);
Iterators.transform(Iterator, Function);
Collections2.transform(Collection, Function);
Lists.transform(List, Function);
Maps.transformValues(Map, Function);
Multimaps.transformValues(Multimap, Function);
...
Iterables.filter(Iterable, Predicate);
Iterators.filter(Iterator, Predicate);
Collections2.filter(Collection, Predicate);
...

Google Guava's GitHub Wiki shows that Google is quite cautious about functional programming: it believes overuse leads to poor code readability and stresses that the style should not be abused. Hence Guava's limited support for it.

The reason collection traversal is the one place that gets optimized is that traversing collections is an important application scenario for functional programming. Without it, you can only process a collection's elements one by one with a for loop; with it, traversal operations can often be written in a single line, greatly simplifying the code with little loss in readability.
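For instance, a minimal sketch using the Guava APIs listed above, in the pre-lambda style those APIs were designed for (the data is made up):

import com.google.common.base.Function;
import com.google.common.base.Predicate;
import com.google.common.collect.Iterables;
import com.google.common.collect.Lists;

import java.util.List;

public class GuavaFpDemo {
	public static void main(String[] args) {
		List<String> words = Lists.newArrayList("design", "pattern", "guava");

		// transform: lazily apply a Function to every element
		Iterable<Integer> lengths = Iterables.transform(words, new Function<String, Integer>() {
			@Override
			public Integer apply(String s) {
				return s.length();
			}
		});

		// filter: keep only the elements matching a Predicate
		Iterable<String> longWords = Iterables.filter(words, new Predicate<String>() {
			@Override
			public boolean apply(String s) {
				return s.length() > 5;
			}
		});

		System.out.println(Lists.newArrayList(lengths));   // [6, 7, 5]
		System.out.println(Lists.newArrayList(longWords)); // [design, pattern]
	}
}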

4. Spring framework

4.1 Contained design ideas

4.1.1 Convention over configuration

In projects developed with Spring, configuration is often complicated and cumbersome. For example, to build a web application with Spring MVC, you need to configure every Controller class and the URL for every interface in it. How can configuration be simplified? Generally there are two approaches, one based on annotations and one based on conventions, and Spring uses both. Spring does an excellent job of minimizing configuration, and there is much worth learning from it.

  • Annotation-based configuration: place specific annotations on specific classes to replace centralized XML configuration. For example, use the @RequestMapping annotation to mark the URL on a Controller class or method; use the @Transactional annotation to indicate transaction support; and so on
  • Convention-based configuration, often called "convention over configuration": reduce configuration through agreed code structure or naming conventions. Put plainly, provide default configuration values and prefer the defaults; programmers only set the configurations that deviate from the convention

For example, in Spring Data JPA (a JPA application framework built on ORM frameworks and the JPA specification), by default the class name matches the table name, attribute names match table column names, a String property maps to a varchar column, a long property maps to a bigint column, and so on.

Under this convention, an Order class defined in code corresponds to the "order" table in the database. Only when you deviate from the convention, for example when the table is named "order_info" rather than "order", do you need to configure the class-to-table mapping explicitly (Order class -> order_info table).
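As a sketch using standard JPA annotations (the entity and its fields are illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// By convention alone, "class Order" would map to a table named "order";
// only the deviation from the convention needs to be spelled out.
@Entity
@Table(name = "order_info") // explicit mapping, needed only because the name deviates
public class Order {
	@Id
	private Long id;      // maps to column "id" by convention
	private String buyer; // maps to column "buyer" by convention
	// getters and setters omitted...
}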

In fact, convention over configuration embodies the 80/20 rule well: in ordinary projects, 80% of the configuration can use the defaults and only 20% must be set explicitly. Configuring by convention thus saves a great deal of configuration-writing time without sacrificing flexibility, eliminates much mindless manual labor, and improves development efficiency. Developing against the same conventions also reduces a project's learning and maintenance costs.

4.1.2 Low intrusion, loose coupling

How intrusive a framework is, is an important measure of its quality. Low intrusion means the framework code is rarely coupled into the business code, so that replacing the framework requires very few changes to the existing business code. Conversely, if a framework is highly intrusive, with its code woven deep into the business code, replacing it becomes very costly or even practically impossible. This is a major reason why some long-maintained legacy projects remain stuck on old frameworks and technologies.

In fact, low intrusion is a very important design idea followed by the Spring framework

Beans managed by the IOC container Spring provides need not inherit any parent class or implement any interface; configuration alone brings them under Spring's management. To switch to another IOC container, only the configuration needs to change; the original Beans need no modification.

Spring's AOP support also reflects low intrusion. Non-business functions in a project, such as request logging, instrumentation, security checks, and transactions, need not be pushed into the business code. Once they intrude, they scatter across the business code and become very expensive to change or remove; with the AOP style of development, this non-business code is concentrated in aspects, and the cost of changing or removing it becomes very low.
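A minimal sketch of concentrating such non-business code in an aspect (the Spring AOP/AspectJ annotations are real; the pointcut expression and package are illustrative):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Request logging lives here, in one place, instead of being scattered
// through the business code; removing it means deleting this one class
@Aspect
@Component
public class RequestLoggingAspect {
	@Around("execution(* com.example.service..*(..))") // illustrative pointcut
	public Object logInvocation(ProceedingJoinPoint joinPoint) throws Throwable {
		long start = System.currentTimeMillis();
		try {
			return joinPoint.proceed(); // the business method itself is untouched
		} finally {
			long cost = System.currentTimeMillis() - start;
			System.out.println(joinPoint.getSignature() + " took " + cost + " ms");
		}
	}
}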

4.1.3 Modular and lightweight

More than a decade ago, EJB was the mainstream framework for Java enterprise applications, but it was bloated, complex, intrusive, and highly coupled, with high development, maintenance, and learning costs. To replace the heavyweight EJB, Rod Johnson developed the open source Interface21 framework, which provided the most basic IOC functionality. The Interface21 framework is the predecessor of the Spring framework.

With continuous development, however, Spring is no longer a small framework offering only IOC; it has grown into a "platform" or "ecosystem" encompassing all kinds of functionality. Even so, it has not repeated EJB's mistakes and turned into a bulky, unwieldy framework. How has Spring managed that?

This is due to Spring's modular design. The following figure shows the modules and layering of the Spring Framework.

(Figure: modules and layering of the Spring Framework)

As the figure shows, Spring's layering and modularization are done very well. Each module is responsible for a relatively independent piece of functionality; inter-module relationships consist only of upper layers depending on lower layers, with almost no dependencies within a layer or from lower layers upward. Moreover, projects that use Spring can selectively introduce individual modules and are never forced to pull in the whole framework for one small feature. So although the Spring Framework now contains more than two dozen modules, each is lightweight and usable on its own, which is why Spring can still be called a lightweight development framework.

4.1.4 Re-encapsulation and re-abstraction

Spring not only provides all kinds of common functional modules for Java development, but also further encapsulates and abstracts the mainstream middleware and system access libraries on the market, providing higher-level, more unified access interfaces.

For example, Spring provides the spring-data-redis module, which further wraps Redis client libraries (such as Jedis and Lettuce) and adapts them to Spring's style of access, making Redis programming easier. Spring Cache is likewise a re-encapsulation and re-abstraction: it defines a unified, abstract cache access interface that does not depend on any concrete cache implementation (Redis, Guava Cache, Caffeine, and so on). When a project accesses the cache only through Spring's abstract, unified interface, it can switch between caches without modifying code.
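A minimal sketch of coding against Spring's Cache abstraction (org.springframework.cache.Cache and CacheManager are real Spring interfaces; the service class and cache name are illustrative):

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

// The service depends only on the abstract CacheManager/Cache interfaces,
// not on Redis, Guava Cache, or Caffeine; swapping the backing cache is a
// configuration change, not a code change.
public class UserService {
	private final CacheManager cacheManager;

	public UserService(CacheManager cacheManager) {
		this.cacheManager = cacheManager;
	}

	public String findUserName(Long id) {
		Cache cache = cacheManager.getCache("users");
		Cache.ValueWrapper cached = cache.get(id);
		if (cached != null) {
			return (String) cached.get();
		}
		String name = loadFromDatabase(id); // illustrative slow path
		cache.put(id, name);
		return name;
	}

	private String loadFromDatabase(Long id) {
		return "user-" + id; // stand-in for a real database query
	}
}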

In addition, Spring further encapsulates JDBC exceptions. The wrapped database exceptions all inherit from the DataAccessException runtime exception, which need not be caught explicitly during development, eliminating unnecessary catch-and-handle code. Spring's database exceptions also hide vendor-specific details (different databases define different error codes for the same failure), making exception handling easier.

4.2 Two Design Patterns to Support Extensions

4.2.1 Application of Observer Pattern in Spring

The observer pattern implemented in Spring consists of three parts: the Event (equivalent to the message), the Listener (the observer), and the Publisher (the observable). An example looks like this:

// Event
public class DemoEvent extends ApplicationEvent {
	private String message;

	public DemoEvent(Object source, String message) {
		super(source);
		this.message = message;
	}

	public String getMessage() {
		return this.message;
	}
}

// Listener
@Component
public class DemoListener implements ApplicationListener<DemoEvent> {
	@Override
	public void onApplicationEvent(DemoEvent demoEvent) {
		String message = demoEvent.getMessage();
		System.out.println(message);
	}
}

// Publisher
@Component
public class DemoPublisher {
	@Autowired
	private ApplicationContext applicationContext;

	public void publishEvent(DemoEvent demoEvent) {
		this.applicationContext.publishEvent(demoEvent);
	}
}

As the code shows, using the framework is not complicated; it mainly involves three parts: define an event (DemoEvent) that extends ApplicationEvent; define a listener (DemoListener) that implements ApplicationListener; and define a publisher (DemoPublisher) that calls the ApplicationContext to publish the event.

public abstract class ApplicationEvent extends EventObject {
	private static final long serialVersionUID = 7099057708183571937L;
	private final long timestamp = System.currentTimeMillis();

	public ApplicationEvent(Object source) {
		super(source);
	}

	public final long getTimestamp() {
		return this.timestamp;
	}
}

public class EventObject implements java.io.Serializable {
	private static final long serialVersionUID = 5516075349620653480L;
	protected transient Object source;

	public EventObject(Object source) {
		if (source == null)
			throw new IllegalArgumentException("null source");
		this.source = source;
	}

	public Object getSource() {
		return source;
	}

	public String toString() {
		return getClass().getName() + "[source=" + source + "]";
	}
}

public interface ApplicationListener<E extends ApplicationEvent> extends EventListener {
	void onApplicationEvent(E var1);
}

When discussing the observer pattern earlier, I mentioned that observers need to be registered in advance with the observable (in the JDK implementation) or with the event bus (in the EventBus implementation). Where, then, are observers registered in Spring's implementation, and how?

Spring registers observers with the ApplicationContext object. Here the ApplicationContext is equivalent to the "event bus" in the Google EventBus framework. However, the ApplicationContext class does not exist solely for the observer pattern: it sits on top of the BeanFactory (the core implementation of IOC) to provide application startup and runtime context information, and it is the top-level interface for accessing that information.

As far as the source code goes, ApplicationContext is just an interface; the concrete implementation lives in its implementation class AbstractApplicationContext. The observer-related code follows; you only need to pay attention to how it publishes events and registers listeners.

public abstract class AbstractApplicationContext extends ... {
    private final Set<ApplicationListener<?>> applicationListeners;

    public AbstractApplicationContext() {
        this.applicationListeners = new LinkedHashSet();
        //...
    }

    public void publishEvent(ApplicationEvent event) {
        this.publishEvent(event, (ResolvableType) null);
    }

    public void publishEvent(Object event) {
        this.publishEvent(event, (ResolvableType) null);
    }

    protected void publishEvent(Object event, ResolvableType eventType) {
        //...
        Object applicationEvent;
        if (event instanceof ApplicationEvent) {
            applicationEvent = (ApplicationEvent) event;
        } else {
            applicationEvent = new PayloadApplicationEvent(this, event);
            if (eventType == null) {
                eventType = ((PayloadApplicationEvent) applicationEvent).getResolvableType();
            }
        }
        if (this.earlyApplicationEvents != null) {
            this.earlyApplicationEvents.add(applicationEvent);
        } else {
            this.getApplicationEventMulticaster().multicastEvent(
                    (ApplicationEvent) applicationEvent, eventType);
        }
        if (this.parent != null) {
            if (this.parent instanceof AbstractApplicationContext) {
                ((AbstractApplicationContext) this.parent).publishEvent(event, eventType);
            } else {
                this.parent.publishEvent(event);
            }
        }
    }

    public void addApplicationListener(ApplicationListener<?> listener) {
        Assert.notNull(listener, "ApplicationListener must not be null");
        if (this.applicationEventMulticaster != null) {
            this.applicationEventMulticaster.addApplicationListener(listener);
        } else {
            this.applicationListeners.add(listener);
        }
    }

    public Collection<ApplicationListener<?>> getApplicationListeners() {
        return this.applicationListeners;
    }

    protected void registerListeners() {
        // Register statically specified listeners first
        Iterator var1 = this.getApplicationListeners().iterator();
        while (var1.hasNext()) {
            ApplicationListener<?> listener = (ApplicationListener) var1.next();
            this.getApplicationEventMulticaster().addApplicationListener(listener);
        }
        // Then register listener beans declared in the configuration
        String[] listenerBeanNames = this.getBeanNamesForType(ApplicationListener.class, true, false);
        String[] var7 = listenerBeanNames;
        int var3 = listenerBeanNames.length;
        for (int var4 = 0; var4 < var3; ++var4) {
            String listenerBeanName = var7[var4];
            this.getApplicationEventMulticaster().addApplicationListenerBean(listenerBeanName);
        }
        // Finally publish early application events, now that the multicaster is ready
        Set<ApplicationEvent> earlyEventsToProcess = this.earlyApplicationEvents;
        this.earlyApplicationEvents = null;
        if (earlyEventsToProcess != null) {
            Iterator var9 = earlyEventsToProcess.iterator();
            while (var9.hasNext()) {
                ApplicationEvent earlyEvent = (ApplicationEvent) var9.next();
                this.getApplicationEventMulticaster().multicastEvent(earlyEvent);
            }
        }
    }
}

From the above code it can be seen that the actual event delivery is done through the ApplicationEventMulticaster class. Only the most critical part of that class's source is excerpted here: the multicastEvent() method that dispatches events. Its code is not complicated: via a thread pool, it supports both the asynchronous non-blocking and the synchronous blocking flavors of the observer pattern.

public void multicastEvent(ApplicationEvent event) {
    this.multicastEvent(event, this.resolveDefaultEventType(event));
}

public void multicastEvent(final ApplicationEvent event, ResolvableType eventType) {
    ResolvableType type = eventType != null ? eventType : this.resolveDefaultEventType(event);
    Iterator var4 = this.getApplicationListeners(event, type).iterator();
    while (var4.hasNext()) {
        final ApplicationListener<?> listener = (ApplicationListener) var4.next();
        Executor executor = this.getTaskExecutor();
        if (executor != null) {
            // Asynchronous, non-blocking delivery via the thread pool
            executor.execute(new Runnable() {
                public void run() {
                    SimpleApplicationEventMulticaster.this.invokeListener(listener, event);
                }
            });
        } else {
            // Synchronous, blocking delivery
            this.invokeListener(listener, event);
        }
    }
}

With the skeleton code of the observer pattern that Spring provides, publishing and listening for an event requires only a little work: define the event, define the listener, and publish the event through the ApplicationContext; the rest is done by the Spring framework. This also reflects Spring's extensibility: new events and listeners can be added without modifying any existing code.

4.2.2 Application of Template Pattern in Spring

An interview question that comes up often: describe the main steps in the creation of a Spring Bean. This involves the template pattern, and it also reflects Spring's extensibility: using the template pattern, Spring lets users customize the Bean creation process.

The creation process of a Spring Bean can be roughly divided into two steps: object creation and object initialization.

Object creation generates the object dynamically through reflection rather than the new keyword; either way, it ultimately calls a constructor, and there is nothing special about it. Object initialization comes in two forms. One is to define a custom initialization method in the class and explicitly tell Spring which method it is through the configuration file. As shown below, the init-method attribute in the configuration specifies the initialization method.

public class DemoClass {
    //...
    public void initDemo() {
        //... initialization ...
    }
}
// Configuration: the initialization method must be specified explicitly via init-method
<bean id="demoBean" class="com.xzg.cd.DemoClass" init-method="initDemo"></bean>

This approach has a drawback: the initialization method is not fixed and can be named arbitrarily by users, so Spring has to invoke it dynamically at runtime through reflection, and reflection hurts execution performance. Is there an alternative?

Spring provides another way to define an initialization method: have the class implement the InitializingBean interface, which contains a fixed initialization method definition, afterPropertiesSet(). When initializing the Bean, Spring can call bean.afterPropertiesSet() on the Bean object directly, without reflection. As follows:

public class DemoClass implements InitializingBean {
    @Override
    public void afterPropertiesSet() throws Exception {
        //... initialization ...
    }
}
// Configuration: no need to specify the initialization method explicitly
<bean id="demoBean" class="com.xzg.cd.DemoClass"></bean>

Although this implementation avoids reflection and improves execution efficiency, it couples the business code (DemoClass) to the framework code (InitializingBean). The framework intrudes into the business code, raising the cost of ever replacing the framework, so this style is not recommended.

In Spring's management of the full Bean lifecycle there is also a process that mirrors initialization: Bean destruction. In Java, object reclamation is done automatically by the JVM, but some teardown operations (such as closing file handles) can be performed before the Bean is handed over to the JVM for garbage collection.

The destruction process closely parallels initialization and likewise has two implementations: specify a destruction method in the configuration via destroy-method, or have the class implement the DisposableBean interface.
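Mirroring the initialization examples above, a minimal sketch of the two destruction variants (destroy-method and DisposableBean are real Spring mechanisms; the class and method names are illustrative):

// Variant 1: configuration-based, no framework coupling
public class DemoClass {
    public void closeDemo() {
        //... release resources, e.g. close file handles ...
    }
}
// Configuration: the destruction method is specified explicitly via destroy-method
<bean id="demoBean" class="com.xzg.cd.DemoClass" destroy-method="closeDemo"></bean>

// Variant 2: implement DisposableBean; destroy() is called directly, without reflection
public class DemoClass implements DisposableBean {
    @Override
    public void destroy() throws Exception {
        //... release resources ...
    }
}
// Configuration: no need to specify the destruction method explicitly
<bean id="demoBean" class="com.xzg.cd.DemoClass"></bean>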

In fact, Spring further refines object initialization into three small steps: pre-initialization, initialization, and post-initialization. The middle step is what we just covered; the pre- and post-initialization operations are defined in the BeanPostProcessor interface, whose definition is as follows:

public interface BeanPostProcessor {
    Object postProcessBeforeInitialization(Object var1, String var2) throws BeansException;
    Object postProcessAfterInitialization(Object var1, String var2) throws BeansException;
}

How, then, do you define the pre- and post-initialization operations through BeanPostProcessor?

You only need to define a processor class implementing the BeanPostProcessor interface and configure it like an ordinary Bean. Spring's ApplicationContext automatically detects all Beans in the configuration that implement BeanPostProcessor and registers them in its processor list; while creating each Bean, the Spring container invokes these processors one by one.
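A minimal sketch of such a processor (BeanPostProcessor is the real Spring interface; the logging logic is illustrative):

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

// Registered like any ordinary bean; Spring detects it and calls it for every bean it creates
@Component
public class LoggingBeanPostProcessor implements BeanPostProcessor {
	@Override
	public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
		System.out.println("Before init: " + beanName);
		return bean; // return the bean itself (or a wrapped replacement for it)
	}

	@Override
	public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
		System.out.println("After init: " + beanName);
		return bean;
	}
}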

(Figure: the Spring Bean creation process, with BeanPostProcessor pre- and post-initialization hooks)

At this point you may ask: where is the template pattern here? Doesn't the template pattern require an abstract template class containing template methods, with subclasses implementing those methods?

In fact, the template pattern implementation here is not the standard abstract-class form; it is closer to the Callback approach mentioned earlier: the function to be executed is encapsulated in an object (for example, the initialization method is encapsulated in an InitializingBean object) and passed to the template (BeanFactory) to execute.

4.3 Eleven Design Patterns Used by Spring Framework

4.3.1 Application of Adapter Pattern in Spring

In Spring MVC, the most common way to define a Controller is to mark a class with the @Controller annotation and mark the URL corresponding to each method with the @RequestMapping annotation. But that is not the only way: you can also define a Controller by implementing the Controller interface or the Servlet interface. The sample code for the three definition styles is as follows:

// Method 1: defined via @Controller and @RequestMapping
@Controller
public class DemoController {
	@RequestMapping("/employname")
	public ModelAndView getEmployeeName() {
		ModelAndView model = new ModelAndView("Greeting");
		model.addObject("message", "Dinesh");
		return model;
	}
}

// Method 2: implement the Controller interface + XML configuration mapping DemoController to its URL
public class DemoController implements Controller {
	@Override
	public ModelAndView handleRequest(HttpServletRequest req, HttpServletResponse rep) {
		ModelAndView model = new ModelAndView("Greeting");
		model.addObject("message", "Dinesh Madhwal");
		return model;
	}
}

// Method 3: implement the Servlet interface + XML configuration mapping DemoServlet to its URL
public class DemoServlet extends HttpServlet {
	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		this.doPost(req, resp);
	}
	@Override
	protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		resp.getWriter().write("Hello World.");
	}
}

When the application starts, the Spring container loads these Controller classes, parses out the URLs and their handler methods, encapsulates them as Handler objects, and stores them in a HandlerMapping object. When a request arrives, the DispatcherServlet looks up the Handler for the request URL in the HandlerMapping, invokes the Handler's code, and returns the execution result to the client.

However, Controllers defined in different ways have different method definitions (method name, parameters, return value, and so on). As the sample code shows, in method 1 the method definition is arbitrary and not fixed; in method 2 it is handleRequest(); in method 3 it is service() (it may look like doGet() and doPost(), but the template pattern is at work: the Servlet's service() invokes doGet() or doPost(), and the DispatcherServlet calls service()). The DispatcherServlet would therefore have to call different methods for different kinds of Controllers. The concrete pseudocode is as follows:

Handler handler = handlerMapping.get(URL);
if (handler instanceof Controller) {
	((Controller) handler).handleRequest(...);
} else if (handler instanceof Servlet) {
	((Servlet) handler).service(...);
} else if (/* handler corresponds to an annotation-defined Controller */) {
	// invoke the handler method via reflection...
}

As the code shows, this implementation requires many if-else branches, and adding a new way of defining Controllers means adding another if branch to the DispatcherServlet code, as in the pseudocode above. This clearly violates the open-closed principle.

In fact, the adapter pattern can be used to refactor this code so that it satisfies the open-closed principle and better supports extension. As mentioned earlier, one of the roles of an adapter is to "unify the interface design of multiple classes". The adapter pattern adapts the functions of the Controller classes defined in different ways to a single, unified function definition. In this way, the if-else branch logic can be removed from the DispatcherServlet class, which then only needs to call that unified function

Looking specifically at Spring's code implementation: Spring defines a unified interface, HandlerAdapter, and a corresponding adapter class for each kind of Controller. These adapter classes include AnnotationMethodHandlerAdapter, SimpleControllerHandlerAdapter, SimpleServletHandlerAdapter, etc. The source code is as follows:

public interface HandlerAdapter {
    boolean supports(Object var1);
    ModelAndView handle(HttpServletRequest var1, HttpServletResponse var2, Object var3) throws Exception;
    long getLastModified(HttpServletRequest var1, Object var2);
}

// the adapter for Controllers that implement the Controller interface
public class SimpleControllerHandlerAdapter implements HandlerAdapter {
    public SimpleControllerHandlerAdapter() {
    }
    public boolean supports(Object handler) {
        return handler instanceof Controller;
    }
    public ModelAndView handle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        return ((Controller)handler).handleRequest(request, response);
    }
    public long getLastModified(HttpServletRequest request, Object handler) {
        return handler instanceof LastModified ? ((LastModified)handler).getLastModified(request) : -1L;
    }
}

// the adapter for Controllers that implement the Servlet interface
public class SimpleServletHandlerAdapter implements HandlerAdapter {
    public SimpleServletHandlerAdapter() {
    }
    public boolean supports(Object handler) {
        return handler instanceof Servlet;
    }
    public ModelAndView handle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        ((Servlet)handler).service(request, response);
        return null;
    }
    public long getLastModified(HttpServletRequest request, Object handler) {
        return -1L;
    }
}

// AnnotationMethodHandlerAdapter handles Controllers defined via annotations
//...

In the DispatcherServlet class, there is no need to treat different Controller objects differently; it is enough to call the adapter's handle() function. The pseudocode implemented according to this idea is as follows:

// the previous implementation
Handler handler = handlerMapping.get(URL);
if (handler instanceof Controller) {
    ((Controller)handler).handleRequest(...);
} else if (handler instanceof Servlet) {
    ((Servlet)handler).service(...);
} else if (/* handler is a Controller defined via annotations */) {
    // invoke the handler method via reflection...
}

// the new implementation
HandlerAdapter handlerAdapter = handlerMapping.get(URL);
handlerAdapter.handle(...);

4.3.2 Application of strategy pattern in Spring

Spring AOP is implemented through dynamic proxy. Specific to code implementation, Spring supports two dynamic proxy implementation methods, one is the dynamic proxy implementation method provided by JDK, and the other is the dynamic proxy implementation method provided by Cglib

The former requires the proxied class to implement an interface, while the latter does not. For different proxied classes, Spring dynamically selects a different dynamic proxy implementation at runtime. This is a typical application scenario of the strategy pattern

As mentioned earlier, the strategy pattern consists of three parts, the definition, creation and use of strategies. Let's take a closer look at how these three parts are reflected in the Spring source code

In the strategy pattern, the definition of strategies is very simple: define a strategy interface, and let different strategy classes implement it. Corresponding to the Spring source code, AopProxy is the strategy interface, and JdkDynamicAopProxy and CglibAopProxy are two strategy classes that implement it. The definition of the AopProxy interface is as follows:

public interface AopProxy {
    Object getProxy();
    Object getProxy(ClassLoader var1);
}

In the strategy pattern, the creation of strategies is generally implemented through factory methods. Corresponding to the Spring source code, AopProxyFactory is a factory class interface, and DefaultAopProxyFactory is a default factory class for creating AopProxy objects. The source code of both is as follows:

public interface AopProxyFactory {
    AopProxy createAopProxy(AdvisedSupport var1) throws AopConfigException;
}

public class DefaultAopProxyFactory implements AopProxyFactory, Serializable {
    public DefaultAopProxyFactory() {
    }
    public AopProxy createAopProxy(AdvisedSupport config) throws AopConfigException {
        if (!config.isOptimize() && !config.isProxyTargetClass() && !this.hasNoUserSuppliedProxyInterfaces(config)) {
            return new JdkDynamicAopProxy(config);
        } else {
            Class<?> targetClass = config.getTargetClass();
            if (targetClass == null) {
                throw new AopConfigException("TargetSource cannot determine target class");
            } else {
                return (AopProxy)(!targetClass.isInterface() && !Proxy.isProxyClass(targetClass) ? new ObjenesisCglibAopProxy(config) : new JdkDynamicAopProxy(config));
            }
        }
    }
    // decides which dynamic proxy implementation to use
    private boolean hasNoUserSuppliedProxyInterfaces(AdvisedSupport config) {
        Class<?>[] ifcs = config.getProxiedInterfaces();
        return ifcs.length == 0 || ifcs.length == 1 && SpringProxy.class.isAssignableFrom(ifcs[0]);
    }
}

The typical usage scenario of the strategy pattern is to dynamically determine which strategy to use based on environment variables, state values, computation results, and so on. Corresponding to the Spring source code, see the implementation of the createAopProxy() function above: its if condition is exactly the judgment that dynamically decides which strategy to choose
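
To make the "use" part concrete, here is a hedged usage sketch, not Spring source: the UserService and UserServiceImpl names are made up, and it assumes spring-aop is on the classpath. Because the target exposes a user-supplied interface, createAopProxy() should pick the JDK dynamic proxy strategy:

import org.springframework.aop.framework.AdvisedSupport;
import org.springframework.aop.framework.AopProxy;
import org.springframework.aop.framework.DefaultAopProxyFactory;

public class AopProxyStrategyDemo {
    interface UserService { void save(); }
    static class UserServiceImpl implements UserService {
        public void save() { System.out.println("save()"); }
    }

    public static void main(String[] args) {
        AdvisedSupport config = new AdvisedSupport();
        config.setTarget(new UserServiceImpl());
        // a user-supplied interface is present, so createAopProxy()
        // should select the JdkDynamicAopProxy strategy
        config.setInterfaces(UserService.class);
        AopProxy proxy = new DefaultAopProxyFactory().createAopProxy(config);
        UserService service = (UserService) proxy.getProxy();
        service.save();
    }
}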

4.3.3 Application of composite mode in Spring

When I talked about Spring's "re-encapsulation and re-abstraction" design idea, Spring Cache was mentioned. Spring Cache provides a set of abstract Cache interfaces. Using it we can unify the different access methods of different cache implementations (Redis, Google Guava...). Different cache access classes for different cache implementations in Spring all rely on this interface, such as: EhCacheCache, GuavaCache, NoOpCache, RedisCache, JCacheCache, ConcurrentMapCache, CaffeineCache. The source code of the Cache interface is as follows:

public interface Cache {
    String getName();
    Object getNativeCache();
    Cache.ValueWrapper get(Object var1);
    <T> T get(Object var1, Class<T> var2);
    <T> T get(Object var1, Callable<T> var2);
    void put(Object var1, Object var2);
    Cache.ValueWrapper putIfAbsent(Object var1, Object var2);
    void evict(Object var1);
    void clear();

    public static class ValueRetrievalException extends RuntimeException {
        private final Object key;
        public ValueRetrievalException(Object key, Callable<?> loader, Throwable exception) {
            super(String.format("Value for key '%s' could not be loaded using '%s'", key, loader), exception);
            this.key = key;
        }
        public Object getKey() {
            return this.key;
        }
    }

    public interface ValueWrapper {
        Object get();
    }
}

In actual development, a project may use many different caches, such as Google Guava cache and Redis cache. In addition, the same cache instance can also be divided into multiple small logical cache units (or namespaces) according to different services

To manage multiple caches, Spring also provides cache management functionality. Its functions, however, are very simple, with two main parts: obtaining a Cache object by its cache name (the name attribute is set when the Cache object is created), and obtaining the list of names of all caches the manager manages. The corresponding Spring source code is as follows:

public interface CacheManager {
    Cache getCache(String var1);
    Collection<String> getCacheNames();
}

What has just been given is the definition of the CacheManager interface, so how are these two methods implemented? This requires the composite pattern mentioned earlier. The composite pattern is mainly applied to a set of data that can be represented as a tree structure, in which nodes are divided into leaf nodes and intermediate nodes. Corresponding to the Spring source code, EhCacheManager, SimpleCacheManager, NoOpCacheManager, RedisCacheManager, etc. represent leaf nodes, and CompositeCacheManager represents intermediate nodes

A leaf node contains the Cache objects it manages, and an intermediate node contains other CacheManagers, which can be CompositeCacheManagers or concrete managers such as EhCacheManager, RedisCacheManager, etc.

The code of CompositeCacheManager is shown below. In it, the implementations of the two functions getCache() and getCacheNames() use recursion, which is exactly where the tree structure shows its greatest advantage

public class CompositeCacheManager implements CacheManager, InitializingBean {
    private final List<CacheManager> cacheManagers = new ArrayList();
    private boolean fallbackToNoOpCache = false;
    public CompositeCacheManager() {
    }
    public CompositeCacheManager(CacheManager... cacheManagers) {
        this.setCacheManagers(Arrays.asList(cacheManagers));
    }
    public void setCacheManagers(Collection<CacheManager> cacheManagers) {
        this.cacheManagers.addAll(cacheManagers);
    }
    public void setFallbackToNoOpCache(boolean fallbackToNoOpCache) {
        this.fallbackToNoOpCache = fallbackToNoOpCache;
    }
    public void afterPropertiesSet() {
        if (this.fallbackToNoOpCache) {
            this.cacheManagers.add(new NoOpCacheManager());
        }
    }
    public Cache getCache(String name) {
        Iterator var2 = this.cacheManagers.iterator();
        Cache cache;
        do {
            if (!var2.hasNext()) {
                return null;
            }
            CacheManager cacheManager = (CacheManager)var2.next();
            cache = cacheManager.getCache(name);
        } while(cache == null);
        return cache;
    }
    public Collection<String> getCacheNames() {
        Set<String> names = new LinkedHashSet();
        Iterator var2 = this.cacheManagers.iterator();
        while(var2.hasNext()) {
            CacheManager manager = (CacheManager)var2.next();
            names.addAll(manager.getCacheNames());
        }
        return Collections.unmodifiableSet(names);
    }
}
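
A hedged usage sketch of the composite in action (the cache names are illustrative; it assumes spring-context is on the classpath): two leaf managers are composed under one CompositeCacheManager, and lookups walk the children:

import org.springframework.cache.Cache;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.cache.support.CompositeCacheManager;

public class CompositeCacheManagerDemo {
    public static void main(String[] args) {
        CompositeCacheManager composite = new CompositeCacheManager(
            new ConcurrentMapCacheManager("users"),   // leaf node
            new ConcurrentMapCacheManager("orders")); // leaf node
        composite.setFallbackToNoOpCache(true);
        composite.afterPropertiesSet(); // appends the NoOpCacheManager fallback

        Cache users = composite.getCache("users");   // found in the first child manager
        Cache unknown = composite.getCache("other"); // falls through to the no-op cache
        System.out.println(users.getName() + ", " + unknown.getName());
    }
}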

4.3.4 Application of decorator pattern in Spring

Caches are generally used in conjunction with databases. If the cache write succeeds but the database transaction is rolled back, the cache contains dirty data. To solve this problem, the cache write and the database write must be placed in the same transaction, so that they either both succeed or both fail

To achieve this, Spring uses the decorator pattern. TransactionAwareCacheDecorator adds transaction support, processing the cached data appropriately when the transaction commits or rolls back

TransactionAwareCacheDecorator implements the Cache interface, delegates all operations to targetCache, and adds transaction handling to the write operations. This is a typical application scenario and code implementation of the decorator pattern

public class TransactionAwareCacheDecorator implements Cache {
    private final Cache targetCache;
    public TransactionAwareCacheDecorator(Cache targetCache) {
        Assert.notNull(targetCache, "Target Cache must not be null");
        this.targetCache = targetCache;
    }
    public Cache getTargetCache() {
        return this.targetCache;
    }
    public String getName() {
        return this.targetCache.getName();
    }
    public Object getNativeCache() {
        return this.targetCache.getNativeCache();
    }
    public ValueWrapper get(Object key) {
        return this.targetCache.get(key);
    }
    public <T> T get(Object key, Class<T> type) {
        return this.targetCache.get(key, type);
    }
    public <T> T get(Object key, Callable<T> valueLoader) {
        return this.targetCache.get(key, valueLoader);
    }
    public void put(final Object key, final Object value) {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
                public void afterCommit() {
                    TransactionAwareCacheDecorator.this.targetCache.put(key, value);
                }
            });
        } else {
            this.targetCache.put(key, value);
        }
    }
    public ValueWrapper putIfAbsent(Object key, Object value) {
        return this.targetCache.putIfAbsent(key, value);
    }
    public void evict(final Object key) {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
                public void afterCommit() {
                    TransactionAwareCacheDecorator.this.targetCache.evict(key);
                }
            });
        } else {
            this.targetCache.evict(key);
        }
    }
    public void clear() {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
                public void afterCommit() {
                    TransactionAwareCacheDecorator.this.targetCache.clear();
                }
            });
        } else {
            this.targetCache.clear();
        }
    }
}
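
A hedged usage sketch (the cache name and key are made up): outside a transaction the decorator delegates immediately, while inside a Spring-managed transaction the same put() would be deferred to afterCommit():

import org.springframework.cache.Cache;
import org.springframework.cache.concurrent.ConcurrentMapCache;
import org.springframework.cache.transaction.TransactionAwareCacheDecorator;

public class TransactionAwareCacheDemo {
    public static void main(String[] args) {
        Cache cache = new TransactionAwareCacheDecorator(new ConcurrentMapCache("users"));
        // no transaction synchronization is active here, so the put happens immediately;
        // inside a Spring-managed transaction it would be deferred to afterCommit()
        cache.put(1L, "wangzheng");
        System.out.println(cache.get(1L).get());
    }
}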

4.3.5 Application of factory pattern in Spring

In Spring, the most classic application of the factory pattern is the implementation of the IOC container. The corresponding Spring source code is mainly the BeanFactory class and the ApplicationContext-related classes (AbstractApplicationContext, ClassPathXmlApplicationContext, FileSystemXmlApplicationContext...)

In Spring, there are many ways to create beans, for example via a constructor with arguments, or via the no-argument constructor plus setter methods, as shown in the following example:

public class Student {
    private long id;
    private String name;
    public Student() { // no-arg constructor, needed for setter injection
    }
    public Student(long id, String name) {
        this.id = id;
        this.name = name;
    }
    public void setId(long id) {
        this.id = id;
    }
    public void setName(String name) {
        this.name = name;
    }
}

<!-- create the Bean via the constructor with arguments -->
<bean id="student" class="com.xzg.cd.Student">
  <constructor-arg name="id" value="1"/>
  <constructor-arg name="name" value="wangzheng"/>
</bean>

<!-- create the Bean via the no-argument constructor + setter methods -->
<bean id="student" class="com.xzg.cd.Student">
  <property name="id" value="1"/>
  <property name="name" value="wangzheng"/>
</bean>

In fact, in addition to these two ways of creating beans, a bean can also be created through a factory method. Continuing the example above, creating the bean this way looks like this:

public class StudentFactory {
    private static Map<Long, Student> students = new HashMap<>();
    static {
        students.put(1L, new Student(1, "wang"));
        students.put(2L, new Student(2, "zheng"));
        students.put(3L, new Student(3, "xzg"));
    }
    public static Student getStudent(long id) {
        return students.get(id);
    }
}

<!-- create the Bean with id="zheng" via the factory method getStudent(2) -->
<bean id="zheng" class="com.xzg.cd.StudentFactory" factory-method="getStudent">
  <constructor-arg value="2"></constructor-arg>
</bean>

4.3.6 Application of other patterns in Spring

SpEL, whose full name is Spring Expression Language, is an expression language commonly used to write configuration in Spring. It defines a series of grammar rules, and as long as an expression is written according to these rules, Spring can parse out its meaning. This is a typical application scenario of the interpreter pattern mentioned earlier

Because the interpreter pattern does not have a very fixed code structure, and the SpEL-related code in Spring is extensive, it will not be covered in detail here. If you are interested, or want to implement similar functionality in a project, you can read and learn from its code implementation, which is mainly concentrated in the spring-expression module

As mentioned earlier when discussing the singleton pattern, singletons have many drawbacks, such as being unfriendly to unit testing. The coping strategy is to manage objects through the IOC container and let the container enforce the uniqueness of the object. The singleton implemented this way is not a true singleton: its uniqueness is scoped to a single IOC container
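
A small sketch of this point, assuming spring-context is available (the IdGenerator bean is made up): the same singleton-scoped bean definition yields one instance per container, not one per process:

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class ContainerScopedSingletonDemo {
    static class IdGenerator { }

    @Configuration
    static class AppConfig {
        @Bean
        public IdGenerator idGenerator() { // singleton scope by default
            return new IdGenerator();
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext c1 = new AnnotationConfigApplicationContext(AppConfig.class);
        AnnotationConfigApplicationContext c2 = new AnnotationConfigApplicationContext(AppConfig.class);
        // unique within one container...
        System.out.println(c1.getBean(IdGenerator.class) == c1.getBean(IdGenerator.class)); // true
        // ...but not across containers
        System.out.println(c1.getBean(IdGenerator.class) == c2.getBean(IdGenerator.class)); // false
    }
}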

In addition, Spring also uses the observer pattern, template pattern, chain of responsibility pattern, and proxy pattern

In fact, in Spring, almost every class with the Template suffix is a template class, and most of them are implemented with callbacks, such as JdbcTemplate, RedisTemplate, and so on. The applications of the remaining two patterns in Spring are well known: the chain of responsibility pattern appears in interceptors (Interceptor), and the classic application of the proxy pattern is AOP

5. MyBatis

5.1 Introduction to MyBatis and ORM Frameworks

MyBatis is an ORM (Object Relational Mapping) framework. An ORM framework uses the mapping relationship between classes and database tables to help programmers automatically convert between objects in the program and data in the database: it stores the program's objects in the database and converts the data in the database back into objects. In fact, there are many ORM frameworks in Java; in addition to MyBatis, there are also Hibernate, TopLink, etc.

When analyzing the Spring framework, I said that if one sentence summarizes the role of a framework, it is to simplify development. The MyBatis framework is no exception; what it simplifies is database development. So how does MyBatis simplify database development?

As mentioned earlier, Java provides the JDBC class library to encapsulate different types of database operations. However, using JDBC directly for database programming is still somewhat troublesome. Therefore, Spring provides JdbcTemplate, which wraps JDBC to simplify database programming further

To use JdbcTemplate for database programming, you only need to write the business-related code (such as SQL statements and the code that converts between database data and objects); the process-related code (loading the driver, creating connections and statements, closing connections and statements, etc.) is encapsulated in the JdbcTemplate class and does not need to be rewritten

Using the same example as before, let's see how to implement it with MyBatis and whether it is simpler than using JdbcTemplate

Because MyBatis depends on the JDBC driver, to use MyBatis in a project you need to introduce not only the MyBatis framework itself (mybatis.jar) but also the JDBC driver (for example, mysql-connector-java.jar, the JDBC driver implementation library for MySQL). After introducing the two jar packages into the project, you can start programming. The code for using MyBatis to access the user information in the database is as follows:

// 1. define the UserDo class
public class UserDo {
    private long id;
    private String name;
    private String telephone;
    // setters/getters omitted
}
// 2. define the access interface
public interface UserMapper {
    public UserDo selectById(long id);
}
// 3. define the mapping: UserMapper.xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org/DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="cn.xzg.cd.a87.repo.mapper.UserMapper">
  <select id="selectById" resultType="cn.xzg.cd.a87.repo.UserDo">select * from user where id=#{id}</select>
</mapper>
// 4. the global configuration file: mybatis.xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
  <environments default="dev">
    <environment id="dev">
      <transactionManager type="JDBC"></transactionManager>
      <dataSource type="POOLED">
        <property name="driver" value="com.mysql.jdbc.Driver" />
        <property name="url" value="jdbc:mysql://..." />
        <property name="username" value="root" />
        <property name="password" value="..." />
      </dataSource>
    </environment>
  </environments>
  <mappers>
    <mapper resource="mapper/UserMapper.xml" />
  </mappers>
</configuration>

It should be noted that the UserMapper.xml configuration file only defines the mapping between the interface and the SQL statement; it does not explicitly define the mapping between the class (UserDo) fields and the database table (user) columns. This embodies the design principle of "convention over configuration": by default, each class field maps to the database column with the same spelling. Of course, if the fields cannot be mapped one-to-one by name, you can also customize the mapping between them; a hedged sketch of such a custom mapping follows
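
A hedged sketch of such a custom mapping (the column names user_id, user_name, and phone are invented for illustration): a resultMap overrides the name-based convention, and the select statement references it via resultMap instead of resultType:

<resultMap id="userResultMap" type="cn.xzg.cd.a87.repo.UserDo">
  <id property="id" column="user_id"/>
  <result property="name" column="user_name"/>
  <result property="telephone" column="phone"/>
</resultMap>
<select id="selectById" resultMap="userResultMap">
  select user_id, user_name, phone from user where user_id = #{id}
</select>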

With the above code and configuration in place, you can access the user information in the database as follows:

public class MyBatisDemo {
    public static void main(String[] args) throws IOException {
        Reader reader = Resources.getResourceAsReader("mybatis.xml");
        SqlSessionFactory sessionFactory = new SqlSessionFactoryBuilder().build(reader);
        SqlSession session = sessionFactory.openSession();
        UserMapper userMapper = session.getMapper(UserMapper.class);
        UserDo userDo = userMapper.selectById(8);
        //...
    }
}

As can be seen from the code, the MyBatis implementation is more flexible than the JdbcTemplate implementation. With JdbcTemplate, the conversion code between objects and database data, as well as the SQL statements, are hard-coded in the business code. With MyBatis, the mapping between class fields and database columns, and between interfaces and SQL, is written in an XML configuration file, separated from the code, which makes it more flexible, clearer, and easier to maintain

5.2 How to balance ease of use, performance and flexibility?

Having briefly introduced the MyBatis framework, let's now compare it with two other frameworks, JdbcTemplate and Hibernate, to see how MyBatis balances ease of use, performance, and flexibility

Let's look at JdbcTemplate first. Compared with MyBatis, JdbcTemplate is more lightweight. Because it only wraps JDBC very thinly, its performance loss is relatively small; of the three frameworks, it has the best performance. However, its shortcomings are also obvious: SQL and code are coupled together, and it has no ORM functionality, so you need to write your own code to handle the mapping between objects and database data. Therefore, it is not as easy to use as the other two frameworks

Now look at Hibernate. Compared with MyBatis, Hibernate is more heavyweight. Hibernate provides more advanced mapping functionality and can automatically generate SQL statements according to business needs, so you do not need to write SQL yourself as with MyBatis. For this reason, MyBatis is sometimes called a semi-automated ORM framework and Hibernate a fully automated one. However, although automatic generation simplifies development, the generated SQL receives no targeted optimization; in terms of performance, it may not be as good as SQL written by the programmer, and automatic generation also sacrifices the flexibility of writing SQL by hand

In fact, no matter which implementation is used, the code logic involved in fetching data from the database and converting it into objects is basically the same; different implementations differ only in where each part of that logic lives. Some frameworks provide more powerful functions, with most of the code logic completed by the framework and only a small part implemented by the programmer, which makes the framework easier to use. However, the more functionality a framework integrates, the more extra code it introduces to handle the logic generically, so compared with code written specifically for a specific problem, the performance loss is relatively large

So, roughly speaking, the ease of use and performance of a framework are often at odds: pursuing ease of use costs performance, and pursuing performance costs ease of use. In addition, the simpler something is to use, the less flexible it tends to be. JdbcTemplate, MyBatis, and Hibernate reflect exactly this law

JdbcTemplate provides the simplest functionality and the worst usability, but it has the least performance loss and delivers the best performance. Hibernate provides the most complete functionality and the best ease of use, but relatively speaking its performance loss is the highest. MyBatis sits between the two, achieving a trade-off among ease of use, performance, and flexibility. It lets programmers write SQL themselves and carry over their accumulated SQL knowledge. Compared with the completely black-box Hibernate, many programmers prefer MyBatis's translucent style. This also reminds us that over-encapsulation and over-simplified development methods cost development flexibility

5.3 How to implement MyBatis Plugin by using the responsibility chain and proxy mode?

MyBatis Plugin, although named Plugin, is actually similar to the Servlet Filter and Spring Interceptor mentioned earlier: it was designed for the extensibility of the framework, and the main design pattern it uses is the chain of responsibility pattern

However, compared with Servlet Filter and Spring Interceptor, the code implementation of the chain of responsibility pattern in MyBatis Plugin is a bit more complicated: it is a chain of responsibility implemented with the help of the dynamic proxy pattern

5.3.1 MyBatis Plugin Function Introduction

In fact, MyBatis Plugin serves much the same purpose as Servlet Filter and Spring Interceptor: intercepting certain method calls without modifying the original process code, and executing additional logic before and after the intercepted call. The only difference is where the interception happens: Servlet Filter mainly intercepts Servlet requests, Spring Interceptor mainly intercepts methods of Spring-managed beans (such as Controller methods), and MyBatis Plugin mainly intercepts the methods involved in MyBatis's SQL execution process. MyBatis Plugin is relatively simple to use, for example:

Suppose you need to count the execution time of each SQL statement in the application. With MyBatis Plugin, you only need to define a SqlCostTimeInterceptor class, let it implement MyBatis's Interceptor interface, and declare this plugin in MyBatis's global configuration file. The specific code and configuration are as follows:

@Intercepts({
    @Signature(type = StatementHandler.class, method = "query", args = {Statement.class, ResultHandler.class}),
    @Signature(type = StatementHandler.class, method = "update", args = {Statement.class}),
    @Signature(type = StatementHandler.class, method = "batch", args = {Statement.class})})
public class SqlCostTimeInterceptor implements Interceptor {
    private static Logger logger = LoggerFactory.getLogger(SqlCostTimeInterceptor.class);
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object target = invocation.getTarget();
        long startTime = System.currentTimeMillis();
        StatementHandler statementHandler = (StatementHandler) target;
        try {
            return invocation.proceed();
        } finally {
            long costTime = System.currentTimeMillis() - startTime;
            BoundSql boundSql = statementHandler.getBoundSql();
            String sql = boundSql.getSql();
            logger.info("executed SQL: [ {} ], cost: [ {} ms]", sql, costTime);
        }
    }
    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
    @Override
    public void setProperties(Properties properties) {
        System.out.println("plugin properties: " + properties);
    }
}
<!-- the MyBatis global configuration file: mybatis-config.xml -->
<plugins>
  <plugin interceptor="com.xzg.cd.a88.SqlCostTimeInterceptor">
    <property name="someProperty" value="100" />
  </plugin>
</plugins>

Let's focus on the @Intercepts annotation. Whether it is an interceptor, a filter, or a plugin, it must clearly indicate which methods it intercepts, and the @Intercepts annotation plays exactly this role. An @Intercepts annotation can nest @Signature annotations; each @Signature annotation identifies one target method to intercept. To intercept multiple methods, write multiple @Signature annotations

The @Signature annotation contains three elements: type, method, and args. type specifies the class to intercept, method specifies the method name, and args specifies the method's parameter list. Together, these three elements fully determine one method to intercept

By default, MyBatis Plugin allows the methods of the following classes to be intercepted:

(figure: the classes and methods that can be intercepted by default, covering Executor, ParameterHandler, ResultSetHandler, and StatementHandler)

Why are the methods of these classes allowed to be intercepted by default?

The bottom layer of MyBatis executes SQL through the Executor class. The Executor class will create three objects of StatementHandler, ParameterHandler, and ResultSetHandler, and first use ParameterHandler to set the placeholder parameters in SQL, then use StatementHandler to execute SQL statements, and finally use ResultSetHandler to encapsulate the execution results. Therefore, you only need to intercept the methods of Executor, ParameterHandler, ResultSetHandler, and StatementHandler to basically satisfy the interception of the entire SQL execution process.

In fact, in addition to counting the execution time of SQL, you can use MyBatis Plugin to do many things, such as sub-database sub-table, automatic paging, data desensitization, encryption and decryption, etc.

5.3.2 Design and Implementation of MyBatis Plugin

The implementation of the chain of responsibility pattern generally includes two parts: the handler (Handler) and the handler chain (HandlerChain). In the Servlet Filter source code these correspond to Filter and FilterChain; in Spring Interceptor, to HandlerInterceptor and HandlerExecutionChain; and in MyBatis Plugin, to Interceptor and InterceptorChain. In addition, MyBatis Plugin contains another very important class, Plugin, which generates dynamic proxies for the intercepted objects

When an application integrating MyBatis starts, the MyBatis framework reads the global configuration file (the mybatis-config.xml file in the previous example), parses out the Interceptors (the SqlCostTimeInterceptor in the example), and injects them into the InterceptorChain object held by the Configuration class. This logic corresponds to the following source code:

public class XMLConfigBuilder extends BaseBuilder {
    // parse the configuration
    private void parseConfiguration(XNode root) {
        try {
            //...omitted...
            pluginElement(root.evalNode("plugins")); // parse the plugins
        } catch (Exception e) {
            throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
        }
    }
    // parse the plugins
    private void pluginElement(XNode parent) throws Exception {
        if (parent != null) {
            for (XNode child : parent.getChildren()) {
                String interceptor = child.getStringAttribute("interceptor");
                Properties properties = child.getChildrenAsProperties();
                // create the Interceptor object
                Interceptor interceptorInstance = (Interceptor) resolveClass(interceptor).newInstance();
                // call setProperties() on the Interceptor to set its properties
                interceptorInstance.setProperties(properties);
                // the following line calls InterceptorChain.addInterceptor()
                configuration.addInterceptor(interceptorInstance);
            }
        }
    }
}
// the code of Configuration's addInterceptor() method
public void addInterceptor(Interceptor interceptor) {
    interceptorChain.addInterceptor(interceptor);
}

Let's look at the code of the Interceptor and InterceptorChain classes, shown below. Interceptor's setProperties() method is a simple setter, mainly for the convenience of configuring the Interceptor's attributes through the configuration file; it has no other role. The core are the other three functions: Interceptor's intercept() and plugin(), and InterceptorChain's pluginAll()

public class Invocation {
    private final Object target;
    private final Method method;
    private final Object[] args;
    // constructor and getters omitted...
    public Object proceed() throws InvocationTargetException, IllegalAccessException {
        return method.invoke(target, args);
    }
}
public interface Interceptor {
    Object intercept(Invocation invocation) throws Throwable;
    Object plugin(Object target);
    void setProperties(Properties properties);
}
public class InterceptorChain {
    private final List<Interceptor> interceptors = new ArrayList<Interceptor>();
    public Object pluginAll(Object target) {
        for (Interceptor interceptor : interceptors) {
            target = interceptor.plugin(target);
        }
        return target;
    }
    public void addInterceptor(Interceptor interceptor) {
        interceptors.add(interceptor);
    }
    public List<Interceptor> getInterceptors() {
        return Collections.unmodifiableList(interceptors);
    }
}

After the configuration file is parsed, all Interceptors are loaded into the InterceptorChain. Next, let's look at when these interceptors are triggered, and how

As mentioned earlier, in the process of executing SQL, MyBatis will create objects of Executor, StatementHandler, ParameterHandler, and ResultSetHandler. The corresponding creation code is in the Configuration class, as follows:

public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
    executorType = executorType == null ? defaultExecutorType : executorType;
    executorType = executorType == null ? ExecutorType.SIMPLE : executorType;
    Executor executor;
    if (ExecutorType.BATCH == executorType) {
        executor = new BatchExecutor(this, transaction);
    } else if (ExecutorType.REUSE == executorType) {
        executor = new ReuseExecutor(this, transaction);
    } else {
        executor = new SimpleExecutor(this, transaction);
    }
    if (cacheEnabled) {
        executor = new CachingExecutor(executor);
    }
    executor = (Executor) interceptorChain.pluginAll(executor);
    return executor;
}
public ParameterHandler newParameterHandler(MappedStatement mappedStatement, Object parameterObject, BoundSql boundSql) {
    ParameterHandler parameterHandler = mappedStatement.getLang().createParameterHandler(mappedStatement, parameterObject, boundSql);
    parameterHandler = (ParameterHandler) interceptorChain.pluginAll(parameterHandler);
    return parameterHandler;
}
public ResultSetHandler newResultSetHandler(Executor executor, MappedStatement mappedStatement, RowBounds rowBounds,
        ParameterHandler parameterHandler, ResultHandler resultHandler, BoundSql boundSql) {
    ResultSetHandler resultSetHandler = new DefaultResultSetHandler(executor, mappedStatement, parameterHandler, resultHandler, boundSql, rowBounds);
    resultSetHandler = (ResultSetHandler) interceptorChain.pluginAll(resultSetHandler);
    return resultSetHandler;
}
public StatementHandler newStatementHandler(Executor executor, MappedStatement mappedStatement, Object parameterObject,
        RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) {
    StatementHandler statementHandler = new RoutingStatementHandler(executor, mappedStatement, parameterObject, rowBounds, resultHandler, boundSql);
    statementHandler = (StatementHandler) interceptorChain.pluginAll(statementHandler);
    return statementHandler;
}

From the above code, we can see that the objects of these four classes are all created through the pluginAll() method, whose code was given earlier. Its implementation is very simple: it calls each interceptor's plugin() method in turn. plugin() is an interface method with no default implementation; the concrete implementation is provided by the user. In the previous example, SqlCostTimeInterceptor's plugin() simply calls Plugin's wrap() method, whose implementation is shown below:

// a dynamic proxy implemented with Java's InvocationHandler
public class Plugin implements InvocationHandler {
    private final Object target;
    private final Interceptor interceptor;
    private final Map<Class<?>, Set<Method>> signatureMap;
    private Plugin(Object target, Interceptor interceptor, Map<Class<?>, Set<Method>> signatureMap) {
        this.target = target;
        this.interceptor = interceptor;
        this.signatureMap = signatureMap;
    }
    // the static wrap() method generates a dynamic proxy for target:
    // proxy object = target object + interceptor object
    public static Object wrap(Object target, Interceptor interceptor) {
        Map<Class<?>, Set<Method>> signatureMap = getSignatureMap(interceptor);
        Class<?> type = target.getClass();
        Class<?>[] interfaces = getAllInterfaces(type, signatureMap);
        if (interfaces.length > 0) {
            return Proxy.newProxyInstance(
                       type.getClassLoader(),
                       interfaces,
                       new Plugin(target, interceptor, signatureMap));
        }
        return target;
    }
    // calling a method f() on the proxy triggers this method, which runs
    // the interceptor's intercept() and then f() on the target
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try {
            Set<Method> methods = signatureMap.get(method.getDeclaringClass());
            if (methods != null && methods.contains(method)) {
                return interceptor.intercept(new Invocation(target, method, args));
            }
            return method.invoke(target, args);
        } catch (Exception e) {
            throw ExceptionUtil.unwrapThrowable(e);
        }
    }
    private static Map<Class<?>, Set<Method>> getSignatureMap(Interceptor interceptor) {
        Intercepts interceptsAnnotation = interceptor.getClass().getAnnotation(Intercepts.class);
        // issue #251
        if (interceptsAnnotation == null) {
            throw new PluginException("No @Intercepts annotation was found in interceptor " + interceptor.getClass().getName());
        }
        Signature[] sigs = interceptsAnnotation.value();
        Map<Class<?>, Set<Method>> signatureMap = new HashMap<Class<?>, Set<Method>>();
        for (Signature sig : sigs) {
            Set<Method> methods = signatureMap.get(sig.type());
            if (methods == null) {
                methods = new HashSet<Method>();
                signatureMap.put(sig.type(), methods);
            }
            try {
                Method method = sig.type().getMethod(sig.method(), sig.args());
                methods.add(method);
            } catch (NoSuchMethodException e) {
                throw new PluginException("Could not find method on " + sig.type() + " named " + sig.method() + ". Cause: " + e, e);
            }
        }
        return signatureMap;
    }
    private static Class<?>[] getAllInterfaces(Class<?> type, Map<Class<?>, Set<Method>> signatureMap) {
        Set<Class<?>> interfaces = new HashSet<Class<?>>();
        while (type != null) {
            for (Class<?> c : type.getInterfaces()) {
                if (signatureMap.containsKey(c)) {
                    interfaces.add(c);
                }
            }
            type = type.getSuperclass();
        }
        return interfaces.toArray(new Class<?>[interfaces.size()]);
    }
}

In fact, Plugin is a dynamic proxy class implemented with Java's InvocationHandler. Its role is to attach an Interceptor's functionality to a target object. The target objects to be proxied are objects of the four classes Executor, StatementHandler, ParameterHandler, and ResultSetHandler. The static wrap() method is a utility function that generates the dynamic proxy for a target object

Of course, wrap() returns a proxy object only if the interceptor matches the target; otherwise it returns the target object itself. What counts as a match? The target object's class appears among the classes the interceptor declares to intercept through its @Signature annotations

The chain of responsibility implementation in MyBatis is quite special: it nests multiple proxies around the same target object (this is what the pluginAll() function does), and each proxy object (Plugin object) carries the functionality of one interceptor (Interceptor object). The code of pluginAll() is as follows:

public Object pluginAll(Object target) {
    // nested proxies
    for (Interceptor interceptor : interceptors) {
        target = interceptor.plugin(target);
        // the line above is equivalent to the line below:
        // target (proxy object) = target (target object) + interceptor
        // target = Plugin.wrap(target, interceptor);
    }
    return target;
}
// MyBatis creates target (Executor, StatementHandler, ParameterHandler, ResultSetHandler) like this:
Object target = interceptorChain.pluginAll(target);

When a method is executed on one of the four classes Executor, StatementHandler, ParameterHandler, and ResultSetHandler, MyBatis executes the nested proxies' invoke() methods layer by layer. Each invoke() first runs its interceptor's intercept() function and then calls the method on the object it proxies. In this way, after all the intercept() functions have executed layer by layer, MyBatis finally executes the method on the original object
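
The nesting order is easier to see in a stripped-down demo. The following is a self-contained sketch (not MyBatis source; the Handler interface and interceptor names are made up) that nests two JDK dynamic proxies around one target, the same way pluginAll() does:

import java.lang.reflect.Proxy;

public class NestedProxyDemo {
    interface Handler { void execute(); }

    // wraps a target in a logging proxy, analogous to Plugin.wrap()
    static Handler wrap(Handler target, String name) {
        return (Handler) Proxy.newProxyInstance(
            Handler.class.getClassLoader(),
            new Class<?>[]{Handler.class},
            (proxy, method, args) -> {
                System.out.println(name + " before");        // interceptor logic before
                Object result = method.invoke(target, args); // call the next layer
                System.out.println(name + " after");         // interceptor logic after
                return result;
            });
    }

    public static void main(String[] args) {
        Handler target = () -> System.out.println("real execute()");
        // nesting mirrors pluginAll(): the last-wrapped interceptor runs first
        Handler chained = wrap(wrap(target, "interceptorA"), "interceptorB");
        chained.execute();
        // prints: interceptorB before, interceptorA before, real execute(),
        //         interceptorA after, interceptorB after
    }
}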

5.4 Summarize the 10 design patterns used in the MyBatis framework

5.4.1 SqlSessionFactoryBuilder: Why use the builder pattern to create SqlSessionFactory?

The previous user query example is as follows:

public class MyBatisDemo {
    public static void main(String[] args) throws IOException {
        Reader reader = Resources.getResourceAsReader("mybatis.xml");
        SqlSessionFactory sessionFactory = new SqlSessionFactoryBuilder().build(reader);
        SqlSession session = sessionFactory.openSession();
        UserMapper userMapper = session.getMapper(UserMapper.class);
        UserDo userDo = userMapper.selectById(8);
        //...
    }
}

When we discussed the builder pattern earlier, we used a Builder class to create objects: typically a cascade of setXXX() methods to set properties, followed by a build() call to create the object. However, creating SqlSessionFactory through SqlSessionFactoryBuilder in the code above does not follow this routine: it has no setter methods, and build() is not parameterless. Moreover, judging from the code above, the creation of the SqlSessionFactory object is not complicated. Wouldn't creating SqlSessionFactory directly through a constructor be enough? Why use the builder pattern?

To answer this question, we must first look at the source code of the SqlSessionFactoryBuilder class. The source code is as follows:

public class SqlSessionFactoryBuilder {
    public SqlSessionFactory build(Reader reader) {
        return build(reader, null, null);
    }
    public SqlSessionFactory build(Reader reader, String environment) {
        return build(reader, environment, null);
    }
    public SqlSessionFactory build(Reader reader, Properties properties) {
        return build(reader, null, properties);
    }
    public SqlSessionFactory build(Reader reader, String environment, Properties properties) {
        try {
            XMLConfigBuilder parser = new XMLConfigBuilder(reader, environment, properties);
            return build(parser.parse());
        } catch (Exception e) {
            throw ExceptionFactory.wrapException("Error building SqlSession.", e);
        } finally {
            ErrorContext.instance().reset();
            try {
                reader.close();
            } catch (IOException e) {
                // Intentionally ignore. Prefer previous error.
            }
        }
    }
    public SqlSessionFactory build(InputStream inputStream) {
        return build(inputStream, null, null);
    }
    public SqlSessionFactory build(InputStream inputStream, String environment) {
        return build(inputStream, environment, null);
    }
    public SqlSessionFactory build(InputStream inputStream, Properties properties) {
        return build(inputStream, null, properties);
    }
    public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties) {
        try {
            XMLConfigBuilder parser = new XMLConfigBuilder(inputStream, environment, properties);
            return build(parser.parse());
        } catch (Exception e) {
            throw ExceptionFactory.wrapException("Error building SqlSession.", e);
        } finally {
            ErrorContext.instance().reset();
            try {
                inputStream.close();
            } catch (IOException e) {
                // Intentionally ignore. Prefer previous error.
            }
        }
    }
    public SqlSessionFactory build(Configuration config) {
        return new DefaultSqlSessionFactory(config);
    }
}

There are a large number of overloaded build() methods. For easy viewing and for comparison with the SqlSessionFactory class later, the definitions of the overloaded functions are abstracted as follows:

public class SqlSessionFactoryBuilder {
    public SqlSessionFactory build(Reader reader);
    public SqlSessionFactory build(Reader reader, String environment);
    public SqlSessionFactory build(Reader reader, Properties properties);
    public SqlSessionFactory build(Reader reader, String environment, Properties properties);
    public SqlSessionFactory build(InputStream inputStream);
    public SqlSessionFactory build(InputStream inputStream, String environment);
    public SqlSessionFactory build(InputStream inputStream, Properties properties);
    public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties);
    // all of the methods above eventually call this one
    public SqlSessionFactory build(Configuration config);
}

If a class contains many member variables, and constructing an object does not require setting all of them but only a selective few, then meeting such construction requirements would need multiple constructors with different parameter lists. To avoid too many constructors with overly long parameter lists, this is generally solved with a no-argument constructor plus setter methods, or with the builder pattern
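
For contrast, here is a minimal sketch of that "standard" builder routine (ResourcePoolConfig and its fields are invented for illustration): cascaded setters, validation, and a final build():

public class ResourcePoolConfig {
    private final String name;
    private final int maxTotal;

    private ResourcePoolConfig(Builder builder) {
        this.name = builder.name;
        this.maxTotal = builder.maxTotal;
    }

    public static class Builder {
        private String name;
        private int maxTotal = 8; // default value

        public Builder setName(String name) {
            this.name = name;
            return this; // returning this enables the cascading style
        }

        public Builder setMaxTotal(int maxTotal) {
            this.maxTotal = maxTotal;
            return this;
        }

        public ResourcePoolConfig build() {
            // validation happens once, right before object creation
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("name must be set");
            }
            return new ResourcePoolConfig(this);
        }
    }
}
// usage: new ResourcePoolConfig.Builder().setName("db").setMaxTotal(16).build();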

From the original intent of the builder pattern, although SqlSessionFactoryBuilder carries the Builder suffix, don't be misled by the name: it is not a standard builder pattern. On the one hand, constructing the original class SqlSessionFactory requires only one parameter and is not complicated. On the other hand, the Builder class SqlSessionFactoryBuilder still defines many build() methods with different parameter lists

In fact, the design intent of SqlSessionFactoryBuilder is simply to simplify development. Building a SqlSessionFactory requires first building a Configuration, which is complicated and involves a lot of work: reading and parsing configuration, creating many objects, and so on. To hide the construction process of SqlSessionFactory and keep it transparent to programmers, MyBatis designed the SqlSessionFactoryBuilder class to encapsulate these construction details

5.4.2 SqlSessionFactory: Does it belong to the factory pattern or the builder pattern?

In the MyBatis sample code above, SqlSessionFactory is created through SqlSessionFactoryBuilder, and SqlSession is then created through SqlSessionFactory. Having covered SqlSessionFactoryBuilder, let's now look at SqlSessionFactory

As you may have guessed from the name, SqlSessionFactory is a factory class and the design pattern used is the factory pattern. However, like SqlSessionFactoryBuilder, its name is misleading: it is not a standard factory pattern either. Why? First look at the source code of the SqlSessionFactory class

public interface SqlSessionFactory {
    SqlSession openSession();
    SqlSession openSession(boolean autoCommit);
    SqlSession openSession(Connection connection);
    SqlSession openSession(TransactionIsolationLevel level);
    SqlSession openSession(ExecutorType execType);
    SqlSession openSession(ExecutorType execType, boolean autoCommit);
    SqlSession openSession(ExecutorType execType, TransactionIsolationLevel level);
    SqlSession openSession(ExecutorType execType, Connection connection);
    Configuration getConfiguration();
}

SqlSessionFactory is an interface, and DefaultSqlSessionFactory is its only implementation class. DefaultSqlSessionFactory source code is as follows:

public class DefaultSqlSessionFactory implements SqlSessionFactory {
    private final Configuration configuration;
    public DefaultSqlSessionFactory(Configuration configuration) {
        this.configuration = configuration;
    }
    @Override
    public SqlSession openSession() {
        return openSessionFromDataSource(configuration.getDefaultExecutorType(), null, false);
    }
    @Override
    public SqlSession openSession(boolean autoCommit) {
        return openSessionFromDataSource(configuration.getDefaultExecutorType(), null, autoCommit);
    }
    @Override
    public SqlSession openSession(ExecutorType execType) {
        return openSessionFromDataSource(execType, null, false);
    }
    @Override
    public SqlSession openSession(TransactionIsolationLevel level) {
        return openSessionFromDataSource(configuration.getDefaultExecutorType(), level, false);
    }
    @Override
    public SqlSession openSession(ExecutorType execType, TransactionIsolationLevel level) {
        return openSessionFromDataSource(execType, level, false);
    }
    @Override
    public SqlSession openSession(ExecutorType execType, boolean autoCommit) {
        return openSessionFromDataSource(execType, null, autoCommit);
    }
    @Override
    public SqlSession openSession(Connection connection) {
        return openSessionFromConnection(configuration.getDefaultExecutorType(), connection);
    }
    @Override
    public SqlSession openSession(ExecutorType execType, Connection connection) {
        return openSessionFromConnection(execType, connection);
    }
    @Override
    public Configuration getConfiguration() {
        return configuration;
    }
    private SqlSession openSessionFromDataSource(ExecutorType execType, TransactionIsolationLevel level, boolean autoCommit) {
        Transaction tx = null;
        try {
            final Environment environment = configuration.getEnvironment();
            final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
            tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
            final Executor executor = configuration.newExecutor(tx, execType);
            return new DefaultSqlSession(configuration, executor, autoCommit);
        } catch (Exception e) {
            closeTransaction(tx); // may have fetched a connection so lets call close()
            throw ExceptionFactory.wrapException("Error opening session. Cause: " + e, e);
        } finally {
            ErrorContext.instance().reset();
        }
    }
    private SqlSession openSessionFromConnection(ExecutorType execType, Connection connection) {
        try {
            boolean autoCommit;
            try {
                autoCommit = connection.getAutoCommit();
            } catch (SQLException e) {
                // Failover to true, as most poor drivers
                // or databases won't support transactions
                autoCommit = true;
            }
            final Environment environment = configuration.getEnvironment();
            final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
            final Transaction tx = transactionFactory.newTransaction(connection);
            final Executor executor = configuration.newExecutor(tx, execType);
            return new DefaultSqlSession(configuration, executor, autoCommit);
        } catch (Exception e) {
            throw ExceptionFactory.wrapException("Error opening session. Cause: " + e, e);
        } finally {
            ErrorContext.instance().reset();
        }
    }
    //...omitted...
}

From the source code of SqlSessionFactory and DefaultSqlSessionFactory, its design is very similar to the SqlSessionFactoryBuilder just discussed: by overloading multiple openSession() functions, it supports creating SqlSession objects with different combinations of parameters such as autoCommit, Executor, and Transaction. A standard factory pattern uses a type to create different subclass objects of the same parent class, whereas here different parameters merely create objects of the same class. So it looks more like the builder pattern

Although the design ideas are basically the same, one is called xxxBuilder (SqlSessionFactoryBuilder) and the other xxxFactory (SqlSessionFactory); moreover, the xxxBuilder is not a standard builder pattern, and the xxxFactory is not a standard factory pattern. Therefore, I personally think this part of MyBatis's design is still worth optimizing

In fact, the only role of these two classes is to create SqlSession objects. Therefore, it would be better to follow Spring's design ideas and merge the logic of SqlSessionFactoryBuilder and SqlSessionFactory into a single "ApplicationContext"-like class that takes full responsibility for reading the configuration file, creating the Configuration, and generating SqlSessions; a hedged sketch follows
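
A hedged sketch of that suggestion (the class name MyBatisApplicationContext is hypothetical; it only wraps the real MyBatis API shown earlier behind one entry point):

import java.io.IOException;
import java.io.Reader;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class MyBatisApplicationContext {
    private final SqlSessionFactory sessionFactory;

    public MyBatisApplicationContext(String configLocation) throws IOException {
        // reading the config and building the factory are fully encapsulated here
        Reader reader = Resources.getResourceAsReader(configLocation);
        this.sessionFactory = new SqlSessionFactoryBuilder().build(reader);
    }

    public SqlSession openSession() {
        return sessionFactory.openSession();
    }
}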

5.4.3 BaseExecutor: What is the difference between the template pattern and ordinary inheritance?

If you check the source code of SqlSession and DefaultSqlSession, you will find that SqlSession delegates the business logic of executing SQL to Executor; the Executor-related classes are the ones that actually execute SQL

Among them, Executor itself is an interface; BaseExecutor is an abstract class that implements the Executor interface; and BatchExecutor, SimpleExecutor, and ReuseExecutor inherit the BaseExecutor abstract class

Are BatchExecutor, SimpleExecutor, and ReuseExecutor related to BaseExecutor by simple inheritance, or by the template pattern? How do we judge? Just look at the source code of BaseExecutor

public abstract class BaseExecutor implements Executor {
    //... other irrelevant code omitted ...
    @Override
    public int update(MappedStatement ms, Object parameter) throws SQLException {
        ErrorContext.instance().resource(ms.getResource()).activity("executing an update");
        if (closed) {
            throw new ExecutorException("Executor was closed.");
        }
        clearLocalCache();
        return doUpdate(ms, parameter);
    }

    public List<BatchResult> flushStatements(boolean isRollBack) throws SQLException {
        if (closed) {
            throw new ExecutorException("Executor was closed.");
        }
        return doFlushStatements(isRollBack);
    }

    private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds,
            ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
        List<E> list;
        localCache.putObject(key, EXECUTION_PLACEHOLDER);
        try {
            list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
        } finally {
            localCache.removeObject(key);
        }
        localCache.putObject(key, list);
        if (ms.getStatementType() == StatementType.CALLABLE) {
            localOutputParameterCache.putObject(key, parameter);
        }
        return list;
    }

    @Override
    public <E> Cursor<E> queryCursor(MappedStatement ms, Object parameter, RowBounds rowBounds) throws SQLException {
        BoundSql boundSql = ms.getBoundSql(parameter);
        return doQueryCursor(ms, parameter, rowBounds, boundSql);
    }

    protected abstract int doUpdate(MappedStatement ms, Object parameter) throws SQLException;
    protected abstract List<BatchResult> doFlushStatements(boolean isRollback) throws SQLException;
    protected abstract <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds,
            ResultHandler resultHandler, BoundSql boundSql) throws SQLException;
    protected abstract <E> Cursor<E> doQueryCursor(MappedStatement ms, Object parameter, RowBounds rowBounds,
            BoundSql boundSql) throws SQLException;
}

The template pattern implements code reuse based on inheritance. If an abstract class contains a template method, and the template method calls abstract methods to be implemented by subclasses, then this is generally a code implementation of the template pattern. Moreover, in terms of naming, there is usually a one-to-one correspondence between a template method and its abstract method, with the abstract method carrying an extra "do" prefix. For example, in the BaseExecutor class, one of the template methods is called update(), and its corresponding abstract method is called doUpdate()
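To make the contrast with plain inheritance concrete, here is a minimal, self-contained sketch (not MyBatis source; all names are invented for illustration). The parent class fixes the skeleton of the algorithm and calls a do-prefixed hook that each subclass fills in:

// Template: the parent owns the invariant steps (state checks, post-processing);
// subclasses only supply the variable step via doExecute().
abstract class TaskRunner {
    private boolean closed = false;

    // Template method: the skeleton is fixed here.
    public final String run(String input) {
        if (closed) {
            throw new IllegalStateException("Runner was closed.");
        }
        String result = doExecute(input);  // variable step, supplied by subclass
        return "[" + result + "]";         // invariant post-processing
    }

    // Hook for subclasses; mirrors the update()/doUpdate() naming convention.
    protected abstract String doExecute(String input);
}

class UpperCaseRunner extends TaskRunner {
    @Override
    protected String doExecute(String input) {
        return input.toUpperCase();
    }
}

With plain inheritance a subclass could override run() and change the skeleton itself; in the template pattern run() stays fixed (here enforced with final) and only the do-hooks vary, which is exactly the relationship between update() and doUpdate() in BaseExecutor.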

5.4.4 SqlNode: How to use the interpreter pattern to parse dynamic SQL?

Writing dynamic SQL in configuration files is a very powerful feature of MyBatis. The so-called dynamic SQL means that syntax tags such as trim, if, #{} can be included in SQL, and different SQL can be generated according to conditions at runtime, as in the following example:

<update id="update" parameterType="com.xzg.cd.a89.User">
    update user
    <trim prefix="SET" prefixOverrides=",">
        <if test="name != null and name != ''">name = #{name}</if>
        <if test="age != null and age != ''">, age = #{age}</if>
        <if test="birthday != null and birthday != ''">, birthday = #{birthday}</if>
    </trim>
    where id = ${id}
</update>

Obviously, the grammar rules of dynamic SQL are custom-defined by MyBatis. To replace the dynamic elements according to these rules and generate a real, executable SQL statement, MyBatis needs to implement a corresponding interpreter. For example, if only name and age are non-empty at runtime, the statement above would render roughly as `update user SET name = ?, age = ? where id = ...`

This part of the functionality can be regarded as an application of the interpreter pattern. When the interpreter pattern interprets grammar rules, it generally splits the rules into small units, especially units that can be nested, parses each small unit, and finally merges the parse results. MyBatis is no exception here: it calls each grammatical unit an SqlNode. The definition of SqlNode looks like this:

public interface SqlNode {
    boolean apply(DynamicContext context);
}

For different grammatical units, MyBatis defines different SqlNode implementation classes

(figure: the SqlNode implementation classes for the different grammatical units)

The entry point of the whole interpreter is the DynamicSqlSource.getBoundSql() method, which calls rootSqlNode.apply(context)
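For a flavor of how one grammatical unit interprets itself, here is a simplified sketch in the spirit of MyBatis's IfSqlNode (heavily reduced, not the actual source; the expression evaluation is stubbed out):

// Simplified sketch of an <if> node: if its test expression evaluates to true,
// it delegates to its child node(s), so nested units compose recursively.
public class SimpleIfSqlNode implements SqlNode {
    private final String test;      // e.g. "name != null and name != ''"
    private final SqlNode contents; // the nested SQL fragment

    public SimpleIfSqlNode(SqlNode contents, String test) {
        this.test = test;
        this.contents = contents;
    }

    @Override
    public boolean apply(DynamicContext context) {
        if (evaluateBoolean(test, context)) {
            contents.apply(context); // interpret the nested unit
            return true;
        }
        return false;
    }

    private boolean evaluateBoolean(String expression, DynamicContext context) {
        // Placeholder: real MyBatis evaluates the OGNL expression
        // against context.getBindings() here.
        return true;
    }
}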

5.4.5 ErrorContext: How to implement a thread-unique singleton pattern?

As mentioned when discussing the singleton pattern, a standard singleton is unique within a process. Several variants were also covered, such as the thread-unique singleton and the cluster-unique singleton. In MyBatis, the ErrorContext class is such a variant of the standard singleton: a thread-unique singleton

The code is implemented as follows. It is based on Java's ThreadLocal; in fact, the ThreadLocal here plays the same role as the ConcurrentHashMap used in the earlier hand-rolled implementation

public class ErrorContext {
    private static final String LINE_SEPARATOR = System.getProperty("line.separator", "\n");
    private static final ThreadLocal<ErrorContext> LOCAL = new ThreadLocal<ErrorContext>();
    private ErrorContext stored;
    private String resource;
    private String activity;
    private String object;
    private String message;
    private String sql;
    private Throwable cause;

    private ErrorContext() {
    }

    public static ErrorContext instance() {
        ErrorContext context = LOCAL.get();
        if (context == null) {
            context = new ErrorContext();
            LOCAL.set(context);
        }
        return context;
    }
}
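A quick way to observe the "unique per thread" behavior (a hypothetical demo, not part of MyBatis):

// Each thread gets its own ErrorContext; the same thread always gets the same one.
Runnable task = () -> {
    ErrorContext first = ErrorContext.instance();
    ErrorContext second = ErrorContext.instance();
    System.out.println(Thread.currentThread().getName()
            + ": same within thread = " + (first == second)); // always true
};
new Thread(task, "t1").start();
new Thread(task, "t2").start(); // t1 and t2 each hold distinct instances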

5.4.6 Cache: Why use the decorator pattern instead of inheritance?

MyBatis is an ORM framework. It does not merely convert between objects and database rows; it also provides many other features, such as caching and transactions. Next, let's look at its cache implementation

In MyBatis, the caching functionality is defined by the Cache interface. The PerpetualCache class is the most basic cache implementation, a cache of unlimited size. In addition, MyBatis designed 9 decorator classes that wrap a Cache (typically a PerpetualCache) to enhance its functionality: FifoCache, LoggingCache, LruCache, ScheduledCache, SerializedCache, SoftCache, SynchronizedCache, WeakCache, and TransactionalCache

public interface Cache {
    String getId();
    void putObject(Object key, Object value);
    Object getObject(Object key);
    Object removeObject(Object key);
    void clear();
    int getSize();
    ReadWriteLock getReadWriteLock();
}

public class PerpetualCache implements Cache {
    private final String id;
    private Map<Object, Object> cache = new HashMap<Object, Object>();

    public PerpetualCache(String id) {
        this.id = id;
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public int getSize() {
        return cache.size();
    }

    @Override
    public void putObject(Object key, Object value) {
        cache.put(key, value);
    }

    @Override
    public Object getObject(Object key) {
        return cache.get(key);
    }

    @Override
    public Object removeObject(Object key) {
        return cache.remove(key);
    }

    @Override
    public void clear() {
        cache.clear();
    }

    @Override
    public ReadWriteLock getReadWriteLock() {
        return null;
    }
    //... some code omitted ...
}

The code structures of these 9 decorator classes are all similar; only the source code of LruCache is shown here. As can be seen from the code, it is a standard implementation of the decorator pattern

public class LruCache implements Cache {
    private final Cache delegate;
    private Map<Object, Object> keyMap;
    private Object eldestKey;

    public LruCache(Cache delegate) {
        this.delegate = delegate;
        setSize(1024);
    }

    @Override
    public String getId() {
        return delegate.getId();
    }

    @Override
    public int getSize() {
        return delegate.getSize();
    }

    public void setSize(final int size) {
        keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) {
            private static final long serialVersionUID = 4267176411845948333L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                boolean tooBig = size() > size;
                if (tooBig) {
                    eldestKey = eldest.getKey();
                }
                return tooBig;
            }
        };
    }

    @Override
    public void putObject(Object key, Object value) {
        delegate.putObject(key, value);
        cycleKeyList(key);
    }

    @Override
    public Object getObject(Object key) {
        keyMap.get(key); // touch
        return delegate.getObject(key);
    }

    @Override
    public Object removeObject(Object key) {
        return delegate.removeObject(key);
    }

    @Override
    public void clear() {
        delegate.clear();
        keyMap.clear();
    }

    @Override
    public ReadWriteLock getReadWriteLock() {
        return null;
    }

    private void cycleKeyList(Object key) {
        keyMap.put(key, key);
        if (eldestKey != null) {
            delegate.removeObject(eldestKey);
            eldestKey = null;
        }
    }
}

The reason MyBatis uses the decorator pattern to implement the cache functionality is that the decorator pattern uses composition instead of inheritance, which is more flexible and effectively avoids the combinatorial explosion of subclasses that an inheritance hierarchy would produce, as the composition below illustrates
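For example, arbitrary combinations of enhancements can be assembled at runtime simply by nesting decorators (a hypothetical composition; the class names are the real MyBatis decorators listed above):

// An LRU-evicting, logging, thread-safe cache built by stacking decorators.
// Achieving every such combination via inheritance would require one subclass per combination.
Cache cache = new SynchronizedCache(
                  new LoggingCache(
                      new LruCache(
                          new PerpetualCache("user-cache"))));
cache.putObject("user:1", "Alice");
Object user = cache.getObject("user:1");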

5.4.7 PropertyTokenizer: How to use the iterator pattern to implement a property resolver?

The iterator pattern is often used to replace a for loop for traversing collection elements. MyBatis's PropertyTokenizer class implements Java's Iterator interface and serves as an iterator for parsing configuration properties. The specific code is as follows:

// person[0].birthdate.year is decomposed into 3 PropertyTokenizer objects;
// the field values of the first one are shown in the comments below.
public class PropertyTokenizer implements Iterator<PropertyTokenizer> {
    private String name; // person
    private final String indexedName; // person[0]
    private String index; // 0
    private final String children; // birthdate.year

    public PropertyTokenizer(String fullname) {
        int delim = fullname.indexOf('.');
        if (delim > -1) {
            name = fullname.substring(0, delim);
            children = fullname.substring(delim + 1);
        } else {
            name = fullname;
            children = null;
        }
        indexedName = name;
        delim = name.indexOf('[');
        if (delim > -1) {
            index = name.substring(delim + 1, name.length() - 1);
            name = name.substring(0, delim);
        }
    }

    public String getName() {
        return name;
    }

    public String getIndex() {
        return index;
    }

    public String getIndexedName() {
        return indexedName;
    }

    public String getChildren() {
        return children;
    }

    @Override
    public boolean hasNext() {
        return children != null;
    }

    @Override
    public PropertyTokenizer next() {
        return new PropertyTokenizer(children);
    }

    @Override
    public void remove() {
        throw new UnsupportedOperationException(
                "Remove is not supported, as it has no meaning in the context of properties.");
    }
}

In fact, the PropertyTokenizer class is not a standard iterator class. It couples configuration parsing, the parsed elements, and the iterator, which would normally be split into three classes, into one class, so it is a little hard to understand. However, the advantage of doing so is lazy parsing: there is no need to parse the whole expression into multiple PropertyTokenizer objects in advance; the next part of the expression is parsed only when the next() function is called
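A small hypothetical usage example of this lazy iteration:

// Walk "person[0].birthdate.year" token by token; each next() call
// parses only the remaining suffix of the expression.
PropertyTokenizer prop = new PropertyTokenizer("person[0].birthdate.year");
while (prop.hasNext()) {
    System.out.println(prop.getIndexedName()); // prints "person[0]", then "birthdate"
    prop = prop.next();
}
System.out.println(prop.getName()); // prints "year"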

5.4.8 Log: How to use the adapter pattern to adapt to different log frameworks?

As mentioned when discussing the adapter pattern, the Slf4j framework provides a unified logging interface in order to unify the various logging frameworks (Log4j, JCL, Logback, etc.). However, MyBatis does not directly use the unified logging facade provided by Slf4j; instead, it reinvents the wheel and defines its own set of logging interfaces

public interface Log {
    boolean isDebugEnabled();
    boolean isTraceEnabled();
    void error(String s, Throwable e);
    void error(String s);
    void debug(String s);
    void trace(String s);
    void warn(String s);
}

For the Log interface, MyBatis also provides various implementation classes, using different log frameworks to implement the Log interface

(figure: the Log implementation classes for the different logging frameworks)

The code structures of these implementation classes are basically the same; the source code of Log4jImpl is shown below. In the standard adapter pattern, an object of the adaptee class is passed into the adapter's constructor, whereas here only a clazz string (effectively the logger name) is passed in. So, strictly in terms of code implementation, this is not a textbook adapter pattern. From the perspective of the application scenario, however, it does play an adapting role, which is a typical use case for the adapter pattern.

import org.apache.ibatis.logging.Log;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jImpl implements Log {
    private static final String FQCN = Log4jImpl.class.getName();
    private final Logger log;

    public Log4jImpl(String clazz) {
        log = Logger.getLogger(clazz);
    }

    @Override
    public boolean isDebugEnabled() {
        return log.isDebugEnabled();
    }

    @Override
    public boolean isTraceEnabled() {
        return log.isTraceEnabled();
    }

    @Override
    public void error(String s, Throwable e) {
        log.log(FQCN, Level.ERROR, s, e);
    }

    @Override
    public void error(String s) {
        log.log(FQCN, Level.ERROR, s, null);
    }

    @Override
    public void debug(String s) {
        log.log(FQCN, Level.DEBUG, s, null);
    }

    @Override
    public void trace(String s) {
        log.log(FQCN, Level.TRACE, s, null);
    }

    @Override
    public void warn(String s) {
        log.log(FQCN, Level.WARN, s, null);
    }
}
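In practice, the concrete Log implementation is selected through MyBatis's org.apache.ibatis.logging.LogFactory, which can auto-detect an available framework or be forced to use a specific one. A brief usage sketch (to the best of my knowledge these LogFactory methods exist, but verify against your MyBatis version):

import org.apache.ibatis.logging.Log;
import org.apache.ibatis.logging.LogFactory;

public class LogSelectionDemo {
    public static void main(String[] args) {
        // Force the Log4j adapter instead of letting LogFactory auto-detect one
        LogFactory.useLog4JLogging();
        Log log = LogFactory.getLog(LogSelectionDemo.class);
        log.debug("MyBatis is now routing log calls through Log4jImpl");
    }
}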
