Why is VS Code so awesome?

 



Author: Li Shaoxia

Source: https://zhuanlan.zhihu.com/p/35303567

Reprinted with permission

Visual Studio Code (VS Code) has grown explosively in recent years and has become an indispensable tool in many developers' toolboxes. As an open-source project it has also attracted countless third-party developers and end users, making it one of the top open-source projects. It offers enough functionality, is pleasant to use, and stays simple and smooth even with a huge number of plug-ins installed, which is genuinely impressive.

I am a VS Code user, and I also develop plug-ins for it; many of the Java plug-ins in the marketplace are the work of our team. In my daily work I have therefore observed many of VS Code's engineering highlights. Let's discuss them one by one.

Concise and focused product positioning throughout

Did you know that the VS Code development team has only twenty-odd people?

It's hard to believe: everyone thinks VS Code can do everything, so how could such a powerful tool be built by so few people? In fact, the rich feature set is a beautiful illusion, because most of the functionality for specific programming languages and technologies is provided by third-party plug-ins. The core of VS Code has always stayed very lean, which tests the product team's judgment: do too much and the product becomes bloated and the team lacks the manpower; do too little and the product is too weak and nobody will use it.

The team chose to focus on developing core functionality that gives users a simple and smooth experience, and this idea runs through every stage of product development. In my opinion, this is the first highlight.

This first highlight is also a difficulty, because "simplicity" is, after all, only the product's outward shape; the more critical question comes before it: what is the product's positioning, and what problem does it solve? From the user's point of view, this translates into a few questions: why do we need a new tool? Is it a code editor or an integrated development environment (IDE)? Let's see what project lead Erich Gamma has to say:

(Video screenshot: Erich explains the positioning of VS Code: editor + code understanding + debugging)

This screenshot captures the positioning of VS Code: editor + code understanding + debugging. It is a very restrained and well-balanced choice that focuses on developers' most commonly used functions while keeping the product simple and efficient. Judging from the results, this positioning has been quite successful.

Guided by this positioning, these twenty-odd engineers built VS Code. The relatively small feature set lets the developers strive for excellence in code quality, and end users get a tool with excellent performance. This is an important reason why VS Code stands out among editors.

It is precisely this restraint in product positioning and team responsibilities that lets team members spend their time on the problems that matter and write code that stands the test of time.

At the same time, the small team keeps everyone's behavior consistent, which is particularly evident in community interaction. Go to GitHub and look at their issues: requests and feedback beyond the scope of the product positioning are almost all declined or handed over to third-party plug-in projects, which is remarkably disciplined.

At this point everything seems fine, but here comes the problem: there are thousands of kinds of programmers. You use Node and I use Go; you do front end and I do back end. How does VS Code satisfy all these different needs? You will quickly answer: a massive number of plug-ins. So let's dig into how VS Code manages a huge plug-in ecosystem.

A process-isolated plug-in model

Extending functionality through plug-ins is commonplace, but how do you ensure that plug-ins are as good as native functionality? History tells us: there are no guarantees.

Take Eclipse as an example. Its plug-in model is about as thorough as it gets, and at the functional level plug-ins can do anything, but it comes with several annoying problems: instability, poor usability, and slowness, so many users switched to IntelliJ. You could say plug-ins made Eclipse, and plug-ins broke it.

The essence of the problem is information asymmetry: different teams write code with inconsistent ideas and quality, and in the end users get a messy, laggy product. Making plug-ins match native functionality in stability, speed, and experience can therefore only remain a good wish.

Let's look at how other IDEs do it. Visual Studio handles everything itself and leaves others little to do: it works out of the box, and plug-ins are optional. Doing everything yourself sounds great, but do you know there is an engineering team of thousands of people behind Visual Studio? Obviously, that is not something VS Code can afford. They chose to let everyone write plug-ins, so how do they avoid the problems Eclipse ran into?

Here is a bit of trivia: the developers of the core part of Eclipse are the early VS Code team. So they did not step into the same river twice. Unlike Eclipse, VS Code chose to lock its plug-ins in a cage.

The first problem this solves is stability, which is especially important for VS Code. We all know that VS Code is built on Electron, which is essentially a Node.js environment: single-threaded, where a crash anywhere has catastrophic consequences. So VS Code simply trusts no one: it puts plug-ins in a separate process, where they can thrash about all they want while the main program stays fine.

(Figure: plug-ins are isolated from the main process)
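To make the principle concrete, here is a minimal Node.js sketch, not VS Code's actual implementation: the main program forks a separate process for extension code and talks to it only through messages, so a crash over there cannot take the main program down. The script name and message shape are invented for illustration.

```typescript
import { fork } from 'child_process';

// Think of this as the main (UI) process. It starts a separate process
// for extension code, the moral equivalent of the Extension Host.
const extensionHost = fork('./extension-host.js'); // hypothetical script

// All communication happens through messages, never shared state.
extensionHost.on('message', (result) => {
  console.log('extension host replied:', result);
});

// If an extension throws and the child process dies, the main process
// survives and can simply restart the host.
extensionHost.on('exit', (code) => {
  console.warn(`extension host exited with code ${code}; restarting...`);
});

extensionHost.send({ type: 'activate', extensionId: 'example.extension' });
```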

The VS Code team's decision was not made lightly. As mentioned earlier, many team members came from the old Eclipse group, so they naturally had deep insight into the Eclipse plug-in model. One of Eclipse's design goals was to push componentization to the extreme, so many core functions are themselves implemented as plug-ins. Unfortunately, Eclipse plug-ins run in the main process, so any plug-in that performs poorly or is unstable directly drags Eclipse down. The end result is that everyone complains Eclipse is bloated, slow, and unstable. VS Code achieves physical isolation at the process level and successfully solves this problem. In fact, process-level isolation also leads to another topic: the isolation of the interface from business logic.

UI rendering isolated from business logic, for a consistent user experience

The problem after "unstable" is "hard to use": confusing interfaces and workflows. The root cause is the inconsistent interface language across plug-ins, which makes the learning curve extremely steep and leaves no unified way to get things done. VS Code's approach is to give plug-ins no opportunity to "invent" new interfaces at all.

As shown above, plug-ins are locked inside the Extension Host process, while the UI lives in the main process, so naturally plug-ins cannot manipulate the user interface directly.

VS Code controls all entry points for user interaction and defines the interaction standards. Every user operation is turned into a request of some kind and sent to the plug-in; all the plug-in can do is respond to these requests and focus on its business logic. From beginning to end, a plug-in can neither decide nor influence how interface elements are rendered (color, font, and so on, not at all), and popping up its own dialog boxes is even more out of the question.
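As a rough illustration of what this looks like from the plug-in side, here is a minimal extension sketch using the public VS Code extension API; the "plaintext" language selector and the hover text are just placeholders. The plug-in only registers a provider and returns data, and VS Code alone decides how the hover widget is drawn.

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // Register a hover provider: the extension returns data only;
  // the rendering of the hover widget stays entirely with VS Code.
  const provider = vscode.languages.registerHoverProvider('plaintext', {
    provideHover(document, position) {
      const range = document.getWordRangeAtPosition(position);
      const word = range ? document.getText(range) : '';
      return new vscode.Hover(`You are hovering over: ${word}`);
    },
  });
  context.subscriptions.push(provider);
}
```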

VS Code's control of the user interface can be described as extremely cautious, and anyone who has written a plug-in will understand. Interested readers can dig into the history of the TreeView API for a more concrete feel. At first glance this seems to tie third-party developers' hands: doesn't it limit everyone's creativity? I would say this approach is closely tied to the background of this particular team, and a different group of people could easily have failed with it. They succeeded because the team has worked in the developer-tools field for many years; they turned that experience into opinions and ultimately baked those opinions into VS Code's interface elements and interaction language. Judging from the results, those choices are very popular.

The complete isolation of interface from business logic gives all plug-ins consistent behavior, and users get a uniform experience. Not only that: this consistency of interface and behavior eventually turns into another "great" feature, Remote Development, which we will discuss later. The next thing to talk about is another VS Code innovation: the Language Server Protocol.

LSP: a text-based protocol

As mentioned earlier, two of the features in VS Code's positioning, code understanding and debugging, are mostly implemented by third-party plug-ins, and the bridge in between is a pair of protocols: the Language Server Protocol (LSP) and the Debug Adapter Protocol (DAP). The two are very similar in design, so let's focus on the more popular LSP. First, why do we need LSP at all?

Full-stack development has long been mainstream, and software practitioners are less and less confined to a single language or technology, which also poses new challenges for the tools in our hands.

For example, I use TypeScript and Node.js for the front end, write the back end in Java, and occasionally use Python for data analysis, so I would probably need a combination of several tools. Frequent switching is inefficient, both in system resource consumption and in user experience.

So is there one tool that can handle all three languages in the same workspace? Yes: VS Code, a development environment that supports multiple languages, and the foundation of that multilingual support is the Language Server Protocol (LSP).

In just a few years, the protocol has achieved unprecedented success. So far there are around a hundred implementations from major companies such as Microsoft and from the community, covering essentially all mainstream programming languages. It has also been adopted by other development tools, such as Atom, Vim, Sublime Text, Emacs, Visual Studio, and Eclipse, which proves its excellence from another angle.

What is even more commendable is that the protocol is also lightweight and fast. It can be called VS Code's killer feature, and it is one of Microsoft's most important pieces of IP. Powerful yet lightweight sounds too good to be true, so let's see how it pulls that off.

The highlights first: 1. moderate design; 2. reasonable abstraction; 3. thoughtful details.

Let's talk about design first. "Large and complete" is a very common pitfall. If I were asked to design something that supports all programming languages, my first reaction would probably be to build a superset covering every language's features.

Microsoft has made such an attempt: Roslyn, a compiler platform designed to be language-neutral, on which both the C# and VB.NET compilers are built. Everyone knows C# is very rich in language features, and the fact that Roslyn can support C# is enough to show its strength. So why has it not been widely adopted in the community? I think the root cause is the side effect of being "powerful": it is complex and opinionated. The syntax tree alone is already very complicated, and the various other features and the relationships between them are even more daunting. Ordinary developers will not casually touch such a behemoth.

In contrast, LSP clearly treats compactness as one of its design goals. It chooses to define the smallest useful subset, in keeping with the team's consistent restraint. It concerns itself with the physical entities (such as files and directories) and state (such as cursor position) that users deal with most when editing code. It makes no attempt to understand language features, and compilation is not its concern, so it naturally involves no complex concepts such as syntax trees.

Nor was it built in one step; it grew gradually as VS Code's features iterated. As a result it has stayed small since its birth, is easy to understand, and has a low implementation threshold, so it quickly gained broad community support, and Language Servers (LS) for all kinds of languages blossomed everywhere.

Small as it is, functionality cannot be missing, so the abstraction is critical. LSP's most important concepts are action and location: most LSP requests express "perform this action at this location".

For example, a user hovers the mouse over a class name to see its definition and documentation. At this point VS Code sends a 'textDocument/hover' request to the LS. The most critical information in this request is the current document and the cursor position. After receiving the request, the LS does a series of internal computations (identifying the symbol at the cursor position and finding the related documentation), works out the relevant information, and sends it back to VS Code to show to the user. These back-and-forth interactions are abstracted in LSP as requests and responses, and LSP also specifies their schema. From a developer's point of view, there are very few concepts and the interaction model is very simple, so it is easy to implement.
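Sketched below, roughly following the shape of the protocol, is what that exchange can look like on the wire; the file URI, position, and documentation text are invented for illustration.

```typescript
// VS Code -> Language Server: "what is at this location?"
const hoverRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'textDocument/hover',
  params: {
    textDocument: { uri: 'file:///project/src/App.java' }, // which document
    position: { line: 42, character: 17 },                 // where the cursor is
  },
};

// Language Server -> VS Code: the content to show in the hover widget.
const hoverResponse = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    contents: {
      kind: 'markdown',
      value: '`class App` is the application entry point.',
    },
  },
};
```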

By now everyone should have a better feel for LSP. It is essentially glue, sticking VS Code and the Language Servers for various languages together. But it is no ordinary glue; it is glue with excellent taste, and that taste shows in the details.

First of all, this is a text-based protocol, and text lowers the difficulty of understanding and debugging. Recall the success of HTTP and REST; it is hard to imagine what would have happened had this been a binary protocol. Even SOAP, which is also a text protocol, has long since faded away, which is enough to show how important "simplicity" is when building a developer ecosystem.

Secondly, this is a JSON-based protocol. JSON may be the most readable structured data format; just look at the configuration files in any code repository to see how true that is. Does anyone still use XML in new projects? Once again: "simple".

Thirdly, this is a JSON-RPC-based protocol. Thanks to JSON's popularity, every major language has excellent support for it, so developers barely need to deal with issues like serialization and deserialization. This is "simple" at the implementation level.
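As a sketch of how little wire-level work is left to the developer, here is roughly what a toy Language Server might look like with the vscode-languageserver npm package (a real package, though exact imports and signatures may vary across versions, so treat this as an assumption-laden sketch): the library owns the JSON-RPC framing and hands the handlers plain objects.

```typescript
import {
  createConnection,
  ProposedFeatures,
  TextDocuments,
  TextDocumentSyncKind,
} from 'vscode-languageserver/node';
import { TextDocument } from 'vscode-languageserver-textdocument';

// The connection reads and writes JSON-RPC messages on stdin/stdout;
// serialization and deserialization are handled for us.
const connection = createConnection(ProposedFeatures.all);
const documents = new TextDocuments(TextDocument);

// Tell the client which capabilities this toy server supports.
connection.onInitialize(() => ({
  capabilities: {
    textDocumentSync: TextDocumentSyncKind.Incremental,
    hoverProvider: true,
  },
}));

// params.textDocument.uri and params.position arrive already parsed.
connection.onHover(() => ({
  contents: { kind: 'markdown', value: 'Hello from a toy language server' },
}));

documents.listen(connection);
connection.listen();
```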

From these details you can see that the VS Code team has a very precise grasp of today's technology trends, and their decisions fully take "simplicity" into account, firmly winning the hearts of community developers. So the important thing is said three times:

Always lean towards simplicity when designing.

Always lean towards simplicity when designing.

Always lean towards simplicity when designing.

Integrated Remote Development

In May 2019, VS Code released Remote Development (VSCRD). With it, we can open a VS Code workspace in a remote environment (such as a virtual machine or container) and then connect to it from the local VS Code. The figure below illustrates how it operates:

(Figure: how VS Code Remote Development operates)

In essence, VSCRD improves the remote-development experience. Compared with the commonly used remote desktop, the specific improvements are as follows:

Fast response: all VSCRD interactions are completed in the local UI, so responses are quick; remote desktop transmits screenshots, so round-trip delays are large and lag is the norm.

Local settings apply: VSCRD's UI runs locally and follows all your local settings, so you keep the shortcuts, layout, and fonts you are used to, avoiding a hit to productivity.

Small data-transmission overhead: remote desktop transmits video data, while VS Code transmits operation requests and responses, with overhead similar to that of a command line, so lag is further reduced.

Third-party plug-ins work: in a remote workspace, not only VS Code's native features but also all third-party plug-ins remain available; with remote desktop you would have to install them one by one.

The remote file system is available: the remote file system is mapped entirely to the local side, and working with it feels almost the same as working locally.

So what magic does VSCRD perform to achieve all this? Let's look at its architecture diagram:

(Figure: VS Code Remote Development architecture, with the VS Code Server running remotely)

In fact, the answer has already appeared earlier in this article:

A process-isolated plug-in model

The Extension Host (the VS Code Server in the figure) is physically separated from the main program, so there is no essential difference between running it remotely and running it locally.

UI rendering isolated from plug-in logic, so plug-ins behave uniformly

All plug-in UI is rendered uniformly by VS Code, so a plug-in contains only pure business logic and behaves consistently; it does not matter where it runs.

Efficient protocols: LSP and DAP

VS Code's two major protocols, LSP and DAP, are very lean, which makes them naturally suited to high-latency networks and a perfect fit for remote development.

The VS Code team's architectural decisions are undoubtedly very forward-looking, and their grasp of detail is impeccable. It is precisely because of such a solid engineering foundation that features like VSCRD could be born, which is why I consider it a masterpiece.

If you haven't tried VSCRD yet, let me plug it once more; it is very useful in the following scenarios:

The development environment is cumbersome to configure, for example in IoT development, where you would otherwise have to install and configure all kinds of tools and plug-ins yourself. In VSCRD, a remote workspace template takes care of it, and if you need extra tools you just change the Dockerfile, which is very simple. Templates for common programming languages and scenarios can be found here.

The local machine is too weak for certain kinds of development, such as machine learning, where massive data and compute requirements call for a very powerful machine. In VSCRD you can operate directly on the remote file system and use remote computing resources.

Finally

VS Code is like a shining star, attracting thousands of developers to contribute to it. From its success we can see how many wonders good design and sound engineering practice can work. Across the software industry, paradigms at every level are constantly being refreshed, which is exciting but also demands that practitioners keep sharpening their skills. From the perspective of personal learning, understanding how these paradigms came about and how the engineering decisions behind them were made is very beneficial for improving one's own engineering ability.



