Performance Optimization of Vue Projects

Table of Contents

Foreword

1. Code-level optimization

 1. **v-if and v-show: pick the right one for the scenario**

 2. **computed and watch: pick the right one for the scenario**

 3. **v-for must give each item a key, and avoid combining it with v-if**

 4. **Long-list performance optimization**

 5. **Destroying event listeners**

 6. **Lazy loading of image resources**

 7. **Lazy loading of routes**

 8. **On-demand import of third-party plugins**

 9. **Optimizing infinite-list performance**

 10. **Server-side rendering (SSR) or pre-rendering**

2. Webpack-level optimization

 1. **Compressing images with Webpack**

 2. **Reducing redundant code from the ES6-to-ES5 transform**

 3. **Extracting common code**

 4. **Template precompilation**

 5. **Extracting component CSS**

 6. **Optimizing the SourceMap**

 7. **Analyzing the build output**

 8. **Compilation optimization of the Vue project**

3. Basic web technology optimization

 1. **Enabling gzip compression**

 2. **Browser caching**

 3. **Using a CDN**

 4. **Using Chrome Performance to find performance bottlenecks**

1. CDN

 1. The concept of a CDN

 2. What a CDN does

 3. The principle of a CDN

**How a CDN works:**

 4. Usage scenarios of a CDN

2. Lazy loading

3. Reflow and repaint

 1. The concepts of reflow and repaint and what triggers them

 2. How to avoid reflow and repaint?

 3. How to optimize animations?

 4. What is documentFragment? How does using it differ from operating on the DOM directly?

4. Throttling and debouncing

 1. Understanding throttling and debouncing

 2. Implementing a throttle function and a debounce function

5. Image optimization

 1. How to optimize the images in a project?

 2. Common image formats and their usage scenarios

6. Webpack optimization

 1. How to improve webpack's packaging speed?

(2) HappyPack

(3) DllPlugin

(4) Code compression

 (5) Others

 2. How to reduce the Webpack bundle size

(3) Tree Shaking

 3. How to use webpack to optimize front-end performance?

 4. How to improve webpack's build speed?

Front-end Engineering Interview Questions

1. Git

 1. Differences between git and svn

 2. Commonly used git commands?

 3. The difference between git pull and git fetch

 4. The difference between git rebase and git merge

2. Webpack

 1. How does webpack differ from grunt and gulp?

 2. Pros and cons of webpack, rollup, and parcel?

 3. What are some common Loaders?

 4. What are some common Plugins?

 5. What are bundle, chunk, and module?

 6. What is the difference between Loader and Plugin?

 7. Webpack's build process

 8. How to approach writing a loader or plugin?

 9. How does webpack hot update (HMR) work? Explain its principle

 10. How to use webpack to optimize front-end performance?

 11. How to improve webpack's packaging speed?

 12. How to improve webpack's build speed?

 13. How to configure a single-page application? How to configure a multi-page application?

3. Others

 What is the principle behind Babel?

1. Browser security

 1. What is an XSS attack?

 2. How to defend against XSS attacks?

 3. What is a CSRF attack?

 4. How to defend against CSRF attacks?

 5. What is a man-in-the-middle attack? How to prevent it?

 6. What issues can cause front-end security problems?

 7. What kinds of network hijacking are there, and how to prevent them?

2. Processes and threads

 1. The concepts of process and thread

 2. The difference between a process and a thread

 3. What threads does the browser rendering process have?

 4. Inter-process communication methods

 5. What are zombie processes and orphan processes?

 6. What causes deadlock? How to solve deadlock problems?

 7. How to implement communication between multiple browser tabs?

 8. Understanding Service Worker

3. Browser caching

 1. Understanding the browser caching mechanism

 2. Where can the browser cache resources?

 4. Why is browser caching needed?

 5. What is the difference between clicking the refresh button / F5, Ctrl+F5 (force refresh), and pressing Enter in the address bar?

4. Browser composition

 1. Understanding the browser

 2. Understanding browser engines (kernels)

 3. Comparison of common browser engines

 4. Engines used by common browsers

 5. The main components of a browser

5. Browser rendering principles

 1. The browser rendering process

 2. Browser rendering optimization

 3. How are JS files handled during rendering?

 4. What is document pre-parsing?

 5. How does CSS block document parsing?

 6. How to optimize the critical rendering path?

 7. What blocks rendering?

6. Browser local storage

 1. Browser local storage methods and usage scenarios

 2. What fields does a Cookie have, and what are they for?

 3. Differences between Cookie, LocalStorage, and SessionStorage

 4. What front-end storage methods are there?

 5. What are the characteristics of IndexedDB?

7. Browser same-origin policy

 1. What is the same-origin policy?

 2. How to solve cross-origin problems?

 3. The difference between a forward proxy and a reverse proxy

 4. The concept of Nginx and how it works

8. Browser event mechanism

 1. What is an event? What are the event models?

 2. How to stop event bubbling?

 3. Understanding event delegation

 4. Usage scenarios of event delegation

 5. The difference between synchronous and asynchronous

 6. Understanding the event loop

 7. What are macrotasks and microtasks?

 8. What is the execution stack?

 9. How does the Event Loop in Node differ from the browser's? In what order does process.nextTick execute?

 10. What is the process of triggering an event?

9. Browser garbage collection

 1. How does V8's garbage collection work?

 2. Which operations cause memory leaks?

1. HTTP protocol

 1. The differences between GET and POST requests

 2. The differences between POST and PUT requests

 3. Common HTTP request and response headers

 4. Is it better to have more or fewer HTTP 304 status codes?

 5. Common HTTP request methods

 6. The OPTIONS request method and its usage scenarios

 7. What are the differences between HTTP 1.0 and HTTP 1.1?

 8. The differences between HTTP 1.1 and HTTP 2.0

 9. The differences between HTTP and HTTPS

 10. Why is the length of a GET URL limited?

 11. What happens after you type Google.com into the browser and press Enter?

 12. Understanding keep-alive

 13. When a page has many images, how does HTTP load them?

 14. How does HTTP/2 header compression work?

 16. What does an HTTP response message look like?

 17. Advantages and disadvantages of the HTTP protocol

 18. Talk about HTTP 3.0

 19. How does the HTTP protocol perform?

 20. What are the components of a URL?

 21. Which HTTP request headers are related to caching?

2. HTTPS protocol

 1. What is the HTTPS protocol?

 2. How TLS/SSL works

 3. What is a digital certificate?

 4. The HTTPS communication (handshake) process

 5. Characteristics of HTTPS

 6. How does HTTPS guarantee security?

3. HTTP status codes

 1. 2XX (Success)

 2. 3XX (Redirection)

 3. 4XX (Client Error)

 4. 5XX (Server Error)

 5. Summary

 6. 307, 303, and 302 are all redirects; what is the difference?

4. The DNS protocol

 1. What is the DNS protocol?

 2. Does DNS use both TCP and UDP?

 3. The complete DNS lookup process

 4. Iterative vs. recursive queries

 5. DNS records and messages

5. Network models

 1. The OSI seven-layer model

 2. The TCP/IP five-layer protocol stack

6. TCP and UDP

 1. The concepts and characteristics of TCP and UDP

 2. The differences between TCP and UDP

 3. Usage scenarios of TCP and UDP

 4. Why is UDP unreliable?

 5. TCP's retransmission mechanism

 6. TCP's congestion control mechanism

 7. TCP's flow control mechanism

 8. TCP's reliable transmission mechanism

 9. TCP's three-way handshake and four-way teardown

 10. What is TCP packet sticking, and how to handle it?

 11. Why doesn't UDP stick packets?

7. WebSocket

 1. Understanding WebSocket

 2. Implementing instant messaging: what are the differences between short polling, long polling, SSE, and WebSocket?

1. Asynchrony & the event loop

 1. Code output result

 2. Code output result

 3. Code output result

 4. Code output result

 5. Code output result

 6. Code output result

 7. Code output result

 8. Code output result

 9. Code output result

 10. Code output result

 11. Code output result

 12. Code output result

2. this

 1. Code output result

 3. Scope & variable hoisting & closures

 4. Prototypes & inheritance

1. JavaScript basics

 1. Handwritten Object.create

 2. Handwritten instanceof method

 2. Data processing

 1. Implement a date-formatting function

 2. Swap the values of a and b without using a temporary variable

 3. Output an array in random (shuffled) order

 5. Flatten an array

 3. Scenario applications

 1. Print red, yellow, and green in a cycle

 2. Print 1, 2, 3, 4 at one-second intervals

 3. The children's counting-off problem

 4. Use Promise to load images asynchronously

 5. Implement the publish-subscribe pattern

 6. Find the most frequent words in an article

 7. Wrap an asynchronous fetch with async/await

 8. Implement prototypal inheritance

 9. Implement two-way data binding

 10. Implement a simple router

 11. Implement the Fibonacci sequence

 12. The longest substring without repeating characters

 13. Use setTimeout to implement setInterval

 14. Implement JSONP

 15. Determine whether an object has a circular reference


 Foreword

Through two-way data binding and the virtual DOM, Vue takes over the dirtiest and most tiring part of front-end development — DOM manipulation — so we no longer have to think about how to operate the DOM, or how to do so most efficiently. Even so, a Vue project still faces issues such as first-screen optimization and Webpack build configuration, so we still need to pay attention to performance optimization to make the project run more efficiently and deliver a better user experience. This article summarizes optimizations the author has applied in real projects; hopefully, after reading it, you will come away with ideas for optimizing your own projects. The content is divided into the following three parts:

Optimization at the Vue code level;

Optimization at the webpack configuration level;

Optimization at the basic web technology level.

1. Optimization at the code level

 1. **v-if and v-show: pick the right one for the scenario**

v-if is "real" conditional rendering: it ensures that event listeners and child components inside the conditional block are properly destroyed and re-created during toggling. It is also lazy: if the condition is false on the initial render, nothing is done — the conditional block is not rendered until the condition first becomes true.

v-show is much simpler: the element is always rendered regardless of the initial condition, and visibility is simply toggled via the CSS display property.

Therefore, v-if suits conditions that rarely change at runtime and do not need frequent toggling, while v-show suits conditions that toggle very frequently.
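A minimal sketch (the `showDetails` flag and the `admin-panel` component are hypothetical): `showDetails` toggles often, so v-show avoids tearing the DOM down and rebuilding it, while `isAdmin` rarely changes, so v-if avoids rendering the block at all.

```
<template>
  <div>
    <button @click="showDetails = !showDetails">Toggle details</button>
    <!-- toggled frequently: only the CSS display property changes -->
    <section v-show="showDetails">...details...</section>
    <!-- rarely changes at runtime: not rendered until isAdmin becomes true -->
    <admin-panel v-if="isAdmin" />
  </div>
</template>
```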

2. **computed and watch: pick the right one for the scenario**

computed: a computed property that depends on other reactive values. Its result is cached, and it is recalculated only after one of its dependencies changes, the next time the computed value is read;

watch: more of an "observer" — a callback attached to a piece of data. Whenever the watched data changes, the callback runs so that follow-up work can be performed;

Application scenarios:

When we need to derive a value from other data, use computed, because its caching avoids recomputing the value every time it is read;

When we need to perform asynchronous or expensive work in response to data changes, use watch. The watch option lets us run async operations (e.g. call an API), limit how often the work is performed, and set intermediate state before the final result arrives — things a computed property cannot do.
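A minimal sketch of the two use cases (the field names and the `fetchAnswer` helper are hypothetical): `fullName` is derived synchronously and benefits from computed's caching, while `question` triggers an async API call, which is what watch is for.

```
export default {
  data: () => ({ firstName: 'Foo', lastName: 'Bar', question: '', answer: '' }),
  computed: {
    // recalculated only when firstName or lastName changes; otherwise served from cache
    fullName() {
      return this.firstName + ' ' + this.lastName
    }
  },
  watch: {
    // runs on every change of `question`; can set intermediate state and call an API
    question(newVal) {
      this.answer = 'Thinking...'
      fetchAnswer(newVal).then(res => { this.answer = res }) // fetchAnswer is a placeholder
    }
  }
}
```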

 3. **v-for must give each item a key, and avoid combining it with v-if**

(1) v-for must give each item a key

When rendering list data, give every item a unique key so that Vue's internal diffing can locate each piece of list data precisely: when state updates, the new state is compared with the old one and the key lets the diff find the changed nodes quickly.

(2) Avoid using v-if together with v-for

v-for has a higher priority than v-if, so the whole array is traversed on every render even when only a small part of it needs to be shown, which hurts performance. When necessary, filter the list in a computed property instead.

Recommended:

<ul>
  <li
    v-for="user in activeUsers"
    :key="user.id">
    {{ user.name }}
  </li>
</ul>

computed: {
  activeUsers: function () {
    // only the items that actually need to be rendered
    return this.users.filter(function (user) {
      return user.isActive
    })
  }
}

Not recommended:

<ul>
  <li
    v-for="user in users"
    v-if="user.isActive"
    :key="user.id">
    {{ user.name }}
  </li>
</ul>

 4. **Long-list performance optimization**

Vue hijacks data via Object.defineProperty so that the view can react to data changes. Sometimes, however, a component is pure data display — the data will never change — and there is no need for Vue to make it reactive. With large amounts of data, skipping this step noticeably reduces component initialization time. To stop Vue from observing the data, freeze the object with Object.freeze; a frozen object can no longer be modified.

export default {
  data: () => ({
    users: []
  }),
  async created() {
    // freeze the response data so Vue skips making it reactive
    const { data } = await axios.get("/api/users");
    this.users = Object.freeze(data);
  }
};

5. **Destroying event listeners**

When a Vue component is destroyed, it automatically cleans up its connections to other instances and unbinds all of its directives and event listeners — but only those declared by the component itself. Listeners added manually in JS (for example with addEventListener) are not removed automatically, so we should remove them ourselves when the component is destroyed:

created() {
  // listener added manually, outside Vue's template system
  window.addEventListener('click', this.click, false)
},
beforeDestroy() {
  // must be removed by hand, otherwise it leaks
  window.removeEventListener('click', this.click, false)
}

 6. **Lazy loading of image resources**

For pages with many images, we can speed up page loading by not loading the images that are outside the viewport and loading them only once they scroll into view. This greatly improves load performance and the user experience. In a Vue project we can use the vue-lazyload plugin:

(1) Install the plugin

npm install vue-lazyload --save-dev

(2) Import and register it in the entry file main.js

import VueLazyload from 'vue-lazyload'

Then use it in Vue directly

Vue.use(VueLazyload)

Or register it with custom options

Vue.use(VueLazyload, {
preLoad: 1.3,
error: 'dist/error.png',
loading: 'dist/loading.gif',
attempt: 1
})

(3) In your .vue files, change the img tag's src attribute to v-lazy so that the image is displayed with lazy loading:
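For example (a sketch; `img.src` and the `list` array stand in for your real data):

```
<ul>
  <li v-for="img in list" :key="img.id">
    <!-- v-lazy loads the real image only when it scrolls into view -->
    <img v-lazy="img.src" alt="">
  </li>
</ul>
```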

The above is the basic usage of vue-lazyload; for more configuration options, see the vue-lazyload GitHub repository.

7. **Lazy loading of routes**

A Vue app is a single-page application and may declare many routes. If everything is bundled by webpack into one file, too many resources are loaded when the user opens the home page, and the page may stay blank for a while, which hurts the user experience. If we split the components of different routes into separate chunks and load the corresponding chunk only when the route is visited, the home page renders much faster (at the cost of slightly slower navigation to other pages).

Routing lazy loading:

const Foo = () => import('./Foo.vue')
const router = new VueRouter({
  routes: [
    { path: '/foo', component: Foo }
  ]
})

 8. **On-demand import of third-party plugins**

We often need third-party libraries in a project. Importing an entire library makes the bundle unnecessarily large; with babel-plugin-component we can import only the components we actually use, which reduces the bundle size. Here is an example of importing the element-ui component library:

(1) First, install babel-plugin-component:

npm install babel-plugin-component -D

(2) Then, modify .babelrc to:

{
  "presets": [["es2015", { "modules": false }]],
  "plugins": [
    [
      "component",
      {
        "libraryName": "element-ui",
        "styleLibraryName": "theme-chalk"
      }
    ]
  ]
}

(3) Import only the components you need in main.js:

import Vue from 'vue';
import { Button, Select } from 'element-ui';

 Vue.use(Button)
 Vue.use(Select)

 9. **Optimizing infinite-list performance**

If your application has very long or infinitely scrolling lists, use "windowing" (virtual scrolling): only the small visible part of the list is rendered, which reduces the time spent re-rendering components and creating DOM nodes. The open-source projects vue-virtual-scroll-list and vue-virtual-scroller can be used to optimize this kind of infinite-list scenario.
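A minimal sketch with vue-virtual-scroller (assuming a flat `items` array with a fixed row height; see that project's documentation for the full option list):

```
<template>
  <!-- only the rows inside the viewport (plus a small buffer) are actually rendered;
       the scroller itself needs an explicit height via CSS -->
  <RecycleScroller :items="items" :item-size="40" key-field="id" v-slot="{ item }">
    <div class="row">{{ item.name }}</div>
  </RecycleScroller>
</template>

<script>
import { RecycleScroller } from 'vue-virtual-scroller'
import 'vue-virtual-scroller/dist/vue-virtual-scroller.css'

export default {
  components: { RecycleScroller },
  data: () => ({ items: [] }) // e.g. thousands of { id, name } objects
}
</script>
```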

10. **Server-side rendering (SSR) or pre-rendering**

Server-side rendering means that the work Vue would normally do in the browser — rendering components into an HTML fragment — is performed on the server, and the resulting HTML is returned directly to the client. This process is called server-side rendering.

(1) Advantages of server-side rendering:

Better SEO: an SPA gets its page content via Ajax, and search-engine crawlers do not wait for asynchronous Ajax calls to finish before crawling the page, so they cannot index content fetched this way. SSR returns a fully rendered page from the server (the data is already in the page), so crawlers can index it;

Faster time-to-content (faster first screen): an SPA must download all of the compiled JS before it can start rendering, and that download takes time, so first-screen rendering is delayed. SSR renders the page on the server and returns it for display, without waiting for JS to be downloaded and executed first, so the content arrives sooner;

(2) Disadvantages of server-side rendering:

More constraints during development: for example, only the beforeCreate and created lifecycle hooks run on the server, so some third-party libraries need special handling to work in an SSR app; and unlike a fully static SPA that can be deployed on any static file server, an SSR app needs a Node.js server environment to run in;

Higher server load: rendering a full application in Node.js obviously consumes more CPU than merely serving static files, so if you expect high traffic, plan for the corresponding server load and use caching strategies wisely.

If SEO and first-screen rendering time are key metrics for your project, then you need server-side rendering to achieve the best initial load performance and SEO (see the Vue SSR guide for how to implement it). If you only need to improve the SEO of a few marketing pages (e.g. /, /about, /contact), you may want pre-rendering instead, which simply generates static HTML files for specific routes at build time. Pre-rendering is much easier to set up, and your front end remains a completely static site; prerender-spa-plugin makes it easy to add.

2. Optimization at the Webpack level

 1. **Compressing images with Webpack**

In a Vue project, besides setting a limit for url-loader in webpack.base.conf.js so that images smaller than the limit are inlined as base64 (and larger ones are left untouched), we can use image-webpack-loader to compress the larger images, which would otherwise load slowly:

(1) First, install image-webpack-loader:

npm install image-webpack-loader --save-dev

(2) Then configure it in webpack.base.conf.js:

{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use:[
    {
    loader: 'url-loader',
    options: {
      limit: 10000,
      name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true,
      }
    }
  ]
}

2. **Reducing redundant code from the ES6-to-ES5 transform**

When Babel converts ES6 code to ES5, it injects helper functions. For example, the following ES6 code:

class HelloWebpack extends Component{...}

needs these two helpers to run as ES5:

babel-runtime/helpers/createClass  // implements the class syntax
babel-runtime/helpers/inherits  // implements the extends syntax

By default, Babel embeds these helpers in every output file. If many source files depend on them, the helper code is duplicated many times. To avoid this, the helpers can be imported via require('babel-runtime/helpers/createClass') wherever they are needed, so each helper appears only once. The babel-plugin-transform-runtime plugin does exactly this: it replaces the inlined helpers with import statements, shrinking the files Babel produces.

(1) First, install babel-plugin-transform-runtime:

npm install babel-plugin-transform-runtime --save-dev

(2) Then modify the .babelrc configuration file to:

"plugins": [
    "transform-runtime"
]

For more details, see the documentation of babel-plugin-transform-runtime.

3. **Extracting common code**

If the third-party libraries and shared modules used by each page are not extracted, the project has the following problems:

The same resources are loaded repeatedly, wasting the user's bandwidth and the server's resources.

Each page has to load too many resources, so the first screen loads slowly and the user experience suffers.

So we need to separate the code shared by multiple pages into its own file. Webpack's built-in CommonsChunkPlugin is designed to extract the parts shared by multiple chunks. Our CommonsChunkPlugin configuration in the project is as follows:

// every package listed as a dependency in package.json is bundled into vendor.js
new webpack.optimize.CommonsChunkPlugin({
  name: 'vendor',
  minChunks: function(module, count) {
    return (
      module.resource &&
      /\.js$/.test(module.resource) &&
      module.resource.indexOf(
        path.join(__dirname, '../node_modules')
      ) === 0
    );
  }
}),
// extract the webpack runtime / module mapping into a separate manifest chunk
new webpack.optimize.CommonsChunkPlugin({
  name: 'manifest',
  chunks: ['vendor']
})

If you want to see more details about the plugin, you can check the detailed introduction of CommonsChunkPlugin.

4. **Template Precompilation**

When using templates within the DOM or string templates within JavaScript, the templates are compiled into render functions at runtime. Usually this process is fast enough, but it is best to avoid this usage for performance-sensitive applications.

The easiest way to precompile templates is to use single-file components - the relevant build settings will automatically handle precompilation, so the built code already contains compiled render functions instead of raw template strings.

If you use webpack and like to separate JavaScript and template files, you can use vue-template-loader, which can also convert template files into JavaScript render functions during the build process.

5. **Extract component CSS**

When using single-file components, the CSS inside a component is injected dynamically through JavaScript as style tags. This has a small runtime cost, and with server-side rendering it causes a "flash of unstyled content" (FOUC). Extracting the CSS of all components into one file avoids the problem and also allows the CSS to be compressed and cached better.

See the respective documentation for the build tools to learn more:

webpack + vue-loader (the vue-cli webpack template is already preconfigured for this)

Browserify + vueify

Rollup + rollup-plugin-vue

 6. **Optimize SourceMap**

After building the project, the many files we work with in development are bundled into one file, which is then minified, stripped of whitespace, and compiled by Babel before it is shipped to production. The processed code therefore differs greatly from the source code: when a bug occurs we can only locate the position in the compressed code, not in the original development code, which makes debugging hard. SourceMap exists to solve exactly this problem of hard-to-debug built code.

The possible SourceMap values are graded by speed (more + signs mean faster, more - signs mean slower, o means medium).

Recommended for development: cheap-module-eval-source-map

Recommended for production: cheap-module-source-map

The reasons are as follows:

cheap: column information in the source code is of no use to us, so the bundled files do not need to include it — row information alone is enough to map between the original and bundled code. Whether in development or production, we add the cheap modifier to ignore column information;

module: in both development and production we want to locate the exact source of a bug. For example, when a Vue file throws an error, we want to be pointed at that Vue file, so we also need the module option;

source-map: source-map generates a standalone .map file for every bundled module, so we include the source-map part;

eval-source-map: eval bundles code extremely fast because it generates no map file, but combined as eval-source-map the map is embedded in the bundled JS as a DataURL. Do not use eval-source-map in production, because it increases file size; in development, however, it is worth using because builds are fast.
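In a vue-cli style setup this boils down to one line per webpack config (a sketch; the devtool values are just the ones recommended above):

```
// webpack.dev.conf.js — development build
module.exports = {
  devtool: 'cheap-module-eval-source-map' // fast rebuilds, maps errors to original source lines
}

// webpack.prod.conf.js — the production build would instead use:
// devtool: 'cheap-module-source-map'     // separate .map files, no eval in the bundle
```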

 7. **Analyzing the build output**

The code Webpack outputs is hard to read and the files are large, which makes analysis painful. To analyze the output more simply and intuitively, the community has produced several visualization tools that present the result graphically so we can quickly see where the problems are. In our Vue project we use webpack-bundle-analyzer.

We configure it in webpack.prod.conf.js:

if (config.build.bundleAnalyzerReport) {
  var BundleAnalyzerPlugin =   require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
  webpackConfig.plugins.push(new BundleAnalyzerPlugin());
}

Running $ npm run build --report then generates the analysis report.

 8. **Compilation optimization of the Vue project**

If compiling your Vue project with Webpack takes long enough to go get a coffee, it is probably time to optimize the project's Webpack configuration and improve Webpack's build efficiency.

3. Basic web technology optimization

 1. **Enabling gzip compression**

gzip is short for GNU zip and was originally used for file compression on UNIX systems. gzip encoding over HTTP is a technique for improving web application performance; both the web server and the client (browser) must support it. All mainstream browsers — Chrome, Firefox, IE, etc. — support it, as do common servers such as Apache, Nginx, and IIS. gzip compression is very effective, typically reaching a compression ratio of about 70%: a 30 KB page shrinks to roughly 9 KB.

Taking the familiar express as the server, enabling gzip is very simple:

Install:

npm install compression --save

Add the code:

var express = require('express');
var compression = require('compression');
var app = express();
app.use(compression())

 2. **Browser caching**

To speed up page loads for users, caching static resources is essential. Based on whether a new request must be sent to the server, HTTP caching rules fall into two categories: strong (forced) caching and negotiated (conditional) caching.
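For example, with the express server used above, strong caching can be configured for fingerprinted static assets while HTML stays on negotiated caching (a sketch; paths and durations are placeholders):

```
var express = require('express');
var app = express();

// Strong cache: hashed assets (e.g. app.8f3c2.js) can safely be cached for a long time
app.use('/static', express.static('dist/static', { maxAge: '30d' }));

// Negotiated cache: HTML is revalidated with the server via ETag / Last-Modified
app.use(express.static('dist', { etag: true, lastModified: true, maxAge: 0 }));

app.listen(3000);
```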

 3. **Using a CDN**

When the browser downloads CSS, JS, and image files it has to connect to the server, and most servers have limited bandwidth; once that limit is exceeded, the page becomes slow to respond. A CDN loads files from different domains, which greatly increases the number of concurrent download connections, and CDNs also offer better availability, lower network latency, and lower packet loss.

 4. **Using Chrome Performance to find performance bottlenecks**

Chrome's Performance panel can record the details and timing of JS execution over a period of time. The steps for analyzing page performance with Chrome DevTools are:

Open Chrome DevTools and switch to the Performance panel

Click Record to start recording

Refresh the page or expand a node

Click Stop to end the recording

1. CDN

 1. The concept of a CDN

A CDN (Content Delivery Network) is a network of computers connected over the Internet that uses the server closest to each user to deliver music, images, video, applications, and other files faster and more reliably, providing high-performance, scalable, low-cost content delivery.

A typical CDN system consists of three parts:

- **Distribution service system:** its basic unit is the cache device. Edge caches respond directly to end users' requests and serve locally cached content quickly. Caches also synchronize content with the origin site, fetching updated content and content not yet stored locally. The number, scale, and total capacity of cache devices are the most basic measures of a CDN's service capability.
- **Load balancing system:** its main job is to schedule all incoming service requests and decide the final address that will actually serve each user. The two-level scheduling system consists of global load balancing (GSLB) and local load balancing (SLB). **Global load balancing** mainly follows a proximity principle, evaluating each service node to determine the "best" physical cache location to serve the user. **Local load balancing** balances the load among devices within a node.
- **Operation and management system:** split into operation-management and network-management subsystems, it handles the collection, organization, and delivery needed to interact with external business systems, and includes customer management, product management, billing, and statistics/analytics.

 2. What a CDN does

A CDN is typically used to host web resources (text, images, scripts, etc.), downloadable resources (media files, software, documents), and applications (portals and the like), and to accelerate access to them.

(1) In terms of performance, introducing a CDN:

- The content received by the user comes from the nearest data center, with lower latency and faster content loading
- Some resource requests are allocated to CDN, reducing the load on the server

(2) In terms of security, CDN helps defend against network attacks such as DDoS and MITM:

- For DDoS: through monitoring and analyzing abnormal traffic, limit its request frequency
- For MITM: from source server to CDN node to ISP (Internet Service Provider), full-link HTTPS communication

In addition, as a basic cloud service, CDN also has the advantages of resource hosting and on-demand expansion (capable of handling traffic peaks).

 3. Principle of CDN

CDN and DNS are inextricably linked, so first let's look at DNS domain-name resolution. When www.test.com is entered in the browser, resolution proceeds as follows:

(1) Check browser cache

(2) Check the operating system cache, common such as the hosts file

(3) Check router cache

(4) If the previous steps are not found, it will query the LDNS server of the ISP (Internet Service Provider)

(5) If the LDNS server is not found, it will request the root domain name server (Root Server) for resolution, which is divided into the following steps:

- The root server returns the address of the top-level domain (TLD) server, such as `.com`, `.cn`, `.org`; in this example it returns the address of the `.com` server
- A request is then sent to that TLD server, which returns the address of the second-level domain (SLD) server; in this example, the server responsible for `test.com`
- A request is then sent to the SLD server, which returns the IP address for the queried domain, in this example the address of `www.test.com`
- The local DNS server caches the result and returns it to the user, and the result is also cached on the user's system

**How a CDN works:**

(1) The process when the user does not use a CDN to cache resources:

1. The browser resolves the domain name via DNS (the process described above) and obtains the IP address for the domain
2. Using that IP address, the browser sends the data request to the host serving the domain
3. The server returns the response data to the browser

(2) The process when the user does use a CDN to cache resources:

1. The requested URL is resolved by the local DNS system, which finds that the URL corresponds to a dedicated CDN DNS server, so resolution is handed over (via a CNAME) to that CDN DNS server.
2. The CDN's DNS server returns the IP address of the CDN's global load balancer to the user
3. The user sends the data request to the CDN's global load balancer
4. Based on the user's IP address and the requested content URL, the global load balancer selects a regional load balancer in the user's region and tells the user to send the request there
5. The regional load balancer selects a suitable cache server to provide the service and returns that cache server's IP address to the global load balancer
6. The global load balancer returns the cache server's IP address to the user
7. The user sends the request to that cache server, which responds and delivers the requested content to the user's device.

If the cache server does not have the content the user wants, it requests it from its upstream cache server, and so on, until the resource is obtained; if it is still not found, the request finally goes back to the origin server.

CNAME (canonical name, i.e. an alias): during domain resolution, the answer for a domain is either its IP address or a CNAME for that domain, in which case resolution continues by looking up the IP address of the CNAME target.

 4. Usage scenarios of a CDN

- **Using a third-party CDN service:** if you want to open-source a project, you can serve it from a third-party CDN service
- **Using a CDN to cache static resources:** put your site's static resources — JS, CSS, images, etc. — on the CDN; the whole project can even be placed on the CDN for one-click deployment
- **Live streaming:** live broadcasting is essentially streaming media, and CDNs also support streaming, so live streams can use a CDN to improve access speed. Handling streaming media is different from handling ordinary static files: for ordinary files, an edge node that misses simply asks the upper layer, but streaming data volumes are so large that this pull model would cause performance problems, so streaming content is generally pushed to the edge proactively.

## 2. Lazy loading

 1. The concept of lazy loading

Lazy loading, also called deferred or on-demand loading, means delaying the loading of images in a long web page; it is a good way to optimize page performance. In a long page or application with many images, loading all of them at once is wasteful, because the user can only see the images inside the visible viewport.

Lazy loading of images solves this: images outside the viewport are not loaded at first and are loaded only as the page scrolls and they come into view. This makes the page load faster and reduces server load. Lazy loading suits pages with many images and long lists.

 2. The characteristics of lazy loading

- **Fewer useless resource loads:** lazy loading significantly reduces server pressure and traffic, and lightens the browser's load as well.
- **Better user experience:** loading many images at once can mean a long wait, which hurts the experience; lazy loading greatly improves it.
- **Prevents too many images from blocking other resources:** loading too many images at once would delay other resource files and affect the normal use of the site.

 3. Implementation principle of lazy loading

An image is loaded because of its `src`: as soon as `src` is assigned, the browser requests the image. Based on this, we use an HTML5 `data-xxx` attribute to store the image path, and only when the image needs to be loaded do we copy the path from `data-xxx` into `src`, so the image is loaded on demand — that is, lazily.

Note: the `xxx` in `data-xxx` can be anything; here we use `data-src`.

The focus of lazy loading is to determine which picture the user needs to load. In the browser, the resources in the visible area are the resources that the user needs. So when the picture appears in the visible area, just get the real address of the picture and assign it to the picture.

Use native JavaScript to implement lazy loading:

**Knowledge points:**

(1) `window.innerHeight` is the height of the browser viewport (comparable to `document.documentElement.clientHeight`)

(2) `document.body.scrollTop || document.documentElement.scrollTop` is the distance scrolled by the browser

(3) `imgs.offsetTop` is the height from the top of the element to the top of the document (including the distance of the scroll bar)

(4) Image loading conditions: `img.offsetTop < window.innerHeight + document.body.scrollTop;`

**Code:**

<div class="container">
     <img src="loading.gif"  data-src="pic.png">
     <img src="loading.gif"  data-src="pic.png">
     <img src="loading.gif"  data-src="pic.png">
     <img src="loading.gif"  data-src="pic.png">
     <img src="loading.gif"  data-src="pic.png">
     <img src="loading.gif"  data-src="pic.png">
</div>
<script>
var imgs = document.querySelectorAll('img');
function lazyLoad(){
        // how far the page has been scrolled
        var scrollTop = document.body.scrollTop || document.documentElement.scrollTop;
        // height of the viewport
        var winHeight = window.innerHeight;
        for(var i = 0; i < imgs.length; i++){
            // the image has entered the visible area: swap in the real URL
            if(imgs[i].offsetTop < scrollTop + winHeight){
                imgs[i].src = imgs[i].getAttribute('data-src');
            }
        }
    }
  // assign the function itself (not its return value), and run once for images already in view
  window.onscroll = lazyLoad;
  lazyLoad();
</script>
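Modern browsers also provide IntersectionObserver, which avoids the scroll handler entirely; a minimal sketch using the same `data-src` convention:

```
<script>
// Observe every image and swap in the real URL when it enters the viewport
var observer = new IntersectionObserver(function (entries) {
  entries.forEach(function (entry) {
    if (entry.isIntersecting) {
      var img = entry.target;
      img.src = img.getAttribute('data-src');
      observer.unobserve(img); // load each image only once
    }
  });
});
document.querySelectorAll('img[data-src]').forEach(function (img) {
  observer.observe(img);
});
</script>
```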

 4. The difference between lazy loading and preloading

Both techniques improve page performance. The main difference is that one loads resources ahead of time while the other delays loading (or skips it entirely). Lazy loading relieves some pressure on the server, whereas preloading increases it.

- **Lazy loading (deferred loading) delays the moment images in a long page are loaded: they are loaded only when the user is about to see them.** This speeds up the first screen, improves the user experience, and reduces server load. It suits scenarios such as e-commerce sites with many images and very long pages. The usual implementation is to leave the image's src attribute empty and keep the real path in a custom attribute; while the page scrolls, any image that enters the viewport gets its real path copied from the custom attribute into src, which triggers the deferred load.
- **Preloading means requesting resources ahead of time so that they can be served from the cache when they are needed later.** Preloading reduces waiting time and improves the user experience. The most common preloading technique I know of is the Image object in JS: setting the src property of an Image object triggers the image download (see the sketch below).
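A minimal preloading sketch using the Image object mentioned above (the URL list is a placeholder):

```
// Request the images ahead of time so later <img> tags hit the browser cache
function preload(urls) {
  urls.forEach(function (url) {
    var img = new Image();
    img.src = url; // assigning src triggers the download
  });
}
preload(['/img/banner.png', '/img/next-page-hero.jpg']); // hypothetical paths
```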

## 3. Reflow and repaint

 1. The concepts of reflow and repaint and what triggers them

(1) Reflow

When the size, structure, or properties of some or all elements in the render tree change, the browser has to re-render part or all of the document; this process is called **reflow**.

The following operations cause reflow:

- The first render of the page
- Resizing the browser window
- Changes to an element's content
- Changes to an element's size or position
- Changes to an element's font size
- Activating CSS pseudo-classes
- Querying certain properties or calling certain methods
- Adding or removing visible DOM elements

Because the browser lays the page out as a flow, triggering a reflow (relayout) causes surrounding DOM elements to be rearranged. Its scope can be:

- Global: the entire render tree is laid out again starting from the root node
- Local: only part of the render tree, or a single render object, is laid out again

(2) Repaint

When the style of an element changes without affecting its position in the document flow, the browser simply redraws it; this process is **repaint**.

The following actions cause repaints:

- Color and background related properties: background-color, background-image, etc.
- Outline related properties: outline-color, outline-width, text-decoration
- border-radius, visibility, box-shadow

Note: **a reflow always triggers a repaint, but a repaint does not necessarily cause a reflow.**

 2. How to avoid reflow and repaint?

**Measures to reduce reflow and repaint:**

- When manipulating the DOM, operate on nodes as low (deep) in the tree as possible
- Do not use `table` layout: a small change may force the entire `table` to be laid out again
- Do not use CSS expressions, which can trigger repeated recalculation
- Do not change element styles one property at a time; for static pages, switch a class name rather than editing `style`
- Use absolute or fixed positioning to take animated elements out of the document flow, so their changes do not affect other elements
- Avoid frequent DOM operations: build the changes in a `documentFragment` and append it to the document in one go
- Set the element to `display: none` first and show it again after the operations are done; DOM operations on elements with `display: none` do not cause reflow or repaint
- Group multiple DOM reads (or writes) together instead of interleaving reads and writes; this takes advantage of the **browser's rendering queue**.

The browser optimizes reflow and repaint itself with a **rendering queue**:

**The browser puts all reflow and repaint operations into a queue; when the queue reaches a certain number of operations or a certain time interval, the browser processes the queue as a batch, turning many reflows and repaints into a single one.**

This is why grouping reads (or writes) together, as above, lets all of them enter the queue before execution, so what would have been several reflows becomes just one.

 3. How to optimize animations?

Animation generally requires frequent DOM manipulation, which can cause page performance problems. We can set the animated element's `position` to `absolute` or `fixed` to take it out of the document flow, so that its reflows no longer affect the rest of the page.

 4. What is documentFragment? How does using it differ from operating on the DOM directly?

MDN's description of `DocumentFragment`:

DocumentFragment is a minimal document object that has no parent. It is used as a lightweight version of Document and, like a standard document, stores a document structure made of nodes. The biggest difference from document is that a DocumentFragment is not part of the real DOM tree: changes to it do not trigger a re-render of the DOM tree and do not cause performance problems.

When a DocumentFragment node is inserted into the document tree, it is not the DocumentFragment itself that gets inserted but all of its descendant nodes. For frequent DOM operations we can therefore insert DOM elements into a DocumentFragment first and then insert all of its descendants into the document at once. Compared with operating on the DOM directly, building the nodes inside a DocumentFragment does not trigger repaints of the page while it is being assembled, which greatly improves page performance.
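A minimal sketch (assuming an existing `#list` element in the page):

```
// Build 1000 list items off-DOM, then insert them with a single operation
var fragment = document.createDocumentFragment();
for (var i = 0; i < 1000; i++) {
  var li = document.createElement('li');
  li.textContent = 'item ' + i;
  fragment.appendChild(li); // no reflow: the fragment is not part of the live DOM
}
document.getElementById('list').appendChild(fragment); // one reflow/repaint here
```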

## 4. Throttling and debouncing

 1. Understanding throttling and debouncing

- Function debouncing means running the callback n seconds after the event fires; if the event fires again within those n seconds, the timer restarts. It can be used for click-triggered requests, to avoid sending multiple requests to the backend when the user clicks repeatedly.
- Function throttling means that within a specified unit of time, the event callback can run at most once; if the event fires multiple times within that unit of time, only one invocation takes effect. Throttling can be used on scroll listeners to reduce how often the handler is called.

**Use cases for debouncing:**

- Button submission: prevent multiple submissions; only the last click takes effect
- Server-side validation: form validation that needs the server's cooperation should only run for the last of a series of input events; search suggestions (typeahead) are similar. In production, prefer lodash.debounce.

**Use cases for throttling:**

- Dragging: run only once within a fixed interval, to avoid extremely frequent position updates
- Resizing: listening to the browser's resize event
- Animation: avoid the performance problems of triggering an animation many times in a short period

 2. Implementing a throttle function and a debounce function

**Debounce implementation:**

function debounce(fn, wait) {
  var timer = null;

  return function() {
    var context = this,
      args = [...arguments];

    // if a timer is already pending, cancel it and start timing again
    if (timer) {
      clearTimeout(timer);
      timer = null;
    }

    // schedule the call so fn runs only after `wait` ms without new calls
    timer = setTimeout(() => {
      fn.apply(context, args);
    }, wait);
  };
}

**Throttle implementation:**

// timestamp version
function throttle(fn, delay) {
  var preTime = Date.now();

  return function() {
    var context = this,
      args = [...arguments],
      nowTime = Date.now();

    // run the function only if at least `delay` ms have passed since the last run
    if (nowTime - preTime >= delay) {
      preTime = Date.now();
      return fn.apply(context, args);
    }
  };
}

// timer version
function throttle (fun, wait){
  let timeout = null
  return function(){
    let context = this
    let args = [...arguments]
    // ignore calls while a timer is pending; run at most once per `wait` ms
    if(!timeout){
      timeout = setTimeout(() => {
        fun.apply(context, args)
        timeout = null
      }, wait)
    }
  }
}
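Typical usage of the two helpers above (the handler names and `searchInput` element are placeholders):

```
// throttle high-frequency scroll events; debounce user input before querying the server
window.addEventListener('scroll', throttle(handleScroll, 200));
searchInput.addEventListener('input', debounce(handleSearch, 300));
```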

5. Image optimization

 1. How to optimize the images in a project?

1. Avoid images where possible. Many decorative images can be replaced with CSS.
2. On mobile, the screen is only so wide, so there is no need to waste bandwidth loading the original image. Images are generally served from a CDN that can crop them to the requested width on the fly.
3. Inline small images as base64.
4. Combine multiple icon files into one image (a sprite).
5. Choose the right image format:
   - For browsers that support it, prefer WebP: its better compression yields smaller files at visually identical quality; the drawback is limited compatibility
   - Use PNG for small images; for most icon-like graphics, SVG is an even better choice
   - Use JPEG for photos

 2. Common image formats and usage scenarios

(1) **BMP** is a lossless bitmap that supports both indexed and direct colors. This picture format does little to compress the data, so pictures in BMP format are usually larger files.

(2) **GIF** is a lossless bitmap with indexed colors. Encoded using LZW compression algorithm. Small file size is the advantage of the GIF format. At the same time, the GIF format also has the advantages of supporting animation and transparency. However, the GIF format only supports 8-bit indexed colors, so the GIF format is suitable for scenes that do not require high color requirements and require a small file size.

(3) **JPEG** is a lossy bitmap with direct color. The advantage of JPEG pictures is that they use direct color. Thanks to richer colors, JPEG is very suitable for storing photos. Compared with GIF, JPEG is not suitable for storing corporate logos and wireframes. Lossy compression will cause the image to be blurred, and the selection of direct color will cause the image file to be larger than GIF.

(4) **PNG-8** is a lossless bitmap using indexed colors. PNG is a relatively new image format, and PNG-8 is a very good substitute for GIF: wherever possible, use PNG-8 instead of GIF, because for the same visual result PNG-8 has a smaller file size. PNG-8 also supports adjustable transparency, while GIF does not. There is no reason to prefer GIF over PNG-8 unless animation is required.

(5) **PNG-24** is a lossless, direct-color bitmap. The advantage of PNG-24 is that it compresses the data of the picture, so that the file size of PNG-24 format is much smaller than that of BMP for pictures with the same effect. Of course, PNG24 images are still much larger than JPEG, GIF, and PNG-8.

(6) **SVG** is a lossless vector image. SVG is a vector diagram meaning that the SVG image consists of lines and curves and the means to draw them. When zooming in on an SVG image, you still see lines and curves instead of pixels. This means that the SVG image will not be distorted when enlarged, so it is very suitable for drawing Logos, Icons, etc.

(7) **WebP** is a new image format developed by Google. WebP is a bitmap that supports both lossy and lossless compression and uses direct color. It can be seen from the name that it was born for the Web. What is born for the Web? That is to say, for pictures of the same quality, WebP has a smaller file size. Now the website is filled with a large number of pictures, if the file size of each picture can be reduced, the amount of data transmission between the browser and the server will be greatly reduced, thereby reducing the access delay and improving the access experience. Currently, only Chrome browser and Opera browser support the WebP format, and the compatibility is not very good.

- With lossless compression, a WebP image of the same quality is 26% smaller than PNG;
- With lossy compression, a WebP image of the same visual precision is 25%–34% smaller than JPEG;
- The WebP format supports transparency: a losslessly compressed WebP image needs only 22% extra file size to support it.

6. Webpack optimization

 1. How to improve the packaging speed of webpack?

(1) Optimize Loaders

For loaders, the biggest drag on build speed is Babel: it converts code into an AST, transforms the AST, and then generates new code. The bigger the project, **the more code has to be transformed and the slower it gets**. This can be optimized.

First, **narrow the files the loader searches**:

module.exports = {
  module: {
    rules: [
      {
        // only run babel on .js files
        test: /\.js$/,
        loader: 'babel-loader',
        // only look inside the src folder
        include: [resolve('src')],
        // never look in node_modules
        exclude: /node_modules/
      }
    ]
  }
}

Babel should only run on our own JS code; the code in `node_modules` is already compiled, so there is no need to process it again.

That is still not enough: we can also **cache** the files Babel has compiled, so that next time only changed files are recompiled, which greatly reduces build time:

loader: 'babel-loader?cacheDirectory=true'

(2) HappyPack

Because Node is single-threaded, Webpack's build is also single-threaded; loaders in particular have many long compilation tasks, which leads to a lot of waiting.

**HappyPack turns the synchronous execution of loaders into parallel execution**, making full use of system resources to speed up the build:

module: {
  loaders: [
    {
      test: /\.js$/,
      include: [resolve('src')],
      exclude: /node_modules/,
      // the id must match the HappyPack instance below
      loader: 'happypack/loader?id=happybabel'
    }
  ]
},
plugins: [
  new HappyPack({
    id: 'happybabel',
    loaders: ['babel-loader?cacheDirectory'],
    // spawn 4 threads
    threads: 4
  })
]

(3) DllPlugin

**DllPlugin pre-bundles specific libraries so they can simply be referenced later.** This drastically reduces how often libraries need to be rebuilt — only when a library's version changes — and it also extracts shared code into separate files. DllPlugin is used as follows:

// kept in its own config file
// webpack.dll.conf.js
const path = require('path')
const webpack = require('webpack')
module.exports = {
  entry: {
    // the libraries to pre-bundle together
    vendor: ['react']
  },
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]-[hash]'
  },
  plugins: [
    new webpack.DllPlugin({
      // name must match output.library
      name: '[name]-[hash]',
      // must match the context used in DllReferencePlugin
      context: __dirname,
      path: path.join(__dirname, 'dist', '[name]-manifest.json')
    })
  ]
}

Run this config once to generate the dependency files, then use `DllReferencePlugin` to reference them from the project:

// webpack.conf.js
module.exports = {
  // ...other configuration omitted
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      // manifest is the json file generated by the dll build above
      manifest: require('./dist/vendor-manifest.json'),
    })
  ]
}

(4) Code compression

In Webpack 3, `UglifyJS` is generally used to minify code, but it runs in a single thread; to speed things up, `webpack-parallel-uglify-plugin` can run `UglifyJS` in parallel.

In Webpack 4 none of this is needed: setting `mode` to `production` enables minification by default. Minification is a performance optimization we should always do, and it is not limited to JS — HTML and CSS can be minified as well — and during JS minification we can also configure things such as removing `console.log` calls.

 (5) Others

A few small optimization points can also speed up the build:

- `resolve.extensions`: the list of file suffixes to try; the default is `['.js', '.json']`, and webpack tries them in order when an import has no extension. Keep this list as short as possible and put the most common suffixes first
- `resolve.alias`: map a path to an alias so Webpack can resolve it faster
- `module.noParse`: if you are sure a file has no other dependencies, use this option to stop Webpack from scanning it; this helps a lot for large libraries

 2. How to reduce the Webpack bundle size

 (1) Load on demand

A SPA project has many routed pages. If they are all bundled into one JS file, the number of requests goes down but a lot of unnecessary code is loaded up front, which takes longer. To show the home page faster, the code it has to load should be as small as possible. **On-demand loading packages each routed page into its own file.** Routes are not the only candidates: large libraries such as `lodash` can also be loaded on demand.

The exact code depends on the framework you use, so it is not expanded here, but the underlying mechanism is the same: download the corresponding file, return a `Promise`, and run the callback when the `Promise` resolves.
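A sketch of that underlying mechanism using webpack's dynamic import() (the chunk name, module path, and `button` element are placeholders):

```
// webpack splits ./charts into its own chunk and loads it only when needed
button.addEventListener('click', function () {
  import(/* webpackChunkName: "charts" */ './charts').then(function (module) {
    module.renderChart(); // runs once the chunk has been downloaded
  });
});
```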

 (2) Scope Hoisting

**Scope Hoisting analyzes the dependencies between modules and merges the bundled modules into as few functions as possible.**

For example, if you want to pack two files:

// test.js
export const a = 1
// index.js
import { a } from './test.js'

For this case, the packaged code will look like this:

[
  /* 0 */
  function (module, exports, require) {
    //...
  },
  /* 1 */
  function (module, exports, require) {
    //...
  }
]

But with Scope Hoisting enabled, the modules are merged into one function wherever possible, producing code more like this:

[
  /* 0 */
  function (module, exports, require) {
    //...
  }
]

The code generated by this packaging method is obviously much less than the previous one. If you want to enable this feature in Webpack4, just enable `optimization.concatenateModules`:

module.exports = {
  optimization: {
    concatenateModules: true
  }
}

(3)Tree Shaking

**Tree Shaking can delete unreferenced code in the project**, such as:

// test.js
export const a = 1
export const b = 2
// index.js
import { a } from './test.js'

For the above cases, if the variable `b` in the `test` file is not used in the project, it will not be packaged into the file.

If you use Webpack 4, this optimization will be enabled automatically when you start the production environment.

 3. How to use webpack to optimize front-end performance?

Optimizing front-end performance with webpack refers to optimizing the output of webpack so that the final result of packaging can run quickly and efficiently in the browser.

- **Code compression**: remove redundant code and comments, simplify how code is written, etc. Use webpack's UglifyJsPlugin and ParallelUglifyPlugin to minify JS, and cssnano (css-loader?minimize) to minify CSS
- **Use a CDN**: during the build, rewrite the referenced static resource paths to the corresponding paths on the CDN, via webpack's output option and each loader's publicPath option
- **Tree Shaking**: remove code that can never be reached; it can be enabled by appending --optimize-minimize when starting webpack
- **Code Splitting**: split the code into chunks by route or by component so it can be loaded on demand, which also makes full use of the browser cache
- **Extract common third-party libraries**: use SplitChunksPlugin to extract shared modules; the browser can then cache this rarely-changing common code for a long time

 4. How to improve the build speed of webpack?

1. With multiple entries, use CommonsChunkPlugin to extract common code
2. Use the externals configuration to exclude common libraries from the bundle
3. Use DllPlugin and DllReferencePlugin to precompile resource modules: precompile the npm packages we reference but never modify with DllPlugin, then load the precompiled modules with DllReferencePlugin
4. Use HappyPack for multi-threaded compilation
5. Use webpack-uglify-parallel to speed up uglifyPlugin's compression; it uses multi-core parallel compression
6. Use Tree Shaking and Scope Hoisting to eliminate redundant code

Front-end engineering interview questions

1. Git

 1. The difference between git and svn

- The biggest difference between git and svn is that git is distributed while svn is centralized, so svn cannot be used offline: if the server is down, there is no way to commit code with svn.
- A branch in svn is a full copy of a directory of the repository, whereas a git branch is just a pointer to a commit, so creating a git branch is much cheaper and changes on a branch do not affect anyone else; svn branch changes affect everyone.
- svn's commands are somewhat simpler than git's and easier to pick up.
- Git stores content as metadata while SVN stores it per file: a git directory is a clone of the repository on your own machine and has everything the central repository has — tags, branches, history, and so on.
- Git branches differ from SVN branches: svn branches can be overlooked during merges, whereas git lets you switch quickly between branches in the same working directory, easily spot unmerged branches, and merge them simply and quickly.
- Git has no global version number, while SVN does.
- Git's content integrity is better than SVN's: git stores content addressed by SHA-1 hashes, which guarantees the integrity of the code and limits damage to the repository in case of disk failure or network problems.

 2. Commonly used git commands?

```
git init // Create a new git code base
git add // Add specified files to the temporary storage area
git rm // Delete workspace files, and put this deletion into the temporary storage area
git commit -m [message] // Submit the staging area to the warehouse area
git branch // list all branches
git checkout -b [branch] // create a new branch and switch to it
git status // display the status of changed files

```

 3. The difference between git pull and git fetch

- git fetch just downloads the changes in the remote warehouse, and does not merge with the local branch.
- git pull will download the changes in the remote warehouse and merge them with the current branch.

 4. The difference between git rebase and git merge

Both git merge and git rebase are used to merge branches; the key difference is **how they handle the commit history**:

- git merge creates a new merge commit, and the previous commits of both branches become its ancestors. This preserves each branch's commit history.
- git rebase first finds the most recent common ancestor commit of the two branches, takes all of the current branch's commits after that point, and replays them on top of the target branch's latest commit. After this merge, the combined commit history is linear.

2. Webpack

 1. How does webpack differ from grunt and gulp?

**Grunt and Gulp are task runners**: they automatically execute specified tasks, like an assembly line — resources go in and are processed by different plugins. They have active communities and rich plugin ecosystems and make it easy to build all kinds of workflows.

**Webpack is a module bundler:** it automates module handling. Webpack treats everything as a module; when it processes an application it recursively builds a dependency graph containing every module the application needs, and then bundles all of those modules into one or more bundles.

So these are two completely different kinds of tools. The mainstream approach today is to replace Grunt and Gulp with npm scripts, which can also build task pipelines.

 2. Pros and cons of webpack, rollup, and parcel?

- webpack suits building large, complex front-end sites: it has a powerful loader and plugin ecosystem. Its output is essentially an IIFE that receives a module map whose keys are module paths and values are module contents; inside the IIFE it resolves references between modules and executes them. This suits application development with complex file dependencies.
- rollup suits bundling libraries such as vue and d3: Rollup bundles the modules into a single file and removes dead code via tree-shaking, minimizing output size, but it lacks webpack's advanced features such as code splitting and on-demand loading. It focuses on library bundling and is therefore better for library development.
- parcel suits simple, experimental projects: it offers a low barrier to entry and quick results, but its weak ecosystem and incomplete error messages are real drawbacks; it is not recommended beyond toy or experimental projects.

 3. What are some common Loaders?

- file-loader: emits a file into the output folder and references it in code via a relative URL
- url-loader: like file-loader, but can inline very small files into the code as base64
- source-map-loader: loads extra Source Map files to make breakpoint debugging easier
- image-loader: loads and compresses image files
- babel-loader: converts ES6 to ES5
- css-loader: loads CSS, with support for modules, minification, file imports, etc.
- style-loader: injects CSS into JavaScript and loads it via DOM operations
- eslint-loader: checks JavaScript code with ESLint

**Note: in Webpack, loaders execute from right to left**, because webpack chose the functional **compose** style, whose expressions evaluate from right to left.

 4. What are some common Plugins?

- define-plugin: defines environment variables
- html-webpack-plugin: simplifies creating the HTML file
- uglifyjs-webpack-plugin: minifies ES6 code via UglifyES
- webpack-parallel-uglify-plugin: multi-core compression for faster minification
- webpack-bundle-analyzer: visualizes the size of webpack's output files
- mini-css-extract-plugin: extracts CSS into separate files with support for on-demand loading

 5. What are bundles, chunks, and modules?

- bundle: the file produced by webpack's packaging;
- chunk: a block of code made up of multiple modules, used for merging and splitting code;
- module: a single module in development. In webpack's world everything is a module, one module per file; starting from the configured entry, webpack recursively finds all dependent modules.

 6. What is the difference between Loader and Plugin?

Different roles:

- Loader is literally translated as "loader". Webpack treats all files as modules, but webpack can only parse js files natively. If you want to package other files, you will use loader. So the role of Loader is to give webpack the ability to load and parse non-JavaScript files.
- Plugin is literally translated as "plugin". Plugin can extend the functionality of webpack, making webpack more flexible. During the life cycle of Webpack running, many events will be broadcast. Plugin can listen to these events and change the output results at the right time through the API provided by Webpack.

Different usages:

- A Loader is configured in module.rules, i.e. it exists as a module parsing rule. The value is an array; each item is an object describing which type of file it applies to ( test ), which loader to use ( loader ), and the options to pass ( options )
- A Plugin is configured separately in plugins. The value is an array; each item is a plugin instance, with parameters passed in through the constructor.

 7. The construction process of webpack

The running process of Webpack is a serial process, and the following processes will be executed sequentially from start to finish:

1. Initialize parameters: read and merge parameters from the config file and shell arguments to get the final options;
2. Start compiling: initialize the Compiler object with those options, load all configured plugins, and call the object's run method to start the build;
3. Determine entries: find all entry files from the entry configuration;
4. Compile modules: starting from the entry files, translate each module with all configured Loaders, find the modules it depends on, and repeat this step recursively until every file the entries depend on has been processed;
5. Finish module compilation: after step 4 has run all modules through the Loaders, we have the final translated content of every module and the dependency relationships between them;
6. Emit assets: based on the dependencies between entries and modules, assemble chunks that each contain multiple modules, convert each chunk into a separate file, and add it to the output list — this is the last chance to modify the output;
7. Finish emitting: once the output content is determined, decide the output path and file names according to the configuration and write the file contents to the file system.

Throughout this process, Webpack broadcasts specific events at specific points; plugins listen for the events they care about, run their own logic, and can call the APIs Webpack provides to change the result of the build.

 8. How to approach writing a loader or plugin?

A Loader works like a "translator": it converts the source content it reads into new content, and loaders are chained so the source is translated step by step into the desired form.

When writing a Loader, follow the single-responsibility principle: each Loader does only one kind of transformation. A Loader receives the source content (source) and can output the processed content by returning a value, by calling this.callback() to hand the content back to webpack, or by obtaining a callback via this.async() for asynchronous work. Webpack also provides a set of utility functions for loader authors — loader-utils.

Compared with Loader, the writing of Plugin is much more flexible. During the running life cycle of webpack, many events will be broadcast. Plugin can listen to these events and change the output results at the right time through the API provided by Webpack.
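A minimal loader sketch following the idea above (purely illustrative; it just prepends a banner comment to each source file, and the file name and option are hypothetical):

```
// banner-loader.js — a hypothetical loader
const loaderUtils = require('loader-utils');

module.exports = function (source) {
  // `this` is the loader context provided by webpack
  const options = loaderUtils.getOptions(this) || {};
  const banner = options.banner || 'processed by banner-loader';
  // return the transformed content (or use this.callback / this.async for more control)
  return `/* ${banner} */\n${source}`;
};
```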

 9. How does the hot update of webpack work? Explain its principle?

The hot update of webpack is also called hot replacement (Hot Module Replacement), abbreviated as HMR. This mechanism can replace the old module with the newly changed module without refreshing the browser.

Principle:

First of all, you need to know that both the server side and the client side have done processing work:

1. Step one: in webpack's watch mode, when a file in the file system is modified, webpack detects the change, recompiles and re-bundles the module according to the configuration, and keeps the bundled code in memory as simple JavaScript objects.
2. Step two is the interface between webpack-dev-server and webpack — mainly the interaction between dev-server's webpack-dev-middleware and webpack: webpack-dev-middleware calls the APIs webpack exposes to watch for code changes and tells webpack to write the bundled code into memory.
3. Step three is webpack-dev-server watching for file changes. Unlike step one, it does not watch code changes and re-bundle: when devServer.watchContentBase is set to true in the configuration, the server watches the static files in the configured folders and, when they change, notifies the browser to live-reload the application. Note this is a browser refresh, which is a different concept from HMR.
4. Step four is also webpack-dev-server's work. It establishes a long-lived websocket connection between the browser and the server via sockjs (a webpack-dev-server dependency) and pushes status information from every stage of webpack's compile/bundle cycle to the browser, including the static-file-change notifications from step three. The browser acts differently according to these socket messages. The most important piece of information the server sends is the hash of the new build; the following steps perform module hot replacement based on this hash.
5. The webpack-dev-server/client code cannot request the updated code or perform the hot-update itself; it hands this work back to webpack. webpack/hot/dev-server decides, based on the information passed to it by webpack-dev-server/client and the dev-server configuration, whether to refresh the browser or perform hot module replacement. If it just refreshes the browser, none of the following steps happen.
6. HotModuleReplacement.runtime is the hub of client-side HMR. It receives the hash of the new build from the previous step and, through JsonpMainTemplate.runtime, sends an Ajax request to the server; the server returns a JSON payload containing the hashes of all modules to be updated. With this update list, the runtime then fetches the latest module code via JSONP requests. (These are steps 7, 8, and 9 in the original diagram.)
7. Step 10 is the key to whether HMR succeeds: HotModulePlugin compares the old and new modules and decides whether to update. If it decides to update, it checks the dependencies between modules and updates the dependency references while updating the modules.
8. Finally, if HMR fails, it falls back to a live reload — refreshing the browser to get the latest bundled code.

Roughly: after starting a server with webpack-dev-server, the browser and the server keep a long-lived websocket connection. Webpack's internal watch monitors file changes; on every change webpack recompiles into memory, and webpack-dev-server talks to webpack through the webpack-dev-middleware middleware. Each hot update requests a JSON file carrying a hash plus a JS file; the websocket also carries the hash, and the hash check drives the hot update internally.

 10. How to use webpack to optimize front-end performance?

Using webpack to optimize front-end performance means optimizing webpack's output so that the final bundle runs quickly and efficiently in the browser.

- **Compress code**: remove redundant code and comments, simplify how code is written, and so on. You can use webpack's UglifyJsPlugin and ParallelUglifyPlugin to compress JS files, and cssnano (css-loader?minimize) to compress CSS.
- **Use a CDN**: during the build, rewrite the paths of referenced static resources to their CDN counterparts. You can use webpack's output option and the publicPath option of the various loaders to modify resource paths.
- **Tree Shaking**: remove code fragments that can never be reached. This can be enabled by appending the --optimize-minimize parameter when starting webpack.
- **Code Splitting**: split the code into chunks by route or by component, so that it is loaded on demand and the browser cache is used to full effect.
- **Extract common third-party libraries**: use the SplitChunksPlugin to extract common modules, so that this rarely-changing shared code can be cached by the browser for a long time (a config sketch follows this list).
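A rough webpack configuration sketch tying several of these points together (hedged: terser-webpack-plugin is used here as the modern successor to UglifyJsPlugin, and the CDN domain in publicPath is a placeholder):

// webpack.config.js (sketch; the CDN domain is a placeholder)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  mode: 'production',
  output: {
    publicPath: 'https://cdn.example.com/assets/'  // hypothetical CDN prefix
  },
  optimization: {
    minimize: true,
    minimizer: [new TerserPlugin()],   // JS compression
    usedExports: true,                 // marks unused exports for tree shaking
    splitChunks: {
      chunks: 'all'                    // extract shared/vendor code into separate chunks
    }
  }
};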

 11. How to improve webpack's bundling speed?

- happypack: compiles loaders in parallel using multiple processes and uses a cache to make rebuilds faster. Unfortunately its author has said the project will no longer be developed; thread-loader is a similar replacement.
- externals: keep rarely-updated third-party libraries out of the webpack bundle (for example, include jQuery with a script tag), which reduces bundling time.
- dll: use webpack's DllPlugin and DllReferencePlugin to pre-build code that almost never changes into static assets, avoiding repeated compilation.
- Use caches: webpack.cache, babel-loader's cacheDirectory and HappyPack.cache can all make rebuilds faster.
- Narrow the file search scope: for example with babel-loader, if your files only live in src you can set include: path.resolve(__dirname, 'src'). In most cases the gain is limited, unless you have accidentally been building node_modules.

 12. How to improve webpack's build speed?

1. With multiple entries, use CommonsChunkPlugin to extract common code.
2. Use the externals configuration to extract commonly used libraries.
3. Use DllPlugin and DllReferencePlugin to precompile resource modules: DllPlugin pre-builds the npm packages that we reference but will never modify, and DllReferencePlugin loads the precompiled modules back in (see the sketch after this list).
4. Use Happypack for multi-process compilation.
5. Use webpack-uglify-parallel to speed up uglifyPlugin compression; it uses multi-core parallel compression.
6. Use Tree-shaking and Scope Hoisting to strip out unused code.
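A minimal DllPlugin / DllReferencePlugin sketch (the library names and output paths are just examples):

// webpack.dll.config.js (sketch)
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { vendor: ['vue', 'vue-router'] },   // rarely-changing libraries
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: '[name].dll.js',
    library: '[name]_library'
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_library',
      path: path.resolve(__dirname, 'dll/[name].manifest.json')
    })
  ]
};

// In the main webpack config, reference the prebuilt bundle:
// new webpack.DllReferencePlugin({
//   context: __dirname,
//   manifest: require('./dll/vendor.manifest.json')
// })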

 13. How to configure a single-page application? How to configure a multi-page application?

A single-page application can be thought of as webpack's standard mode: just specify the SPA's entry in entry, so there is no need to go into detail. For a multi-page application, you can use webpack's AutoWebPlugin to get a simple automated build, provided that the project's directory structure follows the conventions it expects. Things to note in a multi-page application:

- Every page shares some common code; this code can be extracted so it is not loaded repeatedly. For example, every page references the same CSS stylesheet.
- As the business grows, pages may keep being added, so the entry configuration must be flexible enough that adding a new page does not require modifying the build configuration (a sketch follows this list).
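A possible way to generate entries for a multi-page build automatically (a sketch that assumes a ./src/pages/&lt;name&gt;/index.js layout and the glob and html-webpack-plugin packages):

// webpack.config.js (sketch; the directory layout is an assumption)
const path = require('path');
const glob = require('glob');
const HtmlWebpackPlugin = require('html-webpack-plugin');

const entries = {};
const htmlPlugins = [];

// Build one entry and one HTML page per folder under src/pages
glob.sync('./src/pages/*/index.js').forEach(file => {
  const name = path.basename(path.dirname(file));
  entries[name] = file;
  htmlPlugins.push(new HtmlWebpackPlugin({
    filename: `${name}.html`,
    chunks: [name]          // only inject this page's own chunk
  }));
});

module.exports = {
  entry: entries,
  plugins: htmlPlugins
};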

三、Others

 What is the principle of Babel?

Babel's transpilation process is divided into three stages:

- **Parse**: parse the code into an abstract syntax tree (AST); this is the lexical analysis and syntactic analysis step;
- **Transform**: apply a series of transformations to the AST. Babel takes the AST and traverses it with babel-traverse, adding, updating and removing nodes along the way;
- **Generate**: turn the transformed AST back into JS code, using the babel-generator module (see the sketch after this list).
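A tiny sketch of this pipeline using the current package names (@babel/parser, @babel/traverse, @babel/generator); the variable rename is only a toy transformation:

// parse -> transform -> generate
const parser = require('@babel/parser');
const traverse = require('@babel/traverse').default;
const generator = require('@babel/generator').default;

const code = 'const answer = 1 + 1;';

// 1. Parse: source code -> AST
const ast = parser.parse(code);

// 2. Transform: walk the AST and modify nodes
traverse(ast, {
  Identifier(path) {
    if (path.node.name === 'answer') {
      path.node.name = 'result';   // rename the variable as an example transformation
    }
  }
});

// 3. Generate: AST -> code
console.log(generator(ast).code);  // "const result = 1 + 1;"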

一、Browser security

 1. What is an XSS attack?

# (1) Concept

An XSS (cross-site scripting) attack is a code-injection attack. The attacker injects a malicious script into a website so that it runs in the user's browser, stealing the user's information such as cookies.

The essence of XSS is that the website does not filter malicious code, which gets mixed in with normal code; the browser has no way to tell which scripts are trustworthy, so the malicious code ends up being executed.

With this kind of attack, the attacker can:

- obtain page data such as the DOM, cookies and localStorage;
- mount a DoS attack by sending reasonable-looking requests that consume server resources so that real users cannot access the server;
- break the page structure;
- hijack traffic (point links to some other website).

# (2) Attack types

XSS can be divided into stored, reflected, and DOM-based XSS:

- Stored XSS: the malicious script is stored on the target server; when the browser requests data, the script is returned from the server and executed.
- Reflected XSS: the attacker lures the user into visiting a URL containing malicious code; the server receives and processes the data, then sends the data containing the malicious code back to the browser, which parses it and executes it as a script, completing the attack.
- DOM-based XSS: XSS formed by modifying the page's DOM nodes.

**1) Stored XSS attack steps:**

1. The attacker submits malicious code to the database of the target website.
2. When the user opens the target website, the website server retrieves the malicious code from the database, splices it in HTML and returns it to the browser.
3. The user browser parses and executes the response after receiving it, and the malicious code mixed in it is also executed.
4. Malicious code steals user data and sends it to the attacker's website, or impersonates the user, calling the target website interface to perform the operation specified by the attacker.

This kind of attack is common in website functions with user saved data, such as forum posts, product reviews, and user private messages.

**2) Reflected XSS attack steps:**

1. The attacker constructs a special URL containing malicious code.
2. When a user opens a URL with malicious code, the website server takes the malicious code out of the URL, splicing it into HTML and returning it to the browser.
3. The user browser parses and executes the response after receiving it, and the malicious code mixed in it is also executed.
4. Malicious code steals user data and sends it to the attacker's website, or impersonates the user, calling the target website interface to perform the operation specified by the attacker.

The difference between reflected XSS and stored XSS is that the malicious code of stored XSS is stored in the database, and the malicious code of reflected XSS is stored in the URL.

Reflected XSS vulnerabilities are common in functions that pass parameters through URLs, such as website search and redirection. Since users need to actively open malicious URLs to take effect, attackers often combine multiple methods to induce users to click.

**3) DOM-based XSS attack steps:**

1. The attacker constructs a special URL containing malicious code.
2. The user opens a URL with malicious code.
3. After the user's browser receives the response, it parses and executes it, and the front-end JavaScript takes out the malicious code in the URL and executes it.
4. Malicious code steals user data and sends it to the attacker's website, or impersonates the user, calling the target website interface to perform the operation specified by the attacker.

The difference between DOM-based XSS and the previous two types: in a DOM-based XSS attack, extracting and executing the malicious code is done entirely by the browser, so it is a security flaw in the front-end JavaScript itself, whereas the other two types are server-side security flaws.

 2. How to defend against XSS attacks?

Since XSS can do so much harm, defensive measures need to be taken when developing a website. Specific measures include:

- Prevent execution in the browser. One way is to keep rendering purely on the front end, without the server concatenating user input into HTML (i.e., avoid server-side rendering of untrusted data); another is to fully escape any content that has to be inserted into HTML (an escaping sketch follows this list). DOM-based attacks are mainly caused by unreliable front-end scripts, so when fetching data, rendering, and concatenating strings you must check for possible malicious code.

- Use CSP. The essence of CSP is to establish a whitelist that tells the browser which external resources may be loaded and executed, preventing injected malicious code from running.

  > 1. CSP refers to the content security policy, its essence is to establish a white list to tell the browser which external resources can be loaded and executed. We only need to configure the rules, how to intercept is implemented by the browser itself.
  > 2. There are usually two ways to enable CSP, one is to set the Content-Security-Policy in the HTTP header, and the other is to set the meta tag `<meta http-equiv="Content-Security-Policy">`

- Protect sensitive information: for example, mark cookies as http-only so that scripts cannot read them. Captchas can also be used to prevent a script from impersonating the user to perform operations.
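As a small illustration of escaping before insertion (the element ids are hypothetical; real projects usually rely on a mature sanitizer or template engine):

// A minimal HTML-escaping helper (a sketch, not a complete sanitizer)
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const comment = '<img src=x onerror="alert(1)">';          // untrusted input
document.getElementById('comment').textContent = comment;   // textContent never parses HTML
document.getElementById('preview').innerHTML = escapeHTML(comment); // escape if innerHTML is unavoidable

// CSP can also be declared in HTML, e.g.:
// <meta http-equiv="Content-Security-Policy" content="default-src 'self'">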

 3. What is a CSRF attack?

# (1) Concept

A CSRF attack is a **cross-site request forgery** attack: the attacker lures the user onto a third-party website, which then sends cross-site requests to the attacked website. If the user has a saved login state on the attacked site, the attacker can exploit that login state to bypass the back end's user verification and perform operations on the server while impersonating the user.

The **essence of a CSRF attack is to exploit the fact that cookies are carried along with requests to the same site and sent to the server, and to use this to impersonate the user.**

# (2) Attack types

There are three common kinds of CSRF attack:

- GET-type CSRF, for example building a request inside an img tag on a website, so that when the user opens the page the request is sent automatically.
- POST-type CSRF, for example building a hidden form that is automatically submitted when the user enters the page.
- Link-type CSRF, for example building a request in the href of an a tag and luring the user into clicking it.

 4. How to defend against CSRF attacks?

**CSRF attacks can be defended against with the following methods:**

- **Same-origin detection**: the server uses the origin or referer information in the HTTP request headers to judge whether the request comes from a site that is allowed access, and filters requests accordingly. When neither origin nor referer is present, the request is blocked outright. The drawbacks are that the referer can be forged in some cases, and that links from search engines would also be blocked; websites therefore generally allow page requests from search engines, and that allowance may in turn be exploited by attackers. (The Referer field tells the server which page the request was linked from.)
- **CSRF Token verification**: the server returns a random token to the user; when the site makes a request, the token is added to the request parameters and the server then verifies it. This solves the impersonation problem of cookie-only authentication, but it has downsides: the token must be added to every request on the site, which is cumbersome; and since a site usually has more than one server, if a load balancer routes the request to a server whose session does not hold the token, verification fails. This can be solved by changing how the token is constructed.
- **Double-submit cookie**: when the user visits a page, the server injects a cookie containing a random string into the requested domain; when the user sends a request to the server again, the string is taken out of the cookie, added to the URL parameters, and the server verifies the request by comparing the value in the cookie with the value in the parameters. This exploits the fact that the attacker can only cause cookies to be sent but cannot read them. It is more convenient than a CSRF token and avoids the distributed-session problem, but it fails if the site has an XSS vulnerability, and it cannot isolate subdomains (see the sketch after this list).
- **Set the SameSite attribute on cookies** to restrict cookies from being used by third parties, so they cannot be exploited by attackers. SameSite has two modes: in strict mode the cookie is never sent as a third-party cookie under any circumstances; in lax mode the cookie can be sent with GET requests that cause a page navigation.
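A rough sketch of the double-submit-cookie idea plus SameSite, using Node's built-in http module (cookie name, header name and port are assumptions):

// Sketch of double-submit-cookie style CSRF protection
const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/') {
    // Issue a random CSRF token as a cookie; SameSite limits third-party use
    const token = crypto.randomBytes(16).toString('hex');
    res.setHeader('Set-Cookie', `csrfToken=${token}; SameSite=Strict; Path=/`);
    res.end('page with token');
  } else if (req.method === 'POST') {
    // Compare the token echoed in a custom header with the one in the cookie
    const cookieToken = (req.headers.cookie || '').match(/csrfToken=([^;]+)/)?.[1];
    const headerToken = req.headers['x-csrf-token'];
    if (cookieToken && cookieToken === headerToken) {
      res.end('ok');
    } else {
      res.statusCode = 403;
      res.end('CSRF check failed');
    }
  }
}).listen(3000);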

 5. What is a man-in-the-middle attack? How to prevent man-in-the-middle attacks?

A man-in-the-middle (MITM) attack means the attacker establishes independent connections with both ends of a communication and relays the data between them, so that both ends believe they are talking directly to each other over a private connection, while in fact the entire session is controlled by the attacker. In a man-in-the-middle attack, the attacker can intercept the conversation between the two parties and insert new content.

The attack process is as follows:

- The client sends a request to the server; the request is intercepted by the middleman
- The server sends its public key to the client
- The middleman intercepts the public key and keeps it, then generates a **forged** public key and sends it to the client
- After receiving the forged public key, the client generates an encrypted hash value and sends it to the server
- The middleman intercepts the encrypted value, decrypts it with his own private key to obtain the real key, and at the same time generates a fake encrypted value and sends it to the server
- The server decrypts it with its private key to obtain the fake key, then encrypts the data with it and transmits it to the client

 6. What are the problems that may cause front-end security?

- Cross-Site Scripting (XSS): a code-injection technique, called XSS to distinguish it from CSS. It was common in early online forums: sites did not strictly restrict user input, which let attackers upload scripts into posts so that anyone browsing the page would run the malicious script. The payload can be, but is not limited to, JavaScript / CSS / Flash;
- iframe abuse: content inside an iframe is provided by a third party and is not under our control by default; it can run JavaScript, Flash plug-ins, pop-up dialog boxes and so on inside the iframe, which can damage the front-end user experience;
- Cross-Site Request Forgery (CSRF): the attacker forces an authenticated user to perform unintended state-changing operations, such as updating personal or settings information; it is a passive attack;
- Malicious third-party libraries: whether for back-end server applications or front-end development, most work is done rapidly with the help of frameworks and libraries; once a third-party library is implanted with malicious code, security problems easily follow.

 7. What are the types of network hijacking and how to prevent them?

There are two types of network hijacking:

(1) **DNS hijacking**: (typing JD.com but being forcibly redirected to Taobao is DNS hijacking)

- Forced DNS resolution: by modifying the carrier's local DNS records, user traffic is directed to a cache server, which then replies with a 302 redirect to the hijacked content to guide the user to fetch it

(2) **HTTP hijacking**: (visiting Google but constantly seeing injected ads). Because HTTP is transmitted in plaintext, the carrier can modify your HTTP response content (i.e., inject advertisements).

DNS hijacking has been regulated because it is suspected of being illegal, so it is now rare, but HTTP hijacking is still very common. The most effective defense is to use HTTPS across the whole site: encrypting HTTP means carriers cannot obtain the plaintext and therefore have no way to hijack your response content.

 二、Processes and threads

 1. The concepts of process and thread

Essentially, both processes and threads are descriptions of CPU time slices:

- A process describes the time the CPU needs to run instructions and to load and save context; at the application level it represents a program.
- A thread is a smaller unit inside a process and describes the time needed to execute a segment of instructions.

**A process is the smallest unit of resource allocation; a thread is the smallest unit of CPU scheduling.**

A process is a running instance of a program. In more detail: when a program is started, the operating system creates a block of memory for it to hold the code, the running data and a main thread that executes tasks; we call such a running environment a **process**. **Processes run on virtual memory, which exists to resolve the contradiction between users' unlimited demand for hardware resources and the limited hardware actually available. From the operating system's point of view, virtual memory is the swap file; from the processor's point of view, virtual memory is the virtual address space.**

When there are many programs, memory may not be enough, so the operating system gives each process its own virtual address space; the same block of physical memory can then map to different (or the same) virtual addresses in different processes, which effectively increases the memory programs can use.

The relationship between processes and threads has four characteristics:

(1) An error in any thread of a process causes the whole process to crash.

(2) Threads share the data of the process they belong to.

(3) When a process is closed, the operating system reclaims the memory it occupied. **When a process exits, the operating system reclaims all the resources the process requested; even if some thread leaked memory through improper operations, that memory is correctly reclaimed when the process exits.**

(4) **The contents of different processes are isolated from each other.** Process isolation exists so that processes do not interfere with each other: each process can only access its own data, which prevents, say, process A from writing data into process B. Precisely because data is strictly isolated between processes, if one process crashes or hangs it does not affect the others. If processes need to exchange data, a mechanism for inter-process communication is required.

最新的 Chrome 浏览器包括:

- 1 个浏览器主进程
- 1 个 GPU 进程
- 1 个网络进程
- 多个渲染进程
- 多个插件进程

这些进程的功能:

- **浏览器进程**:主要负责界面显示、用户交互、子进程管理,同时提供存储等功能。
- **渲染进程**:核心任务是将 HTML、CSS 和 JavaScript 转换为用户可以与之交互的网页,排版引擎 Blink 和 JavaScript 引擎 V8 都是运行在该进程中,默认情况下,Chrome 会为每个 Tab 标签创建一个渲染进程。出于安全考虑,渲染进程都是运行在沙箱模式下。
- **GPU 进程**:其实, GPU 的使用初衷是为了实现 3D CSS 的效果,只是随后网页、Chrome 的 UI 界面都选择采用 GPU 来绘制,这使得 GPU 成为浏览器普遍的需求。最后,Chrome 在其多进程架构上也引入了 GPU 进程。
- **网络进程**:主要负责页面的网络资源加载,之前是作为一个模块运行在浏览器进程里面的,直至最近才独立出来,成为一个单独的进程。
- **插件进程**:主要是负责插件的运行,因插件易崩溃,所以需要通过插件进程来隔离,以保证插件进程崩溃不会对浏览器和页面造成影响。

所以,**打开一个网页,最少需要四个进程**:1 个网络进程、1 个浏览器进程、1 个 GPU 进程以及 1 个渲染进程。如果打开的页面有运行插件的话,还需要再加上 1 个插件进程。

虽然多进程模型提升了浏览器的稳定性、流畅性和安全性,但同样不可避免地带来了一些问题:

- **更高的资源占用**:因为每个进程都会包含公共基础结构的副本(如 JavaScript 运行环境),这就意味着浏览器会消耗更多的内存资源。
- **更复杂的体系架构**:浏览器各模块之间耦合性高、扩展性差等问题,会导致现在的架构已经很难适应新的需求了。

 2. 进程和线程的区别

- 进程可以看做独立应用,线程不能
- 资源:进程是cpu资源分配的最小单位(是能拥有资源和独立运行的最小单位);线程是cpu调度的最小单位(线程是建立在进程的基础上的一次程序运行单位,一个进程中可以有多个线程)。
- 通信方面:线程间可以通过直接共享同一进程中的资源,而进程通信需要借助 进程间通信。
- 调度:进程切换比线程切换的开销要大。线程是CPU调度的基本单位,线程的切换不会引起进程切换,但某个进程中的线程切换到另一个进程中的线程时,会引起进程切换。
- 系统开销:由于创建或撤销进程时,系统都要为之分配或回收资源,如内存、I/O 等,其开销远大于创建或撤销线程时的开销。同理,在进行进程切换时,涉及当前执行进程 CPU 环境还有各种各样状态的保存及新调度进程状态的设置,而线程切换时只需保存和设置少量寄存器内容,开销较小。

 3. What threads does the browser's renderer process have?

The browser's renderer process has five kinds of threads in total:

**(1) GUI rendering thread**

Responsible for rendering the browser page: it parses HTML and CSS, builds the DOM tree, the CSSOM tree and the render tree, and paints the page. This thread runs when the page needs a **repaint**, or when some operation triggers a **reflow**.

Note: the GUI rendering thread and the JS engine thread are mutually exclusive. While the JS engine is executing, the GUI thread is suspended, and GUI updates are saved in a queue and executed as soon as the JS engine is idle.

**(2)JS引擎线程**

The JS engine thread, also called the JS kernel, is responsible for processing JavaScript: parsing and running the code. The JS engine thread waits for tasks to arrive in the task queue and then processes them; at any moment there is only one JS engine thread running the JS program in a tab page.

Note: The GUI rendering thread and the JS engine thread are mutually exclusive, so if the execution time of JS is too long, the rendering of the page will be incoherent, resulting in blocking of page rendering and loading.

**(3) Event trigger thread**

The **event trigger thread** belongs to the browser rather than the JS engine and is used to control the event loop. When the JS engine executes code such as setTimeout (or when other threads in the browser kernel are involved, such as mouse clicks or asynchronous AJAX requests), the corresponding task is handed to the event trigger thread; when the event meets its trigger condition and fires, this thread adds the event to the end of the pending queue, waiting for the JS engine to process it.

Note: Due to the single-threaded relationship of JS, the events in these pending queues have to be queued for processing by the JS engine (executed when the JS engine is idle);

**(4) Timer trigger thread**

The **timer trigger thread** is the thread where setInterval and setTimeout live. The browser's timer is not counted by the JS engine, because the JS engine is single-threaded and timing accuracy would be affected if it were blocked; a separate thread is therefore used to time and trigger the timers. When the time is up, the task is added to the event queue and executed once the JS engine is idle. So tasks in a timer may not run exactly at the set time: the timer only adds the task to the event queue at the specified point in time.

Note: W3C stipulates in the HTML standard that the timing time of the timer cannot be less than 4ms. If it is less than 4ms, the default is 4ms.

**(5) Asynchronous http request thread**

- After XMLHttpRequest is connected, a new thread request is opened through the browser;
- When a state change is detected, if a callback function is set, the asynchronous thread will generate a state change event, put the callback function into the event queue, and wait for the JS engine to be idle before executing it;

 4. Inter-process communication methods

**(1) Pipeline communication**

A pipe is the most basic inter-process communication mechanism. **A pipe is a buffer created by the operating system in the kernel: process 1 can copy the data to be exchanged into this buffer, and process 2 can read it out.**

Features of the pipeline:

- Only one-way communication
- Only related processes (such as parent and child) can communicate
- Depends on the file system
- Life cycle follows the process
- Byte stream-oriented service
- Synchronization mechanism is provided inside the pipeline

**(2) Message queue communication**

A message queue is a list of messages. Users can add messages, read messages, etc. in the message queue. Message queues provide a way to send a block of data from one process to another. Each data block is considered to have a type, and the receiving process can independently receive data structures containing different types. The synchronization and blocking problems of named pipes can be avoided by sending messages. But the message queue is the same as the named pipe, each data block has a maximum length limit.

Using message queues for inter-process communication may receive restrictions on the maximum length of data blocks, which is also a shortcoming of this communication method. If inter-process communication occurs frequently, the process needs to frequently read the data in the queue to the memory, which is equivalent to indirectly copying from one process to another, which takes time.

**(3) Semaphore communication**

The biggest problem with shared memory is the problem of multi-process competition for memory, just like thread safety issues. We can use semaphores to solve this problem. The essence of the semaphore is a counter, which is used to realize mutual exclusion and synchronization between processes. For example, the initial value of the semaphore is 1, and then when process a accesses memory 1, we set the value of the semaphore to 0, and then when process b also comes to access memory 1, we see that the value of the semaphore is 0 It is known that there is already a process accessing memory 1, and at this time process b will not be able to access memory 1. Therefore, semaphores are also a means of communication between processes.

**(4) Signal communication**

Signals are one of the oldest methods of interprocess communication used in Unix systems. The operating system notifies the process through a signal that a predetermined event (one of a set of events) has occurred in the system, and it is also a primitive mechanism for communication and synchronization between user processes.

**(5) Shared memory communication**

Shared memory maps a segment of memory that can be accessed by other processes. This shared memory is created by one process, but multiple processes can access it (multiple processes can access the same block of memory). Shared memory is the fastest IPC method; it was designed precisely because other inter-process communication methods are inefficient. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and communication between processes.

**(6) Socket communication**

Shared memory, pipes, semaphores and message queues, discussed above, are all for multiple processes on one host. Can two processes thousands of miles apart communicate? They must be able to, and that is where Socket comes in: for example, when we send an HTTP request from the browser and the server returns the corresponding data, that is Socket-based communication.

 5. What are zombie processes and orphan processes?

- **Orphan process**: the parent process has exited while one or more of its child processes are still running; those children become orphan processes. Orphan processes are adopted by the init process (process number 1), which collects their exit status for them.
- **Zombie process**: the child process finishes before the parent process, and the parent has not released the resources occupied by the child, so the child's process descriptor is still kept in the system; such a process is called a zombie process.

 6. What causes deadlock? How to solve the deadlock problem?

A deadlock is a stalemate caused by multiple processes competing for resources during execution; once processes are in this stalemate, none of them can make progress without outside intervention.

Resources in a system fall into two categories:

- Preemptible resources: after a process obtains such a resource, it can be taken away again by other processes or by the system; the CPU and main memory are preemptible resources;
- Non-preemptible resources: once the system allocates such a resource to a process, it cannot be forcibly reclaimed and can only be released by the process itself after use, e.g. tape drives and printers.

**Causes of deadlock:**

**(1) Competition for resources**

- One kind of resource competition in deadlock is **competition for non-preemptible resources** (for example: there is only one printer in the system, which process P1 may use; if P1 has occupied the printer and P2 then also requires it to print, P2 will block)
- The other kind is **competition for temporary resources** (temporary resources include hardware interrupts, signals, messages, and messages in buffers); deadlock usually occurs when the order of message communication is improper

**(2) Improper order of progress between processes**

If P1 holds resource R1 and P2 holds resource R2, the system is in an unsafe state, because if both processes move forward a deadlock may occur. For example, when P1 runs to P1: Request(R2), it blocks because R2 is already occupied by P2; when P2 runs to P2: Request(R1), it also blocks because R1 is already occupied by P1, and so a deadlock occurs.

**Necessary conditions for a deadlock to occur:**

- Mutually exclusive condition: The process requires exclusive control over the allocated resources, that is, a resource is only occupied by one process within a period of time.
- Request and hold conditions: When a process is blocked by requesting resources, hold on to the obtained resources.
- Non-deprivation condition: The resources obtained by the process cannot be deprived before they are used up, and can only be released by themselves when they are used up.
- Circular wait condition: when a deadlock occurs, there must be a circular chain of processes and resources.

**Methods to prevent deadlock:**

- One-time resource allocation: allocate all resources at once, so the process makes no further requests (breaks the request-and-hold condition)
- As long as one resource cannot be allocated, allocate no other resources to this process (breaks the request-and-hold condition)
- Preemptible resources: when a process has obtained some resources but cannot obtain the others, it releases the resources it already holds (breaks the no-preemption condition)
- Ordered resource allocation: the system assigns a number to every type of resource, each process requests resources in increasing order of these numbers and releases them in the opposite order (breaks the circular-wait condition)

 7. How to implement communication between multiple tabs in the browser?

Implementing communication between multiple tabs is essentially done through the mediator pattern. Since tabs have no way to talk to each other directly, we can find a mediator, have the tabs communicate with the mediator, and let the mediator forward the messages. The methods are as follows:

- **Use the websocket protocol**: because websocket allows server push, the server can act as the mediator. A tab sends data to the server, and the server pushes and forwards it to the other tabs.
- **Use a SharedWorker**: a SharedWorker creates a single thread for the lifetime of the page, and opening multiple pages still uses that same thread. The shared thread can then act as the mediator: the tabs share one thread and exchange data through it.
- **Use localStorage**: we can listen in one tab for localStorage change events; when another tab modifies the data, the listener lets us receive it. Here the localStorage object acts as the mediator (see the sketch after this list).
- **Use postMessage**: if we can obtain a reference to the corresponding tab, we can communicate with the postMessage method.
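A minimal sketch of the localStorage approach: the `storage` event fires in the other same-origin tabs when one tab writes (the key name is an example):

// Tab A: write a message
localStorage.setItem('broadcast', JSON.stringify({ text: 'hello', at: Date.now() }));

// Tab B: the 'storage' event fires in *other* tabs of the same origin when localStorage changes
window.addEventListener('storage', event => {
  if (event.key === 'broadcast') {
    const message = JSON.parse(event.newValue);
    console.log('received from another tab:', message.text);
  }
});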

 8. Understanding of Service Worker

A Service Worker is an **independent thread** running behind the browser and is generally used to implement caching. To use a Service Worker, the transport protocol must be **HTTPS**: because Service Workers intercept requests, HTTPS is required to guarantee security.

The caching feature of a Service Worker is generally done in three steps: first register the Service Worker; after the `install` event fires, cache the required files; then, on the next visit, intercept the request and check whether a cache exists. If it does, read the cached file directly; otherwise request the data. Here is an implementation of these steps:

// index.js
if (navigator.serviceWorker) {
  navigator.serviceWorker
    .register('sw.js')
    .then(function(registration) {
      console.log('service worker 注册成功')
    })
    .catch(function(err) {
      console.log('servcie worker 注册失败')
    })
}
// sw.js
// Listen for the `install` event and cache the required files in the callback
self.addEventListener('install', e => {
  e.waitUntil(
    caches.open('my-cache').then(function(cache) {
      return cache.addAll(['./index.html', './index.js'])
    })
  )
})
// Intercept all fetch (request) events
// If the requested data is already cached, use the cache directly; otherwise request it
self.addEventListener('fetch', e => {
  e.respondWith(
    caches.match(e.request).then(function(response) {
      if (response) {
        return response
      }
      console.log('fetch source')
    })
  )
})

Open the page, you can see that the Service Worker has started in `Application` in the developer tools:

In the Cache, you can also find that the required files have been cached

三、Browser cache

 1. Understanding of browser caching mechanism

**The whole process of browser caching:**

- The first time the browser loads a resource, the server returns 200; the browser downloads the resource from the server and caches the resource file and its response headers for comparison on the next load;
- The next time the resource is loaded, because the strong (mandatory) cache has higher priority, the browser first compares the time elapsed since the last 200 response with the max-age set by cache-control; if it has not expired, the strong cache is hit and the resource is read directly from local storage. If the browser does not support HTTP 1.1, the expires header is used to determine expiry;
- If the resource has expired, the strong cache was not hit, so the negotiated cache starts: a request carrying If-None-Match and If-Modified-Since is sent to the server;
- After receiving the request, the server first uses the Etag value to judge whether the requested file has been modified. If the Etag matches, nothing changed, the negotiated cache is hit and 304 is returned; if it does not match, the file has changed and the new resource file with a new Etag value is returned with 200;
- If the request received by the server has no Etag, If-Modified-Since is compared with the file's last modification time; if they match, the negotiated cache is hit and 304 is returned; if not, the new last-modified and the file are returned with 200.

![img](https://bingjs.com:8008/img/llq/llq5.png)

Many websites add a version number to their resources. The purpose is that every time a JS or CSS file is upgraded, changing the version number forces the client browser to bypass its cache and re-download the new JS or CSS file, ensuring users get the site's latest updates in time.

 2. Where are the browser resource cache locations?

Resources can be cached in the following locations, listed from highest to lowest priority:

1. **Service Worker**: the Service Worker runs on a separate thread outside the JavaScript main thread. Because it is detached from the browser window it cannot access the DOM directly, but it can implement offline caching, message push, network proxying and other features. It lets us freely control which files are cached, how the cache is matched and how it is read, and **the cache is persistent**. When the Service Worker does not hit the cache, it needs to call `fetch` to get the data; that is, if the cache is not hit in the Service Worker, data is looked up according to the cache priority order. **Whether the data finally comes from the Memory Cache or from a network request, the browser will show it as coming from the Service Worker.**
2. **Memory Cache**: the in-memory cache. It is the fastest, but although reads are efficient the cache is short-lived and is released along with the process: once the tab is closed, the memory cache is freed.
3. **Disk Cache**: the cache stored on disk. Reads are a bit slower, but anything can be stored to disk; compared with the Memory Cache it wins on capacity and retention time. Among all browser caches, Disk Cache has basically the widest coverage. It decides, based on the fields in the HTTP header, which resources need to be cached, which can be used directly without a request, and which have expired and need to be re-requested. **Even across sites, once a resource with the same address has been cached on disk, it will not be requested again.**

**Push Cache**: Push Cache is part of HTTP/2 and is only used when the three caches above all miss. Its cache time is also very short: it exists only within a session and is released once the session ends. It has the following characteristics:

- All resources can be pushed, but Edge and Safari have poor compatibility
- Resources with `no-cache` and `no-store` can be pushed
- Once the connection is closed, the Push Cache is released
- Multiple pages can use the same HTTP/2 connection, which means they can share the same Push Cache
- Entries in the Push Cache can only be used once
- The browser can refuse pushes of resources it already has
- Resources can be pushed for other domain names

 3. The difference between negotiation cache and strong cache

# (1) Strong cache

When using a strong cache policy, if the cache resource is valid, the cache resource will be used directly without having to initiate a request to the server.

A strong cache policy can be set in two ways, namely the Expires attribute and the Cache-Control attribute in the http header information.

(1) The server specifies the expiration time of the resource by adding the Expires attribute in the response header. Within the expiration time, the resource can be used by the cache without sending a request to the server. This time is an absolute time, it is the time of the server, so there may be such a problem that the time of the client is inconsistent with the time of the server, or the user can modify the time of the client, which may affect the cache hit result.

(2) Expires is the HTTP 1.0 approach. Because of some of its shortcomings, HTTP 1.1 introduced a new header attribute, Cache-Control, which provides more precise control over resource caching and has many different values.

Fields that can be set by `Cache-Control`:

- `public`: resources with this value can be cached by any object (the requesting client, proxy servers, and so on). This value is not used much; max-age= is generally used for more precise control;
- `private`: resources with this value can only be cached by the user's browser; proxy servers are not allowed to cache them. In practice, this value is usually set for HTML that contains user information, to prevent proxy servers (CDNs) from caching it;
- `no-cache`: the browser must confirm with the server whether the resource has changed before using the cache; if it has not changed, the cached resource is used;
- `no-store`: forbids any caching; a new request is sent to the server every time to fetch the latest resource;
- `max-age=`: sets the maximum validity period of the cache, in seconds;
- `s-maxage=`: higher priority than max-age=; applies only to shared caches (CDNs), and takes precedence over max-age or the Expires header;
- `max-stale[=]`: indicates that the client is willing to accept an expired resource, but not beyond the given time limit.

Generally speaking, only one of the methods needs to be set to implement a strong caching strategy. When the two methods are used together, the priority of Cache-Control is higher than that of Expires.

**no-cache and no-store are easily confused:**

- no-cache means first confirming with the server whether there is a resource update before making a judgment. That is to say, there is no strong cache, but there will be a negotiated cache;
- no-store means that no cache is used, and resources are directly obtained from the server for each request.
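For illustration, a strong-cache response might be produced like this with Node's http module (the max-age value is arbitrary):

// Sketch: setting strong-cache headers from a Node server
const http = require('http');

http.createServer((req, res) => {
  // Cache-Control takes precedence over Expires when both are present
  res.setHeader('Cache-Control', 'public, max-age=31536000'); // cacheable for one year
  res.setHeader('Expires', new Date(Date.now() + 31536000 * 1000).toUTCString());
  res.end('/* static asset body */');
}).listen(8080);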

# (2) Negotiation cache

If the mandatory cache is hit, we do not need to initiate a new request, and use the cached content directly. If the mandatory cache is not hit, if the negotiation cache is set, the negotiation cache will play a role at this time.

As mentioned above, the negotiated cache comes into play in two situations:

- `max-age=xxx` has expired
- the Cache-Control value is `no-cache`

When using the negotiated caching strategy, a request will be sent to the server first, and if the resource has not been modified, a 304 status will be returned to let the browser use the local cached copy. If the resource has been modified, return the modified resource.

The negotiation cache can also be set in two ways, namely the **Etag** and **Last-Modified** attributes in the http header information.

(1) The server indicates the time when the resource was last modified by adding the Last-Modified attribute in the response header. When the browser makes a request next time, it will add an If-Modified-Since attribute in the request header. The attribute value is The value of Last-Modified when the resource was last returned. When the request is sent to the server, the server will compare this attribute with the last modification time of the resource to determine whether the resource has been modified. If the resource has not been modified, return a 304 status to let the client use the local cache. If the resource has been modified, return the modified resource. One disadvantage of using this method is that the last modification time marked by Last-Modified can only be accurate to the second level. If some files have been modified multiple times within 1 second, the file has changed but the Last-Modified But it has not changed, which will cause inaccurate cache hits.

(2) Because of the possible inaccuracy of Last-Modified, another way is provided in http, which is the Etag attribute. When the server returns the resource, it adds the Etag attribute in the header information. This attribute is a unique identifier generated by the resource. When the resource changes, this value will also change. In the next resource request, the browser will add an If-None-Match attribute to the request header, and the value of this attribute is the value of the Etag of the resource returned last time. After receiving the request, the service will compare this value with the current Etag value of the resource to determine whether the resource has changed and whether the resource needs to be returned. In this way, it is more accurate than Last-Modified.

When both the Last-Modified and Etag attributes are present, Etag takes priority. When using negotiated caching, the server needs to consider load balancing, so the Last-Modified of a resource should be kept consistent across servers; because the Etag value differs from server to server, it is best not to set the Etag property when load balancing is involved.
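A sketch of ETag-based revalidation with Node's http module (hashing the body with MD5 here is just one way to produce an ETag):

const http = require('http');
const crypto = require('crypto');

const body = 'hello cache';
const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

http.createServer((req, res) => {
  res.setHeader('ETag', etag);
  res.setHeader('Cache-Control', 'no-cache');      // always revalidate with the server
  if (req.headers['if-none-match'] === etag) {
    res.statusCode = 304;                          // not modified: browser uses its local copy
    res.end();
  } else {
    res.end(body);                                 // changed (or first request): send the full body
  }
}).listen(8081);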

**Summary:**

Both the strong-cache strategy and the negotiated-cache strategy use the local cached copy directly when the cache hits; the difference is that the negotiated cache first sends a request to the server. When the cache misses, both send a request to the server to fetch the resource. In the actual caching mechanism the two strategies are used together: the browser first judges from the request information whether the strong cache is hit, and if so uses the resource directly. If not, it sends a request to the server with the relevant headers and the negotiated cache is used: if the negotiated cache hits, the server does not return the resource and the browser uses its local copy; if the negotiated cache also misses, the server returns the latest resource to the browser.

 4. Why is browser caching required?

For the browser's cache, it is mainly aimed at the front-end static resources. The best effect is that after the request is initiated, the corresponding static resources are pulled and stored locally. If the server's static resources have not been updated, then the next request can be read directly from the local. If the server's static resources have been updated, then when we request again, we will go to the server to pull new resources and save them. locally. This greatly reduces the number of requests and improves the performance of the website. This will use the browser's caching strategy.

The so-called **browser cache** means that the browser stores the static resources requested by the user in the local disk of the computer. When the browser visits again, it can be loaded directly from the local, without the need to go to the server to request up.

Using browser cache has the following advantages:

- Reduce the burden on the server and improve the performance of the website
- Speed up the loading of client web pages
- Reduce redundant network data transmission

 5. What is the difference between clicking the refresh button or pressing F5, pressing Ctrl+F5 (forced refresh), and pressing Enter in the address bar?

- Clicking the refresh button or pressing F5: the browser treats the local cache as expired directly, but still sends If-Modified-Since and If-None-Match, which means the server checks freshness; the result may be 304 or 200.
- Ctrl+F5 (force refresh): the browser not only expires the local files but also omits If-Modified-Since and If-None-Match, as if the resource had never been requested before; the result is 200.
- Pressing Enter in the address bar: the browser initiates the request following the normal flow: check locally whether the cache has expired, then let the server check freshness, and finally return the content.

四、Browser composition

 1. Understanding of the browser

The browser's main function is to present the web resource chosen by the user: it requests the resource from the server and displays it in the browser window. The resource format is usually HTML, but also includes PDF, images and other formats. The user specifies the location of the requested resource with a URI (Uniform Resource Identifier).

The HTML and CSS specifications define how browsers interpret HTML documents; these specifications are maintained by the W3C, the organization responsible for web standards. But browser vendors developed their own extensions and did not follow the specifications completely, which created serious compatibility problems for web developers.

A browser can be divided into two parts: the shell and the kernel. There are relatively many kinds of shells and relatively few kernels. Some browsers do not distinguish between shell and kernel; the clear separation only appeared after Mozilla split Gecko out.

- The shell is the browser's outer layer, such as menus and toolbars; it mainly provides the user interface and settings, and calls the kernel to implement the various functions.
- The kernel is the core of the browser: a program or module that displays content based on markup languages.

 2. Understanding of the browser kernel

The browser kernel consists mainly of two parts:

- The responsibility of the rendering engine is to render, that is, to display the requested content in the browser window. By default, the rendering engine can display html, xml documents and pictures, and it can also display other types of data with the help of plug-ins, such as using the PDF reader plug-in to display PDF format.
- JS engine: parse and execute javascript to achieve dynamic web page effects.

At first, the rendering engine and the JS engine were not clearly distinguished. Later, the JS engine became more and more independent, and the kernel tended to only refer to the rendering engine.

 3. Common browser kernel comparison

- **Trident**: This browser kernel is the kernel used by the IE browser. Because IE had a large market share in the early days, this kernel is more popular. Many web pages were written according to this kernel standard before. But in fact, this kernel does not support real web standards very well. However, due to the high market share of IE, Microsoft has not updated the Trident kernel for a long time, which leads to the disconnection between the Trident kernel and the W3C standard. In addition, a large number of bugs in the Trident kernel and other security issues have not been resolved, and some experts and scholars have publicly believed that IE browser is not safe, so many users began to turn to other browsers.
- **Gecko**: This is the kernel used by Firefox and Flock. The advantage of this kernel is that it is powerful and rich, and can support many complex web page effects and browser extension interfaces, but the cost is obvious and consumes a lot of resources. , such as memory.
- **Presto**: Opera used to use the Presto kernel, which was known as the fastest kernel for web browsing thanks to its inherent development advantages; when handling scripting languages such as JS it is roughly 3 times faster than other kernels. The downside is that some web page compatibility was sacrificed for speed.
- **Webkit**: Webkit is the kernel adopted by Safari. Its advantage is that the web browsing speed is faster. Although it is not as good as Presto, it is better than Gecko and Trident. The compatibility of web page codes is low, which may cause some non-standard web pages to be displayed incorrectly. The predecessor of WebKit is the KHTML engine of the KDE team. It can be said that WebKit is an open source branch of KHTML.
- **Blink**: Google published a blog on the Chromium Blog, saying that it will part ways with Apple's open source browser core Webkit, and develop the Blink rendering engine (browser core) in the Chromium project, which is built into the Chrome browser. In fact, the Blink engine is a branch of Webkit, just like webkit is a branch of KHTML. The Blink engine is now jointly developed by Google and Opera Software. As mentioned above, Opera abandoned its own Presto kernel and joined the Google camp to develop Blink with Google.

 4. Kernels used by common browsers

(1) IE browser kernel: Trident kernel, also commonly known as IE kernel;

(2) Chrome browser kernel: collectively referred to as the Chromium kernel or Chrome kernel, which used to be the Webkit kernel and is now the Blink kernel;

(3) Firefox browser kernel: Gecko kernel, commonly known as Firefox kernel;

(4) Safari browser kernel: Webkit kernel;

(5) Opera browser kernel: At first it was its own Presto kernel, and later joined the Google army, from Webkit to Blink kernel;

(6) 360 browser, cheetah browser kernel: IE + Chrome dual-core;

(7) Sogou, Aoyou, QQ browser kernel: Trident (compatibility mode) + Webkit (high-speed mode);

(8) Baidu browser, Window of the World kernel: IE kernel;

(9) 2345 browser kernel: It seems that it used to be an IE kernel, and now it is also a dual-core IE + Chrome;

(10) UC browser kernel: There are different opinions on this. UC says it is the U3 kernel developed by themselves, but it seems to be based on Webkit and Trident, and it is also said to be based on the Firefox kernel.

 5. The main components of the browser

- **UI** - Includes address bar, forward/back buttons, bookmark menu, etc. Except for the page that you request that is displayed in the main browser window, each part of the display is the user interface.
- **Browser Engine** - Passes instructions between the UI and the rendering engine.
- **rendering engine** - responsible for displaying the requested content. If the requested content is HTML, it is responsible for parsing the HTML and CSS content and displaying the parsed content on the screen.
- **Network**- Used for network calls, such as HTTP requests. Its interface is platform-independent and provides underlying implementations for all platforms.
- **UI Backend** - Used to draw basic widgets like combo boxes and windows. It exposes a common platform-independent interface, while using the operating system's user interface methods under the hood.
- **JavaScript Interpreter**. Used to parse and execute JavaScript code.
- **Datastore** - This is the persistence layer. Browsers need to save various data, such as cookies, on the hard disk. The new HTML specification (HTML5) defines a "web database", which is a complete (but lightweight) in-browser database.

It is worth noting that, unlike most browsers, each tab of the Chrome browser corresponds to a rendering engine instance. Each tab is an independent process.

五、Browser rendering principles

 1. The rendering process of the browser

Browser rendering mainly has the following steps:

- 首先解析收到的文档,根据文档定义构建一棵 DOM 树,DOM 树是由 DOM 元素及属性节点组成的。
- 然后对 CSS 进行解析,生成 CSSOM 规则树。
- 根据 DOM 树和 CSSOM 规则树构建渲染树。渲染树的节点被称为渲染对象,渲染对象是一个包含有颜色和大小等属性的矩形,渲染对象和 DOM 元素相对应,但这种对应关系不是一对一的,不可见的 DOM 元素不会被插入渲染树。还有一些 DOM元素对应几个可见对象,它们一般是一些具有复杂结构的元素,无法用一个矩形来描述。
- 当渲染对象被创建并添加到树中,它们并没有位置和大小,所以当浏览器生成渲染树以后,就会根据渲染树来进行布局(也可以叫做回流)。这一阶段浏览器要做的事情是要弄清楚各个节点在页面中的确切位置和大小。通常这一行为也被称为“自动重排”。
- 布局阶段结束后是绘制阶段,遍历渲染树并调用渲染对象的 paint 方法将它们的内容显示在屏幕上,绘制使用 UI 基础组件。

**注意**:这个过程是逐步完成的,为了更好的用户体验,渲染引擎将会尽可能早的将内容呈现到屏幕上,并不会等到所有的html 都解析完成之后再去构建和布局 render 树。它是解析完一部分内容就显示一部分内容,同时,可能还在通过网络下载其余内容。

 2. 浏览器渲染优化

**(1)针对JavaScript**:JavaScript既会阻塞HTML的解析,也会阻塞CSS的解析。因此我们可以对JavaScript的加载方式进行改变,来进行优化:

(1)尽量将JavaScript文件放在body的最后

(2) body中间尽量不要写 `<script>`标签

(3)`<script>`标签的引入资源方式有三种,有一种就是我们常用的直接引入,还有两种就是使用 async 属性和 defer 属性来异步引入,两者都是去异步加载外部的JS文件,不会阻塞DOM的解析(尽量使用异步加载)。三者的区别如下:

- **script **立即停止页面渲染去加载资源文件,当资源加载完毕后立即执行js代码,js代码执行完毕后继续渲染页面;
- **async **是在下载完成之后,立即异步加载,加载好后立即执行,多个带async属性的标签,不能保证加载的顺序;
- **defer **是在下载完成之后,立即异步加载。加载好后,如果 DOM 树还没构建好,则先等 DOM 树解析好再执行;如果DOM树已经准备好,则立即执行。多个带defer属性的标签,按照顺序执行。

**(2)针对CSS:使用CSS有三种方式:使用link、@import、内联样式**,其中link和@import都是导入外部样式。它们之间的区别:

- **link**:浏览器会派发一个新等线程(HTTP线程)去加载资源文件,与此同时GUI渲染线程会继续向下渲染代码
- **@import**:GUI渲染线程会暂时停止渲染,去服务器加载资源文件,资源文件没有返回之前不会继续渲染(阻碍浏览器渲染)
- **style**:GUI直接渲染

外部样式如果长时间没有加载完毕,浏览器为了用户体验,会使用浏览器会默认样式,确保首次渲染的速度。所以CSS一般写在headr中,让浏览器尽快发送请求去获取css样式。

所以,在开发过程中,导入外部样式使用link,而不用@import。如果css少,尽可能采用内嵌样式,直接写在style标签中。

**(3)针对DOM树、CSSOM树:**

可以通过以下几种方式来减少渲染的时间:

- HTML文件的代码层级尽量不要太深
- 使用语义化的标签,来避免不标准语义化的特殊处理
- 减少CSSD代码的层级,因为选择器是从右向左进行解析的

**(4) Reduce reflow and repaint:**

- When operating on the DOM, work on nodes as deep (low-level) in the DOM tree as possible
- Do not use `table` layout; a small change may cause the whole `table` to be laid out again
- Avoid CSS expressions
- Do not manipulate element styles frequently; for static pages, change the class name rather than the style
- Use absolute or fixed positioning to take elements out of the document flow, so that changes to them do not affect other elements
- Avoid frequent DOM operations; create a `documentFragment`, apply all DOM operations to it, and finally add it to the document
- Set the element to `display: none` first and show it again when the operations are finished, because DOM operations on an element whose display is none do not trigger reflow or repaint
- Group multiple DOM reads (or writes) together instead of interleaving reads and writes. This benefits from the **browser's render queue mechanism**.

Browsers optimize reflow and repaint themselves with a **render queue**.

**The browser puts all reflow and repaint operations into a queue; when the operations in the queue reach a certain number or a certain time interval has passed, the browser processes the queue as a batch, turning multiple reflows and repaints into a single one.**

Putting multiple read operations (or write operations) together means they are executed after all of them have entered the queue, so what would have triggered several reflows triggers only one.
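A small sketch of both ideas: batching DOM writes through a DocumentFragment and grouping reads before writes (the #list element and the item count are assumptions):

const list = document.querySelector('#list');

// Batch writes: build 100 <li> nodes off-document, then insert them in one operation
const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  fragment.appendChild(li);
}
list.appendChild(fragment);              // a single reflow instead of 100

// Group reads before writes so layout is not forced repeatedly
const width = list.offsetWidth;          // read
const height = list.offsetHeight;        // read
list.style.width = width + 10 + 'px';    // write
list.style.height = height + 10 + 'px';  // write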

 3. 渲染过程中遇到 JS 文件如何处理?

JavaScript 的加载、解析与执行会阻塞文档的解析,也就是说,在构建 DOM 时,HTML 解析器若遇到了 JavaScript,那么它会暂停文档的解析,将控制权移交给 JavaScript 引擎,等 JavaScript 引擎运行完毕,浏览器再从中断的地方恢复继续解析文档。也就是说,如果想要首屏渲染的越快,就越不应该在首屏就加载 JS 文件,这也是都建议将 script 标签放在 body 标签底部的原因。当然在当下,并不是说 script 标签必须放在底部,因为你可以给 script 标签添加 defer 或者 async 属性。

 4. 什么是文档的预解析?

Webkit 和 Firefox 都做了这个优化,当执行 JavaScript 脚本时,另一个线程解析剩下的文档,并加载后面需要通过网络加载的资源。这种方式可以使资源并行加载从而使整体速度更快。需要注意的是,预解析并不改变 DOM 树,它将这个工作留给主解析过程,自己只解析外部资源的引用,比如外部脚本、样式表及图片。

 5. CSS 如何阻塞文档解析?

理论上,既然样式表不改变 DOM 树,也就没有必要停下文档的解析等待它们。然而,存在一个问题,JavaScript 脚本执行时可能在文档的解析过程中请求样式信息,如果样式还没有加载和解析,脚本将得到错误的值,显然这将会导致很多问题。所以如果浏览器尚未完成 CSSOM 的下载和构建,而我们却想在此时运行脚本,那么浏览器将延迟 JavaScript 脚本执行和文档的解析,直至其完成 CSSOM 的下载和构建。也就是说,在这种情况下,浏览器会先下载和构建 CSSOM,然后再执行 JavaScript,最后再继续文档的解析。

 6. How to optimize the critical rendering path?

To complete the first render as quickly as possible, we need to minimize three variables:

(1) The number of critical resources.

(2) The length of the critical path.

(3) The number of critical bytes.

Critical resources are resources that may prevent a web page from rendering for the first time. The fewer these resources, the less work the browser has to do, and the less CPU and other resources it uses. Likewise, the critical path length is influenced by the graph of dependencies between all critical resources and their byte sizes: some resources cannot be downloaded until the previous resource has finished processing, and the larger the resource, the more round-trips it will take to download. many. In the end, the fewer critical bytes the browser has to download, the faster it can process the content and get it to appear on the screen. To reduce the number of bytes, we can reduce the number of resources (remove them or make them non-critical), and also compress and optimize each resource to ensure that the transfer size is minimized.

The general steps to optimize the critical rendering path are as follows:

(1) Analyze and describe the characteristics of the critical path: the number of resources, the number of bytes, and the length.

(2) Minimize the number of critical resources: remove them, delay their download, mark them as asynchronous, etc.

(3) Optimize the number of key bytes to shorten the download time (number of round trips).

(4) Optimize the loading order of the remaining critical assets: you need to download all critical assets as early as possible to shorten the critical path length

 7. What will block rendering?

The premise of rendering first is to generate the rendering tree, so HTML and CSS will definitely block rendering. If you want to render faster, you should reduce the file size that needs to be rendered at the beginning, and flatten the hierarchy and optimize the selector. Then when the browser parses the script tag, it will pause the construction of the DOM, and then restart from the paused place after completion. In other words, if you want to render the first screen faster, you should not load the JS file on the first screen, which is why it is recommended to place the script tag at the bottom of the body tag.

Of course, at the moment, it doesn't mean that the script tag must be placed at the bottom, because you can add defer or async attributes to the script tag. When the defer attribute is added to the script tag, it means that the JS file will be downloaded in parallel, but it will be executed sequentially after the HTML parsing is completed, so you can put the script tag anywhere in this case. For JS files without any dependencies, the async attribute can be added to indicate that the download and parsing of JS files will not block rendering.

六、Browser local storage

 1. Browser local storage methods and usage scenarios

# (1)Cookie

Cookie is the earliest local storage method proposed. Before that, the server could not judge whether two requests in the network were initiated by the same user. To solve this problem, Cookie appeared. The size of the cookie is only 4kb. It is a plain text file, and every time an HTTP request is initiated, the cookie will be carried.

**Cookie characteristics:**

- Once a cookie is successfully created, its name cannot be modified
- Cookies cannot cross domains: cookies under domain a and domain b cannot be shared. This is determined by the privacy and security of cookies and prevents illegal acquisition of other websites' cookies
- Each domain is limited to 20 cookies, and each cookie may not exceed 4KB
- There is a security problem: if cookies are intercepted, all session information can be obtained; even encryption does not help, since the interceptor does not need to understand the cookie's meaning and can simply forward it to achieve the goal
- Cookies are sent whenever a new page is requested

If you need to share cookies across domains between domain names, there are two methods:

1. Use Nginx reverse proxy
2. After logging in to a site, write cookies to other sites. The session on the server side is stored in a node, and the cookie stores the sessionId

**Cookie usage scenarios:**

- The most common usage scenario is the combination of cookies and sessions. We store the sessionId in the cookie, and each request will carry the sessionId, so that the server will know who initiated the request and respond to the corresponding information.
- Can be used to count the number of clicks on the page

# (2)LocalStorage

LocalStorage is a new feature introduced by HTML5. Because sometimes the information we store is large, cookies cannot meet our needs. At this time, LocalStorage comes in handy.

**Advantages of LocalStorage:**

- In terms of size, LocalStorage is generally 5MB, so it can store much more information
- LocalStorage is persistent storage; it does not disappear when the page is closed and exists permanently unless actively cleared
- It is stored locally only and, unlike cookies, is not carried with every HTTP request

**Disadvantages of LocalStorage:**

- Browser compatibility: IE8 and below do not support it
- If the browser is set to private mode, LocalStorage cannot be read
- LocalStorage is restricted by the same-origin policy: if the port, protocol or host differs in any way, it cannot be accessed

**Common LocalStorage APIs:**

// Save data to localStorage
localStorage.setItem('key', 'value');

// Get data from localStorage
let data = localStorage.getItem('key');

// Remove a saved item from localStorage
localStorage.removeItem('key');

// Remove all saved data from localStorage
localStorage.clear();

// Get the key at a given index
localStorage.key(index)

**LocalStorage usage scenarios:**

- Some websites have a theme (skin-switching) feature; the theme information can be stored in LocalStorage and read directly from it when the theme needs to change
- Users' browsing information on the site, as well as rarely-changing personal information, can also be stored in LocalStorage

# (3)SessionStorage

SessionStorage and LocalStorage are both storage schemes introduced in HTML5. SessionStorage is mainly used to temporarily save data for the same window (or tab); the data survives a page refresh but is deleted when the window or tab is closed.

**SessionStorage compared with LocalStorage:**

- Both SessionStorage and LocalStorage **store data locally**;
- SessionStorage is also subject to the same-origin policy, but with a stricter restriction: SessionStorage **can only be shared within the same window (tab) of the same browser**;
- Neither LocalStorage nor SessionStorage **can be crawled by crawlers**.

**Common SessionStorage APIs:**

// Save data to sessionStorage
sessionStorage.setItem('key', 'value');

// Get data from sessionStorage
let data = sessionStorage.getItem('key');

// Remove a saved item from sessionStorage
sessionStorage.removeItem('key');

// Remove all saved data from sessionStorage
sessionStorage.clear();

// Get the key at a given index
sessionStorage.key(index)

**SessionStorage usage scenarios**

- Since SessionStorage is time-sensitive, it can be used to store visitor login information of some websites, as well as temporary browsing record information. When the website is closed, this information will also be deleted.

 2. What fields does the cookie have and what are their functions?

**Cookies consist of the following fields:**

- **Name**: the name of the cookie
- **Value**: the value of the cookie; for authentication cookies, the value includes the access token provided by the web server
- **Size**: the size of the cookie
- **Path**: the path of the pages that can access this cookie. For example, if the domain is abc.com and the path is `/test`, only pages under the `/test` path can read this cookie.
- **Secure**: Specifies whether to use the HTTPS security protocol to send cookies. Using the HTTPS security protocol can protect cookies from being stolen and tampered with during transmission between the browser and the web server. This method can also be used for identity authentication of a Web site, that is, at the HTTPS connection establishment stage, the browser will check the validity of the SSL certificate of the Web site. However, based on compatibility reasons (for example, some websites use self-signed certificates), when an invalid SSL certificate is detected, the browser will not immediately terminate the user's connection request, but will display a security risk message, and the user can still choose to continue to visit the site. site.
- **Domain**: The domain name that can access the cookie. The cookie mechanism does not follow the strict same-origin policy, allowing a subdomain to set or get the cookie of its parent domain. The above characteristics of cookies are very useful when implementing a single sign-on solution, but they also increase the risk of cookies being attacked, for example, attackers can use this to launch session fixation attacks. Therefore, the browser forbids to set gTLDs such as .org, .com, and second-level domain names registered under country and region top-level domains in the Domain attribute, so as to reduce the scope of attacks.
- **HTTP**: this field contains the `HTTPOnly` attribute, which controls whether the cookie can be accessed from scripts; it is empty by default, meaning scripts can access it. An httpOnly cookie cannot be set with JS code on the client; it can only be set by the server. The attribute prevents client scripts from accessing the cookie via `document.cookie`, which helps protect cookies from being stolen or tampered with by cross-site scripting attacks. However, HTTPOnly still has limitations: some browsers block client scripts from reading cookies but allow writes, and most browsers still allow reading the Set-Cookie header of an HTTP response through the XMLHTTP object.
- **Expires/Max-size**: the cookie's expiry time. If set to a time, the cookie becomes invalid when that time is reached. If not set, the default is Session, meaning the cookie expires together with the session: when the browser is closed (the whole browser, not just the tab), the cookie becomes invalid.

**Summary:**

The server can configure cookie information with the Set-Cookie response header. A cookie includes five attributes: expires, domain, path, secure and HttpOnly. expires specifies when the cookie expires; domain is the domain name and path is the path, and together they restrict which URLs can access the cookie; secure stipulates that the cookie may only be transmitted over secure connections; HttpOnly stipulates that the cookie can only be accessed by the server and cannot be read from JS scripts.
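For illustration, a server might set such a cookie like this (the names, domain and values are placeholders):

// Sketch: issuing a cookie with common attributes from a Node server
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Set-Cookie', [
    // HttpOnly: not readable from document.cookie; Secure: only sent over HTTPS
    'sessionId=abc123; Path=/; Domain=example.com; Max-Age=3600; Secure; HttpOnly; SameSite=Lax'
  ]);
  res.end('cookie set');
}).listen(8082);

// On the client, only non-HttpOnly cookies are visible:
// document.cookie  ->  e.g. "theme=dark" (sessionId is hidden because of HttpOnly)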

 3. Differences between Cookie, LocalStorage and SessionStorage

The commonly used storage techniques on the browser side are cookie, localStorage and sessionStorage.

- **cookie**: originally a way for the server to record the user's state: it is set by the server, stored on the client, and then sent to the server with every same-origin request. A cookie can store at most about 4 KB of data; its lifetime is specified by the expires attribute, and it can only be accessed and shared by pages of the same origin.
- **sessionStorage**: a browser local-storage method provided by HTML5; it borrows the concept of a server-side session and represents the data saved within one session. It can generally store 5 MB or more; it becomes invalid when the current window is closed, and it can only be accessed and shared by same-origin pages in the same window.
- **localStorage**: a browser local-storage method provided by HTML5 that can also generally store 5 MB or more. Unlike sessionStorage, it does not expire unless deleted manually, and it can likewise only be accessed and shared by same-origin pages.

The methods above are for storing small amounts of data. When a large amount of data needs to be stored locally, we can use the browser's indexedDB, a local database storage mechanism provided by the browser. It is not a relational database: internally it stores data in object stores and is closer to a NoSQL database.

 4. What front-end storage methods are there?

- cookies: the main local-storage method before the HTML5 standard. The advantages are good compatibility and that the request header carries the cookie automatically; the disadvantages are a size of only 4 KB, wasted traffic from cookies being added to request headers automatically, a limit of 20 cookies per domain, and clumsy use that requires writing your own wrapper;
- localStorage: a key-value standard added by HTML5. Its advantages are ease of use and permanent storage (unless deleted manually), a 5 MB size, and compatibility with IE8+;
- sessionStorage: basically similar to localStorage, except that sessionStorage is cleared when the page is closed and, unlike cookies and localStorage, cannot be shared among all same-origin windows; it is session-level storage;
- Web SQL: a local database storage solution abandoned by the W3C in 2010, although mainstream browsers (except Firefox) already have implementations. Web SQL is similar to SQLite, a genuine relational database operated with SQL, which is cumbersome to convert when we operate it with JavaScript;
- IndexedDB: a database storage solution that has been officially included in the HTML5 standard. It is a NoSQL database that stores data as key-value pairs, supports fast reads, is well suited to web scenarios, and is convenient to operate with JavaScript.

 5. What are the characteristics of IndexedDB?

IndexedDB has the following characteristics:

- **Key-value storage**: IndexedDB uses object stores internally to hold data. All types of data can be stored directly, including JavaScript objects. In an object store, data is kept as key-value pairs; each record has a corresponding primary key, which must be unique, otherwise an error is thrown.
- **Asynchronous**: IndexedDB does not lock the browser during operations, so the user can still do other things. This contrasts with LocalStorage, whose operations are synchronous. The asynchronous design prevents reads and writes of large amounts of data from slowing down the page.
- **Transaction support**: IndexedDB supports transactions, which means that in a series of operation steps, if any one step fails, the whole transaction is cancelled and the database rolls back to the state before the transaction started; there is never a case where only part of the data has been written.
- **Same-origin restriction**: IndexedDB is subject to same-origin restrictions, and each database corresponds to the domain name where it was created. A webpage can only access databases under its own domain name, but not cross-domain databases.
- **Large storage space**: The storage space of IndexedDB is much larger than that of LocalStorage, generally not less than 250MB, and there is even no upper limit.
- **Support binary storage**: IndexedDB can not only store strings, but also store binary data (ArrayBuffer objects and Blob objects).
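
A minimal sketch of opening a database and writing/reading one record (the database and store names here are made up):

```
// Open (or create) a database and an object store
const request = indexedDB.open('demo-db', 1);

request.onupgradeneeded = (e) => {
  const db = e.target.result;
  db.createObjectStore('users', { keyPath: 'id' });
};

request.onsuccess = (e) => {
  const db = e.target.result;
  // Write a record inside a transaction
  const tx = db.transaction('users', 'readwrite');
  tx.objectStore('users').put({ id: 1, name: 'alice' });
  // Read it back asynchronously
  const getReq = tx.objectStore('users').get(1);
  getReq.onsuccess = () => console.log(getReq.result); // { id: 1, name: 'alice' }
};
```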

 Seven, browser same-origin policy

 1. What is the same-origin policy

The cross-domain problem is actually caused by the same-origin policy of the browser.

> The same-origin policy restricts how documents or scripts loaded from the same origin can interact with resources from another origin. This is an important security mechanism for browsers to isolate potentially malicious files. Same origin means: **protocol**, **port number**, **domain name** must be consistent.

The following table compares several URLs with the origin `http://store.company.com/dir/page.html`:

| URL | Same origin? | Reason |
| --- | --- | --- |
| http://store.company.com/dir/page.html | Same origin | Exactly the same |
| http://store.company.com/dir/inner/another.html | Same origin | Only the path differs |
| https://store.company.com/secure.html | Cross origin | Different protocol |
| http://store.company.com:81/dir/etc.html | Cross origin | Different port (the default port for http:// is 80) |
| http://news.company.com/dir/other.html | Cross origin | Different host |

**Same-origin policy: the protocol, domain name, and port must all be the same.**

**The same-origin policy mainly restricts three aspects:**

- JS scripts under the current origin cannot read cookies, localStorage or IndexedDB belonging to other origins.
- JS scripts under the current origin cannot access or manipulate the DOM of pages from other origins.
- Ajax requests from the current origin cannot be sent cross-origin.

The main purpose of the same-origin policy is to protect users' information security. It is only a restriction on js scripts, not on the browser as a whole: ordinary `img` or `script` requests are not subject to cross-origin restrictions, because those operations do not use the response to do anything that could cause a security problem.

 2. How to solve cross-origin problems

# (1)CORS

The MDN definition of CORS is as follows:

> Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell the browser that a web application running at one origin (domain) is allowed to access specified resources from a server at a different origin. A web application issues a cross-origin HTTP request when it requests a resource whose domain, protocol, or port differs from its own.

CORS requires support from both the browser and the server. The whole CORS process is handled by the browser and needs no involvement from the user, so **the key to CORS is the server: as long as the server implements CORS requests**, cross-origin communication is possible.

The browser divides CORS requests into **simple requests** and **non-simple requests**:

A simple request does not trigger a CORS preflight. A request can be regarded as simple if it meets both of the following conditions:

**1) The request method is one of the following three:**

- HEAD
- GET
- POST

**2) The HTTP headers do not go beyond the following fields:**

- Accept
- Accept-Language
- Content-Language
- Last-Event-ID
- Content-Type: limited to the three values application/x-www-form-urlencoded, multipart/form-data and text/plain

Any request that does not satisfy these conditions is a non-simple request.

**(1) Simple request process:**

For a simple request, the browser sends the CORS request directly and adds an Origin field to the request headers, indicating which origin (protocol + domain + port) the request comes from. The server decides whether to allow the request based on this value. If the origin specified by Origin is within the allowed range, the server's response will contain the following additional headers:

```
Access-Control-Allow-Origin: http://api.bob.com  // must match the Origin of the request
Access-Control-Allow-Credentials: true   // whether cookies may be sent
Access-Control-Expose-Headers: FooBar   // extra response headers exposed to the page
Content-Type: text/html; charset=utf-8   // the document type

```

If the origin specified by Origin is not within the allowed range, the server returns a normal HTTP response. The browser notices that the response does not contain the Access-Control-Allow-Origin header and knows that an error has occurred. This error cannot be identified from the status code, which may well be 200.

**For a simple request, the server must set at least the field** `Access-Control-Allow-Origin`.

**(2) Non-simple request process**

A non-simple request is one that places special requirements on the server, for example a request whose method is DELETE or PUT. Before the formal communication, a CORS non-simple request performs an extra HTTP query request, **called a preflight request**.

The browser asks the server whether the current page is within the range the server allows to access it, and which HTTP methods and header fields may be used. Only after a positive answer does it send the formal HTTP request; otherwise it reports an error.

The preflight request uses the **OPTIONS method**, indicating that this request is only asking for information. The key field in its headers is Origin, which says which origin the request comes from. In addition, the headers contain two more fields:

- **Access-Control-Request-Method**: required; it lists the HTTP methods the browser's CORS request will use.
- **Access-Control-Request-Headers**: a comma-separated string listing the extra header fields the browser's CORS request will send.

After receiving the preflight request, the server makes its decision based on these header fields. If the response contains the Access-Control-Allow-Origin field, the cross-origin request is allowed; if not, the server has refused the preflight request and an error is reported.

The CORS fields in the server's response are as follows:

```
Access-Control-Allow-Origin: http://api.bob.com  // the origin allowed to make cross-origin requests
Access-Control-Allow-Methods: GET, POST, PUT // all methods the server supports for cross-origin requests
Access-Control-Allow-Headers: X-Custom-Header  // all header fields the server supports
Access-Control-Allow-Credentials: true   // whether cookies may be sent
Access-Control-Max-Age: 1728000  // how long, in seconds, the preflight result may be cached

```

Once the server has passed the preflight request, every subsequent CORS request carries an Origin header, and every server response carries an Access-Control-Allow-Origin header.

**For non-simple requests, the server must set at least the following fields:**

```
'Access-Control-Allow-Origin'  
'Access-Control-Allow-Methods'
'Access-Control-Allow-Headers'

```

## Reducing the number of OPTIONS requests

Too many OPTIONS requests hurt page-load performance and degrade the user experience, so their number should be kept as low as possible. The back end can add Access-Control-Max-Age: number to the response headers; it indicates how long, in seconds, the result of the preflight request may be cached. The setting only applies to exactly the same URL, so within that time window further requests to the same URL do not need another preflight.
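
For illustration only, a Node/Express middleware that sets these response headers (the allowed origin, methods and header names below are example values, not taken from this article) might look like this:

```
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Allow a specific origin (must not be * when credentials are used)
  res.setHeader('Access-Control-Allow-Origin', 'http://www.domain1.com');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, X-Custom-Header');
  res.setHeader('Access-Control-Allow-Credentials', 'true');
  // Cache the preflight result to reduce OPTIONS requests
  res.setHeader('Access-Control-Max-Age', '1728000');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // answer the preflight directly
  next();
});

app.get('/login', (req, res) => res.json({ success: true }));
app.listen(8080);
```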

## Cookie-related issues in CORS

To send cookies with a CORS request, the following three conditions must all be met:

- Set `withCredentials` on the request

By default the browser does not send cookies with cross-origin requests, but we can pass cookies by setting withCredentials:

```
// native XMLHttpRequest
var xhr = new XMLHttpRequest();
xhr.withCredentials = true;
// axios
axios.defaults.withCredentials = true;

```

- **Access-Control-Allow-Credentials** must be set to `true`
- **Access-Control-Allow-Origin** must be set to the explicit origin of the requesting page; it cannot be the wildcard `*`

# (2)JSONP

The principle of **JSONP** is to take advantage of the fact that `<script>` tags are not subject to cross-origin restrictions: a GET request carrying a callback parameter is sent through the src attribute of a `<script>` tag, the server wraps the data returned by the interface in a call to that callback function and sends it back, and the browser parses and executes it, so the front end obtains the data passed to the callback.

1) Native JS implementation:

```
<script>
    var script = document.createElement('script');
    script.type = 'text/javascript';
    // Pass the name of a callback function to the back end, so the server can call this front-end-defined callback in its response
    script.src = 'http://www.domain2.com:8080/login?user=admin&callback=handleCallback';
    document.head.appendChild(script);
    // The callback function
    function handleCallback(res) {
        alert(JSON.stringify(res));
    }
</script>
```

The server returns as follows (the global function is executed when it returns):

```
handleCallback({"success": true, "user": "admin"})
```

2) Vue axios implementation:

```
this.$http = axios;
this.$http.jsonp('http://www.domain2.com:8080/login', {
    params: {},
    jsonp: 'handleCallback'
}).then((res) => {
    console.log(res); 
})
```

Backend node.js code:

```
var querystring = require('querystring');
var http = require('http');
var server = http.createServer();
server.on('request', function(req, res) {
    var params = querystring.parse(req.url.split('?')[1]);
    var fn = params.callback;
    // JSONP response
    res.writeHead(200, { 'Content-Type': 'text/javascript' });
    res.write(fn + '(' + JSON.stringify(params) + ')');
    res.end();
});
server.listen('8080');
console.log('Server is running at port 8080...');
```

**Disadvantages of JSONP:**

- It is limited: it only supports the GET method
- It is not safe: it may be vulnerable to XSS attacks

# (3) postMessage cross-domain

postMessage is an API in HTML5 XMLHttpRequest Level 2 and one of the few window properties that can be used across origins. It can be used to solve the following problems:

- Data transfer between a page and a new window it opens
- Message passing between multiple windows
- Message passing between a page and a nested iframe
- Cross-origin data transfer in the three scenarios above

Usage: the postMessage(data, origin) method accepts two parameters:

- **data**: the HTML5 specification allows any basic type or clonable object, but some browsers only support strings, so it is best to serialize parameters with JSON.stringify().
- **origin**: protocol + host + port number. It can also be set to "*", meaning it may be delivered to any window; to restrict delivery to windows with the same origin as the current one, set it to "/".

1)a.html:(domain1.com/a.html)

<iframe id="iframe" src="http://www.domain2.com/b.html" rel="external nofollow"  rel="external nofollow"  style="display:none;"></iframe>
<script>     
    var iframe = document.getElementById('iframe');
    iframe.onload = function() {
        var data = {
            name: 'aym'
        };
        // 向domain2传送跨域数据
        iframe.contentWindow.postMessage(JSON.stringify(data), 'http://www.domain2.com');
    };
    // 接受domain2返回数据
    window.addEventListener('message', function(e) {
        alert('data from domain2 ---> ' + e.data);
    }, false);
</script>

2)b.html:(domain2.com/b.html)

```
<script>
    // Receive data from domain1
    window.addEventListener('message', function(e) {
        alert('data from domain1 ---> ' + e.data);
        var data = JSON.parse(e.data);
        if (data) {
            data.number = 16;
            // Process it and send it back to domain1
            window.parent.postMessage(JSON.stringify(data), 'http://www.domain1.com');
        }
    }, false);
</script>
```

# (4) nginx proxy cross-domain

The nginx proxy cross-domain is essentially the same as the CORS cross-domain principle. The request response header Access-Control-Allow-Origin... and other fields are set through the configuration file.

1) nginx configuration to solve cross-origin iconfont requests

Cross-origin access from the browser to regular static resources such as js, css and img is permitted by the same-origin policy, but iconfont font files (eot|otf|ttf|woff|svg) are an exception. In that case the following configuration can be added to nginx's static resource server:

```
location / {
  add_header Access-Control-Allow-Origin *;
}
```

2) nginx reverse proxy for cross-origin APIs

Cross-origin problems: the same-origin policy is only a browser security policy. A server calling an HTTP interface simply uses the HTTP protocol and does not involve the same-origin policy, so there is no cross-origin problem on the server side.

Idea: configure an nginx proxy server (with the same domain name as domain1 but a different port) as a jump host that reverse-proxies requests to the domain2 interface; it can also rewrite the domain in the cookie along the way, so the cookie can be written under the current domain, enabling cross-origin access.

Concrete nginx configuration:

```
# proxy server
server {
    listen       81;
    server_name  www.domain1.com;
    location / {
        proxy_pass   http://www.domain2.com:8080;  # reverse proxy
        proxy_cookie_domain www.domain2.com www.domain1.com; # rewrite the domain in the cookie
        index  index.html index.htm;
        # When middleware such as webpack-dev-server proxies the API to nginx, no browser is involved, so there is no same-origin restriction and the CORS configuration below can be omitted
        add_header Access-Control-Allow-Origin http://www.domain1.com;  # can be * if the front end is cross-origin without cookies
        add_header Access-Control-Allow-Credentials true;
    }
}
```

# (5) Node.js middleware proxy for cross-origin requests

A Node middleware implements a cross-origin proxy in roughly the same way as nginx: it starts a proxy server that forwards the data, and the cookieDomainRewrite parameter can be used to rewrite the cookie domain in the response headers, so that cookies can be written under the current domain, which makes login authentication against the interface easier.

**1) Cross-origin in a non-Vue project**

Use node + express + http-proxy-middleware to build a proxy server.

- Front-end code (see the sketch below):

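As a minimal front-end sketch (assuming the proxy below listens on http://www.domain1.com:3000), the page simply sends a credentialed request to the same-domain proxy, which forwards it to domain2:

```
// Front end requests its own proxy (port 3000 is assumed), which forwards to domain2
var xhr = new XMLHttpRequest();
xhr.withCredentials = true;               // allow cookies to be sent
xhr.open('get', 'http://www.domain1.com:3000/login?user=admin', true);
xhr.send();
```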

- Middleware server code:

```
var express = require('express');
var proxy = require('http-proxy-middleware');
var app = express();
app.use('/', proxy({
    // target interface of the cross-origin proxy
    target: 'http://www.domain2.com:8080',
    changeOrigin: true,
    // modify the response headers to allow cross-origin requests with cookies
    onProxyRes: function(proxyRes, req, res) {
        res.header('Access-Control-Allow-Origin', 'http://www.domain1.com');
        res.header('Access-Control-Allow-Credentials', 'true');
    },
    // rewrite the cookie domain in the response
    cookieDomainRewrite: 'www.domain1.com'  // can be false, meaning no rewrite
}));
app.listen(3000);
console.log('Proxy server is listening at port 3000...');
```

**2) Cross-origin in a Vue project**

For a project built with node + vue + webpack + webpack-dev-server, cross-origin API requests can be handled by modifying webpack.config.js directly. In development, the Vue rendering service and the API proxy service are the same webpack-dev-server, so there is no longer a cross-origin issue between the page and the proxied interface.

Partial webpack.config.js configuration:

```
module.exports = {
    entry: {},
    module: {},
    ...
    devServer: {
        historyApiFallback: true,
        proxy: [{
            context: '/login',
            target: 'http://www.domain2.com:8080',  // target interface of the cross-origin proxy
            changeOrigin: true,
            secure: false,  // used when proxying some https services that report errors
            cookieDomainRewrite: 'www.domain1.com'  // can be false, meaning no rewrite
        }],
        noInfo: true
    }
}
```

# (6) document.domain + iframe cross-origin

This solution only applies when the main domain is the same and the subdomains differ. Principle: both pages force document.domain to the base main domain via js, thereby putting them in the same domain.

1) Parent window: (domain.com/a.html)

<iframe id="iframe" src="http://child.domain.com/b.html" rel="external nofollow" ></iframe>
<script>
    document.domain = 'domain.com';
    var user = 'admin';
</script>

2) Child window: (child.domain.com/b.html)

```
<script>
    document.domain = 'domain.com';
    // Read a variable from the parent window
    console.log('get js data from parent ---> ' + window.parent.user);
</script>
```

# (7) location.hash + iframe cross domain

Implementation principle: page a wants to communicate with page b across origins, which is achieved through an intermediate page c. Between the three pages, location.hash of an iframe is used to pass values across different domains, while pages in the same domain communicate directly through js access.

Concrete implementation: domain A: a.html -> domain B: b.html -> domain A: c.html. a and b are in different domains and can only communicate one-way through the hash value; b and c are also in different domains and can likewise only communicate one-way; but c and a are in the same domain, so c can access all objects on page a through parent.parent.

1)a.html:(domain1.com/a.html)

<iframe id="iframe" src="http://www.domain2.com/b.html" rel="external nofollow"  rel="external nofollow"  style="display:none;"></iframe>
<script>
    var iframe = document.getElementById('iframe');
    // 向b.html传hash值
    setTimeout(function() {
        iframe.src = iframe.src + '#user=admin';
    }, 1000);
  
    // 开放给同域c.html的回调方法
    function onCallback(res) {
        alert('data from c.html ---> ' + res);
    }
</script>

2) b.html: (www.domain2.com/b.html)

```
<iframe id="iframe" src="http://www.domain1.com/c.html" style="display:none;"></iframe>
<script>
    var iframe = document.getElementById('iframe');
    // Listen for the hash value from a.html, then pass it on to c.html
    window.onhashchange = function () {
        iframe.src = iframe.src + location.hash;
    };
</script>
```

3)c.html:(`http://www.domain1.com/c.html`)

```
<script>
    // Listen for the hash value from b.html
    window.onhashchange = function () {
        // Call the callback on the same-domain page a.html to pass the result back
        window.parent.parent.onCallback('hello: ' + location.hash.replace('#user=', ''));
    };
</script>
```

# (8) window.name + iframe cross domain

The special property of window.name: the name value persists after different pages (even in different domains) are loaded into the window, and it supports very long values (about 2 MB).

1)a.html:(domain1.com/a.html)

```
var proxy = function(url, callback) {
    var state = 0;
    var iframe = document.createElement('iframe');
    // 加载跨域页面
    iframe.src = url;
    // onload事件会触发2次,第1次加载跨域页,并留存数据于window.name
    iframe.onload = function() {
        if (state === 1) {
            // 第2次onload(同域proxy页)成功后,读取同域window.name中数据
            callback(iframe.contentWindow.name);
            destoryFrame();
        } else if (state === 0) {
            // 第1次onload(跨域页)成功后,切换到同域代理页面
            iframe.contentWindow.location = 'http://www.domain1.com/proxy.html';
            state = 1;
        }
    };
    document.body.appendChild(iframe);
    // 获取数据以后销毁这个iframe,释放内存;这也保证了安全(不被其他域frame js访问)
    function destoryFrame() {
        iframe.contentWindow.document.write('');
        iframe.contentWindow.close();
        document.body.removeChild(iframe);
    }
};
// 请求跨域b页面数据
proxy('http://www.domain2.com/b.html', function(data){
    alert(data);
});
```

2)proxy.html:(domain1.com/proxy.html)

The intermediate proxy page is in the same domain as a.html, and the content can be empty.

3)b.html:(domain2.com/b.html)

```
<script>
    window.name = 'This is domain2 data!';
</script>
```

The iframe's src attribute takes it from the external domain to the local domain, and the cross-origin data is carried from the external domain to the local domain by the iframe's window.name. This cleverly bypasses the browser's cross-origin access restrictions while remaining a safe operation.

# (9) WebSocket protocol cross-domain

WebSocket protocol is a new protocol in HTML5. It realizes full-duplex communication between browser and server, and allows cross-domain communication at the same time, which is a good implementation of server push technology.

The native WebSocket API is not very convenient to use. We use Socket.io, which encapsulates the webSocket interface well, provides a simpler and more flexible interface, and provides backward compatibility for browsers that do not support webSocket.

1) Front-end code:

```
<div>user input:<input type="text"></div>
<script src="https://cdn.bootcss.com/socket.io/2.2.0/socket.io.js" rel="external nofollow" ></script>
<script>
var socket = io('http://www.domain2.com:8080');
// 连接成功处理
socket.on('connect', function() {
    // 监听服务端消息
    socket.on('message', function(msg) {
        console.log('data from server: ---> ' + msg); 
    });
    // 监听服务端关闭
    socket.on('disconnect', function() { 
        console.log('Server socket has closed.'); 
    });
});
document.getElementsByTagName('input')[0].onblur = function() {
    socket.send(this.value);
};
</script>
```

2) Nodejs socket background:

```
var http = require('http');
var socket = require('socket.io');
// 启http服务
var server = http.createServer(function(req, res) {
    res.writeHead(200, {
        'Content-type': 'text/html'
    });
    res.end();
});
server.listen('8080');
console.log('Server is running at port 8080...');
// 监听socket连接
socket.listen(server).on('connection', function(client) {
    // 接收信息
    client.on('message', function(msg) {
        client.send('hello:' + msg);
        console.log('data from client: ---> ' + msg);
    });
    // 断开处理
    client.on('disconnect', function() {
        console.log('Client socket has closed.'); 
    });
});
```

 3. The difference between forward proxy and reverse proxy

- forward proxy:

The client wants to obtain the data of a server, but cannot obtain it directly for various reasons. So the client sets up a proxy server and specifies the target server, and then the proxy server forwards the request to the target server and sends the obtained content to the client. This essentially serves the purpose of hiding the real client from the real server. Implementing a forward proxy requires modifying the client, such as modifying the browser configuration.

- Reverse proxy:

To improve website performance (load balancing) by spreading the work across multiple servers, when the server side receives a request it first decides, according to its forwarding rules, which server the request should go to, and then forwards the request to the corresponding real server. This essentially hides the real server from the client.

Generally, after using a reverse proxy, you need to modify the DNS to resolve the domain name to the proxy server IP. At this time, the browser cannot detect the existence of the real server, and of course there is no need to modify the configuration.

The difference between the two is shown in the figure:

![img](https://bingjs.com:8008/img/llq/llq7.png)

The structure of forward proxy and reverse proxy is the same, both are client-proxy-server structures, and the main difference between them is which side sets up the proxy in the middle. In the forward proxy, the proxy is set by the client to hide the client; while in the reverse proxy, the proxy is set by the server to hide the server.

 4. The concept of Nginx and its working principle

Nginx is a lightweight web server that can also be used for reverse proxy, load balancing, and HTTP caching. Nginx uses an asynchronous event-driven method to process requests and is a performance-oriented HTTP server.

Traditional web servers such as Apache are process-based, while Nginx is event-driven. It is this main difference that gives Nginx its performance advantage.

At the top of the Nginx architecture is a master process, which spawns the other worker processes. This is quite similar to Apache, but an Nginx worker process can handle a large number of HTTP requests concurrently, whereas each Apache process can handle only one.

 Eight. Browser event mechanism

 1. What are events? What are the event models?

An event is an interaction that happens when the user operates on the page, such as click or move; besides user-triggered actions, events can also be things like document loading, window scrolling and resizing. An event is wrapped in an event object containing all the information related to the event (the event's properties) and the operations that can be performed on it (the event's methods).

Events are interactions produced when the user operates the page, or certain operations of the page itself. Modern browsers have three event models in total:

- **DOM level 0 event model**: this model does not propagate, so there is no concept of an event flow, although some browsers now support it in a bubbling manner. The listener can be defined directly in the page markup or assigned through a js property. All browsers are compatible with this approach: registering the event name directly on the DOM object is the DOM0 style.
- **IE event model**: in this model an event goes through two phases, the target phase and the bubbling phase. In the target phase, the listeners bound on the target element are executed first; then comes the bubbling phase, in which the event bubbles from the target element up to document, checking each node along the way for bound listeners and executing them if present. This model adds listeners with attachEvent; multiple listeners can be added and they are executed in order.
- **DOM level 2 event model**: in this model an event has three phases. The first is the capture phase, in which the event propagates downward from document to the target element, checking each node along the way for bound listeners and executing them if present. The remaining two phases are the same as the two phases of the IE event model. In this model events are bound with addEventListener, whose third parameter specifies whether the listener runs in the capture phase.

 2. How to prevent event bubbling

- Standards-compliant browsers: event.stopPropagation()
- IE browsers: event.cancelBubble = true;

 3. Understanding of event delegation

# (1) The concept of event delegation

Event delegation essentially makes use of the browser's **event bubbling** mechanism. Because an event is passed up to the parent node while bubbling, the parent node can obtain the target node through the event object, so the listener for the child nodes can be defined on the parent node, and the parent's listener handles the events of multiple child elements in a unified way. This approach is called event delegation (event proxying).

With event delegation there is no need to bind a listener to every child element, which reduces memory consumption. Event delegation also makes dynamic binding possible: for example, when a new child node is added there is no need to add a separate listener for it, because its events are handled by the listener on the parent element.

# (2) Features of event delegation

- Reduced memory consumption

If there is a list with a large number of list items, you need to respond to an event when the list item is clicked:

<ul id="list">
  <li>item 1</li>
  <li>item 2</li>
  <li>item 3</li>
  ......
  <li>item n</li>
</ul>

Binding a handler to every list item one by one consumes a lot of memory and costs a lot of performance. A better way is to bind the click event to their parent, the ul, and then match and check the target element when the event fires, so event delegation saves a lot of memory and improves efficiency.

- Dynamic event binding

In the example above every list item has an event bound to it. In many cases list items need to be added or removed dynamically through AJAX or user actions, so every change would require re-binding events for newly added elements and unbinding events for elements about to be removed. With event delegation there is no such trouble, because the event is bound on the parent layer and has nothing to do with adding or removing target elements; the target is matched when the event handler actually runs. So delegation greatly reduces repetitive work when events need to be bound dynamically.
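
A sketch of such a delegated handler bound on the parent #list, which checks the clicked target before responding:

```
document.getElementById('list').addEventListener('click', function (e) {
  // e.target is the element that was actually clicked
  if (e.target && e.target.nodeName === 'LI') {
    console.log('clicked:', e.target.textContent);
  }
});
```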

In the code above, the target element is whatever element was clicked inside #list, and by checking some of the target's properties (such as nodeName, id, etc.) we can match the li elements under #list more precisely.

# (3) Limitations

Of course, event delegation has its limits. Events such as focus and blur do not bubble, so they cannot be delegated; events such as mousemove and mouseout do bubble, but they require continuous position calculations, which is expensive, so they are not suitable for delegation either.

Event delegation is not all upside; it also has **disadvantages**. It can affect page performance, mainly depending on:

- the number of event delegations bound on the element;
- the number of DOM levels between the lowest element clicked and the element the event is bound to.

Where event delegation must be used, the following measures help:

- Use event delegation only where necessary, for example in areas that `ajax` partially refreshes.
- Keep the binding level as low as possible; do not bind on the `body` element.
- Reduce the number of bindings: if possible, merge several event bindings into one delegation and let that delegation's callback do the dispatching.

 4. Use cases for event delegation

Scenario: add a click handler to every a tag on the page, with code like this:

document.addEventListener("click", function(e) {
    if (e.target.nodeName == "A")
        console.log("a");
}, false);

But these a tags may contain elements such as span or img. If one of those inner elements is clicked, the handler does not log anything, because the check is written against the a element while e.target points to the element that actually triggered the click (the span, img, or other inner element).

In that case event delegation can still be used: when an inner element is clicked, walk upward level by level from the target until an a tag is found, as in the following code:

document.addEventListener("click", function(e) {
    var node = e.target;
    while (node.parentNode.nodeName != "BODY") {
        if (node.nodeName == "A") {
            console.log("a");
            break;
        }
        node = node.parentNode;
    }
}, false);

 5. The difference between synchronous and asynchronous

- **Synchronous** means that when a process executes a request that needs some time before it returns, the process waits until the result comes back and only then continues executing.
- **Asynchronous** means that when a process executes a request that needs some time before it returns, the process continues executing instead of blocking; when the result comes back, the system notifies the process to handle it.

 6. Understanding of the event loop

Because js runs on a single thread, code is kept in order by pushing the execution contexts of functions onto the call stack. While executing synchronous code, if an asynchronous event is encountered, the js engine does not wait for its result; it sets the event aside and keeps executing the other tasks on the stack. When the asynchronous event finishes, its callback is added to a task queue to wait for execution. Task queues are divided into a macrotask queue and a microtask queue. When the current call stack has finished executing, the js engine first checks whether there are tasks in the microtask queue; if so, it pushes the task at the head of the microtask queue onto the stack and runs it. Only after all microtasks have run does it execute tasks from the macrotask queue.

The Event Loop runs in the following order:

- First the synchronous code is executed; this belongs to the macrotask
- After all synchronous code has run, the call stack is empty, and the engine checks whether asynchronous code needs to run
- All microtasks are executed
- After all microtasks have run, the page is rendered if necessary
- Then the next round of the Event Loop starts, executing the asynchronous code in the macrotask queue
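
A small example of this ordering (the logged strings are arbitrary; the order of the output is the point):

```
console.log('sync');                       // 1. synchronous code (macrotask)
setTimeout(() => console.log('macro'), 0); // 3. next macrotask
Promise.resolve().then(() => console.log('micro')); // 2. microtask, runs before the next macrotask
// Output order: sync, micro, macro
```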

 7. What are the macrotasks and microtasks?

- Microtasks include: promise callbacks, process.nextTick in node, and MutationObserver, which watches for DOM changes.
- Macrotasks include: execution of the script itself, timer events such as setTimeout, setInterval and setImmediate, as well as I/O operations, UI rendering, and so on.

 8. What is the execution stack?

The execution stack can be thought of as a **stack structure** that stores function calls and follows the first-in, last-out rule.

![img](https://bingjs.com:8008/img/llq/llq1.gif)

When JS code starts to run, according to the first-in, last-out rule, the function executed later is popped off the stack first; as can be seen, `foo` runs last and is popped off the stack when it finishes.

In everyday development you can find traces of the execution stack in error reports:

```
function foo() {
  throw new Error('error')
}
function bar() {
  foo()
}
bar()
```

You can see that the error is thrown in `foo`, and `foo` was called from `bar`. When recursion is used, because the number of functions the stack can hold is **limited**, keeping too many functions on the stack without releasing them causes a stack overflow:

```
function bar() {
  bar()
}
bar()
```

 9. What is the difference between the Event Loop in Node and in the browser? In what order does process.nextTick run?

The Event Loop in Node is a completely different thing from the one in the browser.

Node's Event Loop is divided into 6 phases, which run repeatedly in **order**. On entering a phase, functions are taken from the corresponding callback queue and executed; when the queue is empty, or the number of executed callbacks reaches a system-defined threshold, the loop moves on to the next phase.

1)**Timers(计时器阶段)**:初次进入事件循环,会从计时器阶段开始。此阶段会判断是否存在过期的计时器回调(包含 setTimeout 和 setInterval),如果存在则会执行所有过期的计时器回调,执行完毕后,如果回调中触发了相应的微任务,会接着执行所有微任务,执行完微任务后再进入 Pending callbacks 阶段。

(2)**Pending callbacks**:执行推迟到下一个循环迭代的I / O回调(系统调用相关的回调)。

(3)**Idle/Prepare**:仅供内部使用。

(4)**Poll**(轮询阶段):

- 当回调队列不为空时:会执行回调,若回调中触发了相应的微任务,这里的微任务执行时机和其他地方有所不同,不会等到所有回调执行完毕后才执行,而是针对每一个回调执行完毕后,就执行相应微任务。执行完所有的回调后,变为下面的情况。
- 当回调队列为空时(没有回调或所有回调执行完毕):但如果存在有计时器(setTimeout、setInterval和setImmediate)没有执行,会结束轮询阶段,进入 Check 阶段。否则会阻塞并等待任何正在执行的I/O操作完成,并马上执行相应的回调,直到所有回调执行完毕。

(5)**Check(查询阶段)**:会检查是否存在 setImmediate 相关的回调,如果存在则执行所有回调,执行完毕后,如果回调中触发了相应的微任务,会接着执行所有微任务,执行完微任务后再进入 Close callbacks 阶段。

(6)**Close callbacks**:执行一些关闭回调,比如socket.on(‘close’, …)等。

下面来看一个例子,首先在有些情况下,定时器的执行顺序其实是**随机**的

```
setTimeout(() => {
    console.log('setTimeout')
}, 0)
setImmediate(() => {
    console.log('setImmediate')
})
```

对于以上代码来说,`setTimeout` 可能执行在前,也可能执行在后

- 首先 `setTimeout(fn, 0) === setTimeout(fn, 1)`,这是由源码决定的
- 进入事件循环也是需要成本的,如果在准备时候花费了大于 1ms 的时间,那么在 timer 阶段就会直接执行 `setTimeout `回调
- 那么如果准备时间花费小于 1ms,那么就是 `setImmediate `回调先执行了

当然在某些情况下,他们的执行顺序一定是固定的,比如以下代码:

```
const fs = require('fs')
fs.readFile(__filename, () => {
    setTimeout(() => {
        console.log('timeout');
    }, 0)
    setImmediate(() => {
        console.log('immediate')
    })
})
```

在上述代码中,`setImmediate` 永远**先执行**。因为两个代码写在 IO 回调中,IO 回调是在 poll 阶段执行,当回调执行完毕后队列为空,发现存在 `setImmediate` 回调,所以就直接跳转到 check 阶段去执行回调了。

上面都是 macrotask 的执行情况,对于 microtask 来说,它会在以上每个阶段完成前**清空** microtask 队列,下图中的 Tick 就代表了 microtask

```
setTimeout(() => {
  console.log('timer21')
}, 0)
Promise.resolve().then(function() {
  console.log('promise1')
})
```

对于以上代码来说,其实和浏览器中的输出是一样的,microtask 永远执行在 macrotask 前面。

Finally, let's look at `process.nextTick` in Node. This function is actually independent of the Event Loop and has its own queue. When each phase completes, if the nextTick queue is not empty, the loop **clears all callbacks in that queue**, with priority over the other microtasks.

```
setTimeout(() => {
 console.log('timer1')
 Promise.resolve().then(function() {
   console.log('promise1')
 })
}, 0)
process.nextTick(() => {
 console.log('nextTick')
 process.nextTick(() => {
   console.log('nextTick')
   process.nextTick(() => {
     console.log('nextTick')
     process.nextTick(() => {
       console.log('nextTick')
     })
   })
 })
})
```

For the code above, all the nextTick callbacks are always printed first.

**Order:**

```
// macro-task: script (all synchronous code), setInterval, setTimeout, setImmediate, I/O
// micro-task: process.nextTick, Promise
```

 10. What happens when an event is triggered?

Event triggering has three phases:

- The event propagates from `window` towards the element that triggered it, firing any registered capture listeners it encounters
- When it reaches the element that triggered it, the listeners registered there fire
- The event propagates from that element back to `window`, firing any registered bubbling listeners it encounters

Events are generally triggered in the order above, but there is a special case: **if a child node inside `body` registers both a bubbling and a capturing listener for the same event, they are executed in the order in which they were registered.**

```
// The following prints "bubble" first and then "capture"
node.addEventListener(
  'click',
  event => {
    console.log('bubble')
  },
  false
)
node.addEventListener(
  'click',
  event => {
    console.log('capture')
  },
  true
)
```

Events are usually registered with `addEventListener`, whose third parameter can be a boolean or an object. As a boolean, `useCapture` defaults to `false` and determines whether the registered listener is a capture listener or a bubbling listener. As an object, the following properties are available:

- `capture`: boolean, same as `useCapture`
- `once`: boolean; `true` means the callback is invoked only once and the listener is removed after being called
- `passive`: boolean; declares that the listener will never call `preventDefault`

In general, if you want the event to fire only on the target, you can use `stopPropagation` to prevent the event from propagating further. `stopPropagation` is usually thought of as stopping event bubbling, but it can also stop the capture phase.

`stopImmediatePropagation` also stops the event, but in addition it prevents the other listeners registered on the same event target from running.

```
node.addEventListener(
  'click',
  event => {
    event.stopImmediatePropagation()
    console.log('bubble')
  },
  false
)
// Clicking node only runs the listener above; the one below will not run
node.addEventListener(
  'click',
  event => {
    console.log('capture')
  },
  true
)
```

Nine, browser garbage collection mechanism

 1. What is the garbage collection mechanism of V8?

V8 implements accurate GC, and the GC algorithm adopts a generational garbage collection mechanism. Therefore, V8 divides the memory (heap) into two parts: the new generation and the old generation.

**(1) New Generation Algorithm**

The objects in the new generation generally have a short survival time, and the Scavenge GC algorithm is used.

In the new generation space, the memory space is divided into two parts, namely From space and To space. Of these two spaces, one must be used and the other must be free. Newly allocated objects will be put into the From space. When the From space is full, the new generation GC will start. The algorithm will check the surviving objects in the From space and copy them to the To space. If there are inactive objects, they will be destroyed. When the copy is completed, the From space and the To space are exchanged, so that the GC is over.

**(2) Old Generation Algorithm**

The objects in the old generation generally live long and are numerous. Two algorithms are used: mark-sweep and mark-compact.

First, let’s talk about the circumstances under which objects will appear in the old generation space:

- If an object in the young generation has already survived one Scavenge pass, it is moved from the young-generation space to the old-generation space.
- If objects in the To space take up more than 25% of it, then, so as not to affect memory allocation, objects are moved from the young-generation space to the old-generation space.

The space in the old generation is very complicated, there are the following spaces

```
enum AllocationSpace {
  // TODO(v8:7464): Actually map this space's memory as read-only.
  RO_SPACE,    // space for immutable objects
  NEW_SPACE,   // young-generation space used by the GC copying algorithm
  OLD_SPACE,   // old-generation space for ordinary resident objects
  CODE_SPACE,  // old-generation space for code objects
  MAP_SPACE,   // old-generation space for map objects
  LO_SPACE,    // old-generation large-object space
  NEW_LO_SPACE,  // young-generation large-object space
  FIRST_SPACE = RO_SPACE,
  LAST_SPACE = NEW_LO_SPACE,
  FIRST_GROWABLE_PAGED_SPACE = OLD_SPACE,
  LAST_GROWABLE_PAGED_SPACE = MAP_SPACE
};
```

In the old generation, the mark-sweep algorithm is started first in the following situations:

- when a space has no free chunks left to allocate
- when the objects in the space exceed a certain limit
- when the space cannot guarantee that objects from the new generation can still be moved into the old generation

In this phase all objects in the heap are traversed and the live ones are marked; after marking, all unmarked objects are destroyed. When marking a large heap, it can take hundreds of milliseconds to finish one marking pass, which can cause performance issues. To address this, V8 switched in 2011 from stop-the-world marking to incremental marking: the GC breaks the marking work into smaller chunks and lets the JS application logic run for a while between chunks, so the application does not stall. In 2018 there was another major breakthrough in GC technology, concurrent marking, which allows the GC to scan and mark objects while JS keeps running.

After objects are cleared, the heap memory will be fragmented. When the fragmentation exceeds a certain limit, the compression algorithm will be started. During compaction, live objects are moved to one end until all objects are moved and then unneeded memory is cleaned up.

 2. Which operations cause memory leaks?

- The first case: using an undeclared variable accidentally creates a global variable, which stays in memory and cannot be reclaimed.
- The second case: setting a setInterval timer and forgetting to cancel it. If the interval callback references outer variables, those variables stay in memory and cannot be reclaimed.
- The third case: keeping a reference to a DOM element that has since been removed. Because the reference is still held, the element cannot be reclaimed.
- The fourth case: unreasonable use of closures, which keeps certain variables in memory.
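
A small sketch of the second case (the forgotten interval keeps `data` reachable, so it cannot be reclaimed):

```
function start() {
  const data = new Array(100000).fill('leak');
  const id = setInterval(() => {
    // The callback closes over `data`, so `data` cannot be reclaimed while the timer runs
    console.log(data.length);
  }, 1000);
  // Forgetting to call clearInterval(id) leaks `data` for the lifetime of the page
  return id;
}
```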

 One. HTTP protocol

 1. The difference between GET and POST requests

POST and GET are two HTTP request methods; their differences are as follows:

- **Use cases:** A GET request is idempotent and is generally used in scenarios that do not affect server resources, such as requesting a page's resources. POST is not idempotent and is generally used when the request affects server resources, such as registering a user.
- **Caching:** Because the use cases differ, browsers generally cache GET requests but rarely cache POST requests.
- **Message format:** In a GET request the entity part of the message is empty; in a POST request the entity part generally carries the data sent to the server.
- **Security:** A GET request puts its parameters in the URL, which is less safe than POST, because the requested URL is kept in the history.
- **Request length:** Because browsers limit URL length, the amount of data a GET request can send is limited. This limit is imposed by browsers, not by the RFC.
- **Parameter types:** POST supports more data types for its parameters.

 2. The difference between POST and PUT requests

- A PUT request sends data to the server to modify the content of existing data, but it does not add new kinds of data; no matter how many times the PUT is repeated, the result is the same. (It can be understood as **updating data**.)
- A POST request sends data to the server and changes the kinds of data/resources on it: it creates new content. (It can be understood as **creating data**.)

 3. Common HTTP request and response headers

**Common HTTP request headers:**

- Accept: the content types the browser can handle
- Accept-Charset: the character sets the browser can display
- Accept-Encoding: the compression encodings the browser can handle
- Accept-Language: the language currently set in the browser
- Connection: the type of connection between the browser and the server
- Cookie: any cookies set for the current page
- Host: the host of the page making the request
- Referer: the URL of the page making the request
- User-Agent: the browser's user-agent string

**Common HTTP response headers:**

- Date: the time the message was sent, in the format defined by rfc822
- Server: the server name
- Connection: the type of connection between the browser and the server
- Cache-Control: controls HTTP caching
- Content-Type: indicates the MIME type of the document that follows

Common Content-Type values are as follows:

(1) application/x-www-form-urlencoded: the browser's native form submission; if the enctype attribute is not set, data is ultimately submitted as application/x-www-form-urlencoded. The data is placed in the body and encoded as key1=val1&key2=val2, with both keys and values URL-encoded.

(2) multipart/form-data: another common POST submission method, usually used when the form uploads files.

(3) application/json: the message body is a serialized JSON string.

(4) text/xml: mainly used to submit data in XML format.

The data formats the server sends to the client include XML, HTML and JSON.

 4. Is it good or bad to have many HTTP 304 status codes?

To speed up access, the server specifies a caching mechanism for some previously visited pages. When the client requests such a page again, the server judges, based on the cached content, whether the page is the same as before; if it is, it returns 304 directly, and the client uses its cached copy without downloading it a second time.

A 304 status code should not be considered an error; it is the server's response when the client **has a cache**.

Search-engine spiders prefer sites whose content is updated frequently, and they adjust their crawl frequency according to the status codes returned over a period of time. If a site keeps returning 304 for a long stretch, the spider may crawl it less often; conversely, if the site changes quickly and each crawl fetches new content, the crawl rate increases over time.
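
An illustrative conditional request and 304 response (the ETag value here is made up):

```
GET /index.html HTTP/1.1
Host: www.example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: max-age=600
```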

**Reason for generating more 304 status codes:**

- The page update cycle is long or not updated
- Pure static pages or forced to generate static html

**Too many 304 status codes will cause the following problems:**

- Site snapshot stops;
- Indexes decrease;
- Weight drops.

 5. Common HTTP request methods

- GET: fetch data from the server;
- POST: submit an entity to the specified resource, usually causing modification of server resources;
- PUT: upload a file or update data;
- DELETE: delete an object on the server;
- HEAD: obtain the message headers; compared with GET, it does not return the message body;
- OPTIONS: ask which request methods are supported; used for cross-origin requests;
- CONNECT: require a tunnel to be established when communicating with a proxy server, and use the tunnel for TCP communication;
- TRACE: echo the request received by the server, mainly used for testing or diagnosis.

 6. The OPTIONS request method and its use cases

OPTIONS is one of the HTTP request methods besides GET and POST.

The OPTIONS method requests the functional options available for the resource identified by the `Request-URI` during request/response communication. With this method the client can **decide what action needs to be taken on a resource, or find out about the server's capabilities, before making the actual request**. Responses to this method are not cacheable.

The **main uses** of the OPTIONS method are two:

- Getting all the HTTP request methods the server supports;
- Checking access permissions. For example, in CORS cross-origin resource sharing, for non-simple requests an OPTIONS request is sent as a probe to determine whether there is permission to access the specified resource.
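
An illustrative CORS preflight exchange (the origin and header names are example values):

```
OPTIONS /api/user HTTP/1.1
Host: api.example.com
Origin: http://www.example.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: http://www.example.com
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type
Access-Control-Max-Age: 1728000
```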

 7. What are the differences between HTTP 1.0 and HTTP 1.1?

**HTTP 1.0 and HTTP 1.1 differ in the following ways:**

- **Connection**: HTTP 1.0 uses non-persistent connections by default, while HTTP 1.1 uses persistent connections by default. With persistent connections, HTTP 1.1 lets multiple HTTP requests reuse the same TCP connection, avoiding the connection-setup delay incurred each time with non-persistent connections.
- **Resource requests**: in HTTP 1.0 there is some waste of bandwidth; for example, the client may need only part of an object, yet the server sends the whole object, and resuming interrupted transfers is not supported. HTTP 1.1 introduces the range header field in requests, which allows only part of a resource to be requested (the response code is 206, Partial Content), so developers can make full use of the bandwidth and the connection.
- **Caching**: in HTTP 1.0, If-Modified-Since and Expires in the headers are the main criteria for cache decisions; HTTP 1.1 introduces more cache-control strategies, such as Etag, If-Unmodified-Since, If-Match and If-None-Match, giving more optional cache headers to control the caching strategy.
- **Host field**: HTTP 1.1 adds the Host field to specify the server's domain name. In HTTP 1.0 each server was assumed to be bound to a unique IP address, so the URL in the request did not carry a hostname. With the development of virtual-host technology, multiple virtual hosts can exist on one physical server and share one IP address, so the Host field allows requests to be sent to different sites on the same server.
- **New request methods**: compared with HTTP 1.0, HTTP 1.1 adds many request methods, such as PUT, HEAD and OPTIONS.

 8. The difference between HTTP 1.1 and HTTP 2.0

- **Binary protocol: **HTTP/2 is a binary protocol. In HTTP/1.1 version, the header information of the message must be text (ASCII encoding), and the data body can be text or binary. HTTP/2 is a complete binary protocol. Both the header information and the data body are binary, and they are collectively referred to as "frames", which can be divided into header information frames and data frames. The concept of a frame is the basis for its multiplexing.
- **Multiplexing: **HTTP/2 implements multiplexing, HTTP/2 still multiplexes TCP connections, but in a connection, both the client and the server can send multiple requests or responses at the same time, and There is no need to send them one by one in order, thus avoiding the problem of "head of line blocking" [1].
- **Data flow: **HTTP/2 uses the concept of data flow, because HTTP/2 data packets are sent out of order, and consecutive data packets in the same connection may belong to different requests. Therefore, the packet must be marked to indicate which request it belongs to. HTTP/2 refers to all data packets of each request or response as a data stream. Each data stream has a unique number. When a data packet is sent, it must be marked with a data flow ID to distinguish which data flow it belongs to.
- **Header information compression: **HTTP/2 implements header information compression. Since the HTTP 1.1 protocol has no state, all information must be attached to each request. Therefore, many fields of the request are repeated, such as Cookie and User Agent, the exact same content must be attached to each request, which will waste a lot of bandwidth and affect the speed. HTTP/2 optimizes this by introducing a header compression mechanism. On the one hand, the header information is compressed with gzip or compress before sending; on the other hand, the client and server maintain a header information table at the same time, and all fields will be stored in this table to generate an index number, and the same fields will not be sent in the future , only the index number is sent, which improves the speed.
- **Server push: **HTTP/2 allows the server to actively send resources to the client without request, which is called server push. Use the server push to push the necessary resources to the client in advance, so that some delay time can be relatively reduced. It should be noted here that the server actively pushes static resources under http2, which is different from the push of real-time data sent to the client by means of WebSocket and SSE.

**【1】The head of the queue is blocked:**

> Head-of-line blocking is caused by HTTP's basic "request-reply" model. HTTP stipulates that messages must be "sent and received", which forms a first-in, first-out "serial" queue. The requests in the queue have no priority, only the order of entering the queue, and the request at the top will be processed with the highest priority. If the request at the head of the queue delays time because it is processed too slowly, then all subsequent requests in the queue have to wait together. As a result, other requests bear undue time costs, causing the head of the queue to be blocked.

 9. The difference between the HTTP and HTTPS protocols

The main differences between HTTP and HTTPS are as follows:

- HTTPS requires a CA certificate, which costs money, while HTTP does not;
- HTTP is a hypertext transfer protocol and transmits information in plain text, while HTTPS is a secure transfer protocol encrypted with SSL;
- They use different connection methods and different ports: the HTTP port is 80 and the HTTPS port is 443;
- An HTTP connection is simple and stateless; HTTPS is a network protocol built from SSL plus HTTP that supports encrypted transmission and identity authentication, and is more secure than HTTP.

 10. The reason for the GET method URL length limit

In fact, the HTTP protocol specification does not limit the length of the url requested by the get method. This limitation is a limitation imposed by specific browsers and servers.

IE limits URL length to 2083 bytes (2K+35). Since IE allows the shortest URLs of all browsers, during development a URL of no more than 2083 bytes will work in every browser.

```
GET length limit = URL limit (2083) - (your Domain + Path) - 2 (the 2 accounts for the "?" and "=" characters in the GET request)

```

Let's take a look at the length limit range of url in the get method of mainstream browsers:

- Microsoft Internet Explorer (Browser): The IE browser has a maximum limit of 2083 characters for the URL, if it exceeds this number, the submit button does not respond.
- Firefox (Browser): The URL length limit for the Firefox browser is 65,536 characters.
- Safari (Browser): The maximum URL length is limited to 80,000 characters.
- Opera (Browser): The maximum URL length is limited to 190,000 characters.
- Google (chrome): The maximum URL length is limited to 8182 characters.

Mainstream servers limit the length of url in the get method:

- Apache (Server): The maximum URL length that can be accepted is 8192 characters.
- Microsoft Internet Information Server (IIS): The maximum URL length that can be accepted is 16384 characters.

According to the above data, it can be known that the length of the URL in the get method does not exceed 2083 characters, so that all browsers and servers may work normally.

 11. What happens when you type Google.com into your browser and press enter?

**(1) URL parsing: **Firstly, the URL will be parsed to analyze the transmission protocol to be used and the path of the requested resource. If the protocol or host name in the URL entered is invalid, the content entered in the address bar will be passed to the search engine. If there is no problem, the browser will check whether there are any illegal characters in the URL, and if there are illegal characters, escape the illegal characters before proceeding to the next process.

**(2) Cache judgment: **The browser will judge whether the requested resource is in the cache. If the requested resource is in the cache and has not expired, then use it directly, otherwise initiate a new request to the server.

**(3) DNS resolution: **The next step is to obtain the IP address of the domain name in the input URL. First, it will determine whether there is a cache of the IP address of the domain name locally. If there is, use it. If not, send it to The local DNS server initiates the request. The local DNS server will also first check whether there is a cache, if not, it will first initiate a request to the root domain name server, obtain the address of the responsible top-level domain name server, then request the top-level domain name server, and then obtain the address of the responsible authoritative domain name server , and then initiate a request to the authoritative domain name server, and finally obtain the IP address of the domain name, and then the local DNS server returns the IP address to the requesting user. The user initiates a request to the local DNS server as a recursive request, and the local DNS server initiates a request to all levels of domain name servers as an iterative request.

**(4) Obtain the MAC address: **After the browser obtains the IP address, the data transmission also needs to know the MAC address of the destination host, because the application layer sends data to the transport layer, and the TCP protocol will specify the source port number and destination port number , and then sent to the network layer. The network layer will use the local address as the source address, and the obtained IP address as the destination address. Then it will be sent to the data link layer. The transmission of the data link layer needs to add the MAC addresses of the two parties in communication. The MAC address of the local machine is used as the source MAC address, and the destination MAC address needs to be handled according to the situation. By comparing the IP address with the subnet mask of this machine, you can judge whether it is in the same subnet as the requesting host. If it is in the same subnet, you can use the ARP protocol to obtain the MAC address of the destination host. If it is not in the same subnet In the network, the request should be forwarded to the gateway, and it will forward it on its behalf. At this time, the MAC address of the gateway can also be obtained through the ARP protocol. At this time, the MAC address of the destination host should be the address of the gateway.

**(5) TCP three-way handshake:** Next comes the three-way handshake that establishes the TCP connection. First, the client sends the server a SYN segment requesting a connection, together with a random sequence number. After receiving the request, the server sends the client a SYN ACK segment confirming the connection request, together with its own random sequence number. After receiving the server's acknowledgement, the client enters the connection-established state and also sends the server an ACK segment; when the server receives this acknowledgement it also enters the connection-established state, and at this point the connection between the two sides is established.

**(6)HTTPS握手:**如果使用的是 HTTPS 协议,在通信前还存在 TLS 的一个四次握手的过程。首先由客户端向服务器端发送使用的协议的版本号、一个随机数和可以使用的加密方法。服务器端收到后,确认加密的方法,也向客户端发送一个随机数和自己的数字证书。客户端收到后,首先检查数字证书是否有效,如果有效,则再生成一个随机数,并使用证书中的公钥对随机数加密,然后发送给服务器端,并且还会提供一个前面所有内容的 hash 值供服务器端检验。服务器端接收后,使用自己的私钥对数据解密,同时向客户端发送一个前面所有内容的 hash 值供客户端检验。这个时候双方都有了三个随机数,按照之前所约定的加密方法,使用这三个随机数生成一把秘钥,以后双方通信前,就使用这个秘钥对数据进行加密后再传输。

**(7)返回数据:**当页面请求发送到服务器端后,服务器端会返回一个 html 文件作为响应,浏览器接收到响应后,开始对 html 文件进行解析,开始页面的渲染过程。

**(8)页面渲染:**浏览器首先会根据 html 文件构建 DOM 树,根据解析到的 css 文件构建 CSSOM 树,如果遇到 script 标签,则判端是否含有 defer 或者 async 属性,要不然 script 的加载和执行会造成页面的渲染的阻塞。当 DOM 树和 CSSOM 树建立好后,根据它们来构建渲染树。渲染树构建好后,会根据渲染树来进行布局。布局完成后,最后使用浏览器的 UI 接口对页面进行绘制。这个时候整个页面就显示出来了。

**(9)TCP四次挥手:**最后一步是 TCP 断开连接的四次挥手过程。若客户端认为数据发送完成,则它需要向服务端发送连接释放请求。服务端收到连接释放请求后,会告诉应用层要释放 TCP 链接。然后会发送 ACK 包,并进入 CLOSE_WAIT 状态,此时表明客户端到服务端的连接已经释放,不再接收客户端发的数据了。但是因为 TCP 连接是双向的,所以服务端仍旧可以发送数据给客户端。服务端如果此时还有没发完的数据会继续发送,完毕后会向客户端发送连接释放请求,然后服务端便进入 LAST-ACK 状态。客户端收到释放请求后,向服务端发送确认应答,此时客户端进入 TIME-WAIT 状态。该状态会持续 2MSL(最大段生存期,指报文段在网络中生存的时间,超时会被抛弃) 时间,若该时间段内没有服务端的重发请求的话,就进入 CLOSED 状态。当服务端收到确认应答后,也便进入 CLOSED 状态。

 12. Understanding of keep-alive

In HTTP 1.0, by default the client and server create a new connection for every request/response and close it immediately afterwards; this is a **short connection**. When Keep-Alive mode is used, the connection from the client to the server is kept alive, so subsequent requests to the server avoid establishing or re-establishing a connection; this is a **persistent connection**. It is used as follows:

- HTTP 1.0 does not enable Keep-Alive by default (that is, keep-alive is not sent by default), so to keep the connection alive the `Connection: keep-alive` header must be sent explicitly. To close a keep-alive connection, send the `Connection: close` header;
- HTTP 1.1 keeps persistent connections by default: after a transfer completes, the TCP connection stays open, waiting to carry more data for the same domain through the same channel. To close it, the client needs to send the `Connection: close` header.

The **establishment process** of Keep-Alive:

- The client sends a request to the server with the Connection field added to the headers
- The server receives the request and processes the Connection field
- The server sends Connection: Keep-Alive back to the client
- The client receives the Connection field
- The Keep-Alive connection is established successfully

**Process in which the server closes the connection automatically (i.e. without keep-alive)**:

- The client sends only a normal request to the server (without the Connection field)
- The server receives the request and processes it
- The server returns the requested resource and closes the connection
- The client receives the resource, finds no Connection field, and closes the connection

**Process in which the client requests to close the connection**:

- The client sends the Connection: close field to the server
- The server receives the request and processes the Connection field
- The server sends back the response resource and closes the connection
- The client receives the resource and closes the connection

**Advantages of enabling Keep-Alive:**

- Lower CPU and memory usage (because there are fewer concurrently open connections);
- HTTP pipelining of requests and responses becomes possible;
- Less congestion (fewer TCP connections);
- Lower latency for subsequent requests (no more handshakes);
- Errors can be reported without closing the TCP connection.

**Disadvantages** of turning on Keep-Alive:

- Long-term Tcp connections can easily lead to invalid occupation of system resources and waste system resources.

 13. A page has many images; how does HTTP load them?

- Under **HTTP 1**, the browser can open at most 6 TCP connections to one domain name, so the images are requested in multiple batches. **Deploying the resources across multiple domain names** solves this: it increases the number of simultaneous requests and speeds up fetching the page's images.
- Under **HTTP 2**, many resources can be loaded almost at once, because HTTP 2 supports multiplexing and can send multiple HTTP requests over a single TCP connection.

 14. What is the header compression algorithm of HTTP2?

The header compression of HTTP2 is the HPACK algorithm. Establish a "dictionary" at both ends of the client and server, use index numbers to represent repeated strings, and use Huffman coding to compress integers and strings, which can achieve a high compression rate of 50% to 90%.

Specifically:

- Use the "header table" on the client and server to track and store the previously sent key-value pairs. For the same data, it is no longer sent through each request and response; - The header table is within the connection duration of HTTP/
2 Always present, updated incrementally by both client and server;
- Each new header key-value pair is either appended to the end of the current table, or replaces the previous value in the table.

For example, in the two requests in the figure below, the first request sends all the header fields, and the second request only needs to send the difference data, which can reduce redundant data and reduce overhead.
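This is not the real HPACK implementation (Huffman coding and the static table are omitted); it is only a toy sketch of the "index repeated header fields in a shared table" idea described above.

```
// Toy model of a dynamic header table: both sides append new header fields
// and later refer to them by index instead of resending the full string.
class HeaderTable {
  constructor() { this.entries = []; }
  encode(name, value) {
    const key = `${name}: ${value}`;
    const idx = this.entries.indexOf(key);
    if (idx !== -1) return { indexed: idx };   // already known: send only the index
    this.entries.push(key);                    // new: send the literal and remember it
    return { literal: key };
  }
  decode(token) {
    if ('indexed' in token) return this.entries[token.indexed];
    this.entries.push(token.literal);
    return token.literal;
  }
}

const enc = new HeaderTable(), dec = new HeaderTable();
const first  = enc.encode('user-agent', 'Mozilla/5.0');  // literal on the first request
const second = enc.encode('user-agent', 'Mozilla/5.0');  // only an index afterwards
console.log(first, second, dec.decode(first), dec.decode(second));
```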

 15. What does the HTTP request message look like?

The request message consists of 4 parts:

- Request line
- Request header
- Empty line
- Request body

**Among them:**

(1) The request line includes: request method field, URL field, HTTP protocol version field. They are separated by spaces. For example, GET /index.html HTTP/1.1.

(2) Request header: The request header consists of keyword/value pairs, one pair per line, and the keyword and value are separated by a colon ":"

- User-Agent: The type of browser that made the request.
- Accept: A list of content types recognized by the client.
- Host: The requested host name, which allows multiple domain names to be in the same IP address, that is, a virtual host.

(3) Request body: the data carried by requests such as POST and PUT.

 16. What does the HTTP response message look like?

The response message consists of 4 parts:

- Status (response) line
- Response headers
- Empty line
- Response body

- Status line: consists of the protocol version, the status code, and the reason phrase, for example HTTP/1.1 200 OK.
- Response headers: made up of the response header fields.
- Response body: the data returned by the server.
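For reference, a minimal request/response pair laid out with the parts described above (the host and body are hypothetical):

```
GET /index.html HTTP/1.1      <- request line
Host: www.example.com         <- request headers
Accept: text/html
                              <- empty line (no body for GET)

HTTP/1.1 200 OK               <- status line
Content-Type: text/html       <- response headers
Content-Length: 12
                              <- empty line
Hello world!                  <- response body
```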

 17. HTTP协议的优点和缺点

HTTP 是超文本传输协议,它定义了客户端和服务器之间交换报文的格式和方式,默认使用 80 端口。它使用 TCP 作为传输层协议,保证了数据传输的可靠性。

HTTP协议具有以下**优点**:

- 支持客户端/服务器模式
- **简单快速**:客户向服务器请求服务时,只需传送请求方法和路径。由于 HTTP 协议简单,使得 HTTP 服务器的程序规模小,因而通信速度很快。
- **无连接**:无连接就是限制每次连接只处理一个请求。服务器处理完客户的请求,并收到客户的应答后,即断开连接,采用这种方式可以节省传输时间。
- **无状态**:HTTP 协议是无状态协议,这里的状态是指通信过程的上下文信息。缺少状态意味着如果后续处理需要前面的信息,则它必须重传,这样可能会导致每次连接传送的数据量增大。另一方面,在服务器不需要先前信息时它的应答就比较快。
- **灵活**:HTTP 允许传输任意类型的数据对象。正在传输的类型由 Content-Type 加以标记。

HTTP协议具有以下**缺点**:

- **无状态**:HTTP 是一个无状态的协议,HTTP 服务器不会保存关于客户的任何信息。
- **明文传输**:协议中的报文使用的是文本形式,这就直接暴露给外界,不安全。
- **不安全**

(1)通信使用明文(不加密),内容可能会被窃听;

(2)不验证通信方的身份,因此有可能遭遇伪装;

(3)无法证明报文的完整性,所以有可能已遭篡改;

 18. 说一下HTTP 3.0

HTTP/3基于UDP协议实现了类似于TCP的多路复用数据流、传输可靠性等功能,这套功能被称为QUIC协议。

1. 流量控制、传输可靠性功能:QUIC在UDP的基础上增加了一层来保证数据传输可靠性,它提供了数据包重传、拥塞控制、以及其他一些TCP中的特性。
2. 集成TLS加密功能:目前QUIC使用TLS1.3,减少了握手所花费的RTT数。
3. 多路复用:同一物理连接上可以有多个独立的逻辑数据流,实现了数据流的单独传输,解决了TCP的队头阻塞问题。
4. 快速握手:由于基于UDP,可以实现使用0 ~ 1个RTT来建立连接。

 19. HTTP协议的性能怎么样

HTTP 协议是基于 TCP/IP,并且使用了**请求-应答**的通信模式,所以性能的关键就在这两点里。

- **长连接**

HTTP协议有两种连接模式,一种是持续连接,一种非持续连接。

(1)非持续连接指的是服务器必须为每一个请求的对象建立和维护一个全新的连接。

(2)持续连接下,TCP 连接默认不关闭,可以被多个请求复用。采用持续连接的好处是可以避免每次建立 TCP 连接三次握手时所花费的时间。

对于不同版本的采用不同的连接方式:

- Every request in HTTP/1.0 requires a new TCP connection (three-way handshake), and requests are serial, so TCP connections are needlessly created and torn down, which increases communication overhead. This version uses non-persistent connections by default, but the client can add `Connection: keep-alive` to ask the server not to close the TCP connection after the request.
- HTTP/1.1 introduced **long connections**, also called persistent connections. The benefit is that it removes the extra overhead of repeatedly establishing and closing TCP connections and reduces the load on the server. This and later versions use persistent connections by default. Currently, for the same domain, most browsers support up to 6 persistent connections at the same time.

- **Pipeline network transmission**

HTTP/1.1 adopts the long connection method, which makes pipeline network transmission possible.

Pipeline (pipeline) network transmission means: in the same TCP connection, the client can initiate multiple requests, as long as the first request is sent out, the second request can be sent out without waiting for it to come back, which can reduce overall response time. But the server still responds to the requests in order. If the previous response is particularly slow, there will be many requests waiting in line later. This is called head-of-line blocking.

- **Head-of-line blocking**

HTTP messages over one connection are handled one at a time: the tasks are placed in a queue and executed serially. Once the request at the head of the queue is processed too slowly, all subsequent requests are blocked behind it. This is the HTTP head-of-line blocking problem.

**Solutions for head-of-line blocking:**

(1) Concurrent connections: multiple persistent connections are allowed for one domain, which is equivalent to adding more task queues, so that the tasks in one queue do not block all the others.

(2) Domain sharding: the site is split across many second-level domain names that all point to the same server, so the number of concurrent persistent connections increases, which mitigates head-of-line blocking.

 20. What are the components of a URL?

Take the following URL as an example: **http://www.aspxfans.com:8080/news/index.asp?boardID=5&ID=24618&page=1#name**

As can be seen from the above URL, a complete URL includes the following parts:

- **Protocol part**: the protocol of this URL is "http:", meaning the page is served over HTTP. Many protocols can be used on the Internet, such as HTTP and FTP; the "//" after "http:" is a separator.
- **Domain name part**: the domain name of this URL is "www.aspxfans.com". An IP address can also be used directly as the domain name in a URL.
- **Port part**: the port follows the domain name, with ":" as the separator between them. The port is not a required part of a URL; if it is omitted, the default port is used (80 for HTTP, 443 for HTTPS).
- **Virtual directory part**: everything from the first "/" after the domain name up to the last "/" is the virtual directory part. It is also not a required part of a URL; the virtual directory in this example is "/news/".
- **File name part**: from the last "/" after the domain name to the "?" is the file name part; if there is no "?", it runs to the "#"; if there is neither "?" nor "#", it runs to the end of the URL. The file name in this example is "index.asp". It is not mandatory; if omitted, the default file name is used.
- **Anchor part**: from "#" to the end is the anchor (fragment) part, "name" in this example. It is not a required part of a URL.
- **Parameter part**: the part from "?" to "#" is the parameter part, also called the search or query part, "boardID=5&ID=24618&page=1" in this example. Multiple parameters are allowed, separated by "&".

 21. What are the HTTP request headers related to caching?

Strong cache:

- Expires
- Cache-Control

Negotiated (conditional) cache:

- Etag、If-None-Match
- Last-Modified、If-Modified-Since
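A small Node.js sketch (the max-age and validator values are arbitrary) that sets both kinds of headers: `Cache-Control`/`Expires` for the strong cache and `ETag`/`Last-Modified` for the negotiated cache.

```
const http = require('http');

http.createServer((req, res) => {
  // Strong cache: the browser may reuse its copy for 10 minutes without asking.
  res.setHeader('Cache-Control', 'max-age=600');
  res.setHeader('Expires', new Date(Date.now() + 600 * 1000).toUTCString());
  // Negotiated cache: validators the browser sends back as If-None-Match / If-Modified-Since.
  res.setHeader('ETag', '"v1"');
  res.setHeader('Last-Modified', new Date('2023-01-01').toUTCString());
  res.end('cached resource');
}).listen(3000);
```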

 二、HTTPS协议

 1. 什么是HTTPS协议?

超文本传输安全协议(Hypertext Transfer Protocol Secure,简称:HTTPS)是一种通过计算机网络进行安全通信的传输协议。HTTPS经由HTTP进行通信,利用SSL/TLS来加密数据包。HTTPS的主要目的是提供对网站服务器的身份认证,保护交换数据的隐私与完整性。

HTTP协议采用**明文传输**信息,存在**信息窃听**、**信息篡改**和**信息劫持**的风险,而协议TLS/SSL具有**身份验证**、**信息加密**和**完整性校验**的功能,可以避免此类问题发生。

安全层的主要职责就是**对发起的HTTP请求的数据进行加密操作** 和 **对接收到的HTTP的内容进行解密操作**。

 2. TLS/SSL的工作原理

**TLS/SSL**全称**安全传输层协议**(Transport Layer Security), 是介于TCP和HTTP之间的一层安全协议,不影响原有的TCP协议和HTTP协议,所以使用HTTPS基本上不需要对HTTP页面进行太多的改造。

TLS/SSL的功能实现主要依赖三类基本算法:**散列函数hash**、**对称加密**、**非对称加密**。这三类算法的作用如下:

- 基于散列函数验证信息的完整性
- 对称加密算法采用协商的秘钥对数据加密
- 非对称加密实现身份认证和秘钥协商

# (1)散列函数hash

常见的散列函数有MD5、SHA1、SHA256。该函数的特点是单向不可逆,对输入数据非常敏感,输出的长度固定,任何数据的修改都会改变散列函数的结果,可以用于防止信息篡改并验证数据的完整性。

**特点**:在信息传输过程中,散列函数并不能单独防止信息被篡改,因为传输是明文传输,中间人可以在修改信息后重新计算信息摘要,所以需要对传输的信息和信息摘要进行加密。

# (2)对称加密

对称加密的方法是,双方使用同一个秘钥对数据进行加密和解密。但是对称加密的存在一个问题,就是如何保证秘钥传输的安全性,因为秘钥还是会通过网络传输的,一旦秘钥被其他人获取到,那么整个加密过程就毫无作用了。 这就要用到非对称加密的方法。

常见的对称加密算法有AES-CBC、DES、3DES、AES-GCM等。相同的秘钥可以用于信息的加密和解密。掌握秘钥才能获取信息,防止信息窃听,其通讯方式是一对一。

**特点**:对称加密的优势就是信息传输使用一对一,需要共享相同的密码,密码的安全是保证信息安全的基础,服务器和N个客户端通信,需要维持N个密码记录且不能修改密码。

# (3)非对称加密

非对称加密的方法是,我们拥有两个秘钥,一个是公钥,一个是私钥。公钥是公开的,私钥是保密的。用私钥加密的数据,只有对应的公钥才能解密,用公钥加密的数据,只有对应的私钥才能解密。我们可以将公钥公布出去,任何想和我们通信的客户, 都可以使用我们提供的公钥对数据进行加密,这样我们就可以使用私钥进行解密,这样就能保证数据的安全了。但是非对称加密有一个缺点就是加密的过程很慢,因此如果每次通信都使用非对称加密的方式的话,反而会造成等待时间过长的问题。

常见的非对称加密算法有RSA、ECC、DH等。秘钥成对出现,一般称为公钥(公开)和私钥(保密)。公钥加密的信息只有私钥可以解开,私钥加密的信息只能公钥解开,因此掌握公钥的不同客户端之间不能相互解密信息,只能和服务器进行加密通信,服务器可以实现一对多的的通信,客户端也可以用来验证掌握私钥的服务器的身份。

**Features**: Asymmetric encryption supports one-to-many communication: the server only needs to keep one private key to communicate with many clients. However, anything the server encrypts with its private key can be decrypted by every client holding the public key, and the algorithm is computationally complex, so encryption is slow.

Combining the characteristics of these algorithms, TLS/SSL works as follows: the client first uses asymmetric encryption to communicate with the server, which provides identity verification and lets the two sides negotiate the key used for symmetric encryption. The symmetric algorithm then uses the negotiated key to encrypt the messages and their digests, and different sessions use different symmetric keys, so the information can only be read by the two communicating parties. This combination compensates for the weaknesses of each method; a sketch follows.
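A sketch of this hybrid scheme using Node's built-in crypto module: RSA protects a randomly generated AES key, and AES-GCM then encrypts the actual payload. Real TLS negotiates keys differently (for example with ECDHE); this only illustrates the "asymmetric for key exchange, symmetric for data" division of labour.

```
const crypto = require('crypto');

// Server-side key pair; the public key would normally be shipped inside a certificate.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// Client: create a random symmetric key and protect it with the server's public key.
const aesKey = crypto.randomBytes(32);
const wrappedKey = crypto.publicEncrypt(publicKey, aesKey);

// Server: unwrap the symmetric key with its private key.
const unwrapped = crypto.privateDecrypt(privateKey, wrappedKey);

// Both sides now encrypt traffic symmetrically (fast) with the shared key.
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', unwrapped, iv);
const ciphertext = Buffer.concat([cipher.update('hello over a TLS-like channel'), cipher.final()]);
const tag = cipher.getAuthTag();

const decipher = crypto.createDecipheriv('aes-256-gcm', aesKey, iv);
decipher.setAuthTag(tag);
console.log(Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString());
```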

 3. What is a digital certificate?

The current method is not necessarily safe, because there is no way to determine that the obtained public key must be a safe public key. There may be a middleman who intercepts the public key sent to us by the other party, and then sends his own public key to us. When we encrypt the information sent with his public key, he can decrypt it with his own private key. Then he pretended to be us and sent information to the other party in the same way, so that our information was stolen, but he didn't know it yet. To solve such problems, digital certificates can be used.

First, a hash algorithm is applied to the public key and other information to generate a message digest, and then a trusted certificate authority (CA) encrypts that digest with its private key to form a signature. Finally, the original information and the signature are combined into a digital certificate. When the recipient receives the digital certificate, it first generates a digest from the original information using the same hash algorithm, then decrypts the digest contained in the certificate's signature with the CA's public key, and finally compares the decrypted digest with the one it generated: if they differ, the information has been tampered with.

The most important thing about this method is the reliability of the certification center. Generally, some top-level certification center certificates are built into the browser, which means that we automatically trust them. Only in this way can data security be guaranteed.
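A hedged sketch of the "CA signs a digest of the certificate content" idea with Node's crypto module; the CA key pair and certificate fields here are made up purely for illustration.

```
const crypto = require('crypto');

// Stand-in for the CA's key pair (in reality only the CA holds the private key).
const ca = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// "Certificate" content: the site's public key plus some metadata.
const certContent = JSON.stringify({ subject: 'www.example.com', publicKey: 'site-public-key...' });

// CA: hash the content and encrypt the digest with its private key -> signature.
const signature = crypto.createSign('SHA256').update(certContent).sign(ca.privateKey);

// Browser: recompute the digest and check it against the signature using the CA's public key.
const ok = crypto.createVerify('SHA256').update(certContent).verify(ca.publicKey, signature);
console.log('certificate intact:', ok);   // false if certContent was tampered with
```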

 4. HTTPS communication (handshake) process

The communication process of HTTPS is as follows:

1. The client initiates a request to the server, and the request includes the protocol version number used, a random number generated, and the encryption method supported by the client.
2. After receiving the request, the server confirms the encryption method used by both parties, and gives the server's certificate and a random number generated by the server.
3. After the client confirms that the server certificate is valid, it generates a new random number, encrypts the random number with the public key in the digital certificate, and sends it to the server. And it will also provide a hash value of all the previous content for verification by the server.
4. The server uses its own private key to decrypt the random number sent by the client. And provide the hash value of all the previous content for the client to check.
5. The client and the server use the previous three random numbers according to the agreed encryption method to generate a conversation key, which will be used to encrypt information in subsequent conversations.

 5. Features of HTTPS

The **advantages** of HTTPS are as follows:

- HTTPS can authenticate the user and the server, ensuring that data is sent to the correct client and server;
- HTTPS provides encrypted transmission and identity authentication, which makes communication more secure, prevents data from being stolen or modified in transit, and protects data integrity;
- HTTPS is the most secure option under the current architecture; although not absolutely secure, it greatly increases the cost of a man-in-the-middle attack.

The **disadvantages** of HTTPS are as follows:

- Both the server and the client must encrypt and decrypt, which consumes more server resources and complicates the process;
- The HTTPS handshake phase is time-consuming and increases page load time;
- SSL certificates cost money, and the more capable the certificate, the higher the cost;
- HTTPS connections consume noticeably more server-side resources, so supporting a site with more visitors costs more;
- An SSL certificate needs to be bound to an IP address, and multiple domain names cannot share the same IP (without SNI).

 6. How does HTTPS ensure security?

First understand two concepts:

- Symmetric encryption: both parties use the same key to encrypt and decrypt. It is simple and performs well, but it cannot solve the problem of safely delivering the key to the other side for the first time, and the key is easy for an attacker to intercept.
- Asymmetric encryption:

1. Private key + public key = key pair
2. Data encrypted with the private key can only be decrypted with the matching public key, and data encrypted with the public key can only be decrypted with the matching private key
3. Each of the two communicating parties has its own key pair; before communicating, the two sides exchange their public keys
4. Each side then encrypts data with the other side's public key, and the receiver decrypts it with its own private key

Although asymmetric encryption is more secure, the problem is that it is very slow and affects performance.

**Solution:**

Combine the two approaches: encrypt the symmetric key with the asymmetric public key and send it; the receiver decrypts it with its private key to recover the symmetric key, and from then on both sides communicate using symmetric encryption.

At this point another problem appears: the man-in-the-middle problem.

If there is a middleman between the client and the server, the middleman only needs to swap the public keys the two parties originally sent to each other with his own public key, and he can then easily decrypt all the data the two sides send.

So a trusted third-party certificate issued by a CA is needed to prove the server's identity and prevent man-in-the-middle attacks. The certificate includes the issuer, the certificate's purpose, the applicant's public key, the hash algorithm used, the expiry time, and so on.

But then, what if the middleman tampers with the certificate itself? The identity it certifies would become worthless and the certificate would have been bought for nothing; this is where another technique, the digital signature, comes in.

A digital signature is produced by hashing the certificate content with the hash algorithm specified by the CA to obtain a digest, and then encrypting that digest with the CA's private key. When someone presents a certificate, we compute a digest of its content with the same hash algorithm, decrypt the attached signature with the CA's public key to get the digest the CA created, and compare the two: if they match, the certificate has not been tampered with along the way. This guarantees the security of the communication to the greatest possible extent.

3. HTTP status code

Categories of status codes:

| **类别** | **原因**                  | **描述**        |
| ------ | ----------------------- | ------------- |
| 1xx    | Informational(信息性状态码)   | 接受的请求正在处理     |
| 2xx    | Success(成功状态码)          | 请求正常处理完毕      |
| 3xx    | Redirection(重定向状态码)     | 需要进行附加操作以完成请求 |
| 4xx    | Client Error (客户端错误状态码) | 服务器无法处理请求     |
| 5xx    | Server Error(服务器错误状态码)  | 服务器处理请求出错     |

 1. 2XX (Success 成功状态码)

状态码2XX表示请求被正常处理了。

# (1)200 OK

200 OK表示客户端发来的请求被服务器端正常处理了。

# (2)204 No Content

该状态码表示客户端发送的请求已经在服务器端正常处理了,但是没有返回的内容,响应报文中不包含实体的主体部分。一般在只需要从客户端往服务器端发送信息,而服务器端不需要往客户端发送内容时使用。

# (3)206 Partial Content

该状态码表示客户端进行了范围请求,而服务器端执行了这部分的 GET 请求。响应报文中包含由 Content-Range 指定范围的实体内容。

 2. 3XX (Redirection 重定向状态码)

3XX 响应结果表明浏览器需要执行某些特殊的处理以正确处理请求。

# (1)301 Moved Permanently

**永久重定向。**

该状态码表示请求的资源已经被分配了新的 URI,以后应使用资源指定的 URI。新的 URI 会在 HTTP 响应头中的 Location 首部字段指定。若用户已经把原来的URI保存为书签,此时会按照 Location 中新的URI重新保存该书签。同时,搜索引擎在抓取新内容的同时也将旧的网址替换为重定向之后的网址。

**使用场景:**

- 当我们想换个域名,旧的域名不再使用时,用户访问旧域名时用301就重定向到新的域名。其实也是告诉搜索引擎收录的域名需要对新的域名进行收录。
- 在搜索引擎的搜索结果中出现了不带www的域名,而带www的域名却没有收录,这个时候可以用301重定向来告诉搜索引擎我们目标的域名是哪一个。

# (2)302 Found

**临时重定向。**

该状态码表示请求的资源被分配到了新的 URI,希望用户(本次)能使用新的 URI 访问资源。和 301 Moved Permanently 状态码相似,但是 302 代表的资源不是被永久重定向,只是临时性质的。也就是说已移动的资源对应的 URI 将来还有可能发生改变。若用户把 URI 保存成书签,但不会像 301 状态码出现时那样去更新书签,而是仍旧保留返回 302 状态码的页面对应的 URI。同时,搜索引擎会抓取新的内容而保留旧的网址。因为服务器返回302代码,搜索引擎认为新的网址只是暂时的。

**使用场景:**

- 当我们在做活动时,登录到首页自动重定向,进入活动页面。
- 未登陆的用户访问用户中心重定向到登录页面。
- 访问404页面重新定向到首页。

# (3)303 See Other

该状态码表示由于请求对应的资源存在着另一个 URI,应使用 GET 方法定向获取请求的资源。

303 状态码和 302 Found 状态码有着相似的功能,但是 303 状态码明确表示客户端应当采用 GET 方法获取资源。

303 状态码通常作为 PUT 或 POST 操作的返回结果,它表示重定向链接指向的不是新上传的资源,而是另外一个页面,比如消息确认页面或上传进度页面。而请求重定向页面的方法要总是使用 GET。

注意:

- 当 301、302、303 响应状态码返回时,几乎所有的浏览器都会把 POST 改成GET,并删除请求报文内的主体,之后请求会再次自动发送。
- 301、302 标准是禁止将 POST 方法变成 GET方法的,但实际大家都会这么做。

# (4)304 Not Modified

浏览器缓存相关。

该状态码表示客户端发送附带条件的请求时,服务器端允许请求访问资源,但未满足条件的情况。304 状态码返回时,不包含任何响应的主体部分。304 虽然被划分在 3XX 类别中,但是和重定向没有关系。

带条件的请求(Http 条件请求):使用 Get方法 请求,请求报文中包含(`if-match`、`if-none-match`、`if-modified-since`、`if-unmodified-since`、`if-range`)中任意首部。

状态码304并不是一种错误,而是告诉客户端有缓存,直接使用缓存中的数据。返回页面的只有头部信息,是没有内容部分的,这样在一定程度上提高了网页的性能。
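A minimal Node.js sketch (with a hypothetical fixed ETag) of this conditional-request flow: when the validator the browser sends back still matches, the server answers 304 with no body and the browser uses its cached copy.

```
const http = require('http');

const body = '<h1>hello</h1>';
const etag = '"abc123"';   // in practice derived from the resource contents

http.createServer((req, res) => {
  res.setHeader('ETag', etag);
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304);    // headers only, no body: the client reuses its cache
    return res.end();
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(body);
}).listen(3000);
```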

# (5)307 Temporary Redirect

307表示临时重定向。该状态码与 302 Found 有着相同含义,尽管 302 标准禁止 POST 变成 GET,但是实际使用时还是这样做了。

307 会遵守浏览器标准,**不会从 POST 变成 GET**。但是对于处理请求的行为,不同浏览器还是会出现不同的情况。规范要求浏览器继续向 Location 的地址 POST 内容。

 3. 4XX (Client Error 客户端错误状态码)

4XX 的响应结果表明客户端是发生错误的原因所在。

# (1)400 Bad Request

This status code indicates that there is a syntax error in the request message. When an error occurs, it is necessary to modify the content of the request and send the request again. Also, browsers treat this status code like 200 OK.

# (2)401 Unauthorized

This status code indicates that the sent request needs to have authentication information that passes HTTP authentication (BASIC authentication, DIGEST authentication). If a request has been made before, it means that the user authentication failed

Responses returned with a 401 MUST include a WWW-Authenticate header applicable to the requested resource to challenge user information. When the browser receives a 401 response for the first time, a dialog window for authentication will pop up.

401 will appear in the following situations:

- 401.1 - Login failed.
- 401.2 - Server configuration caused login failure.
- 401.3 - Not authorized due to ACL restrictions on the resource.
- 401.4 - Filter authorization failed.
- 401.5 - ISAPI/CGI application authorization failure.
- 401.7 - Access is denied by the URL authorization policy on the web server. This error code is specific to IIS 6.0.

# (3)403 Forbidden

This status code indicates that the access to the requested resource is rejected by the server. The server does not need to give a detailed reason, but it can be explained in the body of the response message entity. After entering this state, verification cannot continue. This access is permanently prohibited and closely tied to application logic.

IIS defines a number of different 403 errors that indicate more specific error causes:

- 403.1 - 执行访问被禁止。
- 403.2 - 读访问被禁止。
- 403.3 - 写访问被禁止。
- 403.4 - 要求 SSL。
- 403.5 - 要求 SSL 128。
- 403.6 - IP 地址被拒绝。
- 403.7 - 要求客户端证书。
- 403.8 - 站点访问被拒绝。
- 403.9 - 用户数过多。
- 403.10 - 配置无效。
- 403.11 - 密码更改。
- 403.12 - 拒绝访问映射表。
- 403.13 - 客户端证书被吊销。
- 403.14 - 拒绝目录列表。
- 403.15 - 超出客户端访问许可。
- 403.16 - 客户端证书不受信任或无效。
- 403.17 - 客户端证书已过期或尚未生效
- 403.18 - 在当前的应用程序池中不能执行所请求的 URL。这个错误代码为 IIS 6.0 所专用。
- 403.19 - 不能为这个应用程序池中的客户端执行 CGI。这个错误代码为 IIS 6.0 所专用。
- 403.20 - Passport 登录失败。这个错误代码为 IIS 6.0 所专用。

# (4)404 Not Found

该状态码表明服务器上无法找到请求的资源。除此之外,也可以在服务器端拒绝请求且不想说明理由时使用。

以下情况会出现404:

- 404.0 - (None) - File or directory not found.
- 404.1 - The website cannot be accessed on the requested port.
- 404.2 - Web service extension lockout policy prevents this request.
- 404.3 - MIME mapping policy prevents this request.

# (5)405 Method Not Allowed

This status code indicates that although the server recognizes the method requested by the client, it forbids using that method on this resource. The server should always allow clients to use the GET and HEAD methods. The client can discover which methods the server allows via an OPTIONS (preflight) request, for example:

```
Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE

```

 4. 5XX (Server Error 服务器错误状态码)

A 5XX response results in an error on the server itself.

# (1)500 Internal Server Error

This status code indicates that an error occurred on the server side while executing the request. It could also be a bug or some temporary glitch in the web application.

# (2)502 Bad Gateway

This status code indicates that a server acting as a gateway or proxy received an invalid response from an upstream server. Note that 502 errors are usually not fixable by the client, but need to be fixed by the passing web server or proxy server. 502 will appear in the following situations:

- 502.1 - CGI (Common Gateway Interface) application timed out.
- 502.2 - CGI (Common Gateway Interface) application error.

# (3)503 Service Unavailable

This status code indicates that the server is temporarily overloaded or down for maintenance and cannot handle the request right now. If the time needed to recover is known in advance, it is best to return it to the client in the Retry-After header field.

**Use cases:**

- 服务器停机维护时,主动用503响应请求;
- nginx 设置限速,超过限速,会返回503。

# (4)504 Gateway Timeout

该状态码表示网关或者代理的服务器无法在规定的时间内获得想要的响应。他是HTTP 1.1中新加入的。

使用场景:代码执行时间超时,或者发生了死循环。

 5. 总结

**1)2XX 成功**

- 200 OK,表示从客户端发来的请求在服务器端被正确处理
- 204 No content,表示请求成功,但响应报文不含实体的主体部分
- 205 Reset Content,表示请求成功,但响应报文不含实体的主体部分,但是与 204 响应不同在于要求请求方重置内容
- 206 Partial Content,进行范围请求

**(2)3XX 重定向**

- 301 moved permanently,永久性重定向,表示资源已被分配了新的 URL
- 302 found,临时性重定向,表示资源临时被分配了新的 URL
- 303 see other,表示资源存在着另一个 URL,应使用 GET 方法获取资源
- 304 not modified,表示服务器允许访问资源,但因发生请求未满足条件的情况
- 307 temporary redirect,临时重定向,和302含义类似,但是期望客户端保持请求方法不变向新的地址发出请求

**(3)4XX 客户端错误**

- 400 Bad Request: there is a syntax error in the request message
- 401 Unauthorized: the request must carry authentication information that passes HTTP authentication
- 403 Forbidden: access to the requested resource is refused by the server
- 404 Not Found: the requested resource cannot be found on the server

**(4) 5XX Server Error**

- 500 Internal Server Error: an error occurred on the server side while executing the request
- 501 Not Implemented: the server does not support a feature required by the current request
- 503 Service Unavailable: the server is temporarily overloaded or down for maintenance and cannot process the request

 6. The same redirect, the difference between 307, 303, 302?

302 is the status code defined in HTTP/1.0; HTTP/1.1 added 303 and 307 to refine its semantics. 303 explicitly tells the client to fetch the resource with the GET method, so a POST request becomes a GET on redirect. 307 follows the standard and does not change POST into GET.

4. Introduction to DNS protocol

 1. What is the DNS protocol

**Concept**: DNS is the abbreviation of Domain Name System (Domain Name System), which provides a conversion service from a host name to an IP address, which is what we often call the Domain Name System. It is a distributed database composed of hierarchical DNS servers, and it is an application layer protocol that defines how hosts query this distributed database. It can make it easier for people to access the Internet without having to remember the IP number string that can be directly read by the machine.

**Function**: The domain name is resolved into an IP address, the client sends a domain name query request to the DNS server (the DNS server has its own IP address), and the DNS server informs the client of the IP address of the Web server.

 2. DNS uses both TCP and UDP protocols?

**DNS occupies port 53 and uses both the TCP and UDP protocols.**

(1) Use TCP protocol during zone transfer

- The secondary domain name server will periodically (usually 3 hours) query the primary domain name server in order to know whether the data has changed. If there is a change, a zone transfer will be performed to synchronize the data. Zone transfers use TCP instead of UDP because the amount of data transferred synchronously is much larger than the amount of data that can be answered in a request.
- TCP is a reliable connection, which ensures the accuracy of data.

(2) UDP protocol is used in domain name resolution

- The client queries the DNS server for the domain name, and the content returned generally does not exceed 512 bytes, which can be transmitted by UDP. There is no need to go through a three-way handshake, so the DNS server load is lower and the response is faster. In theory, the client can also specify to use TCP when querying the DNS server, but in fact, many DNS servers only support UDP query packets when configuring.

 3. Complete DNS query process

The process of DNS server resolving domain names:

- First look up the corresponding IP address in the **browser's cache**; if found, return it directly, otherwise continue to the next step
- Send the request to the **local DNS server**; if the answer is cached there, return it directly, otherwise continue to the next step
- The local DNS server sends a request to a **root name server**, which returns the address of a top-level domain (TLD) name server for the queried domain
- The local DNS server queries that **TLD name server**; the server checks its own cache and either returns the result or returns the address of the relevant next-level authoritative name server
- The local DNS server queries the **authoritative name server**, which returns the final answer
- The local DNS server caches the result for later use
- The local DNS server returns the result to the browser

比如要查询 **www.baidu.com** 的 IP 地址,首先会在浏览器的缓存中查找是否有该域名的缓存,如果不存在就将请求发送到本地的 DNS 服务器中,本地DNS服务器会判断是否存在该域名的缓存,如果不存在,则向根域名服务器发送一个请求,根域名服务器返回负责 .com 的顶级域名服务器的 IP 地址的列表。然后本地 DNS 服务器再向其中一个负责 .com 的顶级域名服务器发送一个请求,负责 .com 的顶级域名服务器返回负责 .baidu 的权威域名服务器的 IP 地址列表。然后本地 DNS 服务器再向其中一个权威域名服务器发送一个请求,最后权威域名服务器返回一个对应的主机名的 IP 地址列表。

 4. 迭代查询与递归查询

实际上,DNS解析是一个包含迭代查询和递归查询的过程。

- **递归查询**指的是查询请求发出后,域名服务器代为向下一级域名服务器发出请求,最后向用户返回查询的最终结果。使用递归 查询,用户只需要发出一次查询请求。
- **迭代查询**指的是查询请求后,域名服务器返回单次查询的结果。下一级的查询由用户自己请求。使用迭代查询,用户需要发出 多次的查询请求。

一般我们向本地 DNS 服务器发送请求的方式就是递归查询,因为我们只需要发出一次请求,然后本地 DNS 服务器返回给我 们最终的请求结果。而本地 DNS 服务器向其他域名服务器请求的过程是迭代查询的过程,因为每一次域名服务器只返回单次 查询的结果,下一级的查询由本地 DNS 服务器自己进行。

 5. DNS 记录和报文

DNS 服务器中以资源记录的形式存储信息,每一个 DNS 响应报文一般包含多条资源记录。一条资源记录的具体的格式为

```
(Name,Value,Type,TTL)

```

其中 TTL 是资源记录的生存时间,它定义了资源记录能够被其他的 DNS 服务器缓存多长时间。

常用的一共有四种 Type 的值,分别是 A、NS、CNAME 和 MX ,不同 Type 的值,对应资源记录代表的意义不同:

- 如果 Type = A,则 Name 是主机名,Value 是主机名对应的 IP 地址。因此一条记录为 A 的资源记录,提供了标 准的主机名到 IP 地址的映射。
- 如果 Type = NS,则 Name 是个域名,Value 是负责该域名的 DNS 服务器的主机名。这个记录主要用于 DNS 链式 查询时,返回下一级需要查询的 DNS 服务器的信息。
- 如果 Type = CNAME,则 Name 为别名,Value 为该主机的规范主机名。该条记录用于向查询的主机返回一个主机名 对应的规范主机名,从而告诉查询主机去查询这个主机名的 IP 地址。主机别名主要是为了通过给一些复杂的主机名提供 一个便于记忆的简单的别名。
- 如果 Type = MX,则 Name 为一个邮件服务器的别名,Value 为邮件服务器的规范主机名。它的作用和 CNAME 是一 样的,都是为了解决规范主机名不利于记忆的缺点。
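The record types above can be queried directly with Node's built-in dns module; the domain names here are only examples.

```
const dns = require('dns').promises;

async function lookupRecords(domain) {
  const [a, ns, mx] = await Promise.all([
    dns.resolve4(domain),   // A records: host name -> IPv4 addresses
    dns.resolveNs(domain),  // NS records: name servers responsible for the zone
    dns.resolveMx(domain),  // MX records: mail servers, with priorities
  ]);
  console.log({ a, ns, mx });
  // CNAME only exists for aliases, so this may reject for an apex domain.
  const cname = await dns.resolveCname('www.' + domain).catch(() => null);
  console.log({ cname });
}

lookupRecords('example.com');
```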

 五、网络模型

 1. OSI七层模型

`ISO`为了更好的使网络应用更为普及,推出了 `OSI`参考模型。

# (1)应用层

`OSI`参考模型中最靠近用户的一层,是为计算机用户提供应用接口,也为用户直接提供各种网络服务。我们常见应用层的网络服务协议有:`HTTP`,`HTTPS`,`FTP`,`POP3`、`SMTP`等。

- 在客户端与服务器中经常会有数据的请求,这个时候就是会用到 `http(hyper text transfer protocol)(超文本传输协议)`或者 `https`在后端设计数据接口时,我们常常使用到这个协议。
- `FTP`是文件传输协议,在开发过程中,个人并没有涉及到,但是我想,在一些资源网站,比如 **百度网盘 迅雷**应该是基于此协议的。
- `SMTP`是 `simple mail transfer protocol(简单邮件传输协议)`。在一个项目中,在用户邮箱验证码登录的功能时,使用到了这个协议。

# (2)表示层

表示层提供各种用于应用层数据的编码和转换功能,确保一个系统的应用层发送的数据能被另一个系统的应用层识别。如果必要,该层可提供一种标准表示形式,用于将计算机内部的多种数据格式转换成通信中采用的标准表示形式。数据压缩和加密也是表示层可提供的转换功能之一。

在项目开发中,为了方便数据传输,可以使用 `base64`对数据进行编解码。如果按功能来划分,`base64`应该是工作在表示层。

# (3)会话层

会话层就是负责建立、管理和终止表示层实体之间的通信会话。该层的通信由不同设备中的应用程序之间的服务请求和响应组成。

# (4)传输层

传输层建立了主机端到端的链接,传输层的作用是为上层协议提供端到端的可靠和透明的数据传输服务,包括处理差错控制和流量控制等问题。该层向高层屏蔽了下层数据通信的细节,使高层用户看到的只是在两个传输实体间的一条主机到主机的、可由用户控制和设定的、可靠的数据通路。我们通常说的,`TCP` `UDP`就是在这一层。端口号既是这里的“端”。

# (5)网络层

本层通过 `IP`寻址来建立两个节点之间的连接,为源端的运输层送来的分组,选择合适的路由和交换节点,正确无误地按照地址传送给目的端的运输层。就是通常说的 `IP`层。这一层就是我们经常说的 `IP`协议层。`IP`协议是 `Internet`的基础。我们可以这样理解,网络层规定了数据包的传输路线,而传输层则规定了数据包的传输方式。

# (6)数据链路层

将比特组合成字节,再将字节组合成帧,使用链路层地址 (以太网使用MAC地址)来访问介质,并进行差错检测。

网络层与数据链路层的对比,通过上面的描述,我们或许可以这样理解,网络层是规划了数据包的传输路线,而数据链路层就是传输路线。不过,在数据链路层上还增加了差错控制的功能。

# (7)物理层

实际最终信号的传输是通过物理层实现的。通过物理介质传输比特流。规定了电平、速度和电缆针脚。常用设备有(各种物理设备)集线器、中继器、调制解调器、网线、双绞线、同轴电缆。这些都是物理层的传输介质。

**OSI七层模型通信特点:对等通信**

对等通信,为了使数据分组从源传送到目的地,源端OSI模型的每一层都必须与目的端的对等层进行通信,这种通信方式称为对等层通信。在每一层通信过程中,使用本层自己协议进行通信。

 2. TCP/IP五层协议

`TCP/IP`五层协议和 `OSI`的七层协议对应关系如下:

![img](https://bingjs.com:8008/img/wl/wl9.png)

- **应用层 (application layer)**:直接为应用进程提供服务。应用层协议定义的是应用进程间通讯和交互的规则,不同的应用有着不同的应用层协议,如 HTTP协议(万维网服务)、FTP协议(文件传输)、SMTP协议(电子邮件)、DNS(域名查询)等。
- **传输层 (transport layer)**:有时也译为运输层,它负责为两台主机中的进程提供通信服务。该层主要有以下两种协议:
- - **传输控制协议 (Transmission Control Protocol,TCP)**:提供面向连接的、可靠的数据传输服务,数据传输的基本单位是报文段(segment);
  - **用户数据报协议 (User Datagram Protocol,UDP)**:提供无连接的、尽最大努力的数据传输服务,但不保证数据传输的可靠性,数据传输的基本单位是用户数据报。
- **网络层 (internet layer)**:有时也译为网际层,它负责为两台主机提供通信服务,并通过选择合适的路由将数据传递到目标主机。
- **数据链路层 (data link layer)**:负责将网络层交下来的 IP 数据报封装成帧,并在链路的两个相邻节点间传送帧,每一帧都包含数据和必要的控制信息(如同步信息、地址信息、差错控制等)。
- **物理层 (physical Layer)**:确保数据可以在各种物理媒介上进行传输,为数据的传输提供可靠的环境。

从上图中可以看出,`TCP/IP`模型比 `OSI`模型更加简洁,它把 `应用层/表示层/会话层`全部整合为了 `应用层`。

在每一层都工作着不同的设备,比如我们常用的交换机就工作在数据链路层的,一般的路由器是工作在网络层的。

在每一层实现的协议也各不相同,即每一层提供的服务也不同。

Similarly, the `TCP/IP` five-layer model also communicates between peer layers (peer-to-peer communication).

6. TCP and UDP

 1. The concept and characteristics of TCP and UDP

Both TCP and UDP are transport layer protocols, and they both belong to the TCP/IP protocol family:

**(1)UDP**

The full name of UDP is **User Datagram Protocol**. In the network, it is used to process data packets like the TCP protocol. It is a connectionless protocol. In the OSI model, at the transport layer, it is at the upper layer of the IP protocol. UDP has the disadvantages of not providing data packet grouping, assembling and sorting of data packets, that is to say, after the message is sent, it is impossible to know whether it has arrived safely and completely.

Its characteristics are as follows:

**1) Connectionless oriented**

First of all, unlike TCP, UDP does not need a three-way handshake to set up a connection before sending data: if it wants to send, it just sends. It is merely a carrier of application messages and performs no splitting or reassembling of the data.

Specifically:

- At the sending end, the application layer passes the data down to UDP at the transport layer; UDP only adds a UDP header to identify the protocol and then hands the datagram to the network layer.
- At the receiving end, the network layer passes the data up to the transport layer; UDP only removes the IP header and hands the data to the application layer, without any reassembly.

**2) There are unicast, multicast and broadcast functions**

UDP not only supports one-to-one transmission mode, but also supports one-to-many, many-to-many, and many-to-one modes, that is to say, UDP provides unicast, multicast, and broadcast functions.

**3) Packet-oriented**

The sender's UDP sends the message to the application program, and after adding the header, it is delivered to the IP layer. UDP neither merges nor splits the packets delivered by the application layer, but preserves the boundaries of these packets. Therefore, the application must choose the appropriate size of the message

**4) Unreliability**

First of all, the unreliability is reflected in the fact that there is no connection. The communication does not need to establish a connection, and it can be sent as soon as it is wanted. Such a situation is definitely unreliable.

And it will transmit whatever data is received, and it will not back up the data, and it will not care whether the other party has received the data correctly when sending the data.

再者网络环境时好时坏,但是 UDP 因为没有拥塞控制,一直会以恒定的速度发送数据。即使网络条件不好,也不会对发送速率进行调整。这样实现的弊端就是在网络条件不好的情况下可能会导致丢包,但是优点也很明显,在某些实时性要求高的场景(比如电话会议)就需要使用 UDP 而不是 TCP。

**5)头部开销小,传输数据报文时是很高效的。**

![img](https://bingjs.com:8008/img/wl/wl11.png)

UDP 头部包含了以下几个数据:

- 两个十六位的端口号,分别为源端口(可选字段)和目标端口
- 整个数据报文的长度
- 整个数据报文的检验和(IPv4 可选字段),该字段用于发现头部信息和数据中的错误

因此 UDP 的头部开销小,只有8字节,相比 TCP 的至少20字节要少得多,在传输数据报文时是很高效的。
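The connectionless, message-oriented behaviour described above can be seen with Node's dgram module: there is no handshake, each send() is one independent datagram, and delivery is never acknowledged (the port number is arbitrary).

```
const dgram = require('dgram');

// Receiver: each message arrives as a whole datagram with its boundaries preserved.
const server = dgram.createSocket('udp4');
server.on('message', (msg, rinfo) => {
  console.log(`got "${msg}" from ${rinfo.address}:${rinfo.port}`);
});
server.bind(41234);

// Sender: no connection setup, just fire the datagram and hope it arrives.
const client = dgram.createSocket('udp4');
client.send('hello udp', 41234, '127.0.0.1', () => {
  client.close();
});
```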

**(2)TCP**

TCP的全称是传输控制协议(Transmission Control Protocol),它是一种面向连接的、可靠的、基于字节流的传输层通信协议(这里的“流”指的是不间断的数据序列)。

它有以下几个特点:

**1)面向连接**

面向连接,是指发送数据之前必须在两端建立连接。建立连接的方法是“三次握手”,这样能建立可靠的连接。建立连接,是为数据的可靠传输打下了基础。

**2)仅支持单播传输**

每条TCP传输连接只能有两个端点,只能进行点对点的数据传输,不支持多播和广播传输方式。

**3)面向字节流**

TCP不像UDP一样那样一个个报文独立地传输,而是在不保留报文边界的情况下以字节流方式进行传输。

**4)可靠传输**

对于可靠传输,判断丢包、误码靠的是TCP的段编号以及确认号。TCP为了保证报文传输的可靠,就给每个包一个序号,同时序号也保证了传送到接收端实体的包的按序接收。然后接收端实体对已成功收到的字节发回一个相应的确认(ACK);如果发送端实体在合理的往返时延(RTT)内未收到确认,那么对应的数据(假设丢失了)将会被重传。

**5)提供拥塞控制**

当网络出现拥塞的时候,TCP能够减小向网络注入数据的速率和数量,缓解拥塞。

**6)提供全双工通信**

TCP允许通信双方的应用程序在任何时候都能发送数据,因为TCP连接的两端都设有缓存,用来临时存放双向通信的数据。当然,TCP可以立即发送一个数据段,也可以缓存一段时间以便一次发送更多的数据段(最大的数据段大小取决于MSS)

 2. TCP和UDP的区别

|        | UDP                   | TCP                        |
| ------ | --------------------- | -------------------------- |
| 是否连接   | 无连接                   | 面向连接                       |
| 是否可靠   | 不可靠传输,不使用流量控制和拥塞控制    | 可靠传输(数据顺序和正确性),使用流量控制和拥塞控制 |
| 连接对象个数 | 支持一对一,一对多,多对一和多对多交互通信 | 只能是一对一通信                   |
| 传输方式   | 面向报文                  | 面向字节流                      |
| 首部开销   | 首部开销小,仅8字节            | 首部最小20字节,最大60字节            |
| 适用场景   | 适用于实时应用,例如视频会议、直播     | 适用于要求可靠传输的应用,例如文件传输        |

 3. TCP和UDP的使用场景

- **TCP应用场景**: 效率要求相对低,但对准确性要求相对高的场景。因为传输中需要对数据确认、重发、排序等操作,相比之下效率没有UDP高。例如:文件传输(准确高要求高、但是速度可以相对慢)、接受邮件、远程登录。
- **UDP应用场景**: 效率要求相对高,对准确性要求相对低的场景。例如:QQ聊天、在线视频、网络语音电话(即时通讯,速度要求高,但是出现偶尔断续不是太大问题,并且此处完全不可以使用重发机制)、广播通信(广播、多播)。

 4. UDP协议为什么不可靠?

UDP在传输数据之前不需要先建立连接,远地主机的运输层在接收到UDP报文后,不需要确认,提供不可靠交付。总结就以下四点:

- 不保证消息交付:不确认,不重传,无超时
- 不保证交付顺序:不设置包序号,不重排,不会发生队首阻塞
- 不跟踪连接状态:不必建立连接或重启状态机
- 不进行拥塞控制:不内置客户端或网络反馈机制

 5. TCP的重传机制

由于TCP的下层网络(网络层)可能出现**丢失、重复或失序**的情况,TCP协议提供可靠数据传输服务。为保证数据传输的正确性,TCP会重传其认为已丢失(包括报文中的比特错误)的包。TCP使用两套独立的机制来完成重传,一是**基于时间**,二是**基于确认信息**。

TCP在发送一个数据之后,就开启一个定时器,若是在这个时间内没有收到发送数据的ACK确认报文,则对该报文进行重传,在达到一定次数还没有成功时放弃并发送一个复位信号。

 6. TCP的拥塞控制机制

TCP的拥塞控制机制主要是以下四种机制:

- 慢启动(慢开始)
- 拥塞避免
- 快速重传
- 快速恢复

**(1)慢启动(慢开始)**

- Set cwnd = 1 when transmission starts (cwnd is the congestion window)
- Idea: do not inject a large amount of data into the network at first; probe how congested the network is and grow the congestion window from small to large.
- To prevent congestion caused by cwnd growing too large, a slow start threshold (the ssthresh state variable) is set:
  - when cwnd < ssthresh, use the slow start algorithm
  - when cwnd = ssthresh, either slow start or congestion avoidance may be used
  - when cwnd > ssthresh, use the congestion avoidance algorithm

**(2) Congestion Avoidance**

- Congestion avoidance does not completely avoid congestion; it means that during this phase the congestion window grows linearly, which makes the network less likely to become congested.
- Idea: let the congestion window cwnd grow slowly, adding one to the sender's congestion window for every round-trip time (RTT) that passes.
- Whether in the slow start phase or the congestion avoidance phase, as soon as the sender judges that the network is congested, it sets the slow start threshold to half the sending window size at the moment congestion occurred, then resets the congestion window to 1 and runs the slow start algorithm again, as the picture shows:

![img](https://bingjs.com:8008/img/wl/wl12.png)

- Among them, the basis for judging that the network is congested is that no acknowledgment has been received. Although no acknowledgment may be due to packet loss due to other reasons, it cannot be determined, so it is treated as congestion.

**(3) Fast retransmission**

- Fast retransmission requires the receiver to send a repeated confirmation immediately after receiving an out-of-sequence segment (in order to let the sender know early that a segment has not reached the other party). As long as the sender receives three consecutive repeated acknowledgments, it will immediately retransmit the message segment that the other party has not yet received, without having to continue to wait for the set retransmission timer to expire.
- Since there is no need to wait for the set retransmission timer to expire, the unconfirmed segment can be retransmitted as soon as possible, which can improve the throughput of the entire network

**(4) Fast recovery**

- When the sender receives three duplicate acknowledgments in a row, it performs "multiplicative decrease" and halves the ssthresh threshold, but it does not then run the slow start algorithm.
- The reasoning is that if the network were badly congested the sender would not receive several duplicate acknowledgments at all, so the network is probably not heavily congested. Therefore, instead of slow start, cwnd is set to the value of ssthresh and the congestion avoidance algorithm takes over. A toy simulation of these rules follows.
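A toy simulation (the numbers are chosen arbitrarily) of how cwnd evolves under slow start, congestion avoidance, and the fast-recovery style "halve ssthresh" reaction; it only mirrors the rules listed above, not a real TCP stack.

```
// Toy congestion-window model: exponential growth below ssthresh, linear above it.
let cwnd = 1;
let ssthresh = 16;

function onRttElapsed(lossDetected) {
  if (lossDetected) {
    ssthresh = Math.max(Math.floor(cwnd / 2), 2); // multiplicative decrease
    cwnd = ssthresh;                              // fast recovery: skip slow start
    return;
  }
  if (cwnd < ssthresh) cwnd *= 2;                 // slow start: double each RTT
  else cwnd += 1;                                 // congestion avoidance: +1 each RTT
}

for (let rtt = 1; rtt <= 12; rtt++) {
  onRttElapsed(rtt === 8);                        // pretend duplicate ACKs arrive at RTT 8
  console.log(`RTT ${rtt}: cwnd=${cwnd} ssthresh=${ssthresh}`);
}
```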

 7. TCP flow control mechanism

Generally speaking, flow control is to prevent the sender from sending data too fast, so that the receiver can receive it in time. TCP uses a variable-sized **sliding window** for flow control, and the unit of the window size is bytes. The window size mentioned here is actually the size of the data transmitted each time.

- When a connection is established, each end of the connection allocates a buffer to hold the incoming data, and sends the size of the buffer to the other end.
- When data arrives, the receiver sends an acknowledgment that contains its remaining buffer size. (The remaining buffer space is called the window, and notifications carrying this size are called window advertisements; the receiver includes one in every acknowledgment it sends.)
- If the receiving application can read data as fast as it arrives, the receiver advertises a positive (non-zero) window with every acknowledgment.
- If the sender operates faster than the receiver, the received data will eventually fill the receiver's buffer, causing the receiver to advertise a zero window. When the sender receives a zero-window advertisement, it must stop sending until the receiver re-advertises a positive window.

 8. TCP's reliable transmission mechanism

The reliable transmission mechanism of TCP is based on the continuous ARQ protocol and the sliding window protocol.

The TCP sender maintains a sending window. Segments before the window have already been sent and acknowledged; segments after the window sit in the buffer and are not yet allowed to be sent. When the sender transmits, it sends the segments inside the window in order and starts a timer, which can be thought of as tracking the earliest segment that has been sent but not yet acknowledged. If an acknowledgment for a segment arrives before the timer expires, the window slides forward so that its left edge moves just past the last acknowledged segment; the timer is then restarted for the next unacknowledged segment, or switched off if there is none. If the timer expires, the sender retransmits all segments that were sent but not acknowledged and doubles the timeout interval. When the sender receives three duplicate acknowledgments from the receiver, that is a strong hint that the segment following the acknowledged one has been lost, so the sender triggers the fast retransmission mechanism: it resends the sent-but-unacknowledged segments immediately, before the current timer expires.

The receiver uses a cumulative acknowledgment mechanism. For all message segments that arrive in sequence, the receiver returns an affirmative answer of a message segment. If an out-of-order segment is received, the receiver discards it and returns an affirmative answer to the most recent segment that arrived in order. The use of cumulative confirmation ensures that the message segments before the returned confirmation number have arrived in order, so the sending window can be moved to the back of the confirmed message segment.

The size of the sending window is variable, and it is determined by the remaining size of the receiving window and the degree of congestion in the network. TCP controls the sending rate of message segments by controlling the length of the sending window.

However, TCP is not exactly the pure sliding window (continuous ARQ) protocol: many TCP implementations cache out-of-order segments, and when retransmission happens only a single segment is resent, so TCP's reliable transmission mechanism is more like a hybrid of the sliding window protocol and the selective repeat protocol.

 9. TCP's three-way handshake and four-way wave

# (1) Three-way handshake

![img](https://bingjs.com:8008/img/wl/wl13.png)

三次握手(Three-way Handshake)其实就是指建立一个TCP连接时,需要客户端和服务器总共发送3个包。进行三次握手的主要作用就是为了确认双方的接收能力和发送能力是否正常、指定自己的初始化序列号为后面的可靠性传送做准备。实质上其实就是连接服务器指定端口,建立TCP连接,并同步连接双方的序列号和确认号,交换TCP窗口大小信息。

刚开始客户端处于 Closed 的状态,服务端处于 Listen 状态。

- 第一次握手:客户端给服务端发一个 SYN 报文,并指明客户端的初始化序列号 ISN,此时客户端处于 SYN_SEND 状态。

> 首部的同步位SYN=1,初始序号seq=x,SYN=1的报文段不能携带数据,但要消耗掉一个序号。

- 第二次握手:服务器收到客户端的 SYN 报文之后,会以自己的 SYN 报文作为应答,并且也是指定了自己的初始化序列号 ISN。同时会把客户端的 ISN + 1 作为ACK 的值,表示自己已经收到了客户端的 SYN,此时服务器处于 SYN_RCVD 的状态。

> 在确认报文段中SYN=1,ACK=1,确认号ack=x+1,初始序号seq=y

- 第三次握手:客户端收到 SYN 报文之后,会发送一个 ACK 报文,当然,也是一样把服务器的 ISN + 1 作为 ACK 的值,表示已经收到了服务端的 SYN 报文,此时客户端处于 ESTABLISHED 状态。服务器收到 ACK 报文之后,也处于 ESTABLISHED 状态,此时,双方已建立起了连接。

> 确认报文段ACK=1,确认号ack=y+1,序号seq=x+1(初始为seq=x,第二个报文段所以要+1),ACK报文段可以携带数据,不携带数据则不消耗序号。

**那为什么要三次握手呢?两次不行吗?**

- 为了确认双方的接收能力和发送能力都正常
- 如果是用两次握手,则会出现下面这种情况:

> 如客户端发出连接请求,但因连接请求报文丢失而未收到确认,于是客户端再重传一次连接请求。后来收到了确认,建立了连接。数据传输完毕后,就释放了连接,客户端共发出了两个连接请求报文段,其中第一个丢失,第二个到达了服务端,但是第一个丢失的报文段只是在某些网络结点长时间滞留了,延误到连接释放以后的某个时间才到达服务端,此时服务端误认为客户端又发出一次新的连接请求,于是就向客户端发出确认报文段,同意建立连接,不采用三次握手,只要服务端发出确认,就建立新的连接了,此时客户端忽略服务端发来的确认,也不发送数据,则服务端一致等待客户端发送数据,浪费资源。

**简单来说就是以下三步:**

- **第一次握手**:客户端向服务端发送连接请求报文段。该报文段中包含自身的数据通讯初始序号。请求发送后,客户端便进入 SYN-SENT 状态。
- **第二次握手**:服务端收到连接请求报文段后,如果同意连接,则会发送一个应答,该应答中也会包含自身的数据通讯初始序号,发送完成后便进入 SYN-RECEIVED 状态。
- **第三次握手**:当客户端收到连接同意的应答后,还要向服务端发送一个确认报文。客户端发完这个报文段后便进入 ESTABLISHED 状态,服务端收到这个应答后也进入 ESTABLISHED 状态,此时连接建立成功。

TCP 三次握手的建立连接的过程就是相互确认初始序号的过程,告诉对方,什么样序号的报文段能够被正确接收。 第三次握手的作用是客户端对服务器端的初始序号的确认。如果只使用两次握手,那么服务器就没有办法知道自己的序号是否 已被确认。同时这样也是为了防止失效的请求报文段被服务器接收,而出现错误的情况。

# (2)四次挥手

![img](https://bingjs.com:8008/img/wl/wl14.png)

刚开始双方都处于 ESTABLISHED 状态,假如是客户端先发起关闭请求。四次挥手的过程如下:

- 第一次挥手: 客户端会发送一个 FIN 报文,报文中会指定一个序列号。此时客户端处于 FIN_WAIT1 状态。

> 即发出连接释放报文段(FIN=1,序号seq=u),并停止再发送数据,主动关闭TCP连接,进入FIN_WAIT1(终止等待1)状态,等待服务端的确认。

- 第二次挥手:服务端收到 FIN 之后,会发送 ACK 报文,且把客户端的序列号值 +1 作为 ACK 报文的序列号值,表明已经收到客户端的报文了,此时服务端处于 CLOSE_WAIT 状态。

> 即服务端收到连接释放报文段后即发出确认报文段(ACK=1,确认号ack=u+1,序号seq=v),服务端进入CLOSE_WAIT(关闭等待)状态,此时的TCP处于半关闭状态,客户端到服务端的连接释放。客户端收到服务端的确认后,进入FIN_WAIT2(终止等待2)状态,等待服务端发出的连接释放报文段。

- 第三次挥手:如果服务端也想断开连接了,和客户端的第一次挥手一样,发给 FIN 报文,且指定一个序列号。此时服务端处于 LAST_ACK 的状态。

> 即服务端没有要向客户端发出的数据,服务端发出连接释放报文段(FIN=1,ACK=1,序号seq=w,确认号ack=u+1),服务端进入LAST_ACK(最后确认)状态,等待客户端的确认。

- 第四次挥手:客户端收到 FIN 之后,一样发送一个 ACK 报文作为应答,且把服务端的序列号值 +1 作为自己 ACK 报文的序列号值,此时客户端处于 TIME_WAIT 状态。需要过一阵子以确保服务端收到自己的 ACK 报文之后才会进入 CLOSED 状态,服务端收到 ACK 报文之后,就处于关闭连接了,处于 CLOSED 状态。

> 即客户端收到服务端的连接释放报文段后,对此发出确认报文段(ACK=1,seq=u+1,ack=w+1),客户端进入TIME_WAIT(时间等待)状态。此时TCP未释放掉,需要经过时间等待计时器设置的时间2MSL后,客户端才进入CLOSED状态。

那为什么需要四次挥手呢?

> 因为当服务端收到客户端的SYN连接请求报文后,可以直接发送SYN+ACK报文。其中ACK报文是用来应答的,SYN报文是用来同步的。但是关闭连接时,当服务端收到FIN报文时,很可能并不会立即关闭SOCKET,所以只能先回复一个ACK报文,告诉客户端,“你发的FIN报文我收到了”。只有等到我服务端所有的报文都发送完了,我才能发送FIN报文,因此不能一起发送,故需要四次挥手。

简单来说就是以下四步:

- **第一次挥手**:若客户端认为数据发送完成,则它需要向服务端发送连接释放请求。
- **第二次挥手**:服务端收到连接释放请求后,会告诉应用层要释放 TCP 链接。然后会发送 ACK 包,并进入 CLOSE_WAIT 状态,此时表明客户端到服务端的连接已经释放,不再接收客户端发的数据了。但是因为 TCP 连接是双向的,所以服务端仍旧可以发送数据给客户端。
- **第三次挥手**:服务端如果此时还有没发完的数据会继续发送,完毕后会向客户端发送连接释放请求,然后服务端便进入 LAST-ACK 状态。
- **第四次挥手**:客户端收到释放请求后,向服务端发送确认应答,此时客户端进入 TIME-WAIT 状态。该状态会持续 2MSL(最大段生存期,指报文段在网络中生存的时间,超时会被抛弃) 时间,若该时间段内没有服务端的重发请求的话,就进入 CLOSED 状态。当服务端收到确认应答后,也便进入 CLOSED 状态。

TCP 使用四次挥手的原因是因为 TCP 的连接是全双工的,所以需要双方分别释放到对方的连接,单独一方的连接释放,只代 表不能再向对方发送数据,连接处于的是半释放的状态。

最后一次挥手中,客户端会等待一段时间再关闭的原因,是为了防止发送给服务器的确认报文段丢失或者出错,从而导致服务器 端不能正常关闭。

 10. What is TCP packet sticking and how is it handled?

By default, a TCP connection enables the delayed-send algorithm (the Nagle algorithm), which buffers data before sending. If several pieces of data are written within a short time, they are buffered and sent together (see socket.bufferSize for the buffer size), which reduces IO overhead and improves performance.

When transferring a file, packet sticking does not need to be handled at all: simply append each arriving chunk to the previous ones. But for multiple separate messages, or data used for other purposes, sticking has to be dealt with.

Consider an example: send is called twice in a row to transmit two pieces of data, data1 and data2. On the receiving side, the following situations are common:

A. data1 is received first, then data2.

B. Part of data1 is received first, then the rest of data1 together with all of data2.

C. All of data1 and part of data2 are received first, then the rest of data2.

D. data1 and data2 are received in full in a single read.

B, C and D are the typical packet-sticking cases. Common solutions include:

- **Wait between sends**: simply wait for a while before the next send. This suits scenarios where interaction is very infrequent; the drawback is obvious, since for busier scenarios the transmission efficiency is far too low, although it requires almost no extra work.
- **Turn off the Nagle algorithm**: in Node.js it can be disabled with the socket.setNoDelay() method, so each send goes out directly without buffering. This suits cases where each write is relatively large (but not file-sized) and the frequency is not particularly high. If the writes are small and very frequent, turning Nagle off is self-defeating. It is also unsuitable when network conditions are poor: even with Nagle off, if the client's network is bad for a while, or the application layer fails to recv the TCP data in time, multiple packets will still be buffered on the receiving side and stick together. (In a stable data-center network this probability is small and can usually be ignored.)
- **Packing/unpacking (framing)**: currently the most common solution in industry. Before each piece of data is sent, some marker data is placed before/after it, and on the receiving side the byte stream is split back into individual messages according to those markers; see the length-prefix sketch below.
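A sketch of the pack/unpack idea: prefix every message with a fixed-size length field so the receiver can cut the byte stream back into messages no matter how TCP coalesces or splits them. The 4-byte length header is an arbitrary choice.

```
// Encode: 4-byte big-endian length header + payload.
function pack(message) {
  const payload = Buffer.from(message);
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Decode: keep a rolling buffer and slice out complete frames as they become available.
let pending = Buffer.alloc(0);
function unpack(chunk, onMessage) {
  pending = Buffer.concat([pending, chunk]);
  while (pending.length >= 4) {
    const len = pending.readUInt32BE(0);
    if (pending.length < 4 + len) break;          // frame not complete yet
    onMessage(pending.slice(4, 4 + len).toString());
    pending = pending.slice(4 + len);
  }
}

// Simulate sticky packets: two frames arrive glued together in one chunk.
const glued = Buffer.concat([pack('data1'), pack('data2')]);
unpack(glued, (msg) => console.log('frame:', msg));  // frame: data1, frame: data2
```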

 11. Why doesn't UDP have the sticky packet problem?

- TCP is a stream-oriented protocol, while UDP is message-oriented. A UDP segment is one message, and the application must read data in units of whole messages; it cannot read an arbitrary number of bytes at a time.
- UDP preserves message boundaries: each UDP packet carries a header with the source address, port and other information, so the receiving end can easily distinguish and process individual messages. UDP transmits each datagram as an independent message, and the receiver can only receive independent messages, one datagram per read. If the buffer used for the read is smaller than the datagram the sender transmitted, the excess data is simply lost; it will not be delivered in a second read.

7. WebSocket

 1. Understanding of WebSocket

WebSocket is a network technology provided by HTML5 for **full-duplex communication** between browsers and servers, and belongs to the application layer protocol. It is based on the TCP transport protocol and reuses the HTTP handshake channel. The browser and the server only need to complete a handshake, and a persistent connection can be created directly between the two, and two-way data transmission can be performed.

The emergence of WebSocket solves the disadvantages of half-duplex communication. Its biggest feature is that **the server can push messages to the client proactively, and the client can also push messages to the server.**

**WebSocket principle**: The client notifies (notify) an event (event) with all recipient IDs (recipients IDs) to the WebSocket server, and the server immediately notifies all active (active) clients after receiving it, only the ID is in Only clients in the receiver ID sequence will handle this event.

**WebSocket features are as follows:**

- Supports two-way communication with better real-time performance
- Can send both text and binary data
- Built on top of TCP, and relatively easy to implement on the server side
- The data format is lightweight, the performance overhead is small, and communication is efficient
- No same-origin restriction: the client can communicate with any server
- The protocol identifier is ws (or wss when encrypted), and the server address is a URL
- Good compatibility with HTTP: the default ports are also 80 and 443, and the handshake phase uses the HTTP protocol, so the handshake is hard to block and can pass through various HTTP proxy servers

**The usage of Websocket is as follows:**

In the client:

```
// Create a WebSocket directly in index.html; the server listens on port 9999
let ws = new WebSocket('ws://localhost:9999');
// Fired once the connection between client and server is established
ws.onopen = function() {
    console.log("Connection open."); 
    ws.send('hello');
};
// Fired when the server sends a message to the client
ws.onmessage = function(res) {
    console.log(res);       // logs the MessageEvent object
    console.log(res.data);  // logs the received message
};
// Fired when the connection between client and server is closed
ws.onclose = function(evt) {
  console.log("Connection closed.");
};
```
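A matching server-side sketch, assuming the third-party `ws` package (`npm install ws`); port 9999 matches the client snippet above.

```
// npm install ws
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 9999 });

wss.on('connection', (socket) => {
  // Fired once per client after the HTTP upgrade handshake completes.
  socket.on('message', (data) => {
    console.log('received:', data.toString());
    socket.send('world');          // the server can push at any time, not only in reply
  });
  socket.on('close', () => console.log('client disconnected'));
});
```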

 2. Realization of instant messaging: the difference between short polling, long polling, SSE and WebSocket?

The purpose of short polling and long polling is to realize an instant communication between the client and the server.

**Basic idea of short polling**: the browser sends an HTTP request to the server at fixed intervals, and the server responds immediately whether or not there is new data. The "instant messaging" achieved this way is really just the browser requesting and the server answering over and over; by having the client poll continuously it simulates receiving real-time changes from the server. The advantage is that it is simple and easy to understand. The disadvantage is that constantly setting up HTTP connections seriously wastes resources on both the server and the client, and as the number of users grows the pressure on the server becomes unreasonable.

**Basic idea of long polling**: the client sends a request to the server, but the server does not respond immediately; it holds the request open and only responds once its data has been updated, or returns after a time limit is reached if there is no new data. After the client-side JavaScript handles the response, it immediately sends another request to re-establish the connection. Compared with short polling, long polling clearly reduces the number of unnecessary HTTP requests and therefore saves resources; its drawback is that the suspended connections themselves also waste resources.

**Basic idea of SSE (Server-Sent Events)**: the server pushes information to the client as a stream. Strictly speaking, the HTTP protocol does not let the server push proactively, but there is a workaround: the server declares that what it is about to send is a stream rather than a one-off data packet, so the connection is not closed and the client keeps waiting for new data from the server (video streaming works on a similar idea). SSE uses this mechanism to push information to the browser over streaming responses. It is based on HTTP and is supported by current browsers other than IE/Edge. Compared with the two polling approaches it does not need to create many HTTP requests, so it is relatively economical. A minimal example follows.
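A minimal SSE sketch: the server keeps the response open and writes `data:` frames, and the browser consumes them with the standard EventSource API (the port and payload are arbitrary).

```
// server.js -- push a message every second over one long-lived response
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  const timer = setInterval(() => res.write(`data: ${Date.now()}\n\n`), 1000);
  req.on('close', () => clearInterval(timer));
}).listen(3000);

// browser side
// const source = new EventSource('http://localhost:3000');
// source.onmessage = (e) => console.log('pushed:', e.data);
```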

WebSocket is a new protocol defined alongside HTML5. Unlike traditional HTTP, it allows the server to push information to the client proactively; its drawback is that the server-side configuration is more involved. WebSocket is a full-duplex protocol, meaning the two parties are equal and can send messages to each other at any time, whereas SSE is one-way: the server can only push to the client, and anything the client wants to send goes out as a separate HTTP request.

**Of the four communication mechanisms above, the first three are based on the HTTP protocol.**

From a performance point of view, these four instant-communication approaches rank as:

**WebSocket > Server-Sent Events (SSE) > Long Polling > Short Polling**

However, if browser compatibility is the concern, the order is exactly the opposite:

**Short Polling > Long Polling > Server-Sent Events (SSE) > WebSocket**

Therefore, it is still necessary to judge which method to use according to the specific usage scenario.

# Code output interview questions

 Foreword:

**Code output questions** are also frequently asked in interviews. A single piece of code can involve many knowledge points and tests a candidate's fundamentals. In front-end interviews, the code output questions that come up most often involve: **asynchronous programming, the event loop, this binding, scope, hoisting, closures, prototypes, inheritance**, and so on. These topics rarely appear alone; the same snippet usually combines several of them. The author therefore groups these questions into four categories. The underlying theory is not explained systematically here; instead, sample interview questions are used to describe the knowledge points involved and how the code executes. Once these samples are understood, most code output questions in front-end interviews can be handled with ease.

1. Asynchronous & event loop

 1. Code output result

```
const promise = new Promise((resolve, reject) => {
  console.log(1);
  console.log(2);
});
promise.then(() => {
  console.log(3);
});
console.log(4);

```

The output is as follows:

```
1
2
4
```

promise.then is a microtask: it only runs after the current synchronous code has finished **and** the promise's state has changed. Since neither resolve nor reject is ever called here, the promise stays pending forever, so 3 is never printed.

 2. Code output result

const promise1 = new Promise((resolve, reject) => {
  console.log('promise1')
  resolve('resolve1')
})
const promise2 = promise1.then(res => {
  console.log(res)
})
console.log('1', promise1);
console.log('2', promise2);


The output is as follows:

promise1
1 Promise{<resolved>: resolve1}
2 Promise{<pending>}
resolve1

Note that printing promise1 directly shows both its state and its value.

The code execution process is as follows:

1. The script is a macro task, execute these codes in order;
2. First enter Promise, execute the code in the constructor, and print `promise1`;
3. When encountering the `resolve` function, change the state of `promise1` to `resolved`, and save the result;
4. Encounter `promise1.then` this microtask, put it into the microtask queue;
5. `promise2` is a new `Promise` whose status is `pending`;
6. Execute synchronous code 1, and print out that the status of `promise1` is `resolved`;
7. Execute synchronous code 2, and print out that the status of `promise2` is `pending`;
8. After the macrotask finishes, the microtask queue is checked: the microtask registered by `promise1.then` is found, its promise is already `resolved`, so it runs and prints `resolve1`.

 3. Code output result

const promise = new Promise((resolve, reject) => {
  console.log(1);
  setTimeout(() => {
    console.log("timerStart");
    resolve("success");
    console.log("timerEnd");
  }, 0);
  console.log(2);
});
promise.then((res) => {
  console.log(res);
});
console.log(4);


The output is as follows:

1
2
4
timerStart
timerEnd
success

The code execution process is as follows:

- When encountering the Promise constructor, it will execute the content first, and print 1;
- The timer `setTimeout` is encountered; it is a macrotask and is added to the macrotask queue;
- Execution continues and 2 is printed;
- Since the `Promise` is still `pending` at this point, the `promise.then` callback cannot run yet;
- The remaining synchronous code runs and prints 4;
- The microtask queue is now empty, so the next macrotask, the `setTimeout` callback, is executed;
- It first prints `timerStart`; `resolve` then changes the promise's state to `resolved`, saves the result and pushes the `promise.then` callback into the microtask queue, and finally `timerEnd` is printed;
- After this macrotask finishes, the microtask `promise.then` runs and prints the resolved value `success`.

 4. Code output result

Promise.resolve().then(() => {
  console.log('promise1');
  const timer2 = setTimeout(() => {
    console.log('timer2')
  }, 0)
});
const timer1 = setTimeout(() => {
  console.log('timer1')
  Promise.resolve().then(() => {
    console.log('promise2')
  })
}, 0)
console.log('start');

The output is as follows:

start
promise1
timer1
promise2
timer2

The code execution process is as follows:

1. First, `Promise.resolve().then` is a microtask, which is added to the microtask queue
2. Execute timer1, which is a macrotask, and is added to the macrotask queue
3. Continue executing the remaining synchronous code and print `start`;
4. The synchronous code of the first macrotask is now finished, so the microtask `Promise.resolve().then` runs and prints `promise1`;
5. Inside it, `timer2` is encountered; it is a macrotask and is added to the macrotask queue, which now holds two tasks: `timer1` and `timer2`;
6. The first round of microtasks is done, so the second macrotask `timer1` runs and prints `timer1`;
7. It encounters `Promise.resolve().then`, a microtask, which is added to the microtask queue;
8. The microtask queue is processed and `promise2` is printed;
9. Finally the macrotask `timer2` runs and prints `timer2`.

 5. Code output result

const promise = new Promise((resolve, reject) => {
    resolve('success1');
    reject('error');
    resolve('success2');
});
promise.then((res) => {
    console.log('then:', res);
}).catch((err) => {
    console.log('catch:', err);
})

The output is as follows:

then:success1

This question examines the fact that **once a Promise's state has changed, it will not change again**. The state goes from `pending` to `resolved` at the first `resolve('success1')`, so the later `reject` and `resolve` calls have no effect, and the `catch` has nothing to catch.

 6. Code output result

Promise.resolve(1)
  .then(2)
  .then(Promise.resolve(3))
  .then(console.log)

The output is as follows:

1
Promise {<fulfilled>: undefined}

If the argument of Promise.resolve is a primitive value or an object without a then method, Promise.resolve returns a new Promise object in the resolved state, and that argument is passed on to the then callbacks.

then expects its argument to be a function; a non-function argument is treated as then(null), so the result of the previous Promise simply passes through to the next then in the chain.

 7. Code output result

const promise1 = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve('success')
  }, 1000)
})
const promise2 = promise1.then(() => {
  throw new Error('error!!!')
})
console.log('promise1', promise1)
console.log('promise2', promise2)
setTimeout(() => {
  console.log('promise1', promise1)
  console.log('promise2', promise2)
}, 2000)

The output is as follows:

promise1 Promise {<pending>}
promise2 Promise {<pending>}

Uncaught (in promise) Error: error!!!
promise1 Promise {<fulfilled>: "success"}
promise2 Promise {<rejected>: Error: error!!!}

 8. Code output result

Promise.resolve(1)
  .then(res => {
    console.log(res);
    return 2;
  })
  .catch(err => {
    return 3;
  })
  .then(res => {
    console.log(res);
  });

The output is as follows:

1   
2

Promises can be chained: every call to `.then` or `.catch` returns a new promise, which is what makes chaining possible; unlike ordinary method chaining it does not return this.

The output prints 1 and then 2 because after `resolve(1)` the first `then` runs and the `catch` is skipped, so the `res` in the second `then` is actually the return value of the first `then`; `return 2` is wrapped into `resolve(2)`, and the last `then` prints 2.

 9. Code output result

Promise.resolve().then(() => {
  return new Error('error!!!')
}).then(res => {
  console.log("then: ", res)
}).catch(err => {
  console.log("catch: ", err)
})

The output is as follows:

"then: " "Error: error!!!"

Returning any non-promise value wraps it in a promise, so `return new Error('error!!!')` here is effectively `return Promise.resolve(new Error('error!!!'))`; it is therefore handled by `then`, not by `catch`.

 10. Code output result

const promise = Promise.resolve().then(() => {
  return promise;
})
promise.catch(console.err)

The output is as follows:

Uncaught (in promise) TypeError: Chaining cycle detected for promise #<Promise>

This is a classic trap: the value returned from `.then` or `.catch` must not be the promise itself, otherwise a circular chain is created and the error above is thrown.

 11. Code output result

Promise.resolve(1)
  .then(2)
  .then(Promise.resolve(3))
  .then(console.log)

The output is as follows:


1

Despite all the `then` calls in this question, you only need to remember one rule: `.then` and `.catch` expect functions as arguments; passing anything else causes **value pass-through**.

Neither of the first two `then` calls receives a function (one gets a number, the other a Promise object), so pass-through occurs: the value from `resolve(1)` goes straight to the last `then`, which prints 1.

 12. Code output result

Promise.reject('err!!!')
  .then((res) => {
    console.log('success', res)
  }, (err) => {
    console.log('error', err)
  }).catch(err => {
    console.log('catch', err)
  })

The output is as follows:

error err!!!

We know that `.then` takes two callback arguments:

- the first handles a fulfilled Promise;
- the second handles a rejected Promise.

That is, the value of `Promise.resolve('1')` goes to the success callback, and the value of `Promise.reject('2')` goes to the failure callback.

In this question the rejection is caught directly by the second argument of `then`, so it never reaches `catch`, and the output is `error err!!!`.

However, if it is like this:

Promise.resolve()
  .then(function success (res) {
    throw new Error('error!!!')
  }, function fail1 (err) {
    console.log('fail1', err)
  }).catch(function fail2 (err) {
    console.log('fail2', err)
  })

If an error is thrown in the first callback of `then`, it is not caught by the second callback of the same `then`; it is caught by the later `catch` instead.

 13. Code output result

Promise.resolve('1')
  .then(res => {
    console.log(res)
  })
  .finally(() => {
    console.log('finally')
  })
Promise.resolve('2')
  .finally(() => {
    console.log('finally2')
    return '我是finally2返回的值'
  })
  .then(res => {
    console.log('finally2后面的then函数', res)
  })

The output is as follows:

1
finally2
finally
finally2后面的then函数 2

`.finally()` is generally rarely used, just remember the following points:

- The `.finally()` callback runs regardless of the final state of the Promise;
- The `.finally()` callback receives no arguments, so inside it you cannot tell whether the Promise ended up `resolved` or `rejected`;
- By default it passes the previous Promise's result through, but if an exception is thrown inside it, the chain becomes a rejected Promise carrying that exception;
- finally is essentially a special case of the then method

Error trapping for `.finally()`:

Promise.resolve('1')
  .finally(() => {
    console.log('finally1')
    throw new Error('我是finally中抛出的异常')
  })
  .then(res => {
    console.log('finally后面的then函数', res)
  })
  .catch(err => {
    console.log('捕获错误', err)
  })

The output is:

'finally1'
'捕获错误' Error: 我是finally中抛出的异常

 14. Code output result

function runAsync (x) {
    const p = new Promise(r => setTimeout(() => r(x, console.log(x)), 1000))
    return p
}

Promise.all([runAsync(1), runAsync(2), runAsync(3)]).then(res => console.log(res))

The output is as follows:

1
2
3
[1, 2, 3]

First, a Promise is defined to asynchronously execute the function runAsync, which passes in a value x, and then prints out this x after an interval of one second.

Then `Promise.all` runs these functions: after one second 1, 2 and 3 are printed, followed by the array [1, 2, 3]. The three promises run concurrently, all results are delivered together in a single callback, and the order of the results matches the order of the functions in the array.

 15. Code output result

function runAsync (x) {
  const p = new Promise(r => setTimeout(() => r(x, console.log(x)), 1000))
  return p
}
function runReject (x) {
  const p = new Promise((res, rej) => setTimeout(() => rej(`Error: ${x}`, console.log(x)), 1000 * x))
  return p
}
Promise.all([runAsync(1), runReject(4), runAsync(3), runReject(2)])
       .then(res => console.log(res))
       .catch(err => console.log(err))

The output is as follows:

1
3
2
Error: 2
4

As can be seen, `catch` catches the first rejection, which here comes from `runReject(2)` (it rejects after 2 seconds, earlier than `runReject(4)`). If any promise in the group rejects, the result does not go to the first callback of `.then()`; it is handled by the second callback of `.then()` or by `.catch()`.

 16. Code output result

function runAsync (x) {
  const p = new Promise(r => setTimeout(() => r(x, console.log(x)), 1000))
  return p
}
Promise.race([runAsync(1), runAsync(2), runAsync(3)])
  .then(res => console.log('result: ', res))
  .catch(err => console.log(err))

The output is as follows:

1
'result: ' 1
2
3

`then` only captures the first promise to settle (here `runAsync(1)`); the other functions still run and print their values, but their results are not captured by `then`.

 17. Code output result

function runAsync(x) {
  const p = new Promise(r =>
    setTimeout(() => r(x, console.log(x)), 1000)
  );
  return p;
}
function runReject(x) {
  const p = new Promise((res, rej) =>
    setTimeout(() => rej(`Error: ${x}`, console.log(x)), 1000 * x)
  );
  return p;
}
Promise.race([runReject(0), runAsync(1), runAsync(2), runAsync(3)])
  .then(res => console.log("result: ", res))
  .catch(err => console.log(err));

The output is as follows:

0
Error: 0
1
2
3

As can be seen, although `catch` captures the first rejection, the remaining asynchronous tasks still run to completion; their results are simply not captured again.

Note: if any of the asynchronous tasks passed to `all` or `race` throws or rejects, only the first error is caught (by the second argument of `then` or by a later `catch`), but this does not stop the other asynchronous tasks in the array from running.

 18. Code output result

async function async1() {
  console.log("async1 start");
  await async2();
  console.log("async1 end");
}
async function async2() {
  console.log("async2");
}
async1();
console.log('start')

The output is as follows:

async1 start
async2
start
async1 end

The code execution process is as follows:

1. First the synchronous code inside the function runs and prints `async1 start`; then `await` is reached, which suspends the rest of `async1`, so the synchronous code inside `async2` runs and prints `async2`, and execution leaves `async1`;
2. The synchronous code outside then runs and prints `start`;
3. Once the synchronous code of the current macrotask has finished, the microtask that resumes `async1` runs and prints `async1 end`.

It can be understood here that the statement after await is equivalent to being placed in new Promise, and the next line and subsequent statements are equivalent to being placed in Promise.then.
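
As a rough mental model (not an exact transformation), the `async`/`await` example above behaves like this hand-written promise chain; the `*Like` names are made up for the sketch:

```
// Roughly equivalent rewrite of async1/async2 from the question above.
function async2Like() {
  console.log("async2");
  return Promise.resolve();          // an async function always returns a promise
}

function async1Like() {
  console.log("async1 start");
  // the code after `await` runs as a microtask once async2Like() settles
  return Promise.resolve(async2Like()).then(() => {
    console.log("async1 end");
  });
}

async1Like();
console.log('start');
// prints: async1 start, async2, start, async1 end
```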

 19. Code output result

async function async1() {
  console.log("async1 start");
  await async2();
  console.log("async1 end");
  setTimeout(() => {
    console.log('timer1')
  }, 0)
}
async function async2() {
  setTimeout(() => {
    console.log('timer2')
  }, 0)
  console.log("async2");
}
async1();
setTimeout(() => {
  console.log('timer3')
}, 0)
console.log("start")

The output is as follows:

async1 start
async2
start
async1 end
timer2
timer3
timer1

The code execution process is as follows:

1. First enter `async1`, print out `async1 start`;
2. Then encounter `async2`, enter `async2`, encounter timer `timer2`, join the macro task queue, and then print `async2`;
3. `await` suspends the rest of `async1`, so execution returns to the outer code: the timer `timer3` is added to the macrotask queue and `start` is printed;
4. The code after `await` then runs as a microtask: `async1 end` is printed and the timer `timer1` is added to the macrotask queue;
5. The macrotask queue now holds three tasks in the order `timer2`, `timer3`, `timer1`; there are no microtasks left, so the timers run first-in-first-out.

 20. Code output result

async function async1 () {
  console.log('async1 start');
  await new Promise(resolve => {
    console.log('promise1')
  })
  console.log('async1 success');
  return 'async1 end'
}
console.log('script start')
async1().then(res => console.log(res))
console.log('script end')

The output is as follows:

script start
async1 start
promise1
script end

Note that in `async1` the Promise after `await` never calls resolve, so its state stays `pending`; the code after `await` therefore never runs, and neither does the `.then` attached to `async1()`.

 21. Code output result

async function async1 () {
  console.log('async1 start');
  await new Promise(resolve => {
    console.log('promise1')
    resolve('promise1 resolve')
  }).then(res => console.log(res))
  console.log('async1 success');
  return 'async1 end'
}
console.log('script start')
async1().then(res => console.log(res))
console.log('script end')

Here is a modification of the above question, adding resolve.

The output is as follows:

script start
async1 start
promise1
script end
promise1 resolve
async1 success
async1 end

22. Code output result

async function async1() {
  console.log("async1 start");
  await async2();
  console.log("async1 end");
}

async function async2() {
  console.log("async2");
}

console.log("script start");

setTimeout(function() {
  console.log("setTimeout");
}, 0);

async1();

new Promise(resolve => {
  console.log("promise1");
  resolve();
}).then(function() {
  console.log("promise2");
});
console.log('script end')

The output is as follows:

script start
async1 start
async2
promise1
script end
async1 end
promise2
setTimeout

The code execution process is as follows:

1. The two functions async1 and async2 are defined first but not yet called; the script code runs and prints `script start`;
2. The timer `setTimeout` is encountered; it is a macrotask and is added to the macrotask queue;
3. `async1` is then called and `async1 start` is printed;
4. `await async2()` runs `async2`, which prints `async2`; the code after `await` is suspended and added to the microtask queue;
5. Execution returns to the outer code: the Promise executor runs and prints `promise1`;
6. `resolve` is called, so the `.then` callback is added to the microtask queue; the remaining script code runs and prints `script end`;
7. The microtask queue is then processed: `async1 end` is printed first, followed by `promise2`;
8. After the microtasks are done, the timer in the macrotask queue runs and prints `setTimeout`.

 23. Code output result

async function async1 () {
  await async2();
  console.log('async1');
  return 'async1 success'
}
async function async2 () {
  return new Promise((resolve, reject) => {
    console.log('async2')
    reject('error')
  })
}
async1().then(res => console.log(res))

The output is as follows:

async2
Uncaught (in promise) error

As you can see, if an error is thrown (a rejection occurs) inside an async function and is not handled, execution of that function stops there and the code after the failed `await` does not run.

If you want to execute the code behind the error, you can use catch to catch:

async function async1 () {
  await Promise.reject('error!!!').catch(e => console.log(e))
  console.log('async1');
  return Promise.resolve('async1 success')
}
async1().then(res => console.log(res))
console.log('script start')

Such an output would be:

script start
error!!!
async1
async1 success

 24. Code output result

const first = () => (new Promise((resolve, reject) => {
    console.log(3);
    let p = new Promise((resolve, reject) => {
        console.log(7);
        setTimeout(() => {
            console.log(5);
            resolve(6);
            console.log(p)
        }, 0)
        resolve(1);
    });
    resolve(2);
    p.then((arg) => {
        console.log(arg);
    });
}));
first().then((arg) => {
    console.log(arg);
});
console.log(4);

The output is as follows:

3
7
4
1
2
5
Promise{<resolved>: 1}

The code execution process is as follows:

1. First enter the outer Promise and print 3, then enter the inner Promise and print 7;
2. A timer is encountered and added to the macrotask queue;
3. `resolve(1)` runs inside Promise p: its state becomes resolved with value 1;
4. `resolve(2)` runs inside the Promise returned by first: its state becomes resolved with value 2;
5. `p.then` is encountered and added to the microtask queue; `first().then` is encountered and also added to the microtask queue;
6. The outer synchronous code runs and prints 4;
7. The first macrotask is finished, so the microtask queue runs and prints 1 and 2 in turn;
8. The next macrotask, the timer, then runs and prints 5; since p is already resolved, `resolve(6)` has no effect;
9. Finally `console.log(p)` prints `Promise{<resolved>: 1}`.

 25. Code output result

const async1 = async () => {
  console.log('async1');
  setTimeout(() => {
    console.log('timer1')
  }, 2000)
  await new Promise(resolve => {
    console.log('promise1')
  })
  console.log('async1 end')
  return 'async1 success'
}
console.log('script start');
async1().then(res => console.log(res));
console.log('script end');
Promise.resolve(1)
  .then(2)
  .then(Promise.resolve(3))
  .catch(4)
  .then(res => console.log(res))
setTimeout(() => {
  console.log('timer2')
}, 1000)

The output is as follows:

script start
async1
promise1
script end
1
timer2
timer1

The code execution process is as follows:

1. The synchronous code runs first and prints script start;
2. `async1` is called: it prints async1 and registers the timer timer1 (2000 ms) in the macrotask queue;
3. The awaited Promise prints promise1 but never resolves, so the code after `await` never runs;
4. The remaining synchronous code runs and prints script end;
5. In the next Promise chain, `.then` and `.catch` expect functions; the non-function arguments cause value pass-through, so the value of resolve(1) reaches the last `then` and 1 is printed;
6. The second timer timer2 (1000 ms) is registered in the macrotask queue; timer2 fires after one second and timer1 after two seconds, so timer2 is printed before timer1.

 26. Code output result

const p1 = new Promise((resolve) => {
  setTimeout(() => {
    resolve('resolve3');
    console.log('timer1')
  }, 0)
  resolve('resolve1');
  resolve('resolve2');
}).then(res => {
  console.log(res)  // resolve1
  setTimeout(() => {
    console.log(p1)
  }, 1000)
}).finally(res => {
  console.log('finally', res)
})

The execution result is as follows:

resolve1
finally  undefined
timer1
Promise{<resolved>: undefined}

 27. Code output result

console.log('1');

setTimeout(function() {
    console.log('2');
    process.nextTick(function() {
        console.log('3');
    })
    new Promise(function(resolve) {
        console.log('4');
        resolve();
    }).then(function() {
        console.log('5')
    })
})
process.nextTick(function() {
    console.log('6');
})
new Promise(function(resolve) {
    console.log('7');
    resolve();
}).then(function() {
    console.log('8')
})

setTimeout(function() {
    console.log('9');
    process.nextTick(function() {
        console.log('10');
    })
    new Promise(function(resolve) {
        console.log('11');
        resolve();
    }).then(function() {
        console.log('12')
    })
})

The output is as follows:

1
7
6
8
2
4
3
5
9
11
10
12

**(1) The analysis of the first round of event loop flow is as follows:**

- The overall script enters the main thread as the first macro task, and outputs 1 when encountering `console.log`.
- When `setTimeout` is encountered, its callback function is distributed to the macro task Event Queue. Let's call it `setTimeout1` for the time being.
- When `process.nextTick()` is encountered, its callback function is distributed to the microtask Event Queue. Denote it as `process1`.
- Encounter `Promise`, `new Promise` is executed directly, output 7. `then` is dispatched to the microtask Event Queue. Denote it as `then1`.
- Encountered `setTimeout` again, and its callback function was distributed to the macro task Event Queue, recorded as `setTimeout2`.

| Macro task Event Queue | Micro task Event Queue |
| -------------- | -------------- |
| setTimeout1 | process1 |
| setTimeout2 | then1 |

The above table is the situation of each Event Queue at the end of the first round of event loop macro tasks, and 1 and 7 have been output at this time. Found two microtasks `process1` and `then1`:

- Execute `process1`, output 6.
- Execute `then1`, output 8.

The first round of the event loop is officially over, and the result of this round is to output 1, 7, 6, 8.

**(2) The second round of time loop starts from the ** `setTimeout1`** macro task:**

- First output 2. Next, we encountered `process.nextTick()`, and also distributed it to the microtask Event Queue, which was recorded as `process2`.
- `new Promise` is executed immediately and outputs 4, and `then` is also distributed to the microtask Event Queue, which is recorded as `then2`.

| Macro task Event Queue | Micro task Event Queue |
| -------------- | -------------- |
| setTimeout2 | process2 |
| | then2 |

The second round of event loop macro tasks ends, and it is found that there are two micro tasks of `process2` and `then2` that can be executed:

- output 3.
- output 5.

The second round of event loop ends, and the second round outputs 2, 4, 3, 5.

**(3) The third round of event loop starts, and only setTimeout2 is left at this time, execute. **

- Output 9 directly.
- Distribute `process.nextTick()` to the microtask Event Queue. Denote it as `process3`.
- Execute `new Promise` directly, output 11.
- Distribute `then` to the microtask Event Queue, denoted as `then3`.

| Macro task Event Queue | Micro task Event Queue |
| -------------- | -------------- |
| | process3 |
| | then3 |

The third round of event loop macro task execution ends, and two micro tasks `process3` and `then3` are executed:

- output 10.
- output 12.

The third round of event loop ends, and the third round outputs 9, 11, 10, 12.

The entire code goes through three rounds of the event loop, and the complete output is 1, 7, 6, 8, 2, 4, 3, 5, 9, 11, 10, 12.

 28. Code output result

console.log(1)

setTimeout(() => {
  console.log(2)
})

new Promise(resolve =>  {
  console.log(3)
  resolve(4)
}).then(d => console.log(d))

setTimeout(() => {
  console.log(5)
  new Promise(resolve =>  {
    resolve(6)
  }).then(d => console.log(d))
})

setTimeout(() => {
  console.log(7)
})

console.log(8)

The output is as follows:

1
3
8
4
2
5
6
7

The code execution process is as follows:

1. Execute the script code first and print 1;
2. The first timer is encountered and added to the macrotask queue;
3. The Promise executor runs and prints 3; `resolve(4)` is called, so the `.then` callback is added to the microtask queue;
4. The second timer is encountered and added to the macrotask queue;
5. The third timer is encountered and added to the macrotask queue;
6. The remaining script code runs and prints 8; the first round ends;
7. The microtask queue runs and prints the resolved value of the first Promise: 4;
8. The macrotask queue starts: the first timer runs and prints 2;
9. There is no microtask at this point, so the second timer runs: it prints 5, its inner Promise resolves with 6 and the `.then` callback is added to the microtask queue;
10. The microtask queue runs and prints 6;
11. The last timer in the macrotask queue runs and prints 7.

 29. Code output result

console.log(1);
  
setTimeout(() => {
  console.log(2);
  Promise.resolve().then(() => {
    console.log(3)
  });
});

new Promise((resolve, reject) => {
  console.log(4)
  resolve(5)
}).then((data) => {
  console.log(data);
})

setTimeout(() => {
  console.log(6);
})

console.log(7);

The code output is as follows:

1
4
7
5
2
3
6

The code execution process is as follows:

1. Execute the script code first and print 1;
2. The first timer setTimeout is encountered and added to the macrotask queue;
3. The Promise executor runs its synchronous code and prints 4; `resolve(5)` is called, so the `.then` callback is added to the microtask queue;
4. The second timer setTimeout is encountered and added to the macrotask queue;
5. The remaining script code runs and prints 7; the first round is complete;
6. The microtask queue runs and prints the resolved value: 5;
7. The first timer in the macrotask queue runs: it prints 2, and `Promise.resolve().then()` is added to the microtask queue;
8. After this macrotask, the microtask queue runs and prints 3;
9. The second timer in the macrotask queue then runs and prints 6.

 30. Code output result

Promise.resolve().then(() => {
    console.log('1');
    throw 'Error';
}).then(() => {
    console.log('2');
}).catch(() => {
    console.log('3');
    throw 'Error';
}).then(() => {
    console.log('4');
}).catch(() => {
    console.log('5');
}).then(() => {
    console.log('6');
});

The execution results are as follows:


6

In this question you need to know that, whether you are in a `then` or a `catch` callback, any error thrown with `throw` is caught by the next `catch`; if no error is thrown, execution continues with the subsequent `then`.

 31. Code output result

setTimeout(function () {
  console.log(1);
}, 100);

new Promise(function (resolve) {
  console.log(2);
  resolve();
  console.log(3);
}).then(function () {
  console.log(4);
  new Promise((resove, reject) => {
    console.log(5);
    setTimeout(() =>  {
      console.log(6);
    }, 10);
  })
});
console.log(7);
console.log(8);

The output is:

2
3
7
8
4
5
6
1

The code execution process is as follows:

1. A timer is encountered first and added to the macrotask queue;
2. The Promise executor runs its synchronous code and prints 2; `resolve` is called, so the `.then` callback is added to the microtask queue; the remaining synchronous code prints 3;
3. The script code continues and prints 7 and 8; the first round is complete;
4. The microtask queue runs: it prints 4, then the inner Promise executor prints 5, and its timer (10 ms) is added to the macrotask queue, which now holds two timers;
5. The macrotask queue runs: the first timer was set to 100 ms and the second to 10 ms, so the second timer fires first and prints 6;
6. The microtask queue is empty, so the remaining macrotask runs and prints 1.

After completing this topic, we need to pay special attention to the time of each timer, not all timers are 0.

 2. this

 1. Code output result

function foo() {
  console.log( this.a );
}

function doFoo() {
  foo();
}

var obj = {
  a: 1,
  doFoo: doFoo
};

var a = 2; 
obj.doFoo()

Output result: 2

In JavaScript, `this` is determined when the function is called. Here `foo` is invoked inside `doFoo`, and although `doFoo` is called as `obj.doFoo()`, `foo()` itself is a plain function call, so `this` inside foo points to window and 2 is printed.

 2. Code output result

var a = 10
var obj = {
  a: 20,
  say: () => {
    console.log(this.a)
  }
}
obj.say() 

var anotherObj = { a: 30 } 
obj.say.apply(anotherObj) 

Output result: 10 10

We know that an arrow function does not bind its own `this`; its `this` comes from the enclosing context, so the global value of a, 10, is printed first. Even though `say` is later applied to another object, the arrow function's `this` cannot be changed and still points to the global object, so 10 is printed again.

However, if it is an ordinary function, then there will be a completely different result:

var a = 10  
var obj = {  
  a: 20,  
  say(){
    console.log(this.a)  
  }  
}  
obj.say()   
var anotherObj={a:30}   
obj.say.apply(anotherObj)

Output result: 20 30

At this time, this in the say method will point to the object where it is located, and the value of a in it will be output.

 3. Code output result

function a() {
  console.log(this);
}
a.call(null);

Print result: window object

According to the ECMAScript262 specification: if the object caller passed in as the first parameter is null or undefined, the call method will use the global object (the window object on the browser) as the value of this. Therefore, regardless of passing in null or undefined, its this is the global object window. So, on the browser the answer is to output the window object. Note that in strict mode, null is null and undefined is undefined:

'use strict';

function a() {
    console.log(this);
}
a.call(null); // null
a.call(undefined); // undefined

 4. Code output result

var obj = { 
  name: 'cuggz', 
  fun: function(){ 
     console.log(this.name); 
  } 
}
obj.fun()     // cuggz
new obj.fun() // undefined

 5. Code output result

var obj = {
   say: function() {
     var f1 = () =>  {
       console.log("1111", this);
     }
     f1();
   },
   pro: {
     getPro:() =>  {
        console.log(this);
     }
   }
}
var o = obj.say;
o();
obj.say();
obj.pro.getPro();

Output result:

1111 window object
1111 obj object
window object

**Analysis:**

1. o(): o is called in the global context; f1 is an arrow function, so it does not bind its own `this` but takes the `this` of its enclosing `say` call, which here is the global object, so window is printed;
2. obj.say(): `this` inside `say` points to whoever calls it, so here it points to the obj object;
3. obj.pro.getPro(): an arrow function does not bind `this`, and the object literal pro does not create a scope, so the arrow function's `this` points to the global object window.

 6. Code output result

var myObject = {
    foo: "bar",
    func: function() {
        var self = this;
        console.log(this.foo);  
        console.log(self.foo);  
        (function() {
            console.log(this.foo);  
            console.log(self.foo);  
        }());
    }
};
myObject.func();

Output result: bar bar undefined bar

**Analysis:**

1. First, func is called by myObject, and this points to myObject. And because var self = this; so self points to myObject.
2. This immediate execution anonymous function expression is called by window, and this points to window. The scope of immediately executing the anonymous function is in the scope of myObject.func, the self variable cannot be found in this scope, and the self variable is searched upwards along the scope chain, and the self pointing to the myObject object is found.

 7. Code output problem

window.number = 2;
var obj = {
 number: 3,
 db1: (function(){
   console.log(this);
   this.number *= 4;
   return function(){
     console.log(this);
     this.number *= 5;
   }
 })()
}
var db1 = obj.db1;
db1();
obj.db1();
console.log(obj.number);     // 15
console.log(window.number);  // 40

This topic looks a bit messy, but it actually examines what this points to:

1. The IIFE assigned to db1 runs when the object literal is created; its `this` is the global object, so window.number *= 4 gives 8. Calling db1() then runs the returned function with `this` again pointing to the global object, so window.number *= 5 gives 40;
2. Calling obj.db1() runs the returned function with `this` pointing to obj, so obj.number *= 5 gives 15.

 8. Code output result

var length = 10;
function fn() {
    console.log(this.length);
}
 
var obj = {
  length: 5,
  method: function(fn) {
    fn();
    arguments[0]();
  }
};
 
obj.method(fn, 1);

Output result: 10 2

**Analysis:**

1. When fn() is called the first time, `this` points to the window object, so 10 is printed.
2. arguments[0]() calls fn as a method of the `arguments` object, so `this` points to `arguments`; two arguments were passed to `method`, so `arguments.length` is 2 and 2 is printed.

 9. Code output result

var a = 1;
function printA(){
  console.log(this.a);
}
var obj={
  a:2,
  foo:printA,
  bar:function(){
    printA();
  }
}

obj.foo(); // 2
obj.bar(); // 1
var foo = obj.foo;
foo(); // 1

Output result: 2 1 1

**Analysis:**

1. obj.foo(), the this of foo points to the obj object, so a will output 2;
2. obj.bar(), printA is executed in the bar method, so at this time, the this of printA points to the window, so it will output 1;
3. foo(), foo is executed in the global object, so its this points to window, so it will output 1;

 10. Code output result

var x = 3;
var y = 4;
var obj = {
    x: 1,
    y: 6,
    getX: function() {
        var x = 5;
        return function() {
            return this.x;
        }();
    },
    getY: function() {
        var y = 7;
        return this.y;
    }
}
console.log(obj.getX()) // 3
console.log(obj.getY()) // 6

Output result: 3 6

**Analysis:**

1. We know that this of the anonymous function points to the global object, so this points to window, and 3 will be printed;
2. getY is called by obj, so its this points to the obj object, and 6 will be printed.

 11. Code output result

 var a = 10; 
 var obt = { 
   a: 20, 
   fn: function(){ 
     var a = 30; 
     console.log(this.a)
   } 
 }
 obt.fn();  // 20
 obt.fn.call(); // 10
 (obt.fn)(); // 20

Output result: 20 10 20

**Analysis:**

1. obt.fn(): fn is called by obt, so its `this` points to the obt object and 20 is printed;
2. obt.fn.call(): `call` is given no argument, which is equivalent to passing undefined; when the argument of `call` is undefined or null, `this` falls back to the global object, so 10 is printed;
3. (obt.fn)(): the parentheses only group the expression and change nothing here; it is equivalent to obt.fn(), so 20 is printed.

 12. Code output result

function a(xx){
  this.x = xx;
  return this
};
var x = a(5);
var y = a(6);

console.log(x.x)  // undefined
console.log(y.x)  // 6

Output result: undefined 6

**Analysis:**

1. The key is var x = a(5): the function a is called in the global scope, so `this` inside it points to the window object; this.x = 5 is therefore equivalent to window.x = 5, and `return this` returns window, so the assignment then makes the global x (window.x) the window object itself.
2. var y = a(6) runs the same way: this.x = 6 sets window.x = 6 (so x is now the number 6), and y becomes the window object.
3. console.log(x.x) therefore reads property x of the number 6, which is undefined; console.log(y.x) is window.x, which is 6.

 13. Code output result

function foo(something){
    this.a = something
}

var obj1 = {
    foo: foo
}

var obj2 = {}

obj1.foo(2); 
console.log(obj1.a); // 2

obj1.foo.call(obj2, 3);
console.log(obj2.a); // 3

var bar = new obj1.foo(4)
console.log(obj1.a); // 2
console.log(bar.a); // 4

Output result: 2 3 2 4

**Analysis:**

1. obj1.foo(2) adds property a with value 2 to obj1 (foo is called through obj1, so `this` points to obj1); console.log(obj1.a) then prints 2;
2. obj1.foo.call(obj2, 3) calls foo with `this` pointing to obj2, so obj2.a becomes 3 and 3 is printed;
3. console.log(obj1.a) still prints 2, because the previous call did not touch obj1;
4. Finally, var bar = new obj1.foo(4) examines the priority of `this` binding: new binding has a higher priority than implicit binding, so bar gets its own property a with value 4 while obj1.a remains 2.

 14. Code output result

function foo(something){
    this.a = something
}

var obj1 = {}

var bar = foo.bind(obj1);
bar(2);
console.log(obj1.a); // 2

var baz = new bar(3);
console.log(obj1.a); // 2
console.log(baz.a); // 3

Output result: 2 2 3

This question is similar to the previous one and mainly examines the priority of `this` binding. Just remember this conclusion: **this binding priority: new binding > explicit binding > implicit binding > default binding.**
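
A small sketch illustrating that order, using a hypothetical `Person` constructor; it shows that new binding overrides even a `bind`-ed `this`:

```
function Person(name) {
  this.name = name;
}

const target = { name: 'target' };

// explicit binding
Person.call(target, 'explicit');
console.log(target.name);          // "explicit"

// hard (bind) binding, still a form of explicit binding
const BoundPerson = Person.bind(target);
BoundPerson('bound');
console.log(target.name);          // "bound"

// new binding wins over the bind above:
// `this` inside Person is the newly created object, not `target`
const p = new BoundPerson('new');
console.log(target.name);          // still "bound"
console.log(p.name);               // "new"
```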

 3. Scope & variable promotion & closure

 1. Code output result

(function(){
   var x = y = 1;
})();
var z;

console.log(y); // 1
console.log(z); // undefined
console.log(x); // Uncaught ReferenceError: x is not defined

The key point of this code is: var x = y = 1; In fact, it is executed from right to left, first execute y = 1, because y is not declared with var, so it is a global variable, and then the second step is Assigning y to x means assigning a global variable to a local variable. In the end, x is a local variable and y is a global variable, so printing x is an error.

 2. Code output result

var a, b;
(function () {
   console.log(a);
   console.log(b);
   var a = (b = 3);
   console.log(a);
   console.log(b);   
})()
console.log(a);
console.log(b);

Output result:

undefined
undefined
3
3
undefined
3

This question is similar to the previous one. Inside the IIFE, the local a and the outer b are both still undefined when first logged; b = 3 assigns the outer (global) variable b, and that 3 is then assigned to the local a, so the next two logs print 3. Outside, a was never assigned, so the final logs print undefined and 3.

 3. Code output result

var friendName = 'World';
(function() {
  if (typeof friendName === 'undefined') {
    var friendName = 'Jack';
    console.log('Goodbye ' + friendName);
  } else {
    console.log('Hello ' + friendName);
  }
})();

Output: Goodbye Jack

We know that in JavaScript both function declarations and `var` declarations are hoisted, so the code above is equivalent to:

var friendName = 'World';
(function() {
  var friendName;
  if (typeof friendName === 'undefined') {
    friendName = 'Jack';
    console.log('Goodbye ' + friendName);
  } else {
    console.log('Hello ' + friendName);
  }
})();

In this way, the answer is clear at a glance.

 4. Code output result

function fn1(){
  console.log('fn1')
}
var fn2
 
fn1()
fn2()
 
fn2 = function() {
  console.log('fn2')
}
 
fn2()

Output result:

fn1
Uncaught TypeError: fn2 is not a function
fn2

This again examines variable hoisting. The key is the first fn2() call: at that point fn2 has been declared by var but not yet assigned, so it is undefined and calling it throws "fn2 is not a function".

 5. Code output result

function a() {
    var temp = 10;
    function b() {
        console.log(temp); // 10
    }
    b();
}
a();

function a() {
    var temp = 10;
    b();
}
function b() {
    console.log(temp); // throws Uncaught ReferenceError: temp is not defined
}
a();

Of the two snippets above, the first prints 10 as expected. The key is the second snippet, which throws Uncaught ReferenceError: temp is not defined: b is defined in the global scope, so its scope chain does not include a's local temp, and no temp exists in the global scope.

 6. Code output result

 var a=3;
 function c(){
    alert(a);
 }
 (function(){
  var a=4;
  c();
 })();

Output: the alert shows 3. The scope chain of a variable in JS is determined by the environment in which the function is defined, not where it is called; the calling environment only affects `this`, the passed arguments and so on. Since c was defined in the global scope, alert(a) reads the global a, which is 3.

 7. Code output problem

function fun(n, o) {
  console.log(o)
  return {
    fun: function(m){
      return fun(m, n);
    }
  };
}
var a = fun(0);  a.fun(1);  a.fun(2);  a.fun(3);
var b = fun(0).fun(1).fun(2).fun(3);
var c = fun(0).fun(1);  c.fun(2);  c.fun(3);

Output result:

```
undefined  0  0  0
undefined  0  1  2
undefined  0  1  1

```

This is a closure question. Each call to `fun` returns an object whose `fun` method closes over the parameter `n`. Remember that when a function is called with fewer arguments than it declares, the missing parameters are undefined, so every first `console.log(o)` prints undefined. `a` is the object returned by fun(0), so the `n` it closes over is 0, and a.fun(1), a.fun(2), a.fun(3) each call fun(m, 0) and print 0. For `b`, every `.fun(m)` is chained on the object returned by the previous call, so the closed-over `n` is the previous `m`, giving undefined, 0, 1, 2. For `c`, the object returned by fun(0).fun(1) closes over n = 1, so c.fun(2) and c.fun(3) both print 1.

 8. Code output result

f = function() {return true;};   
g = function() {return false;};   
(function() {   
   if (g() && [] == ![]) {   
      f = function f() {return false;};   
      function g() {return true;}  // the function declaration is hoisted inside the anonymous function, so g() in the if condition returns true
   }   
})();   
console.log(f());

Output result: false

Here, two variables f and g are first defined, and we know that variables can be reassigned. The following is an anonymous self-executing function. The function g() is called in the if condition. Since the function g is redefined in the anonymous function, the externally defined variable g is overwritten. Therefore, the internal function g is called here. method, return true. The first condition passes and the second condition is entered.

The second condition is [] == ![]. First consider ![]: in a boolean context a non-null object reference is truthy, and an empty array is still an object, so [] is truthy and ![] is false. The comparison therefore becomes [] == false; false is converted to the number 0, and the array is converted via its string value '' to the number 0 as well, so the condition [] == 0 holds and the whole condition is true.

Both conditions are true, so the code in the condition will be executed. f is defined without using var, so it is a global variable. Therefore, the external variable f will be accessed through the closure here, and the value will be reassigned. Now the return value of executing the f function has become false. But g will not have this problem, here is g defined in a function, and will not affect the external g function. So the final result is false.
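
The coercion steps can be written out explicitly; a small sketch following the abstract equality rules:

```
// [] == ![]  evaluated step by step
console.log(![]);            // false  (an array is a truthy object)
console.log([] == false);    // true   (the comparison [] == ![] becomes this)
console.log(Number(false));  // 0      (the boolean is converted to a number)
console.log(String([]));     // ''     (the array is converted via toString)
console.log(Number(''));     // 0      (the empty string becomes 0)
console.log([] == 0);        // true   (0 == 0)
```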

 4. Prototype & Inheritance

 1. Code output result

function Person(name) {
    this.name = name
}
var p2 = new Person('king');
console.log(p2.__proto__) //Person.prototype
console.log(p2.__proto__.__proto__) //Object.prototype
console.log(p2.__proto__.__proto__.__proto__) // null
console.log(p2.__proto__.__proto__.__proto__.__proto__)// nothing after null: throws an error
console.log(p2.__proto__.__proto__.__proto__.__proto__.__proto__)// nothing after null: throws an error
console.log(p2.constructor)//Person
console.log(p2.prototype)//undefined: p2 is an instance and has no prototype property
console.log(Person.constructor)//Function
console.log(Person.prototype)//the Person.prototype object with all its properties and methods
console.log(Person.prototype.constructor)//Person
console.log(Person.prototype.__proto__)// Object.prototype
console.log(Person.__proto__) //Function.prototype
console.log(Function.prototype.__proto__)//Object.prototype
console.log(Function.__proto__)//Function.prototype
console.log(Object.__proto__)//Function.prototype
console.log(Object.prototype.__proto__)//null

This question examines the basics of prototypes and the prototype chain; these relationships are worth memorizing.

 2. Code output result

// a
function Foo () {
 getName = function () {
   console.log(1);
 }
 return this;
}
// b
Foo.getName = function () {
 console.log(2);
}
// c
Foo.prototype.getName = function () {
 console.log(3);
}
// d
var getName = function () {
 console.log(4);
}
// e
function getName () {
 console.log(5);
}

Foo.getName();           // 2
getName();               // 4
Foo().getName();         // 1
getName();               // 1 
new Foo.getName();       // 2
new Foo().getName();     // 3
new new Foo().getName(); // 3

Output: 2 4 1 1 2 3 3

**Analysis:**

1. **Foo.getName()**: Foo is a function object, and objects can have properties; (b) defines a getName property on Foo, so 2 is printed;
2. **getName()**: compare (d) and (e): d is a function expression and e is a function declaration; the declaration is hoisted first and then overwritten by the assignment in d, so 4 is printed;
3. **Foo().getName()**: see (a): calling Foo() reassigns the global getName to the function that logs 1, and returns `this`, which points to window, so Foo().getName() is window.getName() and prints 1;
4. **getName()**: the global getName was reassigned in step 3, so 1 is printed again;
5. **new Foo.getName()**: this parses as new (Foo.getName)(): the getName property of Foo is used as a constructor, so 2 is printed and then an instance is created;
6. **new Foo().getName()**: this is equivalent to (new Foo()).getName(): a Foo instance is created first; the instance itself has no getName method, so it is looked up on the prototype chain, and since instance.__proto__ === Foo.prototype, 3 is printed;
7. **new new Foo().getName()**: this is equivalent to new ((new Foo()).getName)(): as in step 6, 3 is printed first, and then a new instance is constructed from that prototype method.

 3. Code output result

var F = function() {};
Object.prototype.a = function() {
  console.log('a');
};
Function.prototype.b = function() {
  console.log('b');
}
var f = new F();
f.a();
f.b();
F.a();
F.b()

Output result:

a
Uncaught TypeError: f.b is not a function
a
b

**Analysis:**

1. f is an instance of F, not of Function, because F (not Function) constructed it; its prototype chain goes through F.prototype to Object.prototype and never reaches Function.prototype. So f.a() prints a, and f.b() throws an error.
2. F is a constructor, i.e. an instance of the constructor Function. Since F instanceof Object === true and F instanceof Function === true, F can access both the Object and the Function prototype chains, so F.a() prints a and F.b() prints b.

 4. Code output result

function Foo(){
    Foo.a = function(){
        console.log(1);
    }
    this.a = function(){
        console.log(2)
    }
}

Foo.prototype.a = function(){
    console.log(3);
}

Foo.a = function(){
    console.log(4);
}

Foo.a();
let obj = new Foo();
obj.a();
Foo.a();

Output result: 4 2 1

**Analysis:**

1. Foo.a(): this calls the static method a on the Foo function. The reassignments inside Foo have not run yet because Foo has not been called, so the static method defined outside runs and prints 4;
2. let obj = new Foo(): calling Foo with new creates an instance; the code inside Foo now runs, overwriting the static Foo.a and defining an own property a on the instance, and the prototype chain is established;
3. obj.a(): the instance now has two candidate a methods, its own property and the one on the prototype; own properties are found first, so the instance method runs and prints 2;
4. Foo.a(): as described in step 2, the static method was overwritten inside Foo, so 1 is printed.

 5. Code output result

function Dog() {
  this.name = 'puppy'
}
Dog.prototype.bark = () => {
  console.log('woof!woof!')
}
const dog = new Dog()
console.log(Dog.prototype.constructor === Dog && dog.constructor === Dog && dog instanceof Dog)

Output result: true

**Analysis:**

Because the constructor is an attribute on the prototype, dog.constructor actually points to Dog.prototype.constructor; the constructor attribute points to the constructor. instanceof actually detects whether the type is on the prototype chain of the instance.

The constructor property lives on the prototype, which is easy to overlook. constructor and instanceof behave differently: constructor is strict and only tells you whether the object's constructor is exactly the given function, while instanceof is looser and returns true as long as the checked type appears anywhere on the prototype chain.
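
A short sketch of that difference along a two-level prototype chain; `Animal` and `Dog` are hypothetical names for the example:

```
function Animal() {}
function Dog() {}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;   // restore the constructor after rewiring

const d = new Dog();
console.log(d.constructor === Dog);     // true  (strict: exactly this constructor)
console.log(d.constructor === Animal);  // false
console.log(d instanceof Dog);          // true  (Dog.prototype is on the chain)
console.log(d instanceof Animal);       // true  (so is Animal.prototype)
```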

 6. Code output result

var A = {n: 4399};
var B =  function(){this.n = 9999};
var C =  function(){var n = 8888};
B.prototype = A;
C.prototype = A;
var b = new B();
var c = new C();
A.n++
console.log(b.n);
console.log(c.n);

Output result: 9999 4400

**Analysis:**

1. console.log(b.n): when looking up b.n, the b object's own properties are checked first; when var b = new B() ran, this.n = 9999 was set on b (this pointed to b), so b has its own n property and 9999 is printed.
2. console.log(c.n): similarly, when var c = new C() ran, the function only created a local variable n, so c has no own n property; the lookup goes up to the prototype (the object A), and since A.n++ made A.n equal to 4400, 4400 is printed.

 7. Code output problem

function A(){
}
function B(a){
  this.a = a;
}
function C(a){
  if(a){
this.a = a;
  }
}
A.prototype.a = 1;
B.prototype.a = 1;
C.prototype.a = 1;
 
console.log(new A().a);
console.log(new B().a);
console.log(new C(2).a);

Output result: 1 undefined 2

**Analysis:**

1. console.log(new A().a): new A() creates an object with no own properties, so the lookup goes to its prototype, where a is 1, so 1 is printed;
2. console.log(new B().a): B is called without an argument, so this.a = a assigns undefined, giving the instance an own property a whose value is undefined (which shadows the prototype's a), so undefined is printed;
3. console.log(new C(2).a): C is called with 2; the if condition is truthy, so this.a = 2 runs and 2 is printed.

 8. Code output problem

function Parent() {
    this.a = 1;
    this.b = [1, 2, this.a];
    this.c = { demo: 5 };
    this.show = function () {
        console.log(this.a , this.b , this.c.demo );
    }
}

function Child() {
    this.a = 2;
    this.change = function () {
        this.b.push(this.a);
        this.a = this.b.length;
        this.c.demo = this.a++;
    }
}

Child.prototype = new Parent();
var parent = new Parent();
var child1 = new Child();
var child2 = new Child();
child1.a = 11;
child2.a = 12;
parent.show();
child1.show();
child2.show();
child1.change();
child2.change();
parent.show();
child1.show();
child2.show();

Output result:

parent.show(); // 1  [1,2,1] 5

child1.show(); // 11 [1,2,1] 5
child2.show(); // 12 [1,2,1] 5

parent.show(); // 1 [1,2,1] 5

child1.show(); // 5 [1,2,1,11,12] 5

child2.show(); // 6 [1,2,1,11,12] 5

This topic is worth reviewing. It involves a lot of knowledge points, such as **this pointing, prototype, prototype chain, class inheritance, data type**, etc.

**Analysis**:

1. parent.show(): the values can be read directly, nothing special;
2. child1.show(): the constructor of `Child` originally pointed to `Child`; the question explicitly sets the prototype of `Child` to an instance of `Parent`. Note that `Child.prototype` points to an *instance* of `Parent`, not to the `Parent` class itself;
3. child2.show(): same as above;
4. parent.show(): `parent` is one instance of `Parent`, while `Child.prototype` points to a different `Parent` instance; the two do not affect each other in heap memory, so the previous operations leave `parent` untouched and the output is unchanged;
5. child1.show() after `child1.change()` has run:
   - this.b.push(this.a): because of the dynamic binding of `this`, `this.b` resolves to the `b` array on `Child.prototype` and `this.a` is `child1`'s own property (11), so `Child.prototype.b` becomes **[1,2,1,11]**;
   - this.a = this.b.length: `this.a` and `this.b` resolve as in the previous line, so `child1.a` becomes 4;
   - this.c.demo = this.a++: `child1` has no own property c, so `this.c` resolves to `Child.prototype.c`; `this.a` is 4, a primitive copied by value, so `Child.prototype.c.demo` becomes 4 and `child1.a` is then incremented to 5;
6. `child2` then executes `change()`: `child2` and `child1` are both instances of `Child`, so their prototype chains point to the same prototype object `Child.prototype` (the same `Parent` instance); every statement in `child2.change()` that touches the prototype therefore also affects the final output of `child1`:
   - this.b.push(this.a): `this.a` is `child2`'s own property (12), so `Child.prototype.b` becomes **[1,2,1,11,12]**;
   - this.a = this.b.length: `child2.a` becomes 5;
   - this.c.demo = this.a++: `child2` has no own property c, so `Child.prototype.c.demo` becomes 5 and `child2.a` is incremented to 6 (5 + 1 = 6).

 9. Code output result

Output result: true

This code implements prototype-chain inheritance: SubType inherits from SuperType by rewriting SubType's prototype object and replacing it with an instance of SuperType. Because SubType's prototype has been replaced, instance.constructor points to SuperType.

1. JavaScript Basics

 1. Handwritten Object.create

Idea: use the incoming object as a prototype

function create(obj) {
  function F() {}
  F.prototype = obj
  return new F()
}
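
A quick usage check of this `create` sketch (the `proto`/`greet` names are made up for the example):

```
const proto = { greet() { return 'hi, ' + this.name; } };
const obj = create(proto);          // obj's prototype is `proto`
obj.name = 'cuggz';

console.log(obj.greet());                            // "hi, cuggz"
console.log(Object.getPrototypeOf(obj) === proto);   // true
```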

 2. Handwritten instanceof method

The instanceof operator is used to determine whether the constructor's prototype property appears anywhere in the object's prototype chain.

Implementation steps:

1. First obtain the prototype of the type
2. Then obtain the prototype of the object
3. Then loop up the chain, checking whether the object's prototype equals the constructor's `prototype`, until the object's prototype is `null` (the prototype chain eventually ends in `null`).

Concrete implementation:

function myInstanceof(left, right) {
  let proto = Object.getPrototypeOf(left), // 获取对象的原型
      prototype = right.prototype; // 获取构造函数的 prototype 对象

  // 判断构造函数的 prototype 对象是否在对象的原型链上
  while (true) {
    if (!proto) return false;
    if (proto === prototype) return true;

    proto = Object.getPrototypeOf(proto);
  }
}

 3. Handwritten new operator

In the process of calling `new`, the following four things happen:

(1) First create a new empty object

(2) Set the prototype, and set the prototype of the object as the prototype object of the function.

(3) Let the this of the function point to this object, execute the code of the constructor (add properties to this new object)

(4) Check the return value of the constructor: if it is a primitive value, return the newly created object; if it is an object (reference type), return that object instead.

function objectFactory() {
  let newObject = null;
  let constructor = Array.prototype.shift.call(arguments);
  let result = null;
  // 判断参数是否是一个函数
  if (typeof constructor !== "function") {
    console.error("type error");
    return;
  }
  // 新建一个空对象,对象的原型为构造函数的 prototype 对象
  newObject = Object.create(constructor.prototype);
  // 将 this 指向新建对象,并执行函数
  result = constructor.apply(newObject, arguments);
  // 判断返回对象
  let flag = result && (typeof result === "object" || typeof result === "function");
  // 判断返回结果
  return flag ? result : newObject;
}
// Usage: pass the constructor first, followed by its arguments
// objectFactory(Constructor, arg1, arg2, ...);
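
For a quick check of the sketch above, here is a usage example with a hypothetical `Person` constructor defined only for illustration:

```
// hypothetical constructor used only to exercise objectFactory
function Person(name, age) {
  this.name = name;
  this.age = age;
}

const p = objectFactory(Person, "Tom", 18);
console.log(p.name, p.age);       // Tom 18
console.log(p instanceof Person); // true
```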

 4. Handwritten Promise

const PENDING = "pending";
const RESOLVED = "resolved";
const REJECTED = "rejected";

function MyPromise(fn) {
  // 保存初始化状态
  var self = this;

  // 初始化状态
  this.state = PENDING;

  // 用于保存 resolve 或者 rejected 传入的值
  this.value = null;

  // 用于保存 resolve 的回调函数
  this.resolvedCallbacks = [];

  // 用于保存 reject 的回调函数
  this.rejectedCallbacks = [];

  // 状态转变为 resolved 方法
  function resolve(value) {
    // 判断传入元素是否为 Promise 值,如果是,则状态改变必须等待前一个状态改变后再进行改变
    if (value instanceof MyPromise) {
      return value.then(resolve, reject);
    }

    // 保证代码的执行顺序为本轮事件循环的末尾
    setTimeout(() => {
      // 只有状态为 pending 时才能转变,
      if (self.state === PENDING) {
        // 修改状态
        self.state = RESOLVED;

        // 设置传入的值
        self.value = value;

        // 执行回调函数
        self.resolvedCallbacks.forEach(callback => {
          callback(value);
        });
      }
    }, 0);
  }

  // 状态转变为 rejected 方法
  function reject(value) {
    // 保证代码的执行顺序为本轮事件循环的末尾
    setTimeout(() => {
      // 只有状态为 pending 时才能转变
      if (self.state === PENDING) {
        // 修改状态
        self.state = REJECTED;

        // 设置传入的值
        self.value = value;

        // 执行回调函数
        self.rejectedCallbacks.forEach(callback => {
          callback(value);
        });
      }
    }, 0);
  }

  // 将两个方法传入函数执行
  try {
    fn(resolve, reject);
  } catch (e) {
    // 遇到错误时,捕获错误,执行 reject 函数
    reject(e);
  }
}

MyPromise.prototype.then = function(onResolved, onRejected) {
  // 首先判断两个参数是否为函数类型,因为这两个参数是可选参数
  onResolved =
    typeof onResolved === "function"
      ? onResolved
      : function(value) {
          return value;
        };

  onRejected =
    typeof onRejected === "function"
      ? onRejected
      : function(error) {
          throw error;
        };
    return new MyPromise((resolve, reject) => {
      // if still pending, queue the callbacks so they run once the state settles
      if (this.state === PENDING) {
        this.resolvedCallbacks.push(onResolved);
        this.rejectedCallbacks.push(onRejected);
      }

      // if the state has already settled, run the corresponding callback directly
      if (this.state === RESOLVED) {
        try {
          let res = onResolved(this.value);
          if (res instanceof MyPromise) {
            // a returned promise must settle before the new promise settles
            res.then(resolve, reject);
          } else {
            resolve(res);
          }
        } catch (error) {
          reject(error);
        }
      }

      if (this.state === REJECTED) {
        try {
          let res = onRejected(this.value);
          res instanceof MyPromise ? res.then(resolve, reject) : resolve(res);
        } catch (error) {
          reject(error);
        }
      }
    });
};

 5. Handwritten Promise.then

The `then` method returns a new `promise` instance. In order to execute the functions passed to `then` when the promise's state changes (i.e. when `resolve` / `reject` is called), we first stash them in a `callbacks` array and invoke them once the state changes.

**So, how do we ensure that the callback in the latter `then` runs only after the previous `then` (which may be asynchronous) has finished?**

We can `push` the function passed to `then`, together with the `resolve` of the new `promise`, into the `callbacks` array of the previous `promise`, which connects the two:

- Carrying the previous step: when the current `promise` completes, its `resolve` is called to change its state; inside that `resolve`, the callbacks stored in `callbacks` are invoked in turn, so the functions registered via `then` get executed.
- Starting the next step: when a function registered via `then` runs, it returns a result. If the result is a plain value, the new `promise`'s `resolve` is called directly to change its state, which in turn triggers the callbacks in the new `promise`'s own `callbacks` array, and so on. If the result is itself a `promise`, we must wait for it to complete before settling the new `promise`, so we call the new `promise`'s `resolve` inside the `then` of that result.

then(onFulfilled, onReject){
    // 保存前一个promise的this
    const self = this; 
    return new MyPromise((resolve, reject) => {
      // 封装前一个promise成功时执行的函数
      let fulfilled = () => {
        try{
          const result = onFulfilled(self.value); // 承前
          return result instanceof MyPromise? result.then(resolve, reject) : resolve(result); //启后
        }catch(err){
          reject(err)
        }
      }
      // 封装前一个promise失败时执行的函数
      let rejected = () => {
        try{
          const result = onReject(self.reason);
          return result instanceof MyPromise? result.then(resolve, reject) : reject(result);
        }catch(err){
          reject(err)
        }
      }
      switch(self.status){
        case PENDING: 
          self.onFulfilledCallbacks.push(fulfilled);
          self.onRejectedCallbacks.push(rejected);
          break;
        case FULFILLED:
          fulfilled();
          break;
        case REJECTED:
          rejected();
          break;
      }
    })
   }

**Notice:**

- The callback methods in multiple consecutive `then`s are registered synchronously, but they are registered in different `callbacks` arrays, because each `then` returns a new `promise` instance
- After the registration is completed, the asynchronous event in the constructor will be executed, and after the asynchronous completion, the pre-registered callbacks in the `callbacks` array will be called sequentially

 6. Handwritten Promise.all

**1) Core ideas**

1. Receive an array of Promise instances (or any object with an Iterator interface) as the parameter.
2. The method returns a new promise object.
3. Traverse the incoming values and wrap each of them with Promise.resolve so that it becomes a promise object.
4. The returned promise succeeds only when every input succeeds, and the array of resolved values keeps the same order as the inputs.
5. If any of the inputs fails, the returned promise fails, and the error of the first promise that rejects is used as the error of Promise.all.

**2) Implementation code**

Generally speaking, Promise.all is used to handle multiple concurrent requests, typically to fetch the data a page needs from several interfaces at once. However, if one of the interfaces fails, the combined promise fails as well and the page may not render; whether that is acceptable depends on how tightly coupled the page is to each request.

function promiseAll(promises) {
  return new Promise(function(resolve, reject) {
    if(!Array.isArray(promises)){
        throw new TypeError(`argument must be an array`)
    }
    var resolvedCounter = 0;
    var promiseNum = promises.length;
    var resolvedResult = [];
    for (let i = 0; i < promiseNum; i++) {
      Promise.resolve(promises[i]).then(value=>{
        resolvedCounter++;
        resolvedResult[i] = value;
        if (resolvedCounter == promiseNum) {
            return resolve(resolvedResult)
          }
      },error=>{
        return reject(error)
      })
    }
  })
}
// test
let p1 = new Promise(function (resolve, reject) {
    setTimeout(function () {
        resolve(1)
    }, 1000)
})
let p2 = new Promise(function (resolve, reject) {
    setTimeout(function () {
        resolve(2)
    }, 2000)
})
let p3 = new Promise(function (resolve, reject) {
    setTimeout(function () {
        resolve(3)
    }, 3000)
})
promiseAll([p3, p1, p2]).then(res => {
    console.log(res) // [3, 1, 2]
})

 7. Handwritten Promise.race

The parameter of this method is an array of Promise instances; the callback registered via then executes as soon as any promise in the array settles. Because a promise's state can only change once, we simply pass the resolve and reject of the promise created inside Promise.race into the callbacks of every promise instance in the array.

Promise.race = function (args) {
  return new Promise((resolve, reject)=>{
    for(let i = 0; i < args.length; i++){
      Promise.resolve(args[i]).then(resolve,reject)
    }
  })
}
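
A common use of race is a request timeout. A minimal sketch, assuming the native fetch API and a hypothetical `/api/data` endpoint:

```
// reject if the request takes longer than 3 seconds
const timeout = new Promise((resolve, reject) =>
  setTimeout(() => reject(new Error('timeout')), 3000)
);

Promise.race([fetch('/api/data'), timeout])
  .then(res => console.log('got response', res))
  .catch(err => console.error(err.message));
```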

 8. Handwritten debounce function

Debouncing means that the callback is executed only after n seconds have passed since the event was last triggered; if the event fires again within those n seconds, the timer restarts. It can be used on click-triggered requests to avoid sending multiple requests to the backend when the user clicks repeatedly.

// 函数防抖的实现
function debounce(fn, wait) {
  let timer = null;

  return function() {
    let context = this,
        args = arguments;

    // 如果此时存在定时器的话,则取消之前的定时器重新记时
    if (timer) {
      clearTimeout(timer);
      timer = null;
    }

    // 设置定时器,使事件间隔指定事件后执行
    timer = setTimeout(() => {
      fn.apply(context, args);
    }, wait);
  };
}
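
A minimal usage sketch, assuming an input element with a hypothetical id `search` and a hypothetical `sendQuery` request function:

```
const searchInput = document.getElementById('search');
// fire the request only 500ms after the user stops typing
searchInput.addEventListener('input', debounce(function () {
  sendQuery(this.value); // `this` and the event arguments are forwarded by debounce
}, 500));
```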

 9. Handwritten throttle function

Throttling means that within a given unit of time the event callback can run at most once; if the event is triggered multiple times within that unit of time, only one invocation takes effect. Throttling can be used on scroll event listeners to reduce how often the handler is called.

// 函数节流的实现;
function throttle(fn, delay) {
  let curTime = Date.now();

  return function() {
    let context = this,
        args = arguments,
        nowTime = Date.now();

    // 如果两次时间间隔超过了指定时间,则执行函数。
    if (nowTime - curTime >= delay) {
      curTime = Date.now();
      return fn.apply(context, args);
    }
  };
}

function throttle(fn,wait){
  let timer = null
  return function(){
    let context = this, args = arguments
    if(!timer){
      timer = setTimeout(()=>{
        fn.apply(context,args)
        timer = null
      },wait)
    }
  }
}
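
A usage sketch for the scroll case mentioned above (with a hypothetical onScroll handler; either throttle version works):

```
function onScroll() {
  console.log('scroll position:', window.scrollY);
}
// run the handler at most once every 200ms while scrolling
window.addEventListener('scroll', throttle(onScroll, 200));
```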

 10. Handwritten type judgment function

function getType(value) {
  // 判断数据是 null 的情况
  if (value === null) {
    return value + "";
  }
  // 判断数据是引用类型的情况
  if (typeof value === "object") {
    let valueClass = Object.prototype.toString.call(value),
      type = valueClass.split(" ")[1].split("");
    type.pop();
    return type.join("").toLowerCase();
  } else {
    // 判断数据是基本数据类型的情况和函数的情况
    return typeof value;
  }
}
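
A few sample calls showing what the function above returns:

```
getType(null);       // "null"
getType(undefined);  // "undefined"
getType(1);          // "number"
getType('a');        // "string"
getType(() => {});   // "function"
getType([]);         // "array"
getType({});         // "object"
getType(new Date()); // "date"
```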

 11. Handwritten call function

The implementation steps of the call function:

1. Check whether the calling object (`this`) is a function; even though the method is defined on `Function.prototype`, it may still be invoked in ways where `this` is not a function.
2. Determine whether the incoming context object exists, if not, set it to window.
3. Process the incoming parameters and intercept all parameters after the first parameter.
4. Make the function a property of the context object.
5. Use the context object to call this method and save the returned result.
6. Delete the newly added attribute.
7. Return the result.

// call函数实现
Function.prototype.myCall = function(context) {
  // 判断调用对象
  if (typeof this !== "function") {
    console.error("type error");
  }
  // 获取参数
  let args = [...arguments].slice(1),
      result = null;
  // 判断 context 是否传入,如果未传入则设置为 window
  context = context || window;
  // 将调用函数设为对象的方法
  context.fn = this;
  // 调用函数
  result = context.fn(...args);
  // 将属性删除
  delete context.fn;
  return result;
};

 12. Handwritten apply function

The implementation steps of the apply function:

1. Check whether the calling object (`this`) is a function; even though the method is defined on `Function.prototype`, it may still be invoked in ways where `this` is not a function.
2. Determine whether the incoming context object exists, if not, set it to window.
3. Make the function a property of the context object.
4. Determine whether the parameter value is passed in.
5. Use the context object to call this method and save the returned result.
6. Delete the newly added attribute
7. Return the result

// apply 函数实现
Function.prototype.myApply = function(context) {
  // 判断调用对象是否为函数
  if (typeof this !== "function") {
    throw new TypeError("Error");
  }
  let result = null;
  // 判断 context 是否存在,如果未传入则为 window
  context = context || window;
  // 将函数设为对象的方法
  context.fn = this;
  // 调用方法
  if (arguments[1]) {
    result = context.fn(...arguments[1]);
  } else {
    result = context.fn();
  }
  // 将属性删除
  delete context.fn;
  return result;
};

 13. Handwritten bind function

The implementation steps of the bind function:

1. Check whether the calling object (`this`) is a function; even though the method is defined on `Function.prototype`, it may still be invoked in ways where `this` is not a function.
2. Save a reference to the current function and collect the remaining arguments passed in.
3. Create a function to return.
4. Inside the returned function, call the original function with apply. We need to check whether the returned function is being invoked as a constructor: if so, pass the returned function's own `this` to apply; otherwise pass the specified context object.

// bind 函数实现
Function.prototype.myBind = function(context) {
  // 判断调用对象是否为函数
  if (typeof this !== "function") {
    throw new TypeError("Error");
  }
  // 获取参数
  var args = [...arguments].slice(1),
      fn = this;
  var bound = function() {
    // choose the binding target depending on how bound is invoked (new vs normal call)
    return fn.apply(
      this instanceof bound ? this : context,
      args.concat(...arguments)
    );
  };
  // maintain the prototype chain so instances of bound can reach fn.prototype
  var f = function() {};
  f.prototype = fn.prototype;
  bound.prototype = new f();
  return bound;
};
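
A short usage check of the sketch above, with a hypothetical greet function and user object:

```
function greet(greeting, punctuation) {
  return greeting + ', ' + this.name + punctuation;
}
const user = { name: 'Tom' };

// pre-fill the context and the first argument
const boundGreet = greet.myBind(user, 'Hello');
console.log(boundGreet('!')); // Hello, Tom!
```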

 14. Implementation of function currying

Function currying refers to a technique that converts a function that takes multiple arguments into a series of functions that take one argument.


function curry(fn, args) {
  // 获取函数需要的参数长度
  let length = fn.length;

  args = args || [];

  return function() {
    let subArgs = args.slice(0);

    // 拼接得到现有的所有参数
    for (let i = 0; i < arguments.length; i++) {
      subArgs.push(arguments[i]);
    }

    // 判断参数的长度是否已经满足函数所需参数的长度
    if (subArgs.length >= length) {
      // 如果满足,执行函数
      return fn.apply(this, subArgs);
    } else {
      // 如果不满足,递归返回科里化的函数,等待参数的传入
      return curry.call(this, fn, subArgs);
    }
  };
}

// es6 实现
function curry(fn, ...args) {
  return fn.length <= args.length ? fn(...args) : curry.bind(null, fn, ...args);
}
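
A usage sketch that works with either curry version above, using a hypothetical three-argument add:

```
function add(a, b, c) {
  return a + b + c;
}
const curriedAdd = curry(add);

console.log(curriedAdd(1)(2)(3)); // 6
console.log(curriedAdd(1, 2)(3)); // 6
```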

 15. Implement AJAX request

AJAX is the abbreviation of Asynchronous JavaScript and XML, which refers to the asynchronous communication through JavaScript, obtaining XML documents from the server to extract data, and then updating the corresponding part of the current web page without refreshing the entire web page.

Steps to create an AJAX request:

- **Create an XMLHttpRequest object**.
- On this object **use the open method to create an HTTP request**, the parameters required by the open method are the method of the request, the address of the request, whether it is asynchronous and the user's authentication information.
- Before initiating the request, you can **add extra information and listener functions to this object**. For example, header information can be added with the setRequestHeader method, and a state listener can be attached: an XMLHttpRequest object has 5 readyState values in total, and whenever the state changes the onreadystatechange event fires. When readyState becomes 4, the server's response has been fully received; at that point the request status can be checked, and if it is 2xx or 304 the response is normal and the page can be updated with the response data.
- After the object's properties and listener functions are set, finally **call the send method to send the request to the server**; data to be sent as the request body can be passed as its parameter.

const SERVER_URL = "/server";
let xhr = new XMLHttpRequest();
// 创建 Http 请求
xhr.open("GET", SERVER_URL, true);
// 设置状态监听函数
xhr.onreadystatechange = function() {
  if (this.readyState !== 4) return;
  // 当请求成功时
  if (this.status === 200) {
    handle(this.response);
  } else {
    console.error(this.statusText);
  }
};
// 设置请求失败时的监听函数
xhr.onerror = function() {
  console.error(this.statusText);
};
// 设置请求头信息
xhr.responseType = "json";
xhr.setRequestHeader("Accept", "application/json");
// 发送 Http 请求
xhr.send(null);

 16. Encapsulating AJAX requests with Promises

// promise 封装实现:
function getJSON(url) {
  // 创建一个 promise 对象
  let promise = new Promise(function(resolve, reject) {
    let xhr = new XMLHttpRequest();
    // 新建一个 http 请求
    xhr.open("GET", url, true);
    // 设置状态的监听函数
    xhr.onreadystatechange = function() {
      if (this.readyState !== 4) return;
      // 当请求成功或失败时,改变 promise 的状态
      if (this.status === 200) {
        resolve(this.response);
      } else {
        reject(new Error(this.statusText));
      }
    };
    // 设置错误监听函数
    xhr.onerror = function() {
      reject(new Error(this.statusText));
    };
    // 设置响应的数据类型
    xhr.responseType = "json";
    // 设置请求头信息
    xhr.setRequestHeader("Accept", "application/json");
    // 发送 http 请求
    xhr.send(null);
  });
  return promise;
}

 17. Implement shallow copy

Shallow copy means that the new object copies the property values of the original object exactly. For a primitive value, the value itself is copied; for a reference type, the memory address (the reference) is copied, so both objects end up pointing at the same underlying object, and a change made through one of them is visible through the other.

# (1)Object.assign()

`Object.assign()` is the object copying method introduced in ES6. Its first parameter is the target object and the remaining parameters are source objects. Usage: `Object.assign(target, source_1, ···)`. The method performs a shallow copy; for a flat (one-level) object this is effectively the same as a deep copy.

**Notice:**

- If the target object and the source object have properties with the same name, or if multiple source objects have properties with the same name, the later properties will override the earlier properties.
- If the function receives only one parameter: when that parameter is an object, the object is returned directly; when it is not an object, it is first converted into an object and then returned.
- Because `null` and `undefined` cannot be converted into objects, they cannot be used as the first parameter, or an error will be thrown.

let target = {a: 1};
let object2 = {b: 2};
let object3 = {c: 3};
Object.assign(target,object2,object3);  
console.log(target);  // {a: 1, b: 2, c: 3}

# (2) Spread operator

Use the spread operator to copy attributes when constructing literal objects. Syntax: `let cloneObj = { ...obj };`

let obj1 = {a:1,b:{c:1}}
let obj2 = {...obj1};
obj1.a = 2;
console.log(obj1); //{a:2,b:{c:1}}
console.log(obj2); //{a:1,b:{c:1}}
obj1.b.c = 2;
console.log(obj1); //{a:2,b:{c:2}}
console.log(obj2); //{a:1,b:{c:2}}

# (3) Shallow copy of an array with array methods

**1)Array.prototype.slice**

- The `slice()` method is a method of a JavaScript array that returns selected elements from an existing array: usage: `array.slice(start, end)`, this method does not change the original array.
- This method has two parameters, both of which are optional. If neither parameter is written, a shallow copy of an array can be implemented.

let arr = [1,2,3,4];
console.log(arr.slice()); // [1,2,3,4]
console.log(arr.slice() === arr); //false

**2)Array.prototype.concat**

- The `concat()` method is used to combine two or more arrays. This method does not alter the existing array, but returns a new array.
- Its parameters are optional; calling it with no arguments returns a shallow copy of the array.

let arr = [1,2,3,4];
console.log(arr.concat()); // [1,2,3,4]
console.log(arr.concat() === arr); //false

# (4) Handwritten shallow copy

// Shallow copy implementation
function shallowCopy(object) {
  // only copy objects / arrays
  if (!object || typeof object !== "object") return;

  // create an array or a plain object depending on the type of `object`
  let newObject = Array.isArray(object) ? [] : {};

  // copy only the object's own enumerable properties
  for (let key in object) {
    if (object.hasOwnProperty(key)) {
      newObject[key] = object[key];
    }
  }

  return newObject;
}

 18. Implement deep copy

- **Shallow copy**: a shallow copy copies the property values of one object onto another object. If a property value is a reference type, only the reference (address) is copied, so the two objects end up holding references to the same underlying object. A shallow copy can be made with Object.assign or the spread operator.
- **Deep copy**: compared with a shallow copy, a deep copy creates a new object for every property whose value is a reference type and copies the corresponding values into it, so the copy holds new reference types rather than references to the originals. A deep copy of plain data can be done with the two JSON functions, but because JSON's format is stricter than JavaScript's, values such as functions and Symbols are lost in the conversion.

# (1)JSON.stringify()

- `JSON.parse(JSON.stringify(obj))` is one of the more commonly used deep copy methods at present. Its principle is to use `JSON.stringify` to serialize `js` objects (JSON strings), and then use `JSON.parse` to deserialize (restore) js objects.
- This method can realize deep copy simply and crudely, but there are still problems. If there are functions, undefined, symbols in the copied object, they will disappear after being processed by `JSON.stringify()`.

let obj1 = {
  a: 0,
  b: {
    c: 0
  }
};
let obj2 = JSON.parse(JSON.stringify(obj1));
obj1.a = 1;
obj1.b.c = 1;
console.log(obj1); // {a: 1, b: {c: 1}}
console.log(obj2); // {a: 0, b: {c: 0}}
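
A small sketch of the limitation mentioned above: function, undefined and Symbol values are silently dropped by `JSON.stringify()` (and circular references would throw):

```
const source = {
  num: 1,
  say: function () {}, // dropped
  missing: undefined,  // dropped
  id: Symbol('id')     // dropped
};
const copy = JSON.parse(JSON.stringify(source));
console.log(copy); // { num: 1 }
```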

# (2) The _.cloneDeep method of the function library lodash

The lodash utility library also provides `_.cloneDeep` for deep copying:

var _ = require('lodash');
var obj1 = {
    a: 1,
    b: { f: { g: 1 } },
    c: [1, 2, 3]
};
var obj2 = _.cloneDeep(obj1);
console.log(obj1.b.f === obj2.b.f);// false

# (3) Handwritten deep copy

// Deep copy implementation
function deepCopy(object) {
  if (!object || typeof object !== "object") return;

  let newObject = Array.isArray(object) ? [] : {};

  for (let key in object) {
    if (object.hasOwnProperty(key)) {
      // recurse into reference-type values, copy primitives directly
      newObject[key] =
        typeof object[key] === "object" ? deepCopy(object[key]) : object[key];
    }
  }

  return newObject;
}

 2. Data processing

 1. Implement date formatting functions

Input:

dateFormat(new Date('2020-12-01'), 'yyyy/MM/dd') // 2020/12/01
dateFormat(new Date('2020-04-01'), 'yyyy/MM/dd') // 2020/04/01
dateFormat(new Date('2020-04-01'), 'yyyy年MM月dd日') // 2020年04月01日
const dateFormat = (dateInput, format) => {
    var year = dateInput.getFullYear()
    var month = String(dateInput.getMonth() + 1).padStart(2, '0') // months are 0-based
    var day = String(dateInput.getDate()).padStart(2, '0')
    format = format.replace(/yyyy/, year)
    format = format.replace(/MM/, month)
    format = format.replace(/dd/, day)
    return format
}

 2. To exchange the values ​​of a and b, temporary variables cannot be used

Cleverly use the sum and difference of two numbers:

```
a = a + b
b = a - b
a = a - b

```

 3. Realize the out-of-order output of the array

The main idea is:

- Take the first element, generate a random index within the remaining range, and swap the first element with the element at that index.
- Take the second element, generate a random index among the positions that are not yet fixed, and swap the second element with the element at that index.
- Continue in the same way until the whole array has been traversed.

var arr = [1,2,3,4,5,6,7,8,9,10];
for (var i = 0; i < arr.length; i++) {
  const randomIndex = Math.round(Math.random() * (arr.length - 1 - i)) + i;
  [arr[i], arr[randomIndex]] = [arr[randomIndex], arr[i]];
}
console.log(arr)

Another way is to traverse in reverse order:

var arr = [1,2,3,4,5,6,7,8,9,10];
let length = arr.length,
    randomIndex,
    temp;
while (length) {
  randomIndex = Math.floor(Math.random() * length--);
  temp = arr[length];
  arr[length] = arr[randomIndex];
  arr[randomIndex] = temp;
}
console.log(arr)

 4. Implement the sum of array elements

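An iterative version is a one-liner with reduce (a minimal sketch assuming a flat numeric array):

```
let arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// accumulate the sum, starting from 0
let sum = arr.reduce((prev, cur) => prev + cur, 0);
console.log(sum); // 55
```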

Recursive implementation:

let arr = [1, 2, 3, 4, 5, 6] 

function add(arr) {
    if (arr.length == 1) return arr[0] 
    return arr[0] + add(arr.slice(1)) 
}
console.log(add(arr)) // 21

 5. Realize the flattening of the array

**(1) Recursive implementation** The ordinary recursive idea is easy to understand: traverse the array item by item, and if an item is itself an array, keep traversing into it recursively, concatenating every element into the result:

let arr = [1, [2, [3, 4, 5]]];
function flatten(arr) {
  let result = [];

  for(let i = 0; i < arr.length; i++) {
    if(Array.isArray(arr[i])) {
      result = result.concat(flatten(arr[i]));
    } else {
      result.push(arr[i]);
    }
  }
  return result;
}
flatten(arr);  //  [1, 2, 3, 4,5]

**(2) reduce function iteration**

As the recursive version above shows, each item of the array is processed one by one, so reduce can also be used to do the concatenation and simplify the first method. The modified code is as follows:

let arr = [1, [2, [3, 4]]];
function flatten(arr) {
    return arr.reduce(function(prev, next){
        return prev.concat(Array.isArray(next) ? flatten(next) : next)
    }, [])
}
console.log(flatten(arr)); //  [1, 2, 3, 4]

**(3) Spread operator implementation**

This method combines the spread operator with `Array.prototype.some`: as long as some item is still an array, spread the array one level into a new array, until no nested arrays remain:

let arr = [1, [2, [3, 4]]];
function flatten(arr) {
    while (arr.some(item => Array.isArray(item))) {
        arr = [].concat(...arr);
    }
    return arr;
}
console.log(flatten(arr)); //  [1, 2, 3, 4]

**(4) split and toString**

Array flattening can also be achieved with toString and split. Since an array has a default toString method, it can be converted directly into a comma-separated string, and the split method then turns that string back into an array (note that the resulting elements are strings), as shown in the code below:

let arr = [1, [2, [3, 4]]];
function flatten(arr) {
    return arr.toString().split(',');
}
console.log(flatten(arr)); //  ['1', '2', '3', '4']

These two methods can be used to convert multidimensional arrays directly into comma-joined strings, and then re-delimited into arrays.

**(5) flat in ES6**

We can also directly call the flat method in ES6 to flatten the array. The syntax of the flat method: `arr.flat([depth])`

Here depth is the parameter of flat: the depth to which the nested array should be expanded (it defaults to 1 when omitted, i.e. one level is expanded). If the nesting depth is unknown, Infinity can be passed so that the array is flattened no matter how deeply it is nested:

let arr = [1, [2, [3, 4]]];
function flatten(arr) {
  return arr.flat(Infinity);
}
console.log(flatten(arr)); //  [1, 2, 3, 4]

It can be seen that a two-layer nested array achieves our expected effect by setting the parameter of the flat method to Infinity. In fact, it can also be set to 2, and this effect can also be achieved. In the process of programming, if the number of nested layers of the array is uncertain, it is best to use Infinity directly to achieve flattening.

**(6) Regular and JSON methods**

The fourth method already used toString; this approach instead converts the array to a string with JSON.stringify, strips the square brackets with a regular expression, wraps the result in a single pair of brackets again, and finally converts it back into an array with JSON.parse:

let arr = [1, [2, [3, [4, 5]]], 6];
function flatten(arr) {
  let str = JSON.stringify(arr);
  str = str.replace(/(\[|\])/g, '');
  str = '[' + str + ']';
  return JSON.parse(str); 
}
console.log(flatten(arr)); //  [1, 2, 3, 4, 5, 6]

 6. Implement array deduplication

Given an unordered array, remove the duplicate numbers and return a new array without duplicates. ES6 approach (using the Set data structure):

const array = [1, 2, 3, 5, 1, 5, 9, 1, 2, 8];

Array.from(new Set(array)); // [1, 2, 3, 5, 9, 8]

ES5 method: use map to store unique numbers

const array = [1, 2, 3, 5, 1, 5, 9, 1, 2, 8];

uniqueArray(array); // [1, 2, 3, 5, 9, 8]

function uniqueArray(array) {
  let map = {};
  let res = [];
  for(var i = 0; i < array.length; i++) {
    if(!map.hasOwnProperty(array[i])) {
      map[array[i]] = 1;
      res.push(array[i]);
    }
  }
  return res;
}

 7. Implement the flat method of the array

function _flat(arr, depth) {
  if(!Array.isArray(arr) || depth <= 0) {
    return arr;
  }
  return arr.reduce((prev, cur) => {
    if (Array.isArray(cur)) {
      return prev.concat(_flat(cur, depth - 1))
    } else {
      return prev.concat(cur);
    }
  }, []);
}
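
For example, with the depth parameter controlling how far the nesting is unwrapped:

```
const nested = [1, [2, [3, [4]]]];
console.log(_flat(nested, 1));        // [1, 2, [3, [4]]]
console.log(_flat(nested, Infinity)); // [1, 2, 3, 4]
```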

 8. Implement the push method of the array

let arr = [];
Array.prototype.push = function() {
    for( let i = 0 ; i < arguments.length ; i++){
        this[this.length] = arguments[i] ;
    }
    return this.length;
}

 9. Implement the filter method of the array

Array.prototype._filter = function(fn) {
    if (typeof fn !== "function") {
        throw Error('参数必须是一个函数');
    }
    const res = [];
    for (let i = 0, len = this.length; i < len; i++) {
        fn(this[i], i, this) && res.push(this[i]);
    }
    return res;
}

 10. Implement the map method of the array

Array.prototype._map = function(fn) {
   if (typeof fn !== "function") {
        throw Error('参数必须是一个函数');
    }
    const res = [];
    for (let i = 0, len = this.length; i < len; i++) {
        res.push(fn(this[i], i, this));
    }
    return res;
}

 11. Implement the repeat method of string

Input the string s, and its repeated times, and output the repeated result, for example, input abc, 2, and output abcabc.

function repeat(s, n) {
    return (new Array(n + 1)).join(s);
}

recursion:

function repeat(s, n) {
    return (n > 0) ? s.concat(repeat(s, --n)) : "";
}

 12. Implement string flipping

Add a method to the prototype chain of the string to implement string flipping:

String.prototype._reverse = function() {
    return this.split("").reverse().join("");
}
var res = 'hello'._reverse();
console.log(res);    // olleh

Note that when the method is called on a string primitive, JavaScript temporarily wraps the primitive in a String object, which is how a method defined on `String.prototype` can be found.

 13. Separate numbers with commas

**Numbers have decimal versions:**

let format = n => {
    let [integer, decimals] = n.toString().split('.') // split into integer part and decimal part
    let temp = decimals ? '.' + decimals : ''
    let len = integer.length
    if (len <= 3) {
        return integer + temp
    } else {
        let remainder = len % 3
        if (remainder > 0) { // length is not a multiple of 3
            return integer.slice(0, remainder) + ',' + integer.slice(remainder).match(/\d{3}/g).join(',') + temp
        } else { // length is a multiple of 3
            return integer.match(/\d{3}/g).join(',') + temp
        }
    }
}
format(12323.33)  // '12,323.33'

**Number without decimal version:**

let format = n => {
    let num = n.toString() 
    let len = num.length
    if (len <= 3) {
        return num
    } else {
        let remainder = len % 3
        if (remainder > 0) { // 不是3的整数倍
            return num.slice(0, remainder) + ',' + num.slice(remainder, len).match(/\d{3}/g).join(',') 
        } else { // 是3的整数倍
            return num.slice(0, len).match(/\d{3}/g).join(',') 
        }
    }
}
format(1232323)  // '1,232,323'

 14. Implement the addition of non-negative large integers

JavaScript has range restrictions on values, the restrictions are as follows:

Number.MAX_VALUE // 1.7976931348623157e+308
Number.MAX_SAFE_INTEGER // 9007199254740991
Number.MIN_VALUE // 5e-324
Number.MIN_SAFE_INTEGER // -9007199254740991

If you want to add very large integers (`> Number.MAX_SAFE_INTEGER`) and still output the result in ordinary decimal form, the + operator is not enough: once a number exceeds `Number.MAX_SAFE_INTEGER`, integer precision is no longer guaranteed, and very large values are even displayed in scientific notation.

Implement an algorithm to add large numbers:

function sumBigNumber(a, b) {
  let res = '';
  let temp = 0;
  
  a = a.split('');
  b = b.split('');
  
  while (a.length || b.length || temp) {
    temp += ~~a.pop() + ~~b.pop();
    res = (temp % 10) + res;
    temp  = temp > 9
  }
  return res.replace(/^0+/, '');
}

The main ideas are as follows:

- Keep the big numbers as strings, so the values are not distorted by numeric precision limits.
- Initialize res and temp to hold intermediate results, and split the two strings into arrays so the addition can proceed digit by digit from the end.
- Add the corresponding digits of the two arrays; the sum of two digits may reach 10 or more, so take it modulo 10 and prepend the remainder to the current result.
- Check whether the current sum is greater than 9, i.e. whether there is a carry; if so, set temp to true, which is implicitly converted to 1 in the next addition.
- Repeat until both arrays are exhausted and no carry is left.
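
A quick check of the sketch above with values beyond Number.MAX_SAFE_INTEGER (both operands are passed as strings):

```
console.log(sumBigNumber('9007199254740993', '9007199254740993'));
// '18014398509481986'
console.log(sumBigNumber('99', '1')); // '100'
```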

 15. Implement add(1)(2)(3)

The concept of function currying: Currying is a technology that converts a function that accepts multiple parameters into a function that accepts a single parameter, and returns a new function that accepts the remaining parameters and returns the result.

1) Rough version

function add (a) {
return function (b) {
    return function (c) {
      return a + b + c;
    }
}
}
console.log(add(1)(2)(3)); // 6

2) Currying solution

- parameter length is fixed

var add = function (m) {
  var temp = function (n) {
    return add(m + n);
  }
  temp.toString = function () {
    return m;
  }
  return temp;
};
console.log(add(3)(4)(5)); // 12
console.log(add(3)(6)(9)(25)); // 43

For add(3)(4)(5), the execution process is as follows:

1. add(3) runs first; at this point m = 3, and the temp function is returned.
2. temp(4) runs next and executes add(m + n) inside, where n is the 4 passed in this call and m is still the 3 from the previous step, so add(m + n) = add(3 + 4) = add(7); now m = 7 and the temp function is returned again.
3. temp(5) runs and executes add(m + n) inside, where n is the 5 passed in this call and m is still 7, so add(m + n) = add(7 + 5) = add(12); now m = 12 and the temp function is returned once more.
4. Since no further call follows, the returned temp function is not invoked but converted to a string when it is displayed (for example via alert or string concatenation). Overriding toString changes how the function is converted to a string; here toString returns m, whose value after the last step is 12, so the displayed result is 12.

- parameter length is not fixed

function add (...args) {
    //求和
    return args.reduce((a, b) => a + b)
}
function currying (fn) {
    let args = []
    return function temp (...newArgs) {
        if (newArgs.length) {
            args = [
                ...args,
                ...newArgs
            ]
            return temp
        } else {
            let val = fn.apply(this, args)
            args = [] //保证再次调用时清空
            return val
        }
    }
}
let addCurry = currying(add)
console.log(addCurry(1)(2)(3)(4, 5)())  //15
console.log(addCurry(1)(2)(3, 4, 5)())  //15
console.log(addCurry(1)(2, 3, 4, 5)())  //15

 16. Convert class array to array

There are several ways to convert class arrays to arrays:

- Call the slice method of the array to realize the conversion

Array.prototype.slice.call(arrayLike);

- Call the splice method of the array to realize the conversion

Array.prototype.splice.call(arrayLike, 0);

- Convert by calling the concat method of the array

Array.prototype.concat.apply([], arrayLike);

- Conversion via the Array.from method

Array.from(arrayLike);
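
A sketch of what an array-like object looks like and how the conversions above apply; the `arguments` object is the classic case:

```
function demo() {
  // `arguments` is array-like: indexed elements and a length, but no array methods
  const viaSlice = Array.prototype.slice.call(arguments);
  const viaFrom = Array.from(arguments);
  console.log(viaSlice, viaFrom); // [ 1, 2, 3 ] [ 1, 2, 3 ]
}
demo(1, 2, 3);

// a plain object can also be array-like
const arrayLike = { 0: 'a', 1: 'b', length: 2 };
console.log(Array.from(arrayLike)); // [ 'a', 'b' ]
```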

 17. Summing using reduce

arr = [1,2,3,4,5,6,7,8,9,10], sum

let arr = [1,2,3,4,5,6,7,8,9,10]
arr.reduce((prev, cur) => { return prev + cur }, 0)

arr = [1,2,3,[[4,5],6],7,8,9], sum

let arr = [1,2,3,[[4,5],6],7,8,9]
arr.flat(Infinity).reduce((prev, cur) => { return prev + cur }, 0)

arr = [{a:1, b:3}, {a:2, b:3, c:4}, {a:3}], sum


let arr = [{a:1, b:3}, {a:2, b:3, c:4}, {a:3}]

arr.reduce((prev, cur) => {
    return prev + cur["a"];
}, 0)

 18. Convert js object to tree structure

// 转换前:
source = [{
            id: 1,
            pid: 0,
            name: 'body'
          }, {
            id: 2,
            pid: 1,
            name: 'title'
          }, {
            id: 3,
            pid: 2,
            name: 'div'
          }]
// 转换为: 
tree = [{
          id: 1,
          pid: 0,
          name: 'body',
          children: [{
            id: 2,
            pid: 1,
            name: 'title',
            children: [{
              id: 3,
              pid: 2,
              name: 'div'
            }]
          }]
        }]

Code:

function jsonToTree(data) {
  // 初始化结果数组,并判断输入数据的格式
  let result = []
  if(!Array.isArray(data)) {
    return result
  }
  // 使用map,将当前对象的id与当前对象对应存储起来
  let map = {};
  data.forEach(item => {
    map[item.id] = item;
  });
  // walk the list again and attach each node to its parent's children; nodes without a parent become roots
  data.forEach(item => {
    let parent = map[item.pid];
    if(parent) {
      (parent.children || (parent.children = [])).push(item);
    } else {
      result.push(item);
    }
  });
  return result;
}

 19. Using ES5 and ES6 to find the sum of function parameters

ES5:

function sum() {
    let sum = 0
    Array.prototype.forEach.call(arguments, function(item) {
        sum += item * 1
    })
    return sum
}

ES6:

function sum(...nums) {
    let sum = 0
    nums.forEach(function(item) {
        sum += item * 1
    })
    return sum
}

 20. Parsing URL Params as Object

let url = 'http://www.domain.com/?user=anonymous&id=123&id=456&city=%E5%8C%97%E4%BA%AC&enabled';
parseParam(url)
/* 结果
{ user: 'anonymous',
  id: [ 123, 456 ], // 重复出现的 key 要组装成数组,能被转成数字的就转成数字类型
  city: '北京', // 中文需解码
  enabled: true, // 未指定值得 key 约定为 true
}
*/
function parseParam(url) {
  const paramsStr = /.+\?(.+)$/.exec(url)[1]; // 将 ? 后面的字符串取出来
  const paramsArr = paramsStr.split('&'); // 将字符串以 & 分割后存到数组中
  let paramsObj = {};
  // 将 params 存到对象中
  paramsArr.forEach(param => {
    if (/=/.test(param)) { // 处理有 value 的参数
      let [key, val] = param.split('='); // 分割 key 和 value
      val = decodeURIComponent(val); // 解码
      val = /^\d+$/.test(val) ? parseFloat(val) : val; // 判断是否转为数字
      if (paramsObj.hasOwnProperty(key)) { // 如果对象有 key,则添加一个值
        paramsObj[key] = [].concat(paramsObj[key], val);
      } else { // 如果对象没有这个 key,创建 key 并设置值
        paramsObj[key] = val;
      }
    } else { // 处理没有 value 的参数
      paramsObj[param] = true;
    }
  })
  return paramsObj;
}

 3. Scenario application

 1. Print red, yellow and green in a cycle

Let's look at a typical problem and use it to compare several asynchronous programming styles: **the red light turns on once every 3s, the green light once every 1s, and the yellow light once every 2s; how do you make the three lights turn on alternately and repeatedly?**

Three lighting functions:

function red() {
    console.log('red');
}
function green() {
    console.log('green');
}
function yellow() {
    console.log('yellow');
}

The tricky part of this question is that the lights need to **repeat alternately**, rather than light up once and be done.

# (1) Implemented with callback

const task = (timer, light, callback) => {
    setTimeout(() => {
        if (light === 'red') {
            red()
        }
        else if (light === 'green') {
            green()
        }
        else if (light === 'yellow') {
            yellow()
        }
        callback()
    }, timer)
}
task(3000, 'red', () => {
    task(2000, 'green', () => {
        task(1000, 'yellow', Function.prototype)
    })
})

There is a limitation here: this code runs the sequence only once, so the red, green and yellow lights each light up a single time. How do we make it repeat alternately?

As mentioned above, recursion can be used to start a new cycle:

const step = () => {
    task(3000, 'red', () => {
        task(2000, 'green', () => {
            task(1000, 'yellow', step)
        })
    })
}
step()

**Note that the step method is called again in the callback where the yellow light is on** to complete the cycle of lighting.

# (2) Realize with promise

const task = (timer, light) => 
    new Promise((resolve, reject) => {
        setTimeout(() => {
            if (light === 'red') {
                red()
            }
            else if (light === 'green') {
                green()
            }
            else if (light === 'yellow') {
                yellow()
            }
            resolve()
        }, timer)
    })
const step = () => {
    task(3000, 'red')
        .then(() => task(2000, 'green'))
        .then(() => task(2100, 'yellow'))
        .then(step)
}
step()

Here the callbacks are gone: after one light finishes, the current promise is resolved, and recursion is still used to repeat the cycle.

# (3) Implemented with async/await

const taskRunner =  async () => {
    await task(3000, 'red')
    await task(2000, 'green')
    await task(2100, 'yellow')
    taskRunner()
}
taskRunner()

 2. Realize printing 1,2,3,4 every second

// Implemented with a closure (IIFE)
for (var i = 1; i <= 4; i++) {
  (function(i) {
    setTimeout(function() {
      console.log(i);
    }, i * 1000);
  })(i);
}
// Implemented with let block scoping
for (let i = 1; i <= 4; i++) {
  setTimeout(function() {
    console.log(i);
  }, i * 1000);
}

 3. Children's counting problem

There are 30 children numbered 1 to 30 standing in a circle and counting off. Every child who counts 3 (counting 1, 2, 3) leaves the circle, and the next child starts counting from 1 again. What is the number of the last remaining child?

function childNum(num, count){
    let allplayer = [];  
    for(let i = 0; i < num; i++){
        allplayer[i] = i + 1;
    }
  
    let exitCount = 0;    // 离开人数
    let counter = 0;      // 记录报数
    let curIndex = 0;     // 当前下标
  
    while(exitCount < num - 1){
        if(allplayer[curIndex] !== 0) counter++;  
      
        if(counter == count){
            allplayer[curIndex] = 0;               
            counter = 0;
            exitCount++;  
        }
        curIndex++;
        if(curIndex == num){
            curIndex = 0             
        };         
    }  
    for(let i = 0; i < num; i++){
        if(allplayer[i] !== 0){
            return allplayer[i]
        }    
    }
}
childNum(30, 3)

 4. Use Promise to realize asynchronous loading of pictures

let imageAsync = (url) => {
    return new Promise((resolve, reject) => {
        let img = new Image();
        img.src = url;
        img.onload = () => {
            console.log(`图片请求成功,此处进行通用操作`);
            resolve(img);
        };
        img.onerror = (err) => {
            console.log(`失败,此处进行失败的通用操作`);
            reject(err);
        };
    });
}
      
imageAsync("url").then(()=>{
    console.log("加载成功");
}).catch((error)=>{
    console.log("加载失败");
})

 5. Implement the publish-subscribe model

class EventCenter{
  // 1. 定义事件容器,用来装事件数组
  handlers = {}

  // 2. 添加事件方法,参数:事件名 事件方法
  addEventListener(type, handler) {
    // 创建新数组容器
    if (!this.handlers[type]) {
      this.handlers[type] = []
    }
    // 存入事件
    this.handlers[type].push(handler)
  }

  // 3. 触发事件,参数:事件名 事件参数
  dispatchEvent(type, params) {
    // 若没有注册该事件则抛出错误
    if (!this.handlers[type]) {
      return new Error('该事件未注册')
    }
    // 触发事件
    this.handlers[type].forEach(handler => {
      handler(...params)
    })
  }

  // 4. 事件移除,参数:事件名 要删除事件,若无第二个参数则删除该事件的订阅和发布
  removeEventListener(type, handler) {
    if (!this.handlers[type]) {
      return new Error('事件无效')
    }
    if (!handler) {
      // 移除事件
      delete this.handlers[type]
    } else {
      const index = this.handlers[type].findIndex(el => el === handler)
      if (index === -1) {
        return new Error('无该绑定事件')
      }
      // 移除事件
      this.handlers[type].splice(index, 1)
      if (this.handlers[type].length === 0) {
        delete this.handlers[type]
      }
    }
  }
}
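
A usage sketch of the class above; note that dispatchEvent spreads its second argument, so the handler parameters are passed as an array:

```
const center = new EventCenter();

function onLogin(name, time) {
  console.log(name + ' logged in at ' + time);
}

center.addEventListener('login', onLogin);
center.dispatchEvent('login', ['Tom', '10:00']); // Tom logged in at 10:00
center.removeEventListener('login', onLogin);
```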

 6. Find the most frequently occurring words in the article

function findMostWord(article) {
  // 合法性判断
  if (!article) return;
  // 参数处理
  article = article.trim().toLowerCase();
  let wordList = article.match(/[a-z]+/g),
    visited = [],
    maxNum = 0,
    maxWord = "";
  article = " " + wordList.join("  ") + " ";
  // 遍历判断单词出现次数
  wordList.forEach(function(item) {
    if (visited.indexOf(item) < 0) {
      // 加入 visited 
      visited.push(item);
      let word = new RegExp(" " + item + " ", "g"),
        num = article.match(word).length;
      if (num > maxNum) {
        maxNum = num;
        maxWord = item;
      }
    }
  });
  return maxWord + "  " + maxNum;
}

 7. Encapsulate asynchronous fetch and use async await

(async () => {
    class HttpRequestUtil {
        async get(url) {
            const res = await fetch(url);
            const data = await res.json();
            return data;
        }
        async post(url, data) {
            const res = await fetch(url, {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json'
                },
                body: JSON.stringify(data)
            });
            const result = await res.json();
            return result;
        }
        async put(url, data) {
            const res = await fetch(url, {
                method: 'PUT',
                headers: {
                    'Content-Type': 'application/json'
                },
                body: JSON.stringify(data)
            });
            const result = await res.json();
            return result;
        }
        async delete(url, data) {
            const res = await fetch(url, {
                method: 'DELETE',
                headers: {
                    'Content-Type': 'application/json'
                },
                body: JSON.stringify(data)
            });
            const result = await res.json();
            return result;
        }
    }
    const httpRequestUtil = new HttpRequestUtil();
    const res = await httpRequestUtil.get('http://golderbrother.cn/');
    console.log(res);
})();

 8. Implement prototype chain inheritance

So-called prototype chain inheritance sets the prototype of the child constructor to an instance of the parent class:

//父方法
function SupperFunction(flag1){
    this.flag1 = flag1;
}

//子方法
function SubFunction(flag2){
    this.flag2 = flag2;
}

//父实例
var superInstance = new SupperFunction(true);

//子继承父
SubFunction.prototype = superInstance;

//子实例
var subInstance = new SubFunction(false);
//子调用自己和父的属性
subInstance.flag1;   // true
subInstance.flag2;   // false

 9. Implement two-way data binding

let obj = {}
let input = document.getElementById('input')
let span = document.getElementById('span')
let data = '' // holds the current value of obj.text
// 数据劫持
Object.defineProperty(obj, 'text', {
  configurable: true,
  enumerable: true,
  get() {
    console.log('获取数据了')
    return data
  },
  set(newVal) {
    console.log('数据更新了')
    data = newVal
    input.value = newVal
    span.innerHTML = newVal
  }
})
// 输入监听
input.addEventListener('keyup', function(e) {
  obj.text = e.target.value
})

 10. Implement simple routing

// hash路由
class Route{
  constructor(){
    // 路由存储对象
    this.routes = {}
    // 当前hash
    this.currentHash = ''
    // 绑定this,避免监听时this指向改变
    this.freshRoute = this.freshRoute.bind(this)
    // 监听
    window.addEventListener('load', this.freshRoute, false)
    window.addEventListener('hashchange', this.freshRoute, false)
  }
  // 存储
  storeRoute (path, cb) {
    this.routes[path] = cb || function () {}
  }
  // 更新
  freshRoute () {
    this.currentHash = location.hash.slice(1) || '/'
    this.routes[this.currentHash]()
  }
}

 11. Implement the Fibonacci sequence

// 递归
function fn (n){
    if(n==0) return 0
    if(n==1) return 1
    return fn(n-2)+fn(n-1)
}
// 优化
function fibonacci2(n) {
    const arr = [1, 1, 2];
    const arrLen = arr.length;

    if (n <= arrLen) {
        return arr[n - 1];
    }

    for (let i = arrLen; i < n; i++) {
        arr.push(arr[i - 1] + arr[ i - 2]);
    }

    return arr[arr.length - 1];
}
// 非递归
function fn(n) {
    let pre1 = 1;
    let pre2 = 1;
    let current = 2;

    if (n <= 2) {
        return 1;
    }

    for (let i = 3; i < n; i++) {
        pre1 = pre2;
        pre2 = current;
        current = pre1 + pre2;
    }

    return current;
}

 12. The longest non-repeating length of a string

Use a sliding window that contains no repeated characters, enumerate the characters, and record the maximum window length. A map keeps the latest index of each character; when a repeated character is encountered, move the left boundary just past its previous occurrence, recording the maximum length along the way:

var lengthOfLongestSubstring = function (s) {
    let map = new Map();
    let i = -1
    let res = 0
    let n = s.length
    for (let j = 0; j < n; j++) {
        if (map.has(s[j])) {
            i = Math.max(i, map.get(s[j]))
        }
        res = Math.max(res, j - i)
        map.set(s[j], j)
    }
    return res
};

 13. Use setTimeout to implement setInterval

setInterval is meant to run a function once every specified interval, but the callback is not necessarily executed the moment the timer fires. What setInterval really does is push an event into the event queue every interval; the event can only be taken off the queue and executed once the current execution stack is empty. So if the current execution stack takes a long time to finish, several timer events may pile up in the queue, and when the stack finally empties they run back to back, losing the intended spacing between executions.

To work around this shortcoming of setInterval, we can simulate it with a recursive setTimeout: the next timer is only scheduled after the previous callback has finished, which avoids the pile-up problem.

The realization idea is to use a recursive function to continuously execute setTimeout to achieve the effect of setInterval

function mySetInterval(fn, timeout) {
  // 控制器,控制定时器是否继续执行
  var timer = {
    flag: true
  };
  // 设置递归函数,模拟定时器执行。
  function interval() {
    if (timer.flag) {
      fn();
      setTimeout(interval, timeout);
    }
  }
  // 启动定时器
  setTimeout(interval, timeout);
  // 返回控制器
  return timer;
}
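
A usage sketch: the returned controller object is used to stop the simulated interval (hypothetical tick logging):

```
const timer = mySetInterval(() => console.log('tick'), 1000);

// stop the simulated interval after roughly 5 seconds
setTimeout(() => {
  timer.flag = false;
}, 5000);
```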

 14. Implement jsonp

// 动态的加载js文件
function addScript(src) {
  const script = document.createElement('script');
  script.src = src;
  script.type = "text/javascript";
  document.body.appendChild(script);
}
addScript("http://xxx.xxx.com/xxx.js?callback=handleRes");
// 设置一个全局的callback函数来接收回调结果
function handleRes(res) {
  console.log(res);
}
// 接口返回的数据格式
handleRes({a: 1, b: 2});

 15. Determine whether the object has a circular reference

A circular reference inside an object is not a problem by itself, but it becomes one during serialization: calling `JSON.stringify()` on such an object throws `Converting circular structure to JSON`. The following method can be used to determine whether an object already contains a circular reference:

const isCycleObject = (obj,parent) => {
    const parentArr = parent || [obj];
    for(let i in obj) {
        if(typeof obj[i] === 'object') {
            let flag = false;
            parentArr.forEach((pObj) => {
                if(pObj === obj[i]){
                    flag = true;
                }
            })
            if(flag) return true;
            flag = isCycleObject(obj[i],[...parentArr,obj[i]]);
            if(flag) return true;
        }
    }
    return false;
}


const a = 1;
const b = {a};
const c = {b};
const o = {d: {a: 3}, c};
o.c.b.aa = o; // create a circular reference back to o

console.log(isCycleObject(o)); // true

Find a target value in a sorted two-dimensional array (each row is sorted left to right and each column top to bottom):

var findNumberIn2DArray = function(matrix, target) {
    if (matrix == null || matrix.length == 0) {
        return false;
    }
    let row = 0;
    let column = matrix[0].length - 1;
    while (row < matrix.length && column >= 0) {
        if (matrix[row][column] == target) {
            return true;
        } else if (matrix[row][column] > target) {
            column--;
        } else {
            row++;
        }
    }
    return false;
};

Print a 2D array diagonally:

function printMatrix(arr){
  let m = arr.length, n = arr[0].length
    let res = []
  
  // 左上角,从0 到 n - 1 列进行打印
  for (let k = 0; k < n; k++) {
    for (let i = 0, j = k; i < m && j >= 0; i++, j--) {
      res.push(arr[i][j]);
    }
  }

  // 右下角,从1 到 n - 1 行进行打印
  for (let k = 1; k < m; k++) {
    for (let i = k, j = n - 1; i < m && j >= 0; i++, j--) {
      res.push(arr[i][j]);
    }
  }
  return res
}

Origin blog.csdn.net/weixin_51225684/article/details/130424617