Front-end performance analysis tool - Lighthouse

1. A basic introduction to Google's Lighthouse

Lighthouse is a website performance evaluation tool: an open-source automated tool from the Google Chrome team. It can evaluate the performance of PWAs and ordinary web pages across a range of indicators and offer best-practice suggestions to help developers improve site quality. It is also very simple to use: we just provide a URL to be measured, and Lighthouse runs a series of audits against that page and then generates a report on its performance. From the report we can see what actions to take to improve the application's performance and user experience.

2. Lighthouse user guide

In newer versions of the Chrome browser (roughly version 60 and above), Lighthouse is integrated directly into the DevTools debugging tools, so no installation or download is required.

Press F12 to open the developer tools. Alongside the Console, Security and other panels you will find an Audits panel (in newer versions of Chrome, or with the Lighthouse plug-in installed, it may be labelled Lighthouse). Select that panel and click "Generate report".

Depending on the scenario, we can install and use Lighthouse in a variety of ways:

  • Chrome browser extension. Provides a more user-friendly interface in the form of a Chrome extension, making reports easy to read.

  • Chrome DevTools. The tool is integrated in the latest version of the Chrome browser and can be used without installation.

  • Lighthouse CLI. A command-line tool that makes it convenient to integrate Lighthouse into a continuous-integration system.

  • Node.js module. We can also import the Lighthouse toolkit as a Node.js module and use it programmatically, as sketched below.
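For the programmatic route, here is a minimal sketch of driving Lighthouse from Node.js, based on the lighthouse and chrome-launcher npm packages (the URL and option values are illustrative assumptions):

  // install first: npm install lighthouse chrome-launcher
  const lighthouse = require('lighthouse');
  const chromeLauncher = require('chrome-launcher');

  (async () => {
    // launch a headless Chrome for Lighthouse to control
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const options = { output: 'html', port: chrome.port };

    // run the full evaluation against the target URL
    const runnerResult = await lighthouse('https://example.com', options);

    // runnerResult.lhr is the parsed result; runnerResult.report is the HTML text
    console.log('Performance score:',
      runnerResult.lhr.categories.performance.score * 100);

    await chrome.kill();
  })();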

3. Lighthouse generates website reports

First visit the website to be evaluated, for example http://www.baidu.com, then click the Lighthouse button and select "Generate report".

Lighthouse will run a series of audits against the page and then present a set of performance indicators in the form of a report. You can follow the hints attached to each indicator to improve your web application. Lighthouse can produce the report as either HTML or JSON: the HTML report can be opened directly in a browser, while a JSON report can be viewed by loading it into the Lighthouse Report Viewer.
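Assuming the CLI is installed globally (npm install -g lighthouse), the report format and destination can be chosen with the --output and --output-path flags; the file names here are just examples:

  # write an HTML report to a file (HTML is the default format)
  lighthouse https://www.baidu.com --output html --output-path ./report.html

  # write a JSON report instead, which can be loaded into the Lighthouse Report Viewer
  lighthouse https://www.baidu.com --output json --output-path ./report.json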

4. The life cycle of a Lighthouse run

A Lighthouse evaluation run has a complete life cycle that can be divided into three main phases:

Collecting (data collection): the first phase is Collecting. This step calls the built-in driver (Driver), whose job is to control the browser via the Chrome DevTools Protocol: it opens a new tab, requests the site to be evaluated, collects the site's data through the browser, and saves the results (called Artifacts) in a local temporary directory.

Auditing (data analysis): next comes the Auditing phase, which reads the Artifacts data, checks each item against the built-in evaluation strategies, and computes a numeric score for each item.

Report (report generation): finally, the Report phase groups the scoring results by dimension (PWA, performance, accessibility, best practices, etc.) and outputs them in JSON, HTML and other formats.


Based on this, the command-line tool provides life-cycle options so that we can make the CLI run only one or more specific phases of the whole evaluation process. For example, with --gather-mode (-G) only the resource-collection phase is performed: the CLI starts the browser, collects the data for the site under test, stores the results locally as JSON (by default in the ./latest-run/ directory), and then exits:

  lighthouse https://example.com/ -G

If you want to skip the browser interaction and instead read the page's temporary data directly from disk, run the audits, and produce the result report, you can use --audit-mode (-A); by default it reads from the ./latest-run/ directory:

  lighthouse https://example.com/ -A

If the two options are used at the same time, the entire evaluation life cycle is run. Compared with running the lighthouse command directly, the only difference is that a copy of the tested site's data is saved in the current directory:

  lighthouse https://example.com -GA

If we don't want to use the default ./latest-run/ directory, we can also specify a custom directory for the saved site JSON data, for example:

  lighthouse -GA=./mycustomfolder https://example.com
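Putting the modes together, a common pattern is to gather once and then re-run the audits offline as often as needed. This sketch assumes -G and -A accept a custom folder in the same way -GA does, and the folder name is made up:

  # gather artifacts from the live site into a custom folder, then exit
  lighthouse -G=./site-artifacts https://example.com

  # later: re-run only the audit phase against the saved artifacts
  lighthouse -A=./site-artifacts https://example.com --output json --output-path ./report.json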

Reference: "Use Lighthouse to Evaluate PWA", PWA Application Practice, Appendix 1.

5. Indicator analysis of the Lighthouse report

After evaluating a website with Lighthouse, we get a score report covering five parts: Performance, Accessibility, Best Practices, Search Engine Optimization (SEO), and PWA (Progressive Web App):

5.1. Performance

The performance score ranges from 0 to 100. A score of 0 usually indicates an error while running Lighthouse. A full score of 100 means the site's metrics reach the 98th percentile of real-site data, while a score of 50 corresponds to the 75th percentile.

Performance indicators that affect the score: the performance results are divided into three parts (Metrics, Diagnostics, and Opportunities), but only the indicators in the Metrics part directly affect the score.

Lighthouse measures the following Metrics performance indicators:

  • First Contentful Paint. The point in time at which the browser first paints any content (text, images, canvas, etc.) to the screen.

  • First Meaningful Paint. Measures how long it takes for the page's primary content to become visible to the user. The primary content differs between sites: for a blog article it is the title and above-the-fold text, while for a shopping site images are also very important.

  • First CPU Idle. The point in time at which the page can first respond to input; it usually comes after First Meaningful Paint has completed. This indicator is still experimental.

  • Time to Interactive. The point in time at which all page content has loaded and the page can respond quickly to user actions. This indicator is still experimental.

  • Speed Index. Measures how quickly above-the-fold visible content is painted to the screen. Displaying as much content as possible during the initial load usually gives a better user experience, so the Speed Index should be as small as possible.

  • Estimated Input Latency. Measures how quickly the page responds to user input; its baseline value should be below 50 ms.

The indicators in the Metrics section will directly affect the score and can be used as our main reference point.

In the other two parts, Opportunities lists optimization opportunities, providing detailed suggestions and documentation that explain the reasons for a low score and help us implement improvements. Diagnostics lists existing problems and gives guidance for further experimentation and tuning. Neither part is included in the score calculation.
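As a concrete illustration, a saved JSON report can be inspected programmatically. The audit IDs below follow Lighthouse's naming convention, and the file path refers to the report.json example from earlier:

  const report = require('./report.json');

  // each audit is keyed by its ID and carries a 0-1 score plus display values
  const fcp = report.audits['first-contentful-paint'];
  console.log(fcp.displayValue, fcp.score);

  // the category score aggregates the weighted metric scores
  console.log('Performance:', report.categories.performance.score * 100);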

5.1.1. Performance Scoring Criteria

Each performance indicator contributes to the score according to its own calculation logic; Lighthouse maps the raw performance value to a number between 0 and 100.

For example, the raw value of FMP (First Meaningful Paint) is the time from page initialization until the main content has rendered. Based on data from real sites, the FMP of top-performing sites is around 1220 ms, and Lighthouse maps this value to a score of 99.
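The mapping is not linear: Lighthouse derives each metric's score from a log-normal curve fitted to real-site data. The sketch below only illustrates the idea; the erf approximation is the standard Abramowitz-Stegun formula, and the control points (median, p10) are made-up values, not Lighthouse's actual constants:

  // Abramowitz-Stegun approximation of the error function
  function erf(x) {
    const sign = x < 0 ? -1 : 1;
    x = Math.abs(x);
    const t = 1 / (1 + 0.3275911 * x);
    const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                  - 0.284496736) * t + 0.254829592) * t;
    return sign * (1 - poly * Math.exp(-x * x));
  }

  // median scores 0.5 and p10 scores 0.9; smaller raw values score higher
  function logNormalScore(median, p10, value) {
    const ERF_INV_08 = 0.9061938; // erf^-1(0.8), precomputed
    const sigma = (Math.log(median) - Math.log(p10)) / (Math.SQRT2 * ERF_INV_08);
    const z = (Math.log(value) - Math.log(median)) / (Math.SQRT2 * sigma);
    return (1 - erf(z)) / 2;      // complementary log-normal CDF
  }

  // with made-up control points median = 4000 ms and p10 = 1800 ms:
  console.log(logNormalScore(4000, 1800, 1220)); // ≈ 0.97, i.e. a score in the high 90s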

For different scores, Lighthouse uses different colors for marking. The corresponding relationship between the score range and the color is as follows:

  • 0 - 49 (slow): red

  • 50 - 89 (average): orange

  • 90 - 100 (fast): green

Each indicator contributes differently to the performance score: indicators with larger weights have a greater impact. For the weight distribution of each indicator, see: https://docs.google.com/spreadsheets/d/1Cxzhy5ecqJCucdf1M0iOzM8mIxNc7mmx107o5nj38Eo/edit#gid=0
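For intuition, the category score is a weighted average of the individual metric scores. The weights below are invented for the example; the real ones are in the spreadsheet above:

  const metricScores = { fcp: 0.95, si: 0.90, tti: 0.80, fmp: 0.99 }; // 0-1 audit scores
  const weights      = { fcp: 3,    si: 4,    tti: 5,    fmp: 1    }; // made-up weights

  const totalWeight = Object.values(weights).reduce((a, b) => a + b, 0);
  const perf = Object.keys(weights)
    .reduce((sum, k) => sum + weights[k] * metricScores[k], 0) / totalWeight;

  console.log(Math.round(perf * 100)); // 88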

5.2. Accessibility

The Accessibility score is calculated as a weighted average of the relevant audit items; the specific weight of each item can be checked in the scoring details. As with performance, items with larger weights have a greater impact on the score.

Unlike the performance metrics, each accessibility audit item simply passes or fails. When a page only partially passes an audit, it gets no credit for that item. For example, if some elements on the page have screen-reader-friendly names but others do not, the page scores 0 for the screenreader-friendly-names audit.

5.3. Best Practices

The best-practices score also ranges from 0 to 100, and every audit item that affects it carries the same weight.

For example: HTTPS should be used, cross-origin links should set the rel attribute (e.g. rel="noopener"), deprecated APIs should not be used, and so on.

5.4. Search Engine Optimization (SEO)

This part checks items that help search engines find and rank your site, for example that image elements carry alt attributes (e.g. <img src="logo.png" alt="Company logo">).

5.5. PWA (Progressive Web App)

Lighthouse evaluates PWAs against the Baseline PWA Checklist. The results group these indicators into four categories, comprising 12 automated audits and 3 manual checks; each automated audit carries the same weight. The PWA indicators are very important to us, and the four categories below give a good overview of the baseline requirements.

Fast and reliable:

  1. Pages load quickly under mobile network conditions.

  2. The page returns status code 200 when offline. Here we can use a Service Worker to make it available offline (see the sketch after this list).

  3. start_url returns status code 200 when offline. start_url is a property in the manifest.json we mentioned in the previous chapter; it specifies the URL to load when the user opens the PWA.
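A minimal sketch of that Service Worker approach (the cache name and precache list are assumptions for illustration):

  // sw.js: precache the app shell on install, then serve requests cache-first
  const CACHE = 'app-shell-v1';
  const PRECACHE = ['/', '/index.html', '/app.js', '/manifest.json'];

  self.addEventListener('install', event => {
    event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(PRECACHE)));
  });

  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.match(event.request).then(hit => hit || fetch(event.request))
    );
  });

With this in place, both the page and start_url can answer with a 200 from the cache even when the network is unavailable.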

Can be installed:

  1. Always use HTTPS.

  2. Register a Service Worker to cache the page and start_url.

  3. Use a manifest file that meets the PWA install requirements, so the browser can proactively prompt the user to add the application to the home screen and improve retention (see the sample manifest after this list).
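A sample manifest.json that covers the basic install requirements (all values are illustrative):

  {
    "name": "Example PWA",
    "short_name": "Example",
    "start_url": "/?source=pwa",
    "display": "standalone",
    "background_color": "#ffffff",
    "theme_color": "#2196f3",
    "icons": [
      { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
      { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
    ]
  }

Note that background_color together with an icon also drives the custom splash screen, and theme_color sets the address-bar color; both come up again in the optimization list below.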

PWA optimization:

  1. Redirect HTTP traffic to HTTPS.

  2. Configure a custom splash screen.

  3. Set the address bar theme color.

  4. The content of the page is adaptive to the size of the viewport, which is more friendly to mobile users.

  5. A <meta name="viewport"> tag is used, with the width or initial-scale attribute set.

  6. When the JavaScript file is not available, fallback measures are provided, and the page can display basic content without a white screen.

Manual test items:

  1. The site works across browsers such as Chrome, Edge, Firefox, and Safari.

  2. Switching between pages is smooth. Even in a poor network environment, the switching animation should be simple and smooth, which is the key to improving the user's perceived experience.

  3. Make sure each page has a unique URL, opens in a new browser window, and is easy to share on social media.

In addition to the above baseline indicators, there are some advanced indicators that Lighthouse has not yet implemented but that can serve as exemplary references for making the PWA experience even better, such as user experience, caching, and push notifications.

Reference: "Lighthouse Scoring Guide", PWA Application Practice, Appendix 2.

6. Node and Chrome version requirements

Using Lighthouse imposes certain version requirements on Node.js and the Chrome browser:

Node.js >= 8.9 (not verified)

The Chrome browser should be version 79 or later; otherwise Lighthouse may fail to run, or some indicators may not be displayed. For example, with Chrome 75.0.3770.80 installed, running Lighthouse from the command line produced two abnormal indicators:

The Performance and Best Practices indicators failed to render, and Lighthouse reported errors for both.
