Front-End Interview Performance Optimization (Part 10): 10 Small Knowledge Points a Day




Series Article Directory

Front-End Interview Performance Optimization (Part 1): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 2): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 3): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 4): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 5): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 6): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 7): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 8): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 9): 10 Small Knowledge Points a Day

Front-End Interview Performance Optimization (Part 10): 10 Small Knowledge Points a Day

Knowledge Points

90. How do you handle inline scripts and inline styles in front-end code, and what is their impact on performance?

Inline scripts and inline styles embed JavaScript code and CSS rules directly in the HTML page. While this reduces the number of external requests, it also has performance and maintainability implications. Here are some approaches and considerations for handling inline scripts and styles, and their impact on performance:

Handling inline scripts:

  1. Keep them small: Inline scripts increase the size of the HTML file and can slow down page loading. Keep inline scripts as lean as possible, stripping unnecessary whitespace, comments, and duplicated code.
  2. Asynchronous loading: Scripts that do not have to execute while the page is being parsed can be externalized and loaded with the async or defer attribute so they do not block rendering (note that these attributes only apply to external scripts).
  3. Externalization: If a script is large or needs to be reused, consider moving it to a separate JavaScript file referenced with a <script> tag. This enables caching and reuse, at the cost of one extra network request.
  4. Event delegation: For event handling that would otherwise be written as inline handlers, use event delegation to reduce the number of inline scripts and improve performance (a sketch follows this list).
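
As a hedged illustration of point 4, here is a minimal event-delegation sketch in TypeScript; the element selector and data attribute are hypothetical:

```typescript
// One listener on the parent replaces per-item inline onclick handlers and
// also covers items added to the list later.
const list = document.querySelector<HTMLUListElement>('#product-list'); // hypothetical container

list?.addEventListener('click', (event) => {
  const item = (event.target as HTMLElement).closest<HTMLLIElement>('li.product');
  if (item) {
    console.log('Clicked product:', item.dataset.id); // assumes a data-id attribute on each <li>
  }
});
```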

Handling inline styles:

  1. Style merging: Avoid repeating the same inline styles on multiple elements; merge shared styles and define them centrally in <style> tags or an external CSS file.
  2. Optimizing selectors: When moving inline styles into a stylesheet, use appropriately scoped selectors so each rule applies precisely to the intended elements and avoids unintended overrides.
  3. Avoid duplication: Don't inline the same styles in multiple places; this bloats the HTML. Use class names or IDs to reference shared styles where needed.
  4. Responsive design: For responsive layouts, manage styles for different screen sizes with media queries and external CSS files rather than with inline styles.

Considerations that affect performance:

  1. HTML Size: Inline scripts and styles increase the size of the HTML file, resulting in slower page loads. Especially for mobile users, this can significantly affect the experience.
  2. Maintainability: Excessive inline scripts and styles can make the code difficult to maintain. Externalized scripts and styles are easier to manage and update.
  3. Caching: Inline scripts and styles cannot be cached independently of the HTML and are re-downloaded with every page view. External files can take advantage of browser caching, so they load faster on subsequent visits.

When dealing with inline scripts and inline styles, there is a trade-off between performance, maintainability, and development efficiency. In general, it is reasonable to externalize shared or sizeable scripts and styles so they can be cached and reused, and to reserve inlining for small, page-specific code or critical rendering-path CSS. This keeps the code maintainable and readable while preserving performance.

91. Have you encountered performance problems in the front-end packaging and build process? How did you speed up development and deployment by optimizing your build process?

Yes, performance issues can arise during front-end packaging and builds, especially in large projects or complex applications. Here are some of the approaches I have used to optimize the build process for faster development and deployment:

1. Code Splitting:
Split the application into smaller modules to enable on-demand loading and reduce the initial bundle size. Use features such as Webpack's dynamic import to split code so that each page or route loads only what it needs.
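
A minimal sketch of such a split, assuming a bundler (e.g. Webpack) that turns dynamic import() calls into separate chunks; the module path and export are hypothetical:

```typescript
// The reports module stays out of the initial bundle and is fetched only
// when the user actually opens that view.
async function openReports(): Promise<void> {
  const { renderReports } = await import('./reports'); // hypothetical module, emitted as its own chunk
  renderReports(document.getElementById('app')!);
}

document.getElementById('nav-reports')?.addEventListener('click', () => {
  void openReports();
});
```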

2. Tree Shaking:
Use tree shaking to eliminate unused code and reduce bundle size. Make sure modules are exported and imported as ES modules in the codebase; code that is never referenced can then be dropped during the build.

3. Persistent caching:
Include a content hash or version number in output filenames during the build so generated files can be cached long-term; when a new version is released, only the files that actually changed are re-downloaded and everything else is served from the browser cache.

4. Parallel builds:
For large projects, run build tasks such as compiling, minifying, and copying files in parallel to speed up the overall build.

5. Build caching:
In build tools such as Webpack, configure a sensible caching strategy (for example a persistent/filesystem cache) so that unchanged modules are not recompiled and repackaged.

6. Optimize loaders:
Make sure every loader is actually needed and configure it appropriately, for example by using include/exclude rules to limit which files it parses or transforms.

7. Parallel minification:
During the build, use minifiers that support parallelism (for example the parallel option of terser-webpack-plugin) to speed up JavaScript minification.

8. Cache dependencies:
Use dependency caching, such as Yarn's or npm's cache, to avoid repeatedly downloading and installing dependencies.

9. Pre-build cache:
In a continuous-integration environment, restore a cache before the build step to cut down on dependency download and installation time.

10. Optimize image processing:
Use image compression tools such as ImageOptim or TinyPNG to reduce image file sizes.

11. Production-build optimization:
In production builds, enable optimizations such as code minification, obfuscation, CSS extraction, and removal of debugging information to reduce file size.

12. Bundle analysis tools:
Use build analysis tools such as Webpack Bundle Analyzer to inspect the output, identify the largest modules, and optimize them further.

Applying these optimizations together can significantly improve front-end build performance, speed up development and deployment, and improve both project efficiency and user experience.

92. How do you handle client-side updates and versioning when front-end caches go stale, to ensure users get the latest content?

When cached front-end assets go stale, making sure clients can still get the latest content is very important. Here are some ways to handle client updates and versioning:

1. Versioned filenames:
Version the filenames on every release, for example renaming app.js to app.v1.js or, more commonly, embedding a content hash such as app.3f9c2b.js. This ensures the browser re-downloads the new file instead of using a cached old version.

2. Cache clearing strategy:
In the front-end application, the following strategies can be used to clear the cache and obtain the latest content:

  • Manual Refresh: Users can manually refresh the page to force the browser to re-request all resources.
  • Auto-refresh: In some cases, such as a major version upgrade of an application, code logic can be used to trigger an automatic page refresh.
  • Clear the cache of specific resources: You can set the cache expiration time of specific resources on the server side to ensure that these resources will be downloaded again after a period of time.

3. Version Control and Release Process:
Implement a strict version control and release process to ensure that every change is properly versioned and deployed. This helps ensure that filenames are updated when new versions are released, and clears the cache if needed.

4. Service Workers:
If the application uses a Service Worker, update the Service Worker script when a new version is released and add logic to it that clears stale caches and fetches the new content. This lets the app update in the background even if the user never manually refreshes the page.
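
A hedged sketch of the Service Worker side of this: when a new version activates, it deletes caches left over from previous versions. The cache-naming scheme is an assumption:

```typescript
// sw.ts: runs in the Service Worker context (with the "webworker" lib these
// events are properly typed; `any` keeps the sketch short).
const CURRENT_CACHE = 'app-cache-v2'; // assumed convention: bump on every release

self.addEventListener('activate', (event: any) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(
        keys
          .filter((key) => key !== CURRENT_CACHE) // caches from older versions
          .map((key) => caches.delete(key))
      )
    )
  );
});
```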

5. Reasonable cache strategy:
When defining the cache strategy, use appropriate cache headers, such as Cache-Control and Expires, together with version numbers to control how long resources are cached. Set cache lifetimes sensibly so that new resources are picked up promptly when content changes.
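
A minimal server-side sketch of that split using Node's built-in http module; the URL prefix and max-age values are illustrative assumptions:

```typescript
import http from 'node:http';

http.createServer((req, res) => {
  if (req.url?.startsWith('/static/')) {
    // Hashed assets: safe to cache for a long time, because the filename changes with the content.
    res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
  } else {
    // HTML entry points: always revalidate so users pick up new asset URLs promptly.
    res.setHeader('Cache-Control', 'no-cache');
  }
  res.end('...'); // actual file serving omitted
}).listen(3000);
```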

6. Inform users:
In the app interface, you can display a notification to users that their current version is out of date and encourage them to refresh the page to get the latest content.

Used together, these methods ensure that clients obtain the latest content when cached assets go stale and provide a better user experience. Establishing a clear release process and version-control mechanism also helps the team manage updates and cache invalidation.

93. When using dynamic imports (Dynamic Imports) in front-end code, how do you balance module splitting against loading-performance requirements?

Using dynamic imports (Dynamic Imports) in front-end code is an effective way to balance module splitting against loading-performance needs. Dynamic loading defers modules until they are needed, which reduces the initial payload, but striking that balance requires attention to the following aspects:

1. Granularity of module splitting:
Split modules at a moderate granularity; splitting too finely produces too many network requests. Split along features, pages, or routes so that each chunk has a reasonable size and can be loaded when needed without generating excessive requests.

2. Initial page-load performance:
For pages where initial load performance matters most, ship only the core functionality in the initial bundle and defer non-essential modules. This shortens first-load time and provides a better user experience.

3. Lazy-loading strategy:
Base the lazy-loading strategy on how the page is used and how users behave. For example, load related modules when the user scrolls to a specific area, or trigger loading on a click.
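
For instance, a hedged sketch that loads a module only when its section scrolls into view; the element id and module path are hypothetical:

```typescript
const commentsSection = document.getElementById('comments'); // hypothetical section

if (commentsSection) {
  const observer = new IntersectionObserver(async (entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      observer.disconnect();                                  // load only once
      const { renderComments } = await import('./comments');  // fetched on demand
      renderComments(commentsSection);
    }
  });
  observer.observe(commentsSection);
}
```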

4. Preloading:
Use the browser's <link rel="preload"> tag to preload modules that are likely to be needed soon so they load faster when requested.
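
A small sketch of adding such a hint from script; the chunk URL is a hypothetical hashed file emitted by the bundler (with Webpack, the /* webpackPrefetch: true */ magic comment achieves a similar effect):

```typescript
// Ask the browser to fetch a chunk we expect to need soon, so it is already
// in the HTTP cache when the dynamic import for it runs.
const hint = document.createElement('link');
hint.rel = 'preload';
hint.as = 'script';
hint.href = '/assets/checkout.3f9c2b.js'; // hypothetical hashed chunk
document.head.appendChild(hint);
```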

5. User experience:
Throughout splitting and loading, always put the user experience first. Make sure module loading does not interfere with the user's actions and interactions, avoiding jank or noticeable delays.

6. Performance monitoring and optimization:
Use performance monitoring tools to analyze page loading performance, determine which modules are loading slowly, and then perform targeted optimization. Consider using tools such as Webpack Bundle Analyzer to analyze module size and dependencies to help make more reasonable splitting and loading decisions.

7. Device and network conditions:
Take the user's device and network conditions into account and adjust the loading strategy accordingly. For example, on mobile devices or slow networks, load dynamically more conservatively to avoid fetching too many modules.

Weighing these factors against the specific project and scenario, you can balance module splitting against loading-performance requirements, provide a better user experience, and optimize the performance of the front-end application.

94. Talk about how you can optimize the loading of icons and small images by using CSS Sprites or Icon Fonts.

Using CSS Sprites and Icon Fonts is a common way to optimize the loading of icons and small images, which can reduce the number of HTTP requests and thus improve page performance. Here are some explanations and suggestions on how to use both methods:

CSS Sprites:

CSS Sprites combine multiple small icons into a large image, and then display different icons through the background-position property of CSS. Doing so reduces HTTP requests and improves loading speed.

  1. Icon merging: Merge all small icons into one large image, either manually or automatically with a tool.
  2. CSS settings: Set appropriate background-position values in CSS to display the desired icon. By adjusting these values, different icons can be shown on different elements.
  3. Sprite generation tools: Online tools or build-tool plugins such as webpack-spritesmith can generate CSS Sprites automatically.

Icon Fonts:

Icon Fonts use a font file to represent icons: each icon corresponds to a character in the font, and CSS is used to display it.

  1. Choose an icon font library: Use an existing icon font library, such as FontAwesome or Material Icons, or create your own.
  2. Font file import: Import the icon font files (usually .woff, .woff2, .eot, .ttf, etc.) into the project.
  3. CSS settings: Display the corresponding icon by setting the font-family and content properties.

Pros and Considerations:

  • Advantages: Both methods reduce the number of HTTP requests and improve loading speed; icon fonts are also vector-based, so they can be scaled without distortion.
  • Note: With CSS Sprites, merging icons can be cumbersome, and background-position values need to be managed manually when the same icon is used in multiple places. With Icon Fonts, watch the font file size, and be aware that text-selection and accessibility issues can arise in some cases.

Choose the appropriate method:

Choosing between CSS Sprites and Icon Fonts depends on the specific needs and the project. If you need a rich icon set and want to change colors and styles easily, consider Icon Fonts. If you need finer control and want to minimize icon file size, CSS Sprites may be the better fit. With the widespread adoption of SVG icons, SVG is also worth considering: it is vector-based, scalable, and well supported by browsers.

95. Please describe a challenge you encountered in front-end performance optimization, and explain in detail how you solved it and the lessons learned from it.

Challenge:
I once faced a challenge in a front-end performance optimization project for an e-commerce website. The site loaded slowly on mobile, and on 3G networks in particular the load time was far too long, which hurt the user experience and conversion rate. Analysis showed that the main problem was that the page loaded too many images, and they were too large, which increased page load time.

Solution:
I took the following approach to resolve this performance issue:

  1. Image compression: First, I compressed all the images on the site with tools such as TinyPNG, reducing file size while keeping acceptable quality.
  2. Lazy loading: I implemented lazy loading for the images on the page so that each image starts loading only when it enters the viewport (see the sketch after this list). This reduces the number of image requests and the page load time on initial load.
  3. Appropriate formats: I chose a suitable format for each kind of image, such as JPEG, PNG, or WebP. For browsers that support WebP, serving WebP further reduces image size.
  4. CDN: Hosting images on a CDN speeds up delivery and reduces network latency.
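
A hedged sketch of the lazy-loading step (item 2): images start with only a data-src attribute and receive their real src as they approach the viewport. The attribute name is a common convention rather than a specific library's API; modern browsers also offer the native loading="lazy" attribute.

```typescript
const lazyImages = document.querySelectorAll<HTMLImageElement>('img[data-src]');

const imageObserver = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src!;        // swap in the real URL
      img.removeAttribute('data-src');
      observer.unobserve(img);           // each image only needs to load once
    }
  }
}, { rootMargin: '200px' });             // start a little before the image becomes visible

lazyImages.forEach((img) => imageObserver.observe(img));
```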

Lessons Learned:
From this challenge, I learned the following lessons:

  1. Performance analysis: Before solving a performance problem, you first need to perform a performance analysis to understand the root cause of the problem. Only when the problem is clearly identified can targeted measures be taken.
  2. Image optimization: Images play an important role in front-end performance, and optimizing images properly can significantly improve page load speed. Compression, lazy loading, and selection of the appropriate format are all critical steps.
  3. Technology selection: When choosing an optimization method, decisions need to be made based on project needs and target audience. Different projects may require different optimization strategies.
  4. Testing and monitoring: After optimization, it is important to conduct comprehensive testing to ensure that the improvement is as expected. Also, set up performance monitoring to regularly evaluate and adjust optimization strategies to maintain a good user experience.

Through this experience, I have a deeper understanding of the importance of front-end performance optimization, and also learned how to comprehensively consider problems from multiple perspectives and take appropriate measures to solve performance challenges.

96. Have you ever encountered the impact of front-end unit testing or integration testing on performance? How do you strike a balance between testing and performance?

Front-end unit tests and integration tests may have an impact on performance to some extent. While testing is a critical step in ensuring code quality and functional correctness, an inappropriate testing strategy can lead to performance issues. It is very important to strike a balance between testing and performance, here are some ways to handle this balance:

  1. Control the test scope: When writing unit and integration tests, keep the scope focused on code logic and functionality, and avoid mixing in too many performance-related checks. Keep performance-related tests separate so they are easier to manage and control.
  2. Independent performance testing: Run performance tests as a separate phase, for example a dedicated performance-testing stage in the continuous-integration pipeline. This keeps performance testing from interfering too much with normal unit and integration testing.
  3. Simulation and virtualization: In performance tests, use simulated data and virtual environments to mimic real scenarios and reduce the load on real resources. For example, test network-request performance against a virtual network environment instead of a real server.
  4. Performance-testing tools: Use specialized tools such as JMeter or LoadRunner, which can simulate many concurrent users and evaluate the application's behavior under high load.
  5. Test-environment similarity: Keep the test environment as close to production as possible so results reflect real-world performance. Differences between environments can make performance results inaccurate.
  6. Continuous monitoring: Introduce performance monitoring into the project, run performance tests and monitoring regularly, and address problems as soon as they appear.
  7. Trade-offs: When balancing testing against performance, make trade-offs based on the project's actual situation and needs: test critical business logic more rigorously, and performance-test the performance-critical parts in more detail.

In conclusion, testing and performance have to be balanced carefully. With a reasonable testing strategy and performance-testing methodology, you can ensure code quality and functional correctness while still identifying and resolving potential performance issues.

97. When using third-party APIs or external resources, how do you ensure their reliability and performance to avoid impact on the application?

Ensuring the reliability and performance of third-party APIs or external resources is critical to app stability and user experience. Here are some ways to ensure that these external resources do not negatively impact your application:

1. Choose a reliable provider: When choosing a third-party API or external resource, choose a verified and reliable provider. Check out its documentation, community support, and user reviews for stability and performance.

2. Monitoring and alerting: Use monitoring tools to track the performance and availability of third-party APIs or external resources in real time. Set up alerting so that abnormal behavior or latency is acted on quickly.

3. Alternatives: When relying on a third-party API, consider whether alternatives exist: if the primary API has problems, is there a fallback that keeps the application working?

4. Exception handling: Add appropriate error handling to the code: for example, decide how the application should respond if a call to a third-party API fails, and whether there is fallback data or behavior.

5. Caching and preloading: For frequently used external resources, consider a caching layer to reduce repeated requests. Preloading can fetch resources before the user needs them so the content is already available.

6. Rate limiting and timeout settings: When calling third-party APIs, apply sensible rate limiting to avoid sending too many requests, and set reasonable timeouts to avoid long waits.
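
A hedged sketch of a timeout plus fallback around a third-party call; the endpoint and fallback shape are hypothetical:

```typescript
// Abort the request if the third-party API takes too long, and degrade
// gracefully to a fallback value instead of blocking the UI.
async function fetchWithTimeout<T>(url: string, timeoutMs: number, fallback: T): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) return fallback;     // API error: use the fallback
    return (await res.json()) as T;
  } catch {
    return fallback;                  // timeout or network failure
  } finally {
    clearTimeout(timer);
  }
}

// Usage (hypothetical endpoint): give up after 3 seconds and fall back to an empty list.
// const deals = await fetchWithTimeout<unknown[]>('https://api.example.com/deals', 3000, []);
```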

7. Version control: Make sure that the version of the third-party API used is a stable and tested version to avoid problems that may exist when using an overly new version.

8. Testing: During the development and testing phase, fully test the third-party APIs used. Simulate various scenarios, including normal and abnormal conditions, to ensure that the application can perform normally in various situations.

9. Concurrent processing: An application may call several third-party APIs at the same time; pay attention to how concurrent requests are handled to avoid performance problems or resource contention.

10. Monitoring and optimization: Regularly evaluate the performance of third-party APIs and optimize as needed. Also keep an eye on providers' updates and changes so the application can adapt to them.

Applying these methods together helps ensure the reliability and performance of third-party APIs and external resources, which in turn improves application stability and user experience.

98. When dealing with front-end caching, how do you handle dynamic content and frequently updated data so that users get the latest information?

When dealing with dynamic content and frequently updated data, some adjustments need to be made in the front-end caching strategy to ensure that users get the latest information while maintaining good performance. Here are some ways to help with this situation:

  1. Cache granularity control: For frequently updated data, use different cache granularities for different parts or attributes of the data: cache rarely changing parts longer and frequently changing parts for a shorter time.
  2. Cache expiration: For dynamic content, set a shorter cache expiration time so users get fresher data, and adjust the cache duration dynamically according to the data's characteristics and importance.
  3. Conditional requests: Use conditional-request mechanisms such as the HTTP If-Modified-Since and If-None-Match headers so the server can determine whether the data has changed and return a 304 status code when it has not, saving bandwidth and response time (see the sketch after this list).
  4. Version control: add a version number or identifier to the URL or cache key, and update the version number when the data is updated, which can force the cache to be refreshed.
  5. Real-time push: For critical dynamic data, you can use real-time push technology, such as WebSocket or Server-Sent Events, to push the data to the client in real time, reducing the dependence on the cache.
  6. Manual Refresh: For a specific page or section, a manual refresh button can be provided to allow the user to manually update the data.
  7. Background data synchronization: Data synchronization is performed regularly or triggered by events in the background to maintain the consistency of cached data and server data.
  8. Incremental update: When data is updated, only the changed part is updated, not the entire data, thereby reducing the amount and time of updating data.
  9. Cache fallback: When the cache expires or the data is unavailable, an appropriate fallback mechanism can be provided, such as displaying default data or prompting the user that the data may not be up to date.
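
As a hedged illustration of item 3, here is a minimal application-level sketch of a conditional request with ETags (browsers' HTTP caches usually do this transparently; the explicit version just makes the mechanism visible). The endpoint is hypothetical:

```typescript
let cachedEtag: string | null = null;
let cachedBody: unknown = null;

async function fetchLatest(url: string): Promise<unknown> {
  const headers: Record<string, string> = {};
  if (cachedEtag) {
    headers['If-None-Match'] = cachedEtag; // ask the server whether the data changed
  }
  const res = await fetch(url, { headers });
  if (res.status === 304 && cachedBody !== null) {
    return cachedBody;                     // not modified: reuse the cached copy
  }
  cachedEtag = res.headers.get('ETag');    // remember the new validator
  cachedBody = await res.json();           // updated content
  return cachedBody;
}
```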

In short, dealing with dynamic content and frequently updated data requires comprehensive consideration of data characteristics, user experience, and performance requirements. Through appropriate caching strategies and technical means, it is possible to provide up-to-date information while maintaining good performance.

99. Talk about your automation and continuous-integration practices in front-end performance optimization. How do you ensure that performance-optimization changes remain effective after the code is committed?

When it comes to front-end performance optimization, automation and continuous integration are crucial practices for ensuring that performance optimizations remain effective after code is committed. The following is my practical experience in this area:

Automated performance testing: I use performance-testing tools (such as Lighthouse, WebPageTest, or Google PageSpeed Insights) for automated performance testing. These tools can run automatically as part of the continuous-integration process and evaluate the performance of each code commit. I set up regular performance-testing tasks on the CI server to ensure new commits do not cause performance regressions.

Performance budget: I set a performance budget, i.e. a set of performance thresholds (such as page load time and render time), and run performance tests in CI to ensure new code changes stay within those budgets. If something goes over budget, CI automatically raises an alert that the development team has to deal with promptly.
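
A hedged sketch of such a CI gate, assumed to run as a small Node script after the performance tests; the metric names follow Lighthouse's audit ids, but the thresholds and the shape of the measured results are illustrative assumptions:

```typescript
interface Budget {
  metric: string;
  maxMs: number;
}

const budgets: Budget[] = [
  { metric: 'first-contentful-paint', maxMs: 1800 },
  { metric: 'largest-contentful-paint', maxMs: 2500 },
];

function checkBudgets(measured: Record<string, number>): void {
  const failures = budgets.filter((b) => (measured[b.metric] ?? Infinity) > b.maxMs);
  for (const f of failures) {
    console.error(`${f.metric} exceeded budget: ${measured[f.metric]}ms > ${f.maxMs}ms`);
  }
  if (failures.length > 0) {
    process.exit(1); // fail the CI job so the regression blocks the merge
  }
}

// checkBudgets(resultsFromPerformanceTest); // hypothetical measured values
```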

Performance regression testing: In addition to automated performance testing in continuous integration, I also perform regular performance regression testing. Whenever new code changes are merged into the main branch, I run comprehensive performance regression tests to make sure the new optimization changes don't conflict with other parts or introduce new performance issues.

Code review and performance-optimization guidance: Within the team, I take part in code reviews and provide performance-optimization suggestions and guidance there. This helps developers consider performance while writing code and avoids performance problems later.

Monitoring and alerting: I use monitoring tools to track the application's performance in real time. If performance changes abnormally, the monitoring system automatically raises an alert so the team can investigate and address the problem promptly.

Continuous optimization and improvement: We regularly analyze the results of performance tests, identify potential performance problems, and continue to optimize. Continuous integration and automated performance testing help us quickly find problems and ensure the continuous effectiveness of optimized changes.

In summary, automation and continuous integration are key practices to ensure that front-end performance optimizations remain effective after code commits. Through automated performance testing, performance budgeting, performance regression testing, and continuous optimization, you can ensure that application performance is maintained and improved in an ever-changing codebase.

100. Please share your performance optimization strategies and practices when using build tools such as Webpack, Rollup or Parcel.

When using build tools such as Webpack, Rollup, or Parcel, performance optimization is a key part of ensuring the efficiency and quality of app delivery. Here are some performance optimization strategies and practices for different build tools:

Webpack:

  1. Code splitting and on-demand loading: Use Webpack's code-splitting features to split the application into multiple chunks and load them on demand, reducing the initial bundle size and improving first-render speed.
  2. Optimize bundle size: Use Webpack's tree shaking to eliminate unused code and code splitting to reduce duplication. Also minify code with Webpack's optimization plugins, such as TerserPlugin.
  3. Caching and long-term caching: Put a content hash in Webpack's output filenames to enable long-term caching and ensure updated resources load correctly. Also use Webpack's build cache to avoid rebuilding unchanged modules.
  4. Parallel builds and caching: Use parallel/multi-threaded build helpers (e.g. thread-loader, or the older happypack) to speed up the build, and a cache such as cache-loader to store intermediate results and avoid repeated processing (a config sketch follows after this list).
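
A minimal webpack.config.ts sketch pulling these ideas together (content hashing, chunk splitting, parallel minification, and a persistent build cache). It assumes Webpack 5 and terser-webpack-plugin; the entry path and options are illustrative, not a complete production config:

```typescript
import path from 'path';
import TerserPlugin from 'terser-webpack-plugin';
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'production',
  entry: './src/index.ts', // assumes a TS loader is configured elsewhere
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js', // long-term caching: name changes only when content changes
  },
  optimization: {
    splitChunks: { chunks: 'all' },      // extract shared/vendor code into separate chunks
    minimize: true,
    minimizer: [new TerserPlugin({ parallel: true })], // parallel minification
  },
  cache: { type: 'filesystem' },         // persistent build cache between runs (Webpack 5)
};

export default config;
```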

Rollup:

  1. Tree shaking and dead-code elimination: Rollup is designed around ES modules, so it supports tree shaking natively and can effectively eliminate unused code.
  2. Code splitting and shared-module extraction: Use Rollup's code splitting to break the application into chunks and extract shared code into common modules to reduce file size.
  3. Optimize the output format: Rollup supports several output formats; choose the one that suits how the bundle is consumed, such as ES module, UMD, or CommonJS.

Parcel:

  1. Zero-configuration optimization: Parcel is zero-configuration and performs many optimizations automatically, such as code splitting and caching. Use the latest version of Parcel for best performance.
  2. Code splitting and on-demand loading: Parcel supports code splitting and on-demand loading out of the box; make sure the application is split into reasonably sized chunks to improve loading speed.
  3. Caching: Parcel uses a cache to speed up rebuilds. If the cache needs to be cleared, start the build with the --no-cache flag.

In addition to the above general strategies, other optimization measures can also be taken according to the specific project requirements and the characteristics of the build tool, such as tuning for performance, compressing images, using lazy loading, optimizing font loading, etc.

To sum up, no matter which build tool is used, front-end performance optimization can be achieved through appropriate configurations and strategies. It's important to stay on top of the latest tool updates and best practices to ensure that apps remain efficient and of high quality during build and delivery.

Origin blog.csdn.net/weixin_52003205/article/details/132308619