Troubleshooting the Chrome browser ERR_INSUFFICIENT_RESOURCES error

Table of contents

1. Background

2. Download the build tool depot_tools

3. Download the Chromium source code

4. Analyze Chromium code and add logs

5. Compile Chrome

6. Locate the problem

7. Solution

8. Pitfall record


1. Background

Recently, our company's customer-service colleagues kept reporting that after about four o'clock in the afternoon the Chrome browser would throw ERR_INSUFFICIENT_RESOURCES errors, leaving the customer-service system unable to work. The main symptom: the error appears whenever a new tab is opened, while the original tab keeps working normally, and clearing the browser cache only helps briefly before the error returns. None of the front-end engineers on our team had ever hit this problem, so we dug through a lot of material online and found a blog post describing a scenario very similar to ours. Following that lead, we traced the issue and eventually confirmed it was the same problem. Below is the whole process of locating and fixing it, in the hope that it helps anyone who runs into something similar.

2. Download the build tool depot_tools

(Before downloading the Chromium source code, make sure you can access Google normally.) depot_tools is the toolset for downloading and building Chromium. Clone it locally with git:

git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

At this point, command-line tools such as fetch and gclient can already be run via their absolute paths, but to simplify the following steps, add the depot_tools directory to PATH:

export PATH="$PATH:/path/to/depot_tools"
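
To make this persist across terminal sessions, append the export to your shell profile. A minimal sketch, assuming zsh and that depot_tools was cloned into your home directory:

echo 'export PATH="$PATH:$HOME/depot_tools"' >> ~/.zshrc
source ~/.zshrc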

After completion, test whether the fetch command is available on the command line:

which fetch

3. Download the Chromium source code

Because gclient, fetch, and the other download tools rely on git under the hood, it is recommended to adjust the following git global settings first:

git config --global http.postBuffer 524288000
git config --global core.precomposeUnicode true

Because the Chromium project has a long history and its git repository is enormous, you can speed up the download considerably by pulling the code without its commit history:

fetch --no-history chromium

If the pull fails partway, run the following command to continue; it resumes the transfer from where it stopped.

gclient sync
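
Note that gclient sync must be run from the directory containing the .gclient file that fetch created. A sketch, assuming a shallow checkout (the --no-history flag matches the earlier fetch):

cd /path/to/chromium
gclient sync --no-history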

4. Analyze Chromium code and add logs

Open the chromium/src/services directory in VS Code; this is Chrome's basic system-service layer. Searching it for ERR_INSUFFICIENT_RESOURCES turns up more than 100 hits.
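
If you prefer the command line, a rough way to count the hits (a sketch, assuming you are inside chromium/src):

grep -rn "ERR_INSUFFICIENT_RESOURCES" services/ | wc -l

The hit that matters is the following code in services/network/url_loader_factory.cc: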

void URLLoaderFactory::CreateLoaderAndStartWithSyncClient(
    mojo::PendingReceiver<mojom::URLLoader> receiver,
    int32_t request_id,
    uint32_t options,
    const ResourceRequest& resource_request,
    mojo::PendingRemote<mojom::URLLoaderClient> client,
    base::WeakPtr<mojom::URLLoaderClient> sync_client,
    const net::MutableNetworkTrafficAnnotationTag& traffic_annotation) {
 
  // ... preceding code omitted ...
  
  bool exhausted = false;
  if (!context_->CanCreateLoader(params_->process_id)) {
    exhausted = true;
  }

  int keepalive_request_size = 0;
  if (resource_request.keepalive && keepalive_statistics_recorder) {
    const size_t url_size = resource_request.url.spec().size();
    size_t headers_size = 0;

    net::HttpRequestHeaders merged_headers = resource_request.headers;
    merged_headers.MergeFrom(resource_request.cors_exempt_headers);

    for (const auto& pair : merged_headers.GetHeaderVector()) {
      headers_size += (pair.key.size() + pair.value.size());
    }

    keepalive_request_size = url_size + headers_size;

    const auto& top_frame_id = *params_->top_frame_id;
    const auto& recorder = *keepalive_statistics_recorder;

    if (!exhausted) {
      if (recorder.num_inflight_requests() >= kMaxKeepaliveConnections ||
          recorder.NumInflightRequestsPerTopLevelFrame(top_frame_id) >=
              kMaxKeepaliveConnectionsPerTopLevelFrame ||
          recorder.GetTotalRequestSizePerTopLevelFrame(top_frame_id) +
                  keepalive_request_size >
              kMaxTotalKeepaliveRequestSize) {
                LOG(ERROR) << "url_loader_factory.cc>>>>CreateLoaderAndStartWithSyncClient>>keepalive_request_size:" << keepalive_request_size
                 << "kMaxTotalKeepaliveRequestSize" << kMaxTotalKeepaliveRequestSize << "recorder.num_inflight_requests()" << recorder.num_inflight_requests()
                 << "kMaxKeepaliveConnections" << kMaxKeepaliveConnections << "recorder.NumInflightRequestsPerTopLevelFrame(top_frame_id) " << recorder.NumInflightRequestsPerTopLevelFrame(top_frame_id)
                 << kMaxKeepaliveConnectionsPerTopLevelFrame << kMaxKeepaliveConnectionsPerTopLevelFrame << " recorder.GetTotalRequestSizePerTopLevelFrame(top_frame_id) + keepalive_request_size " 
                 << recorder.GetTotalRequestSizePerTopLevelFrame(top_frame_id) + keepalive_request_size ;
        exhausted = true;
      }
    }
  }

  if (exhausted) {
    // Log added to aid testing.
    LOG(ERROR) << "url_loader_factory.cc>>>>ERR_INSUFFICIENT_RESOURCES";
    URLLoaderCompletionStatus status;
    status.error_code = net::ERR_INSUFFICIENT_RESOURCES;
    status.exists_in_cache = false;
    status.completion_time = base::TimeTicks::Now();
    mojo::Remote<mojom::URLLoaderClient>(std::move(client))->OnComplete(status);
    return;
  }

  // ... remaining code omitted ...
}

Analyzing CreateLoaderAndStartWithSyncClient in services/network/url_loader_factory.cc shows that when exhausted is true, the request is completed with the ERR_INSUFFICIENT_RESOURCES error. Let's look at what makes exhausted true:


bool NetworkContext::CanCreateLoader(uint32_t process_id) {
  auto it = loader_count_per_process_.find(process_id);
  uint32_t count = (it == loader_count_per_process_.end() ? 0 : it->second);
  // Log added to aid testing.
  LOG(ERROR) << "network_context.cc>>>>CanCreateLoader>>count" << count << "process_id:" << process_id;
  return count < max_loaders_per_process_;
}
  // A count of outstanding requests per initiating process.
  std::map<uint32_t, uint32_t> loader_count_per_process_;

  // static constexpr uint32_t kMaxOutstandingRequestsPerProcess = 2700;
  // Lowered from the default 2700 to 100 to make the problem easier to reproduce.
  static constexpr uint32_t kMaxOutstandingRequestsPerProcess = 100;
  uint32_t max_loaders_per_process_ = kMaxOutstandingRequestsPerProcess;

The CanCreateLoader method of services/network/network_context.cc decides the value of exhausted by checking whether the count of outstanding loaders for the given process_id has reached kMaxOutstandingRequestsPerProcess (2700 by default). To make the problem easier to reproduce I lowered the limit to 100, and added the corresponding log.
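
For context, the per-process count is maintained by NetworkContext's LoaderCreated and LoaderDestroyed callbacks (both names appear in the logs in the next section). A paraphrased sketch of that bookkeeping, not a verbatim copy of the Chromium source:

void NetworkContext::LoaderCreated(uint32_t process_id) {
  // One more outstanding request for this initiating process.
  loader_count_per_process_[process_id] += 1;
}

void NetworkContext::LoaderDestroyed(uint32_t process_id) {
  auto it = loader_count_per_process_.find(process_id);
  DCHECK(it != loader_count_per_process_.end());
  // One request finished; drop the entry once its count reaches zero.
  if (--it->second == 0)
    loader_count_per_process_.erase(it);
}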

5. Compile Chrome

Most of Google's C++ projects are built with the cross-platform build tool ninja; on the Mac, ninja ultimately invokes Apple's clang compiler.

Because ninja's build parameters are fairly complicated, Google provides the gn tool to generate suitable ninja files for the current system environment. After that, no parameters need to be set when compiling with autoninja; it builds directly from the generated ninja files.

The specific process is as follows:

gn gen out/Default
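
Optionally, GN build arguments can be passed at this step to speed up the build. A sketch using two standard Chromium args (a release build without debug symbols):

gn gen out/Default --args='is_debug=false symbol_level=0'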

This generates, under the out directory, the full set of parameters and configuration needed to compile Chrome. Then start the build (the whole process takes roughly 8 hours, depending on your machine):

autoninja -C out/Default chrome

When compilation finishes, an executable appears at ./out/Default/Chromium.app/Contents/MacOS/Chromium.

To see the newly added logs, launch the binary from the command line with the --enable-logging flag:

/your/home/directory/chromium/src/out/Default/Chromium.app/Contents/MacOS/Chromium --enable-logging

Chrome will now print the corresponding logs as you interact with it.
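
If no output shows up in the terminal, logging can be forced to stderr and the verbosity raised; both switches below are standard Chromium flags:

/your/home/directory/chromium/src/out/Default/Chromium.app/Contents/MacOS/Chromium --enable-logging=stderr --v=1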

6. Locate the problem

[38514:19971:1103/151157.874646:ERROR:network_context.cc(915)] network_context.cc>>>>LoaderCreated>>process_id:0 count>>101
[38514:19971:1103/151157.875321:ERROR:url_loader_factory.cc(182)] url_loader_factory.cc>>>>CreateLoaderAndStartWithSyncClient>>url:https://content-autofill.googleapis.com/v1/pages/ChNDaHJvbWUvMTA5LjAuNTM4MS4wEjMJBAwQwvyO8h4SBQ2RYZVOEgUNkWGVThIFDZFhlU4SBQ2BkPF8EgUNgZDxfBIFDZFhlU4SEAkcC8bFxOA28RIFDQbtu_8=?alt=proto
[38514:19971:1103/151157.875475:ERROR:network_context.cc(933)] network_context.cc>>>>CanCreateLoader>>count101process_id:0
[38514:19971:1103/151157.875565:ERROR:url_loader_factory.cc(263)] url_loader_factory.cc>>>>ERR_INSUFFICIENT_RESOURCES
[38514:19971:1103/151157.876158:ERROR:network_context.cc(926)] network_context.cc>>>>LoaderDestroyed>>process_id:0 count>>100

The analysis shows that opening a new tab always triggers the checks for process_id 0. Because the front-end project uses the autofill feature, the browser keeps firing requests to https://content-autofill.googleapis.com/v1/pages. The affected machines cannot reach Google, so those requests never complete and the count of outstanding requests for process_id 0 keeps accumulating; our customer-service system typically runs for 8 hours or more at a stretch. Once the limit is reached, the process_id 0 check runs first whenever a new tab is opened and forces exhausted to true in the code above, so every new tab reports ERR_INSUFFICIENT_RESOURCES no matter which site it visits.

7. Solution

The solution I adopted was to configure the hosts file locally and point the *.googleapis.com domains to 127.0.0.1. Requests to googleapis.com then fail fast instead of piling up, Chrome's threshold is never reached, and exhausted never becomes true. Since making that change, I have received no further reports of the problem from the customer-service team.
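
For reference, a minimal sketch of the hosts entry. The plain hosts file has no wildcard support, so the specific hostname seen in the logs is listed explicitly; matching *.googleapis.com as a true wildcard would require a local DNS tool such as dnsmasq:

# /etc/hosts
127.0.0.1   content-autofill.googleapis.com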

8. Pitfall record

1. With a proxy/VPN enabled, pulling the code can still fail:

fatal: unable to access 'https://chromium.googlesource.com/chromium/tools/depot_tools.git/': Failed to connect to chromium.googlesource.com port 443 after 75123 ms: Operation timed out

The terminal does not automatically pick up the system proxy, so export the proxy variables explicitly (adjust 7890 to your proxy client's port):

export https_proxy=http://127.0.0.1:7890
export http_proxy=http://127.0.0.1:7890

2. Cloning a large git repository fails with the following error:

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

The cause is that git's default http.postBuffer is too small. Set the global http.postBuffer to 200 MB, i.e. 200 × 1024 × 1024 = 209715200 bytes (if the payload really exceeds 200 MB, raise it as needed):

git config --global http.postBuffer 209715200

After the change, list the git configuration to confirm it took effect:

git config --list

References:

Sharing a debugging experience on the Boss Zhipin enterprise client - Zhihu

Notes on locally compiling the Chrome browser on a Mac (as of 2021.02) - Programmer Sought
