ART virtual machine heap creation process

  During initialization, Runtime::Init creates the virtual machine heap (gc::Heap). How the heap is initialized depends on a number of virtual machine startup parameters, for example:
  1. options->heap_initial_size_: the initial size of the heap, specified by the option -Xms.
  2. options->heap_growth_limit_: the growth limit of the heap, a soft upper bound, specified by the option -XX:HeapGrowthLimit.
  3. options->heap_min_free_: the minimum free size of the heap, specified by the option -XX:HeapMinFree.
  4. options->heap_max_free_: the maximum free size of the heap, specified by the option -XX:HeapMaxFree.
  5. options->heap_target_utilization_: the target utilization of the heap, specified by the option -XX:HeapTargetUtilization.
  6. options->heap_maximum_size_: the maximum size of the heap, a hard upper bound, specified by the option -Xmx.
  7. options->image_: the image file used to create the Image Space, specified by the option -Ximage.
  8. options->is_concurrent_gc_enabled_: whether concurrent GC is enabled, specified by the option -Xgc.
  9. options->parallel_gc_threads_: the number of threads used to run GC tasks concurrently during the GC pause phase, specified by the option -XX:ParallelGCThreads.
  10. options->conc_gc_threads_: the number of threads used to run GC tasks concurrently during the non-paused phases of GC, specified by the option -XX:ConcGCThreads.
  11. options->low_memory_mode_: whether to run in low-memory mode, specified by the option -XX:LowMemoryMode.
  12. options->long_pause_log_threshold_: the threshold for application pauses caused by GC; once it is exceeded, a warning log is printed. Specified by the option -XX:LongPauseLogThreshold.
  13. options->long_gc_log_threshold_: the threshold for total GC duration; once it is exceeded, a warning log is printed. Specified by the option -XX:LongGCLogThreshold.
  14. options->ignore_max_footprint_: a flag that removes the limit on heap size, specified by the option -XX:IgnoreMaxFootprint.
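
  These options reach the runtime as ordinary VM arguments. The following is a minimal, purely illustrative sketch (not taken from the Android sources) of how the flags listed above could be handed to the VM through the standard JNI invocation API; the concrete values are made up.

#include <jni.h>

// Illustrative only: option strings are the flags documented above, values are arbitrary.
JavaVM* StartVmWithHeapOptions() {
  JavaVMOption options[] = {
      {const_cast<char*>("-Xms8m"), nullptr},                         // heap_initial_size_
      {const_cast<char*>("-Xmx256m"), nullptr},                       // heap_maximum_size_
      {const_cast<char*>("-XX:HeapGrowthLimit=96m"), nullptr},        // heap_growth_limit_
      {const_cast<char*>("-XX:HeapMinFree=512k"), nullptr},           // heap_min_free_
      {const_cast<char*>("-XX:HeapMaxFree=8m"), nullptr},             // heap_max_free_
      {const_cast<char*>("-XX:HeapTargetUtilization=0.75"), nullptr}, // heap_target_utilization_
  };
  JavaVMInitArgs vm_args;
  vm_args.version = JNI_VERSION_1_6;
  vm_args.nOptions = sizeof(options) / sizeof(options[0]);
  vm_args.options = options;
  vm_args.ignoreUnrecognized = JNI_TRUE;

  JavaVM* vm = nullptr;
  JNIEnv* env = nullptr;
  if (JNI_CreateJavaVM(&vm, &env, &vm_args) != JNI_OK) {
    return nullptr;  // VM creation failed
  }
  return vm;
}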

/art/runtime/runtime.cc

bool Runtime::Init(const RuntimeOptions& raw_options, bool ignore_unrecognized) {
  ...
  heap_ = new gc::Heap(options->heap_initial_size_,
                       options->heap_growth_limit_,
                       options->heap_min_free_,
                       options->heap_max_free_,
                       options->heap_target_utilization_,
                       options->foreground_heap_growth_multiplier_,
                       options->heap_maximum_size_,
                       options->heap_non_moving_space_capacity_,
                       options->image_,
                       options->image_isa_,
                       options->collector_type_,
                       options->background_collector_type_,
                       options->parallel_gc_threads_,
                       options->conc_gc_threads_,
                       options->low_memory_mode_,
                       options->long_pause_log_threshold_,
                       options->long_gc_log_threshold_,
                       options->ignore_max_footprint_,
                       options->use_tlab_,
                       options->verify_pre_gc_heap_,
                       options->verify_pre_sweeping_heap_,
                       options->verify_post_gc_heap_,
                       options->verify_pre_gc_rosalloc_,
                       options->verify_pre_sweeping_rosalloc_,
                       options->verify_post_gc_rosalloc_,
                       options->use_homogeneous_space_compaction_for_oom_,
                       options->min_interval_homogeneous_space_compaction_by_oom_);
  ...

  The constructor of Heap is fairly long, so only the important parts are covered here, following the default path. As can be seen from a log statement I added, foreground_collector_type_ is initially CollectorTypeCMS and background_collector_type_ is CollectorTypeHomogeneousSpaceCompact.

12:55:02:011I/art     ( 1200): foreground_collector_type is CollectorTypeCMS,background_collector_type is CollectorTypeHomogeneousSpaceCompact

  When the current process is not the Zygote process, the member background_collector_type_ (the background collection algorithm) is forced to be the same as the member foreground_collector_type_ (the foreground collection algorithm). As shown above, the foreground collector that Runtime passes to Heap is CMS and the background collector is HomogeneousSpaceCompact.
  ChangeCollector sets the member collector_type_ to desired_collector_type_, and desired_collector_type_ was initialized to foreground_collector_type_ earlier. So at this point:
  foreground_collector_type_ : CollectorTypeCMS
  background_collector_type_ : CollectorTypeHomogeneousSpaceCompact
  collector_type_ : CollectorTypeCMS
  desired_collector_type_ : CollectorTypeCMS

/art/runtime/gc/heap.cc

  // If we aren't the zygote, switch to the default non zygote allocator. This may update the
  // entrypoints.
  const bool is_zygote = Runtime::Current()->IsZygote();
  if (!is_zygote) {
    large_object_threshold_ = kDefaultLargeObjectThreshold;
    // Background compaction is currently not supported for command line runs.
    if (background_collector_type_ != foreground_collector_type_) {
      VLOG(heap) << "Disabling background compaction for non zygote";
      background_collector_type_ = foreground_collector_type_;
    }
  }
  ChangeCollector(desired_collector_type_);

  image_file_name is "/system/framework/boot.art", which is not empty, so the ImageSpace creation path is entered. Part of the ImageSpace::Create flow was already covered in "Process Analysis of ART Loading OAT Files". Assume here that the relocated image /data/dalvik-cache/arm/system@[email protected] has already been generated; it is used to initialize the ImageSpace, i.e. ImageSpace::Init is called.

/art/runtime/gc/heap.cc

  if (!image_file_name.empty()) {
    std::string error_msg;
    space::ImageSpace* image_space = space::ImageSpace::Create(image_file_name.c_str(),
                                                               image_instruction_set,
                                                               &error_msg);
    if (image_space != nullptr) {
      AddSpace(image_space);
      // Oat files referenced by image files immediately follow them in memory, ensure alloc space
      // isn't going to get in the middle
      byte* oat_file_end_addr = image_space->GetImageHeader().GetOatFileEnd();
      CHECK_GT(oat_file_end_addr, image_space->End());
      requested_alloc_space_begin = AlignUp(oat_file_end_addr, kPageSize);
    } else {
      LOG(WARNING) << "Could not create image space with image file '" << image_file_name << "'. "
                   << "Attempting to fall back to imageless running. Error was: " << error_msg;
    }
  }

  The first parameter of ImageSpace::Init is "/data/dalvik-cache/arm/system@[email protected]", the second is "/system/framework/boot.art", the third (validate_oat_file) is true, and the fourth is used to return any error message. Before going further we need to understand how MapFileAtAddress creates a MemMap.
  The first parameter of MapFileAtAddress, expected_ptr, is the address at which the caller would like the mapping; here the image start address recorded in the ImageHeader is passed in. This does not mean the mapping starts exactly at that address, because the address actually passed to mmap is derived from it by 4 KB alignment. The second parameter is the number of bytes to map; here the image size is passed in, and the size actually mapped is likewise adjusted to 4 KB alignment. The third parameter prot and the fourth parameter flags are the memory protection flags and the mapping type, the fifth parameter fd is the file descriptor to map, the sixth parameter start is the offset within that file, the seventh parameter reuse indicates whether the new MemMap may overlap an existing MemMap, the eighth parameter filename is the file name corresponding to the descriptor, and the last parameter error_msg receives any error message.
  Internally MapFileAtAddress uses mmap, whose arguments must be page (4 KB) aligned, so the input parameters are adjusted first: the file offset is rounded down to a page boundary, the expected address is moved back by the same page_offset so it stays consistent with the file offset, and the mapping size is the requested byte count plus that page_offset, rounded up to a whole page.
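  As a small worked example of this adjustment (the numbers are hypothetical and chosen only to illustrate the arithmetic, not taken from a real boot.art):

#include <cstdint>
#include <cstdio>

int main() {
  const size_t    kPageSize  = 4096;
  const uintptr_t expected   = 0x70000200;  // hypothetical requested mapping address
  const off_t     start      = 0x1200;      // hypothetical file offset (not page aligned)
  const size_t    byte_count = 0x2500;      // hypothetical number of bytes requested

  // Same adjustments MapFileAtAddress performs before calling mmap.
  const size_t    page_offset          = start % kPageSize;            // 0x200
  const off_t     page_aligned_offset  = start - page_offset;          // 0x1000
  const size_t    page_aligned_byte_count =
      (byte_count + page_offset + kPageSize - 1) & ~(kPageSize - 1);   // 0x3000
  const uintptr_t page_aligned_expected = expected - page_offset;      // 0x70000000

  printf("mmap(%#lx, %#zx, ..., offset=%#lx)\n",
         (unsigned long)page_aligned_expected, page_aligned_byte_count,
         (unsigned long)page_aligned_offset);
  return 0;
}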

/art/runtime/mem_map.cc


MemMap* MemMap::MapFileAtAddress(byte* expected_ptr, size_t byte_count, int prot, int flags, int fd,
                                 off_t start, bool reuse, const char* filename,
                                 std::string* error_msg) {
  CHECK_NE(0, prot);
  CHECK_NE(0, flags & (MAP_SHARED | MAP_PRIVATE));

  // Note that we do not allow MAP_FIXED unless reuse == true, i.e we
  // expect his mapping to be contained within an existing map.
  if (reuse) {
    // reuse means it is okay that it overlaps an existing page mapping.
    // Only use this if you actually made the page reservation yourself.
    CHECK(expected_ptr != nullptr);

#if !defined(__APPLE__)  // TODO: Reanable after b/16861075 BacktraceMap issue is addressed.
    uintptr_t expected = reinterpret_cast<uintptr_t>(expected_ptr);
    uintptr_t limit = expected + byte_count;
    DCHECK(ContainedWithinExistingMap(expected, limit, error_msg));
#endif
    flags |= MAP_FIXED;
  } else {
    CHECK_EQ(0, flags & MAP_FIXED);
    // Don't bother checking for an overlapping region here. We'll
    // check this if required after the fact inside CheckMapRequest.
  }

  if (byte_count == 0) {
    return new MemMap(filename, nullptr, 0, nullptr, 0, prot, false);
  }
  // Adjust 'offset' to be page-aligned as required by mmap.
  int page_offset = start % kPageSize;
  off_t page_aligned_offset = start - page_offset;
  // Adjust 'byte_count' to be page-aligned as we will map this anyway.
  size_t page_aligned_byte_count = RoundUp(byte_count + page_offset, kPageSize);
  // The 'expected_ptr' is modified (if specified, ie non-null) to be page aligned to the file but
  // not necessarily to virtual memory. mmap will page align 'expected' for us.
  byte* page_aligned_expected = (expected_ptr == nullptr) ? nullptr : (expected_ptr - page_offset);

  byte* actual = reinterpret_cast<byte*>(mmap(page_aligned_expected,
                                              page_aligned_byte_count,
                                              prot,
                                              flags,
                                              fd,
                                              page_aligned_offset));
  if (actual == MAP_FAILED) {
    auto saved_errno = errno;

    std::string maps;
    ReadFileToString("/proc/self/maps", &maps);

    *error_msg = StringPrintf("mmap(%p, %zd, 0x%x, 0x%x, %d, %" PRId64
                              ") of file '%s' failed: %s\n%s",
                              page_aligned_expected, page_aligned_byte_count, prot, flags, fd,
                              static_cast<int64_t>(page_aligned_offset), filename,
                              strerror(saved_errno), maps.c_str());
    return nullptr;
  }
  std::ostringstream check_map_request_error_msg;
  if (!CheckMapRequest(expected_ptr, actual, page_aligned_byte_count, error_msg)) {
    return nullptr;
  }
  return new MemMap(filename, actual + page_offset, byte_count, actual, page_aligned_byte_count,
                    prot, reuse);
}

/art/runtime/gc/space/image_space.cc

ImageSpace* ImageSpace::Init(const char* image_filename, const char* image_location,
                             bool validate_oat_file, std::string* error_msg) {
  CHECK(image_filename != nullptr);
  CHECK(image_location != nullptr);

  uint64_t start_time = 0;
  if (VLOG_IS_ON(heap) || VLOG_IS_ON(startup)) {
    start_time = NanoTime();
    LOG(INFO) << "ImageSpace::Init entering image_filename=" << image_filename;
  }

  std::unique_ptr<File> file(OS::OpenFileForReading(image_filename));
  if (file.get() == NULL) {
    *error_msg = StringPrintf("Failed to open '%s'", image_filename);
    return nullptr;
  }
  ImageHeader image_header;
  bool success = file->ReadFully(&image_header, sizeof(image_header));
  if (!success || !image_header.IsValid()) {
    *error_msg = StringPrintf("Invalid image header in '%s'", image_filename);
    return nullptr;
  }

  // Note: The image header is part of the image due to mmap page alignment required of offset.
  // Map boot.art into a MemMap
  std::unique_ptr<MemMap> map(MemMap::MapFileAtAddress(image_header.GetImageBegin(),
                                                 image_header.GetImageSize(),
                                                 PROT_READ | PROT_WRITE,
                                                 MAP_PRIVATE,
                                                 file->Fd(),
                                                 0,
                                                 false,
                                                 image_filename,
                                                 error_msg));
  if (map.get() == NULL) {
    DCHECK(!error_msg->empty());
    return nullptr;
  }
  CHECK_EQ(image_header.GetImageBegin(), map->Begin());
  DCHECK_EQ(0, memcmp(&image_header, map->Begin(), sizeof(ImageHeader)));
  // Map the live bitmap used by the GC into a MemMap
  std::unique_ptr<MemMap> image_map(
      MemMap::MapFileAtAddress(nullptr, image_header.GetImageBitmapSize(),
                               PROT_READ, MAP_PRIVATE,
                               file->Fd(), image_header.GetBitmapOffset(),
                               false,
                               image_filename,
                               error_msg));
  if (image_map.get() == nullptr) {
    *error_msg = StringPrintf("Failed to map image bitmap: %s", error_msg->c_str());
    return nullptr;
  }
  uint32_t bitmap_index = bitmap_index_.FetchAndAddSequentiallyConsistent(1);
  std::string bitmap_name(StringPrintf("imagespace %s live-bitmap %u", image_filename,
                                       bitmap_index));
  // Create a ContinuousSpaceBitmap from the bitmap MemMap
  std::unique_ptr<accounting::ContinuousSpaceBitmap> bitmap(
      accounting::ContinuousSpaceBitmap::CreateFromMemMap(bitmap_name, image_map.release(),
                                                          reinterpret_cast<byte*>(map->Begin()),
                                                          map->Size()));
  if (bitmap.get() == nullptr) {
    *error_msg = StringPrintf("Could not create bitmap '%s'", bitmap_name.c_str());
    return nullptr;
  }

  std::unique_ptr<ImageSpace> space(new ImageSpace(image_filename, image_location,
                                             map.release(), bitmap.release()));

  // VerifyImageAllocations() will be called later in Runtime::Init()
  // as some class roots like ArtMethod::java_lang_reflect_ArtMethod_
  // and ArtField::java_lang_reflect_ArtField_, which are used from
  // Object::SizeOf() which VerifyImageAllocations() calls, are not
  // set yet at this point.

  space->oat_file_.reset(space->OpenOatFile(image_filename, error_msg));
  if (space->oat_file_.get() == nullptr) {
    DCHECK(!error_msg->empty());
    return nullptr;
  }

  if (validate_oat_file && !space->ValidateOatFile(error_msg)) {
    DCHECK(!error_msg->empty());
    return nullptr;
  }

  Runtime* runtime = Runtime::Current();
  runtime->SetInstructionSet(space->oat_file_->GetOatHeader().GetInstructionSet());
  // Methods that the class_linker resolved when boot.oat was compiled are recorded in the
  // image_roots_ member of the ImageHeader; at runtime they can be retrieved by reading
  // the ImageHeader of boot.art.
  mirror::Object* resolution_method = image_header.GetImageRoot(ImageHeader::kResolutionMethod);
  runtime->SetResolutionMethod(down_cast<mirror::ArtMethod*>(resolution_method));
  mirror::Object* imt_conflict_method = image_header.GetImageRoot(ImageHeader::kImtConflictMethod);
  runtime->SetImtConflictMethod(down_cast<mirror::ArtMethod*>(imt_conflict_method));
  mirror::Object* imt_unimplemented_method =
      image_header.GetImageRoot(ImageHeader::kImtUnimplementedMethod);
  runtime->SetImtUnimplementedMethod(down_cast<mirror::ArtMethod*>(imt_unimplemented_method));
  mirror::Object* default_imt = image_header.GetImageRoot(ImageHeader::kDefaultImt);
  runtime->SetDefaultImt(down_cast<mirror::ObjectArray<mirror::ArtMethod>*>(default_imt));

  mirror::Object* callee_save_method = image_header.GetImageRoot(ImageHeader::kCalleeSaveMethod);
  runtime->SetCalleeSaveMethod(down_cast<mirror::ArtMethod*>(callee_save_method),
                               Runtime::kSaveAll);
  callee_save_method = image_header.GetImageRoot(ImageHeader::kRefsOnlySaveMethod);
  runtime->SetCalleeSaveMethod(down_cast<mirror::ArtMethod*>(callee_save_method),
                               Runtime::kRefsOnly);
  callee_save_method = image_header.GetImageRoot(ImageHeader::kRefsAndArgsSaveMethod);
  runtime->SetCalleeSaveMethod(down_cast<mirror::ArtMethod*>(callee_save_method),
                               Runtime::kRefsAndArgs);

  if (VLOG_IS_ON(heap) || VLOG_IS_ON(startup)) {
    LOG(INFO) << "ImageSpace::Init exiting (" << PrettyDuration(NanoTime() - start_time)
             << ") " << *space.get();
  }
  return space.release();
}

  ImageSpace is a Space whose address range is contiguous, because it is mapped with mmap internally. Its start address is the image start address taken from the ImageHeader, and its end address is that start address plus the image size. This can be seen in the log:

I/art     ( 1200): ImageSpace::Init exiting (29.485ms) SpaceTypeImageSpace begin=0x6f2e9000,end=0x6fce42f8,size=9MB,name="/data/dalvik-cache/arm/system@[email protected]"]

  In fact, between the ImageSpace and the next space there are a SpaceBitmap and the boot.oat file. The SpaceBitmap sits right after the end address of the ImageSpace and records the live bitmap of the ImageSpace.
  After the ImageSpace has been created, the boot.oat file corresponding to boot.art is opened through the OpenOatFile function. boot.oat is an ELF file; it is loaded at the address specified by boot.art, which is the end address of boot.art rounded up to 4 KB. This can be seen in the log:

I/art     ( 1200): oat file begin is 0x6fce5000 ,oat file end is 0x734d6000,oat file data begin is 0x6fce6000 ,oat data end is 0x734d4f48

  The start address of the oat file, 0x6fce5000, is exactly the end address of boot.art, 0x6fce42f8, rounded up to 4 KB. The details of opening and loading the oat file are not covered here. As I understand it, besides the start address of the oat file, boot.art also records the start address of the oatdata segment after the oat file is loaded. An oat file has two segments that ordinary ELF files do not: the oatdata segment and the oatexec segment. The contents of the original dex files are placed in the oatdata segment, and the native machine code compiled from those dex files is placed in the oatexec segment; the oatdata segment contains links that lead directly to the corresponding native code in the oatexec segment. The oatdata segment is delimited by the oatdata and oatexec symbols, and the oatexec segment by the oatexec symbol and the address of the oatlastword symbol + 4.
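  The round-up itself is a simple bit operation; the following sketch is equivalent in spirit to ART's AlignUp helper:

#include <cstdint>

// Round an address up to the next 4 KB page boundary.
inline uintptr_t AlignUpToPage(uintptr_t addr, uintptr_t page_size = 4096) {
  return (addr + page_size - 1) & ~(page_size - 1);
}
// With the addresses from the log above: AlignUpToPage(0x6fce42f8) == 0x6fce5000.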
  Several boundary addresses are already written into the ImageHeader when boot.art is generated: oat_file_begin is the address at which the oat file will be loaded into memory, oat_data_begin_ is the start address of its oatdata segment, oat_data_end is the oatlastword + 4 address mentioned above, and oat_file_end is the end address of the whole ELF file. These addresses are obtained when the oat file is opened during the generation of boot.art.

/art/compiler/image_writer.cc

  ImageHeader image_header(PointerToLowMemUInt32(image_begin_),
                           static_cast<uint32_t>(image_end_),
                           RoundUp(image_end_, kPageSize),
                           RoundUp(bitmap_bytes, kPageSize),
                           PointerToLowMemUInt32(GetImageAddress(image_roots.Get())),
                           oat_file_->GetOatHeader().GetChecksum(),
                           PointerToLowMemUInt32(oat_file_begin),
                           PointerToLowMemUInt32(oat_data_begin_),
                           PointerToLowMemUInt32(oat_data_end),
                           PointerToLowMemUInt32(oat_file_end),
                           compile_pic_);
  memcpy(image_->Begin(), &image_header, sizeof(image_header));

  Heap::AddSpace adds the ImageSpace to continuous_spaces_, the vector that holds the continuous spaces. After that, the end address of boot.oat is obtained from the ImageHeader, and that address rounded up to 4 KB becomes the expected start address of the next MemMap.
  Next, a MemMap is created starting at that 4 KB-aligned end address of boot.oat. It is named "zygote space" in the zygote process and "non moving space" in a non-zygote process. Once it has been created, the address at 300 MB is used as the expected start address of the next MemMap. The mapping is done with MemMap::MapAnonymous, which maps an ashmem region named "dalvik-xxx" into the process address space and wraps it in a MemMap.
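  MemMap::MapAnonymous itself is not analyzed here. The following is only a rough sketch of the idea, assuming the libcutils ashmem API; the real function additionally handles null names, alignment and error reporting, and wraps the result in a MemMap object:

#include <sys/mman.h>
#include <unistd.h>
#include <cutils/ashmem.h>
#include <string>

// Simplified sketch only: create an ashmem region named "dalvik-<name>" and map it
// at the requested address.
void* MapAnonymousSketch(const char* name, void* requested_begin, size_t byte_count) {
  std::string ashmem_name = std::string("dalvik-") + name;
  int fd = ashmem_create_region(ashmem_name.c_str(), byte_count);
  if (fd < 0) {
    return nullptr;
  }
  void* actual = mmap(requested_begin, byte_count, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE, fd, 0);
  close(fd);  // the mapping keeps the ashmem region alive
  return actual == MAP_FAILED ? nullptr : actual;
}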

/art/runtime/gc/heap.cc

  // We may use the same space the main space for the non moving space if we don't need to compact
  // from the main space.
  // This is not the case if we support homogeneous compaction or have a moving background
  // collector type.
  bool separate_non_moving_space = is_zygote ||
      support_homogeneous_space_compaction || IsMovingGc(foreground_collector_type_) ||
      IsMovingGc(background_collector_type_);
  if (foreground_collector_type == kCollectorTypeGSS) {
    separate_non_moving_space = false;
  }
  std::unique_ptr<MemMap> main_mem_map_1;
  std::unique_ptr<MemMap> main_mem_map_2;
  byte* request_begin = requested_alloc_space_begin;
  if (request_begin != nullptr && separate_non_moving_space) {
    request_begin += non_moving_space_capacity;
  }
  std::string error_str;
  std::unique_ptr<MemMap> non_moving_space_mem_map;
  if (separate_non_moving_space) {
    // If we are the zygote, the non moving space becomes the zygote space when we run
    // PreZygoteFork the first time. In this case, call the map "zygote space" since we can't
    // rename the mem map later.
    const char* space_name = is_zygote ? kZygoteSpaceName: kNonMovingSpaceName;
    // Reserve the non moving mem map before the other two since it needs to be at a specific
    // address.
    non_moving_space_mem_map.reset(
        MemMap::MapAnonymous(space_name, requested_alloc_space_begin,
                             non_moving_space_capacity, PROT_READ | PROT_WRITE, true, &error_str));
    CHECK(non_moving_space_mem_map != nullptr) << error_str;
    // Try to reserve virtual memory at a lower address if we have a separate non moving space.
    request_begin = reinterpret_cast<byte*>(300 * MB);
  }

  main_mem_map_1 is a MemMap named "main space" created at request_begin, which in the zygote process is the 300 MB address. In the default flow, a second MemMap, main_mem_map_2, with the same specifications as main_mem_map_1 apart from its name, is created at the end address of main_mem_map_1.

/art/runtime/gc/heap.cc

  // Attempt to create 2 mem maps at or after the requested begin.
  main_mem_map_1.reset(MapAnonymousPreferredAddress(kMemMapSpaceName[0], request_begin, capacity_,
                                                    PROT_READ | PROT_WRITE, &error_str));
  CHECK(main_mem_map_1.get() != nullptr) << error_str;
  if (support_homogeneous_space_compaction ||
      background_collector_type_ == kCollectorTypeSS ||
      foreground_collector_type_ == kCollectorTypeSS) {
    main_mem_map_2.reset(MapAnonymousPreferredAddress(kMemMapSpaceName[1], main_mem_map_1->End(),
                                                      capacity_, PROT_READ | PROT_WRITE,
                                                      &error_str));
    CHECK(main_mem_map_2.get() != nullptr) << error_str;
  }

  Next, a DlMallocSpace named "zygote / non moving space" is created from the MemMap named "zygote space" or "non moving space" obtained earlier, and it is added via AddSpace to continuous_spaces_, the vector of continuous spaces.

/art/runtime/gc/heap.cc

  // Create the non moving space first so that bitmaps don't take up the address range.
  if (separate_non_moving_space) {
    // Non moving space is always dlmalloc since we currently don't have support for multiple
    // active rosalloc spaces.
    const size_t size = non_moving_space_mem_map->Size();
    non_moving_space_ = space::DlMallocSpace::CreateFromMemMap(
        non_moving_space_mem_map.release(), "zygote / non moving space", kDefaultStartingSize,
        initial_size, size, size, false);
    non_moving_space_->SetFootprintLimit(non_moving_space_->Capacity());
    CHECK(non_moving_space_ != nullptr) << "Failed creating non moving space "
        << requested_alloc_space_begin;
    AddSpace(non_moving_space_);
  }

  The main_mem_map_1 MemMap mentioned above is used to create a RosAllocSpace named "main rosalloc space". Like DlMallocSpace, RosAllocSpace is used to allocate memory; the difference lies in the allocation algorithm. This RosAllocSpace is then added to the continuous-space vector as well. After that, a second RosAllocSpace with the same specifications, named "main rosalloc space 1", is created from main_mem_map_2 as the backup space.

/art/runtime/gc/heap.cc

  // Create other spaces based on whether or not we have a moving GC.
  if (IsMovingGc(foreground_collector_type_) && foreground_collector_type_ != kCollectorTypeGSS) {
    // Create bump pointer spaces.
    // We only to create the bump pointer if the foreground collector is a compacting GC.
    // TODO: Place bump-pointer spaces somewhere to minimize size of card table.
    bump_pointer_space_ = space::BumpPointerSpace::CreateFromMemMap("Bump pointer space 1",
                                                                    main_mem_map_1.release());
    CHECK(bump_pointer_space_ != nullptr) << "Failed to create bump pointer space";
    AddSpace(bump_pointer_space_);
    temp_space_ = space::BumpPointerSpace::CreateFromMemMap("Bump pointer space 2",
                                                            main_mem_map_2.release());
    CHECK(temp_space_ != nullptr) << "Failed to create bump pointer space";
    AddSpace(temp_space_);
    CHECK(separate_non_moving_space);
  } else {
    CreateMainMallocSpace(main_mem_map_1.release(), initial_size, growth_limit_, capacity_);
    CHECK(main_space_ != nullptr);
    AddSpace(main_space_);
    if (!separate_non_moving_space) {
      non_moving_space_ = main_space_;
      CHECK(!non_moving_space_->CanMoveObjects());
    }
    ...
        } else if (main_mem_map_2.get() != nullptr) {
      const char* name = kUseRosAlloc ? kRosAllocSpaceName[1] : kDlMallocSpaceName[1];
      main_space_backup_.reset(CreateMallocSpaceFromMemMap(main_mem_map_2.release(), initial_size,
                                                           growth_limit_, capacity_, name, true));
      CHECK(main_space_backup_.get() != nullptr);
      // Add the space so its accounted for in the heap_begin and heap_end.
      AddSpace(main_space_backup_.get());
    }

  The last space created is the large object space. On 64-bit processors it is represented by FreeListSpace, and on 32-bit processors by LargeObjectMapSpace. It is worth noting that AddSpace does not add the LargeObjectMapSpace to the continuous-space vector but to discontinuous_spaces_, the vector of discontinuous spaces. This is because the large object space mmaps a fresh chunk of memory for each allocation, and the start address of that memory is chosen by the system, so the allocated memory is not contiguous.
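  A simplified sketch of that per-object behavior (conceptually what the map-based large object space does; the real implementation also records each mapping so the object can later be freed and walked):

#include <sys/mman.h>
#include <cstddef>

// Each large object gets its own anonymous mapping, so the kernel chooses the address
// and large-object allocations are not contiguous with the rest of the heap.
void* AllocLargeObjectSketch(size_t num_bytes) {
  void* mem = mmap(nullptr, num_bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return mem == MAP_FAILED ? nullptr : mem;
}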

/art/runtime/gc/heap.cc

  // Allocate the large object space.
  if (kUseFreeListSpaceForLOS) {
    large_object_space_ = space::FreeListSpace::Create("large object space", nullptr, capacity_);
  } else {
    large_object_space_ = space::LargeObjectMapSpace::Create("large object space");
  }
  CHECK(large_object_space_ != nullptr) << "Failed to create large object space";
  AddSpace(large_object_space_);
  // Compute heap capacity. Continuous spaces are sorted in order of Begin().

  Finally, the card table is created; it is used by the GC.

/art/runtime/gc/heap.cc

  // Allocate the card table.
  card_table_.reset(accounting::CardTable::Create(heap_begin, heap_capacity));
  CHECK(card_table_.get() != NULL) << "Failed to create card table";
  // Card cache for now since it makes it easier for us to update the references to the copying
  // spaces.
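
  The card table itself is not analyzed further in this article. As a hedged sketch of the underlying idea, with constants mirroring ART's CardTable (one card byte covers a 2^7 = 128-byte window of the heap; the biased-begin trick of the real class is omitted):

#include <cstdint>
#include <cstddef>

// Sketch of card marking: one byte per 128-byte heap window, so recording a
// reference write is just a shift and a byte store.
class CardTableSketch {
 public:
  static constexpr size_t kCardShift = 7;      // 2^7 = 128-byte cards, as in ART
  static constexpr uint8_t kCardDirty = 0x70;  // value ART uses for a dirty card

  CardTableSketch(uint8_t* heap_begin, uint8_t* cards)
      : heap_begin_(heap_begin), cards_(cards) {}

  // Called (conceptually) from the write barrier after a reference field store.
  void MarkCard(const void* addr) {
    size_t offset = reinterpret_cast<const uint8_t*>(addr) - heap_begin_;
    cards_[offset >> kCardShift] = kCardDirty;
  }

 private:
  uint8_t* const heap_begin_;  // start of the covered heap range
  uint8_t* const cards_;       // backing byte array, one entry per card
};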

  In addition, before zygote forks an application process, Heap::PreZygoteFork is called; the first time it runs, it splits non_moving_space_ in two, producing a "Zygote space" and a new "non moving space". That process is not detailed here. The "Zygote space" holds resources shared between the zygote process and the processes it forks, while the "non moving space" stores non-movable objects, i.e. objects whose memory is allocated through the AllocNonMovableObject interface.
