Ceph - ceph.conf common parameters

Here are some commonly used Ceph configuration tuning parameters. The default values are taken from the Ceph Nautilus (14.2.1) source code. If any of the descriptions are inaccurate or incomplete, corrections are welcome. A sample ceph.conf snippet showing how a few of these parameters can be set follows the list.


  • mon_osd_cache_size
    • The maximum amount of memory the monitor uses to cache osdmaps.
    • Default: 500M
  • mon_osd_max_initial_pgs
    • The maximum number of PGs a pool is created with (if the user requests more PGs than this, the cluster splits PGs after the pool is created until the target is reached).
    • Default: 1024
  • mon_osd_min_up_ratio
    • The minimum fraction of OSDs that must remain in the up state; the monitors will not mark further OSDs down if doing so would drop the up ratio below this value.
    • Default: 0.3
  • mon_osd_min_in_ratio
    • The minimum fraction of OSDs that must remain in the in state; the monitors will not mark further OSDs out if doing so would drop the in ratio below this value.
    • Default: 0.75
  • mon_osd_down_out_interval
    • The number of seconds after an OSD stops responding (is marked down) before it is also marked out.
    • Default: 600
  • mon_osd_nearfull_ratio
    • Warning watermark: if the space utilization of any OSD in the cluster reaches or exceeds this value, the cluster is flagged NEARFULL, a health warning is raised, and the OSDs that are nearfull are listed.
    • Default: 0.85
  • mon_osd_full_ratio
    • Stop watermark: if the space utilization of any OSD in the cluster reaches or exceeds this value, the cluster is flagged FULL and stops accepting client write requests.
    • Default: 0.95
  • mon_osd_backfillfull_ratio
    • When an OSD's space utilization reaches or exceeds this value, PGs are refused from migrating into (or continuing to migrate into) that OSD via backfill.
    • Default: 0.90
  • osd_failsafe_full_ratio
    • The last barrier preventing the disk hosting an OSD from being filled to 100% by ops that contain writes; ops that would exceed this limit are dropped.
    • Default: 0.97
  • mon_max_pg_per_osd
    • The maximum number of PGs the cluster allows per OSD.
    • Default: 250 (with a single node or only a few OSDs, it is advisable to raise this value)
  • osd_scrub_chunk_min
    • The minimum number of objects to scrub in a single chunk.
    • Default: 5
  • osd_scrub_chunk_max
    • The maximum number of objects to scrub in a single chunk.
    • Default: 25
  • osd_max_scrubs
    • The maximum number of concurrent scrub operations on a single OSD.
    • Default: 1
  • osd_scrub_begin_hour
    • The hour of the day at which scrubbing is allowed to start.
    • Default: 0
  • osd_scrub_end_hour
    • The hour of the day by which scrubbing must stop.
    • Default: 24 (i.e. scrubbing is allowed around the clock; adjust the begin/end hours to restrict the scrub window)
  • osd_deep_scrub_interval
    • The interval at which each PG is deep scrubbed.
    • Default: one week (604800 seconds)
  • osd_recovery_max_active
    • The maximum number of recovery operations running concurrently on a single OSD.
    • Default: 3
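
As a concrete illustration, here is a minimal ceph.conf sketch that sets a few of the parameters above. The values are only an example, not a recommendation; tune them to your own cluster and hardware.

  [global]
  # full-ratio watermarks: warning / refuse backfill / stop client writes
  mon_osd_nearfull_ratio = 0.85
  mon_osd_backfillfull_ratio = 0.90
  mon_osd_full_ratio = 0.95

  [mon]
  # wait 10 minutes before marking a down OSD out
  mon_osd_down_out_interval = 600
  mon_max_pg_per_osd = 300

  [osd]
  # one concurrent scrub per OSD, only between 01:00 and 07:00
  osd_max_scrubs = 1
  osd_scrub_begin_hour = 1
  osd_scrub_end_hour = 7
  # throttle recovery work per OSD
  osd_recovery_max_active = 3

Note that since Luminous the nearfull/backfillfull/full ratios set in ceph.conf are only applied when the cluster is first created; on a running cluster they are changed with ceph osd set-nearfull-ratio, ceph osd set-backfillfull-ratio and ceph osd set-full-ratio.

Most of the other options can also be changed at runtime instead of editing ceph.conf. A small sketch, assuming a Nautilus cluster with the centralized configuration database (Mimic and later):

  # persist a change in the monitors' config store
  ceph config set osd osd_max_scrubs 1
  ceph config set osd osd_scrub_begin_hour 1
  ceph config set osd osd_scrub_end_hour 7

  # or inject into the running daemons only (not persisted across restarts)
  ceph tell osd.* injectargs '--osd_recovery_max_active=3'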

For more details on these parameters, the most precise reference is the source code of your own Ceph version, which you can check out with git.
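
Alternatively, a running cluster can report the same information without digging through the source (in Nautilus the option definitions and defaults live in src/common/options.cc). For example:

  # show the description, type and default value of an option (Mimic and later)
  ceph config help osd_max_scrubs

  # show the value currently in effect on a specific daemon, via its admin socket
  ceph daemon osd.0 config get osd_max_scrubs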

Origin www.cnblogs.com/shu-sheng/p/12149617.html