Hadoop: NameNode & Secondary NameNode

Study notes

namenode: The NameNode manages the DataNodes in an HDFS cluster and is responsible for tracking where each file's blocks are stored. For example, when a client uploads a file, the NameNode tells it which locations in the cluster each block should be written to, and the client then uploads the blocks to those addresses. Every time the cluster starts, the NameNode loads the fsimage and the edits log: the fsimage is essentially a snapshot of the NameNode's namespace, while edits records every operation applied to the HDFS filesystem since that snapshot. After loading the fsimage, the NameNode replays the operations in edits. In early deployments, startup got slower and slower as the edits log grew, because every recorded operation had to be replayed one by one; the Secondary NameNode was introduced to solve exactly this problem.
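
Both files live under the NameNode's metadata directory, configured by dfs.namenode.name.dir. As a hedged illustration (the local path and the annotated listing below are hypothetical examples, though the file-naming scheme is standard), you can find and list them like this:

# Ask the NameNode where it keeps its metadata (the value is a file:// URI)
hdfs getconf -confKey dfs.namenode.name.dir

# List the metadata directory (hypothetical path)
ls /opt/hadoop/dfs/name/current
# fsimage_0000000000000006548                     <- latest namespace checkpoint
# fsimage_0000000000000006548.md5
# edits_0000000000000006551-0000000000000006562   <- finalized edit-log segments
# edits_inprogress_0000000000000006563            <- segment currently being written
# seen_txid
# VERSION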

secondary namenode: The Secondary NameNode exists so that the NameNode does not have to replay the entire historical edits log on every startup (which is very time-consuming). It normally runs on a different machine from the NameNode. When certain conditions are met (a checkpoint is triggered), it pulls the current fsimage and edits over to its own node, loads and merges them into a new fsimage, and then ships that new fsimage back to the NameNode. The next time the NameNode starts, it loads from this new fsimage and only replays the edits recorded after it, saving all the replay work for operations the new fsimage already covers.
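
When a checkpoint is triggered is controlled by two configuration properties; a quick way to check their effective values on a running cluster (the defaults noted in the comments are the Hadoop 2.x defaults):

# Checkpoint at least every dfs.namenode.checkpoint.period seconds (default 3600, i.e. hourly)
hdfs getconf -confKey dfs.namenode.checkpoint.period

# ...or earlier, once this many uncheckpointed transactions have accumulated (default 1000000)
hdfs getconf -confKey dfs.namenode.checkpoint.txns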

The Secondary NameNode also makes HDFS more resilient: if the fsimage and edits on the NameNode become corrupted, we can recover the cluster's metadata from the data held on the Secondary NameNode (edits made after the last checkpoint are lost, so it is a checkpointing helper rather than a hot standby).
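
A sketch of that recovery path, assuming the checkpoint data can be copied into the NameNode host's dfs.namenode.checkpoint.dir (the hostname and paths are hypothetical; -importCheckpoint itself is a standard NameNode startup option):

# 1. Copy the latest checkpoint from the secondary namenode (hypothetical host/paths)
scp -r snn-host:/opt/hadoop/dfs/namesecondary/current/ /opt/hadoop/dfs/namesecondary/

# 2. With an empty dfs.namenode.name.dir, start the namenode and import the checkpoint
hdfs namenode -importCheckpoint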

Viewing the contents of an fsimage:

1. Convert the fsimage into a readable format

hdfs oiv -p XML -i fsimage_xxxxx -o /xxx/image.xml

Explanation of the command:

oiv: apply the offline fsimage viewer to an fsimage

-p: selects the output processor; available processors include XML, Web, Delimited, etc.

-i: INPUT, the path of the input file, i.e. the fsimage file to convert

-o: OUTPUT, the path where the converted file is saved
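
Putting the flags together, a hedged end-to-end example (the metadata path is a hypothetical example; substitute your own dfs.namenode.name.dir):

# Find the newest fsimage file (ignore the .md5 checksum files)
ls -t /opt/hadoop/dfs/name/current/fsimage_* | grep -v '\.md5$' | head -1

# Convert it to XML for inspection
hdfs oiv -p XML -i /opt/hadoop/dfs/name/current/fsimage_0000000000000006548 -o /tmp/image.xml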

Contents of the converted image.xml:

<?xml version="1.0"?>
<fsimage>
	<NameSection>
		<genstampV1>1000</genstampV1>
		<genstampV2>1006</genstampV2>
		<genstampV1Limit>0</genstampV1Limit>
		<lastAllocatedBlockId>1073741829</lastAllocatedBlockId>
		<txid>6548</txid>
	</NameSection>
	<INodeSection>
		<lastInodeId>16390</lastInodeId>
		<inode>
			<id>16385</id>
			<type>DIRECTORY</type>
			<name></name>
			<mtime>1550225337393</mtime>
			<permission>enche:supergroup:rwxr-xr-x</permission>
			<nsquota>9223372036854775807</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		<inode>
			<id>16386</id>
			<type>DIRECTORY</type>
			<name>test</name>
			<mtime>1549998416906</mtime>
			<permission>enche:supergroup:rwxr-xr-x</permission>
			<nsquota>-1</nsquota>
			<dsquota>-1</dsquota>
		</inode>
		<inode>
			<id>16387</id>
			<type>FILE</type>
			<name>peopleinfo.txt</name>
			<replication>2</replication>
			<mtime>1550034917432</mtime>
			<atime>1550034142443</atime>
			<preferredBlockSize>134217728</preferredBlockSize>
			<permission>enche:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
					<id>1073741825</id>
					<genstamp>1002</genstamp>
					<numBytes>226</numBytes>
				</block>
			</blocks>
		</inode>
		<inode>
			<id>16388</id>
			<type>FILE</type>
			<name>result</name>
			<replication>2</replication>
			<mtime>1550034791441</mtime>
			<atime>1550034413842</atime>
			<preferredBlockSize>134217728</preferredBlockSize>
			<permission>enche:supergroup:rw-r--r--</permission>
		</inode>
		<inode>
			<id>16390</id>
			<type>FILE</type>
			<name>hadoop-2.7.7.tar.gz</name>
			<replication>3</replication>
			<mtime>1550225337364</mtime>
			<atime>1550240226977</atime>
			<preferredBlockSize>134217728</preferredBlockSize>
			<permission>enche:supergroup:rw-r--r--</permission>
			<blocks>
				<block>
					<id>1073741828</id>
					<genstamp>1005</genstamp>
					<numBytes>134217728</numBytes>
				</block>
				<block>
					<id>1073741829</id>
					<genstamp>1006</genstamp>
					<numBytes>84502793</numBytes>
				</block>
			</blocks>
		</inode>
	</INodeSection>

<INodeReferenceSection></INodeReferenceSection>
	<SnapshotSection>
		<snapshotCounter>0</snapshotCounter>
	</SnapshotSection>
	<INodeDirectorySection>
		<directory>
			<parent>16385</parent>
			<inode>16390</inode>
			<inode>16388</inode>
			<inode>16386</inode>
		</directory>
		<directory>
			<parent>16386</parent>
			<inode>16387</inode>
		</directory>
	</INodeDirectorySection>
	<FileUnderConstructionSection></FileUnderConstructionSection>
	<SecretManagerSection>
		<currentId>0</currentId>
		<tokenSequenceNumber>0</tokenSequenceNumber>
	</SecretManagerSection>
	<CacheManagerSection>
		<nextDirectiveId>1</nextDirectiveId>
	</CacheManagerSection>
</fsimage>
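
A few things are worth noticing in the dump. The inode whose <name> is empty (id 16385) is the root directory /, and the <INodeDirectorySection> at the bottom encodes the tree: peopleinfo.txt (16387) hangs under test (16386), so its full path is /test/peopleinfo.txt. The preferred block size 134217728 is the default 128 MB, which is why hadoop-2.7.7.tar.gz is stored as one full 134217728-byte block plus a trailing 84502793-byte block, while the empty file result has no <blocks> element at all. Also note that the fsimage records block IDs, generation stamps, and sizes but never block locations; DataNodes report those to the NameNode at runtime through block reports.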

Viewing an edits file:

1. Convert the edits file into a readable format

hdfs oev -p xml -i edits_xxxxx-xxx -o /xxx/edit.xml

Explanation of the command:

oev: apply the offline edits viewer to an edits file

-p: selects the output processor; supported processors are xml, binary, and stats

-i: INPUT, the path of the input file, i.e. the edits file to convert

-o: OUTPUT, the path where the converted file is saved
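
A hedged practical tip: the segment currently being written is named edits_inprogress_<txid> and may not convert cleanly until it is finalized; hdfs dfsadmin -rollEdits asks the NameNode to close it and start a new segment. For example (hypothetical path):

# Finalize the in-progress edit segment (requires HDFS superuser)
hdfs dfsadmin -rollEdits

# Convert a finalized segment to XML
hdfs oev -p xml -i /opt/hadoop/dfs/name/current/edits_0000000000000006551-0000000000000006562 -o /tmp/edit.xml

The converted edit.xml looks like this: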

<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>6551</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_DELETE</OPCODE>
    <DATA>
      <TXID>6552</TXID>
      <LENGTH>0</LENGTH>
      <PATH>/hadoop-2.7.7.tar.gz</PATH>
      <TIMESTAMP>1550390860102</TIMESTAMP>
      <RPC_CLIENTID>6b1cc2bb-8031-4b71-baec-b5959376d69f</RPC_CLIENTID>
      <RPC_CALLID>3</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD</OPCODE>
    <DATA>
      <TXID>6553</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16391</INODEID>
      <PATH>/hadoop-2.7.7.tar.gz._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1550393129776</MTIME>
      <ATIME>1550393129776</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME>DFSClient_NONMAPREDUCE_125716633_1</CLIENT_NAME>
      <CLIENT_MACHINE>192.168.3.5</CLIENT_MACHINE>
      <OVERWRITE>true</OVERWRITE>
      <PERMISSION_STATUS>
        <USERNAME>enche</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
      <RPC_CLIENTID>8166168a-82a0-4ea3-8159-8a7333813831</RPC_CLIENTID>
      <RPC_CALLID>3</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
    <DATA>
      <TXID>6554</TXID>
      <BLOCK_ID>1073741830</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
    <DATA>
      <TXID>6555</TXID>
      <GENSTAMPV2>1007</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>
    <DATA>
      <TXID>6556</TXID>
      <PATH>/hadoop-2.7.7.tar.gz._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741830</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1007</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
    <DATA>
      <TXID>6557</TXID>
      <BLOCK_ID>1073741831</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
    <DATA>
      <TXID>6558</TXID>
      <GENSTAMPV2>1008</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>
    <DATA>
      <TXID>6559</TXID>
      <PATH>/hadoop-2.7.7.tar.gz._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741830</BLOCK_ID>
        <NUM_BYTES>134217728</NUM_BYTES>
        <GENSTAMP>1007</GENSTAMP>
      </BLOCK>
      <BLOCK>
        <BLOCK_ID>1073741831</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1008</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_CLOSE</OPCODE>
    <DATA>
      <TXID>6560</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>0</INODEID>
      <PATH>/hadoop-2.7.7.tar.gz._COPYING_</PATH>
      <REPLICATION>3</REPLICATION>
      <MTIME>1550393137654</MTIME>
      <ATIME>1550393129776</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME></CLIENT_NAME>
      <CLIENT_MACHINE></CLIENT_MACHINE>
      <OVERWRITE>false</OVERWRITE>
      <BLOCK>
        <BLOCK_ID>1073741830</BLOCK_ID>
        <NUM_BYTES>134217728</NUM_BYTES>
        <GENSTAMP>1007</GENSTAMP>
      </BLOCK>
      <BLOCK>
        <BLOCK_ID>1073741831</BLOCK_ID>
        <NUM_BYTES>84502793</NUM_BYTES>
        <GENSTAMP>1008</GENSTAMP>
      </BLOCK>
      <PERMISSION_STATUS>
        <USERNAME>enche</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_RENAME_OLD</OPCODE>
    <DATA>
      <TXID>6561</TXID>
      <LENGTH>0</LENGTH>
      <SRC>/hadoop-2.7.7.tar.gz._COPYING_</SRC>
      <DST>/hadoop-2.7.7.tar.gz</DST>
      <TIMESTAMP>1550393137671</TIMESTAMP>
      <RPC_CLIENTID>8166168a-82a0-4ea3-8159-8a7333813831</RPC_CLIENTID>
      <RPC_CALLID>9</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_END_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>6562</TXID>
    </DATA>
  </RECORD>
</EDITS>
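
Read in order, these records trace a delete of the old /hadoop-2.7.7.tar.gz followed by a fresh upload (e.g. via hdfs dfs -put): OP_ADD creates the temporary file /hadoop-2.7.7.tar.gz._COPYING_, then OP_ALLOCATE_BLOCK_ID, OP_SET_GENSTAMP_V2, and OP_ADD_BLOCK repeat once per 134217728-byte block as the client writes, OP_CLOSE finalizes the file with its two blocks, and OP_RENAME_OLD atomically renames ._COPYING_ to the final path. The <MODE>420</MODE> inside PERMISSION_STATUS is the decimal form of octal 0644, i.e. rw-r--r--. Every record carries a monotonically increasing TXID, and it is precisely these transactions that the Secondary NameNode merges into a new fsimage at checkpoint time.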

Reposted from blog.csdn.net/Enche/article/details/87889636