Hadoop 2.6 Source Code Walkthrough: Closing a File Stream

After the client finishes writing a file, it must close the output stream:

            out.write("hello".getBytes("UTF-8"));
            out.close();
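The contract behind that `close()` call is the same as for any buffered Java stream: closing must first flush whatever is still sitting in the buffer, then release the underlying resource. A minimal standard-library sketch of that contract (no Hadoop involved, just `BufferedOutputStream` over an in-memory sink):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class CloseFlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Buffer is larger than the payload, so "hello" stays buffered
        BufferedOutputStream out = new BufferedOutputStream(sink, 1024);

        out.write("hello".getBytes(StandardCharsets.UTF_8));
        // Nothing has reached the sink yet; the bytes are still in the buffer
        System.out.println("before close: " + sink.size());

        out.close();   // flushes the buffer, then releases the stream
        System.out.println("after close: " + sink.size());
    }
}
```

DFSOutputStream implements the same idea at a much larger scale: its "buffer" is a chain of packets that must reach the DataNode pipeline before the stream can be considered closed.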

This ultimately calls the close() method of DFSOutputStream:

@Override
  public synchronized void close() throws IOException {
    if (closed) {
      IOException e = lastException.getAndSet(null);
      if (e == null)
        return;
      else
        throw e;
    }

    try {
      flushBuffer();       // flush from all upper layers

      if (currentPacket != null) { 
        waitAndQueueCurrentPacket();
      }

      if (bytesCurBlock != 0) {
        // send an empty packet to mark the end of the block
        currentPacket = createPacket(0, 0, bytesCurBlock, currentSeqno++);
        currentPacket.lastPacketInBlock = true;
        currentPacket.syncBlock = shouldSyncBlock;
      }

      flushInternal();             // flush all data to Datanodes
      // get last block before destroying the streamer
      ExtendedBlock lastBlock = streamer.getBlock();
      closeThreads(false);
      completeFile(lastBlock);
      dfsClient.endFileLease(fileId);
    } catch (ClosedChannelException e) {
      // pipeline already torn down; nothing left to flush
    } finally {
      closed = true;
    }
  }
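Note the guard at the top of close(): it makes the method idempotent. A second close() on a healthy stream is a silent no-op, while a close() after a background streamer failure rethrows the stored exception exactly once (getAndSet(null) clears it so no later caller sees it again). A minimal sketch of that pattern, with illustrative names rather than the real HDFS classes:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of the closed/lastException guard used in close()
class GuardedStream {
    private boolean closed = false;
    // A background streamer thread would record its failure here
    final AtomicReference<IOException> lastException = new AtomicReference<>();

    synchronized void close() throws IOException {
        if (closed) {
            // Already closed: surface a pending failure once, otherwise no-op
            IOException e = lastException.getAndSet(null);
            if (e == null) return;
            throw e;
        }
        try {
            // ... flush buffers, drain packets, complete the file ...
        } finally {
            closed = true;   // even a failed close() marks the stream closed
        }
    }
}
```

The `finally { closed = true; }` is the key design choice: whatever happens during the flush, the stream never reports itself as open again, so callers cannot retry a half-finished close.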

Summary

  • flushBuffer() writes any remaining buffered data into the current packet
  • waitAndQueueCurrentPacket() places the as-yet-unsent packet on the data queue
  • flushInternal() waits until every packet has been acknowledged by the DataNode pipeline
  • completeFile() tells the NameNode the file is complete
  • dfsClient.endFileLease() releases the file lease
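The five steps above can be modeled as a toy pipeline. Everything below is a deliberately simplified simulation for illustration (all class and field names are hypothetical; none of this is actual HDFS code), but the order of operations mirrors close(): packetize the buffer, queue the packet, append an end-of-block marker, drain the queue to the "pipeline", then commit and drop the lease.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of close(): buffer -> packet queue -> "datanodes" -> namenode commit.
// All names here are illustrative, not real HDFS classes.
class ToyOutputStream {
    StringBuilder buffer = new StringBuilder();      // data not yet packetized
    Deque<String> dataQueue = new ArrayDeque<>();    // packets waiting to be sent
    StringBuilder datanode = new StringBuilder();    // bytes "persisted" on the pipeline
    boolean fileCompleted = false;                   // namenode-side state
    boolean leaseHeld = true;

    void write(String s) { buffer.append(s); }

    void close() {
        // 1. flushBuffer(): move buffered bytes into a packet
        if (buffer.length() > 0) {
            dataQueue.add(buffer.toString());        // 2. waitAndQueueCurrentPacket()
            buffer.setLength(0);
        }
        dataQueue.add("");                           // empty packet marks end of block
        // 3. flushInternal(): drain the queue and wait for pipeline acks
        while (!dataQueue.isEmpty()) datanode.append(dataQueue.poll());
        fileCompleted = true;                        // 4. completeFile()
        leaseHeld = false;                           // 5. dfsClient.endFileLease()
    }
}
```

After close() returns, the buffer and queue are empty, every byte has reached the pipeline, the file is committed, and the lease is gone; that is exactly the end state the real close() must reach before the client may forget about the file.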


Reprinted from blog.csdn.net/zhixingheyi_tian/article/details/80454281