[Translation] Selecting the Right Hardware for a New Hadoop Cluster (Part 2)

Continued from Part 1: https://my.oschina.net/u/234661/blog/855909

Because Cloudera’s customers need to thoroughly understand their workloads in order to fully optimize Hadoop hardware, a classic chicken-and-egg problem ensues. Most teams looking to build a Hadoop cluster don’t yet know the eventual profile of their workload, and often the first jobs that an organization runs with Hadoop are far different than the jobs that Hadoop is ultimately used for as proficiency increases. Furthermore, some workloads might be bound in unforeseen ways. For example, some theoretical IO-bound workloads might actually be CPU-bound because of a user’s choice of compression, or different implementations of an algorithm might change how the MapReduce job is constrained.

For these reasons, when the team is unfamiliar with the types of jobs it is going to run, as an initial approach it makes sense to invest in a balanced Hadoop cluster. The next step would be to benchmark MapReduce jobs running on the balanced cluster to analyze how they're bound. To achieve that goal, it's straightforward to measure live workloads and determine bottlenecks by putting thorough monitoring in place. We recommend installing Cloudera Manager on the Hadoop cluster to provide real-time statistics about CPU, disk, and network load. With Cloudera Manager installed, Hadoop administrators can then run their MapReduce jobs and check the Cloudera Manager dashboard to see how each machine is performing.
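
To illustrate the kind of diagnosis that monitoring supports, here is a minimal sketch of a bound-classification heuristic. The function name and the 0.85 threshold are hypothetical illustrations, not part of Cloudera Manager; in practice an administrator reads the same signal off the dashboard charts.

```python
def classify_bottleneck(cpu_util, disk_util, threshold=0.85):
    """Rough guess at what bounds a MapReduce job, given average
    utilization fractions (0.0-1.0) sampled while the job runs.

    The threshold is an arbitrary illustrative cutoff, not a
    Cloudera Manager default.
    """
    if cpu_util >= threshold and cpu_util >= disk_util:
        return "cpu-bound"
    if disk_util >= threshold:
        return "io-bound"
    return "balanced/inconclusive"

# A job that saturates CPU (e.g. due to heavy compression) despite
# modest disk activity -- the "theoretically IO-bound" case above:
print(classify_bottleneck(0.95, 0.40))  # cpu-bound
```

This mirrors the article's point: a workload assumed to be IO-bound can turn out CPU-bound once compression enters the picture, and only measurement settles the question.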

In addition to building out a cluster appropriate for the workload, we encourage customers to work with their hardware vendor to understand the economics of power and cooling. Since Hadoop runs on tens, hundreds, or thousands of nodes, an operations team can save a significant amount of money by investing in power-efficient hardware. Each hardware vendor will be able to provide tools and recommendations for how to monitor power and cooling.

Selecting Hardware

The first step in choosing a machine configuration is to understand the type of hardware your operations team already manages. Operations teams often have opinions or hard requirements about new machine purchases, and will prefer to work with hardware with which they’re already familiar. Hadoop is not the only system that benefits from efficiencies of scale. Again, as a general suggestion, if the cluster is new or you can’t accurately predict your ultimate workload, we advise that you use balanced hardware.

There are four types of roles in a basic Hadoop cluster: NameNode (and Standby NameNode), JobTracker, TaskTracker, and DataNode. (A node is a machine performing a particular task.) Most machines in your cluster will perform two of these roles, functioning as both DataNode (for data storage) and TaskTracker (for data processing).

Here are the recommended specifications for DataNode/TaskTrackers in a balanced Hadoop cluster:

  • 12-24 1-4TB hard disks in a JBOD (Just a Bunch Of Disks) configuration
  • 2 quad-/hex-/octo-core CPUs, running at least 2-2.5GHz
  • 64-512GB of RAM
  • Bonded Gigabit Ethernet or 10Gigabit Ethernet (the more storage density, the higher the network throughput needed)

The NameNode role is responsible for coordinating data storage on the cluster, and the JobTracker for coordinating data processing. (The Standby NameNode should not be co-located on the NameNode machine, and should run on hardware identical to that of the NameNode.) Cloudera recommends that customers purchase enterprise-class machines for running the NameNode and JobTracker, with redundant power and enterprise-grade disks in RAID 1 or 10 configurations.

The NameNode will also require RAM directly proportional to the number of data blocks in the cluster. A good rule of thumb is to assume 1GB of NameNode memory for every 1 million blocks stored in the distributed file system. With 100 DataNodes in a cluster, 64GB of RAM on the NameNode provides plenty of room to grow the cluster. We also recommend having HA configured on both the NameNode and JobTracker, features that have been available in the CDH4 line for some time.
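
The rule of thumb above turns into a quick back-of-the-envelope calculation. The sketch below assumes a 128 MB HDFS block size, 3x replication, and fully packed blocks; the helper names are illustrative, not from any Hadoop API.

```python
BLOCK_SIZE = 128 * 1024**2   # assumed HDFS block size of 128 MB
REPLICATION = 3              # assumed HDFS default replication factor

def estimated_blocks(raw_storage_bytes, replication=REPLICATION,
                     block_size=BLOCK_SIZE):
    # Logical data is raw capacity divided by replication; assume
    # (optimistically) that every block is completely full.
    return raw_storage_bytes / replication / block_size

def namenode_heap_gb(num_blocks):
    # Rule of thumb from the text: ~1 GB of NameNode memory
    # per 1 million blocks stored in the distributed file system.
    return num_blocks / 1_000_000

# 100 DataNodes, each with 12 x 2 TB disks, all capacity in use:
raw = 100 * 12 * 2 * 10**12
blocks = estimated_blocks(raw)   # roughly 6 million blocks
print(namenode_heap_gb(blocks))  # roughly 6 GB of NameNode heap
```

Even under this worst-case assumption of a completely full cluster, a 64 GB NameNode has an order of magnitude of headroom, which is the point of the recommendation.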

Here are the recommended specifications for NameNode/JobTracker/Standby NameNode nodes. The drive count will fluctuate depending on the amount of redundancy:

  • 4–6 1TB hard disks in a JBOD configuration (1 for the OS, 2 for the FS image [RAID 1], 1 for Apache ZooKeeper, and 1 for Journal node)
  • 2 quad-/hex-/octo-core CPUs, running at least 2-2.5GHz
  • 64-128GB of RAM
  • Bonded Gigabit Ethernet or 10Gigabit Ethernet
  • If you expect your Hadoop cluster to grow beyond 20 machines, we recommend that the initial cluster be configured as if it were to span two racks, where each rack has a top-of-rack 10 GigE switch. As the cluster grows to multiple racks, you will want to add redundant core switches to connect the top-of-rack switches with 40GigE. Having two logical racks gives the operations team a better understanding of the network requirements for intra-rack and cross-rack communication.

With a Hadoop cluster in place, the team can start identifying workloads and prepare to benchmark those workloads to identify hardware bottlenecks. After some time benchmarking and monitoring, the team will understand how additional machines should be configured. Heterogeneous Hadoop clusters are common, especially as they grow in size and number of use cases – so starting with a set of machines that are not “ideal” for your workload will not be a waste of time. Cloudera Manager offers templates that allow different hardware profiles to be managed in groups, making it simple to manage heterogeneous clusters.

Below is a list of various hardware configurations for different workloads, including our original “balanced” recommendation:

  • Light Processing Configuration (1U/machine): Two hex-core CPUs, 24-64GB memory, and 8 disk drives (1TB or 2TB)
  • Balanced Compute Configuration (1U/machine): Two hex-core CPUs, 48-128GB memory, and 12 – 16 disk drives (1TB or 2TB) directly attached using the motherboard controller. These are often available as twins with two motherboards and 24 drives in a single 2U cabinet.
  • Storage Heavy Configuration (2U/machine): Two hex-core CPUs, 48-96GB memory, and 16-24 disk drives (2TB – 4TB). This configuration will cause high network traffic in case of multiple node/rack failures.
  • Compute Intensive Configuration (2U/machine): Two hex-core CPUs, 64-512GB memory, and 4-8 disk drives (1TB or 2TB)
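
When comparing these configurations, raw disk count does not translate directly into usable HDFS capacity, because of replication and space reserved outside HDFS. A hedged sketch of the usual sizing arithmetic follows; the 25% non-DFS reserve is an assumed figure (covering OS, logs, and intermediate MapReduce output), and 3x is the HDFS default replication factor.

```python
def usable_hdfs_tb(nodes, disks_per_node, disk_tb,
                   replication=3, non_dfs_reserve=0.25):
    """Estimate usable HDFS capacity in TB.

    non_dfs_reserve is an assumed fraction of raw disk set aside
    for the OS, logs, and intermediate MapReduce output.
    """
    raw_tb = nodes * disks_per_node * disk_tb
    return raw_tb * (1 - non_dfs_reserve) / replication

# 20 "Balanced Compute" nodes with 12 x 2 TB drives each:
print(usable_hdfs_tb(20, 12, 2))  # 120.0 TB usable out of 480 TB raw
```

The same arithmetic explains why the Storage Heavy configuration generates heavy re-replication traffic on failure: each lost node represents far more blocks to copy.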

Continued in the next part: https://my.oschina.net/u/234661/blog/856011

Reposted from my.oschina.net/u/234661/blog/855913