Spark (III): Installation and Configuration

  See HDP2.4 Installation (V): Cluster and Components Installation for the base Hadoop and HBase cluster. This article covers installing and configuring Spark 1.6, installed onto that cluster automatically through Ambari, with Spark running on Hadoop YARN.

Table of Contents:

  • Spark cluster installation
  • Configuration parameters
  • Test and verification

Spark cluster installation:


  • In the Ambari services interface, select "Add Service", as shown:
  • In the pop-up interface, select the Spark service, as shown:
  • Click "Next" to assign the Spark History Server to a host node; since the Hadoop and HBase cluster is already installed, you can follow the wizard's default assignment.
  • Assign the clients, as shown below:
  • After deployment and installation, the correct state looks as follows:
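Once the wizard finishes, the service state can also be checked from the command line through the Ambari REST API. A minimal sketch, assuming the Ambari server runs on hdp1:8080, the cluster is named cl1, and the default admin/admin credentials are in use (all three are assumptions, not values from this cluster):

```shell
# Query the installed SPARK service's state via the Ambari REST API.
# Host, port, cluster name, and credentials below are assumptions.
AMBARI="http://hdp1:8080/api/v1"
CLUSTER="cl1"
URL="$AMBARI/clusters/$CLUSTER/services/SPARK?fields=ServiceInfo/state"

# A fully installed service reports "STARTED" (or "INSTALLED" before
# its first start) in the ServiceInfo/state field of the JSON response.
curl -s -u admin:admin -H 'X-Requested-By: ambari' "$URL"
```

The `X-Requested-By` header is required by Ambari's CSRF protection; without it modifying requests are rejected.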

Configuration parameters:


  • After installation completes, restart HDFS and YARN.
  • Check the Spark services: the Spark Thrift Server does not start properly, and the log is as follows:
  • Solution: adjust the YARN parameters yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb.

  • yarn.nodemanager.resource.memory-mb

    The total amount of physical memory available to YARN on the node; the default is 8192 (MB). Note that my hdp2-3 machine has 4 GB of memory and this value had been set to 512 MB, so I adjusted the size as shown below.

  • yarn.scheduler.maximum-allocation-mb

    The maximum amount of physical memory a single task can request; the default is 8192 (MB).

  • Save the configuration and restart the dependent services; afterwards the status is normal, as follows:

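The two properties above live in yarn-site.xml (in Ambari they are edited under YARN → Configs and pushed to the nodes on restart). A hedged example of the adjusted values for a 4 GB node; the 2048 MB figures are illustrative assumptions and should be sized to your own hardware:

```xml
<!-- yarn-site.xml: memory limits adjusted for a small (4 GB) node.
     The 2048 MB values are illustrative assumptions, not fixed guidance. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
```

Note that yarn.scheduler.maximum-allocation-mb must not exceed yarn.nodemanager.resource.memory-mb, since a single container can never be granted more memory than the node offers to YARN.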

Test and verification:


  • On any machine with the Spark client installed (hdp4 here), change to the bin directory under the Spark installation directory.
  • Run the command: ./spark-sql
  • Run the SQL command: show databases; as shown below
  • View the history, as follows:
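The test steps above can also be run non-interactively. A sketch assuming the standard HDP client path /usr/hdp/current/spark-client (an assumption; adjust if your layout differs):

```shell
# Change to the Spark client's bin directory (typical HDP layout; the
# exact path is an assumption for this cluster).
cd /usr/hdp/current/spark-client/bin

# -e makes spark-sql execute the statement and exit, which is handy for
# scripted verification; interactively you would just run ./spark-sql
# and type the statement at the prompt.
./spark-sql -e "show databases;"
```

A freshly installed cluster should list at least the built-in default database.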

 


Origin www.cnblogs.com/momoyan/p/11616503.html