Setting Up a Spark 1.2 Cluster (Standalone + HA): Five Nodes on 4GB of RAM Is Really Pushing It
Preparation:
1. A laptop with 4GB of RAM running Windows 7 (a shoestring setup)
2. Tool: VMware Workstation
3. Virtual machines: five CentOS 6.4 instances
4. A working Hadoop cluster (so that Spark can read files from HDFS for testing)
Experimental environment:
Hadoop HA cluster:

  IP               hostname   role
  192.168.249.130  SY-0130    ActiveNameNode
  192.168.249.131  SY-0131    StandByNameNode
  192.168.249.132  SY-0132    DataNode1
  192.168.249.133  SY-0133    DataNode2

Spark HA cluster:

  IP               hostname   role
  192.168.249.134  SY-0134    Master
  192.168.249.130  SY-0130    StandBy Master
  192.168.249.131  SY-0131    Worker
  192.168.249.132  SY-0132    Worker
  192.168.249.133  SY-0133    Worker

This experimental environment is for learning only; with 4GB of RAM, resources are really limited (next week I will move the cluster onto a few desktop machines). SY-0134 above is a newly cloned virtual machine that serves as the Master of the Spark environment, while the four nodes that originally belong to the Hadoop cluster take the StandBy Master and Worker roles.
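Name resolution is part of the network setup referenced below, but for quick reference, an /etc/hosts mapping consistent with the tables above might look like this on every node (a sketch, not taken from the original setup):

192.168.249.130  SY-0130
192.168.249.131  SY-0131
192.168.249.132  SY-0132
192.168.249.133  SY-0133
192.168.249.134  SY-0134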
For the virtual machine setup, network configuration, and Hadoop cluster installation, see XXX.
This post focuses on a simple setup of the Spark 1.2 and Zookeeper environments, intended only as a learning and experimentation prototype; it does not go into much theory.
Software installation:
(Note: log in to SY-0134 as the hadoop user.)
1. On node SY-0134, create a toolkit folder under the hadoop user's home directory to hold all the installation packages, and create a labsp folder as the directory for this experiment.
[hadoop@SY-0134 ~]$ mkdir labsp
[hadoop@SY-0134 ~]$ mkdir toolkit
I keep the downloaded packages in toolkit, as follows:
[hadoop@SY-0134 toolkit]$ ls
hadoop-  hadoop-  jdk-7u71-linux-i586.gz  scala-  spark-  zookeeper-
2. For this experiment, the Spark package I downloaded is the Spark 1.2 build pre-built for Hadoop 2.3, the Scala version is 2.10.3, and Zookeeper is 3.4.6. Spark and Scala versions must correspond; the Scala version supported by each Spark release is listed on the Spark website.
3. JDK installation and environment variable settings
[hadoop@SY-0134 ~]$ mkdir lab    # I installed JDK 7 under the lab directory
[hadoop@SY-0134 lab]$ pwd
/home/hadoop/lab/
# Environment variable settings:
[hadoop@SY-0134 ~]$ vi .bash_profile
# User specific environment and startup programs
export JAVA_HOME=/home/hadoop/lab/
PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
export PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
# Apply the settings
[hadoop@SY-0134 ~]$ source .bash_profile
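A quick way to confirm the JDK and environment variables took effect (my own check, not part of the original steps); the reported version should match the jdk-7u71 package that was installed:

[hadoop@SY-0134 ~]$ echo $JAVA_HOME
[hadoop@SY-0134 ~]$ java -version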
4. Scala installation and environment variable settings
I extracted Scala to /home/hadoop/labsp/scala-2.10.3 and then modified the .bash_profile file:
Add: export SCALA_HOME=/home/hadoop/labsp/scala-2.10.3
Change: PATH=$JAVA_HOME/bin:$PATH:$HOME/bin:$SCALA_HOME/bin
# Apply the settings
[hadoop@SY-0134 ~]$ source .bash_profile
Verify that Scala is installed correctly:
[hadoop@SY-0134 ~]$ scala
Welcome to Scala version 2.10.3 (Java HotSpot(TM) Client VM, Java
The output above shows that Scala was installed successfully.
5. Spark installation and environment configuration
I extracted Spark to /home/hadoop/labsp/spark1.2_hadoop2.3. The package I downloaded is a pre-built binary. Modify the .bash_profile file:
Add: export SPARK_HOME=/home/hadoop/labsp/spark1.2_hadoop2.3
Change: PATH=$JAVA_HOME/bin:$PATH:$HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin
# Apply the settings
[hadoop@SY-0134 ~]$ source .bash_profile
# Edit spark-env.sh
[hadoop@SY-0134 conf]$ pwd
/home/hadoop/labsp/spark1.2_hadoop2.3/conf
[hadoop@SY-0134 conf]$ vi spark-env.sh
Core configuration:
export JAVA_HOME=/home/hadoop/lab/
export SCALA_HOME=/home/hadoop/labsp/scala-2.10.3
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=SY-0134:2181,SY-0130:2181,SY-0131:2181,SY-0132:2181,SY-0133:2181 -Dspark.deploy.zookeeper.dir=/spark"
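sbin/start-all.sh (used later) launches workers on the hosts listed in conf/slaves; the original post does not show that file, but given the roles above it would presumably contain the three worker hostnames:

[hadoop@SY-0134 conf]$ vi slaves
SY-0131
SY-0132
SY-0133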
At this point the JDK, Scala, and Spark installation and environment variables are all set; of course, the .bash_profile changes above could also be made in a single pass.
6. Zookeeper installation
I extracted Zookeeper to /home/hadoop/labsp/zookeeper-3.4.6.
# Configure the zoo.cfg file
[hadoop@SY-0134 zookeeper-3.4.6]$ pwd
/home/hadoop/labsp/zookeeper-3.4.6
[hadoop@SY-0134 zookeeper-3.4.6]$ mkdir data
[hadoop@SY-0134 zookeeper-3.4.6]$ mkdir datalog
[hadoop@SY-0134 zookeeper-3.4.6]$ cd conf
[hadoop@SY-0134 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@SY-0134 conf]$ vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hadoop/labsp/zookeeper-3.4.6/data
dataLogDir=/home/hadoop/labsp/zookeeper-3.4.6/datalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=SY-0134:2888:3888
server.2=SY-0130:2888:3888
server.3=SY-0131:2888:3888
server.4=SY-0132:2888:3888
server.5=SY-0133:2888:3888
# Configure the myid file
[hadoop@SY-0134 data]$ pwd
/home/hadoop/labsp/zookeeper-3.4.6/data
Write 1 into the myid file of SY-0134's Zookeeper:
echo "1" > /home/hadoop/labsp/zookeeper-3.4.6/data/myid
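A quick sanity check of the myid file (my own addition) is to print it back:

[hadoop@SY-0134 data]$ cat myid
1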
7. Passwordless SSH login
Although in the Hadoop cluster SY-0130 can already log in to SY-0131, SY-0132, and SY-0133 without a password, in this Spark cluster the Master is SY-0134, and it needs to be able to log in to SY-0130, SY-0131, SY-0132, and SY-0133 without a password.
# I first generated the key pair on SY-0134.
[hadoop@SY-0134 ~]$ ssh-keygen -t rsa
[hadoop@SY-0134 ~]$ cd .ssh
[hadoop@SY-0134 .ssh]$ ls
id_rsa  id_rsa.pub  known_hosts
# Copy the id_rsa.pub file to SY-0130
[hadoop@SY-0134 .ssh]$ scp id_rsa.pub hadoop@SY-0130:~/.ssh/authorized_keys
# Generate a key pair on SY-0130.
[hadoop@SY-0130 ~]$ ssh-keygen -t rsa
[hadoop@SY-0130 ~]$ cd .ssh
[hadoop@SY-0130 .ssh]$ ls
authorized_keys id_rsa id_rsa.pub known_hosts
# Append the contents of SY-0130's own id_rsa.pub to authorized_keys. This step is slightly different from the others.
[hadoop@SY-0130 .ssh]$ cat id_rsa.pub >>authorized_keys
# Use scp to copy the authorized_keys file on SY-0130 to SY-0131, SY-0132, and SY-0133.
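The original does not show the copy commands; a sketch, assuming the hadoop user and the default ~/.ssh location on every node:

[hadoop@SY-0130 .ssh]$ scp authorized_keys hadoop@SY-0131:~/.ssh/authorized_keys
[hadoop@SY-0130 .ssh]$ scp authorized_keys hadoop@SY-0132:~/.ssh/authorized_keys
[hadoop@SY-0130 .ssh]$ scp authorized_keys hadoop@SY-0133:~/.ssh/authorized_keys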
8. Installing Spark, Scala, and Zookeeper on the other nodes
The 7 steps above only complete the installation of Spark, Scala, and Zookeeper on SY-0134. The three installation directories must be copied with scp to SY-0130, SY-0131, SY-0132, and SY-0133, and the same environment variables must be set on those nodes as well.
[hadoop@SY-0134 labsp]$ ls
scala-2.10.3 spark1.2_hadoop2.3 zookeeper-3.4.6
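The copy of the three installation directories might be done like this (a sketch; it assumes the same /home/hadoop path is used as the target on every node):

[hadoop@SY-0134 ~]$ scp -r /home/hadoop/labsp hadoop@SY-0130:/home/hadoop/
[hadoop@SY-0134 ~]$ scp -r /home/hadoop/labsp hadoop@SY-0131:/home/hadoop/
[hadoop@SY-0134 ~]$ scp -r /home/hadoop/labsp hadoop@SY-0132:/home/hadoop/
[hadoop@SY-0134 ~]$ scp -r /home/hadoop/labsp hadoop@SY-0133:/home/hadoop/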
One more point: the Zookeeper servers on different nodes must have different contents in their myid files.
echo \"1\"> home/hadoop/labsp/zookeeper- #SY-0134 echo \"2\"> home/hadoop/labsp/zookeeper- #SY-0130 echo \"3\"> home/hadoop/labsp/zookeeper- #SY-0131 echo \"4\"> home/hadoop/labsp/zookeeper- #SY-0132 echo \"5\"> home/hadoop/labsp/zookeeper- #SY-0133
Cluster startup test:
1. Start Zookeeper on each of the five nodes.
[hadoop@SY-0134 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@SY-0130 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@SY-0131 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@SY-0132 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@SY-0133 zookeeper-3.4.6]$ bin/zkServer.sh start
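Before moving on, it can help to confirm that the quorum formed; running this on each node should report one leader and four followers (my own check, not from the original post):

[hadoop@SY-0134 zookeeper-3.4.6]$ bin/zkServer.sh status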
2. Start the Spark Master on SY-0134
[hadoop@SY-0134 spark1.2_hadoop2.3]$ sbin/start-all.sh
starting , logging to /home/hadoop/labsp/spark1.2_hadoop2.3/sbin/../logs/spark-hadoop-
SY-0133: starting , logging to /home/hadoop/labsp/spark1.2_hadoop2.3/sbin/../logs/spark-hadoop-
SY-0132: starting , logging to /home/hadoop/labsp/spark1.2_hadoop2.3/sbin/../logs/spark-hadoop-
SY-0131: starting , logging to /home/hadoop/labsp/spark1.2_hadoop2.3/sbin/../logs/spark-hadoop-
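A quick way to see whether the daemons actually started (not part of the original post) is jps: SY-0134 should show a Master process plus QuorumPeerMain, and each worker node a Worker process, alongside any Hadoop daemons already running:

[hadoop@SY-0134 spark1.2_hadoop2.3]$ jps
[hadoop@SY-0131 ~]$ jps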
3. Start the Standby Spark Master on SY-0130
[hadoop@SY-0130 spark1.2_hadoop2.3]$ sbin/start-master.sh
starting , logging to /lab/labsp/spark1.2_hadoop2.3/sbin/../logs/spark-hadoop-
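To exercise the HA setup (my own addition, assuming the default ports: 8080 for the master web UI and 7077 for the master itself), the web UI on SY-0134 should report status ALIVE and the one on SY-0130 STANDBY; an application can list both masters so that it can fail over if the active one dies:

[hadoop@SY-0134 spark1.2_hadoop2.3]$ bin/spark-shell --master spark://SY-0134:7077,SY-0130:7077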