Preparation

Hadoop and Hive are already deployed; the next step is to install and set up Spark.

Node layout:

Name             IP               HostName
Master, Worker   192.168.8.101    master1
Worker           192.168.8.201    slave1
Worker           192.168.8.202    slave2

Download and extract Scala

wget https://downloads.lightbend.com/scala/2.13.4/scala-2.13.4.tgz
sudo mkdir /opt/scala
sudo tar -zxvf scala-2.13.4.tgz -C /opt/scala/
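
Note that the pre-built Spark 2.4.7 package ships its own Scala 2.11 runtime (the spark-shell banner later confirms "Using Scala version 2.11.12"), so the standalone Scala installed above is mainly for compiling your own code. If you prefer the versions to match, Scala 2.11.12 can be fetched the same way (URL follows the same pattern as above):

wget https://downloads.lightbend.com/scala/2.11.12/scala-2.11.12.tgz
sudo tar -zxvf scala-2.11.12.tgz -C /opt/scala/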

Download and extract Spark

wget https://downloads.apache.org/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz
tar -zxvf spark-2.4.7-bin-hadoop2.7.tgz -C /opt/bigdata/
mv /opt/bigdata/spark-2.4.7-bin-hadoop2.7 /opt/bigdata/spark-2.4.7
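
The -bin-hadoop2.7 build is chosen to match the Hadoop 2.7.x cluster already running here; a quick check (assuming hadoop is on the PATH from the earlier deployment):

hadoop version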

Configure the environment variables by appending the following to /etc/profile:

export SCALA_HOME=/opt/scala/scala-2.13.4
export SPARK_HOME=/opt/bigdata/spark-2.4.7
export PATH=$PATH:$SCALA_HOME/bin:$SPARK_HOME/bin

Run source /etc/profile to make the configuration take effect.
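
As a quick sanity check that the variables are picked up (the exact output depends on the versions installed):

scala -version
spark-submit --version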

Installation

Configure conf/slaves

cd $SPARK_HOME
cp conf/slaves.template conf/slaves
vim conf/slaves

Add the cluster nodes:

1
2
3
master1
slave1
slave2
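
start-all.sh starts a Worker on every host listed here over SSH, so passwordless SSH from master1 to each node must already be in place (it normally is if the Hadoop start scripts work). A quick check, assuming the hostnames above:

ssh slave1 hostname
ssh slave2 hostname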

Configure conf/spark-env.sh

cp conf/log4j.properties.template conf/log4j.properties
cp conf/spark-env.sh.template conf/spark-env.sh
vim conf/spark-env.sh

Add the following environment settings, making master1 the Master node:

export JAVA_HOME=/opt/java/jdk1.8.0_271
export HADOOP_HOME=/opt/bigdata/hadoop-2.7.7
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_CONF_DIR
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_MASTER_IP=master1
export SCALA_HOME=/opt/scala/scala-2.13.4
export SPARK_EXECUTOR_MEMORY=512M
export SPARK_WORKER_MEMORY=128M
export SPARK_DRIVER_MEMORY=512M
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
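
One caveat about the toy memory values above: in standalone mode an executor cannot be given more memory than its worker advertises, so with SPARK_WORKER_MEMORY=128M no 512M executor can ever be scheduled and jobs will sit waiting for resources. If the machines have the RAM, something along these lines (illustrative values only) is safer:

export SPARK_WORKER_MEMORY=1g
export SPARK_EXECUTOR_MEMORY=512M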

Once the configuration is complete, sync it to the other nodes:

xsync $SPARK_HOME
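
xsync here is assumed to be the rsync-based distribution script set up during the earlier Hadoop deployment. If it is not available, a plain rsync loop does the same job; /opt/scala and the /etc/profile changes need to reach the workers as well (hostnames as in the table above, and the user is assumed to be able to write to these paths on the workers):

for host in slave1 slave2; do
  rsync -av /opt/scala/ $host:/opt/scala/
  rsync -av $SPARK_HOME/ $host:$SPARK_HOME/
done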

Start the Spark cluster

bennie@master1:/opt/bigdata/spark-2.4.7$ $SPARK_HOME/sbin/start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /opt/bigdata/spark-2.4.7/logs/spark-bennie-org.apache.spark.deploy.master.Master-1-master1.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/bigdata/spark-2.4.7/logs/spark-bennie-org.apache.spark.deploy.worker.Worker-1-slave2.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/bigdata/spark-2.4.7/logs/spark-bennie-org.apache.spark.deploy.worker.Worker-1-slave1.out
master1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/bigdata/spark-2.4.7/logs/spark-bennie-org.apache.spark.deploy.worker.Worker-1-master1.out
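
The Master's web UI should now be reachable at http://master1:8080 (the SPARK_MASTER_WEBUI_PORT set above), and jps should show the expected daemons on each node:

# on master1: expect both Master and Worker
jps
# on the slaves: expect one Worker each
ssh slave1 jps
ssh slave2 jps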

Basic Usage

spark-shell

Run spark-shell --master spark://master1:7077 to start the Spark shell:

bennie@master1:~$ spark-shell --master spark://master1:7077
21/01/13 23:36:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://master1:4040
Spark context available as 'sc' (master = spark://master1:7077, app id = app-20210124233653-0000).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.7
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_271)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
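
As a quick smoke test against the cluster, a trivial RDD job can be run from the prompt (the numbers are arbitrary and the output shown is approximate):

scala> val rdd = sc.parallelize(1 to 1000)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.map(_ * 2).reduce(_ + _)
res0: Int = 1001000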

pyspark

First install Anaconda3 (download link), then install pyspark with pip and start it against the cluster:

(base) ➜  ~ pip install pyspark==2.4.7 -i https://pypi.douban.com/simple

(base) ➜ ~ pyspark --master spark://master1:7077
Python 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.7
      /_/

Using Python version 3.7.6 (default, Jan 8 2020 20:23:39)
SparkSession available as 'spark'.
>>>
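
The same smoke test works from the Python prompt. Note that the pip-installed pyspark (2.4.7 above) should match the cluster's Spark version, otherwise jobs typically fail with version or serialization errors:

>>> sc.parallelize(range(1000)).map(lambda x: x * 2).sum()
999000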