---
title: "hadoop2.10 Deployment"
date: 2023-05-23T10:00:00+08:00
lastmod: 2025-12-01T10:00:00+08:00
keywords: []
tags: ["hadoop", "hive", "tez", "hbase", "spark"]
categories: ["hadoop"]
---

## Environment

Hostname | Address | Data directory | Components
---- | ---- | ---- | ----
build server | - | - | various build tools
hdp-nn | 192.168.8.1/24 | /data/hdp-nn | Namenode Spark
hdp-snn | 192.168.8.2/24 | /data/hdp-snn | SecondaryNamenode
hdp-rm | 192.168.8.3/24 | - | ResourceManager
hdp-slave0 | 192.168.8.10/24 | /data/hdp-dn | Datanode NodeManager Spark
hdp-slave1 | 192.168.8.11/24 | /data/hdp-dn | Datanode NodeManager Spark
hive-hs | 192.168.8.20/24 | - | HiveServer2 Tez
hive-ms | 192.168.8.21/24 | - | HiveMetastore Tez
hbase-m | 192.168.8.30/24 | - | HbaseMaster
hbase-bm | 192.168.8.31/24 | - | HbaseBackupMaster
hbase-rs0 | 192.168.8.32/24 | - | HbaseRegionServer
hbase-rs1 | 192.168.8.33/24 | - | HbaseRegionServer

## Deploy the hadoop cluster

### Initial server configuration

- Perform the following steps on **all hosts**
- Disable the firewall
- Disable selinux
- Configure time synchronization (commands for these three steps are sketched after this list)
- Configure hostname resolution: edit /etc/hosts and append the following
```
# hadoop
192.168.8.1 hdp-nn
192.168.8.2 hdp-snn
192.168.8.3 hdp-rm
192.168.8.10 hdp-slave0
192.168.8.11 hdp-slave1
```
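
A minimal sketch of the firewall, selinux and time-sync steps, assuming a systemd-based distribution with firewalld and chrony available; adapt the package manager and NTP source to your environment:
```bash
# Disable the firewall (assuming firewalld is the one in use)
systemctl disable --now firewalld

# Disable selinux now and on subsequent boots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Time synchronization (assuming chrony; point it at your own NTP server)
yum install -y chrony
systemctl enable --now chronyd
chronyc sources
```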

### Passwordless ssh login

- On **hdp-nn**, set up passwordless ssh login to hdp-nn, hdp-snn and hdp-slaveX (generate a key pair first if there is none; see the sketch after this section)
```bash
ssh-copy-id hdp-nn
ssh-copy-id hdp-snn
ssh-copy-id hdp-slave0
ssh-copy-id hdp-slave1
```

- On **hdp-rm**, set up passwordless ssh login to hdp-rm and hdp-slaveX
```bash
ssh-copy-id hdp-rm
ssh-copy-id hdp-slave0
ssh-copy-id hdp-slave1
```
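
If the account running these commands on hdp-nn or hdp-rm does not have an ssh key pair yet, ssh-copy-id will fail; a minimal sketch to create one non-interactively:
```bash
# Generate an RSA key pair with an empty passphrase (run once on each control host)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
```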

### Deploy the jdk8 environment

- On **all hosts**, download the **latest jdk8 package** and unpack it
```bash
tar zxf jdk-8u471-linux-x64.tar.gz
mv jdk1.8.0_471 /opt/jdk
# No jdk environment variables need to be configured
```
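
An optional sanity check that the unpacked JDK actually runs (the update number printed depends on the package you downloaded):
```bash
/opt/jdk/bin/java -version
```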

### Deploy the dfs and yarn clusters

- Perform the following steps on **all hosts**
- Download the hadoop 2.10.2 binary package and unpack it
```bash
curl -LO https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.10.2/hadoop-2.10.2.tar.gz
tar zxf hadoop-2.10.2.tar.gz
mv hadoop-2.10.2 /opt/hdp
```

- Configure the hadoop environment variables
```bash
echo 'export HADOOP_HOME=/opt/hdp' > /etc/profile.d/hdp.sh
echo 'export PATH=$HADOOP_HOME/bin:$PATH' >> /etc/profile.d/hdp.sh
# Adding $HADOOP_HOME/sbin to PATH is not recommended, to avoid script-name clashes with spark
source /etc/profile.d/hdp.sh
```

- Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set the JAVA_HOME environment variable
```bash
export JAVA_HOME=/opt/jdk
```

- Edit $HADOOP_HOME/etc/hadoop/core-site.xml; reference content below
```xml
<configuration>
    <property>
        <!-- hdfs RPC address of the namenode -->
        <name>fs.defaultFS</name>
        <value>hdfs://hdp-nn:8020</value>
    </property>
    <property>
        <!-- Directory for the cluster's temporary files; a dedicated disk is recommended on datanodes -->
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hdp</value>
    </property>
    <property>
        <!-- Allow the root proxy user, required for hive beeline logins as root -->
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <!-- Allow the root proxy user, required for hive beeline logins as root -->
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>
```

- Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml; reference content below
```xml
<configuration>
    <property>
        <!-- Where the namenode stores its metadata; several comma-separated directories may be given for redundancy -->
        <name>dfs.namenode.name.dir</name>
        <value>/data/hdp-nn</value>
    </property>
    <property>
        <!-- Where the secondary namenode stores checkpoint images; several comma-separated directories may be given for redundancy -->
        <name>dfs.namenode.checkpoint.dir</name>
        <value>/data/hdp-snn</value>
    </property>
    <property>
        <!-- Where datanodes store blocks; several comma-separated directories (one per disk) improve io throughput -->
        <name>dfs.datanode.data.dir</name>
        <value>/data/hdp-dn</value>
    </property>
    <property>
        <!-- Web UI address of the namenode -->
        <name>dfs.namenode.http-address</name>
        <value>hdp-nn:9870</value>
    </property>
    <property>
        <!-- Host and port of the secondary namenode -->
        <name>dfs.namenode.secondary.http-address</name>
        <value>hdp-snn:9868</value>
    </property>
    <property>
        <!-- hdfs replication factor, 3 by default; set to 2 so data is still redundant with two datanodes -->
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <!-- Enable the webhdfs api -->
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
```

- Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml; reference content below
```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <!-- Hostname of the resourcemanager -->
        <name>yarn.resourcemanager.hostname</name>
        <value>hdp-rm</value>
    </property>
    <property>
        <!-- Web UI address of the resourcemanager (default port 8088) -->
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hdp-rm:8088</value>
    </property>
    <!-- Give each nodemanager 6 usable vcores
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>6</value>
    </property> -->
    <!-- Give each nodemanager 12GB of usable memory
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>12288</value>
    </property> -->
    <property>
        <!-- (Optional) Enable log aggregation so the logs of finished jobs can be viewed in the Web UI -->
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <!-- (Optional) Log retention time (7 days) -->
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <!-- The yarn virtual memory check must be disabled when running spark/tez -->
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <!-- The yarn physical memory check must be disabled when running spark -->
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
```

- Edit $HADOOP_HOME/etc/hadoop/mapred-env.sh and set the JAVA_HOME environment variable
```bash
export JAVA_HOME=/opt/jdk
```

- Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml; reference content below
```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <!-- MapReduce JobHistory Server address -->
        <name>mapreduce.jobhistory.address</name>
        <value>hdp-rm:10020</value>
    </property>
    <property>
        <!-- MapReduce JobHistory Server Web UI address (default port 19888) -->
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hdp-rm:19888</value>
    </property>
</configuration>
```

- Edit $HADOOP_HOME/etc/hadoop/slaves and replace its content with the full list of slave hosts, for example
```
hdp-slave0
hdp-slave1
```

### Deploy the spark cluster

- Perform the following steps on the **hdp-X** hosts
- Download spark-3.3.4-bin-hadoop2.tgz and unpack it
```bash
curl -LO https://archive.apache.org/dist/spark/spark-3.3.4/spark-3.3.4-bin-hadoop2.tgz
tar zxf spark-3.3.4-bin-hadoop2.tgz
mv spark-3.3.4-bin-hadoop2 /opt/spark
```

- Configure the spark environment variables
```bash
echo 'export SPARK_HOME=/opt/spark' > /etc/profile.d/spark.sh
echo 'export PATH=$SPARK_HOME/bin:$PATH' >> /etc/profile.d/spark.sh
# Adding $SPARK_HOME/sbin to PATH is not recommended, to avoid script-name clashes with hadoop
source /etc/profile.d/spark.sh
```

- Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml and disable the yarn virtual memory check (already disabled above)
- Edit $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml as follows
```xml
<configuration>
    <property>
        <name>yarn.scheduler.capacity.resource-calculator</name>
        <!-- <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value> -->
        <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>
</configuration>
```

- Create $SPARK_HOME/conf/spark-defaults.conf; reference content below
```
spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hdp-nn:8020/spark-logs
```

- Create $SPARK_HOME/conf/spark-env.sh; reference content below
```bash
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.fs.logDirectory=hdfs://hdp-nn:8020/spark-logs -Dspark.history.retainedApplications=30"
```

### Format the namenode

- Perform the following on **hdp-nn**
```bash
hdfs namenode -format
```
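
After a successful format, the name directory configured in hdfs-site.xml contains a fresh `current/` directory; a quick check, assuming the /data/hdp-nn path used above:
```bash
ls /data/hdp-nn/current/
# expected files include VERSION, fsimage_0000000000000000000 and seen_txid
```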

### Start the hadoop cluster

- On **hdp-nn**, start the dfs cluster
```bash
/opt/hdp/sbin/start-dfs.sh
```

- On **hdp-rm**, start the yarn cluster
```bash
/opt/hdp/sbin/start-yarn.sh
```

- On the **hdp-X** hosts, check the running java processes
```bash
/opt/jdk/bin/jps
```
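
Roughly which daemons jps should report on each host if everything came up (PIDs will of course differ):
```bash
# hdp-nn     : NameNode
# hdp-snn    : SecondaryNameNode
# hdp-rm     : ResourceManager
# hdp-slave0 : DataNode, NodeManager
# hdp-slave1 : DataNode, NodeManager
/opt/jdk/bin/jps
```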

### Start the spark history service

- Perform the following on **hdp-nn**
- Create the spark log directory
```bash
hdfs dfs -mkdir /spark-logs
```

- Start the history service
```bash
/opt/spark/sbin/start-history-server.sh
```
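
An optional check that the history server is listening on the port set in spark-env.sh:
```bash
curl -sI http://hdp-nn:18080 | head -n 1
# an HTTP 200 status line indicates the history server UI is reachable
```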

### Submitting distributed compute jobs

- Client mode
```bash
spark-shell
```

- Cluster mode
```bash
spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --deploy-mode cluster \
    $SPARK_HOME/examples/jars/spark-examples_2.12-3.3.4.jar
```
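
In cluster mode the job output does not come back to the submitting terminal; besides the web UIs mentioned below, the yarn CLI can be used to follow the application (the application id shown is just a placeholder, use the one printed by spark-submit):
```bash
# List running yarn applications and their tracking URLs
yarn application -list
# Show the final status of a finished application
yarn application -status application_XXXXXXXXXXXXX_0001
```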

- Visit http://{spark history server}:18080 in a browser to check job progress

---

## Deploy the hive cluster

### Prerequisites

- [mysql 8 is already deployed](/post/mysql-install/#安装-mysql84-通用二进制包)
- The mysql user and its database have been created; reference sql below
```sql
create user hive@'%' identified by 'Hive_1234';
create database hive default charset utf8mb4;
grant all on hive.* to hive@'%';
```
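
A quick check from any hive host that the account works and can see the new database, assuming the mysql client is installed and `mysql-ip` is replaced with the real address:
```bash
mysql -h mysql-ip -uhive -p'Hive_1234' -e 'show databases like "hive";'
```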

### Initial server configuration

- On the **hive-X** hosts, configure hostname resolution: edit /etc/hosts and append the following
```
# Note: the hadoop entries added earlier must not be removed
# hive
192.168.8.20 hive-hs
192.168.8.21 hive-ms
```

### Deploy the tez environment

#### Build tez

- Perform the following steps on the **build server**
- Building tez-0.9.2 requires protoc 2.5.0; download the protoc source, unpack it and build it
```bash
curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz
tar zxf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
mkdir /opt/protoc-2.5.0
./configure --prefix=/opt/protoc-2.5.0
make
make check
make install
```
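
A quick check that the freshly installed protoc is the version tez expects:
```bash
/opt/protoc-2.5.0/bin/protoc --version
# should print: libprotoc 2.5.0
```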

- Download the tez source package, unpack it and build it
```bash
curl -LO https://mirrors.tuna.tsinghua.edu.cn/apache/tez/0.9.2/apache-tez-0.9.2-src.tar.gz
tar zxf apache-tez-0.9.2-src.tar.gz
cd apache-tez-0.9.2-src
export PATH=/opt/protoc-2.5.0/bin:/opt/jdk8/bin:/opt/maven3/bin:$PATH
mvn clean package -DskipTests=true -Dtar -Dhadoop.version=2.10.2 -Dmaven.javadoc.skip=true -pl tez-dist -am
```

- Upload tez-dist/target/tez-0.9.2-minimal.tar.gz to the **hive-X** hosts
- Upload tez-dist/target/tez-0.9.2.tar.gz to **hive-hs**

#### Deploy tez

- On **hive-hs**, put tez-0.9.2.tar.gz into hdfs
```bash
hdfs dfs -mkdir /tez
hdfs dfs -put tez-0.9.2.tar.gz /tez/
```

- On the **hive-X** hosts, unpack the tez minimal package
```bash
mkdir /opt/tez
tar zxf tez-0.9.2-minimal.tar.gz -C /opt/tez/
```

- On **all hosts**, edit $HADOOP_HOME/etc/hadoop/yarn-site.xml and disable the yarn virtual memory check (already disabled above)
- On **all hosts**, create $HADOOP_HOME/etc/hadoop/tez-site.xml; reference content below
```xml
<configuration>
    <property>
        <name>tez.lib.uris</name>
        <type>string</type>
        <value>${fs.defaultFS}/tez/tez-0.9.2.tar.gz</value>
    </property>
</configuration>
```

- Restart the hadoop dfs and yarn clusters (see the sketch below)
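
A minimal sketch of the restart, using the start/stop scripts from the hadoop sbin directory on the hosts that run them:
```bash
# On hdp-nn: restart the dfs cluster
/opt/hdp/sbin/stop-dfs.sh
/opt/hdp/sbin/start-dfs.sh

# On hdp-rm: restart the yarn cluster
/opt/hdp/sbin/stop-yarn.sh
/opt/hdp/sbin/start-yarn.sh
```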

### Deploy the hive environment

- Perform the following steps on the **hive-X** hosts
- Download the hive 2.3.10 binary package and unpack it
```bash
curl -LO https://archive.apache.org/dist/hive/hive-2.3.10/apache-hive-2.3.10-bin.tar.gz
tar zxf apache-hive-2.3.10-bin.tar.gz
mv apache-hive-2.3.10-bin /opt/hive
```

- Download the mysql connector and extract the jar into hive's lib directory
```bash
curl -LO https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-j-8.0.33.tar.gz
tar zxf mysql-connector-j-8.0.33.tar.gz mysql-connector-j-8.0.33/mysql-connector-j-8.0.33.jar
mv mysql-connector-j-8.0.33/mysql-connector-j-8.0.33.jar /opt/hive/lib/
rm -rf mysql-connector-j-8.0.33*
```

- Configure the environment variables
```bash
echo 'export HIVE_HOME=/opt/hive' > /etc/profile.d/hive.sh
echo 'export PATH=$HIVE_HOME/bin:$PATH' >> /etc/profile.d/hive.sh
source /etc/profile.d/hive.sh
```

- Edit $HIVE_HOME/conf/hive-env.sh, setting the HADOOP_HOME environment variable and the tez libraries
```bash
HADOOP_HOME=/opt/hdp
export TEZ_HOME=/opt/tez
export HIVE_AUX_JARS_PATH=$TEZ_HOME/lib
export HADOOP_CLASSPATH=$TEZ_HOME:$TEZ_HOME/lib
```

- Create $HIVE_HOME/conf/hive-site.xml; reference content below
```xml
<configuration>
    <property>
        <!-- mysql address -->
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://mysql-ip:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    </property>
    <property>
        <!-- mysql driver -->
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.cj.jdbc.Driver</value>
    </property>
    <property>
        <!-- mysql user -->
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <!-- mysql password -->
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>Hive_1234</value>
    </property>
    <property>
        <!-- Initialize the hive schema automatically -->
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <!-- hive server web ui port -->
        <name>hive.server2.webui.port</name>
        <value>10002</value>
    </property>
    <property>
        <!-- Warehouse location (on hdfs) -->
        <name>hive.metastore.warehouse.dir</name>
        <value>/hive/warehouse</value>
    </property>
    <property>
        <!-- hive metastore address and port -->
        <name>hive.metastore.uris</name>
        <value>thrift://hive-ms:9083</value>
    </property>
    <property>
        <!-- Use the tez execution engine -->
        <name>hive.execution.engine</name>
        <value>tez</value>
    </property>
</configuration>
```

### Initialize hive

- On **hive-ms**, initialize the mysql schema
```bash
schematool -dbType mysql -initSchema
```

### Start the hive cluster

- On **hive-ms**, start the hive metastore
```bash
hive --service metastore
```

- On **hive-hs**, start the hive server
```bash
hive --service hiveserver2
```
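
Both commands stay attached to the terminal; one common way to keep them running in the background (an assumption on my part, not something the original steps prescribe; the log paths are arbitrary examples) is nohup:
```bash
# On hive-ms
nohup hive --service metastore > /var/log/hive-metastore.log 2>&1 &
# On hive-hs
nohup hive --service hiveserver2 > /var/log/hiveserver2.log 2>&1 &
```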

### Clients

- Connect locally with the hive cli
```bash
hive
```

- Connect with beeline; this requires the proxyuser settings in $HADOOP_HOME/etc/hadoop/core-site.xml (already configured above)
```bash
beeline -u jdbc:hive2://hive-hs:10000 -n root
```
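
A small non-interactive smoke test through beeline that also exercises the tez engine; the table name `t_smoke` is just an example:
```bash
beeline -u jdbc:hive2://hive-hs:10000 -n root \
    -e "create table if not exists t_smoke (id int, name string)" \
    -e "insert into t_smoke values (1, 'ok')" \
    -e "select * from t_smoke"
```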

---

## Deploy the hbase cluster

### Prerequisites

- [zookeeper 3.4 or later is already deployed](/post/zk-install)

### Initial server configuration

- On the **hbase-X** hosts, configure hostname resolution: edit /etc/hosts and append the following
```
# Note: the hadoop entries added earlier must not be removed
# hbase
192.168.8.30 hbase-m
192.168.8.31 hbase-bm
192.168.8.32 hbase-rs0
192.168.8.33 hbase-rs1
```

### Passwordless ssh login

- On **hbase-m**, set up passwordless ssh login to hbase-m, hbase-bm and hbase-rsX
```bash
ssh-copy-id hbase-m
ssh-copy-id hbase-bm
ssh-copy-id hbase-rs0
ssh-copy-id hbase-rs1
```

### Deploy the hbase environment

- Perform the following steps on the **hbase-X** hosts
- Download the hbase 2.5.13 binary package and unpack it
```bash
curl -LO https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/2.5.13/hbase-2.5.13-bin.tar.gz
tar zxf hbase-2.5.13-bin.tar.gz
mv hbase-2.5.13 /opt/hbase
```

- Configure the environment variables
```bash
echo 'export HBASE_HOME=/opt/hbase' > /etc/profile.d/hbase.sh
echo 'export PATH=$HBASE_HOME/bin:$PATH' >> /etc/profile.d/hbase.sh
source /etc/profile.d/hbase.sh
```

- Edit $HBASE_HOME/conf/hbase-env.sh and set the following three environment variables
```bash
export JAVA_HOME=/opt/jdk
export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop
export HBASE_MANAGES_ZK=false
```

- Replace the content of $HBASE_HOME/conf/hbase-site.xml with the following
```xml
<configuration>
    <property>
        <!-- Run in distributed mode -->
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <!-- hdfs address and directory -->
        <name>hbase.rootdir</name>
        <value>hdfs://hdp-nn:8020/hbase</value>
    </property>
    <property>
        <!-- zookeeper nodes to connect to -->
        <name>hbase.zookeeper.quorum</name>
        <value>zk1,zk2,zk3</value>
    </property>
</configuration>
```

- Replace the content of $HBASE_HOME/conf/regionservers with the following
```
hbase-rs0
hbase-rs1
```

- Create $HBASE_HOME/conf/backup-masters with the following content
```
hbase-bm
```

### Start the hbase cluster

- On **hbase-m**, start the hbase cluster
```bash
start-hbase.sh
```

### Client connections

- Enter the hbase shell locally
```bash
hbase shell
```
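
A small non-interactive smoke test fed to the hbase shell through a heredoc; the table name `t_smoke` and column family `cf` are just examples:
```bash
hbase shell <<'EOF'
status
create 't_smoke', 'cf'
put 't_smoke', 'row1', 'cf:c1', 'v1'
scan 't_smoke'
EOF
```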

## Problems

### nodemanager does not stop gracefully

- Symptom: when stopping yarn with $HADOOP_HOME/sbin/stop-yarn.sh, the terminal reports "nodemanager did not stop gracefully after 5 seconds: killing with kill -9"
- Fix: edit $HADOOP_HOME/sbin/stop-yarn.sh and adjust the stop order at the end of the script, so that the nodemanagers are stopped first and the resourcemanager last
```bash
# stop nodeManager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR stop nodemanager
# stop proxy server
"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR stop proxyserver
# stop resourceManager
"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR stop resourcemanager
```