I. Maintenance Operations

1. HDFS

Decommissioning HDFS nodes

https://blog.csdn.net/weixin_44758876/article/details/122535840
https://blog.csdn.net/moyefeng198436/article/details/113652813
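The linked posts boil down to: list the hosts to retire in an exclude file referenced by `dfs.hosts.exclude`, refresh the NameNode, and wait for those nodes to show Decommissioned in the dfsadmin report. A minimal sketch; the exclude-file path here is an example, adjust to this cluster's layout:

```xml
<!-- hdfs-site.xml: point the NameNode at an exclude file (path is an example) -->
<property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/fqlhadoop/hadoop/etc/hadoop/dfs.exclude</value>
</property>
```

Then add one hostname per line to the exclude file, run `bin/hdfs dfsadmin -refreshNodes`, and watch `bin/hdfs dfsadmin -report` until the nodes reach Decommissioned before shutting them down.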

2. HBase

2.1 HBase hbck

Bypass stuck procedures by PID (-o override, -r recursive):
hbase org.apache.hbase.HBCK2 bypass -or 111887 111888

Show HBCK2 usage/help:
hbase org.apache.hbase.HBCK2
https://developer.aliyun.com/article/683107

2.2 Quickly bringing up a new HBase cluster

Point HBase at a fresh HDFS root directory and ZooKeeper znode (hbase-site.xml):

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://hacluster3/hbase_20231204</value>
</property>

<property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase_20231204</value>
</property>

II. Service Operations

1. HDFS NameNode

/usr/local/fqlhadoop/hadoop/sbin/hadoop-daemon.sh start namenode
/usr/local/fqlhadoop/hadoop/sbin/hadoop-daemon.sh stop namenode

bin/hdfs haadmin -getServiceState n1

2. HDFS DataNode

2.1 Start/stop
/usr/local/fqlhadoop/hadoop/sbin/hadoop-daemon.sh start datanode
/usr/local/fqlhadoop/hadoop/sbin/hadoop-daemon.sh stop datanode

2.2 Rebalance data across DataNodes
/usr/local/fqlhadoop/hadoop/sbin/start-balancer.sh -threshold 5
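What `-threshold 5` means: the balancer moves blocks until every DataNode's utilization is within 5 percentage points of the cluster average. A runnable illustration of that check with made-up utilization numbers (this is not the balancer itself, only the balance criterion):

```shell
# Hypothetical per-node utilization percentages for a four-node cluster.
avg=$(printf '60 70 80 90\n' | awk '{ s=0; for (i=1;i<=NF;i++) s+=$i; print s/NF }')

for u in 60 70 80 90; do
  # Absolute deviation of this node from the cluster average.
  dev=$(awk -v u="$u" -v a="$avg" 'BEGIN { d = u - a; if (d < 0) d = -d; print d }')
  # Within the -threshold 5 band? Then the balancer leaves it alone.
  ok=$(awk -v d="$dev" 'BEGIN { if (d <= 5) print "balanced"; else print "over threshold" }')
  echo "node at ${u}%: deviation ${dev} -> ${ok}"
done
```

Lower thresholds give a more even cluster but make the balancer move (much) more data.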

3. dfsadmin

bin/hdfs dfsadmin -report

HDFS safe mode commands:
bin/hdfs dfsadmin -safemode get
bin/hdfs dfsadmin -safemode leave
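During maintenance it is common to wait for the NameNode to leave safe mode before restarting dependent services. A poll-loop sketch; the `hdfs` shell function below is a stub that simulates three polls so the snippet runs anywhere, and should be deleted on a real cluster node where the actual client is on PATH:

```shell
state_file=$(mktemp)
echo 3 > "$state_file"             # stub state: pretend three polls are needed
hdfs() {                           # stub standing in for the real hdfs client
  n=$(cat "$state_file")
  if [ "$n" -gt 1 ]; then
    echo $((n - 1)) > "$state_file"
    echo "Safe mode is ON"
  else
    echo "Safe mode is OFF"
  fi
}

polls=1
# Poll until the NameNode reports safe mode OFF.
until hdfs dfsadmin -safemode get | grep -q OFF; do
  polls=$((polls + 1))
  sleep 0                          # use a real interval (e.g. sleep 5) in production
done
echo "NameNode left safe mode after $polls polls"
rm -f "$state_file"
```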

List YARN NodeManagers (a yarn command, kept here for quick reference):
bin/yarn node -list

4. Yarn ResourceManager

/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh start resourcemanager
/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh stop resourcemanager

5. Yarn NodeManager

/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh start nodemanager
/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh stop nodemanager

6. Yarn WebAppProxyServer

/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh stop proxyserver
/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh start proxyserver
Process name: org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer
Port: 8888

7. Yarn operations commands

yarn application -list
yarn application -queue root.lx_fdw
yarn application -list -queue lx_fdw -appStates ACCEPTED
yarn application -list -queue lx_fdw -appStates ACCEPTED | grep thrift_XXX.hadoop.com_10017

yarn application -kill application_1652347896268_18628697
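The grep above narrows the `-list` output by name; to act on all matches (for example, a bulk kill), extract the first column. A runnable sketch against a captured sample of the list format (the column layout is assumed from Hadoop 2.x output, verify against this cluster's version):

```shell
# Sample of `yarn application -list` output: header lines plus one row per app.
sample='Total number of applications (application-types: [] and states: [ACCEPTED]):2
                Application-Id      Application-Name
application_1652347896268_0001      thrift_XXX.hadoop.com_10017
application_1652347896268_0002      some_other_job'

# Keep only rows whose first field is an application id; print that field.
app_ids=$(printf '%s\n' "$sample" | awk '$1 ~ /^application_/ { print $1 }')
echo "$app_ids"

# To kill them all, pipe into xargs (commented out here since there is no cluster):
# printf '%s\n' "$app_ids" | xargs -r -n1 yarn application -kill
```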

Active/standby failover:
yarn rmadmin -transitionToStandby rm1

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

8. Hive

Server-side startup

8.1 Hive metastore

cd /usr/local/fqlhadoop/hive/
bin/hive --service metastore -p 3316 &
ps -ef | grep 'org.apache.hadoop.hive.metastore.HiveMetaStore' | grep -v grep | awk '{print $2}' | xargs kill -9

8.2 HiveServer2

bin/hive --service hiveserver2 &

Related configuration (vim conf/hive-site.xml):
hive.server2.thrift.port, hive.server2.thrift.bind.host

Reference: https://blog.csdn.net/qq_37171694/article/details/128070374

Client connection

Connect to a specific Hive metastore server:
hive --hiveconf hive.metastore.uris=thrift://XXX.hadoop.com:3316

9. Spark

## spark-sql
spark-sql --master=yarn --queue lx_etl --driver-memory 1g --driver-java-options '-XX:MetaspaceSize=1g -XX:MaxMetaspaceSize=1g' --num-executors 1 --executor-memory 1g --executor-cores 1 --conf spark.yarn.am.memory=2048m --hiveconf hive.cli.print.header=false



## Spark HistoryServer
/usr/local/fqlhadoop/spark/sbin/start-history-server.sh
/usr/local/fqlhadoop/spark/sbin/stop-history-server.sh
Note: `start-history-server.sh hdfs://hacluster/sparklog` (log directory as an argument) is a legacy form; stop-history-server.sh takes no argument.
IP: XX.10.29.98
Process name: org.apache.spark.deploy.history.HistoryServer
Port: 18080
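Newer Spark versions read the history log directory from configuration rather than a command-line argument. A plausible spark-defaults.conf fragment matching the `hdfs://hacluster/sparklog` path used above (verify against this cluster's actual settings):

```properties
# spark-defaults.conf: where running apps write event logs,
# and where the HistoryServer reads them from
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://hacluster/sparklog
spark.history.fs.logDirectory    hdfs://hacluster/sparklog
```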

## MapReduce JobHistoryServer
/usr/local/fqlhadoop/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
/usr/local/fqlhadoop/hadoop/sbin/mr-jobhistory-daemon.sh stop historyserver
IP: XX.10.29.98
Process name: org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
Port: 19888

10. HBase

/usr/local/fqlhadoop/hbase/bin/hbase-daemon.sh stop master
/usr/local/fqlhadoop/hbase/bin/hbase-daemon.sh start master

/usr/local/fqlhadoop/hbase/bin/hbase-daemon.sh stop regionserver
# graceful_stop.sh drains regions off the server before stopping it; prefer it for planned maintenance
/usr/local/fqlhadoop/hbase/bin/graceful_stop.sh 5.hadoop3.com

/usr/local/fqlhadoop/hbase/bin/hbase-daemon.sh start regionserver

11. Mixed operations

# 1. Operate 2 Yarn clusters from one ETL client machine
ai_etl@10.XX.26.20
Edit /home/ai_etl/fqlhadoop_client/hadoop-3.2.0/bin/yarn and add the following line:
source /home/ai_etl/fqlhadoop_client/hadoop-3.2.0/bin/hadoop-functions.sh

## 1.1 Operate the common Yarn cluster
/home/ai_etl/fqlhadoop_client/hadoop-2.7.7/bin/yarn application -list

## 1.2 Operate the model Yarn cluster
export HADOOP_HOME=/home/ai_etl/fqlhadoop_client/hadoop-3.2.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
export YARN_CONF_DIR=${HADOOP_HOME}/conf
export HADOOP_COMMON_HOME=${HADOOP_HOME}
/home/ai_etl/fqlhadoop_client/hadoop-3.2.0/bin/yarn application -list
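Exporting those four variables changes the login shell for everything that follows; wrapping them with `env` scopes them to a single invocation instead. A sketch (the `yarn` client is replaced by `sh -c` in the demonstration so the snippet runs without a cluster; substitute the real client on the ETL machine):

```shell
MODEL_HOME=/home/ai_etl/fqlhadoop_client/hadoop-3.2.0

# Run one command with the model cluster's environment, without
# exporting anything into the current shell.
model_env() {
  env HADOOP_HOME="$MODEL_HOME" \
      HADOOP_CONF_DIR="$MODEL_HOME/conf" \
      YARN_CONF_DIR="$MODEL_HOME/conf" \
      HADOOP_COMMON_HOME="$MODEL_HOME" \
      "$@"
}

# Demonstration with a plain shell in place of the yarn client:
seen=$(model_env sh -c 'echo "$HADOOP_CONF_DIR"')
echo "$seen"
# On the client machine this would be: model_env "$MODEL_HOME/bin/yarn" application -list
```

This keeps the common-cluster client (hadoop-2.7.7) usable in the same shell session without re-exporting anything.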



# 2. Co-deploy different versions of NodeManager and DataNode on one machine
biadmin@10.XX.24.100
## 2.1 Start/stop NodeManager
/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh start nodemanager
/usr/local/fqlhadoop/hadoop/sbin/yarn-daemon.sh stop nodemanager

## 2.2 Start/stop DataNode
export HADOOP_HOME=/usr/local/fqlhadoop/hadoop_cold
export HADOOP_CONF_DIR=/usr/local/fqlhadoop/hadoop_cold/conf
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
/usr/local/fqlhadoop/hadoop_cold/sbin/hadoop-daemon.sh start datanode
/usr/local/fqlhadoop/hadoop_cold/sbin/hadoop-daemon.sh stop datanode