1. Linux Big Data Notes: Installing and Deploying a Flink Cluster

A Flink cluster in YARN mode builds on a Hadoop cluster, so you must set up Hadoop first (see the earlier post on Hadoop cluster setup).
Standalone mode needs no Hadoop at all; this post covers standalone mode.

Environment plan

IP               HOSTNAME   ROLE
192.168.101.191  hadoop191  master
192.168.101.192  hadoop192  slave
192.168.101.193  hadoop193  slave

I. Download and Install

Download (using 1.20.0 as the example) from https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.20.0/ and grab flink-1.20.0-bin-scala_2.12.tgz.
Upload it to /opt/module/flink/ on each of the three servers and extract it: tar -zxvf flink-1.20.0-bin-scala_2.12.tgz

II. Edit the Configuration File

vim /opt/module/flink/flink-1.20.0/conf/config.yaml

The configuration on hadoop191:

jobmanager:
  # Address the JobManager binds to (0.0.0.0 allows access from any host)
  bind-host: 0.0.0.0
  rpc:
    # Hostname of the master; all three machines use hadoop191 here
    address: hadoop191
    port: 6123
  memory:
    process:
      size: 1600m
  execution:
    failover-strategy: region
taskmanager:
  # Address the TaskManager binds to
  bind-host: 0.0.0.0
  # Hostname of this task node; each machine fills in its own hostname
  host: hadoop191
  numberOfTaskSlots: 6
  memory:
    process:
      size: 1728m
parallelism:
  default: 4
rest:
  address: localhost
  # Address allowed for web access; change this only on hadoop191 -- the
  # browser will only connect to that host
  bind-address: 0.0.0.0

On hadoop192 and hadoop193, only taskmanager.host differs (each machine fills in its own hostname); leave rest.address untouched
and keep everything else identical to the master.
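The per-host edit can also be scripted. A minimal sketch, assuming the three hostnames from the plan above; it writes into a scratch directory under /tmp instead of the real conf path, so the sed pattern and loop can be tried safely:

```shell
# Sketch: stamp each node's own hostname into its copy of config.yaml.
# The heredoc stands in for the real file; on the cluster the target
# would be /opt/module/flink/flink-1.20.0/conf/config.yaml on each host.
for host in hadoop191 hadoop192 hadoop193; do
  dir=/tmp/flink-cfg-demo/$host
  mkdir -p "$dir"
  sed "s/^  host: .*/  host: $host/" > "$dir/config.yaml" <<'EOF'
taskmanager:
  bind-host: 0.0.0.0
  host: hadoop191
EOF
done
grep '^  host:' /tmp/flink-cfg-demo/hadoop192/config.yaml
```

The sed anchor `^  host:` deliberately skips bind-host, so only the per-node hostname line changes.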

III. Configure Masters and Workers

In /opt/module/flink/flink-1.20.0/conf, edit:
vim masters
content: hadoop191:8081
vim workers
content:
hadoop191
hadoop192
hadoop193
Distribute both files to the other two machines; masters and workers are identical on all three.
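The two files can also be written non-interactively. A sketch using a scratch directory (on the real nodes the target is the conf directory above, and the copies would then be pushed out, e.g. via scp):

```shell
# Sketch: generate masters and workers in a scratch directory; on the
# cluster the target would be /opt/module/flink/flink-1.20.0/conf,
# and the two files would then be copied to the other two machines.
conf=/tmp/flink-files-demo
mkdir -p "$conf"
printf 'hadoop191:8081\n' > "$conf/masters"
printf '%s\n' hadoop191 hadoop192 hadoop193 > "$conf/workers"
cat "$conf/masters" "$conf/workers"
```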

IV. Permissions

On each of the three machines, in /opt/module/flink, run: chown -R hadoop:hadoop flink-1.20.0

V. Passwordless SSH

The hadoop account on my three machines already has passwordless SSH configured (see section 1.4 of the Hadoop cluster post).

VI. Write a Startup Script

In /home/hadoop/bin, run vim flink.sh:

#!/bin/bash
if [ $# -lt 1 ]; then
 echo "No Args Input..."
 exit 1
fi
case $1 in
"start")
 echo " =================== starting the flink cluster ==================="
 ssh hadoop191 "/opt/module/flink/flink-1.20.0/bin/start-cluster.sh"
;;
"stop")
 echo " =================== stopping the flink cluster ==================="
 ssh hadoop191 "/opt/module/flink/flink-1.20.0/bin/stop-cluster.sh"
;;
*)
 echo "Input Args Error..."
;;
esac

Make the script executable and set its ownership:
chmod +x flink.sh
chown -R hadoop:hadoop flink.sh

VII. Start

Run as the hadoop user: sh flink.sh start

VIII. Web UI

http://192.168.101.191:8081/

2. Deploying a Single-Node Flink Standalone Cluster with Docker

1. Under /usr/local/flink1.18, create the directories: logs, checkpoints, jobs, and conf (all four are mounted by the compose file below)
2. Pull the image: docker pull flink:1.18.0
3. Install docker-compose:
yum install epel-release
yum install docker-compose
4. In any directory, run: vim docker-compose.yml

version: "3"
services:
  jobmanager:
    image: flink:1.18.0
    container_name: flink-jobmanager
    hostname: jobmanager
    ports:
      - "8081:8081" # Flink Dashboard
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - /usr/local/flink1.18/logs:/opt/flink/log
      - /usr/local/flink1.18/checkpoints:/opt/flink/checkpoints
      - /usr/local/flink1.18/jobs:/opt/flink/jobs
      - /usr/local/flink1.18/conf:/opt/flink/conf

  taskmanager:
    image: flink:1.18.0
    container_name: flink-taskmanager
    depends_on:
      - jobmanager
    command: taskmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
      - TASK_MANAGER_NUMBER_OF_TASK_SLOTS=4
    volumes:
      - /usr/local/flink1.18/logs:/opt/flink/log
      - /usr/local/flink1.18/checkpoints:/opt/flink/checkpoints
      - /usr/local/flink1.18/jobs:/opt/flink/jobs
      - /usr/local/flink1.18/conf:/opt/flink/conf

5. Copy a flink-conf.yaml into /usr/local/flink1.18/conf; it looks roughly like this:

env.java.opts.all: --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
jobmanager.rpc.address: jobmanager
jobmanager.rpc.port: 6123
jobmanager.bind-host: 0.0.0.0
jobmanager.memory.process.size: 2048m
taskmanager.bind-host: 0.0.0.0
taskmanager.host: hadoop193
taskmanager.memory.process.size: 4096m
taskmanager.numberOfTaskSlots: 4
parallelism.default: 2
jobmanager.execution.failover-strategy: region
rest.address: localhost
rest.bind-address: 0.0.0.0
blob.server.port: 6124
query.server.port: 6125

6. Run: docker-compose up -d

7. Check the containers with: docker ps

8. Browse to http://192.168.101.193:8081/
