IDEA connecting to Hadoop HDFS on Alibaba Cloud fails with "Connection timed out: no further information"
Tags: IntelliJ IDEA, Alibaba Cloud, Hadoop, HDFS
Relevant HDFS default port settings (from hdfs-default.xml):
| name | value | description |
|---|---|---|
| dfs.namenode.secondary.http-address | 0.0.0.0:9868 | The secondary namenode http server address and port. |
| dfs.namenode.secondary.https-address | 0.0.0.0:9869 | The secondary namenode HTTPS server address and port. |
| dfs.datanode.address | 0.0.0.0:9866 | The datanode server address and port for data transfer. |
| dfs.datanode.http.address | 0.0.0.0:9864 | The datanode http server address and port. |
| dfs.datanode.ipc.address | 0.0.0.0:9867 | The datanode ipc server address and port. |
| dfs.namenode.https-address | 0.0.0.0:9871 | The namenode secure http server address and port. |
| dfs.datanode.https.address | 0.0.0.0:9865 | The datanode secure http server address and port. |
| dfs.namenode.http-address | 0.0.0.0:9870 | The address and the base port where the dfs namenode web ui will listen on. |
| dfs.namenode.rpc-address | | RPC address that handles all clients requests. In the case of HA/Federation where multiple namenodes exist, the name service id is added to the name, e.g. dfs.namenode.rpc-address.ns1, dfs.namenode.rpc-address.EXAMPLENAMESERVICE. The value of this property will take the form of nn-host1:rpc-port. The NameNode's default RPC port is 8020. |
The error is as follows:
```
[INFO] [2020-05-12 20:19:36][org.apache.hadoop.hdfs.DataStreamer]Exception in createBlockOutputStream blk_1073741867_1043
java.net.ConnectException: Connection timed out: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_311]
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715) ~[?:1.8.0_311]
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[hadoop-common-3.1.3.jar:?]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[hadoop-common-3.1.3.jar:?]
	at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:253) ~[hadoop-hdfs-client-3.1.3.jar:?]
	at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1725) [hadoop-hdfs-client-3.1.3.jar:?]
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679) [hadoop-hdfs-client-3.1.3.jar:?]
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716) [hadoop-hdfs-client-3.1.3.jar:?]
```
Cause:
The Alibaba Cloud security group does not open the HDFS DataNode data-transfer port. HDFS reads and writes go directly between the client and the DataNodes: the NameNode only hands the client a list of DataNode addresses (by default their internal IPs), and the client then opens a data-transfer pipeline to them itself. Since IDEA is accessing the server from the public network and the data-transfer port is blocked, the client cannot reach any DataNode when reading or writing, so the pipeline cannot be established and the connect attempt times out.
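Before touching any Hadoop configuration, the symptom can be confirmed with a plain TCP probe against the DataNode port; a connect that cannot complete is exactly what DataStreamer runs into. A minimal sketch (the host below is a placeholder: 192.0.2.1 is a reserved TEST-NET address used only so the example runs anywhere; substitute your server's public IP):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DatanodePortProbe {
    // Attempt a plain TCP connect to host:port within timeoutMs.
    // false means timeout, refusal, or no route -- the same conditions
    // that make DataStreamer throw ConnectException.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder address; with the security group closed, probing your
        // server's public IP on 9866 returns false the same way.
        System.out.println(reachable("192.0.2.1", 9866, 1000));
    }
}
```

If the probe succeeds after the security group rule is added, the network path is fixed and only the hostname-resolution setting below remains.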
Fix:
- Check dfs.datanode.address: the DataNode data-transfer port is 9866. Open this port in the Alibaba Cloud security group.
- Add the following to the client-side hdfs-site.xml, so the client connects to DataNodes by hostname rather than by the internal IP the NameNode reports:
```xml
<property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
</property>
```
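If you would rather not ship an hdfs-site.xml with the client, the same switch can be set on the Hadoop `Configuration` object before opening the FileSystem. A sketch, assuming Hadoop 3.x client jars on the classpath; the NameNode URI, user name, and file paths are placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent to the hdfs-site.xml property above: resolve DataNodes
        // by hostname so the public address is used, not the internal IP
        // returned by the NameNode.
        conf.set("dfs.client.use.datanode.hostname", "true");

        // Placeholder NameNode address: use your server's public hostname
        // and the dfs.namenode.rpc-address port (8020 by default).
        try (FileSystem fs = FileSystem.get(
                URI.create("hdfs://your-server:8020"), conf, "hadoop")) {
            fs.copyFromLocalFile(new Path("data.txt"), new Path("/data.txt"));
        }
    }
}
```

Note that the server's hostname must also resolve on the client machine (e.g. via an entry in the local hosts file); otherwise the hostname-based connection fails the same way.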