NameNode: java.net.BindException

Failed to start the namenode: java.net.BindException: Address already in use

The issue was resolved by changing the port in core-site.xml from 9000 to 9001 as shown below.

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9001</value>
</property>
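(As an aside, on Hadoop 2.x the fs.default.name key is deprecated in favor of fs.defaultFS; the equivalent entry would be:)

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9001</value>
</property>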

I formatted the namenode and restarted Hadoop.
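On a standard Hadoop 2.x install, that looks roughly like this (a sketch; script locations vary by setup, and formatting destroys existing HDFS metadata):

hdfs namenode -format    # re-initialize the namenode metadata
start-dfs.sh             # starts NameNode, DataNode, SecondaryNameNode
start-yarn.sh            # starts ResourceManager and NodeManager

The following services then showed up as expected.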

joe@joe-virtual-machine:~$ jps
3387 ResourceManager
3935 Jps
2850 NameNode
3163 SecondaryNameNode
2981 DataNode
3517 NodeManager

joe@joe-virtual-machine:~$ netstat -ap | grep 9001
tcp 0 0 localhost:9001 *:* LISTEN
tcp 0 0 localhost:54460 localhost:9001 ESTABLISHED
tcp 0 0 localhost:9001 localhost:54460 ESTABLISHED
tcp 0 0 localhost:54598 localhost:9001 TIME_WAIT
joe@joe-virtual-machine:~$ netstat -ap | grep 9000
tcp 0 0 *:9000 *:* LISTEN
tcp6 0 0 [::]:9000 [::]:* LISTEN

However, I would still like to understand why we couldn't use port 9000 even though netstat showed no other process running on that port.
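(Note that the netstat output above still shows a listener on *:9000 even after the move to 9001, so something was bound there; without root, netstat's -p column stays blank for processes owned by other users. A hedged way to identify the owner:)

sudo netstat -tlnp | grep 9000    # -p only names other users' processes when run as root
sudo lsof -i :9000                # alternative: list the processes holding port 9000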

NameNode: java.net.BindException

Based on your comment, your problem is most probably related to the hosts file.

First, you should uncomment the 127.0.0.1 localhost entry; this is a fundamental entry.
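A minimal /etc/hosts along those lines (the second line is illustrative; use your machine's real name and address):

127.0.0.1      localhost
192.168.1.10   hadoop-master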

Second, have you set up Hadoop and HBase to run with externally accessible services? I'm not too up on HBase, but for Hadoop the services need to be bound to non-localhost addresses for external access, so the masters and slaves files in $HADOOP_HOME/conf need to name the actual machine names (or IP addresses if you don't have a DNS server). None of your configuration files should mention localhost; they should use either host names or IP addresses.
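For example, a core-site.xml along those lines would point at the real host name rather than localhost (hadoop-master is an illustrative name; substitute your own):

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop-master:9000</value>
</property>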

hadoop namenode port in use

Found the issue. It came from a short history of this server: the IP address had changed, but the new address was appended to /etc/hosts rather than replacing the old one. I think this confused the Hadoop startup, as it was trying to open 50070 on a non-existent interface. The error being "port in use" made this a little confusing.
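To illustrate (addresses and host name made up): the resolver typically returns the first match from /etc/hosts, so with a file like the one below, Hadoop keeps resolving the host to the stale address and tries to bind 50070 on an interface that no longer exists.

10.0.0.5     myserver    # old address, never removed
10.0.0.23    myserver    # new address, appended later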

Unable to start secondarynamenode, datanode, nodemanager while starting hadoop

After much searching and many tries, I found the solution: kill whatever process holds the port, even without knowing its PID.

*******@127:~$ sudo fuser -k 50010/tcp
[sudo] password for *******:
50010/tcp: 1514
*******@127:~$ sudo kill -9 $(lsof -t -i:50010)
*******@127:~$ sudo fuser -k 50010/tcp
*******@127:~$ sudo kill -9 $(lsof -t -i:50090)
*******@127:~$ sudo fuser -k 50090/tcp
50090/tcp: 2110
*******@127:~$ sudo kill -9 $(lsof -t -i:50090)
*******@127:~$ sudo fuser -k 50090/tcp
*******@127:~$ sudo fuser -k 8040/tcp
8040/tcp: 2304
*******@127:~$ sudo kill -9 $(lsof -t -i:8040)
*******@127:~$ sudo fuser -k 8040/tcp
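For convenience, the same cleanup can be scripted (a sketch over the ports used above; note that fuser -k sends SIGKILL to whatever holds the port, so use it only when you are sure nothing else needs it):

for port in 50010 50090 8040; do
  sudo fuser -k ${port}/tcp    # kill the process bound to this TCP port
done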

Now I am able to start all Hadoop services.

/hadoopinstall/hadoop-2.6.0$ jps
6844 NodeManager
7150 Jps
6547 SecondaryNameNode
6202 NameNode
6702 ResourceManager
6358 DataNode

BindException in Hadoop on EC2

The mistake I made was to use the public IP of the instances instead of the Amazon-allocated local IP (10.X.X.X). Switching to the local IP fixed the issue.
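(For reference, the Amazon-allocated private address can be read from the EC2 instance metadata service:)

curl http://169.254.169.254/latest/meta-data/local-ipv4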


