Running Hadoop on GPFS
What other options does Hadoop derive from the fs.default.name setting?
I'm trying to get Hadoop running on GPFS instead of HDFS. I have configured Hadoop to use the libgpfs.so, libgpfshadoop.so, and hadoop-1.1.1-gpfs.jar libraries provided by IBM. I'm running into trouble with my core-site.xml config (I suspect) when starting the namenode. SSH is working and configured correctly.
Launching the namenode with:
sbin/hadoop-daemon.sh --config $config_dir --script hdfs start namenode
results in:
2014-12-05 14:55:50,819 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is gpfs:///
2014-12-05 14:55:50,941 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-12-05 14:55:51,063 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): gpfs:/// has no authority.
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:423)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:413)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:464)
My core-site.xml config:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>gpfs:///</value>
  </property>
  <property>
    <name>fs.gpfs.impl</name>
    <value>org.apache.hadoop.fs.gpfs.globalparallelfilesystem</value>
  </property>
  <property>
    <name>gpfs.mount.dir</name>
    <value>/mnt/gpfs</value>
  </property>
</configuration>
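The exception says the URI "has no authority", i.e. the host[:port] part between the scheme and the path is empty, and the NameNode tries to use exactly that part as its RPC address. A minimal sketch of the same config with an authority filled in, assuming a hypothetical hostname gpfs-host and port 9000 (placeholders, not values from IBM's documentation):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- gpfs://host:port/ instead of gpfs:///  -->
    <!-- gpfs-host:9000 is a placeholder authority; the NameNode parses -->
    <!-- this host:port as its RPC address, which is why gpfs:/// fails -->
    <value>gpfs://gpfs-host:9000/</value>
  </property>
</configuration>
```

Whether a real address is actually needed, or any syntactically valid authority will do, depends on whether the GPFS connector still relies on the namenode at all.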
I think Hadoop is expecting fs.default.name to contain an IP and port, which it uses to propagate other config options; since I'm using GPFS, I shouldn't need to provide those.
A thought... do I even need to run a namenode if I'm using GPFS? Can I just run the Hadoop jobtracker?