I have specified the path both for the directory containing the configuration files and for the directory containing the .sh files.
I managed to make the configuration changes under this path:
[email protected]:/home/hduser/hadoop/hadoop-2.4.0/etc/hadoop# gedit core-site.xml
<configuration>
<!-- In: conf/core-site.xml -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
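The hadoop.tmp.dir set above (/home/hduser/tmp) must exist and be writable by the Hadoop user before the namenode can be formatted, otherwise both the format step and the namenode start-up fail. A minimal preparation sketch, assuming the conventional hduser account and hadoop group (adjust to your own user and group):

sudo mkdir -p /home/hduser/tmp               # base directory referenced by hadoop.tmp.dir
sudo chown hduser:hadoop /home/hduser/tmp    # hduser/hadoop are assumptions, not taken from the post
sudo chmod 750 /home/hduser/tmp              # keep it private to the Hadoop user

Note also that fs.default.name is deprecated in Hadoop 2.x in favour of fs.defaultFS; the old name still works but logs a deprecation warning.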
[email protected]:/home/hduser/hadoop/hadoop-2.4.0/etc/hadoop# gedit mapred-site.xml
<configuration>
<!-- In: conf/mapred-site.xml -->
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
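mapred.job.tracker is an MR1 (JobTracker) property; a Hadoop 2.x installation like this one normally runs MapReduce on YARN, which is selected with mapreduce.framework.name instead. A sketch of the corresponding mapred-site.xml entry (the JobTracker setting above is simply ignored when jobs run on YARN):

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Run MapReduce jobs on YARN rather than the local or classic (JobTracker) runtime.</description>
</property>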
[email protected]:/home/hduser/hadoop/hadoop-2.4.0/etc/hadoop# gedit hdfs-site.xml
<configuration>
<!-- In: conf/hdfs-site.xml -->
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
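With only hadoop.tmp.dir configured, the namenode and datanode keep their data under ${hadoop.tmp.dir}/dfs/name and ${hadoop.tmp.dir}/dfs/data by default. If you prefer explicit locations, they can be pinned in hdfs-site.xml; a sketch with hypothetical paths under the same home directory:

<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hduser/hdfs/namenode</value>
<description>Where the namenode stores fsimage and edit logs (hypothetical path).</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hduser/hdfs/datanode</value>
<description>Where the datanode stores HDFS blocks (hypothetical path).</description>
</property>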
My directory details:
[email protected]:~/hadoop/hadoop-2.4.0/etc$ cd ./hadoop
[email protected]:~/hadoop/hadoop-2.4.0/etc/hadoop$ ls
capacity-scheduler.xml hadoop-policy.xml mapred-queues.xml.template
configuration.xsl hdfs-site.xml mapred-site.xml
container-executor.cfg hdfs-site.xml~ mapred-site.xml~
core-site.xml httpfs-env.sh mapred-site.xml.template
core-site.xml~ httpfs-log4j.properties slaves
hadoop-env.cmd httpfs-signature.secret ssl-client.xml.example
hadoop-env.sh httpfs-site.xml ssl-server.xml.example
hadoop-env.sh~ log4j.properties yarn-env.cmd
hadoop-metrics2.properties mapred-env.cmd yarn-env.sh
hadoop-metrics.properties mapred-env.sh yarn-site.xml
[email protected]:~/hadoop/hadoop-2.4.0/etc/hadoop$ cd ..
[email protected]:~/hadoop/hadoop-2.4.0/etc$ ls
hadoop
[email protected]:~/hadoop/hadoop-2.4.0/etc$ cd ..
[email protected]:~/hadoop/hadoop-2.4.0$ ls
bin etc include lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share
[email protected]:~/hadoop/hadoop-2.4.0$ cd ./etc
[email protected]:~/hadoop/hadoop-2.4.0/etc$ ls
hadoop
[email protected]:~/hadoop/hadoop-2.4.0/etc$ cd ..
[email protected]:~/hadoop/hadoop-2.4.0$ cd ./sbin
[email protected]:~/hadoop/hadoop-2.4.0/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-all.sh
hadoop-daemon.sh         start-all.sh         stop-balancer.sh
hadoop-daemons.sh        start-balancer.sh    stop-dfs.cmd
hdfs-config.cmd          start-dfs.cmd        stop-dfs.sh
hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
mr-jobhistory-daemon.sh  start-yarn.cmd       stop-yarn.sh
refresh-namenodes.sh     start-yarn.sh        yarn-daemon.sh
slaves.sh                stop-all.cmd         yarn-daemons.sh
When I start the daemons:
[email protected]:~/hadoop/hadoop-2.4.0/sbin$ ls
distribute-exclude.sh start-all.cmd stop-all.sh
hadoop-daemon.sh start-all.sh stop-balancer.sh
hadoop-daemons.sh start-balancer.sh stop-dfs.cmd
hdfs-config.cmd start-dfs.cmd stop-dfs.sh
hdfs-config.sh start-dfs.sh stop-secure-dns.sh
httpfs.sh start-secure-dns.sh stop-yarn.cmd
mr-jobhistory-daemon.sh start-yarn.cmd stop-yarn.sh
refresh-namenodes.sh start-yarn.sh yarn-daemon.sh
slaves.sh stop-all.cmd yarn-daemons.sh
[email protected]:~/hadoop/hadoop-2.4.0/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/hadoop-hduser-namenode-ratan-Inspiron-N5110.out
localhost: starting datanode, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/hadoop-hduser-datanode-ratan-Inspiron-N5110.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/hadoop-hduser-secondarynamenode-ratan-Inspiron-N5110.out
starting yarn daemons
starting resourcemanager, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/yarn-hduser-resourcemanager-ratan-Inspiron-N5110.out
localhost: starting nodemanager, logging to /home/hduser/hadoop/hadoop-2.4.0/logs/yarn-hduser-nodemanager-ratan-Inspiron-N5110.out
[email protected]:~/hadoop/hadoop-2.4.0/sbin$ jps
1441 DataNode
1608 SecondaryNameNode
1912 NodeManager
2448 Jps
1775 ResourceManager
[email protected]:~/hadoop/hadoop-2.4.0/sbin$
The problem is that I cannot find the namenode to format. And when I start the daemons, the namenode is nowhere to be seen. Where am I going wrong?
Accepted answer:
Check this log file:
hadoop-hduser-namenode-ratan-Inspiron-N5110.log
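One way to inspect it from the installation directory (the filename follows the hadoop-<user>-namenode-<hostname>.log pattern visible in the start-up output; adjust to your own host):

cd /home/hduser/hadoop/hadoop-2.4.0
tail -n 100 logs/hadoop-hduser-namenode-ratan-Inspiron-N5110.log   # look for a "NameNode is not formatted" or similar error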
If it says that the namenode is not formatted, format it:
bin/hadoop namenode -format
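In Hadoop 2.x the same step is usually run through the hdfs launcher (bin/hadoop namenode -format still works but prints a deprecation notice). A sketch of the full sequence from the installation directory, after which NameNode should show up in jps:

cd /home/hduser/hadoop/hadoop-2.4.0
sbin/stop-dfs.sh && sbin/stop-yarn.sh     # stop the daemons that are already running
bin/hdfs namenode -format                 # formats the namenode directory under hadoop.tmp.dir (erases any existing HDFS data)
sbin/start-dfs.sh && sbin/start-yarn.sh   # restart HDFS and YARN
jps                                       # NameNode should now be listed alongside DataNode, SecondaryNameNode, etc.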