Starting and stopping our cluster

Hadoop provides some convenient scripts for starting and stopping everything:

-bash-4.2$ start-all.sh
starting namenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-namenode-peter.test.net.out
localhost: starting datanode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-datanode-peter.test.net.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-secondarynamenode-peter.test.net.out
starting jobtracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-jobtracker-peter.test.net.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-tasktracker-peter.test.net.out

-bash-4.2$ stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode

We don't use these commands on multi-node clusters, though.
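On a multi-node cluster you would normally bring up the HDFS and MapReduce layers separately, or start individual daemons on each host. As a minimal sketch (assuming the Hadoop 1.x scripts shipped in the same bin directory as start-all.sh, and that the DFS scripts are run on the NameNode host and the MapReduce scripts on the JobTracker host), it might look like this:

-bash-4.2$ start-dfs.sh       # starts namenode, datanodes, and secondarynamenode only
-bash-4.2$ start-mapred.sh    # starts jobtracker and tasktrackers only
-bash-4.2$ hadoop-daemon.sh start datanode     # start a single daemon on the local node
-bash-4.2$ hadoop-daemon.sh stop tasktracker   # stop a single daemon on the local node
-bash-4.2$ stop-mapred.sh     # stop the MapReduce layer
-bash-4.2$ stop-dfs.sh        # stop the HDFS layer

Splitting things up this way lets you restart one layer (or one daemon on one machine) without taking the whole cluster down, which is why start-all.sh and stop-all.sh are mostly useful on a single-node setup like the one shown above.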