Starting our multi-node cluster

MapReduce Daemons

Hadoop provides scripts that start and stop all of the MapReduce daemons for us: start-mapred.sh launches the JobTracker on the master and a TaskTracker on every node listed in the slaves file, and stop-mapred.sh shuts them down again. To start MapReduce, we run the following script on our master:

-bash-4.2$ start-mapred.sh
starting jobtracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-jobtracker-peter.test.net.out
chris: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-tasktracker-chris.test.net.out
lois: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-tasktracker-lois.test.net.out
meg: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-tasktracker-meg.test.net.out
peter: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-tasktracker-peter.test.net.out
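
Once the script finishes, it is worth confirming that the daemons actually came up. A minimal sanity check, assuming the JDK's jps tool is on the PATH for the hduser account and that passwordless SSH to the slaves is already configured (the process IDs below are purely illustrative):

-bash-4.2$ jps
12034 JobTracker
12211 TaskTracker
12480 Jps
-bash-4.2$ ssh chris jps
8790 TaskTracker
8953 Jps

Any HDFS daemons already running will appear in the listing as well; the lines to look for here are JobTracker on the master and TaskTracker on every node. If a TaskTracker is missing on a node, the .out and .log files under /usr/local/hadoop-1.0.3/logs on that node (the paths shown in the startup output above) are the first place to look.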