Fully distributed mode (one master, two slaves; anything commented "Hadoop3.1" is a configuration required in Hadoop 3.1 on top of Hadoop 2):
hosts

192.0.96.11    Cat1
192.0.96.12    Cat2
192.0.96.13    Cat3

ssh (configure on all three nodes; run ssh-keygen -t rsa first)

ssh-copy-id -i .ssh/id_rsa.pub root@cat1
ssh-copy-id -i .ssh/id_rsa.pub root@cat2
ssh-copy-id -i .ssh/id_rsa.pub root@cat3
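
The three ssh-copy-id lines can be collapsed into one loop. A hedged sketch, echoed as a dry run so it can be reviewed first; remove the `echo` to actually distribute the keys:

```shell
# Dry run: print the key-copy command for each node (cat1..cat3 as above).
for h in cat1 cat2 cat3; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "root@$h"
done
```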

env:

# Jdk
JAVA_HOME=/usr/java/jdk/jdk8
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH

# Hadoop
HADOOP_HOME=/usr/java/hadoop/hadoop3
export HADOOP_HOME
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH
# Hadoop 3.1: required when the daemons are started as root
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
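
A quick sanity check after the env block is in place (assuming it is kept in /etc/profile or a similar login script): reload it and confirm the Hadoop sbin directory made it onto PATH. The exports are repeated here so the sketch is self-contained:

```shell
# Same values as the env block above; note $HADOOP_HOME/sbin, not $HADOOP/sbin.
export JAVA_HOME=/usr/java/jdk/jdk8
export HADOOP_HOME=/usr/java/hadoop/hadoop3
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Verify the sbin directory is on PATH (start-all.sh lives there).
case ":$PATH:" in
  *":$HADOOP_HOME/sbin:"*) echo "PATH ok" ;;
  *) echo "PATH missing $HADOOP_HOME/sbin" ;;
esac
```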

hadoop-env.sh

export JAVA_HOME=/usr/java/jdk/jdk8

hdfs-site.xml

<!-- Block replication factor; default is 3 -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<!-- Whether HDFS permission checking is enabled; default: true
     (dfs.permissions.enabled is the current key name; the old dfs.permissions still works as a deprecated alias) -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>

core-site.xml

<!-- NameNode address -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://Cat1:9000</value>
</property>    
<!-- Directory where HDFS stores its data; defaults to the Linux tmp directory -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/java/hadoop/hadoop3/tmp</value>
</property>

mapred-site.xml

<!-- Run MapReduce jobs on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<!--Hadoop3.1-->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<!--Hadoop3.1-->

yarn-site.xml

<!--ResourceManager的地址-->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>Cat1</value>
</property>        
<!-- Auxiliary service the NodeManager runs for MR shuffle -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

workers (named slaves in Hadoop 2)
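
The workers file lists the DataNode/NodeManager hosts, one per line; on the master it lives at $HADOOP_HOME/etc/hadoop/workers. A sketch that writes a local copy (so it is safe to run anywhere), with the two slave hosts from the setup above:

```shell
# Local copy of the workers file; on Cat1 write this to
# /usr/java/hadoop/hadoop3/etc/hadoop/workers instead.
cat > workers <<'EOF'
Cat2
Cat3
EOF
cat workers
```
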
Format the NameNode (common pitfall: the tmp, name, and data directories must not already exist when formatting)

    hdfs namenode -format

Success is indicated by the log line: "name has been successfully formatted."
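
A sketch of the pre-format cleanup implied by the pitfall above: the hadoop.tmp.dir from core-site.xml (and any name/data dirs under it) must be removed on every node before formatting, or a reformat leaves mismatched clusterIDs. A local stand-in directory is used here so the sketch runs anywhere; on the cluster the real path is /usr/java/hadoop/hadoop3/tmp:

```shell
# Stand-in for hadoop.tmp.dir; simulate leftovers from a previous format.
tmp_dir=./hadoop-tmp-demo
mkdir -p "$tmp_dir/dfs/name"

# Clear stale storage on every node, then format on the NameNode (Cat1) only.
rm -rf "$tmp_dir"
[ -d "$tmp_dir" ] || echo "clean, safe to run: hdfs namenode -format"
```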

Copy (keeps the clusterID consistent across nodes)

scp -r hadoop3/ root@cat2:/usr/java/hadoop
scp -r hadoop3/ root@cat3:/usr/java/hadoop
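
If you want to verify the clusterID really does match after the copy, it is recorded in each node's VERSION file. A hedged sketch of the check; a fake VERSION file is created here so it runs anywhere, while on a real node the file sits under .../tmp/dfs/name/current/VERSION (NameNode) or .../tmp/dfs/data/current/VERSION (DataNodes):

```shell
# Simulate a storage directory with a VERSION file (CID-demo-1234 is made up).
mkdir -p demo/current
echo 'clusterID=CID-demo-1234' > demo/current/VERSION

# On real nodes, run this grep against each VERSION file and compare the IDs.
grep '^clusterID=' demo/current/VERSION
```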

Running

start-all.sh
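
After start-all.sh, jps on the master (Cat1) should show NameNode, SecondaryNameNode, and ResourceManager, while the slaves should show DataNode and NodeManager. A sketch of the check; sample jps output is hard-coded here so the sketch runs anywhere, and on the cluster you would replace it with `jps_out=$(jps)`:

```shell
# Sample master-side jps output (PIDs are made up).
jps_out='1201 NameNode
1402 SecondaryNameNode
1603 ResourceManager
1700 Jps'

# Confirm each expected daemon appears as a whole word at end of line.
for d in NameNode SecondaryNameNode ResourceManager; do
  echo "$jps_out" | grep -q " $d$" && echo "$d is up"
done
```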

Check the running status

    # hdfs dfsadmin -report
    Configured Capacity: 13283360768 (12.37 GB)
    Present Capacity: 7522836480 (7.01 GB)
    DFS Remaining: 7522820096 (7.01 GB)
    DFS Used: 16384 (16 KB)
    DFS Used%: 0.00%
    Replicated Blocks:
            Under replicated blocks: 0
            Blocks with corrupt replicas: 0
            Missing blocks: 0
            Missing blocks (with replication factor 1): 0
            Pending deletion blocks: 0
    Erasure Coded Block Groups: 
            Low redundancy block groups: 0
            Block groups with corrupt internal blocks: 0
            Missing block groups: 0
            Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.0.96.12:9866 (Cat2)
Hostname: Cat2
Decommission Status : Normal
Configured Capacity: 6641680384 (6.19 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 3138756608 (2.92 GB)
DFS Remaining: 3502915584 (3.26 GB)
DFS Used%: 0.00%
DFS Remaining%: 52.74%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun May 27 04:29:04 EDT 2018
Last Block Report: Sun May 27 04:07:13 EDT 2018
Num of Blocks: 0

Name: 192.0.96.13:9866 (Cat3)
Hostname: Cat3
Decommission Status : Normal
Configured Capacity: 6641680384 (6.19 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 2621767680 (2.44 GB)
DFS Remaining: 4019904512 (3.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 60.53%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun May 27 04:29:04 EDT 2018
Last Block Report: Sun May 27 04:07:13 EDT 2018
Num of Blocks: 0

Web Interfaces (Hadoop 3.1)

NameNode                       http://nn_host:port/     Default HTTP port is 9870.
ResourceManager                http://rm_host:port/     Default HTTP port is 8088.
MapReduce JobHistory Server    http://jhs_host:port/    Default HTTP port is 19888.

Web Interfaces (Hadoop 2.9.1)

NameNode                       http://nn_host:port/     Default HTTP port is 50070.
ResourceManager                http://rm_host:port/     Default HTTP port is 8088.
MapReduce JobHistory Server    http://jhs_host:port/    Default HTTP port is 19888.