Data - Hadoop Pseudo-Distributed Configuration - Hadoop 2.8.0 on Ubuntu 16.04
Published: 2019-06-21


System Version

anliven@Ubuntu1604:~$ uname -a
Linux Ubuntu1604 4.8.0-36-generic #36~16.04.1-Ubuntu SMP Sun Feb 5 09:39:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
anliven@Ubuntu1604:~$ cat /proc/version
Linux version 4.8.0-36-generic (buildd@lgw01-18) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #36~16.04.1-Ubuntu SMP Sun Feb 5 09:39:57 UTC 2017
anliven@Ubuntu1604:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.2 LTS
Release:        16.04
Codename:       xenial

Create the hadoop User

anliven@Ubuntu1604:~$ sudo useradd -m hadoop -s /bin/bash
anliven@Ubuntu1604:~$ sudo passwd hadoop
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
anliven@Ubuntu1604:~$ sudo adduser hadoop sudo
Adding user "hadoop" to group "sudo" ...
Adding user hadoop to group sudo
Done.

Update apt and Install vim

hadoop@Ubuntu1604:~$ sudo apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu xenial InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu xenial-security InRelease
Reading package lists... Done
hadoop@Ubuntu1604:~$ sudo apt-get install vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
vim is already the newest version (2:7.4.1689-3ubuntu1.2).
0 upgraded, 0 newly installed, 0 to remove and 50 not upgraded.

Configure Passwordless SSH Login

hadoop@Ubuntu1604:~$ sudo apt-get install openssh-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
openssh-server is already the newest version (1:7.2p2-4ubuntu2.1).
0 upgraded, 0 newly installed, 0 to remove and 50 not upgraded.
hadoop@Ubuntu1604:~$ cd ~
hadoop@Ubuntu1604:~$ mkdir .ssh
hadoop@Ubuntu1604:~$ cd .ssh
hadoop@Ubuntu1604:~/.ssh$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:DzjVWgTQB5I1JGRBmWi6gVHJ03V4WnJZEdojtbou0DM hadoop@Ubuntu1604
The key's randomart image is:
+---[RSA 2048]----+
| o.o =X@B=*o     |
|. + +.*+*B..     |
| o +   *+.*      |
|. o   .o = .     |
|   o .o S        |
|  . . E. +       |
|     . o. .      |
|      ..         |
|       ..        |
+----[SHA256]-----+
hadoop@Ubuntu1604:~/.ssh$ cat id_rsa.pub >> authorized_keys
hadoop@Ubuntu1604:~/.ssh$ ls -l
total 12
-rw-rw-r-- 1 hadoop hadoop  399 Apr 27 07:33 authorized_keys
-rw------- 1 hadoop hadoop 1679 Apr 27 07:32 id_rsa
-rw-r--r-- 1 hadoop hadoop  399 Apr 27 07:32 id_rsa.pub
hadoop@Ubuntu1604:~/.ssh$ cd
hadoop@Ubuntu1604:~$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:fZ7fAvnnFk0/Imkn0YPdc2Gzxnfr0IJGSRb1swbm7oU.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.8.0-36-generic x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
44 packages can be updated.
0 security updates.
*** System restart required ***
Last login: Thu Apr 27 07:25:26 2017 from 192.168.16.1
hadoop@Ubuntu1604:~$ exit
logout
Connection to localhost closed.
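The interactive `ssh localhost` above also works non-interactively; a minimal sketch for verifying the key-based login from a script (the `ssh_ok` helper name is ours, not from the original):

```shell
# Hypothetical helper: succeeds only if key-based (passwordless) login to
# the given host works. BatchMode=yes makes ssh fail instead of prompting
# for a password, so this is safe to call from scripts.
ssh_ok() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null
}
```

Usage: `ssh_ok localhost && echo "passwordless ssh OK"`.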

Install Java

hadoop@Ubuntu1604:~$ dpkg -l | grep jdk
hadoop@Ubuntu1604:~$ sudo apt-get install openjdk-8-jre openjdk-8-jdk
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
..................
done.
Processing triggers for libc-bin (2.23-0ubuntu7) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
done.
hadoop@Ubuntu1604:~$ dpkg -l | grep jdk
ii  openjdk-8-jdk:amd64             8u121-b13-0ubuntu1.16.04.2  amd64  OpenJDK Development Kit (JDK)
ii  openjdk-8-jdk-headless:amd64    8u121-b13-0ubuntu1.16.04.2  amd64  OpenJDK Development Kit (JDK) (headless)
ii  openjdk-8-jre:amd64             8u121-b13-0ubuntu1.16.04.2  amd64  OpenJDK Java runtime, using Hotspot JIT
ii  openjdk-8-jre-headless:amd64    8u121-b13-0ubuntu1.16.04.2  amd64  OpenJDK Java runtime, using Hotspot JIT (headless)
hadoop@Ubuntu1604:~$ dpkg -L openjdk-8-jdk | grep '/bin$'
/usr/lib/jvm/java-8-openjdk-amd64/bin
hadoop@Ubuntu1604:~$ vim ~/.bashrc
hadoop@Ubuntu1604:~$ head ~/.bashrc | grep java
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
hadoop@Ubuntu1604:~$ source ~/.bashrc
hadoop@Ubuntu1604:~$ echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk-amd64
hadoop@Ubuntu1604:~$ java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
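Instead of hard-coding the JDK path in `.bashrc`, JAVA_HOME can be derived from the installed `javac`; a sketch under the assumption of a Debian/Ubuntu layout (the `java_home_from` helper name is ours):

```shell
# Hypothetical helper: given the path to a JDK's javac binary, print the
# corresponding JAVA_HOME (two directories up from .../bin/javac).
# readlink -f resolves the /etc/alternatives symlink chain that
# Debian/Ubuntu set up for java tools.
java_home_from() {
    dirname "$(dirname "$(readlink -f "$1")")"
}
```

Usage: `export JAVA_HOME="$(java_home_from "$(command -v javac)")"`.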

Install Hadoop

hadoop@Ubuntu1604:~$ sudo tar -zxf ~/hadoop-2.8.0.tar.gz -C /usr/local
[sudo] password for hadoop:
hadoop@Ubuntu1604:~$ cd /usr/local
hadoop@Ubuntu1604:/usr/local$ sudo mv ./hadoop-2.8.0/ ./hadoop
hadoop@Ubuntu1604:/usr/local$ sudo chown -R hadoop ./hadoop
hadoop@Ubuntu1604:/usr/local$ ls -l | grep hadoop
drwxr-xr-x 9 hadoop dialout 4096 Mar 17 13:31 hadoop
hadoop@Ubuntu1604:/usr/local$ cd ./hadoop
hadoop@Ubuntu1604:/usr/local/hadoop$ ls -l
total 148
drwxr-xr-x 2 hadoop dialout  4096 Mar 17 13:31 bin
drwxr-xr-x 3 hadoop dialout  4096 Mar 17 13:31 etc
drwxr-xr-x 2 hadoop dialout  4096 Mar 17 13:31 include
drwxr-xr-x 3 hadoop dialout  4096 Mar 17 13:31 lib
drwxr-xr-x 2 hadoop dialout  4096 Mar 17 13:31 libexec
-rw-r--r-- 1 hadoop dialout 99253 Mar 17 13:31 LICENSE.txt
-rw-r--r-- 1 hadoop dialout 15915 Mar 17 13:31 NOTICE.txt
-rw-r--r-- 1 hadoop dialout  1366 Mar 17 13:31 README.txt
drwxr-xr-x 2 hadoop dialout  4096 Mar 17 13:31 sbin
drwxr-xr-x 4 hadoop dialout  4096 Mar 17 13:31 share
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hadoop version
Hadoop 2.8.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 91f2b7a13d1e97be65db92ddabc627cc29ac0009
Compiled by jdu on 2017-03-17T04:12Z
Compiled with protoc 2.5.0
From source with checksum 60125541c2b3e266cbf3becc5bda666
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.0.jar

Hadoop Pseudo-Distributed Configuration

Hadoop can run in pseudo-distributed mode on a single node, reading files from HDFS. The node acts as both NameNode and DataNode.

In a pseudo-distributed setup, removing the configuration properties from core-site.xml switches Hadoop back from pseudo-distributed mode to standalone (non-distributed) mode.

Edit the Configuration Files

hadoop@Ubuntu1604:~$ cd /usr/local/hadoop/etc/hadoop
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ vim core-site.xml
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ cat core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ vim hdfs-site.xml
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ cat hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
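The same files can be written non-interactively instead of via vim; a sketch that writes core-site.xml with a heredoc (the `HADOOP_CONF` variable is ours: it defaults to a scratch directory for safe experimenting, so point it at /usr/local/hadoop/etc/hadoop to write the real config):

```shell
# Write core-site.xml without an editor. The property values match the
# listing above; HADOOP_CONF defaults to a scratch dir, not the live one.
HADOOP_CONF="${HADOOP_CONF:-/tmp/hadoop-conf-demo}"
mkdir -p "$HADOOP_CONF"
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF
```

The quoted `'EOF'` delimiter prevents the shell from expanding anything inside the heredoc, so the XML lands on disk verbatim.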

Format the NameNode

hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs namenode -format
17/04/27 23:39:01 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = Ubuntu1604/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.8.0
..................
17/04/27 23:39:02 INFO namenode.FSImage: Allocated new BlockPoolId: BP-806199003-127.0.1.1-1493307542086
17/04/27 23:39:02 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
17/04/27 23:39:02 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/04/27 23:39:02 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/04/27 23:39:02 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/04/27 23:39:02 INFO util.ExitUtil: Exiting with status 0
17/04/27 23:39:02 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Ubuntu1604/127.0.1.1
************************************************************/
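Formatting should be done exactly once: re-running `-format` assigns a new clusterID and can leave previously-registered DataNodes unable to join. A guard sketch (the `format_if_needed` function is ours; `hdfs namenode -format` is the real command from the transcript above):

```shell
# Hypothetical guard: format the NameNode only if its storage directory
# has not been formatted yet. A formatted name dir contains a "current"
# subdirectory (visible in the fsimage path in the log above).
format_if_needed() {
    local namedir="$1"
    if [ -d "$namedir/current" ]; then
        echo "already formatted, skipping"
    else
        /usr/local/hadoop/bin/hdfs namenode -format -nonInteractive
    fi
}
```

Usage: `format_if_needed /usr/local/hadoop/tmp/dfs/name`.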

Start the NameNode and DataNode Daemons

hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-Ubuntu1604.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-Ubuntu1604.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:fZ7fAvnnFk0/Imkn0YPdc2Gzxnfr0IJGSRb1swbm7oU.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-Ubuntu1604.out
hadoop@Ubuntu1604:/usr/local/hadoop$ jps
1908 Jps
1576 DataNode
1467 NameNode
1791 SecondaryNameNode
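The `jps` check above can be scripted so a startup script fails fast when a daemon is missing; a sketch (the `check_daemons` helper is ours; the process-lister is a parameter so the function is easy to test without a JVM):

```shell
# Hypothetical helper: verify that every required daemon appears in
# jps-style output ("<pid> <Name>" per line). $1 is the command or
# function that produces that output, e.g. "jps"; the rest are names.
check_daemons() {
    local lister="$1"; shift
    local out
    out="$("$lister")"
    for d in "$@"; do
        if ! printf '%s\n' "$out" | grep -q "^[0-9][0-9]* $d\$"; then
            echo "missing: $d"
            return 1
        fi
    done
    echo "all daemons up"
}
```

Usage: `check_daemons jps NameNode DataNode SecondaryNameNode`.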

Access the Web UI

hadoop@Ubuntu1604:/usr/local/hadoop$ ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:02:49:c1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.100/24 brd 192.168.16.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe02:49c1/64 scope link
       valid_lft forever preferred_lft forever

Visit the web UI at http://192.168.16.100:50070 to view NameNode/DataNode information and browse the files in HDFS.

(Screenshot: NameNode web UI at http://192.168.16.100:50070)

Run a Pseudo-Distributed Example

hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -mkdir input
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -put ./etc/hadoop/*.xml input
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2017-04-29 07:42 input
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -ls input
Found 8 items
-rw-r--r--   1 hadoop supergroup       4942 2017-04-29 07:42 input/capacity-scheduler.xml
-rw-r--r--   1 hadoop supergroup       1111 2017-04-29 07:42 input/core-site.xml
-rw-r--r--   1 hadoop supergroup       9683 2017-04-29 07:42 input/hadoop-policy.xml
-rw-r--r--   1 hadoop supergroup       1181 2017-04-29 07:42 input/hdfs-site.xml
-rw-r--r--   1 hadoop supergroup        620 2017-04-29 07:42 input/httpfs-site.xml
-rw-r--r--   1 hadoop supergroup       3518 2017-04-29 07:42 input/kms-acls.xml
-rw-r--r--   1 hadoop supergroup       5546 2017-04-29 07:42 input/kms-site.xml
-rw-r--r--   1 hadoop supergroup        690 2017-04-29 07:42 input/yarn-site.xml
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'
17/04/29 07:43:54 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
..................
17/04/29 07:43:58 INFO mapreduce.Job:  map 100% reduce 100%
17/04/29 07:43:58 INFO mapreduce.Job: Job job_local329465708_0002 completed successfully
17/04/29 07:43:58 INFO mapreduce.Job: Counters: 35
    File System Counters
        FILE: Number of bytes read=1222362
        FILE: Number of bytes written=2503241
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=55020
        HDFS: Number of bytes written=515
        HDFS: Number of read operations=67
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=16
    Map-Reduce Framework
        Map input records=4
        Map output records=4
        Map output bytes=101
        Map output materialized bytes=115
        Input split bytes=132
        Combine input records=0
        Combine output records=0
        Reduce input groups=1
        Reduce shuffle bytes=115
        Reduce input records=4
        Reduce output records=4
        Spilled Records=8
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        Total committed heap usage (bytes)=1054867456
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=219
    File Output Format Counters
        Bytes Written=77
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2017-04-29 07:42 input
drwxr-xr-x   - hadoop supergroup          0 2017-04-29 07:43 output
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -ls output
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2017-04-29 07:43 output/_SUCCESS
-rw-r--r--   1 hadoop supergroup         77 2017-04-29 07:43 output/part-r-00000
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -cat output/*
1   dfsadmin
1   dfs.replication
1   dfs.namenode.name.dir
1   dfs.datanode.data.dir
hadoop@Ubuntu1604:/usr/local/hadoop$ ls -l ./output
ls: cannot access './output': No such file or directory
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -get output ./output
hadoop@Ubuntu1604:/usr/local/hadoop$ cat ./output/*
1   dfsadmin
1   dfs.replication
1   dfs.namenode.name.dir
1   dfs.datanode.data.dir
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -rm -r output
Deleted output
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2017-04-29 07:42 input
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'
17/04/29 07:48:40 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
..................
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2017-04-29 07:42 input
drwxr-xr-x   - hadoop supergroup          0 2017-04-29 07:48 output
hadoop@Ubuntu1604:/usr/local/hadoop$ ./bin/hdfs dfs -cat output/*
1   dfsadmin
1   dfs.replication
1   dfs.namenode.name.dir
1   dfs.datanode.data.dir
hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
hadoop@Ubuntu1604:/usr/local/hadoop$ jps
3807 Jps

Note: when running a Hadoop job, the output directory must not already exist, or the job fails. Before re-running, the output folder must be deleted first: ./bin/hdfs dfs -rm -r output
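The delete-then-run pattern can be wrapped so it is never forgotten; a sketch (the `run_fresh` name and the `HDFS_BIN` variable are ours, introduced so the HDFS client path is swappable):

```shell
# Hypothetical wrapper: remove a previous HDFS output directory (if any),
# then run the given job command. Hadoop refuses to overwrite an existing
# output dir, so stale output must go before a re-run. -f suppresses the
# error when the directory does not exist yet.
HDFS_BIN="${HDFS_BIN:-/usr/local/hadoop/bin/hdfs}"
run_fresh() {
    local out="$1"; shift
    "$HDFS_BIN" dfs -rm -r -f "$out" >/dev/null 2>&1 || true
    "$@"
}
```

Usage: `run_fresh output ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'`.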

YARN

Edit mapred-site.xml and yarn-site.xml

hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ pwd
/usr/local/hadoop/etc/hadoop
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ mv mapred-site.xml.template mapred-site.xml
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ vim mapred-site.xml
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ cat mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ vim yarn-site.xml
hadoop@Ubuntu1604:/usr/local/hadoop/etc/hadoop$ cat yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

If you do not want to start YARN, be sure to rename mapred-site.xml back to its original name, mapred-site.xml.template; otherwise jobs will very likely fail.
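The rename in both directions can be scripted; a small sketch of the toggle described above (the function names are ours):

```shell
# Hypothetical helpers: toggle YARN by renaming mapred-site.xml, as the
# note above describes. $1 is the Hadoop configuration directory.
disable_yarn() {
    [ -f "$1/mapred-site.xml" ] && mv "$1/mapred-site.xml" "$1/mapred-site.xml.template"
}
enable_yarn() {
    [ -f "$1/mapred-site.xml.template" ] && mv "$1/mapred-site.xml.template" "$1/mapred-site.xml"
}
```

Usage: `disable_yarn /usr/local/hadoop/etc/hadoop` before running jobs without YARN, then `enable_yarn` to switch back.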

Start YARN

hadoop@Ubuntu1604:/usr/local/hadoop$ pwd
/usr/local/hadoop
hadoop@Ubuntu1604:/usr/local/hadoop$ jps
5774 Jps
hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-Ubuntu1604.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-Ubuntu1604.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-Ubuntu1604.out
hadoop@Ubuntu1604:/usr/local/hadoop$ jps
6034 DataNode
6373 Jps
5915 NameNode
6221 SecondaryNameNode
hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-Ubuntu1604.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-Ubuntu1604.out
hadoop@Ubuntu1604:/usr/local/hadoop$ jps
6034 DataNode
6644 Jps
6422 ResourceManager
6536 NodeManager
5915 NameNode
6221 SecondaryNameNode
hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/hadoop/logs/mapred-hadoop-historyserver-Ubuntu1604.out
hadoop@Ubuntu1604:/usr/local/hadoop$ jps
6816 JobHistoryServer
6034 DataNode
6917 Jps
6422 ResourceManager
6536 NodeManager
5915 NameNode
6221 SecondaryNameNode

Access the Web UI

Once YARN is running, job status can be viewed in the web UI: http://192.168.16.100:8088/cluster

(Screenshot: YARN cluster web UI at http://192.168.16.100:8088/cluster)

Stop YARN and Hadoop

hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/mr-jobhistory-daemon.sh stop historyserver
hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/stop-yarn.sh
hadoop@Ubuntu1604:/usr/local/hadoop$ ./sbin/stop-dfs.sh

Reposted from: https://www.cnblogs.com/anliven/p/6800034.html
