Reference Guide MapR Administrator Training April 2012
Version 4.0.1
1. Reference Guide
  1.1 Documentation for Previous Releases
  1.2 Release Notes
  1.3 MapR Control System
    1.3.1 Cluster Views
    1.3.2 MapR-FS Views
    1.3.3 NFS HA Views
    1.3.4 Alarms Views
    1.3.5 System Settings Views
    1.3.6 Other Views
      1.3.6.1 CLDB View
      1.3.6.2 HBase View
        1.3.6.2.1 HBase Local Logs View
        1.3.6.2.2 HBase Log Level View
        1.3.6.2.3 HBase Thread Dump View
      1.3.6.3 JobTracker View
        1.3.6.3.1 JobTracker Configuration View
      1.3.6.4 Nagios View
    1.3.7 Node-Related Dialog Boxes
  1.4 Hadoop Commands
    1.4.1 hadoop archive
    1.4.2 hadoop classpath
    1.4.3 hadoop daemonlog
    1.4.4 hadoop distcp
    1.4.5 hadoop fs
    1.4.6 hadoop jar
    1.4.7 hadoop job
    1.4.8 hadoop jobtracker
    1.4.9 hadoop mfs
    1.4.10 hadoop mradmin
    1.4.11 hadoop pipes
    1.4.12 hadoop queue
    1.4.13 hadoop tasktracker
    1.4.14 hadoop version
    1.4.15 hadoop conf
    1.4.16 Hadoop 2 Commands
    1.4.17 Hadoop 1 Commands
  1.5 YARN commands
    1.5.1 yarn application
    1.5.2 yarn classpath
    1.5.3 yarn daemonlog
    1.5.4 yarn jar
    1.5.5 yarn logs
    1.5.6 yarn node
    1.5.7 yarn rmadmin
    1.5.8 yarn version
  1.6 API Reference
    1.6.1 acl
      1.6.1.1 acl edit
      1.6.1.2 acl set
      1.6.1.3 acl show
    1.6.2 alarm
      1.6.2.1 alarm clear
      1.6.2.2 alarm clearall
      1.6.2.3 alarm config load
      1.6.2.4 alarm config save
      1.6.2.5 alarm list
      1.6.2.6 alarm names
      1.6.2.7 alarm raise
    1.6.3 cluster
      1.6.3.1 cluster mapreduce get
      1.6.3.2 cluster mapreduce set
    1.6.4 config
      1.6.4.1 config load
      1.6.4.2 config save
    1.6.5 dashboard
      1.6.5.1 dashboard info
    1.6.6 dialhome
      1.6.6.1 dialhome ackdial
      1.6.6.2 dialhome enable
      1.6.6.3 dialhome lastdialed
      1.6.6.4 dialhome metrics
      1.6.6.5 dialhome status
    1.6.7 disk
      1.6.7.1 disk add
      1.6.7.2 disk list
      1.6.7.3 disk listall
      1.6.7.4 disk remove
    1.6.8 dump
      1.6.8.1 dump balancerinfo
      1.6.8.2 dump balancermetrics
      1.6.8.3 dump cldbnodes
      1.6.8.4 dump containerinfo
      1.6.8.5 dump replicationmanagerinfo
      1.6.8.6 dump replicationmanagerqueueinfo
      1.6.8.7 dump rereplicationinfo
      1.6.8.8 dump rolebalancerinfo
      1.6.8.9 dump rolebalancermetrics
      1.6.8.10 dump volumeinfo
      1.6.8.11 dump volumenodes
      1.6.8.12 dump zkinfo
    1.6.9 entity
      1.6.9.1 entity info
      1.6.9.2 entity list
      1.6.9.3 entity modify
    1.6.10 job
      1.6.10.1 job changepriority
      1.6.10.2 job kill
      1.6.10.3 job linklogs
      1.6.10.4 job table
    1.6.11 license
      1.6.11.1 license add
      1.6.11.2 license addcrl
      1.6.11.3 license apps
      1.6.11.4 license list
      1.6.11.5 license listcrl
      1.6.11.6 license remove
      1.6.11.7 license showid
    1.6.12 Metrics API
    1.6.13 nagios
      1.6.13.1 nagios generate
    1.6.14 nfsmgmt
      1.6.14.1 nfsmgmt refreshexports
    1.6.15 node
      1.6.15.1 add-to-cluster
      1.6.15.2 node allow-into-cluster
      1.6.15.3 node cldbmaster
      1.6.15.4 node heatmap
      1.6.15.5 node list
      1.6.15.6 node listcldbs
      1.6.15.7 node listcldbzks
      1.6.15.8 node listzookeepers
      1.6.15.9 node maintenance
      1.6.15.10 node metrics
      1.6.15.11 node move
      1.6.15.12 node remove
      1.6.15.13 node services
      1.6.15.14 node topo
    1.6.16 rlimit
      1.6.16.1 rlimit get
      1.6.16.2 rlimit set
    1.6.17 schedule
      1.6.17.1 schedule create
      1.6.17.2 schedule list
      1.6.17.3 schedule modify
      1.6.17.4 schedule remove
    1.6.18 service list
    1.6.19 setloglevel
      1.6.19.1 setloglevel cldb
      1.6.19.2 setloglevel fileserver
      1.6.19.3 setloglevel hbmaster
      1.6.19.4 setloglevel hbregionserver
      1.6.19.5 setloglevel jobtracker
      1.6.19.6 setloglevel nfs
      1.6.19.7 setloglevel tasktracker
    1.6.20 table
      1.6.20.1 table attr
        1.6.20.1.1 table attr edit
        1.6.20.1.2 table attr list
      1.6.20.2 table cf
        1.6.20.2.1 table cf create
        1.6.20.2.2 table cf delete
        1.6.20.2.3 table cf edit
        1.6.20.2.4 table cf list
      1.6.20.3 table create
      1.6.20.4 table delete
      1.6.20.5 table listrecent
      1.6.20.6 table region
        1.6.20.6.1 table region list
        1.6.20.6.2 table region merge
        1.6.20.6.3 table region split
    1.6.21 task
      1.6.21.1 task failattempt
      1.6.21.2 task killattempt
      1.6.21.3 task table
    1.6.22 trace
      1.6.22.1 trace dump
      1.6.22.2 trace info
      1.6.22.3 trace print
      1.6.22.4 trace reset
      1.6.22.5 trace resize
      1.6.22.6 trace setlevel
      1.6.22.7 trace setmode
    1.6.23 urls
    1.6.24 userconfig
      1.6.24.1 userconfig load
    1.6.25 virtualip
      1.6.25.1 virtualip add
      1.6.25.2 virtualip edit
      1.6.25.3 virtualip list
      1.6.25.4 virtualip move
      1.6.25.5 virtualip remove
    1.6.26 volume
      1.6.26.1 volume container move
      1.6.26.2 volume create
      1.6.26.3 volume dump create
      1.6.26.4 volume dump restore
      1.6.26.5 volume fixmountpath
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.6 volume info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.7 volume link create . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.8 volume link remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.9 volume list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.10 volume mirror push . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.11 volume mirror start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.12 volume mirror stop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.13 volume modify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.14 volume mount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.15 volume move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.16 volume remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.17 volume rename . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.18 volume showmounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.19 volume snapshot create . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.20 volume snapshot list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.21 volume snapshot preserve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.22 volume snapshot remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.26.23 volume unmount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.27 blacklist user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6.27.1 blacklist listusers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.7 Alarms Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.1 configure.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.2 disksetup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.3 fsck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.4 gfsck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.5 mapr-support-collect.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.6 mapr-support-dump.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
201 201 201 202 203 204 204 205 205 206 207 209 209 210 210 212 212 213 213 213 214 220 220 220 223 223 224 225 225 226 227 227 228 228 229 230 230 231 231 233 233 236 237 239 239 240 241 241 244 245 246 246 248 248 249 249 250 251 251 253 254 255 256 256 257 268 268 272 273 275 278 280
1.8.7 mrconfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.1 mrconfig dg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.1.1 mrconfig dg create . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.1.2 mrconfig dg help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.1.3 mrconfig dg list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2 mrconfig info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.1 mrconfig info containerchain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.2 mrconfig info containerlist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.3 mrconfig info containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.4 mrconfig info dumpcontainers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.5 mrconfig info fsstate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.6 mrconfig info fsthreads . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.7 mrconfig info orphanlist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.8 mrconfig info replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.9 mrconfig info slabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.10 mrconfig info threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.2.11 mrconfig info volume snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3 mrconfig sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.1 mrconfig sp help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.2 mrconfig sp list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.3 mrconfig sp load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.4 mrconfig sp make . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.5 mrconfig sp offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . 1.8.7.3.6 mrconfig sp online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.7 mrconfig sp refresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.8 mrconfig sp shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.3.9 mrconfig sp unload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.4 mrconfig disk help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.5 mrconfig disk init . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.6 mrconfig disk list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.7 mrconfig disk load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.7.8 mrconfig disk remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.8 pullcentralconfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8.9 rollingupgrade.sh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1.9 Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10 Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.1 .dfs_attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.2 cldb.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.3 daemon.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.4 disktab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.5 hadoop-metrics.properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.6 mapr-clusters.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.7 mfs.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.8 taskcontroller.cfg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.9 warden.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.10 exports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.11 zoo.cfg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.12 db.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.13 warden.
.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.14 mapr.login.conf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.15 core-site.xml (Hadoop 1.x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.16 mapred-default.xml (MapReduce v1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.17 mapred-site.xml (MapReduce v1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.18 yarn-site.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.19 yarn-default.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.20 mapred-site.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10.21 mapred-default.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.11 MapR Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.12 MapR Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . 1.13 Ports Used by MapR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.14 Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15 Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.16 Source Code for MapR Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
281 281 281 283 283 283 283 284 285 286 288 288 289 289 290 291 291 292 292 293 294 294 296 297 297 298 298 299 299 300 301 301 302 302 303 304 304 305 308 308 308 311 312 313 313 316 317 318 319 320 321 325 338 354 356 364 366 380 380 384 389 392 394
Reference Guide

The MapR Reference Guide contains in-depth reference information for MapR software. Choose a subtopic below for more detail.

Release Notes - Known issues and new features, by release
MapR Control System - User interface reference
API Reference - Information about the command-line interface and the REST API
Utilities - MapR tool and utility reference
Environment Variables - Environment variables specific to MapR
Configuration Files - Information about MapR settings
Ports Used by MapR - List of network ports used by MapR services
Glossary - Essential MapR terms and definitions
Hadoop Commands - Listing of Hadoop commands and options
YARN Commands - Listing of Hadoop YARN commands and options
Documentation for Previous Releases

Here are links to documentation for all major releases of MapR software:

Version 3.x (PDF here) - Latest Release
Version 2.x (PDF here)
Version 1.x (PDF here)
Release Notes

Ecosystem Components
Release notes for open source Hadoop components in the MapR Distribution for Hadoop are available here.

MapR Distribution for Hadoop
Release notes for versions of the MapR Distribution for Hadoop are available here.
MapR Control System

The MapR Control System main screen consists of a navigation pane on the left and a view on the right. Dialogs appear over the main screen to perform certain actions.
Logging on to the MapR Control System

1. In a browser, navigate to the node that is running the mapr-webserver service:
https://<hostname>:8443
2. When prompted, enter the username and password of the administrative user.
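The same webserver port also serves the REST API covered in the API Reference section. As a rough sketch of how REST command URLs are composed (the hostname, command, and parameter below are placeholders chosen for illustration, not taken from this guide):

```python
from urllib.parse import urlencode

# Hypothetical helper: builds a URL for the webserver's REST API,
# where commands take the form https://<hostname>:8443/rest/<command>.
# See the API Reference section for the actual command set.
def mcs_rest_url(host, command, params=None, port=8443):
    url = "https://{}:{}/rest/{}".format(host, port, "/".join(command))
    if params:
        url += "?" + urlencode(params)
    return url

# Example: a hypothetical node named "mapr-node1".
print(mcs_rest_url("mapr-node1", ["node", "list"], {"columns": "hostname"}))
# https://mapr-node1:8443/rest/node/list?columns=hostname
```

As with the browser login above, such requests are authenticated with the username and password of the administrative user.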
The Dashboard
The Navigation pane on the left lets you choose which view to display on the right. The main view groups are:

Cluster Views - information about the nodes in the cluster
MapR-FS - information about volumes, snapshots and schedules
NFS HA Views - NFS nodes and virtual IP addresses
Alarms Views - node and volume alarms
System Settings Views - configuration of alarm notifications, quotas, users, groups, SMTP, and HTTP

Some other views are separate from the main navigation tree:

CLDB View - information about the container location database
HBase View - information about HBase on the cluster
JobTracker View - information about the JobTracker
Nagios View - information about the Nagios configuration script
JobHistory Server View - information about MapReduce jobs after their ApplicationMaster terminates
Views

Views display information about the system. As you open views, tabs along the top let you switch between them quickly. Clicking any column name in a view sorts the data in ascending or descending order by that column. Most views contain a Filter toolbar that lets you filter the data in the view, so you can quickly find the information you want. Some views contain collapsible panes that provide different types of detailed information. Each collapsible pane has a control at the top left that expands and collapses the pane. The control changes to show the state of the pane:
- pane is collapsed; click to expand
- pane is expanded; click to collapse
The Filter Toolbar The Filter toolbar lets you build search expressions to provide sophisticated filtering capabilities for locating specific data on views that display a large number of nodes. Expressions are implicitly connected by the AND operator; any search results satisfy the criteria specified in all expressions.
The Filter toolbar has two controls:
The Minus button ( - ) removes the expression.
The Plus button ( + ) adds a new expression.
Expressions

Each expression specifies a semantic statement that consists of a field, an operator, and a value. The first dropdown menu specifies the field to match. The second dropdown menu specifies the type of match to perform. The text field specifies a value to match or exclude in the field. You can use a wildcard to substitute for any part of the string.
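The filter semantics described above - expressions implicitly ANDed together, with wildcards allowed in values - can be sketched as follows. This is an illustration of the behavior, not MapR's implementation; the field names, operators, and records are made up:

```python
from fnmatch import fnmatch

# Sketch of the Filter toolbar's semantics: each expression is a
# (field, operator, value) triple, all expressions are ANDed, and
# "*" in the value acts as a wildcard.
def matches(record, expressions):
    for field, op, value in expressions:
        hit = fnmatch(str(record.get(field, "")), value)
        if (op == "is" and not hit) or (op == "is not" and hit):
            return False
    return True  # every expression was satisfied

nodes = [
    {"hostname": "node-a.example.com", "health": "healthy"},
    {"hostname": "node-b.example.com", "health": "degraded"},
]
# Two expressions, implicitly ANDed:
# hostname matches "node-*" AND health is "degraded".
exprs = [("hostname", "is", "node-*"), ("health", "is", "degraded")]
print([n["hostname"] for n in nodes if matches(n, exprs)])
# ['node-b.example.com']
```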
Cluster Views

This section provides reference for the following views in the MapR Control System:

Dashboard - Summary of cluster health, activity, and usage
  Cluster Heatmap
  Alarms
  Cluster Utilization
  Yarn
  Classic MapReduce (v1)
  Services
  Volumes
Nodes - Summary of node information
  Overview
  Services
  Performance
  Disks
  MapReduce
  NFS Nodes
  Alarm Status
Node Properties View - Details about a node
  Alarms
  Machine Performance
  MapR-FS and Available Disks
  System Disks
  Manage Node Services
  MapReduce
  DB Gets, Puts, Scans
Node Heatmap
Jobs
  The Job Pane
  The Task Table
  The Task Attempt Pane
Dashboard - Summary of cluster health, activity, and usage

The Dashboard displays a summary of information about the cluster in six panes.
Panes include:

Cluster Heatmap - the alarms and health for each node, by rack
Alarms - a summary of alarms for the cluster
Cluster Utilization - CPU, Memory, and Disk Space usage
Yarn - the number of running and queued applications, number of Node Managers, used memory, total memory, percent of memory used, CPUs used, total CPUs, and percent of CPUs used
MapReduce - the number of running and queued jobs, running tasks, running map tasks, running reduce tasks, map task capacity, reduce task capacity, map task prefetch capacity, and blacklisted nodes
Services - the number of instances of each service
Volumes - the number of available, under-replicated, and unavailable volumes

Links in each pane provide shortcuts to more detailed information. The following sections provide information about each pane.
Cluster Heatmap

The Cluster Heatmap pane displays the health of the nodes in the cluster, by rack. Each node appears as a colored square to show its health at a glance.
If you click on the small wrench icon at the upper right of the Cluster Heatmap pane, a key to the color-coded heatmap display slides into view. At the top of the display, you can set the refresh rate for the display (measured in seconds), as well as the number of columns to display (for example, 20 nodes are displayed across two rows for a 10-column display). Click the wrench icon again to slide the display back out of view.
The left drop-down menu at the top of the pane lets you choose which data is displayed. Some of the choices are shown below.
Heatmap legend by category

The heatmap legend changes depending on the criteria you select from the drop-down menu. All the criteria and their corresponding legends are shown here.
Health
Healthy - all services up, MapR-FS and all disks OK, and normal heartbeat
Upgrading - upgrade in process
Degraded - one or more services down, or no heartbeat for over 1 minute
Maintenance - routine maintenance in process
Critical - MapR-FS Inactive/Dead/Replicate, or no heartbeat for over 5 minutes
CPU Utilization

CPU < 50%
CPU < 80%
CPU >= 80%
Unknown

Memory Utilization

Memory < 50%
Memory < 80%
Memory >= 80%
Unknown

Disk Space Utilization

Used < 50%
Used < 80%
Used >= 80%
Unknown
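The three utilization legends above share the same thresholds, so the bucketing can be sketched in a few lines. The bucket names here are assumptions for illustration; the actual heatmap colors are shown in the legend graphics:

```python
# Sketch of the heatmap bucketing implied by the legends above:
# below 50% is healthy, 50-79% is a warning, 80% and up is critical,
# and missing data is shown as Unknown.
def utilization_bucket(percent):
    if percent is None:
        return "unknown"
    if percent < 50:
        return "ok"        # e.g. CPU < 50%
    if percent < 80:
        return "warning"   # e.g. CPU < 80%
    return "critical"      # e.g. CPU >= 80%

print([utilization_bucket(p) for p in (25, 65, 95, None)])
# ['ok', 'warning', 'critical', 'unknown']
```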
Too Many Containers Alarm

Containers within limit
Containers exceeded limit

Duplicate HostId Alarm

No duplicate host id detected
Duplicate host id detected

UID Mismatch Alarm

No UID mismatch detected
UID mismatch detected

No Heartbeat Detected Alarm

Node heartbeat detected
Node heartbeat not detected

TaskTracker Local Dir Full Alarm

TaskTracker local directory is not full
TaskTracker local directory full

PAM Misconfigured Alarm

PAM configured
PAM misconfigured

High FileServer Memory Alarm

Fileserver memory OK
Fileserver memory high

Cores Present Alarm

No core files
Core files present

Installation Directory Full Alarm

Installation Directory free
Installation Directory full

Metrics Write Problem Alarm

Metrics writing to Database
Metrics unable to write to Database

Root Partition Full Alarm

Root partition free
Root partition full

HostStats Down Alarm

HostStats running
HostStats down

Webserver Down Alarm

Webserver running
Webserver down

NFS Gateway Down Alarm

NFS Gateway running
NFS Gateway down

HBase RegionServer Down Alarm

HBase RegionServer running
HBase RegionServer down

HBase Master Down Alarm

HBase Master running
HBase Master down

TaskTracker Down Alarm

TaskTracker running
TaskTracker down

JobTracker Down Alarm

JobTracker running
JobTracker down

FileServer Down Alarm

FileServer running
FileServer down

CLDB Down Alarm

CLDB running
CLDB down

Time Skew Alarm

Time OK
Time skew alarm(s)

Software Installation & Upgrades Alarm

Version OK
Version alarm(s)

Disk Failure(s) Alarm

Disks OK
Disk alarm(s)

Excessive Logging Alarm

No debug
Debugging
Zoomed view

You can see a zoomed view of all the nodes in the cluster by moving the zoom slide bar. The zoomed display reveals more details about each node, based on the criteria you chose from the drop-down menu. In this example, CPU Utilization is displayed for each node.
Clicking a rack name navigates to the Nodes view, which provides more detailed information about the nodes in the rack. Clicking a colored square navigates to the Node Properties View, which provides detailed information about the node.
Alarms

The Alarms pane includes these four columns:

Alarm - a list of alarms raised on the cluster
Last Raised - the most recent time each alarm state changed
Summary - how many nodes or volumes have raised each alarm
Clear Alarm - clicking the X clears the corresponding alarm

Clicking Alarm, Last Raised, or Summary sorts data in ascending or descending order by that column.
Cluster Utilization

The Cluster Utilization pane displays a summary of the total usage of the following resources:

CPU
Memory
Disk Space
For each resource type, the pane displays the percentage of cluster resources used, the amount used, and the total amount present in the system. A colored dot after the pane's title summarizes the status of the disk space balancer and the replication role balancer (see Configuring Balancer Settings):

Green - both balancers are running.
Orange - the replication role balancer is running.
Yellow - the disk space balancer is running.
Purple - neither balancer is running.

Click the colored dot to bring up the Balancer Configuration dialog.
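The dot-color rules above reduce to a lookup on two booleans, which can be sketched as follows (this is an illustration of the mapping described here, not MapR code):

```python
# Sketch of the colored-dot logic: the dot encodes which of the two
# balancers (disk space, replication role) are currently running.
def balancer_dot(disk_balancer_running, role_balancer_running):
    if disk_balancer_running and role_balancer_running:
        return "green"   # both balancers running
    if role_balancer_running:
        return "orange"  # only the replication role balancer
    if disk_balancer_running:
        return "yellow"  # only the disk space balancer
    return "purple"      # neither balancer running

print(balancer_dot(True, False))
# yellow
```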
Yarn

The Yarn pane shows information about YARN applications:

Running Applications - the number of YARN applications currently running
Queued Applications - the number of YARN applications queued to run
Number of Node Managers - the number of Node Managers (and the number of nodes) in the cluster
Used Memory - how much memory has been used to run the applications
Total Memory - how much total memory is available for running applications
Percent of Memory Used - the percent of memory used compared to the total memory available
CPUs Used - the number of CPU cores used
CPUs Total - the total number of CPU cores available
Percent of CPUs Used - the percent of CPUs used compared to the total number of CPU cores available
Classic MapReduce (v1)

The Classic MapReduce (v1) pane shows information about MapReduce jobs:

Running Jobs - the number of MapReduce jobs currently running
Queued Jobs - the number of MapReduce jobs queued to run
Running Tasks - the number of Map and Reduce tasks currently running
Running Map Tasks - the number of Map tasks currently running
Running Reduce Tasks - the number of Reduce tasks currently running
Map Task Capacity - the number of map slots available across all nodes in the cluster
Reduce Task Capacity - the number of reduce slots available across all nodes in the cluster
Map Task Prefetch Capacity - the number of map tasks that can be queued to fill map slots once they become available
Blacklisted Nodes - the number of nodes that have been eliminated from the MapReduce pool
Services

The Services pane shows information about the services running on the cluster. For each service, the pane displays the following information:

Actv - the number of running instances of the service
Stby - the number of instances of the service that are configured and standing by to provide failover
Stop - the number of instances of the service that have been intentionally stopped
Fail - the number of instances of the service that have failed, indicated by a corresponding Service Down alarm
Total - the total number of instances of the service configured on the cluster
Clicking a service navigates to the Services view.
Volumes

The Volumes pane displays the total number of volumes, and the number of volumes that are mounted and unmounted. For each category, the Volumes pane displays the number, percent of the total, and total size.
Clicking Mounted or Unmounted navigates to the Volumes view.
Nodes - Summary of node information

The Nodes view displays the nodes in the cluster, by rack. The Nodes view contains two panes: the Topology pane and the Nodes pane. The Topology pane shows the racks in the cluster. Selecting a rack displays that rack's nodes in the Nodes pane to the right. Selecting Cluster displays all the nodes in the cluster. Clicking any column name sorts data in ascending or descending order by that column.
Selecting the checkbox beside one node makes the following buttons available: Properties - navigates to the Node Properties View, which displays detailed information about a single node. Manage Services - displays the Manage Node Services dialog, which lets you start and stop services on the node. Change Topology - displays the Change Node Topology dialog, which lets you change the topology path for a node.
Note: If a node has a No Heartbeat alarm raised, the Forget Node button is also displayed.
When you click on Forget Node, the following message appears:
When you click on Manage Services, a dialog is displayed where you can stop, start, or restart the services on the node.
When you click on Change Topology, a dialog is displayed where you can choose a different location for the selected node.
Selecting the checkboxes beside multiple nodes changes the text on the buttons to reflect the number of nodes affected:
The dropdown menu at the top left specifies the type of information to display: Overview - general information about each node Services - services running on each node Performance - information about memory, CPU, I/O and RPC performance on each node Disks - information about disk usage, failed disks, and the MapR-FS heartbeat from each node MapReduce - information about the JobTracker heartbeat and TaskTracker slots on each node NFS Nodes - the IP addresses and Virtual IPs assigned to each NFS node Alarm Status - the status of alarms on each node Clicking a node's Hostname navigates to the Node Properties View, which provides detailed information about the node. Selecting the Filter checkbox displays the Filter toolbar, which provides additional data filtering options.
Each time you select a filtering option, the option is displayed in the window below the Filter checkbox. You can add more options by clicking the add (+) button.
Overview The Overview displays the following general information about nodes in the cluster: Hlth - each node's health: healthy, degraded, critical, or maintenance Hostname - the hostname of each node Physical IP(s) - the IP address or addresses associated with each node FS HB - time since each node's last heartbeat to the CLDB Physical Topology - the rack path to each node
Services The Services view displays the following information about nodes in the cluster: Hlth - each node's health: healthy, degraded, critical, or maintenance Hostname - the hostname of each node
Configured Services - a list of the services specified in the config file Running Services - a list of the services running on each node Physical Topology - each node's physical topology
Performance The Performance view displays the following information about nodes in the cluster: Hlth - each node's health: healthy, degraded, critical, or maintenance Hostname - DNS hostname for the nodes in this cluster Memory - percentage of memory used and the total memory % CPU - percentage of CPU usage on the node # CPUs - number of CPUs present on the node Bytes Received - number of bytes received in 1 second, through all network interfaces on the node Bytes Sent - number of bytes sent in 1 second, through all network interfaces on the node # RPCs - number of RPC calls RPC In Bytes - number of RPC bytes received by this node every second RPC Out Bytes - number of RPC bytes sent by this node every second # Disk Reads - number of disk read operations on this node every second # Disk Writes - number of disk write operations on this node every second Disk Read Bytes - number of bytes read from all the disks on this node every second Disk Write Bytes - number of bytes written to all the disks on this node every second # Disks - number of disks on this node Gets - 1m - number of data retrievals (gets) executed on this region's primary node in a 1-minute interval Puts - 1m - number of data writes (puts) executed on this region's primary node in a 1-minute interval Scans - 1m - number of data seeks (scans) executed on this region's primary node in a 1-minute interval
Disks The Disks view displays the following information about nodes in the cluster: Hlth - each node's health: healthy, degraded, or critical Hostname - the hostname of each node # Bad Disks - the number of failed disks on each node Disk Space - the amount of disk used and total disk capacity, in gigabytes
MapReduce The MapReduce view displays the following information about nodes in the cluster: Hlth - each node's health: healthy, degraded, or critical Hostname - the hostname of each node TT Map Slots - the number of map slots on each node TT Map Slots Used - the number of map slots in use on each node TT Reduce Slots - the number of reduce slots on each node TT Reduce Slots Used - the number of reduce slots in use on each node
NFS Nodes The NFS Nodes view displays the following information about nodes in the cluster: Hlth - each node's health: healthy, degraded, or critical Hostname - the hostname of each node Physical IP(s) - the IP address or addresses associated with each node Virtual IP(s) - the virtual IP address or addresses assigned to each node
Alarm Status The Alarm Status view displays the following information about nodes in the cluster: Hlth - each node's health: healthy, degraded, critical, or maintenance Hostname - DNS hostname for nodes in this cluster Version Alarm - one or more services on the node are running an unexpected version No Heartbeat Alarm - the node is not undergoing maintenance, and no heartbeat has been detected for over 5 minutes UID Mismatch Alarm - services in the cluster are being run with different user names (UIDs) Duplicate HostId Alarm - two or more nodes in the cluster have the same host id Too Many Containers Alarm - the number of containers on this node has reached the maximum limit Excess Logs Alarm - debug logging is enabled on the node (debug logging generates enormous amounts of data and can fill up disk space) Disk Failure Alarm - a disk has failed on the node Time Skew Alarm - the clock on the node is out of sync with the master CLDB by more than 20 seconds Root Partition Full Alarm - the root partition ("/") on the node is running out of space (99% full) Installation Directory Full Alarm - the partition /opt/mapr on the node is running out of space (95% full) Core Present Alarm - a service on the node has crashed and created a core dump file High FileServer Memory Alarm - memory consumed by the fileserver service on the node is high Pam Misconfigured Alarm - the PAM authentication on the node is configured incorrectly TaskTracker Local Directory Full Alarm - the local directory used by the TaskTracker on the specified node(s) is full, and the TaskTracker cannot operate as a result CLDB Alarm - the CLDB service on the node has stopped running FileServer Alarm - the FileServer service on the node has stopped running JobTracker Alarm - the JobTracker service on the node has stopped running TaskTracker Alarm - the TaskTracker service on the node has stopped running HBase Master Alarm - the HBase Master service on the node has stopped running HBase RegionServer Alarm - the HBase RegionServer service on the node has stopped running NFS Gateway Alarm - the NFS service on the node has stopped running WebServer Alarm - the WebServer service on the node has stopped running HostStats Alarm - the HostStats service has stopped running Metrics Write Problem Alarm - metric data was not written to the database
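Two of the alarms above fire at fixed fill thresholds: Root Partition Full at 99% and Installation Directory Full at 95%. As a rough sketch of the same check using standard tools (the use of df here is purely illustrative, not MapR's actual implementation; the hoststats service performs this monitoring internally):

```shell
ROOT_THRESHOLD=99        # Root Partition Full alarm threshold (%)
OPT_THRESHOLD=95         # Installation Directory Full alarm threshold (%)

usage_pct() {
    # Print the fill percentage of the filesystem holding $1.
    # Column 5 of POSIX "df -P" output is Capacity, e.g. "42%".
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

root_usage=$(usage_pct /)
if [ "$root_usage" -ge "$ROOT_THRESHOLD" ]; then
    echo "ROOT_PARTITION_FULL would be raised (${root_usage}% used)"
else
    echo "root partition OK (${root_usage}% used)"
fi
```

The same check against /opt/mapr with `$OPT_THRESHOLD` covers the installation directory case.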
Node Properties View - Details about a node The Node Properties view displays detailed information about a single node in seven collapsible panes: Alarms Machine Performance MapR-FS and Available Disks System Disks Manage Node Services MapReduce DB Gets, Puts, Scans
Buttons: Forget Node - displays the Forget Node dialog box
Alarms The Alarms pane displays a list of alarms that have been raised on the system, and the following information about each alarm: Alarm - the alarm name Last Raised - the most recent time when the alarm was raised Summary - a description of the alarm Clear Alarm - clicking on the X clears the corresponding alarm
Machine Performance The Machine Performance pane displays the following information about the node's performance and resource usage since it last reported to the CLDB: Memory Used - the amount of memory in use on the node
Disk Used - the amount of disk space used on the node CPU - the number of CPUs and the percentage of CPU used on the node Network I/O - the input and output to the node per second RPC I/O - the number of RPC calls on the node and the amount of RPC input and output Disk I/O - the amount of data read from and written to the disk # Operations - the number of disk reads and writes
MapR-FS and Available Disks The MapR-FS and Available Disks pane displays the disks on the node and information about each disk.
Information headings include: Status - the status of the disk (healthy, failed, or offline) Mount - whether the disk is mounted or unmounted Device - the device name File System - the file system on the disk Used - the percentage of disk space used out of the total available on the disk Model # - the model number of the disk Serial # - the serial number of the disk Firmware Version - the version of the firmware being used Add to MapR-FS - clicking the add button adds the disk to MapR-FS storage Remove from MapR-FS - clicking the remove button displays a dialog that asks you to verify that you want to remove the disk
If you confirm by clicking OK, and data on that disk has not been replicated, a warning dialog appears:
For more information on disk status, and the proper procedure for adding, removing, and replacing disks, see the Managing Disks page. If you are running MapR 1.2.2 or earlier, do not use the disk add command or the MapR Control System to add disks to MapR-FS. You must either upgrade to MapR 1.2.3 before adding or replacing a disk, or use the following procedure (which avoids the disk add command): 1. Use the MapR Control System to remove the failed disk. All other disks in the same storage pool are removed at the same time. Make a note of which disks have been removed. 2. Create a text file /tmp/disks.txt containing a list of the disks you just removed. See Setting Up Disks for MapR. 3. Add the disks to MapR-FS by typing the following command (as root or with sudo): /opt/mapr/server/disksetup -F /tmp/disks.txt
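The disk list in step 2 is a plain text file with one device path per line. A sketch of steps 2 and 3, using hypothetical device names that must be replaced with the disks noted in step 1:

```shell
# /dev/sdb, /dev/sdc, and /dev/sdd are placeholders -- substitute
# the actual disks the MCS removed from the storage pool.
cat > /tmp/disks.txt <<'EOF'
/dev/sdb
/dev/sdc
/dev/sdd
EOF

# Step 3 -- run as root (or with sudo) on the affected node:
# /opt/mapr/server/disksetup -F /tmp/disks.txt
```

See Setting Up Disks for MapR for the full rules on which devices may be listed.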
System Disks The System Disks pane displays information about disks present and mounted on the node: Status - the status of the disk (healthy, failed, or offline) Mount - whether the disk is mounted or unmounted Device - the device name File System - the file system on the disk Used - the percentage of disk space used out of the total available on the disk Model # - the model number of the disk Serial # - the serial number of the disk Firmware Version - the version of the firmware being used
Manage Node Services The Manage Node Services pane displays the status of each service on the node.
Service - the name of each service State: Configured - the package for the service is installed and the service is configured for all nodes, but it is not enabled for the particular node Not Configured - the package for the service is not installed and/or the service is not configured (configure.sh has not run) Running - the service is installed, has been started by the warden, and is currently executing Stopped - the service is installed and configure.sh has run, but the service is currently not executing StandBy - the service is installed and standing by to provide failover Failed - the service was running, but terminated unexpectedly Log Path - the path where each service stores its logs Stop/Start - click the stop button to stop the service, or the start button to start it Restart - click the restart button to restart the service Log Settings - displays the Trace Activity dialog where you can set the level of logging for a service on a particular node. When you select a log level, all the levels listed above it are included in the log. Levels include: ERROR, WARN, INFO, DEBUG, TRACE
You can also start and stop services in the Manage Node Services dialog, by clicking Manage Services in the Nodes view.
MapReduce The MapReduce pane displays the number of map and reduce slots used, and the total number of map and reduce slots on the node.
DB Gets, Puts, Scans The DB Gets, Puts, Scans pane displays the number of gets, puts, and scan operations performed during various time intervals.
Node Heatmap The Node Heatmap view provides a graphical summary of node status across the cluster. This view displays the same information as the Node Heatmap pane on the Dashboard, without the other panes that appear on the dashboard.
Jobs The Jobs view displays the data collected by the MapR Metrics service. The Jobs view contains two panes: the chart pane and the data grid. The chart pane displays the data corresponding to the selected metric in histogram form. The data grid lists the jobs running on the cluster.
Click on the wrench icon to slide out a menu of information to display. Choices include: Cumulative Job Combine Input Records Cumulative Job Map Input Bytes Cumulative Job Map Input Records Cumulative Job Map Output Bytes Cumulative Job Map Output Records Cumulative Job Reduce Input Records Cumulative Job Reduce Output Bytes Cumulative Job Reduce Shuffle Bytes Cumulative Physical Memory Current CPU Current Memory
Job Average Map Attempt Duration Job Average Reduce Attempt Duration Job Average Task Duration Job Combine Output Records Job Complete Map Task Count Job Complete Reduce Task Count Job Complete Task Count Job Cumulative CPU Job Data-local Map Tasks Job Duration Job End Time Job Error Count Job Failed Map Task Attempt Count Job Failed Map Task Count Job Failed Reduce Task Attempt Count Job Failed Reduce Task Count Job Failed Task Attempt Count Job Failed Task Count Job Id Job Map CPU Job Map Cumulative Memory Bytes Job Map File Bytes Written Job Map GC Time Job Map Input Bytes/Sec Job Map Input Records/Sec Job Map Output Bytes/Sec Job Map Output Records/Sec Job Map Progress Job Map Reserve Slot Wait Job Map Spilled Records Job Map Split Raw Bytes Job Map Task Attempt Count Job Map Task Count Job Map Tasks Duration Job Map Virtual Memory Bytes Job MapR-FS Map Bytes Read Job MapR-FS Map Bytes Written Job MapR-FS Reduce Bytes Read Job MapR-FS Reduce Bytes Written Job MapR-FS Total Bytes Read Job MapR-FS Total Bytes Written Job Maximum Map Attempt Duration Job Maximum Reduce Attempt Duration Job Maximum Task Duration Job Name Job Non-local Map Tasks Job Rack-local Map Tasks Job Reduce CPU Job Reduce Cumulative Memory Bytes Job Reduce File Bytes Written Job Reduce GC Time Job Reduce Input Groups Job Reduce Input Records/Sec Job Reduce Output Records/Sec Job Reduce Progress Job Reduce Reserve Slot Wait Job Reduce Shuffle Bytes/Sec Job Reduce Spilled Records Job Reduce Split Raw Bytes Job Reduce Task Attempt Count Job Reduce Task Count Job Reduce Tasks Duration Job Reduce Virtual Memory Bytes Job Running Map Task Count Job Running Reduce Task Count Job Running Task Count Job Split Raw Bytes Job Start Time Job Submit Time Job Task Attempt Count Job Total File Bytes Written Job Total GC Time
Job Total Spilled Records Job Total Task Count Job User Logs Map Tasks Finish Time Map Tasks Start Time Priority Reduce Tasks Finish Time Reduce Tasks Start Time Status Virtual Memory Bytes Select the Filter checkbox to display the Filter toolbar, which provides additional data filtering options. The x-axis: drop-down selector lets you change the display scale of the histogram's X axis between a uniform or logarithmic scale. Hover the cursor over a bar in the histogram to display the Filter and Zoom buttons.
Click the Filter button or click the bar to filter the table below the histogram by the data range corresponding to that bar. The selected bar turns yellow. Hover the cursor over the selected bar to display the Clear Filter and Zoom buttons. Click the Clear Filter button to remove the filter from the data range in the table below the histogram. Double-click a bar or click the Zoom button to zoom in and display a new histogram that displays metrics constrained to the data range represented by the bar. The data range applied to the metrics data set displays above the histogram.
Click the plus or minus buttons in the filter conditions panel to add or remove filter conditions. Uncheck the Filter checkbox above the histogram to clear the entire filter. Check the box next to a job in the table below the histogram to enable the View Job button. If the job is still running, checking this box also enables the Kill Job button. Clicking Kill Job will display a confirmation dialog to choose whether or not to terminate the job. Click the View Job button or click the job name in the table below the histogram to open the Job tab for that job.
The Job Pane From the main Jobs page, select a job from the list below the histogram and click View Job. You can also click directly on the name of the job in the list. The Job Properties pane displays with the Tasks tab selected by default. This pane has three tabs, Tasks, Charts, and Info. If the job is running, the Kill Job button is enabled.
The Tasks Tab The Tasks tab has two panes. The upper pane displays histograms of metrics for the tasks and task attempts in the selected job. The lower pane displays a table that lists the tasks and primary task attempts in the selected job. Tasks can be in any of the following states: COMPLETE FAILED KILLED PENDING RUNNING The table of tasks also lists the following information for each task:
Task ID. Click the link to display a table with information about the task attempts for this task. Task type: M: Map R: Reduce TC: Task Cleanup JS: Job Setup JC: Job Cleanup Primary task attempt ID. Click the link to display the task attempt pane for this task attempt. Task starting timestamp Task ending timestamp Task duration Host locality Node running the task. Click the link to display the Node Properties pane for this node.
You can select the following task histogram metrics for this job from the drop-down selector: Task Duration Task Attempt Duration Task Attempt Local Bytes Read Task Attempt Local Bytes Written Task Attempt MapR-FS Bytes Read Task Attempt MapR-FS Bytes Written Task Attempt Garbage Collection Time Task Attempt CPU Time Task Attempt Physical Memory Bytes Task Attempt Virtual Memory Bytes Map Task Attempt Input Records Map Task Attempt Output Records Map Task Attempt Skipped Records Map Task Attempt Input Bytes Map Task Attempt Output Bytes Reduce Task Attempt Input Groups Reduce Task Attempt Shuffle Bytes Reduce Task Attempt Input Records Reduce Task Attempt Output Records Reduce Task Attempt Skipped Records Task Attempt Spilled Records Combined Task Attempt Input Records Combined Task Attempt Output Records Uncheck the Show Map Tasks box to hide map tasks. Uncheck the Show Reduce Tasks box to hide reduce tasks. Check the Show Setup/Cleanup Tasks box to display job and task setup and cleanup tasks. Histogram filtering and zoom work in the same way as in the Jobs pane.
The Charts Tab Click the Charts tab to display your job's line chart metrics.
Click the Add chart button to add a new line chart. You can use the X and minus buttons at the top-left of each chart to dismiss or hide the chart. Line charts can display the following metrics for your job: Cumulative CPU used Cumulative physical memory used Number of failed map tasks Number of failed reduce tasks Number of running map tasks Number of running reduce tasks Number of map task attempts Number of failed map task attempts Number of failed reduce task attempts Rate of map record input Rate of map record output Rate of map input bytes Rate of map output bytes Rate of reduce record output Rate of reduce shuffle bytes Average duration of map attempts Average duration of reduce attempts Maximum duration of map attempts Maximum duration of reduce attempts
The Information Tab
The Information tab of the Job Properties pane displays summary information about the job in three collapsible panes: The MapReduce Framework Counters pane displays information about this job's MapReduce activity. The Job Counters pane displays information about the number of this job's map tasks. The File System Counters pane displays information about this job's interactions with the cluster's file system.
The Task Table
The Task table displays a list of the task attempts for the selected task, along with the following information for each task attempt: Status: RUNNING SUCCEEDED FAILED UNASSIGNED KILLED COMMIT PENDING FAILED UNCLEAN KILLED UNCLEAN Task attempt ID. Click the link to display the task attempt pane for this task attempt. Task attempt type: M: Map R: Reduce TC: Task Cleanup JS: Job Setup JC: Job Cleanup Task attempt starting timestamp Task attempt ending timestamp Task attempt shuffle ending timestamp Task attempt sort ending timestamp Task attempt duration Node running the task attempt. Click the link to display the Node Properties pane for this node. A link to the log file for this task attempt Diagnostic information about this task attempt
The Task Attempt Pane The Task Attempt pane has two tabs, Info and Charts.
The Task Attempt Info Tab
The Info tab displays summary information about this task attempt in three panes: The MapReduce Framework Counters pane displays information about this task attempt's MapReduce activity. The MapReduce Throughput Counters pane displays information about the I/O performance in Bytes/sec and Records/sec. The File System Counters pane displays information about this task attempt's interactions with the cluster's file system.
The Task Attempt Charts Tab The Task Attempt Charts tab displays line charts for metrics specific to this task attempt. By default, this tab displays charts for these metrics: Cumulative CPU by Time
Physical Memory by Time Virtual Memory by Time Click the Add chart button to add a new line chart. You can use the X and minus buttons at the top-left of each chart to dismiss or hide the chart. Line charts can display the following metrics for your task: Combine Task Attempt Input Records Combine Task Attempt Output Records Map Task Attempt Input Bytes Map Task Attempt Input Records Map Task Attempt Output Bytes Map Task Attempt Output Records Map Task Attempt Skipped Records Reduce Task Attempt Input Groups Reduce Task Attempt Input Records Reduce Task Attempt Output Records Reduce Task Attempt Shuffle Bytes Reduce Task Attempt Skipped Records Task Attempt CPU Time Task Attempt Local Bytes Read Task Attempt Local Bytes Written Task Attempt MapR-FS Bytes Read Task Attempt MapR-FS Bytes Written Task Attempt Physical Memory Bytes Task Attempt Spilled Records Task Attempt Virtual Memory Bytes
MapR-FS Views The MapR-FS group provides the following views: Tables - information about M7 tables in the cluster Volumes - information about volumes in the cluster Mirror Volumes - information about mirrors User Disk Usage - cluster disk usage Snapshots - information about volume snapshots Schedules - information about schedules
Tables The Tables view displays a list of tables in the cluster.
The New Table button displays a field where you can enter the path to a new table to create from the MCS.
Click the name of a table from the Tables view to display the table detail view.
From the table detail view, click Delete Table to delete this table. The table detail view has the following tabs: Column Families Regions The Column Families tab displays the following information:
Column Family Name Max Versions Min Versions Compression Time-to-Live In Memory Click the Edit Column Family button to change these values. Click the Delete Column Family button to delete the selected column families. The Regions tab displays the following information:
Start Key - The first key in the region range. End Key - The last key in the region range. Physical Size - The physical size of the region with compression. Logical Size - The logical size of the region without compression. # Rows - The number of rows stored in the region. Primary Node - The region's original source for storage and computation. Secondary Nodes - The region's replicated sources for storage and computation. Last HB - The time interval since the last data communication with the region's primary node. Region Identifier - The tablet region identifier.
Volumes The Volumes view displays the following information about volumes in the cluster: Mnt - Whether the volume is mounted. Vol Name - The name of the volume. Mount Path - The path where the volume is mounted. Creator - The user or group that owns the volume. Quota - The volume quota. Vol Size - The size of the volume. Data Size - The size of the volume on the disk before compression. Snap Size - The size of all snapshots for the volume. As the differences between the snapshots and the current state of the volume grow, the amount of data storage taken up by the snapshots increases. Total Size - The size of the volume and all its snapshots. Replication Factor - The number of copies of the volume. Physical Topology - The rack path to the volume. Clicking any column name sorts data in ascending or descending order by that column.
The Unmounted checkbox specifies whether to show unmounted volumes: selected - show both mounted and unmounted volumes unselected - show mounted volumes only The System checkbox specifies whether to show system volumes: selected - show both system and user volumes unselected - show user volumes only Selecting the Filter checkbox displays the Filter toolbar, which provides additional data filtering options. Clicking New Volume displays the New Volume dialog.
New Volume The New Volume dialog lets you create a new volume.
For mirror volumes, the Snapshot Scheduling section is replaced with a section called Mirror Scheduling:
The Volume Setup section specifies basic information about the volume using the following fields: Volume Type - a standard volume, or a local or remote mirror volume Volume Name (required) - a name for the new volume Mount Path - a path on which to mount the volume (check the small box at the right to indicate the mount path for the new volume; if the box is not checked, an unmounted volume is created) Topology - the new volume's rack topology Read-only - if checked, prevents writes to the volume The Permissions section lets you grant specific permissions on the volume to certain users or groups: User/Group field - the user or group to which permissions are to be granted (one user or group per row) Permissions field - the permissions to grant to the user or group (see the Permissions table below) Delete button - deletes the current row [+ Add Permission ] - adds a new row
Volume Permissions:
Code - Allowed Action
dump - Dump/Back up the volume
restore - Restore/Mirror the volume
m - Edit volume properties
d - Delete the volume
fc - Full control (admin access and permission to change volume ACL)
The Usage Tracking section displays cluster usage and sets quotas for the volume using the following fields: Quotas - the volume quotas: Volume Advisory Quota - if selected, enter the advisory quota for the volume expressed as an integer plus the single letter abbreviation for the unit (such as 100G for 100GB). When this quota is reached, an advisory email is sent to the user or group. Volume Hard Quota - if selected, enter the maximum limit for the volume expressed as an integer plus the single letter abbreviation for the unit (such as 128G for 128GB). When this hard limit is reached, no more data is written to the volume. The Replication section contains the following fields: Replication - the requested replication factor for the volume Min Replication - the minimum replication factor for the volume. When the number of replicas drops down to or below this number, the volume is aggressively re-replicated to bring it above the minimum replication factor. Optimize Replication For - the basis for choosing the optimum replication factor (high throughput or low latency) The Snapshot Scheduling section (normal volumes) contains the snapshot schedule, which determines when snapshots will be automatically created. Select an existing schedule from the pop-up menu. The Mirror Scheduling section (local and remote mirror volumes) contains the mirror schedule, which determines when mirror volumes will be automatically created. Select an existing schedule from the pop-up menu. Buttons: OK - creates the new volume Cancel - exits without creating the volume
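Quota values such as 100G combine an integer with a single-letter unit abbreviation. A small sketch of converting such a string to bytes (the K/M/G/T suffix set and the 1024 multiplier are assumptions based on the examples in the text, not a documented contract):

```shell
# Convert a quota string such as "100G" to bytes.
quota_to_bytes() {
    value=${1%?}           # numeric part, e.g. "100"
    unit=${1#"$value"}     # trailing unit letter, e.g. "G"
    case $unit in
        K) echo $(( value * 1024 )) ;;
        M) echo $(( value * 1024 * 1024 )) ;;
        G) echo $(( value * 1024 * 1024 * 1024 )) ;;
        T) echo $(( value * 1024 * 1024 * 1024 * 1024 )) ;;
        *) echo "unknown unit: $unit" >&2; return 1 ;;
    esac
}

quota_to_bytes 100G    # prints 107374182400
```

Per the text, reaching the advisory quota only triggers an email, while reaching the hard quota stops further writes to the volume.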
Volume Actions You can modify a volume's properties by selecting the checkbox next to that volume and clicking the Volume Actions button, which displays a dropdown menu of the properties you can modify.
To apply one set of changes to multiple volumes, mark the checkboxes next to each volume.
Properties Clicking on a volume name displays the Volume Properties dialog where you can view information about the volume, and check or change various settings. You can also remove the volume.
If you click on Remove Volume, the following dialog appears:
Buttons: OK - removes the volume or volumes Cancel - exits without removing the volume or volumes For information about the fields in the Volume Properties dialog, see New Volume. The Partly Out of Topology checkbox is checked when containers in this volume are outside the volume's main topology.
Snapshots The Snapshots dialog displays the following information about snapshots for the specified volume: Snapshot Name - The name of the snapshot. Disk Used - The total amount of logical storage held by the snapshot. Since the current volume and all of its snapshots often have storage held in common, the total disk usage reported will often exceed the total storage used by the volume. The value reported in this field is the size the snapshot would have if the difference between the snapshot and the volume's current state were 100%. Created - The date and time the snapshot was created. Expires - The snapshot expiration date and time.
Buttons: New Snapshot - Displays the Snapshot Name dialog. Remove - When the checkboxes beside one or more snapshots are selected, displays the Remove Snapshots dialog. Preserve - When the checkboxes beside one or more snapshots are selected, prevents the snapshots from expiring. Close - Closes the dialog.
New Snapshot The Create New Snapshot dialog lets you specify the name for a new snapshot you are creating.
The Snapshot Name dialog creates a new snapshot with the name specified in the following field: Name For New Snapshot(s) - the new snapshot name Buttons: OK - creates a snapshot with the specified name Cancel - exits without creating a snapshot
Remove Snapshots The Remove Snapshots dialog prompts you for confirmation before removing the specified snapshot or snapshots.
Buttons Yes - removes the snapshot or snapshots No - exits without removing the snapshot or snapshots
Assign Cluster-Wide Provisionary Space Limit You can modify standard volume reserve limits by clicking the Assign Cluster-Wide Provisionary Space Limit button.
When you set a reserve limit, you provision a certain amount of space to each volume as a percentage of cluster capacity. You may want to set a reserve limit to free up space that could potentially be unused or allocate more space for replication. As data is written to the volume, available space is automatically allocated. The volume reserve increases up to the reserve limit you set.
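Since the reserve limit is expressed as a percentage of cluster capacity, the maximum reserve per volume is simple arithmetic. A worked sketch with invented figures (the 10 TB capacity and 20% limit are illustrative only, not defaults):

```shell
# Hypothetical figures for illustration only.
cluster_capacity_gb=10240    # 10 TB cluster
reserve_limit_pct=20         # reserve limit set to 20%

# Maximum space a single volume's reserve can grow to:
volume_reserve_gb=$(( cluster_capacity_gb * reserve_limit_pct / 100 ))
echo "each volume may reserve up to ${volume_reserve_gb} GB"   # 2048 GB
```

As data is written, space is allocated automatically and the reserve grows only up to this computed ceiling.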
Mirror Volumes The Mirror Volumes pane displays information about mirror volumes in the cluster: Mnt - whether the volume is mounted Vol Name - the name of the volume Src Vol - the source volume Src Clu - the source cluster Orig Vol - the originating volume for the data being mirrored Orig Clu - the originating cluster for the data being mirrored Last Mirrored - the time at which mirroring was most recently completed Status - the status of the last mirroring operation % Done - progress of the mirroring operation Error(s) - any errors that occurred during the last mirroring operation
User Disk Usage The User Disk Usage view displays information about disk usage by cluster users: Name - the username Disk Usage - the total disk space used by the user # Vols - the number of volumes Hard Quota - the user's quota Advisory Quota - the user's advisory quota Email - the user's email address
Snapshots
The Snapshots view displays the following information about volume snapshots in the cluster: Snapshot Name - the name of the snapshot Volume Name - the name of the source volume for the snapshot Disk Space Used - the disk space occupied by the snapshot Created - the creation date and time of the snapshot Expires - the expiration date and time of the snapshot Clicking any column name sorts data in ascending or descending order by that column.
Selecting the Filter checkbox displays the Filter toolbar, which provides additional data filtering options. Buttons: Remove Snapshot - when the checkboxes beside one or more snapshots are selected, displays the Remove Snapshots dialog Preserve Snapshot - when the checkboxes beside one or more snapshots are selected, prevents the snapshots from expiring
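The relationship between the Created and Expires columns above is a fixed offset: a snapshot expires after the retention period of the schedule that created it. A minimal sketch (the helper name is hypothetical; retention here is expressed in days for simplicity):

```python
from datetime import datetime, timedelta

def snapshot_expiry(created, retain_for_days):
    """Expires = Created + the schedule's 'Retain For' period."""
    return created + timedelta(days=retain_for_days)

# A snapshot created at 3:00 AM with a 7-day retention:
created = datetime(2012, 4, 1, 3, 0)
print(snapshot_expiry(created, 7))  # 2012-04-08 03:00:00
```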
Schedules The Schedules view lets you view and edit schedules, which can then be attached to events to create occurrences. A schedule is a named group of rules that describe one or more points of time in the future at which an action can be specified to take place.
The left pane of the Schedules view lists the following information about the existing schedules: Schedule Name - the name of the schedule; clicking a name displays the schedule details in the right pane for editing In Use - indicates whether the schedule is in use, that is, attached to an action
The right pane provides the following tools for creating or editing schedules: Schedule Name - the name of the schedule Schedule Rules - specifies schedule rules with the following components: A dropdown that specifies frequency (Once, Yearly, Monthly, Weekly, Daily, Hourly, Every X minutes) Dropdowns that specify the time within the selected frequency Retain For - the time for which the scheduled snapshot or mirror data is to be retained after creation [ +Add Rule ] - adds another rule to the schedule Navigating away from a schedule with unsaved changes displays the Save Schedule dialog. Buttons: New Schedule - starts editing a new schedule Remove Schedule - displays the Remove Schedule dialog Save Schedule - saves changes to the current schedule Cancel - cancels changes to the current schedule
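A schedule rule names future points in time at which an action fires. The next-occurrence logic for the simplest case, a Daily rule at a fixed time, can be sketched as follows (the function is a hypothetical illustration, not MapR's scheduler):

```python
from datetime import datetime, timedelta

def next_daily_run(now, hour, minute):
    """Next occurrence of a Daily rule firing at hour:minute."""
    run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if run <= now:           # today's slot already passed
        run += timedelta(days=1)
    return run

now = datetime(2012, 4, 1, 14, 30)
print(next_daily_run(now, 2, 0))   # 2012-04-02 02:00:00
print(next_daily_run(now, 18, 0))  # 2012-04-01 18:00:00
```

The other frequencies (Yearly, Monthly, Weekly, Hourly, Every X minutes) follow the same pattern with a different rollover step.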
Remove Schedule
The Remove Schedule dialog prompts you for confirmation before removing the specified schedule.
Buttons Yes - removes the schedule No - exits without removing the schedule
NFS HA Views The NFS view group provides the following views: NFS Setup - information about NFS nodes in the cluster VIP Assignments - information about virtual IP addresses (VIPs) in the cluster NFS Nodes - information about NFS nodes in the cluster
NFS Setup The NFS Setup view displays information about NFS nodes in the cluster and any VIPs assigned to them: Starting VIP - the starting IP of the VIP range Ending VIP - the ending IP of the VIP range Node Name(s) - the names of the NFS nodes IP Address(es) - the IP addresses of the NFS nodes MAC Address(es) - the MAC addresses associated with the IP addresses
Buttons: Start NFS - displays the Manage Node Services dialog Add VIP - displays the Add Virtual IPs dialog Edit - when one or more checkboxes are selected, edits the specified VIP ranges Remove - when one or more checkboxes are selected, removes the specified VIP ranges Unconfigured Nodes - displays nodes not running the NFS service (in the Nodes view) VIP Assignments - displays the VIP Assignments view
VIP Assignments The VIP Assignments view displays VIP assignments beside the nodes to which they are assigned: Virtual IP Address - each VIP in the range Node Name - the node to which the VIP is assigned IP Address - the IP address of the node MAC Address - the MAC address associated with the IP address
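To make the view above concrete, here is a sketch of expanding a VIP range and spreading it across NFS nodes round-robin. This is purely illustrative — the cluster manages the real assignment, and the node names are invented:

```python
import ipaddress

def assign_vips(starting_vip, ending_vip, nodes):
    """Expand a VIP range and round-robin it across NFS nodes
    (illustration only; actual assignment is done by the cluster)."""
    start = int(ipaddress.IPv4Address(starting_vip))
    end = int(ipaddress.IPv4Address(ending_vip))
    vips = [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]
    return {vip: nodes[i % len(nodes)] for i, vip in enumerate(vips)}

mapping = assign_vips("10.0.0.10", "10.0.0.13", ["node-a", "node-b"])
for vip, node in mapping.items():
    print(vip, "->", node)
```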
Buttons: Start NFS - displays the Manage Node Services dialog Add VIP - displays the Add Virtual IPs dialog Unconfigured Nodes - displays nodes not running the NFS service (in the Nodes view)
NFS Nodes The NFS Nodes view displays information about nodes running the NFS service: Hlth - the health of the node Hostname - the hostname of the node Physical IP(s) - physical IP addresses associated with the node Virtual IP(s) - virtual IP addresses associated with the node
Buttons: Properties - when one or more nodes are selected, navigates to the Node Properties View Forget Node - navigates to the Remove Node dialog, which lets you remove the node Manage Services - navigates to the Manage Node Services dialog, which lets you start and stop services on the node Change Topology - navigates to the Change Node Topology dialog, which lets you change the rack or switch path for a node
Alarms Views The Alarms view group provides the following views: Node Alarms - information about node alarms in the cluster Volume Alarms - information about volume alarms in the cluster User/Group Alarms - information about users or groups that have exceeded quotas Alarm Notifications - configure where notifications are sent when alarms are raised The following controls are available on views: Clicking any column name sorts data in ascending or descending order by that column. Selecting the Filter checkbox displays the Filter toolbar, which provides additional data filtering options. Clicking the Column Controls icon opens a dialog that lets you select which columns to view. Click any item to toggle its column on or off. You can also specify the refresh rate for updating data on the page.
Node Alarms The Node Alarms view displays information about alarms on any node in the cluster that has raised an alarm.
The first two columns display Hlth - a color indicating the status of each node (see Cluster Heat Map) Hostname - the hostname of the node The remaining columns are based on alarm type, such as: Version Alarm - one or more services on the node are running an unexpected version No Heartbeat Alarm - no heartbeat has been detected for over 5 minutes, and the node is not undergoing maintenance UID Mismatch Alarm - services in the cluster are being run with different usernames (UIDs) Duplicate HostId Alarm - two or more nodes in the cluster have the same Host ID Too Many Containers Alarm - the number of containers on this node reached the maximum limit Excess Logs Alarm - debug logging is enabled on this node, which can fill up disk space Disk Failure Alarm - a disk has failed on the node (the disk health log indicates which one failed) Time Skew Alarm - the clock on the node is out of sync with the master CLDB by more than 20 seconds Root Partition Full Alarm - the root partition ("/") on the node is 99% full and running out of space Installation Directory Full Alarm - the partition /opt/mapr on the node is running out of space (95% full) Core Present Alarm - a service on the node has crashed and created a core dump file High FileServer Memory Alarm - the FileServer service on the node has high memory consumption Pam Misconfigured Alarm - the PAM authentication on the node is configured incorrectly TaskTracker Local Directory Full Alarm - the local directory used by the TaskTracker is full, and the TaskTracker cannot operate as a result CLDB Alarm - the CLDB service on the node has stopped running FileServer Alarm - the FileServer service on the node has stopped running JobTracker Alarm - the JobTracker service on the node has stopped running TaskTracker Alarm - the TaskTracker service on the node has stopped running HBase Master Alarm - the HBase Master service on the node has stopped running HBase RegionServer Alarm - the HBase RegionServer service on the node has 
stopped running NFS Gateway Alarm - the NFS Gateway service on the node has stopped running Webserver Alarm - the WebServer service on the node has stopped running HostStats Alarm - the HostStats service on the node has stopped running Metrics write problem Alarm - metric data was not written to the database, or there were issues writing to a logical volume See Alarms Reference. Note the following behavior on the Node Alarms view: Clicking a node's Hostname navigates to the Node Properties View, which provides detailed information about the node.
The left pane of the Node Alarms view displays the available topologies. Click a topology name to view only the nodes in that topology. Buttons: Properties - navigates to the Node Properties View Forget Node - opens the Forget Node dialog to remove the node(s) from active management in this cluster. Services on the node must be stopped before the node can be forgotten. Manage Services - opens the Manage Node Services dialog, which lets you start and stop services on the node Change Topology - opens the Change Node Topology dialog, which lets you change the rack or switch path for a node
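Several of the node alarms listed above fire on fixed thresholds (time skew over 20 seconds, root partition 99% full, /opt/mapr 95% full). A sketch of those checks — the real evaluation happens inside the cluster services, and the function and alarm identifiers here are illustrative:

```python
def node_alarms(time_skew_s, root_used_pct, opt_mapr_used_pct):
    """Evaluate the fixed thresholds quoted in the alarm list above."""
    alarms = []
    if time_skew_s > 20:            # clock out of sync with master CLDB
        alarms.append("TIME_SKEW")
    if root_used_pct >= 99:         # "/" running out of space
        alarms.append("ROOT_PARTITION_FULL")
    if opt_mapr_used_pct >= 95:     # /opt/mapr running out of space
        alarms.append("INSTALLATION_DIRECTORY_FULL")
    return alarms

print(node_alarms(35, 50, 96))  # ['TIME_SKEW', 'INSTALLATION_DIRECTORY_FULL']
print(node_alarms(5, 40, 60))   # []
```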
Volume Alarms The Volume Alarms view displays information about volume alarms in the cluster: Mnt - whether the volume is mounted Vol Name - the name of the volume Snapshot Alarm - last Snapshot Failed alarm Mirror Alarm - last Mirror Failed alarm Replication Alarm - last Data Under-Replicated alarm Data Alarm - last Data Unavailable alarm Vol Advisory Quota Alarm - last Volume Advisory Quota Exceeded alarm Vol Quota Alarm - last Volume Quota Exceeded alarm Clicking any column name sorts data in ascending or descending order by that column. Clicking a volume name displays the Volume Properties dialog. Selecting the Show Unmounted checkbox shows unmounted volumes as well as mounted volumes. Selecting the Filter checkbox displays the Filter toolbar, which provides additional data filtering options. Buttons: New Volume - displays the New Volume dialog Properties - if the checkboxes beside one or more volumes are selected, displays the Volume Properties dialog Mount (Unmount) - if an unmounted volume is selected, mounts it; if a mounted volume is selected, unmounts it Remove - if the checkboxes beside one or more volumes are selected, displays the Remove Volume dialog Start Mirroring - if a mirror volume is selected, starts the mirror sync process Snapshots - if the checkboxes beside one or more volumes are selected, displays the Snapshots for Volume dialog New Snapshot - if the checkboxes beside one or more volumes are selected, displays the Snapshot Name dialog
User/Group Alarms The User/Group Alarms view displays information about user and group quota alarms in the cluster: Name - the name of the user or group User Advisory Quota Alarm - the last Advisory Quota Exceeded alarm User Quota Alarm - the last Quota Exceeded alarm
Buttons: Edit Properties - opens a User Properties dialog box that lets you change user properties and clear alarms.
Alerts The Alerts dialog lets you specify which alarms cause a notification event and where email notifications are sent. Fields:
Alarm Name - select the alarm to configure Standard Notification - send notification to the default for the alarm type (the cluster administrator or volume creator, for example) Additional Email Address - specify an additional custom email address to receive notifications for the alarm type
Buttons: OK - save changes and exit Cancel - exit without saving changes
System Settings Views The System Settings view group provides the following views: Email Addresses - specify MapR user email addresses Permissions - give permissions to users Quota Defaults - settings for default quotas in the cluster Balancer Settings - settings to configure the disk space and role replication balancers on the cluster SMTP - settings for sending email from MapR HTTP - settings for accessing the MapR Control System via a browser (this view is only available in versions 2.1 through 3.0.2) Manage Licenses - MapR license settings Metrics Database - settings for the MapR Metrics MySQL database
Email Addresses The Configure Email Addresses dialog lets you specify whether MapR gets user email addresses from an LDAP directory, or uses a company domain: Use Company Domain - specify a domain to append after each username to determine each user's email address Use LDAP - obtain each user's email address from an LDAP server
Buttons: OK - save changes and exit Cancel - exit without saving changes
Permissions The User Permissions dialog lets you grant specific cluster permissions to particular users and groups. User/Group field - the user or group to which permissions are to be granted (one user or group per row) Permissions field - the permissions to grant to the user or group (see the Permissions table below) Delete button - deletes the current row [ + Add Permission ] - adds a new row
Cluster Permissions The following table lists the actions a user can perform on a cluster, and the corresponding codes used in the cluster ACL: Code
Allowed Action
login
Log in to the MapR Control System, use the API and command-line interface, read access on cluster and volumes
ss
Start/stop services
cv
Create volumes
a
Administrative access (can edit and view ACLs, but cannot perform cluster operations)
fc
Full control over the cluster (this enables all cluster-related administrative options with the exception of changing the cluster ACLs)
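The ACL codes in the table above are typically written as user:code,code strings (the format used by the maprcli acl commands, e.g. jsmith:cv,fc). A small sketch of validating such a string against the table — the parser itself is hypothetical:

```python
# The cluster ACL codes from the table above (descriptions abridged).
CLUSTER_ACL_CODES = {
    "login": "log in to MCS / API / CLI, read access on cluster and volumes",
    "ss": "start/stop services",
    "cv": "create volumes",
    "a": "administrative access (view/edit ACLs)",
    "fc": "full control (except changing cluster ACLs)",
}

def parse_acl(spec):
    """Parse a 'user:code,code' string; unknown codes raise."""
    user, _, codes = spec.partition(":")
    granted = codes.split(",")
    for code in granted:
        if code not in CLUSTER_ACL_CODES:
            raise ValueError("unknown ACL code: " + code)
    return user, granted

print(parse_acl("jsmith:cv,fc"))  # ('jsmith', ['cv', 'fc'])
```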
Buttons: OK - save changes and exit Cancel - exit without saving changes
Quota Defaults The Configure Quota Defaults dialog lets you set the default quotas that apply to users and groups. The User Quota Defaults section contains the following fields: Default User Advisory Quota - if selected, sets the advisory quota that applies to all users without an explicit advisory quota. Default User Total Quota - if selected, sets the advisory quota that applies to all users without an explicit total quota. The Group Quota Defaults section contains the following fields: Default Group Advisory Quota - if selected, sets the advisory quota that applies to all groups without an explicit advisory quota. Default Group Total Quota - if selected, sets the advisory quota that applies to all groups without an explicit total quota. Buttons: OK - saves the settings Cancel - exits without saving the settings
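The default-quota semantics above reduce to a fallback plus two thresholds: an explicit per-user quota overrides the cluster default, an advisory quota only warns, and the total (hard) quota is the binding limit. A sketch with hypothetical helper names:

```python
def effective_quota(explicit_quota, default_quota):
    """A user's quota: the explicit value if set, else the cluster-wide
    default from the dialog above."""
    return explicit_quota if explicit_quota is not None else default_quota

def quota_state(used, advisory, hard):
    """Advisory quota raises a warning alarm; the total quota is the
    hard limit. 0/None means no quota of that kind is set."""
    if hard and used > hard:
        return "QUOTA_EXCEEDED"
    if advisory and used > advisory:
        return "ADVISORY_QUOTA_EXCEEDED"
    return "OK"

print(effective_quota(None, 100))   # 100 (falls back to the default)
print(quota_state(120, 100, 150))   # ADVISORY_QUOTA_EXCEEDED
print(quota_state(160, 100, 150))   # QUOTA_EXCEEDED
```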
SMTP The Configure SMTP dialog lets you configure the email account from which the MapR cluster sends alerts and other notifications.
The Configure Sending Email (SMTP) dialog contains the following fields: Provider - selects Gmail or another email provider; if you select Gmail, the other fields are partially populated to help you with the configuration SMTP Server - specifies the SMTP server to use when sending email The server requires an encrypted connection (SSL) - if selected, uses SSL when connecting to the SMTP server SMTP Port - the port to use on the SMTP server Full Name - the name used in the From field when the cluster sends an alert email Email Address - the email address used in the From field when the cluster sends an alert email Username - the username used to log on to the email account the cluster will use to send email SMTP Password - the password to use when sending email Buttons: OK - saves the settings Cancel - exits without saving the settings
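To illustrate how the Full Name and Email Address fields end up in an alert, here is a sketch that builds such a message with Python's standard email library. The addresses and subject are invented, and the send step is shown only as a comment since it needs a live SMTP server:

```python
import smtplib
from email.message import EmailMessage

def build_alert(full_name, from_addr, to_addr, subject, body):
    """Build an alert email using the From identity configured in
    the SMTP dialog (sketch; not a MapR API)."""
    msg = EmailMessage()
    msg["From"] = f"{full_name} <{from_addr}>"   # Full Name + Email Address
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_alert("MapR Cluster", "cluster@example.com",
                  "admin@example.com", "NODE_ALARM_TIME_SKEW",
                  "Clock skew exceeds 20 seconds on node5.")
print(msg["From"])  # MapR Cluster <cluster@example.com>

# Sending would use SSL when the dialog's SSL option is selected:
# with smtplib.SMTP_SSL("smtp.example.com", 465) as s:
#     s.login("username", "password")
#     s.send_message(msg)
```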
Balancer Settings The Balancer Configuration dialog enables you to configure the behavior of the disk space and role replication balancers.
The Balancer Configuration dialog has the following elements: Balancer Controls: Contains toggle settings for the Disk Balancer and the Role Balancer. Set a balancer's toggle to ON to enable that balancer. Disk Balancer Settings: Configures the behavior of the disk balancer. Disk Balancer Presets: These preconfigured settings enable quick setting of policies for Rapid, Moderate, and Relaxed disk balancing. The default setting is Moderate. Threshold: Move this slider to set a percentage usage of a storage pool that makes the storage pool eligible for rebalancing operations. The default value for this setting is 70%. % Concurrent Disk Rebalancers: Move this slider to set the maximum percentage of data that is actively being rebalanced at a given time. Rebalancing operations will not affect more data than the value of this slider. The default value for this setting is 10%. Role Balancer Settings: Configures the behavior of the role balancer. Role Balancer Presets: These preconfigured settings enable quick setting of policies for Rapid, Moderate, and Relaxed role balancing. The default setting is Moderate. % Concurrent Role Rebalancers: Move this slider to set the maximum percentage of data that is actively being rebalanced at a given time. Role rebalancing operations will not affect more data than the value of this slider. The default value for this setting is 10%. Delay For Active Data: Move this slider to set a time frame in seconds. Role rebalancing operations skip any data that was active within the specified time frame. The default value for this setting is 600 seconds. Buttons: OK - saves the settings Cancel - exits without saving the settings
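The Threshold slider's effect can be stated in one line: a storage pool becomes eligible for rebalancing once its usage exceeds the threshold. A sketch using the dialog's 70% default (pool names and the strict-greater-than comparison are assumptions):

```python
def eligible_pools(pool_usage_pct, threshold=70):
    """Storage pools whose usage exceeds the Threshold slider value
    (default 70%) are eligible for disk rebalancing."""
    return [name for name, pct in pool_usage_pct.items() if pct > threshold]

usage = {"sp1": 82, "sp2": 65, "sp3": 71}
print(eligible_pools(usage))  # ['sp1', 'sp3']
```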
HTTP (versions 2.1 through 3.0.2 only) In versions 2.1 through 3.0.2, the MCS includes a Configure HTTP dialog that lets you configure access to the MapR Control System via HTTP and HTTPS. As an alternative, you can edit web.conf, which resides in the /opt/mapr/conf directory. In version 3.1 and later, the HTTP settings can only be modified by editing web.conf directly. The sections in the Configure HTTP dialog let you enable HTTP and HTTPS access, and set the session timeout, respectively: Enable HTTP Access - if selected, configure HTTP access with the following field: HTTP Port - the port on which to connect to the MapR Control System via HTTP Enable HTTPS Access - if selected, configure HTTPS access with the following fields: HTTPS Port - the port on which to connect to the MapR Control System via HTTPS HTTPS Keystore Path - a path to the HTTPS keystore HTTPS Keystore Password - a password to access the HTTPS keystore HTTPS Key Password - a password to access the HTTPS key Session Timeout - the number of seconds before an idle session times out. Buttons: OK - saves the settings Cancel - exits without saving the settings
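Since version 3.1 and later requires editing web.conf directly, a small properties-style parser shows how such a file can be read. The key names in the sample are illustrative — check your cluster's /opt/mapr/conf/web.conf for the actual ones:

```python
# Sample in the key=value style of /opt/mapr/conf/web.conf
# (key names are hypothetical).
SAMPLE = """\
# HTTP settings
mapr.webui.http.port=8080
mapr.webui.https.port=8443
mapr.webui.timeout=1800
"""

def parse_conf(text):
    """Parse key=value lines, skipping blanks and # comments."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

print(parse_conf(SAMPLE)["mapr.webui.https.port"])  # 8443
```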
Manage Licenses The License Management dialog lets you add and activate licenses for the cluster, and displays the Cluster ID and the following information about existing licenses: Name - the name of each license Issued - the date each license was issued Expires - the expiration date of each license Nodes - the nodes to which each license applies
Fields: Cluster ID - the unique identifier needed for licensing the cluster Buttons: Add Licenses via Web - navigates to the MapR licensing form online Add License via Upload - alternate licensing mechanism: upload via browser Add License via Copy/Paste - alternate licensing mechanism: paste license key Apply Licenses - validates the licenses and applies them to the cluster Cancel - closes the dialog.
Metrics The Configure Metrics Database dialog enables you to specify the location and login credentials of the MySQL server that stores information for Job Metrics.
Fields: URL - the hostname and port of the machine running the MySQL server Username - the username for the MySQL metrics database Password - the password for the MySQL metrics database Buttons: OK - saves the MySQL information in the fields Cancel - closes the dialog
Other Views In addition to the MapR Control System views, there are views that display detailed information about the system: CLDB View - information about the container location database HBase View - information about HBase on the cluster JobTracker View - information about the JobTracker Nagios View - information about the Nagios configuration script JobHistory Server View - information about MapReduce jobs after their ApplicationMaster terminates With the exception of the MapR Launchpad, the above views include the following buttons:
Refresh button - refreshes the view Popout button - opens the view in a new browser window
CLDB View The CLDB view provides information about the Container Location Database (CLDB). The CLDB is a management service that keeps track of container locations and the root of volumes. To display the CLDB view, open the MapR Control System and click CLDB in the navigation pane. The following table describes the fields on the CLDB view: Field
Description
CLDB Mode
The CLDB node can be in the following modes: MASTER_READ_WRITE, SLAVE_READ_ONLY, or /
CLDB BuildVersion
Lists the build version.
CLDB Status
Can be RUNNING, or
Cluster Capacity
Lists the storage capacity for the cluster.
Cluster Used
Lists the amount of storage in use.
Cluster Available
Lists the amount of available storage.
Active FileServers
A list of FileServers, and the following information about each: ServerID (Hex) - The server's ID in hexadecimal notation. ServerID - The server's ID in decimal notation. HostPort - The IP address of the host HostName - The hostname assigned to that file server. Network Location - The network topology for that file server. Last Heartbeat (s) - The timestamp for the last received heartbeat. State - Can be ACTIVE or Capacity (MB) - Total storage capacity on this server. Used (MB) - Storage used on this server. Available (MB) - Storage available on this server. In Transit (MB) -
Active NFS Servers
A list of NFS servers, and the following information about each: ServerID (Hex) - The server's ID in hexadecimal notation. ServerID - The server's ID in decimal notation. HostPort - The IP address of the host HostName - The hostname assigned to that file server. Last Heartbeat (s) - The timestamp for the last received heartbeat. State - Can be Active or
Volumes
A list of volumes, and the following information about each: Volume Name Mount Point - The path where the volume is mounted over NFS. Mounted - Can be Y or N. ReadOnly - Can be Y or N. Volume ID - The Volume ID Volume Topology - The path describing the topology to which the volume is assigned. Quota - The total size of the volume's quota. A quota of 0 means no quota is assigned. Advisory Quota - The usage level that triggers a disk usage warning. Used - Total size of data written to the volume LogicalUsed - Actual size of data written to the volume Root Container ID - The ID of the root container. Replication Guaranteed Replication -
Accounting Entities
A list of users and groups, and the following information about each: AE Name AE Type AE Quota AE Advisory Quota AE Used -
Mirrors
A list of mirrors, and the following information about each: Mirror Volume Name Mirror ID Mirror NextID Mirror Status Last Successful Mirror Time Mirror SrcVolume Mirror SrcRootContainerID Mirror SrcClusterName Mirror SrcSnapshot Mirror DataGenerator Volume -
Snapshots
A list of snapshots, and the following information about each: Snapshot ID RW Volume ID Snapshot Name Root Container ID Snapshot Size Snapshot InProgress -
Containers
A list of containers, and the following information about each: Container ID Volume ID Latest Epoch SizeMB Container Master Location Container Locations Inactive Locations Unused Locations Replication Type -
Snapshot Containers
A list of snapshot containers, and the following information about each: Snapshot Container ID - unique ID of the container Snapshot ID - ID of the snapshot corresponding to the container RW Container ID - corresponding source container ID Latest Epoch SizeMB - container size, in MB Container Master Location - location of the container's master replica Container Locations Inactive Locations -
HBase View The HBase View provides information about HBase on the cluster.
Field
Description
Local Logs
A link to the HBase Local Logs View
Thread Dump
A link to the HBase Thread Dump View
Log Level
A link to the HBase Log Level View, a form for getting/setting the log level
Master Attributes
A list of attributes, and the following information about each: Attribute Name Value Description -
Catalog Tables
A list of tables, and the following information about each: Table Description -
User Tables Region Servers
A list of region servers in the cluster, and the following information about each: Address Start Code Load Total -
HBase Local Logs View The HBase Local Logs view displays a list of the local HBase logs. Clicking a log name displays the contents of the log. Each log name can be copied and pasted into the HBase Log Level View to get or set the current log level.
HBase Log Level View The HBase Log Level View is a form for getting and setting log levels that determine which information gets logged. The Log field accepts a log name (which can be copied from the HBase Local Logs View and pasted). The Level field takes any of the following valid log levels: ALL TRACE DEBUG INFO WARN ERROR OFF
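The levels listed above form an ordered hierarchy (standard log4j semantics): a message is emitted when its level is at or above the logger's configured level, and OFF suppresses everything. A sketch of that filtering rule:

```python
# The HBase log levels from the form above, most to least verbose.
LEVELS = ["ALL", "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "OFF"]

def is_logged(message_level, logger_level):
    """True if a message at message_level is emitted by a logger
    configured at logger_level (log4j-style threshold filtering)."""
    if logger_level == "OFF":
        return False
    return LEVELS.index(message_level) >= LEVELS.index(logger_level)

print(is_logged("WARN", "INFO"))   # True
print(is_logged("DEBUG", "WARN"))  # False
```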
HBase Thread Dump View The HBase Thread Dump View displays a dump of the HBase thread. Example:
Process Thread Dump: 40 active threads
Thread 318 (1962516546@qtp-879081272-3):
  State: RUNNABLE
  Blocked count: 8
  Waited count: 32
  Stack:
    sun.management.ThreadImpl.getThreadInfo0(Native Method)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:147)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:123)
    org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:149)
    org.apache.hadoop.http.HttpServer$StackServlet.doGet(HttpServer.java:695)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:826)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    org.mortbay.jetty.Server.handle(Server.java:326)
    org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
Thread 50 (perfnode51.perf.lab:60000-CatalogJanitor):
  State: TIMED_WAITING
  Blocked count: 1081
  Waited count: 1350
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:91)
    org.apache.hadoop.hbase.Chore.run(Chore.java:74)
Thread 49 (perfnode51.perf.lab:60000-BalancerChore):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 270
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:91)
    org.apache.hadoop.hbase.Chore.run(Chore.java:74)
Thread 48 (MASTER_OPEN_REGION-perfnode51.perf.lab:60000-1):
  State: WAITING
  Blocked count: 2
  Waited count: 3
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6d1cf4e5
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
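When scanning a long dump like the one above, a quick tally of thread states helps spot stuck threads. A sketch that matches the "State: NAME" lines (the abbreviated sample dump here is for illustration):

```python
import re
from collections import Counter

def thread_states(dump):
    """Tally the 'State: NAME' lines in a thread dump."""
    return Counter(re.findall(r"State: (\w+)", dump))

dump = """\
Thread 318 (qtp-3): State: RUNNABLE
Thread 50 (CatalogJanitor): State: TIMED_WAITING
Thread 49 (BalancerChore): State: TIMED_WAITING
Thread 48 (OPEN_REGION-1): State: WAITING
"""
print(dict(thread_states(dump)))
```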
JobTracker View The JobTracker view displays the JobTracker's State, Started, Version, Compiled, and Identifier values, along with the following fields: Field
Description
Cluster Summary
The heapsize, and the following information about the cluster: Running Map Tasks Running Reduce Tasks Total Submissions Nodes Occupied Map Slots Occupied Reduce Slots Reserved Map Slots Reserved Reduce Slots Map Task Capacity Reduce Task Capacity Avg. Tasks/Node Blacklisted Nodes Excluded Nodes MapTask Prefetch Capacity -
Scheduling Information
A list of queues, and the following information about each: Queue name State Scheduling Information -
Filter
A field for filtering results by Job ID, Priority, User, or Name
Running Jobs
A list of running MapReduce jobs, and the following information about each: JobId Priority User Name Start Time Map % Complete Current Map Slots Failed MapAttempts MapAttempt Time Avg/Max Cumulative Map CPU Current Map PMem Reduce % Complete Current Reduce Slots Failed ReduceAttempts ReduceAttempt Time Avg/Max Cumulative Reduce CPU Current Reduce PMem -
Completed Jobs
A list of completed MapReduce jobs, and the following information about each: JobId Priority User Name Start Time Total Time Maps Launched Map Total Failed MapAttempts MapAttempt Time Avg/Max Cumulative Map CPU Reducers Launched Reduce Total Failed ReduceAttempts ReduceAttempt Time Avg/Max Cumulative Reduce CPU Cumulative Reduce PMem Vaidya Reports -
Retired Jobs
A list of retired MapReduce jobs, and the following information about each: JobId Priority User Name State Start Time Finish Time Map % Complete Reduce % Complete Job Scheduling Information Diagnostic Info -
Local Logs
A link to the local logs
JobTracker Configuration
A link to a page containing Hadoop JobTracker configuration values
JobTracker Configuration View Field
Default
fs.automatic.close
TRUE
fs.checkpoint.dir
${hadoop.tmp.dir}/dfs/namesecondary
fs.checkpoint.edits.dir
${fs.checkpoint.dir}
fs.checkpoint.period
3600
fs.checkpoint.size
67108864
fs.default.name
maprfs:///
fs.file.impl
org.apache.hadoop.fs.LocalFileSystem
fs.ftp.impl
org.apache.hadoop.fs.ftp.FTPFileSystem
fs.har.impl
org.apache.hadoop.fs.HarFileSystem
fs.har.impl.disable.cache
TRUE
fs.hdfs.impl
org.apache.hadoop.hdfs.DistributedFileSystem
fs.hftp.impl
org.apache.hadoop.hdfs.HftpFileSystem
fs.hsftp.impl
org.apache.hadoop.hdfs.HsftpFileSystem
fs.kfs.impl
org.apache.hadoop.fs.kfs.KosmosFileSystem
fs.maprfs.impl
com.mapr.fs.MapRFileSystem
fs.ramfs.impl
org.apache.hadoop.fs.InMemoryFileSystem
fs.s3.block.size
67108864
fs.s3.buffer.dir
${hadoop.tmp.dir}/s3
fs.s3.impl
org.apache.hadoop.fs.s3.S3FileSystem
fs.s3.maxRetries
4
fs.s3.sleepTimeSeconds
10
fs.s3n.block.size
67108864
fs.s3n.impl
org.apache.hadoop.fs.s3native.NativeS3FileSystem
fs.trash.interval
0
hadoop.job.history.location
file:////opt/mapr/hadoop/hadoop-0.20.2/bin/../logs/history
hadoop.logfile.count
10
hadoop.logfile.size
10000000
hadoop.native.lib
TRUE
hadoop.proxyuser.root.groups
root
hadoop.proxyuser.root.hosts
(none)
hadoop.rpc.socket.factory.class.default
org.apache.hadoop.net.StandardSocketFactory
hadoop.security.authentication
simple
hadoop.security.authorization
FALSE
hadoop.security.group.mapping
org.apache.hadoop.security.ShellBasedUnixGroupsMapping
hadoop.tmp.dir
/tmp/hadoop-${user.name}
hadoop.util.hash.type
murmur
io.bytes.per.checksum
512
io.compression.codecs
org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.
io.file.buffer.size
8192
io.map.index.skip
0
io.mapfile.bloom.error.rate
0.005
io.mapfile.bloom.size
1048576
io.seqfile.compress.blocksize
1000000
io.seqfile.lazydecompress
TRUE
io.seqfile.sorter.recordlimit
1000000
io.serializations
org.apache.hadoop.io.serializer.WritableSerialization
io.skip.checksum.errors
FALSE
io.sort.factor
256
io.sort.record.percent
0.17
io.sort.spill.percent
0.99
ipc.client.connect.max.retries
10
ipc.client.connection.maxidletime
10000
ipc.client.idlethreshold
4000
ipc.client.kill.max
10
ipc.client.tcpnodelay
FALSE
ipc.server.listen.queue.size
128
ipc.server.tcpnodelay
FALSE
job.end.retry.attempts
0
job.end.retry.interval
30000
jobclient.completion.poll.interval
5000
jobclient.output.filter
FAILED
jobclient.progress.monitor.poll.interval
1000
keep.failed.task.files
FALSE
local.cache.size
10737418240
map.sort.class
org.apache.hadoop.util.QuickSort
mapr.localoutput.dir
output
mapr.localspill.dir
spill
mapr.localvolumes.path
/var/mapr/local
mapred.acls.enabled
FALSE
mapred.child.oom_adj
10
mapred.child.renice
10
mapred.child.taskset
TRUE
mapred.child.tmp
./tmp
mapred.cluster.ephemeral.tasks.memory.limit.mb
200
mapred.compress.map.output
false
mapred.fairscheduler.allocation.file
conf/pools.xml
mapred.fairscheduler.assignmultiple
true
mapred.fairscheduler.eventlog.enabled
false
mapred.fairscheduler.smalljob.max.inputsize
10737418240
mapred.fairscheduler.smalljob.max.maps
10
mapred.fairscheduler.smalljob.max.reducer.inputsize
1073741824
mapred.fairscheduler.smalljob.max.reducers
10
mapred.fairscheduler.smalljob.schedule.enable
true
mapred.healthChecker.interval
60000
mapred.healthChecker.script.timeout
600000
mapred.inmem.merge.threshold
1000
mapred.job.queue.name
default
mapred.job.reduce.input.buffer.percent
0
mapred.job.reuse.jvm.num.tasks
-1
mapred.job.shuffle.input.buffer.percent
0.7
mapred.job.shuffle.merge.percent
0.66
mapred.job.tracker
:9001
mapred.job.tracker.handler.count
10
mapred.job.tracker.history.completed.location
/var/mapr/cluster/mapred/jobTracker/history/done
mapred.job.tracker.http.address
0.0.0.0:50030
mapred.job.tracker.persist.jobstatus.active
false
mapred.job.tracker.persist.jobstatus.dir
/var/mapr/cluster/mapred/jobTracker/jobsInfo
mapred.job.tracker.persist.jobstatus.hours
0
mapred.jobtracker.completeuserjobs.maximum
100
mapred.jobtracker.instrumentation
org.apache.hadoop.mapred.JobTrackerMetricsInst
mapred.jobtracker.job.history.block.size
3145728
mapred.jobtracker.jobhistory.lru.cache.size
5
mapred.jobtracker.maxtasks.per.job
-1
mapred.jobtracker.port
9001
mapred.jobtracker.restart.recover
true
mapred.jobtracker.retiredjobs.cache.size
1000
mapred.jobtracker.taskScheduler
org.apache.hadoop.mapred.FairScheduler
mapred.line.input.format.linespermap
1
mapred.local.dir
${hadoop.tmp.dir}/mapred/local
mapred.local.dir.minspacekill
0
mapred.local.dir.minspacestart
0
mapred.map.child.java.opts
-XX:ErrorFile=/opt/cores/mapreduce_java_error%p.log
mapred.map.max.attempts
4
mapred.map.output.compression.codec
org.apache.hadoop.io.compress.DefaultCodec
mapred.map.tasks
2
mapred.map.tasks.speculative.execution
true
mapred.max.maps.per.node
-1
mapred.max.reduces.per.node
-1
mapred.max.tracker.blacklists
4
mapred.max.tracker.failures
4
mapred.merge.recordsBeforeProgress
10000
mapred.min.split.size
0
mapred.output.compress
false
mapred.output.compression.codec
org.apache.hadoop.io.compress.DefaultCodec
mapred.output.compression.type
RECORD
mapred.queue.names
default
mapred.reduce.child.java.opts
-XX:ErrorFile=/opt/cores/mapreduce_java_error%p.log
mapred.reduce.copy.backoff
300
mapred.reduce.max.attempts
4
mapred.reduce.parallel.copies
12
mapred.reduce.slowstart.completed.maps
0.95
mapred.reduce.tasks
1
mapred.reduce.tasks.speculative.execution
false
mapred.running.map.limit
-1
mapred.running.reduce.limit
-1
mapred.skip.attempts.to.start.skipping
2
mapred.skip.map.auto.incr.proc.count
true
mapred.skip.map.max.skip.records
0
mapred.skip.reduce.auto.incr.proc.count
true
mapred.skip.reduce.max.skip.groups
0
mapred.submit.replication
10
mapred.system.dir
/var/mapr/cluster/mapred/jobTracker/system
mapred.task.cache.levels
2
mapred.task.profile
false
mapred.task.profile.maps
0-2
mapred.task.profile.reduces
0-2
mapred.task.timeout
600000
mapred.task.tracker.http.address
0.0.0.0:50060
mapred.task.tracker.report.address
127.0.0.1:0
mapred.task.tracker.task-controller
org.apache.hadoop.mapred.DefaultTaskController
mapred.tasktracker.dns.interface
default
mapred.tasktracker.dns.nameserver
default
mapred.tasktracker.ephemeral.tasks.maximum
1
mapred.tasktracker.ephemeral.tasks.timeout
10000
mapred.tasktracker.ephemeral.tasks.ulimit
4294967296
mapred.tasktracker.expiry.interval
600000
mapred.tasktracker.indexcache.mb
10
mapred.tasktracker.instrumentation
org.apache.hadoop.mapred.TaskTrackerMetricsInst
mapred.tasktracker.map.tasks.maximum
(CPUS > 2) ? (CPUS * 0.75) : 1
mapred.tasktracker.reduce.tasks.maximum
(CPUS > 2) ? (CPUS * 0.50): 1
mapred.tasktracker.taskmemorymanager.monitoring-interval
5000
mapred.tasktracker.tasks.sleeptime-before-sigkill
5000
mapred.temp.dir
${hadoop.tmp.dir}/mapred/temp
mapred.userlog.limit.kb
0
mapred.userlog.retain.hours
24
mapreduce.heartbeat.10
300
mapreduce.heartbeat.100
1000
mapreduce.heartbeat.1000
10000
mapreduce.heartbeat.10000
100000
mapreduce.job.acl-view-job
(none)
mapreduce.job.complete.cancel.delegation.tokens
true
mapreduce.job.split.metainfo.maxsize
10000000
mapreduce.jobtracker.recovery.dir
/var/mapr/cluster/mapred/jobTracker/recovery
mapreduce.jobtracker.recovery.maxtime
120
mapreduce.jobtracker.staging.root.dir
/var/mapr/cluster/mapred/jobTracker/staging
mapreduce.maprfs.use.compression
true
mapreduce.reduce.input.limit
-1
mapreduce.tasktracker.outofband.heartbeat
false
mapreduce.tasktracker.prefetch.maptasks
1
mapreduce.use.fastreduce
false
mapreduce.use.maprfs
true
tasktracker.http.threads
2
topology.node.switch.mapping.impl
org.apache.hadoop.net.ScriptBasedMapping
topology.script.number.args
100
webinterface.private.actions
FALSE
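The values above are the defaults. To override one on a cluster, add a property element to the corresponding configuration file (core-site.xml for fs.*, io.*, and hadoop.* parameters; mapred-site.xml for mapred.* and mapreduce.* parameters) and restart the affected service. A sketch of an override in mapred-site.xml; the values shown are illustrative, not recommendations:

```xml
<!-- Illustrative overrides; place inside the <configuration> element
     of mapred-site.xml and restart the TaskTracker. -->
<property>
  <name>io.sort.factor</name>
  <value>512</value>
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
```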
Nagios View
The Nagios view displays a dialog containing a Nagios configuration script.
Example:
############# Commands #############
define command {
    command_name check_fileserver_proc
    command_line $USER1$/check_tcp -p 5660
}
define command {
    command_name check_cldb_proc
    command_line $USER1$/check_tcp -p 7222
}
define command {
    command_name check_jobtracker_proc
    command_line $USER1$/check_tcp -p 50030
}
define command {
    command_name check_tasktracker_proc
    command_line $USER1$/check_tcp -p 50060
}
define command {
    command_name check_nfs_proc
    command_line $USER1$/check_tcp -p 2049
}
define command {
    command_name check_hbmaster_proc
    command_line $USER1$/check_tcp -p 60000
}
define command {
    command_name check_hbregionserver_proc
    command_line $USER1$/check_tcp -p 60020
}
define command {
    command_name check_webserver_proc
    command_line $USER1$/check_tcp -p 8443
}
################# HOST: perfnode51.perf.lab ###############
define host {
    use linux-server
    host_name perfnode51.perf.lab
    address 10.10.30.51
    check_command check-host-alive
}
################# HOST: perfnode52.perf.lab ###############
define host {
    use linux-server
    host_name perfnode52.perf.lab
    address 10.10.30.52
    check_command check-host-alive
}
################# HOST: perfnode53.perf.lab ###############
define host {
    use linux-server
    host_name perfnode53.perf.lab
    address 10.10.30.53
    check_command check-host-alive
}
################# HOST: perfnode54.perf.lab ###############
define host {
    use linux-server
    host_name perfnode54.perf.lab
    address 10.10.30.54
    check_command check-host-alive
}
################# HOST: perfnode55.perf.lab ###############
define host {
    use linux-server
    host_name perfnode55.perf.lab
    address 10.10.30.55
    check_command check-host-alive
}
################# HOST: perfnode56.perf.lab ###############
define host {
    use linux-server
    host_name perfnode56.perf.lab
    address 10.10.30.56
    check_command check-host-alive
}
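The generated script defines check commands and hosts but no service objects, so Nagios will only ping the hosts until services are added. A sketch of a service definition that runs the CLDB port check against one host; the generic-service template name is an assumption from a stock Nagios install:

```
define service {
    use                 generic-service
    host_name           perfnode51.perf.lab
    service_description CLDB
    check_command       check_cldb_proc
}
```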
Node-Related Dialog Boxes
This page describes the node-related dialog boxes, which are accessible in most views that list node details. This includes the following dialog boxes:
Forget Node
Manage Node Services
Change Node Topology
Forget Node
The Forget Node dialog confirms that you wish to remove a node from active management in this cluster. Services on the node must be stopped before the node can be forgotten.
Manage Node Services
The Manage Node Services dialog lets you start and stop services on one or more nodes.
The Service Changes section contains a dropdown menu for each service:
No change - leave the service running if it is running, or stopped if it is stopped
Start - start the service
Stop - stop the service
Restart - restart the service
Buttons:
OK - starts and stops the selected services as specified by the dropdown menus
Cancel - returns to the Node Properties view without starting or stopping any services
You can also start and stop services in the Manage Node Services pane of the Node Properties view.
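Services can also be started and stopped from the command line with maprcli. A sketch using the maprcli node services command to restart NFS on one node; the hostname is illustrative, and you should verify the exact flags with maprcli node services -help on your cluster:

```
maprcli node services -nodes perfnode51.perf.lab -nfs restart
```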
Change Node Topology
The Change Node Topology dialog lets you change the rack or switch path for one or more nodes.
The Change Node Topology dialog consists of:
Nodes affected (the node or nodes to be moved, as specified in the Nodes view)
A field with a dropdown menu for the new node topology path
The Change Node Topology dialog contains the following buttons:
OK - changes the node topology
Cancel - returns to the Nodes view without changing the node topology
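Topology changes can likewise be scripted with maprcli node move, which takes server IDs rather than hostnames. A sketch; the server ID and topology path shown are placeholders, and real IDs can be listed with maprcli node list -columns id:

```
maprcli node move -serverids 5150963123577107541 -topology /rack2
```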
Hadoop Commands
All Hadoop commands are invoked by the bin/hadoop script. When you run these commands, you can specify the MapReduce mode in two different ways:
1. Use the hadoop keyword and specify the mode explicitly, where classic mode refers to Hadoop 1.x and yarn mode refers to Hadoop 2.x.
2. Use the hadoop1 or hadoop2 keyword and do not specify the mode.
For example, the following commands are equivalent:
root@testnode100:/opt/mapr/conf/conf.d# hadoop2 conf | grep mapreduce.map.memory.mb
mapreduce.map.memory.mb1024
root@testnode100:/opt/mapr/conf/conf.d# hadoop -yarn conf | grep mapreduce.map.memory.mb
mapreduce.map.memory.mb1024
The following syntax summary applies to all commands.
Syntax Summary
hadoop [-yarn|-classic] [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]
hadoop1 [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]
hadoop2 [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]
Hadoop has an option parsing framework that parses the generic options before running the specified command class.
COMMAND_OPTION
Description
-mode
Specifies the Hadoop version: yarn or classic. Alternatively, you can use a hadoop1 or hadoop2 command without setting the mode. If you use a hadoop command (instead of hadoop1 or hadoop2) and do not set the mode, the command runs in the mode set by the MAPR_MAPREDUCE_MODE environment variable. If this variable is not set, the command runs in the mode set in the hadoop version file on the node (default_mode = yarn or classic).
--config confdir
Overrides the default configuration directory. The default is ${HADOOP_HOME}/conf.
COMMAND
Various commands with their options are described in the following sections.
GENERIC_OPTIONS
The common set of options supported by multiple commands.
COMMAND_OPTIONS
Various command options are described in the following sections.
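As a concrete instance of the syntax summary above, the following two invocations are equivalent ways to list jobs in classic mode against an explicit configuration directory; the path is illustrative:

```
hadoop -classic --config /opt/mapr/hadoop/hadoop-0.20.2/conf job -list
hadoop1 --config /opt/mapr/hadoop/hadoop-0.20.2/conf job -list
```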
Useful Information
Running the hadoop script without any arguments prints the help description for all commands.
Supported Commands for Hadoop 1.x
MapR supports the following hadoop commands for Hadoop 1.x:
Command
Description
archive -archiveName NAME *
The hadoop archive command creates a Hadoop archive, a file that contains other files. A Hadoop archive always has a *.har extension.
classpath
The hadoop classpath command prints the class path needed to access the Hadoop JAR and the required libraries.
conf
The hadoop conf command prints the configuration information for the current node.
daemonlog
The hadoop daemonlog command may be used to get or set the log level of Hadoop daemons.
distcp