Factory to create client IPC classes.
yarn.ipc.client.factory.class
Type of serialization to use.
yarn.ipc.serializer.type
protocolbuffers
Factory to create server IPC classes.
yarn.ipc.server.factory.class
Factory to create IPC exceptions.
yarn.ipc.exception.factory.class
Factory to create serializable records.
yarn.ipc.record.factory.class
RPC class implementation.
yarn.ipc.rpc.class
org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
The address of the applications manager interface in the RM.
yarn.resourcemanager.address
0.0.0.0:8032
The number of threads used to handle applications manager requests.
yarn.resourcemanager.client.thread-count
50
The expiry interval for application master reporting.
yarn.am.liveness-monitor.expiry-interval-ms
600000
The Kerberos principal for the resource manager.
yarn.resourcemanager.principal
The address of the scheduler interface.
yarn.resourcemanager.scheduler.address
0.0.0.0:8030
Number of threads to handle scheduler interface requests.
yarn.resourcemanager.scheduler.client.thread-count
50
The address of the RM web application.
yarn.resourcemanager.webapp.address
0.0.0.0:8088
The address of the resource tracker interface in the RM (used by NodeManagers to register and send heartbeats).
yarn.resourcemanager.resource-tracker.address
0.0.0.0:8031
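These defaults bind the ResourceManager interfaces to 0.0.0.0 on the ports above. On a real cluster they are usually overridden in yarn-site.xml so clients and NodeManagers can reach the RM host; a minimal sketch, assuming a hypothetical host rm.example.com:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm.example.com:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>rm.example.com:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>rm.example.com:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>rm.example.com:8088</value>
</property>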
Whether ACLs are enabled.
yarn.acl.enable
true
ACL of who can be admin of the YARN cluster.
yarn.admin.acl
*
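The default of * allows everyone. To restrict administration, a Hadoop ACL takes a comma-separated list of users, then a space, then a comma-separated list of groups; a sketch with hypothetical user and group names:

<property>
  <name>yarn.admin.acl</name>
  <value>yarn,alice hadoop-admins</value>
</property>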
The address of the RM admin interface.
yarn.resourcemanager.admin.address
0.0.0.0:8033
Number of threads used to handle RM admin interface.
yarn.resourcemanager.admin.client.thread-count
1
How often the RM should check that the AM is still alive.
yarn.resourcemanager.amliveliness-monitor.interval-ms
1000
The maximum number of application master retries.
yarn.resourcemanager.am.max-retries
1
How often to check that containers are still alive.
yarn.resourcemanager.container.liveness-monitor.interval-ms
600000
The keytab for the resource manager.
yarn.resourcemanager.keytab
/etc/krb5.keytab
How long to wait until a node manager is considered dead.
yarn.nm.liveness-monitor.expiry-interval-ms
600000
How often to check that node managers are still alive.
yarn.resourcemanager.nm.liveness-monitor.interval-ms
1000
Path to file with nodes to include.
yarn.resourcemanager.nodes.include-path
Path to file with nodes to exclude.
yarn.resourcemanager.nodes.exclude-path
Number of threads to handle resource tracker calls.
yarn.resourcemanager.resource-tracker.client.thread-count
50
The class to use as the resource scheduler.
yarn.resourcemanager.scheduler.class
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
The minimum allocation for every container request at the RM,
in MBs. Memory requests lower than this won't take effect;
they will be allocated this minimum value instead.
yarn.scheduler.minimum-allocation-mb
1024
The maximum allocation for every container request at the RM,
in MBs. Memory requests higher than this won't take effect,
and will get capped to this value.
yarn.scheduler.maximum-allocation-mb
8192
The minimum allocation for every container request at the RM,
in terms of virtual CPU cores. Requests lower than this won't take effect;
they will be allocated this minimum value instead.
yarn.scheduler.minimum-allocation-vcores
1
The maximum allocation for every container request at the RM,
in terms of virtual CPU cores. Requests higher than this won't take effect,
and will get capped to this value.
yarn.scheduler.maximum-allocation-vcores
32
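For example, to bound container sizes between 2 GB and 16 GB of memory on a larger cluster, both limits can be overridden in yarn-site.xml (the values below are illustrative, not recommendations); the vcore bounds are overridden the same way:

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>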
Enable RM to recover state after starting. If true, then
yarn.resourcemanager.store.class must be specified.
yarn.resourcemanager.recovery.enabled
false
The class to use as the persistent store.
yarn.resourcemanager.store.class
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
URI pointing to the location of the FileSystem path where
RM state will be stored. This must be supplied when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
as the value for yarn.resourcemanager.store.class.
yarn.resourcemanager.fs.rm-state-store.uri
${hadoop.tmp.dir}/yarn/system/rmstore
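A minimal sketch of enabling RM recovery with the FileSystemRMStateStore named above, assuming a hypothetical HDFS location for the state store:

<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.fs.rm-state-store.uri</name>
  <value>hdfs://namenode:8020/yarn/system/rmstore</value>
</property>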
The maximum number of completed applications RM keeps.
yarn.resourcemanager.max-completed-applications
10000
Interval at which the delayed token removal thread runs.
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms
30000
Interval for rolling over the master key used to generate
application tokens.
yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs
86400
Interval for rolling over the master key used to generate
container tokens. It is expected to be much greater than
yarn.nm.liveness-monitor.expiry-interval-ms and
yarn.rm.container-allocation.expiry-interval-ms; otherwise the
behavior is undefined.
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs
86400
The address of the container manager in the NM.
yarn.nodemanager.address
0.0.0.0:0
Environment variables that should be forwarded from the NodeManager's environment to the container's.
yarn.nodemanager.admin-env
MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
Environment variables that containers may override rather than use the NodeManager's defaults.
yarn.nodemanager.env-whitelist
JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME
Who will execute (launch) the containers.
yarn.nodemanager.container-executor.class
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
Number of threads the container manager uses.
yarn.nodemanager.container-manager.thread-count
20
Number of threads used in cleanup.
yarn.nodemanager.delete.thread-count
4
Number of seconds after an application finishes before the NodeManager's
DeletionService will delete the application's localized file directory
and log directory.
To diagnose YARN application problems, set this property's value large
enough (for example, to 600 = 10 minutes) to permit examination of these
directories. After changing the property's value, you must restart the
NodeManager for the change to take effect.
The roots of YARN applications' work directories are configurable with
the yarn.nodemanager.local-dirs property (see below), and the roots
of YARN applications' log directories are configurable with the
yarn.nodemanager.log-dirs property (see also below).
yarn.nodemanager.delete.debug-delay-sec
0
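For instance, to keep the localized files and logs of finished applications around for the ten minutes mentioned above while debugging:

<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>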
Heartbeat interval to the RM.
yarn.nodemanager.heartbeat.interval-ms
1000
Keytab for NM.
yarn.nodemanager.keytab
/etc/krb5.keytab
List of directories to store localized files in. An
application's localized file directory will be found in:
${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
Individual containers' work directories, called container_${contid}, will
be subdirectories of this.
yarn.nodemanager.local-dirs
${hadoop.tmp.dir}/nm-local-dir
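Localized data is spread across the listed directories, so nodes with multiple disks usually supply one directory per disk as a comma-separated value; a sketch with hypothetical mount points:

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/1/yarn/local,/data/2/yarn/local,/data/3/yarn/local</value>
</property>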
Address where the localizer IPC is.
yarn.nodemanager.localizer.address
0.0.0.0:8040
Interval between cache cleanups.
yarn.nodemanager.localizer.cache.cleanup.interval-ms
600000
Target size of localizer cache in MB, per local directory.
yarn.nodemanager.localizer.cache.target-size-mb
10240
Number of threads to handle localization requests.
yarn.nodemanager.localizer.client.thread-count
5
Number of threads to use for localization fetching.
yarn.nodemanager.localizer.fetch.thread-count
4
Where to store container logs. An application's localized log directory
will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
Individual containers' log directories will be below this, in directories
named container_${contid}. Each container directory will contain the files
stderr, stdin, and syslog generated by that container.
yarn.nodemanager.log-dirs
${yarn.log.dir}/userlogs
Whether to enable log aggregation.
yarn.log-aggregation-enable
false
How long to keep aggregated logs before deleting them. -1 disables deletion.
Be careful: set this too small and you will spam the name node.
yarn.log-aggregation.retain-seconds
-1
How long to wait between aggregated log retention checks.
If set to 0 or a negative value then the value is computed as one-tenth
of the aggregated log retention time. Be careful: set this too small and
you will spam the name node.
yarn.log-aggregation.retain-check-interval-seconds
-1
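A sketch of turning aggregation on and pruning aggregated logs after seven days (604800 seconds); the check interval is left at -1 so it is computed as one-tenth of the retention time:

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>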
Time in seconds to retain user logs. Only applicable if
log aggregation is disabled.
yarn.nodemanager.log.retain-seconds
10800
Where to aggregate logs to.
yarn.nodemanager.remote-app-log-dir
/tmp/logs
The remote log dir will be created at
${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}
yarn.nodemanager.remote-app-log-dir-suffix
logs
Amount of physical memory, in MB, that can be allocated
for containers.
yarn.nodemanager.resource.memory-mb
8192
Ratio of virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage
is allowed to exceed this allocation by this ratio.
yarn.nodemanager.vmem-pmem-ratio
2.1
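As a worked example at the default ratio of 2.1, a container granted 1024 MB of physical memory may use up to 1024 * 2.1 = 2150.4 MB of virtual memory before the NodeManager's container monitor flags it as over its limit. Raising the ratio for applications with large virtual address spaces is a simple override (the value 4 below is illustrative):

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>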
Number of CPU cores that can be allocated
for containers.
yarn.nodemanager.resource.cpu-cores
8
Ratio of virtual cores to physical cores when
allocating CPU resources to containers.
yarn.nodemanager.vcores-pcores-ratio
2
NM Webapp address.
yarn.nodemanager.webapp.address
0.0.0.0:8042
How often to monitor containers.
yarn.nodemanager.container-monitor.interval-ms
3000
Class that calculates containers' current resource utilization.
yarn.nodemanager.container-monitor.resource-calculator.class
Frequency of running node health script.
yarn.nodemanager.health-checker.interval-ms
600000
Script timeout period.
yarn.nodemanager.health-checker.script.timeout-ms
1200000
The health check script to run.
yarn.nodemanager.health-checker.script.path
The arguments to pass to the health check script.
yarn.nodemanager.health-checker.script.opts
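Both script properties are empty by default, so no health script runs. A sketch wiring in a hypothetical script (the path and arguments below are placeholders), invoked at the interval and timeout configured above; by convention the NodeManager treats any script output line beginning with ERROR as a sign of ill health:

<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
  <value>/usr/local/bin/nm-health-check.sh</value>
</property>
<property>
  <name>yarn.nodemanager.health-checker.script.opts</name>
  <value>--min-free-space-gb 10</value>
</property>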
Frequency of running disk health checker code.
yarn.nodemanager.disk-health-checker.interval-ms
120000
The minimum fraction of the number of disks that must be healthy for the
NodeManager to launch new containers. This applies to both
yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if the
number of healthy local-dirs (or log-dirs) drops below this fraction,
new containers will not be launched on this node.
yarn.nodemanager.disk-health-checker.min-healthy-disks
0.25
The path to the Linux container executor.
yarn.nodemanager.linux-container-executor.path
The class which should help the LCE handle resources.
yarn.nodemanager.linux-container-executor.resources-handler.class
org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
The cgroups hierarchy under which to place YARN processes (cannot contain commas).
If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have
been pre-configured), then this cgroups hierarchy must already exist and be writable by the
NodeManager user, otherwise the NodeManager may fail.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.
yarn.nodemanager.linux-container-executor.cgroups.hierarchy
/hadoop-yarn
Whether the LCE should attempt to mount cgroups if not found.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.
yarn.nodemanager.linux-container-executor.cgroups.mount
false
Where the LCE should attempt to mount cgroups if not found. Common locations
include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux
distribution in use. This path must exist before the NodeManager is launched.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and
yarn.nodemanager.linux-container-executor.cgroups.mount is true.
yarn.nodemanager.linux-container-executor.cgroups.mount-path
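Putting the preceding properties together, a hedged sketch of running containers under the LinuxContainerExecutor with cgroups enforcement; the two fully qualified class names below are assumptions based on the stock Hadoop packaging of the classes mentioned above:

<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>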
T-file compression type used to compress aggregated logs.
yarn.nodemanager.log-aggregation.compression-type
none
The Kerberos principal for the node manager.
yarn.nodemanager.principal
The auxiliary services run by the NodeManager, as a comma-separated list of service names.
yarn.nodemanager.aux-services
Number of milliseconds to wait between sending a SIGTERM and a SIGKILL to a container.
yarn.nodemanager.sleep-delay-before-sigkill.ms
250
Max time to wait for a process to come up when trying to clean up a container.
yarn.nodemanager.process-kill-wait.ms
2000
yarn.nodemanager.aux-services.mapreduce.shuffle.class
org.apache.hadoop.mapred.ShuffleHandler
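These two properties are how the MapReduce shuffle is enabled as a NodeManager auxiliary service; a minimal sketch, using the service name implied by the .class property key above:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>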
mapreduce.job.jar
mapreduce.job.hdfs-servers
${fs.defaultFS}
The Kerberos principal for the proxy, if the proxy is not
running as part of the RM.
yarn.web-proxy.principal
Keytab for WebAppProxy, if the proxy is not running as part of
the RM.
yarn.web-proxy.keytab
The address for the web proxy as HOST:PORT. If this is not
given, the proxy will run as part of the RM.
yarn.web-proxy.address
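Setting an address is what moves the proxy out of the RM into its own daemon; a sketch with a hypothetical host and port:

<property>
  <name>yarn.web-proxy.address</name>
  <value>proxy.example.com:9046</value>
</property>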
CLASSPATH for YARN applications. A comma-separated list
of CLASSPATH entries.
yarn.application.classpath
$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*