Hadoop HA与ZKFC
in bigdata with 0 comment

How ZK-Based HA Failover Works

Before covering the components of the ZKFC process, we need to understand how HDFS relies on ZK to perform failover. First, let's look at what ZK is and what it does; only then can we understand why HDFS uses ZK to implement automatic failover.

ZK is short for ZooKeeper. A key property of ZK is that it provides strong consistency, and it is itself highly available: within a ZK cluster, as long as more than half of the nodes are alive, the cluster can keep serving requests.

So how do HDFS's Active and Standby NameNodes relate to ZK?

When a NameNode is successfully switched to the Active state, it creates an ephemeral znode in ZK that records some information about the current Active NameNode, such as its hostname. When the Active NameNode fails or its connection times out, the corresponding ephemeral znode on ZK is removed (ZK deletes ephemeral nodes automatically when the creating session expires), and the znode-deletion event triggers the election of the next Active NameNode.

Because ZK is strongly consistent, it guarantees that at most one node can succeed in creating the znode and become the current Active NameNode. This is exactly why the community chose ZK to implement automatic failover for HDFS HA.
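The atomic "first creator wins" behavior described above can be sketched with a toy in-memory stand-in for ZK. The lock path below mirrors the real ActiveStandbyElectorLock znode under /hadoop-ha, but everything else is illustrative only, not Hadoop or ZooKeeper code:

```python
# Toy in-memory stand-in for ZooKeeper's atomic "create ephemeral znode"
# primitive, showing why at most one NameNode can win the Active lock.
# Illustrative sketch only; real HDFS HA talks to a live ZK ensemble.

class ToyZooKeeper:
    """Mimics ZK's atomic create: the first caller wins, later calls fail."""
    def __init__(self):
        self.znodes = {}

    def create_ephemeral(self, path, data):
        if path in self.znodes:
            return False          # NodeExists: someone else is already Active
        self.znodes[path] = data  # the ephemeral node records the winner's info
        return True

    def session_expired(self, path):
        # When the Active's ZK session dies, its ephemeral node is removed;
        # in real ZK this fires a watch that triggers a new election.
        self.znodes.pop(path, None)

zk = ToyZooKeeper()
LOCK = "/hadoop-ha/mycluster/ActiveStandbyElectorLock"

print(zk.create_ephemeral(LOCK, "nn1:8020"))  # True  -> nn1 becomes Active
print(zk.create_ephemeral(LOCK, "nn2:8020"))  # False -> nn2 stays Standby
zk.session_expired(LOCK)                      # nn1 fails or times out
print(zk.create_ephemeral(LOCK, "nn2:8020"))  # True  -> nn2 takes over
```

The real guarantee comes from ZK's consensus protocol, which serializes the two create requests so that exactly one succeeds.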

ZKFC

Inside the ZKFC process, three object services run:

HealthMonitor: monitors whether the NameNode has become unavailable or entered an unhealthy state.

ActiveStandbyElector: controls and monitors the state of the znode on ZK.

ZKFailoverController: coordinates the HealthMonitor and ActiveStandbyElector objects, handles the events they send, and carries out the automatic failover.
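The interplay between these three objects can be sketched as a small event loop. The class and event names below mirror the real Hadoop classes (HealthMonitor reports states such as SERVICE_HEALTHY), but this loop is a hypothetical illustration of the coordination, not Hadoop's actual implementation:

```python
# Illustrative sketch of ZKFC coordination: health events drive whether
# the local node joins or quits the ZK leader election. Not Hadoop code.

class ActiveStandbyElector:
    """Stand-in for the object that manages the ZK lock znode."""
    def __init__(self):
        self.holds_lock = False

    def join_election(self):
        self.holds_lock = True   # assume we won the ZK lock in this sketch
        return "BECOME_ACTIVE"

    def quit_election(self):
        self.holds_lock = False  # drop the ephemeral znode
        return "BECOME_STANDBY"

class ZKFailoverController:
    """Coordinates HealthMonitor events with the elector."""
    def __init__(self):
        self.elector = ActiveStandbyElector()
        self.state = "STANDBY"

    def on_health_event(self, event):
        # HealthMonitor reports the local NameNode's health; ZKFC reacts
        # by joining or quitting the election on ZK.
        if event == "SERVICE_HEALTHY":
            if self.elector.join_election() == "BECOME_ACTIVE":
                self.state = "ACTIVE"
        elif event in ("SERVICE_UNHEALTHY", "SERVICE_NOT_RESPONDING"):
            self.elector.quit_election()
            self.state = "STANDBY"
        return self.state

zkfc = ZKFailoverController()
print(zkfc.on_health_event("SERVICE_HEALTHY"))    # ACTIVE
print(zkfc.on_health_event("SERVICE_UNHEALTHY"))  # STANDBY
```

In the real ZKFC, winning the election is asynchronous (the elector calls back into the controller when the znode is acquired), but the division of labor is the same: HealthMonitor observes, ActiveStandbyElector talks to ZK, and ZKFailoverController decides.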
Failover diagram

Problem: both ResourceManagers are stuck in standby, and ZKFC cannot transition either ResourceManager to active.

yarn rmadmin -transitionToActive --forcemanual rm1

You have specified the forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) Y
15/10/07 09:28:36 WARN ha.HAAdmin: Proceeding with manual HA state management even though automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@51931956
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hcg/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hcg/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Operation failed: Error when transitioning to Active mode
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:304)
    at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
    at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
Caused by: org.apache.hadoop.service.ServiceStateException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=4096, maxMemory=4000
    at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:999)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1036)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1032)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1032)
    at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:295)
    ... 10 more
Caused by: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=4096, maxMemory=4000
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:203)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:320)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:309)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:413)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1192)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:575)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    ... 18 more

Solution

The root cause is the InvalidResourceRequestException during application recovery: a recovered container requested 4096 MB, but yarn.scheduler.maximum-allocation-mb was only 4000 MB, so the RM's active services failed to start. In yarn-site.xml, raise yarn.scheduler.maximum-allocation-mb to at least the requested memory (4096 here), and make sure yarn.nodemanager.resource.memory-mb is at least as large as that maximum.
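As an illustration, the relevant yarn-site.xml entries might look like the following (the values are examples; 4096 matches the requestedMemory in the trace above, and the node memory should be sized to your actual machines):

```xml
<!-- yarn-site.xml: example values, adjust to your cluster -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>  <!-- must be >= the largest container request -->
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>  <!-- must be >= maximum-allocation-mb -->
</property>
```

After changing the values, restart both ResourceManagers so recovery can replay the stored applications against the new limits.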

For HDFS HA, the equivalent manual failover command is:

$HADOOP_HOME/bin/hdfs haadmin -failover --forcefence --forceactive nn1 nn2

On Hadoop 2.7.2:

hdfs haadmin -transitionToStandby --forcemanual nn2
hdfs haadmin -transitionToActive --forcemanual nn1
