hbase-2.1.1: HDFS file deletion fails

The exception is as follows:

2019-06-12 09:00:03,490 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=8.00 MB, freeSize=10.66 GB, max=10.67 GB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=7109, evicted=0, evictedPerRun=0.0
2019-06-12 09:00:07,383 INFO  [BucketCacheStatsExecutor] bucket.BucketCache: failedBlockAdditions=0, totalSize=32.00 GB, freeSize=32.00 GB, usedSize=0 B, cacheSize=0 B, accesses=0, hits=0, IOhitsPerSecond=0, IOTimePerHit=NaN, hitRatio=0,cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=0.0
2019-06-12 09:02:27,095 INFO  [MobFileCache #0] mob.MobFileCache: MobFileCache Statistics, access: 0, miss: 0, hit: 0, hit ratio: 0%, evicted files: 0
2019-06-12 09:02:50,118 INFO  [ForkJoinPool-1-worker-13] cleaner.CleanerChore: Could not delete dir under hdfs://cluster1/hbase/archive/data/default/hour_cert. might be transient; we'll retry. if it keeps happening, use following exception when asking on mailing list.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException): `/hbase/archive/data/default/hour_cert is non empty': Directory is not empty
at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:84)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3687)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:953)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:623)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy18.delete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:545)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy19.delete(Unknown Source)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy20.delete(Unknown Source)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy20.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2053)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:703)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore$CleanerTask.lambda$compute$2(CleanerChore.java:520)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore$CleanerTask.deleteAction(CleanerChore.java:546)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore$CleanerTask.compute(CleanerChore.java:520)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore$CleanerTask.compute(CleanerChore.java:471)
at java.util.concurrent.RecursiveTask.exec(RecursiveTask.java:94)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
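
Judging from the CleanerChore frames in the stack above, the failure is a non-recursive FileSystem.delete() on the archive directory: the NameNode throws PathIsNotEmptyDirectoryException whenever recursive=false and the directory still has children. That is also why the message says "might be transient" -- another writer (e.g. a compaction archiving HFiles) can repopulate the directory between the cleaner's listing and its delete. A minimal sketch of these semantics, assuming hdfs://cluster1 is reachable from the client and using the path from the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ArchiveDirCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://cluster1"); // nameservice from the log
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/hbase/archive/data/default/hour_cert");

        // Anything listed here is what makes the cleaner's delete fail.
        for (FileStatus s : fs.listStatus(dir)) {
            System.out.println(s.getPath());
        }

        // recursive=false is what produces the behavior seen in the stack:
        // on HDFS a non-empty directory raises PathIsNotEmptyDirectoryException.
        // recursive=true would force-delete, but archived HFiles may still be
        // referenced by snapshots, so do not force-delete blindly.
        boolean deleted = fs.delete(dir, false);
        System.out.println("deleted=" + deleted);
    }
}

If the listing shows HFiles that never go away, check whether a snapshot or replication peer still references them; if the directory is empty by the time you look, the exception really was transient and the cleaner's next retry should succeed.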
2019-06-12 09:03:21,790 WARN  [master/nna:16000.Chore.1] balancer.StochasticLoadBalancer: calculatedMaxSteps:14212800 for loadbalancer's stochastic walk is larger than maxSteps:30000. Hence load balancing may not work well. Setting parameter "hbase.master.balancer.stochastic.runMaxSteps" to true can overcome this issue.(This config change does not require service restart)
2019-06-12 09:03:21,790 INFO  [master/nna:16000.Chore.1] balancer.StochasticLoadBalancer: start StochasticLoadBalancer.balancer, initCost=117.83342598037535, functionCost=RegionCountSkewCostFunction : (500.0, 0.19951338199513383); PrimaryRegionCountSkewCostFunction : (500.0, 0.0); MoveCostFunction : (7.0, 0.0); ServerLocalityCostFunction : (25.0, 0.37566472137178286); RackLocalityCostFunction : (15.0, 0.0); TableSkewCostFunction : (35.0, 0.08328267477203648); RegionReplicaHostCostFunction : (100000.0, 0.0); RegionReplicaRackCostFunction : (10000.0, 0.0); ReadRequestCostFunction : (5.0, 0.0); WriteRequestCostFunction : (5.0, 0.46954826148755985); MemStoreSizeCostFunction : (5.0, 0.2903225806451613); StoreFileCostFunction : (5.0, 0.39417382416579866);  computedMaxSteps: 1000000
2019-06-12 09:03:21,797 ERROR [master/nna:16000.Chore.1] hbase.ScheduledChore: Caught error
java.lang.ArrayIndexOutOfBoundsException
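
The balancer WARN above spells out its own remedy: the computed walk needs 14,212,800 steps but is capped at maxSteps:30000, and per the log, setting hbase.master.balancer.stochastic.runMaxSteps to true lifts the cap without a master restart. A sketch of the hbase-site.xml entry (the ArrayIndexOutOfBoundsException from ScheduledChore is cut off here without its stack, so its cause cannot be pinned down from this excerpt alone):

<property>
  <!-- let the stochastic walk run its computed step count instead of
       stopping at hbase.master.balancer.stochastic.maxSteps (30000 here) -->
  <name>hbase.master.balancer.stochastic.runMaxSteps</name>
  <value>true</value>
</property>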