Why did a bulk data import leave HBase regions over 100 GB without splitting?

Version: HBase 2.0. A bulk data import left some regions larger than 100 GB, even though the cluster's configured maximum region size is 10 GB. (One region was previously about 200 GB and did auto-split down to 100-odd GB.)
The cluster is built on CDH, has no other workload running, and sits idle with low load. What is preventing these regions from splitting?
A manual split also fails:
hbase(main):004:0> split 'tbName'

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: fc09d55f91ed8831b4316f34b07719c5 NOT splittable
at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.checkSplittable(SplitTableRegionProcedure.java:182)
at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:114)
at org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:772)
at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1635)
at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1627)
at org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:774)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Split entire table or pass a region to split individual region. With the
second parameter, you can specify an explicit split key for the region.
Examples:
split 'tableName'
split 'namespace:tableName'
split 'regionName' # format: 'tableName,startKey,id'
split 'tableName', 'splitKey'
split 'regionName', 'splitKey'

Took 0.3912 seconds
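One common cause of the `NOT splittable` error: a daughter region produced by an earlier split still holds reference files pointing at its parent's HFiles, and a region that contains references refuses to split until a major compaction has rewritten them into real store files. Since these regions were just auto-split from ~200 GB parents, that would fit. A possible sequence to try in the HBase shell, assuming the table name `tbName` from the transcript above (a major compaction of a 100 GB+ region can take a long time):

```
hbase(main):001:0> major_compact 'tbName'
# wait for the compaction to finish (watch the compaction queue in the RegionServer UI),
# then retry:
hbase(main):002:0> split 'tbName'
```

This is a sketch of a likely remediation, not a confirmed diagnosis; if the split still fails after the compaction completes, the Master and RegionServer logs around `checkSplittable` should say why.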
(The screenshot won't upload, so here is the region listing from the web UI pasted as text:)

(Columns: encoded region name, RegionServer, read requests, write requests, storefile size, storefile count, memstore size, locality, start key, end key)
fc09d55f91ed8831b4316f34b07719c5 fwqzx011.zh:16030 500 0 180.41 GB 3 0 B 1.0 - -
07c32308a85d7797f18e6011ab5eef01 fwqzx011.zh:16030 500 0 145.27 GB 3 0 B 1.0 - -
1f4bc1398c7e7b951ab081fc6d27f483 fwqzx011.zh:16030 500 0 101.15 GB 1 0 B 1.0 - -
b3a6f416a4ecba0d2fb0f668ee6055f9 fwqzx011.zh:16030 500 0 101.15 GB 1 0 B 1.0 - -
24eb3dbf8ef77a2c4d1e61a17b406faf fwqzx011.zh:16030 500 0 123.58 GB 1 0 B 1.0 - -
6a7d20ea20009c84653914ed884e9052 fwqzx011.zh:16030 500 0 123.58 GB 1 0 B 1.0 - -
8f6cf3b72841476d37f2b2899905b47e fwqml005.zh:16030 0 0 34.08 GB 1 0 B 1.0 - -
7d13f4777eacd30dd12e96b2583087bf fwqml005.zh:16030 0 0 34.08 GB 1 0 B 1.0 - -
4fe76a7e32093161e02764382f5bd10a fwqml005.zh:16030 500 0 8.52 GB 1 0 B 1.0 - -
1bf2254eac114614678b42489ce23799 fwqml005.zh:16030 500 0 8.52 GB 1 0 B 1.0 - -
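As background on why a region can blow far past `hbase.hregion.max.filesize` in the first place: the RegionServer only checks whether a region should split after a memstore flush or a compaction, and bulk-loaded HFiles bypass the memstore entirely, so an idle region may never be re-checked. The default policy in HBase 2.0, `IncreasingToUpperBoundRegionSplitPolicy`, also scales the threshold with how many regions of the table live on that server. A minimal Python sketch of that threshold logic (assuming the default 128 MB flush size and the 10 GB maximum mentioned above; function and parameter names are illustrative, not HBase API):

```python
def split_threshold(regions_on_server: int,
                    flush_size: int = 128 * 1024 ** 2,    # hbase.hregion.memstore.flush.size
                    max_file_size: int = 10 * 1024 ** 3   # hbase.hregion.max.filesize
                    ) -> int:
    """Store size (bytes) at which IncreasingToUpperBoundRegionSplitPolicy
    requests a split -- a sketch of the HBase 2.x default logic."""
    # The default "initial size" is 2 * flush size when
    # hbase.increasing.policy.initial.size is not set.
    initial_size = 2 * flush_size
    if regions_on_server == 0 or regions_on_server > 100:
        return max_file_size
    # Threshold grows with the cube of the region count, capped at the max.
    return min(max_file_size, initial_size * regions_on_server ** 3)
```

With one region of the table on a server the threshold is only 256 MB, and by four regions it already caps at the 10 GB maximum, so the policy itself is not what keeps a 100 GB region intact: the missing flush/compaction trigger after a bulk load is the more likely culprit.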

zsh - 90IT


Some more regions have auto-split since, but many are still quite large. Two days in, and splitting is very slow.
(Columns: encoded region name, RegionServer, read requests, write requests, storefile size, storefile count, memstore size, locality, start key, end key)
fe5cd6025adcd2ff7394018852e05622 fwqml001.zh:16030 1,000 0 203.47 GB 1 0 B 1.0 - -
cb2641874c9318a070a3517a47275c9d fwqml001.zh:16030 1,000 0 203.47 GB 1 0 B 1.0 - -
e7ce065f7a87cfa0d55fc1506635195d fwqzx011.zh:16030 1,101 0 149.46 GB 1 0 B 1.0 - -
56ce7319c9c0b897d7982bf1bdb69be5 fwqzx011.zh:16030 1,000 0 149.46 GB 1 0 B 1.0 - -
24eb3dbf8ef77a2c4d1e61a17b406faf fwqzx011.zh:16030 3,700 0 123.58 GB 1 0 B 1.0 - -
6a7d20ea20009c84653914ed884e9052 fwqzx011.zh:16030 3,700 0 123.58 GB 1 0 B 1.0 - -
a523eb6f9c84c5824952b5edf7467310 fwqzx011.zh:16030 2,600 0 103.89 GB 1 0 B 1.0 - -
6067ccca2e610d400db6eebb23692941 fwqml001.zh:16030 3,700 0 102.07 GB 1 0 B 1.0 - -
1f4bc1398c7e7b951ab081fc6d27f483 fwqzx011.zh:16030 3,700 0 101.15 GB 1 0 B 1.0 - -
b3a6f416a4ecba0d2fb0f668ee6055f9 fwqzx011.zh:16030 3,700 0 101.15 GB 1 0 B 1.0 - -
53f4de45775b795060d48984b2e9b8f7 fwqml001.zh:16030 3,700 0 79.46 GB 1 0 B 1.0 - -
08d5c1833b66dbba374e942968e426c2 fwqml001.zh:16030 3,700 0 79.46 GB 1 0 B 1.0 - -
7d8c8ce43206fec8e7c64dff3b58ab99 fwqml001.zh:16030 100 0 66.76 GB 1 0 B 1.0 - -
730438ba27932b280b920f0527b4a81e fwqml001.zh:16030 100 0 66.76 GB 1 0 B 1.0 - -
5394c6861aebc3e717ba0ffacc39f2a0 fwqzx011.zh:16030 400 0 51.96 GB 1 0 B 1.0 - -
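For future bulk loads, the usual way to avoid oversized regions is to pre-split the table so the import is spread across many regions from the start, instead of relying on post-load splits. A sketch in the HBase shell (the table name, column family, and split points are illustrative; real split points should match the row-key distribution of the data):

```
hbase(main):001:0> create 'tbName', 'cf', SPLITS => ['2', '4', '6', '8', 'a', 'c', 'e']
```

Splitting up front also avoids the reference-file problem entirely, since no parent/daughter compaction cycle is needed after the load.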


