Configure Behavior of Sharded Cluster Balancer
The balancer is a process that runs on one of the mongos instances in a cluster and ensures that chunks are evenly distributed across the shards. In most deployments the default balancer configuration works well. However, depending on application and operational requirements, administrators may need to modify the balancer's behavior. If you decide to change how the balancer works, use the procedures in this document.
See Enable Balancing on a Sharded Collection and Cluster Balancer for a conceptual overview of the balancing process.
Configure Default Chunk Size
The default chunk size is 64 megabytes. In most situations, this default is appropriate for splitting and migrating chunks. See Chunk Size for details on how chunk size affects a deployment.
Changing the default chunk size affects automatic splitting and chunk migration going forward, but it does not retroactively affect all existing chunks.
See Modify Chunk Size in a Sharded Cluster for instructions on changing the default chunk size.
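As a quick sketch of that procedure, the cluster-wide chunk size is stored in the settings collection of the config database; the value of 64 below is only illustrative, and the full considerations are covered in the page referenced above.

use config
// Set the cluster-wide chunk size, in megabytes (64 shown for illustration).
db.settings.save( { _id : "chunksize", value : 64 } )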
Change the Maximum Storage Size for a Given Shard
The maxSize field in the shards collection in the config database sets the maximum size for a shard, allowing you to control whether the balancer will migrate chunks to a shard. If mem.mapped size [1] is above a shard’s maxSize, the balancer will not move chunks to the shard. Also, the balancer will not move chunks off an overloaded shard. This must happen manually. The maxSize value only affects the balancer’s selection of destination shards.
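To see the figure the balancer compares against maxSize, you can check mem.mapped in the serverStatus output on the shard in question. This is a minimal illustration and assumes the MMAPv1 storage engine, since mem.mapped is only reported for memory-mapped files:

// Run against a mongod of the shard; returns the mapped memory size in megabytes.
db.serverStatus().mem.mapped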
By default, maxSize is not specified, so a shard may consume all available disk space if necessary.
You can set maxSize when adding a shard, and you can also set it on a shard that is already running.
To set maxSize when adding a shard, specify the maxSize parameter of the addShard command in megabytes. The following command adds a shard and sets its maxSize to 125 megabytes:
db.runCommand( { addshard : "example.net:34008", maxSize : 125 } )
To set maxSize on an existing shard, insert or update the maxSize field in the shard's document in the shards collection of the config database, specified in megabytes.
Example
Assume the following shard document, which has no maxSize field set:
{ "_id" : "shard0000", "host" : "example.net:34001" }
Run the following commands to set a maxSize of 125 megabytes on this shard:
use config
db.shards.update( { _id : "shard0000" }, { $set : { maxSize : 125 } } )
To later update the maxSize to 250 megabytes, run the following commands:
use config
db.shards.update( { _id : "shard0000" }, { $set : { maxSize : 250 } } )
[1] This value includes the local and admin databases; take this into account when setting maxSize.
Change Replication Behavior for Chunk Migration
Secondary Throttle
Changed in version 3.0.0: The balancer configuration document added configurable writeConcern to control the semantics of the _secondaryThrottle option.
The _secondaryThrottle parameter of the balancer and the moveChunk command affects the replication behavior during chunk migration. By default, _secondaryThrottle is true, which means each document move during chunk migration propagates to at least one secondary before the balancer proceeds with the next document: this is equivalent to a write concern of { w: 2 }.
See Chunk Migration and Replication for more information on the replication behavior during balancing.
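Because moveChunk accepts the same options, you can also override the throttle for a single manual migration. The following is only a sketch; the namespace, shard key value, and destination shard are hypothetical:

// Manually migrate one chunk, waiting for a majority write concern on each document copied.
db.adminCommand( {
   moveChunk : "records.people",      // hypothetical sharded namespace
   find : { zipcode : "53187" },      // a document that identifies the chunk to move
   to : "shard0001",                  // hypothetical destination shard
   _secondaryThrottle : true,
   writeConcern : { w : "majority" }
} )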
To change the balancer’s _secondaryThrottle and writeConcern values, connect to a mongos instance and directly update the _secondaryThrottle value in the settings collection of the config database. For example, from a mongo shell connected to a mongos, issue the following command:
use config
db.settings.update(
   { "_id" : "balancer" },
   { $set : { "_secondaryThrottle" : false ,
              "writeConcern": { "w": "majority" } } },
   { upsert : true }
)
The effects of changing the _secondaryThrottle and writeConcern values may not be immediate. To ensure an immediate effect, stop and restart the balancer to enable the selected value of _secondaryThrottle. See Manage Sharded Cluster Balancer for details.
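One way to apply the new value immediately is to cycle the balancer with the standard shell helpers, as in this minimal sketch from a mongo shell connected to a mongos:

// Disable the balancer; this waits for any in-progress migration to finish.
sh.stopBalancer()
// Re-enable the balancer so that it picks up the updated settings document.
sh.startBalancer()
// Optionally confirm that the balancer is enabled again.
sh.getBalancerState()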
Wait for Delete
The _waitForDelete setting of the balancer and the moveChunk command affects how the balancer migrates multiple chunks from a shard. By default, the balancer does not wait for the on-going migration’s delete phase to complete before starting the next chunk migration. To have the delete phase block the start of the next chunk migration, you can set the _waitForDelete to true.
For details on chunk migration, see Chunk Migration. For details on the chunk migration queuing behavior, see Chunk Migration Queuing.
The _waitForDelete setting is generally intended for internal testing purposes. To change the balancer's _waitForDelete value:
1. Connect to a mongos instance.
2. Update the _waitForDelete value in the settings collection of the config database. For example:
use config
db.settings.update(
   { "_id" : "balancer" },
   { $set : { "_waitForDelete" : true } },
   { upsert : true }
)
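As with the secondary throttle, the option can also be passed to moveChunk for a single manual migration; the namespace and shard names in this sketch are hypothetical:

// Migrate one chunk and block until its delete phase completes on the source shard.
db.adminCommand( {
   moveChunk : "records.people",    // hypothetical sharded namespace
   find : { zipcode : "10001" },    // a document that identifies the chunk to move
   to : "shard0002",                // hypothetical destination shard
   _waitForDelete : true
} )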