Manage Sharded Cluster Balancer¶
Changed in version 3.4.
This page describes common administrative procedures related to balancing. For an introduction to balancing, see Sharded Cluster Balancer. For lower level information on balancing, see Cluster Balancer.
Important
Use the version of the mongo shell that corresponds to the version of the sharded cluster. For example, do not use a 3.2 or earlier version of mongo shell against the 3.4 sharded cluster.
Check the Balancer State¶
sh.getBalancerState() checks if the balancer is enabled (i.e. that the balancer is permitted to run). sh.getBalancerState() does not check if the balancer is actively balancing chunks.
To see if the balancer is enabled in your sharded cluster, issue the following command, which returns a boolean:
sh.getBalancerState()
New in version 3.0.0: You can also see if the balancer is enabled using sh.status(). The currently-enabled field indicates whether the balancer is enabled, while the currently-running field indicates if the balancer is currently running.
Check if Balancer is Running¶
To see if the balancer process is active in your cluster:
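From the mongo shell, sh.isBalancerRunning() returns true only while a balancing round is currently in progress, and false otherwise:

sh.isBalancerRunning()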
Configure Default Chunk Size¶
The default chunk size for a sharded cluster is 64 megabytes. In most situations, the default size is appropriate for splitting and migrating chunks. For information on how chunk size affects deployments, see Chunk Size.
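To change the default chunk size, you can modify the chunksize document in the settings collection of the config database. The following is a minimal sketch, assuming a mongo shell connected to a mongos, that sets the chunk size to 64 megabytes (the value is in megabytes):

use config
db.settings.save( { _id : "chunksize", value : 64 } )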
This balancing round originates from the mongos running on mongos0.example.net.
The value of the state field indicates whether that mongos holds the balancer lock. For versions 2.0 and later, a value of 2 indicates an active lock; in earlier versions, a value of 1 indicates an active lock.
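You can inspect the balancer lock directly. A query like the following against the locks collection in the config database shows which mongos holds the lock and the current value of its state field:

use config
db.locks.find( { _id : "balancer" } ).pretty()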
Schedule the Balancing Window¶
In some situations, particularly when your data set grows slowly and a migration can impact performance, it is useful to ensure that the balancer is active only at certain times. The following procedure specifies the activeWindow, which is the timeframe during which the balancer will be able to migrate chunks:
Switch to the Config Database.¶
Issue the following command to switch to the config database.
use config
Ensure that the balancer is not stopped.¶
The balancer will not activate in the stopped state. To ensure that the balancer is not stopped, use sh.setBalancerState(), as in the following:
sh.setBalancerState( true )
The balancer will not start if you are outside of the activeWindow timeframe.
Modify the balancer’s window.¶
Set the activeWindow using update(), as in the following:
db.settings.update(
{ _id: "balancer" },
{ $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } },
{ upsert: true }
)
Replace <start-time> and <stop-time> with time values using two digit hour and minute values (i.e. HH:MM) that specify the beginning and end boundaries of the balancing window.
- For HH values, use hour values ranging from 00 - 23.
- For MM values, use minute values ranging from 00 - 59.
MongoDB evaluates the start and stop times relative to the time zone of the member which is serving as a primary in the config server replica set.
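For example, to restrict balancing to the window between 11:00 PM and 6:00 AM (illustrative values), you would issue:

use config
db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow : { start : "23:00", stop : "06:00" } } },
   { upsert: true }
)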
Note
The balancer window must be sufficient to complete the migration of all data inserted during the day.
As data insert rates can change based on activity and usage patterns, it is important to ensure that the balancing window you select will be sufficient to support the needs of your deployment.
Do not use the sh.startBalancer() method when you have set an activeWindow.
Remove a Balancing Window Schedule¶
If you have set the balancing window and wish to remove the schedule so that the balancer is always running, use $unset to clear the activeWindow, as in the following:
use config
db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })
Disable the Balancer¶
By default, the balancer may run at any time and only moves chunks as needed. To disable the balancer for a short period of time and prevent all migration, use the following procedure:
Disable the balancer with the following command:
sh.stopBalancer()
If a migration is in progress, the system completes the in-progress migration before stopping the balancer.
To verify that the balancer is disabled, issue the following command, which returns false when the balancer is disabled:
sh.getBalancerState()
Optionally, to verify no migrations are in progress after disabling, issue the following operation in the mongo shell:
use config
while( sh.isBalancerRunning() ) {
   print("waiting...");
   sleep(1000);
}
Note
To disable the balancer from a driver that does not provide the sh.stopBalancer() or sh.setBalancerState() helpers, issue the following command against the config database:
db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , { upsert: true } )
Enable the Balancer¶
Use this procedure to re-enable the balancer after you have disabled it.
Enable the balancer with one of the following commands.
From the mongo shell, issue:
sh.setBalancerState(true)
From a driver that does not provide the sh.startBalancer() helper, issue the following command against the config database:
db.settings.update( { _id: "balancer" }, { $set : { stopped: false } } , { upsert: true } )
Disable Balancing During Backups¶
If MongoDB migrates chunks during a backup, you can end up with an inconsistent snapshot of your sharded cluster. Never run a backup while the balancer is active.
If you disable the balancer while a migration is in progress, the current migration runs to completion, and no further migrations begin while the balancer remains disabled.
Before starting a backup operation, confirm that the balancer is not active. You can use the following command to determine if the balancer is active:
!sh.getBalancerState() && !sh.isBalancerRunning()
When the backup procedure is complete, you can re-enable the balancer.
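Re-enabling the balancer after the backup uses the same helper shown earlier; from the mongo shell, issue:

sh.setBalancerState(true)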
Disable Balancing on a Collection¶
You can disable balancing for a specific collection with the sh.disableBalancing() method. You may wish to disable balancing on a specific collection, for example, during data imports or exports.
When you disable balancing on a collection, MongoDB will not interrupt in-progress migrations.
To disable balancing on a collection, connect to a mongos with the mongo shell and call the sh.disableBalancing() method.
Example
sh.disableBalancing("students.grades")
sh.disableBalancing() takes the full namespace of the collection as its parameter.
Enable Balancing on a Collection¶
You can enable balancing for a specific collection with the sh.enableBalancing() method.
When you enable balancing for a collection, MongoDB does not immediately begin balancing data. However, if the data in your sharded collection is not balanced, MongoDB can distribute the data more evenly.
To enable balancing on a collection, connect to a mongos with the mongo shell and call the sh.enableBalancing() method.
Example
sh.enableBalancing("students.grades")
sh.enableBalancing() takes the full namespace of the collection as its parameter.
Confirm Balancing is Enabled or Disabled¶
To confirm whether balancing for a collection is enabled or disabled, query the collections collection in the config database for the collection's namespace and check the noBalance field:
db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;
This operation will return a null error, true, false, or no output:
A null error indicates that the collection namespace is incorrect.
If the result is true, balancing for the collection is disabled.
If the result is false, balancing for the collection is currently enabled but was disabled in the past. Balancing of this collection will begin the next time the balancer runs.
If the operation returns no output, balancing for the collection is currently enabled and has never been disabled. Balancing of this collection will begin the next time the balancer runs.
New in version 3.0.0: You can also see if the balancer is enabled using sh.status(). The currently-enabled field indicates if the balancer is enabled.
Change Replication Behavior for Chunk Migration¶
Secondary Throttle¶
During chunk migration (initiated either automatically via the balancer or manually via moveChunk command), the _secondaryThrottle value determines when the balancer proceeds with the next document in the chunk:
If true, each document move during chunk migration propagates to at least one secondary before the balancer proceeds with the next document. This is equivalent to a write concern of { w: 2 }.
注解
The writeConcern field in the balancer configuration document allows you to specify different write concern semantics for the _secondaryThrottle option.
If false, the balancer does not wait for replication to a secondary and instead continues with the next document.
Starting in MongoDB 3.4, for WiredTiger, the default value of _secondaryThrottle is false for all chunk migrations.
The default value remains true for MMAPv1.
To change the balancer’s _secondaryThrottle and writeConcern values, connect to a mongos instance and directly update the _secondaryThrottle value in the settings collection of the config database. For example, from a mongo shell connected to a mongos, issue the following command:
use config
db.settings.update(
{ "_id" : "balancer" },
{ $set : { "_secondaryThrottle" : true ,
"writeConcern": { "w": "majority" } } },
{ upsert : true }
)
The effects of changing the _secondaryThrottle and writeConcern values may not be immediate. To ensure an immediate effect, stop and restart the balancer to enable the selected value of _secondaryThrottle. See Manage Sharded Cluster Balancer for details.
For more information on the replication behavior during various steps of chunk migration, see Chunk Migration and Replication.
Wait for Delete¶
The _waitForDelete setting of the balancer and the moveChunk command affects how the balancer migrates multiple chunks from a shard. By default, the balancer does not wait for the on-going migration’s delete phase to complete before starting the next chunk migration. To have the delete phase block the start of the next chunk migration, you can set the _waitForDelete to true.
For details on chunk migration, see Chunk Migration. For details on the chunk migration queuing behavior, see Asynchronous Chunk Migration Cleanup.
The _waitForDelete is generally for internal testing purposes. To change the balancer’s _waitForDelete value:
Connect to a mongos instance.
Update the _waitForDelete value in the settings collection of the config database. For example:
use config
db.settings.update(
   { "_id" : "balancer" },
   { $set : { "_waitForDelete" : true } },
   { upsert : true }
)
Once set to true, to revert to the default behavior:
Connect to a mongos instance.
Update or unset the _waitForDelete field in the settings collection of the config database:
use config
db.settings.update(
   { "_id" : "balancer", "_waitForDelete": true },
   { $unset : { "_waitForDelete" : "" } }
)
Change the Maximum Storage Size for a Given Shard¶
By default shards have no constraints in storage size. However, you can set a maximum storage size for a given shard in the sharded cluster. When selecting potential destination shards, the balancer ignores shards where a migration would exceed the configured maximum storage size.
The shards collection in the config database stores configuration data related to shards.
{ "_id" : "shard0000", "host" : "shard1.example.com:27100" }
{ "_id" : "shard0001", "host" : "shard2.example.com:27200" }
To limit the storage size for a given shard, use the db.collection.updateOne() method with the $set operator to create the maxSize field and assign it an integer value. The maxSize field represents the maximum storage size for the shard in megabytes.
The following operation sets a maximum size on a shard of 1024 megabytes:
config = db.getSiblingDB("config")
config.shards.updateOne( { "_id" : "<shard>"}, { $set : { "maxSize" : 1024 } } )
This value includes the mapped size of all data files on the shard, including the local and admin databases.
By default, maxSize is not specified, allowing shards to consume the total amount of available space on their machines if necessary.
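To remove an existing limit and restore the default unlimited behavior, a sketch using the $unset operator on the same shards collection (replace <shard> with the shard's _id) would be:

config = db.getSiblingDB("config")
config.shards.updateOne( { "_id" : "<shard>" }, { $unset : { "maxSize" : "" } } )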
You can also set maxSize when adding a shard.
To set maxSize when adding a shard, set the addShard command’s maxSize parameter to the maximum size in megabytes. The following command run in the mongo shell adds a shard with a maximum size of 125 megabytes:
config = db.getSiblingDB("config")
config.runCommand( { addshard : "example.net:34008", maxSize : 125 } )