Back Up a Sharded Cluster with Database Dumps

Changed in version 3.2: To capture a point-in-time backup from a sharded cluster, you must stop all writes to the cluster for the duration of the backup. Otherwise, the backup can only approximate a moment in time.

To back up all the databases in a cluster with mongodump, you must have the backup role. The backup role provides the privileges needed to back up all databases while granting no other privileges, in keeping with the principle of least privilege.

To back up the system.profile collection in a database, you must have read access on that collection. Several roles provide this access, including the clusterAdmin and dbAdmin roles.
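As a minimal sketch of how this might look with access control enabled (the user name, password, and host name below are placeholders, not values from this page), you could create a user with the backup role on the admin database and then pass those credentials to mongodump:

use admin
db.createUser(
   {
     user: "backupuser",          // placeholder user name
     pwd: "backupPassword",       // placeholder password
     roles: [ "backup" ]
   }
)

mongodump --host mongodb0.example.net --port 27017 --username backupuser --password backupPassword --authenticationDatabase admin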

You may alternatively use file system snapshots to capture the backup data. File system snapshots may be more efficient in some situations if your system configuration supports them.

For more information on backups in MongoDB and backups of sharded clusters in particular, see MongoDB Backup Methods and Backup and Restore Sharded Clusters.

Prerequisites

Important

To capture a point-in-time backup from a sharded cluster you must stop all writes to the cluster. On a running production system, you can only capture an approximation of point-in-time snapshot.

Access Control

Changed in version 3.2.1: The backup role provides additional privileges to back up the system.profile collections that exist when running with database profiling. Previously, users required additional read access on this collection.

Consideration

To create backups of a sharded cluster, you will stop the cluster balancer, take a backup of the config database, and then take backups of each shard in the cluster using mongodump to capture the backup data. To capture a more exact moment-in-time snapshot of the system, you will need to stop all application writes before taking the backups; otherwise the backup will only approximate a moment in time.

For approximate point-in-time snapshots, you can minimize the impact on the cluster by taking the backup from a secondary member of each replica set shard.
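One way to identify the secondaries to dump from is to inspect each shard's replica set status from a mongo shell connected to a member of that set. This sketch only prints the names of members currently in the SECONDARY state:

rs.status().members.forEach( function (member) {
   if (member.stateStr === "SECONDARY") {
      print(member.name);
   }
} );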

Procedure

1

Disable the balancer process.

To disable the balancer, connect the mongo shell to a mongos instance and run sh.stopBalancer() in the config database.

use config
sh.stopBalancer()

For more information, see the Disable the Balancer procedure.

Warning

If you do not stop the balancer, the backup could have duplicate data or omit data as chunks migrate while recording backups.
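Before proceeding, you can verify from the same mongos that the balancer is disabled and that no chunk migration is still in progress:

sh.getBalancerState()    // false once the balancer is disabled
sh.isBalancerRunning()   // false once any in-progress migration has finished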

2

Lock one secondary member of each replica set.

Lock a secondary member of each replica set in the sharded cluster, and one secondary of the config server replica set (CSRS).

Ensure that the oplog has sufficient capacity to allow these secondaries to catch up to the state of the primaries after finishing the backup procedure. See Oplog Size for more information.
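To get a rough sense of the available oplog window, you can run rs.printReplicationInfo() in a mongo shell connected to the primary of each shard; the reported log length start to end indicates how much history the oplog currently spans:

rs.printReplicationInfo()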

Lock shard replica set secondary.

For each shard replica set in the sharded cluster, connect a mongo shell to the secondary member’s mongod instance and run db.fsyncLock().

db.fsyncLock()

When calling db.fsyncLock(), ensure that the connection is kept open to allow a subsequent call to db.fsyncUnlock().
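If you want to confirm that the lock is in place, the output of db.currentOp() on the locked member includes an fsyncLock field while the lock is held:

db.currentOp().fsyncLock   // true while the member is locked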

Lock config server replica set secondary.

If locking a secondary of the CSRS, confirm that the member has replicated data up to some control point. To verify, first connect a mongo shell to the CSRS primary and perform a write operation with "majority" write concern on a control collection:

use config
db.BackupControl.findAndModify(
   {
     query: { _id: 'BackupControlDocument' },
     update: { $inc: { counter : 1 } },
     new: true,
     upsert: true,
     writeConcern: { w: 'majority', wtimeout: 15000 }
   }
);

The operation should return either the newly inserted document or the updated document:

{ "_id" : "BackupControlDocument", "counter" : 1 }

Query the CSRS secondary member for the returned control document. Connect a mongo shell to the CSRS secondary to lock and use db.collection.find() to query for the control document:

rs.slaveOk();
use config;

db.BackupControl.find(
   { "_id" : "BackupControlDocument", "counter" : 1 }
).readConcern('majority');

If the secondary member contains the latest control document, it is safe to lock the member. Otherwise, wait until the member contains the document or select a different secondary member that contains the latest control document.

To lock the secondary member, run db.fsyncLock() on the member:

db.fsyncLock()

When calling db.fsyncLock(), ensure that the connection is kept open to allow a subsequent call to db.fsyncUnlock().

3

Back up one config server.

Run mongodump against a config server mongod instance to back up the cluster’s metadata. You only need to back up one config server. Perform this step against the locked config server.

Use mongodump with the --oplog option to back up one of the config servers.

mongodump --oplog
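If you run mongodump from a machine other than the locked config server, point it at that member explicitly and give the dump its own output directory. The hostname, port, and path below are placeholders:

mongodump --host cfg2.example.net --port 27019 --oplog --out /backups/configdump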

If your deployment uses CSRS config servers, unlock the config server node before proceeding to the next step. To unlock the CSRS member, use the db.fsyncUnlock() method in the mongo shell used to lock the instance.

db.fsyncUnlock()

4

Back up a replica set member for each shard.

Back up the locked replica set members of the shards using mongodump with the --oplog option. You may back up the shards in parallel.

mongodump --oplog
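For example, when dumping the shards in parallel from a single backup host, give each shard's dump its own output directory so the dumps do not overwrite one another. The hostnames and paths here are placeholders:

mongodump --host shard1-secondary.example.net --oplog --out /backups/shard1 &
mongodump --host shard2-secondary.example.net --oplog --out /backups/shard2 &
wait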

5

Unlock replica set members for each shard.

To unlock the replica set members, use the db.fsyncUnlock() method in the mongo shell. For each locked member, use the same mongo shell used to lock the instance.

db.fsyncUnlock()

Allow these members to catch up with the state of the primary.
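You can monitor their progress with rs.printSlaveReplicationInfo(), run in a mongo shell connected to each shard's primary; it reports how far each secondary's oplog is behind the primary:

rs.printSlaveReplicationInfo()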

6

Re-enable the balancer process.

To re-enable the balancer, connect the mongo shell to a mongos instance and run sh.setBalancerState(true).

use config
sh.setBalancerState(true)
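To confirm that balancing is allowed again, check the balancer state from the mongos:

sh.getBalancerState()   // should now return true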

Additional Resources

See also MongoDB Cloud Manager for seamless automation, backup, and monitoring.