
Replica Set Architectures

The architecture of a replica set affects the set’s capacity and capability. This document provides strategies for replica set deployments and describes common architectures.

The most basic replica set architecture consists of three members. A three-member set provides redundancy and room for failover. Design the set's architecture according to your application's requirements, and avoid unnecessary complexity.

Strategies

Determine the Number of Members

Consider the following strategies when determining the number of members in a replica set.

Enable journaling to protect data against service interruptions and power failures. Without journaling, MongoDB cannot recover data after an unexpected shutdown or loss of power.

A replica set can have up to 50 members, but only 7 voting members. [1] If the replica set already has 7 voting members, additional members must be non-voting members.
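As an illustration (the member indexes are hypothetical and depend on your configuration), members beyond the seven voting slots can be marked non-voting from the mongo shell; non-voting members must also have priority 0:

```javascript
// Sketch: in a 9-member set, keep members 0-6 voting and make the rest non-voting.
cfg = rs.conf()
cfg.members[7].votes = 0
cfg.members[7].priority = 0
cfg.members[8].votes = 0
cfg.members[8].priority = 0
rs.reconfig(cfg)
```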

Deploy an Odd Number of Members

Ensure that the replica set has an odd number of voting members. If you have an even number of voting members, deploy an arbiter so that the set has an odd number of voting members.
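For example, assuming an arbiter host at a placeholder address, you can add an arbiter from the mongo shell connected to the primary:

```javascript
// Sketch: add an arbiter so the set has an odd number of voting members.
// "arbiter.example.net" is a placeholder hostname.
rs.addArb("arbiter.example.net:27017")
```

Arbiters vote in elections but hold no data, so they add a vote without requiring the storage capacity of a full member.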

If your application connects to more than one replica set, each set must have a distinct name. Drivers connect to a replica set by its set name.

Warning

In general, avoid deploying more than one arbiter per replica set.

Consider Fault Tolerance

Fault tolerance for a replica set is the number of members that can become unavailable and still leave enough members in the set to elect a primary. In other words, it is the difference between the number of members in the set and the majority of voting members needed to elect a primary. Without a primary, a replica set cannot accept write operations. Fault tolerance is an effect of replica set size, but the relationship is not direct. See the following table:

Number of Members | Majority Required to Elect a New Primary | Fault Tolerance
----------------- | ---------------------------------------- | ---------------
3                 | 2                                        | 1
4                 | 3                                        | 1
5                 | 3                                        | 2
6                 | 4                                        | 2
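The table above follows from a simple calculation: a majority is floor(n / 2) + 1 of the voting members, and fault tolerance is the number of members that remain beyond that majority. A small sketch:

```javascript
// Majority and fault tolerance as a function of voting-member count.
function majority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

function faultTolerance(votingMembers) {
  return votingMembers - majority(votingMembers);
}

for (const n of [3, 4, 5, 6]) {
  console.log(`${n} members: majority ${majority(n)}, fault tolerance ${faultTolerance(n)}`);
}
```

Note that moving from 3 to 4 members (or 5 to 6) raises the majority without raising fault tolerance, which is why odd-sized sets are preferred.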

Adding a member to the replica set does not necessarily increase the set's fault tolerance, but it can provide capacity for dedicated functions, such as backups or reporting.

Use Hidden and Delayed Members for Dedicated Functions

Add hidden or delayed members to support dedicated functions, such as backups or reporting.
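A sketch of both member types in the mongo shell (member indexes are assumptions about your configuration; the delay field is named secondaryDelaySecs in MongoDB 5.0+, slaveDelay in older versions):

```javascript
cfg = rs.conf()
// Hidden member: holds data but is invisible to clients and never becomes primary.
cfg.members[3].priority = 0
cfg.members[3].hidden = true
// Delayed member: replicates with a one-hour lag, useful as a rolling backup.
cfg.members[4].priority = 0
cfg.members[4].hidden = true
cfg.members[4].secondaryDelaySecs = 3600
rs.reconfig(cfg)
```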

Load Balance on Read-Heavy Deployments

If your deployment carries a very heavy read load, you can improve the set's read throughput by distributing reads to secondary members. As your deployment grows, add secondary members in other data centers to improve redundancy and availability.
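One way to distribute reads, sketched in the mongo shell (the read preference mode is a standard driver option):

```javascript
// Prefer secondaries for this connection's reads; fall back to the
// primary if no secondary is available.
db.getMongo().setReadPref("secondaryPreferred")
```

Drivers accept the same mode through the connection string, e.g. readPreference=secondaryPreferred.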

To be able to elect a primary, ensure that a majority of the set's members remain available.

Add Capacity Ahead of Demand

Ensure that the existing members of the replica set have enough spare capacity to support adding a new member. Do not wait until the set's capacity is saturated to add members; plan ahead.

Distribute Members Geographically

To protect your data in case of a data center failure, keep at least one member in an alternate data center. If possible, use an odd number of data centers, and choose a distribution of members that maximizes the likelihood that even with a loss of a data center, the remaining replica set members can form a majority or at minimum, provide a copy of your data.

To ensure that the members in your main data center be elected primary before the members in the alternate data center, set the members[n].priority of the members in the alternate data center to be lower than that of the members in the primary data center.
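A sketch of lowering the priority of alternate-data-center members (member indexes are assumptions about the configuration):

```javascript
cfg = rs.conf()
// members[0] and members[1] are in the main data center (default priority 1);
// members[2] is in the alternate data center and should be elected only
// when no higher-priority member is available.
cfg.members[2].priority = 0.5
rs.reconfig(cfg)
```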

A replica set with four or more members provides a wider distribution for read operations and allows dedicating some members to particular functions.

Target Operations with Tag Sets

Use replica set tag sets to target read operations to specific members or to customize write concern to request acknowledgement from specific members.
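A sketch of tagging members and targeting reads by tag (the tag names and member indexes are illustrative):

```javascript
cfg = rs.conf()
// Tag members by data center.
cfg.members[0].tags = { dc: "east" }
cfg.members[1].tags = { dc: "east" }
cfg.members[2].tags = { dc: "west" }
rs.reconfig(cfg)

// Route this connection's reads to a secondary tagged dc: "west".
db.getMongo().setReadPref("secondary", [ { dc: "west" } ])
```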

See also

Keeping at least one replica set member in an alternate data center protects your data when the main data center fails. Set these members' priority to 0 to prevent them from becoming primary.

Use Journaling to Protect Against Power Failures

MongoDB enables journaling by default. Journaling protects against data loss in the event of service interruptions, such as power failures and unexpected reboots.
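For reference, journaling appears in the mongod configuration file under storage.journal; note that in recent releases journaling cannot be disabled for WiredTiger replica set members, and the option itself was removed in MongoDB 6.1:

```yaml
# mongod.conf fragment: journaling is enabled by default with WiredTiger.
storage:
  journal:
    enabled: true
```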

When a replica set has members in multiple data centers that may be network-partitioned from one another, the members must be able to communicate with each other to replicate and transmit data.

During an election, members must be able to communicate with each other to establish a majority. To ensure that the set can maintain a majority and elect a primary normally, keep a majority of the set's members in a single data center.

Deployment Patterns

The following documents describe common replica set deployment patterns. Other patterns are possible and effective depending on the application’s requirements. If needed, combine features of each architecture in your own deployment:

Use replica set tag sets to ensure that operations replicate to specific data centers. You can also use tags to route read operations to specific members.
Three Member Replica Sets
Three-member replica sets provide the minimum recommended architecture for a replica set.
Replica Sets Distributed Across Two or More Data Centers
Geographically distributed sets include members in multiple locations to protect against facility-specific failures, such as power outages.
[1] While replica sets are the recommended solution for production, a replica set can support up to 50 members in total. If your deployment requires more than 50 members, you'll need to use master-slave replication. However, master-slave replication lacks the automatic failover capabilities of replica sets.