
Sharding Introduction


Sharding is a method for storing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput operations.

Purpose of Sharding

Database applications with large data sets and high throughput put considerable pressure on a single machine: heavy query loads can exhaust the CPU of a single server, and large data sets strain its storage capacity, eventually exhausting the system's RAM and shifting the pressure to disk I/O.

To address these problems, there are two basic approaches: vertical scaling and sharding.

Vertical scaling adds more CPU and storage resources to increase capacity. Scaling by adding capacity has limitations: high performance systems with large numbers of CPUs and large amounts of RAM are disproportionately more expensive than smaller systems. Additionally, cloud-based providers may only allow users to provision smaller instances. As a result, there is a practical maximum capability for vertical scaling.

Sharding, or horizontal scaling, by contrast, divides the data set and distributes the data over multiple servers, or shards. Each shard is an independent database, and collectively, the shards make up a single logical database.

Diagram of a large collection with data distributed across 4 shards.

Sharding provides a way to cope with high throughput and large data sets:

  • Sharding reduces the number of operations each shard has to handle; as a result, through horizontal scaling, the cluster can increase its storage capacity and throughput.

    For example, to insert a document, the application only needs to access the shard that stores that document.

  • Sharding reduces the amount of data that each shard needs to store; the sketch after this list shows how to inspect a collection's actual distribution across shards.

    For example, if a database has a 1 terabyte data set, and there are 4 shards, then each shard might hold only 256 GB of data. If there are 40 shards, then each shard might hold only 25 GB of data.
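
For a deployed cluster, the mongo shell can report how a collection's data is actually spread across the shards. A minimal sketch, run against a mongos; the records database and users collection are illustrative names:

    // Switch to the database that holds the sharded collection.
    use records

    // Per-shard data size, document count, and chunk count for the collection.
    db.users.getShardDistribution()

    // Overview of the whole cluster: shards, databases, and chunk distribution.
    sh.status()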

Sharding in MongoDB

MongoDB supports sharding through the configuration of sharded clusters.

Diagram of a sample sharded cluster for production purposes.  Contains exactly 3 config servers, 2 or more ``mongos`` query routers, and at least 2 shards. The shards are replica sets.

A sharded cluster has the following components: shards, query routers, and config servers.

Shards store the data. To provide high availability and data consistency, in a production sharded cluster, each shard is a replica set [1]. For more information on replica sets, see Replica Sets.

Query Routers, or mongos instances, interface with client applications and direct operations to the appropriate shard or shards. A client sends requests to a mongos, which then routes the operations to the shards and returns the results to the clients. A sharded cluster can contain more than one mongos to divide the client request load, and most sharded clusters have more than one mongos for this reason.

Config servers store the cluster’s metadata. This data contains a mapping of the cluster’s data set to the shards. The query router uses this metadata to target operations to specific shards.

Changed in version 3.2: Starting in MongoDB 3.2, config servers for sharded clusters can be deployed as a replica set. The replica set config servers must run the WiredTiger storage engine. MongoDB 3.2 deprecates the use of three mirrored mongod instances for config servers.

[1] For development and testing purposes only, each shard can be a single mongod instead of a replica set.
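
To make the roles of these components concrete, the following sketch runs against a mongos and inspects both the list of shards and the metadata that the config servers hold; the exact output depends on the deployment, and nothing here changes the cluster:

    // List the shards registered in the cluster.
    db.adminCommand({ listShards: 1 })

    // The cluster metadata lives in the config database.
    use config
    db.shards.find()              // one document per shard
    db.chunks.find().limit(5)     // the chunk-to-shard mapping used for routing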

Data Partitioning

In MongoDB, sharding partitions data at the collection level: the data in a collection is divided into parts by the shard key.

Shard Keys

To shard a collection, you need to select a shard key. A shard key is either an indexed field or an indexed compound field that exists in every document in the collection. MongoDB divides the shard key values into chunks and distributes the chunks evenly across the shards. To divide the shard key values into chunks, MongoDB uses either range based partitioning or hash based partitioning. See the Shard Key documentation for more information.
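
As a minimal sketch of these steps in the mongo shell, assuming an illustrative records.users collection and a user_id field that exists in every document:

    // Enable sharding on the database that holds the collection.
    sh.enableSharding("records")

    // The shard key must be backed by an index; create it if the
    // collection already contains data.
    db.getSiblingDB("records").users.createIndex({ user_id: 1 })

    // Shard the collection on user_id using range based partitioning.
    // (Hash based partitioning is shown in a later section.)
    sh.shardCollection("records.users", { user_id: 1 })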

Range Based Sharding

For range based sharding, MongoDB divides the data set into ranges determined by the shard key values. Consider a numeric shard key: if you visualize a number line that runs from negative infinity to positive infinity, each shard key value marks a point on that line. MongoDB partitions this line into shorter, non-overlapping segments called chunks, where each chunk contains the data whose shard key values fall within a given range.

In a system that partitions by ranges of the shard key, documents with "close" shard key values are likely to be stored in the same chunk, and therefore on the same shard.

Diagram of the shard key value space segmented into smaller ranges or chunks.
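
As a hedged illustration of chunks as contiguous key ranges, a chunk can be split at an explicit shard key value and the resulting ranges inspected in the cluster metadata; the records.users collection and the value 1000 are illustrative:

    // Split the chunk that contains user_id 1000 at exactly that value.
    sh.splitAt("records.users", { user_id: 1000 })

    // Each chunk document records its contiguous [min, max) key range
    // and the shard that currently holds it.
    use config
    db.chunks.find({ ns: "records.users" }, { min: 1, max: 1, shard: 1 })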

Hash Based Sharding

For hash based sharding, MongoDB computes a hash of a field's value and uses these hashes to create chunks.

In a system that uses hash based sharding, documents with "close" shard key values are very unlikely to be stored in the same chunk, so the data is separated more evenly across the cluster.

Diagram of the hashed based segmentation.
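
A sketch of sharding a collection with a hashed shard key instead (a collection can only be sharded once, so this is an alternative to the range based example above); MongoDB keeps a hashed index on the field, and chunks then cover ranges of hash values rather than of the original values:

    // Create a hashed index on user_id and shard on its hashed value.
    db.getSiblingDB("records").users.createIndex({ user_id: "hashed" })
    sh.shardCollection("records.users", { user_id: "hashed" })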

Performance Distinctions between Range Based and Hash Based Partitioning

Range based partitioning supports more efficient range queries: given a range of shard key values, the query router can easily determine which chunks hold the requested data and forward the request to the corresponding shards.

However, range based partitioning can result in an uneven distribution of data across shards, which may at times outweigh the benefits to query performance. For example, if the field used as the shard key increases linearly, all requests within a given time window fall on one fixed chunk, and therefore end up on the same shard. In this situation, a small subset of shards holds the majority of the cluster's data, and the system does not scale well.

Hash based partitioning, by contrast, ensures an even distribution of data at the expense of range query performance. The randomness of the hash values spreads data randomly across chunks, and therefore across shards. But precisely because of that randomness, a range query can rarely be targeted at specific shards; to return the requested results it usually has to query every shard.
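
The routing difference can be observed from a mongos with explain(): under range based partitioning a query on a contiguous range of shard key values is typically sent only to the shards whose chunks overlap that range, while under hash based partitioning the same query is typically broadcast to every shard. A hedged sketch with illustrative names:

    // A range query on the shard key.
    db.getSiblingDB("records").users
      .find({ user_id: { $gte: 1000, $lt: 2000 } })
      .explain()

    // In the explain output for a sharded collection, the winning plan lists
    // the shards the query was sent to: usually a subset of shards for a
    // ranged shard key, and all shards (scatter-gather) for a hashed one.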

Customized Data Distribution with Tag Aware Sharding

MongoDB allows administrators to direct the balancing policy using tag aware sharding. Administrators create and associate tags with ranges of the shard key, and then assign those tags to the shards. Then, the balancer migrates tagged data to the appropriate shards and ensures that the cluster always enforces the distribution of data that the tags describe.

Tags are the primary mechanism to control the behavior of the balancer and the distribution of chunks in a cluster. Most commonly, tag aware sharding serves to improve the locality of data for sharded clusters that span multiple data centers.

See Tag Aware Sharding for more information.
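
A minimal sketch of tag aware sharding, assuming illustrative shard names and an illustrative records.customers collection that is sharded on a zipcode field:

    // Associate shards with tags.
    sh.addShardTag("shard0000", "NYC")
    sh.addShardTag("shard0001", "SFO")

    // Pin a range of shard key values to a tag; the balancer keeps chunks
    // in this range on shards that carry the NYC tag.
    sh.addTagRange("records.customers",
                   { zipcode: "10001" },
                   { zipcode: "11000" },
                   "NYC")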

Maintaining a Balanced Data Distribution

The addition of new data or of new shards can result in an uneven distribution of data within the cluster, visible as some shards holding noticeably more chunks than others.

MongoDB uses two processes to keep the data in a cluster balanced: splitting and the balancer.

Splitting

Splitting is a background process that keeps chunks from growing too large. When a chunk grows beyond a specified chunk size, MongoDB splits the chunk in half. Inserts and updates trigger splits. Splits are an efficient meta-data change. To create splits, MongoDB does not migrate any data or affect the shards.

Diagram of a shard with a chunk that exceeds the default chunk size of 64 MB and triggers a split of the chunk into two chunks.
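
Splits happen automatically, but the chunk size that triggers them can be changed, and a chunk can also be split by hand. A sketch, again using the illustrative records.users collection; the 32 MB value is only an example:

    // Change the cluster-wide chunk size (in MB; the default is 64).
    use config
    db.settings.save({ _id: "chunksize", value: 32 })

    // Manually split the chunk containing this document at its median point.
    sh.splitFind("records.users", { user_id: 1000 })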

Balancing

The balancer is a background process that manages chunk migrations. The balancer can run from any of the mongos instances in a cluster.

When the distribution of data in the cluster becomes uneven, the balancer migrates chunks from the shard with the greatest number of chunks to the shard with the fewest. For example, if collection users has 100 chunks on shard 1 and 50 chunks on shard 2, the balancer keeps migrating chunks from shard 1 to shard 2 until the data is balanced.

The shards manage chunk migrations as a background operation between an origin shard and a destination shard. During a chunk migration, the destination shard is sent all the current documents in the chunk from the origin shard. Next, the destination shard captures and applies all changes made to the data during the migration process. Finally, the metadata regarding the location of the chunk on config server is updated.

If there’s an error during the migration, the balancer aborts the process leaving the chunk unchanged on the origin shard. MongoDB removes the chunk’s data from the origin shard after the migration completes successfully.

Diagram of a collection distributed across three shards. For this collection, the difference in the number of chunks between the shards reaches the *migration thresholds* (in this case, 2) and triggers migration.
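
A hedged sketch of inspecting and controlling the balancer from a mongos; the namespace and shard name are illustrative:

    // Is balancing enabled, and is a balancing round in progress right now?
    sh.getBalancerState()
    sh.isBalancerRunning()

    // Temporarily disable and re-enable balancing, for example around backups.
    sh.stopBalancer()
    sh.startBalancer()

    // Chunks can also be moved manually; the balancer normally does this itself.
    sh.moveChunk("records.users", { user_id: 1000 }, "shard0001")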

Adding and Removing Shards from the Cluster

Adding a shard to a cluster creates an imbalance, since the new shard has no chunks. MongoDB begins migrating data to the new shard immediately, but it takes some time for the cluster to reach a balanced state.

When removing a shard, the balancer must migrate all of that shard's data to the other shards. Only after all of the data has been migrated and the metadata has been updated can you safely remove the shard.
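
A sketch of adding and removing shards from a mongo shell connected to a mongos; the replica set name and hostname are placeholders:

    // Add a shard that is a replica set named rs1.
    sh.addShard("rs1/mongodb3.example.net:27017")

    // Start draining a shard. Re-run the same command to watch the
    // "remaining" counts until its state reaches "completed"; only then
    // is the shard fully removed.
    use admin
    db.runCommand({ removeShard: "rs1" })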
