
WL#7277: InnoDB: Bulk Load for Create Index


Description

Currently we create an index by repeatedly inserting records. It's very slow when we have a large collection of records. Bulk loading can be done much more efficiently.

We sort the records by index key and then insert them, exploiting the sorted order of the records. The basic idea of bulk load is that we build the index from the bottom up (also known as a sorted index build).

Index entries first go into a leaf page, and when the leaf page is filled up we add the corresponding search-key entry to the parent non-leaf node and start filling the next leaf page. Filling records into leaf pages is fast, and there is much less splitting during the build. This approach also maintains good locality: once a page has been filled and unpinned, it can be flushed to disk, because it will not be pinned again during the build phase.

Problem Statement:


Speed up index build by bulk loading the data.

When we build or rebuild an index, we usually have three phases.

Phase-1: Generate Initial Runs

Scan the clustered index, generate index entries, and add them to a sort buffer. When the sort buffer is full, we sort the entries and write/append the buffer to a temporary intermediate file.
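
As a rough illustration, the following minimal C++ sketch shows the run-generation idea. The names (Run, generate_runs) and the plain int keys are invented for this example; the real code builds full index tuples and appends each run to the temporary file.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Run = std::vector<int>;   // one sorted batch ("run") of index keys

// Phase-1 sketch: fill a bounded sort buffer from the clustered-index scan,
// sort it, and emit it as a run (a real implementation appends it to a file).
std::vector<Run> generate_runs(const std::vector<int>& scanned_keys,
                               std::size_t buffer_capacity) {
    std::vector<Run> runs;
    Run buffer;
    for (int key : scanned_keys) {
        buffer.push_back(key);
        if (buffer.size() == buffer_capacity) {   // sort buffer is full
            std::sort(buffer.begin(), buffer.end());
            runs.push_back(buffer);               // "write/append" the run
            buffer.clear();
        }
    }
    if (!buffer.empty()) {                        // flush the last partial run
        std::sort(buffer.begin(), buffer.end());
        runs.push_back(buffer);
    }
    return runs;
}
```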

Phase-2: Merge Sort

Now we have one or more runs in the file, so we run a merge sort to get the complete file sorted (and thus all the entries sorted).

Phase-1 and Phase-2 together form a common external sort algorithm.
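
Continuing the sketch above, Phase-2 can be pictured as a k-way merge of the sorted runs using a min-heap. Again the names are illustrative only; InnoDB merges runs stored in the temporary file rather than in-memory vectors.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

using Run = std::vector<int>;   // as produced by the Phase-1 sketch

// Phase-2 sketch: k-way merge of the sorted runs into one sorted sequence.
std::vector<int> merge_runs(const std::vector<Run>& runs) {
    // Heap entry: (key, run index, offset within run); smallest key on top.
    using Head = std::tuple<int, std::size_t, std::size_t>;
    std::priority_queue<Head, std::vector<Head>, std::greater<Head>> heap;
    for (std::size_t r = 0; r < runs.size(); ++r)
        if (!runs[r].empty()) heap.push({runs[r][0], r, 0});

    std::vector<int> merged;
    while (!heap.empty()) {
        auto [key, r, i] = heap.top();
        heap.pop();
        merged.push_back(key);
        if (i + 1 < runs[r].size())               // advance within that run
            heap.push({runs[r][i + 1], r, i + 1});
    }
    return merged;
}
```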

Phase-3: Index Build

Once we have all index entries in sorted order, they are inserted into the B-tree using the normal insert APIs. The obvious optimization here is to use a sorted build. This WL addresses that requirement and enables the sorted build path instead of the normal insert APIs.


Let's understand the current implementation in InnoDB:

  • STEP-1: Optimistic insert - Open a B-tree cursor to find the insert position and try to insert the tuple into the page. If the insert fails (the page cannot accommodate the new record because it has already reached its capacity), try a pessimistic insert.
  • STEP-2: Pessimistic insert - Open a B-tree cursor and do the necessary splits and merges to make room for the tuple.
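
For intuition, here is a toy C++ model of that two-step, top-down path over a list of small fixed-capacity "pages". It is purely illustrative: the types, the linear cursor walk, and the naive halving split are invented for this sketch and are not InnoDB's cursor or page APIs.

```cpp
#include <algorithm>
#include <cstddef>
#include <list>
#include <vector>

constexpr std::size_t kPageCapacity = 4;   // toy page size
using Page   = std::vector<int>;           // sorted keys in one page
using Leaves = std::list<Page>;            // leaf level, left to right

// Insert one key via the current top-down path: cursor search, optimistic
// insert, and a pessimistic (split) insert when the page is full.
void insert_entry(Leaves& leaves, int key) {
    // "Open a B-tree cursor": locate the page that should hold the key
    // (O(log n) in a real tree; a linear walk keeps this sketch short).
    auto it = leaves.begin();
    while (std::next(it) != leaves.end() && !it->empty() && it->back() < key)
        ++it;

    // STEP-1: optimistic insert -- the page can still accommodate the key.
    if (it->size() < kPageCapacity) {
        it->insert(std::lower_bound(it->begin(), it->end(), key), key);
        return;
    }

    // STEP-2: pessimistic insert -- split the full page, then place the key.
    Page right(it->begin() + it->size() / 2, it->end());
    it->erase(it->begin() + it->size() / 2, it->end());
    auto rit = leaves.insert(std::next(it), std::move(right));
    Page& target = (key <= it->back()) ? *it : *rit;
    target.insert(std::lower_bound(target.begin(), target.end(), key), key);
}

int main() {
    Leaves leaves{Page{}};                 // start with one empty page
    for (int k : {5, 1, 9, 3, 7, 2, 8, 6, 4}) insert_entry(leaves, k);
}
```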

The current algorithm has the following shortcomings:

  • For each entry, the insert position must be searched for, which is a costly process even though each search is bounded by O(log n) complexity.
  • The current algorithm is top-down, which results in extensive splitting and merging.

The newly proposed algorithm tries to address these issues using a bottom-up build technique and thus avoids/reduces splits and merges. (For compressed tables, splits will still occur.)


High Level Approach Description:


  • Bottom-up Index Build

Since the tuples are ordered, we allocate a leaf page and append tuples to it. Once it is full (the fill factor is dictated by an external configuration variable), we append a node pointer for the page to its parent page; the same process is then followed for all the non-leaf pages, which in turn can lead to inserts all the way up to the root. The next tuple then goes to a new page, following the same semantics described above.

We hold a reference to the rightmost page at each level of the B-tree. All inserts go to these pages (including node pointers inserted into non-leaf pages). If a page has no space left for a new tuple, we allocate a new sibling page, releasing the reference to the old page after updating the non-leaf pages as needed. The newly pinned sibling then acts as the rightmost page and the default location for upcoming new tuples.
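
To make the shape of the algorithm concrete, here is a small self-contained sketch of a bottom-up build: one open rightmost "page" per level, plain appends, and a node pointer pushed to the parent only when a page fills up. All names and the int-only pages are invented for illustration; they are not InnoDB structures.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

constexpr std::size_t kPageCapacity = 4;            // toy page capacity

struct Page  { std::vector<int> keys; };
struct Level { std::vector<Page> pages{Page{}}; };  // pages.back() = rightmost

struct BulkBuilder {
    std::vector<Level> levels{Level{}};             // levels[0] = leaf level

    // Append a key at the given level. If the rightmost page is full, start
    // a new sibling and push node pointers (first keys) into the parent.
    void append(std::size_t lvl, int key) {
        if (lvl == levels.size()) levels.push_back(Level{});
        if (levels[lvl].pages.back().keys.size() < kPageCapacity) {
            levels[lvl].pages.back().keys.push_back(key);   // plain append
            return;
        }
        // First split on this level: the parent also needs a pointer to the
        // existing (leftmost) page before it gets one for the new sibling.
        if (levels[lvl].pages.size() == 1)
            append(lvl + 1, levels[lvl].pages.front().keys.front());
        levels[lvl].pages.push_back(Page{});                // new rightmost page
        levels[lvl].pages.back().keys.push_back(key);
        append(lvl + 1, key);                               // node pointer upward
    }

    void add(int key) { append(0, key); }           // keys must arrive sorted
};

int main() {
    BulkBuilder b;
    for (int k = 1; k <= 100; ++k) b.add(k);        // sorted output of the merge phase
    for (std::size_t l = 0; l < b.levels.size(); ++l)
        std::printf("level %zu: %zu page(s)\n", l, b.levels[l].pages.size());
}
```

Every key lands with a simple append to a rightmost page, which is why the repeated cursor searches and most of the splitting of the top-down path disappear.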

  • Redo Logging Disabled

REDO logging is selectively turned off; instead, a checkpoint is done to ensure that the built index can withstand a crash/failure. The checkpoint forces all dirty pages to be written to disk. While the bulk load is in progress, we signal the page cleaner to flush dirty pages periodically, so that the checkpoint can be short and fast.

(Note: the page-cleaner thread would do this by default, but it triggers the action only when the number of non-dirty pages falls below a set threshold. In this case we would like to flush the pages immediately, reducing the overhead of the checkpoint and also parallelizing I/O and CPU activity.)

There are two main actions that need redo logging:

  1. Page allocation: REDO logging is turned on as usual for this action, except when the complete table is being rebuilt.
  2. Page modification: REDO logging is turned off for this. On a crash there is no need to restore the page contents, because the index is only half-built and will not be recorded in the metadata.
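
As a hedged sketch of that decision (the enum, the flag, and needs_redo() are invented for this example; InnoDB expresses this through its mini-transaction logging modes rather than an API like this):

```cpp
#include <cstdio>

enum class Action { PageAllocation, PageModification };

// Should this bulk-load action generate redo log records?
bool needs_redo(Action action, bool whole_table_rebuild) {
    switch (action) {
    case Action::PageAllocation:
        // Logged as usual, except when the complete table is being rebuilt
        // (the new tablespace is discarded anyway if we crash mid-build).
        return !whole_table_rebuild;
    case Action::PageModification:
        // Never logged: after a crash the half-built index is simply dropped,
        // so there is no need to restore page contents; the checkpoint at the
        // end of the build makes the finished pages durable.
        return false;
    }
    return true;   // unreachable; keeps the compiler happy
}

int main() {
    std::printf("allocation (ADD INDEX) -> redo: %d\n", needs_redo(Action::PageAllocation, false));
    std::printf("allocation (rebuild)   -> redo: %d\n", needs_redo(Action::PageAllocation, true));
    std::printf("modification           -> redo: %d\n", needs_redo(Action::PageModification, false));
}
```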
  • Fill Factor

The fill-factor value determines the percentage of space on a page to be filled with data, reserving the remainder of each page as free space for future growth. For example, specifying a fill-factor value of 80 means that 20 percent of each page will be left empty. We do not keep exactly 20 percent free space; the actual free percentage varies, and may be greater or less than 20. The value is interpreted as a hint, not a hard guarantee.

The fill factor applies to both leaf-level and non-leaf pages, but does not apply to external pages for TEXT/BLOB columns in the B-tree. The fill-factor concept is newly introduced, and its use is currently restricted to the sorted build.

It is controlled by a global configuration parameter named "INNODB_FILL_FACTOR".

Agreeable limits: 10 <= fill-factor <= 100
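
A tiny illustration of how the hint translates into bytes on a default 16 KB page; the fill_limit() helper is invented for this example, and the server only promises to treat the value as a hint:

```cpp
#include <cstddef>
#include <cstdio>

constexpr std::size_t kPageSize = 16 * 1024;    // default InnoDB page size

// Bytes the sorted build aims to fill before moving on to the next page.
std::size_t fill_limit(unsigned fill_factor) {  // expected range: 10..100
    return kPageSize * fill_factor / 100;
}

int main() {
    for (unsigned ff : {50u, 80u, 100u})
        std::printf("innodb_fill_factor=%3u -> fill up to %5zu bytes, "
                    "leave about %5zu bytes free\n",
                    ff, fill_limit(ff), kPageSize - fill_limit(ff));
}
```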

Note: it might be better to support a per-index fill factor in the syntax, e.g. CREATE INDEX ... FILLFACTOR=80. We need a separate WL, in cooperation with other teams, to address this. As Marko mentioned, the fill factor should then be an attribute in the dd.indexes.options string.

  • Compressed Table

In the existing approach, a record is inserted into both the uncompressed and the compressed page. When the modification log of the compressed page fills up, we recompress the page, and split it if compression fails.

In bulk load, we append records only to uncompressed pages. When an uncompressed page is filled up, we compress it. If compression fails, we split the page into two pages and recompress until compression succeeds.

We keep a certain amount of padding space in each page, which is determined by adaptive padding.
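
A toy sketch of the compress-or-split loop described above; the compresses() predicate is a stand-in for the real zlib compression attempt against the compressed page size, and the whole structure is illustrative rather than InnoDB code:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

using Page = std::vector<int>;                 // an uncompressed page of entries

// Stand-in for "the page image fits in the compressed page after zipping".
constexpr std::size_t kZipCapacity = 8;
bool compresses(const Page& page) { return page.size() <= kZipCapacity; }

// When an uncompressed page is finished, keep splitting it in half until
// every resulting piece compresses successfully.
std::vector<Page> compress_or_split(Page page) {
    std::vector<Page> done;
    std::vector<Page> pending;
    pending.push_back(std::move(page));
    while (!pending.empty()) {
        Page p = std::move(pending.back());
        pending.pop_back();
        if (compresses(p) || p.size() <= 1) {  // success (or cannot split further)
            done.push_back(std::move(p));
            continue;
        }
        Page right(p.begin() + p.size() / 2, p.end());
        p.erase(p.begin() + p.size() / 2, p.end());
        pending.push_back(std::move(right));   // processed after the left half,
        pending.push_back(std::move(p));       // so `done` stays in key order
    }
    return done;
}
```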

  • Fulltext Index

The current implementation uses SQL to insert records into the fulltext index. We will use bulk load to speed up fulltext index builds as well.
