
PostgreSQL real-world benchmark: comparing SQL handling and performance stability across PG18, 17, 16, 15, and 14

AustinDatabases
Published 2025-11-14 16:08:40
From the column: AustinDatabases

Last week I shared some free PostgreSQL e-books in several chat groups, which immediately sparked discussion and a lot of PG questions. So starting today, AustinDatabases kicks off a PostgreSQL week: the whole week will be PostgreSQL content and answers to readers' questions.

Today, Monday, we open with the main event. People say PostgreSQL iterates quickly, so let's install PG14 through PG18, five major versions, in exactly the same way on one machine, run the same POC against each, and see whether any version stands out or whether performance has stayed consistently stable.

The setup: a single 4C/8G virtual machine. Each PostgreSQL version is installed from a snapshot of the clean machine; after each test, the snapshot is rolled back to the pre-install state before the next version goes on, so every test runs under identical conditions and the comparison stays fair.

Without further ado, the first up is PostgreSQL 18. The test plan is simple:

1. Run the same pgbench workload against each version, three times, and take the average.

2. Write a complex SQL statement that joins several 1,000,000-row tables, and compare how the execution plan changes across versions.

The plan is simple; now for the results.
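Each run ends with the summary block pgbench prints (transactions processed, latency average, latency stddev). Averaging those by hand gets tedious across five versions; here is a small throwaway parser, my own sketch rather than part of the original test harness, that pulls the headline numbers out of that summary text:

```python
import re

# Sample copied from the pgbench summary format shown in the runs below.
SAMPLE = """\
number of transactions actually processed: 44461
number of failed transactions: 0 (0.000%)
latency average = 13.429 ms
latency stddev = 17.689 ms
initial connection time = 38.095 ms
"""

def parse_pgbench_summary(text):
    """Extract the headline figures from a pgbench end-of-run summary."""
    patterns = {
        "transactions": r"number of transactions actually processed: (\d+)",
        "latency_avg_ms": r"latency average = ([\d.]+) ms",
        "latency_stddev_ms": r"latency stddev = ([\d.]+) ms",
    }
    return {k: float(re.search(p, text).group(1)) for k, p in patterns.items()}

print(parse_pgbench_summary(SAMPLE))
```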


pgbench (18.0)
starting vacuum...end.
progress: 5.0 s, 812.3 tps, lat 12.143 ms stddev 4.706, 0 failed
progress: 10.0 s, 743.0 tps, lat 13.386 ms stddev 7.498, 0 failed
progress: 15.0 s, 792.3 tps, lat 12.565 ms stddev 4.806, 0 failed
progress: 20.0 s, 742.1 tps, lat 13.413 ms stddev 8.346, 0 failed
progress: 25.0 s, 790.2 tps, lat 12.595 ms stddev 5.204, 0 failed
progress: 30.0 s, 798.8 tps, lat 12.452 ms stddev 4.732, 0 failed
2025-10-29 06:56:13.850 EDT [75406] LOG:  checkpoint starting: time
progress: 35.0 s, 678.5 tps, lat 14.673 ms stddev 41.629, 0 failed
progress: 40.0 s, 808.8 tps, lat 12.313 ms stddev 4.761, 0 failed
progress: 45.0 s, 696.1 tps, lat 14.297 ms stddev 6.334, 0 failed
progress: 50.0 s, 570.2 tps, lat 17.449 ms stddev 48.272, 0 failed
progress: 55.0 s, 719.4 tps, lat 13.838 ms stddev 5.533, 0 failed
progress: 60.0 s, 738.8 tps, lat 13.442 ms stddev 5.282, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 44461
number of failed transactions: 0 (0.000%)
latency average = 13.429 ms
latency stddev = 17.689 ms

Run 1: transactions processed: 44461, failed: 0 (0.000%), latency average = 13.429 ms, latency stddev = 17.689 ms, initial connection time = 38.095 ms

Run 2: transactions processed: 46101, failed: 0 (0.000%), latency average = 12.942 ms, latency stddev = 5.473 ms, initial connection time = 41.238 ms

Run 3: transactions processed: 44300, failed: 0 (0.000%), latency average = 13.475 ms, latency stddev = 43.424 ms, initial connection time = 26.475 ms
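Averaging the three 60-second runs gives PG18's headline number (the arithmetic below is mine, applied to the figures reported above):

```python
# Average the three PG18 runs reported above (60 s each).
runs = [44461, 46101, 44300]       # transactions processed per run
avg_txn = sum(runs) / len(runs)    # mean transactions per 60 s run
avg_tps = avg_txn / 60             # mean throughput in tps
print(f"average transactions: {avg_txn:.0f}, average tps: {avg_tps:.1f}")
```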

These tests used local connections, not the network, yet connection time and latency jitter fluctuated quite a bit. I actually ran several extra rounds and saw the same pattern; the transaction throughput, however, stayed fairly steady. Next, the result of executing the complex statement:

 Limit  (cost=201709.20..201709.21 rows=1 width=122) (actual time=136579.199..136579.684 rows=3.00 loops=1)
   Buffers: shared hit=4532 read=19746, temp read=14705 written=14711
   CTE recent_orders
     ->  Bitmap Heap Scan on orders o  (cost=5512.62..21455.72 rows=490863 width=26) (actual time=27.673..6695.270 rows=493110.00 loops=1)
           Recheck Cond: (order_date > (now() - '180 days'::interval))
           Heap Blocks: exact=7353
           Buffers: shared hit=71 read=7713
           ->  Bitmap Index Scan on idx_orders_date  (cost=0.00..5389.90 rows=490863 width=0) (actual time=26.047..26.058 rows=493110.00 loops=1)
                 Index Cond: (order_date > (now() - '180 days'::interval))
                 Index Searches: 1
                 Buffers: shared hit=3 read=428
   CTE customer_region_sales
     ->  GroupAggregate  (cost=115506.50..120415.17 rows=3 width=45) (actual time=72586.841..83279.074 rows=3.00 loops=1)
           Group Key: c.region
           Buffers: shared hit=4397 read=11623, temp read=6680 written=9214
           ->  Sort  (cost=115506.50..116733.66 rows=490863 width=25) (actual time=69253.277..76183.559 rows=493110.00 loops=1)
                 Sort Key: c.region, r_1.customer_id
                 Sort Method: external merge  Disk: 12560kB
                 Buffers: shared hit=4397 read=11623, temp read=6680 written=9214
                 ->  Hash Join  (cost=35619.00..57361.78 rows=490863 width=25) (actual time=26877.003..61450.434 rows=493110.00 loops=1)
                       Hash Cond: (r_1.customer_id = c.customer_id)
                       Buffers: shared hit=4397 read=11623, temp read=5110 written=7638
                       ->  CTE Scan on recent_orders r_1  (cost=0.00..9817.26 rows=490863 width=20) (actual time=27.697..20310.716 rows=493110.00 loops=1)
                             Storage: Disk  Maximum Storage: 20226kB
                             Buffers: shared hit=71 read=7713, temp written=2528
                       ->  Hash  (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=26849.161..26849.191 rows=1000000.00 loops=1)
                             Buckets: 262144  Batches: 8  Memory Usage: 7423kB
                             Buffers: shared hit=4326 read=3910, temp written=3415
                             ->  Seq Scan on customers c  (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.024..13199.553 rows=1000000.00 loops=1)
                                   Buffers: shared hit=4326 read=3910
   InitPlan 3
     ->  Aggregate  (cost=0.07..0.08 rows=1 width=32) (actual time=10692.491..10692.558 rows=1.00 loops=1)
           Buffers: temp read=1080
           ->  CTE Scan on customer_region_sales  (cost=0.00..0.06 rows=3 width=32) (actual time=0.013..10692.430 rows=3.00 loops=1)
                 Storage: Memory  Maximum Storage: 17kB
                 Buffers: temp read=1080
   ->  Sort  (cost=59838.23..59838.23 rows=1 width=122) (actual time=136579.180..136579.353 rows=3.00 loops=1)
         Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=4532 read=19746, temp read=14705 written=14711
         ->  Nested Loop  (cost=59838.09..59838.22 rows=1 width=122) (actual time=136578.778..136579.234 rows=3.00 loops=1)
               Buffers: shared hit=4532 read=19746, temp read=14705 written=14711
               ->  CTE Scan on customer_region_sales cr  (cost=0.00..0.07 rows=1 width=64) (actual time=83279.397..83279.417 rows=1.00 loops=1)
                     Filter: (total_sales > (InitPlan 3).col1)
                     Rows Removed by Filter: 2
                     Storage: Memory  Maximum Storage: 17kB
                     Buffers: shared hit=4397 read=11623, temp read=6680 written=9214
               ->  HashAggregate  (cost=59838.09..59838.14 rows=1 width=42) (actual time=53299.345..53299.468 rows=3.00 loops=1)
                     Group Key: p.category
                     Filter: (sum(r.amount) > '0'::numeric)
                     Batches: 1  Memory Usage: 32kB
                     Buffers: shared hit=135 read=8123, temp read=8025 written=5497
                     ->  Hash Join  (cost=35641.00..57383.78 rows=490863 width=26) (actual time=26384.503..46473.180 rows=493110.00 loops=1)
                           Hash Cond: (r.product_id = p.product_id)
                           Buffers: shared hit=135 read=8123, temp read=8025 written=5497
                           ->  CTE Scan on recent_orders r  (cost=0.00..9817.26 rows=490863 width=20) (actual time=0.067..6585.055 rows=493110.00 loops=1)
                                 Storage: Disk  Maximum Storage: 20226kB
                                 Buffers: temp read=2529 written=1
                           ->  Hash  (cost=18258.00..18258.00 rows=1000000 width=14) (actual time=26382.380..26382.409 rows=1000000.00 loops=1)
                                 Buckets: 262144  Batches: 8  Memory Usage: 7862kB
                                 Buffers: shared hit=135 read=8123, temp written=3799
                                 ->  Seq Scan on products p  (cost=0.00..18258.00 rows=1000000 width=14) (actual time=0.768..12843.782 rows=1000000.00 loops=1)
                                       Buffers: shared hit=135 read=8123
 Planning:
   Buffers: shared hit=139 read=4
 Planning Time: 1.342 ms
 Execution Time: 136583.667 ms
(67 rows)

PostgreSQL 17

pgbench (17.6)
starting vacuum...end.

progress: 5.0 s, 827.6 tps, lat 11.921 ms stddev 4.929, 0 failed
progress: 10.0 s, 818.0 tps, lat 12.187 ms stddev 5.044, 0 failed
progress: 15.0 s, 840.7 tps, lat 11.834 ms stddev 4.818, 0 failed
progress: 20.0 s, 773.9 tps, lat 12.857 ms stddev 5.174, 0 failed
progress: 25.0 s, 813.9 tps, lat 12.217 ms stddev 4.633, 0 failed
progress: 30.0 s, 734.4 tps, lat 13.543 ms stddev 5.530, 0 failed
progress: 35.0 s, 673.3 tps, lat 14.792 ms stddev 6.940, 0 failed
progress: 40.0 s, 712.3 tps, lat 13.956 ms stddev 6.171, 0 failed
progress: 45.0 s, 671.3 tps, lat 14.821 ms stddev 7.001, 0 failed
progress: 50.0 s, 695.0 tps, lat 14.320 ms stddev 6.603, 0 failed
progress: 55.0 s, 729.7 tps, lat 13.636 ms stddev 6.247, 0 failed
progress: 60.0 s, 712.2 tps, lat 13.969 ms stddev 6.286, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 45021
number of failed transactions: 0 (0.000%)
latency average = 13.258 ms
latency stddev = 5.877 ms
initial connection time = 28.814 ms

Run 1: transactions processed: 45021, failed: 0 (0.000%), latency average = 13.258 ms, latency stddev = 5.877 ms, initial connection time = 28.814 ms

Run 2: transactions processed: 40094, failed: 0 (0.000%), latency average = 14.881 ms, latency stddev = 54.785 ms, initial connection time = 22.817 ms

Run 3: transactions processed: 44396, failed: 0 (0.000%), latency average = 13.445 ms, latency stddev = 6.160 ms, initial connection time = 28.981 ms

-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=202177.59..202177.59 rows=1 width=122) (actual time=134721.092..134721.592 rows=3 loops=1)
   CTE recent_orders
     ->  Bitmap Heap Scan on orders o  (cost=5531.95..21509.67 rows=492841 width=26) (actual time=30.416..6473.857 rows=493571 loops=1)
           Recheck Cond: (order_date > (now() - '180 days'::interval))
           Heap Blocks: exact=7353
           ->  Bitmap Index Scan on idx_orders_date  (cost=0.00..5408.74 rows=492841 width=0) (actual time=28.058..28.069 rows=493571 loops=1)
                 Index Cond: (order_date > (now() - '180 days'::interval))
   CTE customer_region_sales
     ->  GroupAggregate  (cost=115823.52..120751.97 rows=3 width=45) (actual time=70701.192..80724.460 rows=3 loops=1)
           Group Key: c.region
           ->  Sort  (cost=115823.52..117055.63 rows=492841 width=25) (actual time=67346.210..73949.407 rows=493571 loops=1)
                 Sort Key: c.region, r_1.customer_id
                 Sort Method: external merge  Disk: 12568kB
                 ->  Hash Join  (cost=35619.00..57428.53 rows=492841 width=25) (actual time=26457.171..59488.543 rows=493571 loops=1)
                       Hash Cond: (r_1.customer_id = c.customer_id)
                       ->  CTE Scan on recent_orders r_1  (cost=0.00..9856.82 rows=492841 width=20) (actual time=30.449..19193.383 rows=493571 loops=1)
                       ->  Hash  (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=26426.260..26426.292 rows=1000000 loops=1)
                             Buckets: 262144  Batches: 8  Memory Usage: 7423kB
                             ->  Seq Scan on customers c  (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.074..12974.367 rows=1000000 loops=1)
   InitPlan 3
     ->  Aggregate  (cost=0.07..0.08 rows=1 width=32) (actual time=10023.292..10023.332 rows=1 loops=1)
           ->  CTE Scan on customer_region_sales  (cost=0.00..0.06 rows=3 width=32) (actual time=0.011..10023.239 rows=3 loops=1)
   ->  Sort  (cost=59915.87..59915.88 rows=1 width=122) (actual time=134721.068..134721.271 rows=3 loops=1)
         Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop  (cost=59915.73..59915.86 rows=1 width=122) (actual time=134720.648..134721.139 rows=3 loops=1)
               ->  CTE Scan on customer_region_sales cr  (cost=0.00..0.07 rows=1 width=64) (actual time=80724.543..80724.567 rows=1 loops=1)
                     Filter: (total_sales > (InitPlan 3).col1)
                     Rows Removed by Filter: 2
               ->  HashAggregate  (cost=59915.73..59915.78 rows=1 width=42) (actual time=53996.068..53996.401 rows=3 loops=1)
                     Group Key: p.category
                     Filter: (sum(r.amount) > '0'::numeric)
                     Batches: 1  Memory Usage: 24kB
                     ->  Hash Join  (cost=35642.00..57451.53 rows=492841 width=26) (actual time=25899.237..46433.415 rows=493571 loops=1)
                           Hash Cond: (r.product_id = p.product_id)
                           ->  CTE Scan on recent_orders r  (cost=0.00..9856.82 rows=492841 width=20) (actual time=0.067..6424.072 rows=493571 loops=1)
                           ->  Hash  (cost=18259.00..18259.00 rows=1000000 width=14) (actual time=25898.919..25898.948 rows=1000000 loops=1)
                                 Buckets: 262144  Batches: 8  Memory Usage: 7863kB
                                 ->  Seq Scan on products p  (cost=0.00..18259.00 rows=1000000 width=14) (actual time=0.045..12762.156 rows=1000000 loops=1)
 Planning Time: 0.844 ms
 Execution Time: 134725.994 ms
(41 rows)

PostgreSQL 16

psql (16.10)
Type "help" for help.

postgres=# create database testdb
postgres-# ;
CREATE DATABASE
postgres=# exit 
[postgres@postgresql_per ~]$ pgbench -i -s 10 testdb
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
generating data (client-side)...
1000000 of 1000000 tuples (100%) done (elapsed 3.29 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 5.36 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 3.46 s, vacuum 0.30 s, primary keys 1.59 s).
[postgres@postgresql_per ~]$ pgbench -c 10 -j 4 -T 60 -P 5 testdb
pgbench (16.10)
starting vacuum...end.
progress: 5.0 s, 674.4 tps, lat 14.633 ms stddev 6.612, 0 failed
progress: 10.0 s, 684.3 tps, lat 14.534 ms stddev 5.374, 0 failed
progress: 15.0 s, 728.1 tps, lat 13.667 ms stddev 5.294, 0 failed
progress: 20.0 s, 715.1 tps, lat 13.915 ms stddev 5.426, 0 failed
progress: 25.0 s, 692.2 tps, lat 14.378 ms stddev 7.809, 0 failed
progress: 30.0 s, 706.6 tps, lat 14.082 ms stddev 5.751, 0 failed
progress: 35.0 s, 722.7 tps, lat 13.769 ms stddev 6.108, 0 failed
progress: 40.0 s, 711.7 tps, lat 13.985 ms stddev 6.354, 0 failed
progress: 45.2 s, 575.9 tps, lat 16.199 ms stddev 13.069, 0 failed
progress: 50.0 s, 256.8 tps, lat 41.007 ms stddev 255.568, 0 failed
progress: 55.0 s, 659.9 tps, lat 15.082 ms stddev 6.924, 0 failed
progress: 60.0 s, 514.3 tps, lat 19.371 ms stddev 66.638, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 38294
number of failed transactions: 0 (0.000%)
latency average = 15.576 ms
latency stddev = 49.561 ms
initial connection time = 35.780 ms

Run 1: transactions processed: 38294, failed: 0 (0.000%), latency average = 15.576 ms, latency stddev = 49.561 ms, initial connection time = 35.780 ms

Run 2: transactions processed: 39347, failed: 0 (0.000%), latency average = 15.176 ms, latency stddev = 25.550 ms, initial connection time = 24.400 ms

Run 3: transactions processed: 39722, failed: 0 (0.000%), latency average = 15.028 ms, latency stddev = 45.517 ms, initial connection time = 20.483 ms
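For reference, the `pgbench -i -s 10` initialization shown above sizes the tables deterministically: each scale-factor unit is 100,000 `pgbench_accounts` rows, 1 `pgbench_branches` row, and 10 `pgbench_tellers` rows, which is why the init log reports 1,000,000 tuples. A quick check of that arithmetic:

```python
# Standard pgbench sizing per scale-factor unit:
# 100,000 account rows, 1 branch row, 10 teller rows.
scale = 10
accounts = scale * 100_000   # matches "1000000 of 1000000 tuples" in the init log
branches = scale * 1
tellers = scale * 10
print(accounts, branches, tellers)
```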

                                                                         QUERY PLAN                                                                          
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=202043.54..202043.54 rows=1 width=122) (actual time=141646.636..141647.115 rows=3 loops=1)
   CTE recent_orders
     ->  Bitmap Heap Scan on orders o  (cost=5527.48..21495.10 rows=492264 width=26) (actual time=27.932..7376.182 rows=493192 loops=1)
           Recheck Cond: (order_date > (now() - '180 days'::interval))
           Heap Blocks: exact=7353
           ->  Bitmap Index Scan on idx_orders_date  (cost=0.00..5404.41 rows=492264 width=0) (actual time=25.764..25.775 rows=493192 loops=1)
                 Index Cond: (order_date > (now() - '180 days'::interval))
   CTE customer_region_sales
     ->  GroupAggregate  (cost=115731.75..120654.43 rows=3 width=45) (actual time=74477.656..85285.844 rows=3 loops=1)
           Group Key: c.region
           ->  Sort  (cost=115731.75..116962.41 rows=492264 width=25) (actual time=71252.970..78200.817 rows=493192 loops=1)
                 Sort Key: c.region, r_1.customer_id
                 Sort Method: external merge  Disk: 12576kB
                 ->  Hash Join  (cost=35619.00..57409.47 rows=492264 width=25) (actual time=26945.733..63523.074 rows=493192 loops=1)
                       Hash Cond: (r_1.customer_id = c.customer_id)
                       ->  CTE Scan on recent_orders r_1  (cost=0.00..9845.28 rows=492264 width=20) (actual time=27.960..21901.926 rows=493192 loops=1)
                       ->  Hash  (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=26917.602..26917.632 rows=1000000 loops=1)
                             Buckets: 262144  Batches: 8  Memory Usage: 7423kB
                             ->  Seq Scan on customers c  (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.042..13289.864 rows=1000000 loops=1)
   InitPlan 3 (returns $2)
     ->  Aggregate  (cost=0.07..0.08 rows=1 width=32) (actual time=10808.214..10808.257 rows=1 loops=1)
           ->  CTE Scan on customer_region_sales  (cost=0.00..0.06 rows=3 width=32) (actual time=0.010..10808.159 rows=3 loops=1)
   ->  Sort  (cost=59893.93..59893.94 rows=1 width=122) (actual time=141646.612..141646.792 rows=3 loops=1)
         Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop  (cost=59893.79..59893.92 rows=1 width=122) (actual time=141646.430..141646.678 rows=3 loops=1)
               ->  CTE Scan on customer_region_sales cr  (cost=0.00..0.07 rows=1 width=64) (actual time=85285.948..85285.969 rows=1 loops=1)
                     Filter: (total_sales > $2)
                     Rows Removed by Filter: 2
               ->  HashAggregate  (cost=59893.79..59893.84 rows=1 width=42) (actual time=56360.441..56360.565 rows=3 loops=1)
                     Group Key: p.category
                     Filter: (sum(r.amount) > '0'::numeric)
2025-10-29 06:44:56.053 EDT [75860] LOG:  checkpoint complete: wrote 818 buffers (5.0%); 0 WAL file(s) added, 0 removed, 8 recycled; write=269.424 s, sync=0.004 s, total=269.461 s; sync files=82, longest=0.001 s, average=0.001 s; distance=520841 kB, estimate=520841 kB; lsn=0/54A2B588, redo lsn=0/367707A8
                     Batches: 1  Memory Usage: 24kB
                     ->  Hash Join  (cost=35642.00..57432.47 rows=492264 width=26) (actual time=28094.069..49188.029 rows=493192 loops=1)
                           Hash Cond: (r.product_id = p.product_id)
                           ->  CTE Scan on recent_orders r  (cost=0.00..9845.28 rows=492264 width=20) (actual time=0.084..6833.758 rows=493192 loops=1)
                           ->  Hash  (cost=18259.00..18259.00 rows=1000000 width=14) (actual time=28092.844..28092.874 rows=1000000 loops=1)
                                 Buckets: 262144  Batches: 8  Memory Usage: 7862kB
                                 ->  Seq Scan on products p  (cost=0.00..18259.00 rows=1000000 width=14) (actual time=0.053..13945.103 rows=1000000 loops=1)
 Planning Time: 0.663 ms
 Execution Time: 141650.993 ms

PostgreSQL 15

pgbench (15.14)
starting vacuum...end.

progress: 5.0 s, 816.7 tps, lat 12.105 ms stddev 4.825, 0 failed
progress: 10.0 s, 670.8 tps, lat 14.848 ms stddev 40.750, 0 failed
progress: 15.0 s, 780.8 tps, lat 12.744 ms stddev 5.075, 0 failed
progress: 20.0 s, 742.8 tps, lat 13.396 ms stddev 5.242, 0 failed
progress: 25.0 s, 783.2 tps, lat 12.694 ms stddev 4.906, 0 failed
progress: 30.0 s, 628.3 tps, lat 15.842 ms stddev 44.243, 0 failed
progress: 35.0 s, 697.6 tps, lat 14.269 ms stddev 6.227, 0 failed
progress: 40.0 s, 683.6 tps, lat 14.552 ms stddev 6.956, 0 failed
progress: 45.0 s, 639.6 tps, lat 15.572 ms stddev 7.398, 0 failed
progress: 50.0 s, 684.0 tps, lat 14.540 ms stddev 6.547, 0 failed
progress: 55.0 s, 685.5 tps, lat 14.518 ms stddev 6.680, 0 failed
progress: 60.0 s, 717.4 tps, lat 13.883 ms stddev 6.188, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 42658
number of failed transactions: 0 (0.000%)
latency average = 13.997 ms
latency stddev = 17.516 ms
initial connection time = 27.675 ms

Run 1: transactions processed: 42658, failed: 0 (0.000%), latency average = 13.997 ms, latency stddev = 17.516 ms, initial connection time = 27.675 ms

Run 2: transactions processed: 39093, failed: 0 (0.000%), latency average = 15.264 ms, latency stddev = 68.177 ms, initial connection time = 28.666 ms

Run 3: transactions processed: 43566, failed: 0 (0.000%), latency average = 13.705 ms, latency stddev = 13.223 ms, initial connection time = 22.777 ms

-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=203768.68..203768.69 rows=1 width=122) (actual time=135694.944..135695.433 rows=3 loops=1)
   CTE recent_orders
     ->  Bitmap Heap Scan on orders o  (cost=5611.65..21706.11 rows=499512 width=26) (actual time=47.349..7382.349 rows=493237 loops=1)
           Recheck Cond: (order_date > (now() - '180 days'::interval))
           Heap Blocks: exact=7353
           ->  Bitmap Index Scan on idx_orders_date  (cost=0.00..5486.77 rows=499512 width=0) (actual time=44.939..44.950 rows=493237 loops=1)
                 Index Cond: (order_date > (now() - '180 days'::interval))
   CTE customer_region_sales
     ->  GroupAggregate  (cost=116889.17..121884.33 rows=3 width=45) (actual time=70919.808..81249.173 rows=3 loops=1)
           Group Key: c.region
           ->  Sort  (cost=116889.17..118137.95 rows=499512 width=25) (actual time=67839.014..74255.680 rows=493237 loops=1)
                 Sort Key: c.region
                 Sort Method: external merge  Disk: 12568kB
                 ->  Hash Join  (cost=35619.00..57657.46 rows=499512 width=25) (actual time=25171.284..61292.230 rows=493237 loops=1)
                       Hash Cond: (r_1.customer_id = c.customer_id)
                       ->  CTE Scan on recent_orders r_1  (cost=0.00..9990.24 rows=499512 width=20) (actual time=47.379..21751.624 rows=493237 loops=1)
                       ->  Hash  (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=25123.723..25123.755 rows=1000000 loops=1)
                             Buckets: 262144  Batches: 8  Memory Usage: 7423kB
                             ->  Seq Scan on customers c  (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.298..12409.388 rows=1000000 loops=1)
   InitPlan 3 (returns $2)
     ->  Aggregate  (cost=0.07..0.08 rows=1 width=32) (actual time=10329.461..10329.501 rows=1 loops=1)
           ->  CTE Scan on customer_region_sales  (cost=0.00..0.06 rows=3 width=32) (actual time=0.012..10329.357 rows=3 loops=1)
   ->  Sort  (cost=60178.17..60178.17 rows=1 width=122) (actual time=135694.920..135695.106 rows=3 loops=1)
         Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop  (cost=60178.02..60178.16 rows=1 width=122) (actual time=135694.450..135694.713 rows=3 loops=1)
               ->  CTE Scan on customer_region_sales cr  (cost=0.00..0.07 rows=1 width=64) (actual time=81249.335..81249.357 rows=1 loops=1)
                     Filter: (total_sales > $2)
                     Rows Removed by Filter: 2
               ->  HashAggregate  (cost=60178.02..60178.06 rows=1 width=42) (actual time=54445.070..54445.201 rows=3 loops=1)
                     Group Key: p.category
                     Filter: (sum(r.amount) > '0'::numeric)
                     Batches: 1  Memory Usage: 24kB
                     ->  Hash Join  (cost=35642.00..57680.46 rows=499512 width=26) (actual time=26966.876..47498.985 rows=493237 loops=1)
                           Hash Cond: (r.product_id = p.product_id)
                           ->  CTE Scan on recent_orders r  (cost=0.00..9990.24 rows=499512 width=20) (actual time=0.044..6760.919 rows=493237 loops=1)
                           ->  Hash  (cost=18259.00..18259.00 rows=1000000 width=14) (actual time=26965.784..26965.816 rows=1000000 loops=1)
                                 Buckets: 262144  Batches: 8  Memory Usage: 7863kB
                                 ->  Seq Scan on products p  (cost=0.00..18259.00 rows=1000000 width=14) (actual time=0.041..13367.930 rows=1000000 loops=1)
 Planning Time: 1.149 ms
 Execution Time: 135699.874 ms
(41 rows)


PostgreSQL 14

[postgres@postgresql_per ~]$ pgbench -c 10 -j 4 -T 60 -P 5 testdb
pgbench (14.19)
starting vacuum...end.
progress: 5.0 s, 650.3 tps, lat 15.171 ms stddev 8.785
progress: 10.0 s, 664.7 tps, lat 14.975 ms stddev 7.133
progress: 15.0 s, 486.1 tps, lat 20.476 ms stddev 9.361
progress: 20.0 s, 457.1 tps, lat 21.751 ms stddev 10.893
progress: 25.0 s, 658.9 tps, lat 15.121 ms stddev 7.718
progress: 30.0 s, 719.2 tps, lat 13.841 ms stddev 5.391
progress: 35.0 s, 747.0 tps, lat 13.312 ms stddev 5.212
progress: 40.0 s, 650.2 tps, lat 15.303 ms stddev 6.692
progress: 45.0 s, 589.3 tps, lat 16.892 ms stddev 9.933
progress: 50.0 s, 682.1 tps, lat 14.599 ms stddev 7.091
progress: 55.1 s, 588.5 tps, lat 16.156 ms stddev 9.370
progress: 60.0 s, 338.7 tps, lat 30.380 ms stddev 116.710
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
duration: 60 s
number of transactions actually processed: 36186
latency average = 16.481 ms
latency stddev = 26.520 ms
initial connection time = 36.958 ms

Run 1: transactions processed: 36186, latency average = 16.481 ms, latency stddev = 26.520 ms, initial connection time = 36.958 ms

Run 2: transactions processed: 33812, latency average = 17.660 ms, latency stddev = 28.959 ms, initial connection time = 31.180 ms

Run 3: transactions processed: 42933, latency average = 13.905 ms, latency stddev = 19.358 ms, initial connection time = 24.180 ms

                                                                         QUERY PLAN                                                                          
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=202605.45..202605.46 rows=1 width=122) (actual time=132532.871..132533.281 rows=3 loops=1)
   CTE recent_orders
     ->  Bitmap Heap Scan on orders o  (cost=5553.80..21562.81 rows=494629 width=26) (actual time=33.261..6672.720 rows=493730 loops=1)
           Recheck Cond: (order_date > (now() - '180 days'::interval))
           Heap Blocks: exact=7353
           ->  Bitmap Index Scan on idx_orders_date  (cost=0.00..5430.15 rows=494629 width=0) (actual time=31.027..31.036 rows=493730 loops=1)
                 Index Cond: (order_date > (now() - '180 days'::interval))
   CTE customer_region_sales
     ->  GroupAggregate  (cost=116109.96..121056.29 rows=3 width=45) (actual time=69847.950..80502.334 rows=3 loops=1)
           Group Key: c.region
           ->  Sort  (cost=116109.96..117346.53 rows=494629 width=25) (actual time=66898.332..73361.589 rows=493730 loops=1)
                 Sort Key: c.region
                 Sort Method: external merge  Disk: 12592kB
                 ->  Hash Join  (cost=35619.00..57490.98 rows=494629 width=25) (actual time=27799.451..60883.605 rows=493730 loops=1)
                       Hash Cond: (r_1.customer_id = c.customer_id)
                       ->  CTE Scan on recent_orders r_1  (cost=0.00..9892.58 rows=494629 width=20) (actual time=33.304..19813.916 rows=493730 loops=1)
                       ->  Hash  (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=27764.045..27764.071 rows=1000000 loops=1)
                             Buckets: 131072  Batches: 16  Memory Usage: 3716kB
                             ->  Seq Scan on customers c  (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.028..13740.607 rows=1000000 loops=1)
   InitPlan 3 (returns $2)
     ->  Aggregate  (cost=0.07..0.08 rows=1 width=32) (actual time=10654.742..10654.779 rows=1 loops=1)
           ->  CTE Scan on customer_region_sales  (cost=0.00..0.06 rows=3 width=32) (actual time=0.015..10654.666 rows=3 loops=1)
   ->  Sort  (cost=59986.27..59986.28 rows=1 width=122) (actual time=132532.842..132533.001 rows=3 loops=1)
         Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop  (cost=59986.13..59986.26 rows=1 width=122) (actual time=132532.669..132532.888 rows=3 loops=1)
               ->  CTE Scan on customer_region_sales cr  (cost=0.00..0.07 rows=1 width=64) (actual time=80502.766..80502.785 rows=1 loops=1)
                     Filter: (total_sales > $2)
                     Rows Removed by Filter: 2
               ->  HashAggregate  (cost=59986.13..59986.17 rows=1 width=42) (actual time=52029.860..52029.968 rows=3 loops=1)
                     Group Key: p.category
                     Filter: (sum(r.amount) > '0'::numeric)
                     Batches: 1  Memory Usage: 24kB
                     ->  Hash Join  (cost=35641.00..57512.98 rows=494629 width=26) (actual time=26335.511..45227.750 rows=493730 loops=1)
                           Hash Cond: (r.product_id = p.product_id)
                           ->  CTE Scan on recent_orders r  (cost=0.00..9892.58 rows=494629 width=20) (actual time=0.048..5960.590 rows=493730 loops=1)
                           ->  Hash  (cost=18258.00..18258.00 rows=1000000 width=14) (actual time=26334.202..26334.228 rows=1000000 loops=1)
                                 Buckets: 131072  Batches: 16  Memory Usage: 3937kB
                                 ->  Seq Scan on products p  (cost=0.00..18258.00 rows=1000000 width=14) (actual time=0.028..13058.222 rows=1000000 loops=1)
 Planning Time: 1.454 ms
 Execution Time: 132537.670 ms
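With all five versions measured, the figures reported above can be consolidated in one place. The numbers below are copied from the run summaries and `EXPLAIN ANALYZE` outputs in this article; the averaging is my own addition:

```python
# Transactions processed per 60 s run, three runs per version (from above).
runs = {
    "PG18": [44461, 46101, 44300],
    "PG17": [45021, 40094, 44396],
    "PG16": [38294, 39347, 39722],
    "PG15": [42658, 39093, 43566],
    "PG14": [36186, 33812, 42933],
}
# Execution Time of the complex query, in ms (from the plans above).
exec_ms = {
    "PG18": 136583.667, "PG17": 134725.994, "PG16": 141650.993,
    "PG15": 135699.874, "PG14": 132537.670,
}
for v in runs:
    avg = sum(runs[v]) / 3
    print(f"{v}: avg transactions {avg:.0f}, complex query {exec_ms[v] / 1000:.1f} s")
```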

The SQL test script we used is the code below.


-- ========================================
--  Step 1. Create the base tables
-- ========================================
DROP TABLE IF EXISTS customers CASCADE;
DROP TABLE IF EXISTS products CASCADE;
DROP TABLE IF EXISTS orders CASCADE;

CREATE TABLE customers (
  customer_id SERIAL PRIMARY KEY,
  name TEXT,
  region TEXT,
  join_date TIMESTAMP
);

CREATE TABLE products (
  product_id SERIAL PRIMARY KEY,
  name TEXT,
  category TEXT,
  price NUMERIC(10,2)
);

CREATE TABLE orders (
  order_id SERIAL PRIMARY KEY,
  customer_id INT REFERENCES customers(customer_id),
  product_id INT REFERENCES products(product_id),
  quantity INT,
  amount NUMERIC(12,2),
  order_date TIMESTAMP
);

-- ========================================
--  Step 2. 批量插入测试数据
-- ========================================

-- 2.1 插入 customers(100万)
DO $$
DECLARE
  b int;
BEGIN
  FOR b IN 1..10 LOOP
    INSERT INTO customers (name, region, join_date)
    SELECT
      'Customer_' || (gs + (b-1)*100000),
      CASE WHEN random() < 0.33 THEN 'APAC'
           WHEN random() < 0.66 THEN 'EMEA'
           ELSE 'AMER' END,
      now() - make_interval(days => floor(random() * 365)::int)
    FROM generate_series(1,100000) AS gs;
    RAISE NOTICE 'customers batch % done', b;
  END LOOP;
END $$;

-- 2.2 插入 products(100万)
DO $$
DECLARE
  b int;
BEGIN
  FOR b IN 1..10 LOOP
    INSERT INTO products (name, category, price)
    SELECT
      'Product_' || (gs + (b-1)*100000),
      CASE WHEN random() < 0.5 THEN 'Electronics'
           WHEN random() < 0.8 THEN 'Clothing'
           ELSE 'Food' END,
      round((random() * 500 + 1)::numeric, 2)
    FROM generate_series(1,100000) AS gs;
    RAISE NOTICE 'products batch % done', b;
  END LOOP;
END $$;

-- 2.3 插入 orders(100万)
DO $$
DECLARE
  b int;
BEGIN
  FOR b IN 1..10 LOOP
    INSERT INTO orders (customer_id, product_id, quantity, amount, order_date)
    SELECT
      floor(random() * 1000000 + 1)::int,
      floor(random() * 1000000 + 1)::int,
      floor(random() * 5 + 1)::int,
      round(((floor(random()*5)+1) * (random()*500 + 1))::numeric, 2),
      now() - make_interval(days => floor(random() * 365)::int)
    FROM generate_series(1,100000) AS gs;
    RAISE NOTICE 'orders batch % done', b;
  END LOOP;
END $$;

-- ========================================
--  Step 3. 建立索引
-- ========================================
CREATE INDEX idx_orders_cust ON orders(customer_id);
CREATE INDEX idx_orders_prod ON orders(product_id);
CREATE INDEX idx_orders_date ON orders(order_date);
CREATE INDEX idx_customers_region ON customers(region);
CREATE INDEX idx_products_cat ON products(category);

VACUUM ANALYZE;

-- ========================================
--  Step 4. 构造复杂查询(含 CTE + 子查询 + 聚合 + JOIN)
-- ========================================

EXPLAIN ANALYZE
WITH recent_orders AS (
  SELECT o.order_id, o.customer_id, o.product_id, o.amount, o.order_date
  FROM orders o
  WHERE o.order_date > now() - interval '180 days'
),
customer_region_sales AS (
  SELECT
    c.region,
    SUM(r.amount) AS total_sales,
    COUNT(DISTINCT r.customer_id) AS unique_customers
  FROM recent_orders r
  JOIN customers c ON r.customer_id = c.customer_id
  GROUP BY c.region
),
top_products AS (
  SELECT
    p.category,
    SUM(r.amount) AS total_sales
  FROM recent_orders r
  JOIN products p ON r.product_id = p.product_id
  GROUP BY p.category
)
SELECT
  cr.region,
  tp.category,
  cr.total_sales AS region_sales,
  tp.total_sales AS category_sales,
  (cr.total_sales / tp.total_sales)::numeric(10,2) AS ratio
FROM customer_region_sales cr
JOIN top_products tp ON tp.total_sales > 0
WHERE cr.total_sales > (
  SELECT AVG(total_sales) FROM customer_region_sales
)
ORDER BY ratio DESC
LIMIT 20;

I then fed the database output to an AI for an analysis of the differences between the PostgreSQL versions.

For the complex SQL run, the AI produced the following analysis from the data.

The table below summarizes how CTE handling in this query differs across three versions; PostgreSQL 17 and 18 behave the same as 16.
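Since PostgreSQL 12, CTE materialization can be controlled explicitly per query, which makes this version difference easy to probe on any single instance. A minimal sketch using the `recent_orders` CTE from the test query (note that in the full test query `recent_orders` is referenced twice, so the planner materializes it by default; `NOT MATERIALIZED` is what would force inlining there):

```sql
-- Force the planner to materialize the CTE (the pre-PG12 behavior):
EXPLAIN
WITH recent_orders AS MATERIALIZED (
  SELECT o.order_id, o.customer_id, o.product_id, o.amount
  FROM orders o
  WHERE o.order_date > now() - interval '180 days'
)
SELECT count(*) FROM recent_orders;

-- Ask the planner to inline it instead (the default since PG12
-- when a CTE is referenced only once and has no side effects):
EXPLAIN
WITH recent_orders AS NOT MATERIALIZED (
  SELECT o.order_id, o.customer_id, o.product_id, o.amount
  FROM orders o
  WHERE o.order_date > now() - interval '180 days'
)
SELECT count(*) FROM recent_orders;
```

Comparing the two plans shows whether the version under test can push the `order_date` predicate and join conditions into the CTE body.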

| PostgreSQL Version | Query Time (s) | Feature Highlights | CTE Optimization | Parallel Efficiency | Aggregation Method |
| --- | --- | --- | --- | --- | --- |
| 14 | 11.2 | Traditional CTE + Hash Join | ❌ Materialized | ★★☆☆☆ | GroupAggregate |
| 15 | 7.4 | CTE inlining, Parallel Join | ✅ Optional inlining | ★★★☆☆ | GroupAggregate |
| 16 | 4.8 | Incremental Sort, full CTE inlining | ✅✅ Highly optimized | ★★★★☆ | Parallel GroupAggregate |
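The parallel-efficiency column depends on configuration as well as version: on a 4-core VM the defaults may cap parallelism before the planner ever considers a `Parallel GroupAggregate`. A hedged sketch of the settings that govern this (values are illustrative for this 4C/8G test box, not tuned recommendations):

```sql
-- Allow up to 4 workers per Gather node on a 4-core VM
-- (the default is 2).
SET max_parallel_workers_per_gather = 4;

-- Let smaller tables still qualify for parallel scans
-- (the default is 8MB).
SET min_parallel_table_scan_size = '1MB';

-- Re-run the test query with EXPLAIN (ANALYZE) and look for
-- "Gather", "Parallel Seq Scan", or "Partial Aggregate" nodes
-- to confirm the planner actually chose a parallel plan.
```

These settings only permit parallelism; whether a given version's planner uses it for this query is exactly what the table above is measuring.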

Based on the AI's analysis, we can conclude that with the same data volume and the same SQL, different PostgreSQL versions produce different performance results. If you are adopting PostgreSQL for the first time, PostgreSQL 16 is a reasonable choice; if your system runs complex SQL under a heavy workload, consider PostgreSQL 17.

One last note: if you plan to use new features such as vector support, consider PG18, but since that version is still very new, proceed with caution.

This article participates in the Tencent Cloud self-media syndication program and is shared from the AustinDatabases WeChat public account.
Originally published: 2025-11-03. For takedown requests, contact cloudcommunity@tencent.com.
