I have a big report table. The Bitmap Heap Scan step takes more than 5 seconds.
Is there something I can do? I added columns to the table; would reindexing the index it uses help?
I do a union and sum on the data, so I don't return 500K records to the client. I use Postgres 9.1. Here is the explain:
Bitmap Heap Scan on foo_table  (cost=24747.45..1339408.81 rows=473986 width=116) (actual time=422.210..5918.037 rows=495747 loops=1)
  Recheck Cond: ((foo_id = 72) AND (date >= '2013-04-04 00:00:00'::timestamp without time zone) AND (date <= '2013-05-05 00:00:00'::timestamp without time zone))
  Filter: ((foo)::text = 'foooooo'::text)
  ->  Bitmap Index Scan on foo_table_idx  (cost=0.00..24628.96 rows=573023 width=0) (actual time=341.269..341.269 rows=723918 loops=1)

The query:
explain analyze
SELECT CAST(date as date) AS date, foo_id, ....
from foo_table
where foo_id = 72
  and date >= '2013-04-04'
  and date <= '2013-05-05'
  and foo = 'foooooo'

Index def:

Index "public.foo_table_idx"
   Column  |            Type
-----------+-----------------------------
 foo_id    | bigint
 date      | timestamp without time zone
btree, for table "public.external_channel_report"

Table: foo is a text field with 4 distinct values. foo_id is a bigint with currently 10K distinct values.
Answer: Create a composite index on (foo_id, foo, date), in that order.
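A sketch of what that could look like (the index name is illustrative, not from the original post):

```sql
-- Composite index matching the query's predicates:
-- the two equality columns (foo_id, foo) first, the range column (date) last.
CREATE INDEX foo_table_foo_id_foo_date_idx
    ON foo_table (foo_id, foo, date);
```

With this column order the index can satisfy all three WHERE conditions, so the planner no longer needs the separate Filter on foo; in the plan above, that filter is what discards roughly 230K of the 723,918 rows the Bitmap Index Scan fetched.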
Note that if you select 500k records (and return them all to the client), this may take long.
Are you sure you need all 500k records on the client (rather than some kind of aggregate or a LIMIT)?