Is this PostgreSQL table too big?

Updated: 2024-10-27 20:26:52
Question

I'm working on the design for a RoR project for my company, and our development team has already run into a bit of a debate about the design, specifically the database.

We have a model called Message that needs to be persisted. It's a very, very small model with only three db columns other than the id, however there will likely be A LOT of these models when we go to production. We're looking at as much as 1,000,000 insertions per day. The models will only ever be searched by two foreign keys on them which can be indexed. As well, the models never have to be deleted, but we also don't have to keep them once they're about three months old.
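A minimal sketch of such a table might look like the following. The question only says "three db columns other than the id" and "two foreign keys", so all column names here are invented; since rows must be discarded after about three months, a timestamp is assumed as the third column:

```sql
-- Hypothetical schema sketch; column names are assumptions, not from the question.
CREATE TABLE messages (
    id          bigserial PRIMARY KEY,
    sender_id   bigint NOT NULL,                  -- assumed FK #1, searched on
    receiver_id bigint NOT NULL,                  -- assumed FK #2, searched on
    created_at  timestamptz NOT NULL DEFAULT now() -- assumed third column, for pruning
);

-- Indexes on the two searchable foreign keys
CREATE INDEX ON messages (sender_id);
CREATE INDEX ON messages (receiver_id);
```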

So, what we're wondering is if implementing this table in Postgres will present a significant performance issue? Does anyone have experience with very large SQL databases to tell us whether or not this will be a problem? And if so, what alternative should we go with?

Answer

The number of rows per table won't be an issue on its own.

So roughly speaking 1 million rows a day for 90 days is 90 million rows. I see no reason Postgres can't deal with that, without knowing all the details of what you are doing.

Depending on your data distribution, you can use a mixture of indexes, partial (filtered) indexes, and table partitioning of some kind to speed things up once you see what performance issues you may or may not have. Your problem will be the same on any other RDBMS that I know of. If you only need 3 months' worth of data, design in a process to prune off the data you no longer need. That way you will have a consistent volume of data in the table. You're lucky that you know how much data will exist: test at that volume and see what you get. Testing one table with 90 million rows may be as easy as:

SELECT x, 1 AS c2, 2 AS c3 FROM generate_series(1, 90000000) x;
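To turn that one-liner into a fuller test, a sketch along the following lines (table and column names are assumptions) materializes the rows, indexes one lookup column, and checks the plan a foreign-key search would use:

```sql
-- Sketch only: materialize 90M test rows, index a lookup column,
-- and inspect the plan a foreign-key search would use.
CREATE TABLE messages_test AS
    SELECT x AS id,
           x % 10000 AS sender_id,    -- stand-in for FK #1
           x % 10000 AS receiver_id   -- stand-in for FK #2
    FROM generate_series(1, 90000000) x;

CREATE INDEX ON messages_test (sender_id);
ANALYZE messages_test;

-- Should show an index scan rather than a scan of all 90M rows
EXPLAIN ANALYZE SELECT * FROM messages_test WHERE sender_id = 42;
```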

https://wiki.postgresql.org/wiki/FAQ

Limit                       Value
Maximum Database Size       Unlimited
Maximum Table Size          32 TB
Maximum Row Size            1.6 TB
Maximum Field Size          1 GB
Maximum Rows per Table      Unlimited
Maximum Columns per Table   250-1600, depending on column types
Maximum Indexes per Table   Unlimited
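The pruning process recommended above is cheapest when each month lives in its own partition: dropping a partition is a metadata operation, whereas a DELETE of roughly 30 million rows is not. A sketch using declarative partitioning (a PostgreSQL 10+ feature, newer than this answer; all names are hypothetical):

```sql
-- Sketch: a partitioned variant of such a table, so that pruning
-- three-month-old data is a cheap DROP TABLE per month.
CREATE TABLE messages_part (
    id          bigserial,
    sender_id   bigint NOT NULL,
    receiver_id bigint NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
) PARTITION BY RANGE (created_at);

CREATE TABLE messages_2024_01 PARTITION OF messages_part
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
-- ...one partition per month...

-- When January ages past three months, drop it wholesale:
DROP TABLE messages_2024_01;
```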

Published: 2023-07-20 22:14:21
