I'm brand-spanking new to the world of Linux and server administration, and I'm stuck.
I have a Rails app that occasionally needs to perform large data inserts, usually around 20,000 rows. The code seems to work fine in development (OS X), but on the production server (Ubuntu, on a Linode VPS), it fails every time, usually after about 1,700 insertions. The precise number varies (1655, 1697, 1756), but it's consistently in that ballpark.
I'm not seeing much that's helpful in the production.log file, just:

Connecting to database specified by database.yml

about a second or so after the failure.
In the PostgreSQL main log:

2012-10-21 23:01:28 EDT LOG: could not receive data from client: Connection reset by peer
2012-10-21 23:01:28 EDT LOG: unexpected EOF on client connection
I'm running Rails 3.2.8, Ruby 1.9.3-p194, psql 1.9.4, nginx, Unicorn.
I'm pretty closely following the deployment steps outlined in: railscasts/episodes/335-deploying-to-a-vps
Other notes:
a) I've tried both wrapping and not wrapping the ActiveRecord insertions in a transaction. No difference.
b) Ruby is doing a lot of work to gather and organize the data before inserting it into the db. This includes multiple calls to a third-party web service, but I've confirmed that these communications succeed and that the data looks fine.
Any ideas? Or at least any suggestions as to where I can continue sleuthing? Thanks so much.
Accepted answer

The moral of the story is: "When in doubt, blame unicorns."
(Unicorn was set to time out worker processes after 30 seconds.)
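In other words, the web worker handling the long bulk-insert request was being killed mid-request, which is why PostgreSQL logged "Connection reset by peer" / "unexpected EOF on client connection". The fix is raising the worker timeout in the Unicorn config. A minimal sketch, assuming the config/unicorn.rb layout from the Railscasts episode; the replacement value of 300 is an assumption, not something stated in the original post:

```ruby
# config/unicorn.rb (sketch)

# The old setting killed any worker that stayed busy for more than
# 30 seconds, cutting the bulk insert off partway through:
# timeout 30

# Give long-running requests room to finish
# (300 is an illustrative value, not from the original post):
timeout 300
```

A longer-term fix would be to move a 20,000-row insert out of the web request entirely (e.g. into a background job), so that no web worker has to stay busy that long in the first place.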