Efficiently combining the results of concurrent.futures parallel execution?

Problem description


I have a pandas data frame of about 100M rows. Processing in parallel works very well on a multi-core machine, with 100% utilization of each core. However, the result of executor.map() is a generator, so in order to actually collect the processed results I iterate through that generator. This is very, very slow (hours), in part because it's single core, in part because of the loop. In fact, it's much slower than the actual processing in my_function() itself.


Is there a better way (perhaps concurrent and/or vectorized)?


Using pandas 0.23.4 (latest at this time) with Python 3.7.0

import concurrent.futures
import pandas as pd

df = pd.DataFrame({'col1': [], 'col2': [], 'col3': []})
with concurrent.futures.ProcessPoolExecutor() as executor:
    gen = executor.map(my_function, list_of_values, chunksize=1000)
    # the following is single-threaded and also very slow
    for x in gen:
        df = pd.concat([df, x])  # anything better than doing this?
return df

Answer


Here is a benchmark related to your case: stackoverflow/a/31713471/5588279
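The benchmark's takeaway can be reproduced with a small sketch (the chunk count and shapes here are illustrative, not taken from the linked answer): concatenating inside the loop copies everything accumulated so far on every iteration, which is quadratic overall, while a single concat over all chunks builds the result once.

```python
import numpy as np
import pandas as pd

# 100 chunks of 1000 rows each, standing in for the per-worker results
chunks = [pd.DataFrame(np.ones((1000, 3)), columns=['col1', 'col2', 'col3'])
          for _ in range(100)]

# Quadratic: each iteration copies the entire accumulated frame
slow = chunks[0]
for c in chunks[1:]:
    slow = pd.concat([slow, c])

# Linear: one concat sizes and fills the final frame in a single pass
fast = pd.concat(chunks, ignore_index=True)
```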


As you can see, calling concat (or append) many times is very inefficient. You should just do pd.concat(gen). I believe the underlying implementation will preallocate all the needed memory.


In your case, a fresh allocation (and a copy of everything accumulated so far) happens on every iteration.
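Putting that together, the loop in the question can be replaced by a single pd.concat over the generator. A minimal sketch, with a placeholder my_function standing in for the question's processing (note that ProcessPoolExecutor requires the mapped function to be a picklable, module-level function):

```python
import concurrent.futures
import pandas as pd

def my_function(value):
    # placeholder for the question's per-item processing,
    # returning one small DataFrame per input value
    return pd.DataFrame({'col1': [value],
                         'col2': [value * 2],
                         'col3': [value * 3]})

def process(list_of_values):
    with concurrent.futures.ProcessPoolExecutor() as executor:
        gen = executor.map(my_function, list_of_values, chunksize=1000)
        # one concat over the whole generator: pandas builds the
        # final frame in a single pass instead of copying per chunk
        return pd.concat(gen, ignore_index=True)
```

The iteration over the generator still happens in the main process, but the expensive repeated reallocation is gone.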


Published: 2023-11-25 06:43:26
Link: https://www.elefans.com/category/jswz/34/1628716.html
