How to make sure that all python pool.apply_async() calls are executed before closing the pool
Problem description
How to make sure that all the pool.apply_async() calls are executed and their results are accumulated through callback before a premature call to pool.close() and pool.join()?
```python
numofProcesses = multiprocessing.cpu_count()
pool = multiprocessing.Pool(processes=numofProcesses)
jobs = []
for arg1, arg2 in arg1_arg2_tuples:
    jobs.append(pool.apply_async(function1, args=(arg1, arg2, arg3),
                                 callback=accumulate_apply_async_result))
pool.close()
pool.join()
```

Recommended answer
You need to wait on the appended AsyncResult objects before exiting the pool. That's
```python
for job in jobs:
    job.wait()
```

before pool.close().
But you may be working too hard here. You could
```python
with multiprocessing.Pool() as pool:
    for result in pool.starmap(function1,
            ((arg_1, arg_2, arg_3) for arg_1, arg_2 in sim_chr_tuples)):
        accumulate_apply_async_result(result)
```
- the default for Pool is cpu_count() so no need to add it
- with does the close/join for you
- starmap waits for results for you
A complete working example is
```python
import multiprocessing

result_list = []

def accumulate_apply_async_result(result):
    result_list.append(result)

def function1(arg1, arg2, arg3):
    return arg1, arg2, arg3

sim_chr_tuples = [(1, 2), (3, 4), (5, 6), (7, 8)]
arg_3 = "third arg"

if __name__ == "__main__":
    with multiprocessing.Pool() as pool:
        for result in pool.starmap(function1,
                ((arg_1, arg_2, arg_3) for arg_1, arg_2 in sim_chr_tuples)):
            accumulate_apply_async_result(result)
    for r in result_list:
        print(r)
```