How to optimize the memory and time usage of the following algorithm in Python

Updated: 2024-10-28 19:28:43
Problem Description

I am trying to accomplish the following logical operation in Python, but I am running into memory and time issues. Since I am very new to Python, guidance on how and where to optimize would be appreciated! (I do understand that the following question is somewhat abstract.)

import networkx as nx

dic_score = {}
G = nx.watts_strogatz_graph(10000, 10, .01)  # generate 2 graphs with 10,000 nodes using NetworkX
H = nx.watts_strogatz_graph(10000, 10, .01)

for Gnodes in G.nodes():
    for Hnodes in H.nodes():  # i.e. for every pair of nodes across the two graphs
        score = SomeOperation(Gnodes, Hnodes)  # calculate a metric (placeholder)
        # store the metric as key: value, where value is a list of [Hnodes, score, -1] lists
        dic_score.setdefault(Gnodes, []).append([Hnodes, score, -1])

Then sort the lists in the generated dictionary according to the criterion mentioned here: sorting_criterion

My questions are:

1) Is there a better way of approaching this than using for loops for iteration?

2) What would be the most optimized (fastest) way to approach this problem? Should I consider using a data structure other than a dictionary, or possibly file operations?

3) Since I need to sort the lists inside this dictionary, which has 10,000 keys each corresponding to a list of 10,000 values, the memory requirements grow huge quite quickly and I run out of memory.

4) Is there a way to integrate the sorting into the calculation of the dictionary itself, i.e. avoid doing a separate loop to sort?

Any input would be appreciated! Thanks!

Recommended Answer

1) You can use one of the functions from the itertools module for that. Rather than explaining it in full here: you can read the manual, or call:

from itertools import product

help(product)

Here is an example:

for item1, item2 in product(list1, list2):
    pass
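Applied to the question's setup, the two nested for loops collapse into a single product() loop. A minimal sketch, where plain ranges stand in for G.nodes() and H.nodes() and some_metric is a placeholder for the real scoring function:

```python
from itertools import product

# stand-ins for G.nodes() and H.nodes(); with networkx you would
# pass G.nodes() and H.nodes() directly to product()
g_nodes = range(5)
h_nodes = range(5)

def some_metric(g_node, h_node):
    # placeholder for the actual metric in the question
    return abs(g_node - h_node)

dic_score = {}
for g_node, h_node in product(g_nodes, h_nodes):
    dic_score.setdefault(g_node, []).append([h_node, some_metric(g_node, h_node), -1])
```

product() yields the pairs lazily, so it does not change the total work, but it avoids one level of nesting and keeps the iteration in a single, easily replaceable expression.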

2) If the result is too big to fit in memory, try saving it somewhere. You can write it out to a CSV file, for example:

import csv

with open('result.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile, dialect='excel')
    for ...:
        writer.writerow(...)

This will free up your memory.
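Putting the two ideas together, each pair's score can be streamed straight to the CSV file as it is computed, so memory use stays flat regardless of graph size. A sketch, again with a placeholder metric and small ranges standing in for the node sets:

```python
import csv
from itertools import product

def some_metric(g_node, h_node):
    # placeholder for the actual metric in the question
    return abs(g_node - h_node)

with open('result.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile, dialect='excel')
    for g_node, h_node in product(range(5), range(5)):
        # one row per pair; nothing accumulates in memory between iterations
        writer.writerow([g_node, h_node, some_metric(g_node, h_node)])
```

The resulting file can later be sorted externally (e.g. with the Unix sort command or by reading it back in chunks), which sidesteps the 10,000 × 10,000 in-memory dictionary entirely.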

3) I think it is better to sort the resulting data afterwards (the built-in sort function is quite fast) rather than complicate matters by sorting the data on the fly.
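For example, assuming the criterion is simply "highest score first" (the question's actual criterion is behind the sorting_criterion link), each key's list can be sorted in place afterwards:

```python
# toy dictionary in the question's [Hnodes, score, -1] format
dic_score = {
    0: [[1, 0.2, -1], [2, 0.9, -1], [3, 0.5, -1]],
}

for key in dic_score:
    # sort each list of [h_node, score, -1] entries by score, descending;
    # swap the key function for whatever the real criterion requires
    dic_score[key].sort(key=lambda entry: entry[1], reverse=True)
```

Since list.sort() works in place, this pass adds no extra per-key memory beyond what the lists already occupy.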

You could instead use NumPy array/matrix operations (sums, products, or even mapping a function over each matrix row). These are so fast that sometimes filtering the data costs more than just computing everything.
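As a sketch of that idea, NumPy broadcasting can compute an entire score matrix in one vectorized step (here with |g - h| as a stand-in metric), replacing the Python-level double loop:

```python
import numpy as np

g_nodes = np.arange(5)
h_nodes = np.arange(5)

# broadcasting builds the full 5x5 matrix of |g - h| at once,
# replacing the nested Python loops over node pairs
scores = np.abs(g_nodes[:, None] - h_nodes[None, :])

# argsort each row: for every G-node, the H-nodes ordered by score
order = np.argsort(scores, axis=1)
```

This only works if the metric can be expressed in array operations; a metric that needs per-pair graph traversal would still require an explicit loop.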

If your app is still very slow, try profiling it to see exactly which operations are slow or are executed too many times:

from cProfile import Profile

p = Profile()
p.runctx('my_function(args)', {'my_function': my_function, 'args': my_data}, {})
p.print_stats()

You will see a table like this:

2706 function calls (2004 primitive calls) in 4.504 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     2    0.006    0.003    0.953    0.477  pobject.py:75(save_objects)
  43/3    0.533    0.012    0.749    0.250  pobject.py:99(evaluate)
...


Published: 2023-08-05 11:14:26
