Why does a C++ program allocate more memory for local variables than it needs in the worst case?

Problem description

Inspired by this question.

Apparently in the following code:

    #include <Windows.h>

    int _tmain(int argc, _TCHAR* argv[])
    {
        if (GetTickCount() > 1) {
            char buffer[500 * 1024];
            SecureZeroMemory(buffer, sizeof(buffer));
        } else {
            char buffer[700 * 1024];
            SecureZeroMemory(buffer, sizeof(buffer));
        }
        return 0;
    }

compiled with Visual C++ 10 with optimizations on (/O2) and the default stack size (1 megabyte), a stack overflow occurs because the program tries to allocate 1200 kilobytes on the stack.

The code above is of course slightly exaggerated to show the problem - it uses lots of stack in a rather dumb way. Yet in real scenarios the stack size can be smaller (like 256 kilobytes), and there could be more branches with smaller objects whose combined allocation size is enough to overflow the stack.

That makes no sense. The worst case would be 700 kilobytes - it would be the codepath that constructs the set of local variables with the largest total size along the way. Detecting that path during compilation should not be a problem.

So the compiler produces a program that tries to allocate even more memory than the worst case needs. According to this answer, LLVM does the same.

That could be a deficiency in the compiler, or there could be some real reason for doing it this way. Maybe I just don't understand something about compiler design that would explain why allocating this way is necessary.

Why would the compiler want a program to allocate more memory than the code needs in the worst case?

Solution

The following code when compiled using GCC 4.5.1 on ideone places the two arrays at the same address:

    #include <iostream>

    int main()
    {
        int x;
        std::cin >> x;
        if (x % 2 == 0) {
            char buffer[500 * 1024];
            std::cout << static_cast<void*>(buffer) << std::endl;
        }
        if (x % 3 == 0) {
            char buffer[700 * 1024];
            std::cout << static_cast<void*>(buffer) << std::endl;
        }
    }

input: 6 (divisible by both 2 and 3, so both branches run)

output: 0xbf8e9b1c 0xbf8e9b1c

The answer is probably "use another compiler" if you want this optimization.

Published on 2023-11-30 22:13:30.