SDL2 texture update speed [duplicate]

Updated: 2024-06-14 16:57:18

This question already has an answer here:

SDL2: Fast Pixel Manipulation (1 answer)

I'm trying to set up an SDL2 environment for running software-rendering examples later, so I need direct access to the pixels for drawing. Below is some code that draws one red pixel to a texture and then displays it, following https://wiki.libsdl.org/MigrationGuide#If_your_game_just_wants_to_get_fully-rendered_frames_to_the_screen

#include <SDL.h>
#include <stdio.h>

const int SCREEN_WIDTH = 1920;
const int SCREEN_HEIGHT = 1080;

SDL_Window* gWindow;
SDL_Renderer* gRenderer;
SDL_Texture* gTexture;
SDL_Event e;
void* gPixels = NULL;
int gPitch = SCREEN_WIDTH * 4;
bool gExitFlag = false;
Uint64 start;
Uint64 end;
Uint64 freq;
double seconds;

int main(int argc, char* args[])
{
    SDL_Init(SDL_INIT_VIDEO);
    gWindow = SDL_CreateWindow("SDL Tutorial", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                               SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    gRenderer = SDL_CreateRenderer(gWindow, -1, SDL_RENDERER_ACCELERATED); // | SDL_RENDERER_PRESENTVSYNC); vsync is turned off
    gTexture = SDL_CreateTexture(gRenderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING,
                                 SCREEN_WIDTH, SCREEN_HEIGHT);
    while (!gExitFlag)
    {
        while (SDL_PollEvent(&e) != 0)
        {
            if (e.type == SDL_QUIT)
            {
                gExitFlag = true;
            }
        }
        start = SDL_GetPerformanceCounter();
        SDL_LockTexture(gTexture, NULL, &gPixels, &gPitch);
        *((uint32_t*)gPixels) = 0xff0000ff; // opaque red in RGBA8888 (original had 0xff000ff, one digit short)
        SDL_UnlockTexture(gTexture); // 20-100 ms on different hardware
        end = SDL_GetPerformanceCounter();
        freq = SDL_GetPerformanceFrequency();
        SDL_RenderCopy(gRenderer, gTexture, NULL, NULL);
        SDL_RenderPresent(gRenderer);
        gPixels = NULL;
        gPitch = 0;
        seconds = (end - start) / static_cast<double>(freq);
        printf("Frame time: %fms\n", seconds * 1000.0);
    }
    // Destroy in reverse order of creation: texture, renderer, then window.
    SDL_DestroyTexture(gTexture);
    SDL_DestroyRenderer(gRenderer);
    SDL_DestroyWindow(gWindow);
    SDL_Quit();
    return 0;
}

As I mention in the code comment, SDL_UnlockTexture takes up to 100 ms with a full-HD texture (switching to SDL_UpdateTexture makes no significant difference). That seems far too slow for real-time rendering. Am I doing something wrong, or should I avoid the texture API entirely (or any other GPU-accelerated API where the texture must be uploaded to GPU memory every frame) for rendering a whole frame in real time?


Accepted answer

Since you want to work with raw pixel data, you should use SDL's SDL_Surface rather than textures. It is a different SDL API, optimized for your case; see this example, and don't forget to update the window surface.

The reason is that textures are stored in VRAM, and reads from VRAM are very slow. Surfaces are stored in RAM, processed there, and only written to VRAM, which is very fast.



Published: 2023-04-12 21:00:00
Original link: https://www.elefans.com/category/dzcp/862f4e2ab97a8e247e09900440e91302.html