Failing to create a texture in a thread [closed]

-1

In an application I am creating, I use several textures which I load on demand.
Until now I created them on the main thread and everything worked fine.
Now I want to load them in a separate thread,
so I call the function that loads and binds the texture via _beginthread.
The texture file is loaded, but GL fails with SHADER_ERROR (1282, i.e. GL_INVALID_OPERATION).
I assume OpenGL probably needs some per-thread initialisation, but I am clueless.



I am coding in C++, compiling with GCC on Windows x64,
using OpenGL 3, GLFW, and stb_image for image loading.



Here is the code:



    GLuint load_map(const char *filename)
    {
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture); // all upcoming GL_TEXTURE_2D operations now affect this texture object

        // set texture wrapping and filtering parameters
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_BORDER);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        // load the image with stb_image
        int width, height, nrChannels;
        unsigned char *data = stbi_load(filename, &width, &height, &nrChannels, STBI_rgb);
        if (data)
        {
            LOG1("LOADED<", width, height);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
            // glGenerateMipmap(GL_TEXTURE_2D);
        }
        else
        {
            LOG1("Failed to load texture", "", "");
        }
        stbi_image_free(data); // stbi_image_free(NULL) is a no-op, so this is safe on failure
        LOG1("load_map", "ID>", texture);

        GLenum err = glGetError();
        LOG1("load_map", "error", err);
        return texture;
    }


LOG1 is just a logging helper macro.










c++ opengl glfw beginthread

asked Nov 22 at 18:19 by alfetta

closed as off-topic by πάντα ῥεῖ, Jesper Juhl, Rabbid76, Nicol Bolas, genpfault Nov 22 at 20:25


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. Questions without a clear problem statement are not useful to other readers. See: How to create a Minimal, Complete, and Verifiable example." – πάντα ῥεῖ, Jesper Juhl, Rabbid76, Nicol Bolas, genpfault

If this question can be reworded to fit the rules in the help center, please edit the question.









  • Post a Minimal, Complete, and Verifiable example, please. – Jesper Juhl, Nov 22 at 18:28
1 Answer
1














Before any GL call is made, the context must be made current on the thread that issues those GL calls.



For GLFW, use glfwMakeContextCurrent(GLFWwindow *window).

Yes, even with the same window value you need to make the context current again, because it is a different thread.
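
A minimal sketch of that hand-off, assuming the main thread issues no GL calls while the worker owns the context (window and load_map are the ones from the question; a context can be current on only one thread at a time, so it has to be detached first):

    // Hedged sketch: hand the single GLFW context to a worker thread for the
    // upload, then take it back. This only demonstrates the hand-off; joining
    // immediately of course removes the benefit of a background thread.
    #include <GLFW/glfw3.h>
    #include <thread>

    GLuint load_texture_on_worker(GLFWwindow *window, const char *filename)
    {
        GLuint texture = 0;

        glfwMakeContextCurrent(nullptr);       // detach the context from the main thread

        std::thread worker([&]() {
            glfwMakeContextCurrent(window);    // make it current on THIS thread
            texture = load_map(filename);      // the GL calls now have a current context
            glfwMakeContextCurrent(nullptr);   // detach again before the thread ends
        });
        worker.join();

        glfwMakeContextCurrent(window);        // main thread takes the context back
        return texture;
    }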



Now, if you picture several threads all issuing GL calls at once and trying to make the same context current... no, that won't work: a context can be current on only one thread at a time.



You could use several contexts and share them at window creation. But GLFW keeps things simple: as far as I know you can't create several contexts for the same window.
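
GLFW can, however, create an additional invisible window whose context shares objects with the main one (the fifth parameter of glfwCreateWindow). A hedged sketch of that route; mainWindow and load_map are assumed to be the ones from the question:

    // Hedged sketch: a hidden window gives the worker thread its own context
    // that shares textures with the main context.
    #include <GLFW/glfw3.h>
    #include <thread>

    GLFWwindow *create_loader_context(GLFWwindow *mainWindow)
    {
        // call this on the main thread: GLFW creates windows only there
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);                      // never shown on screen
        return glfwCreateWindow(1, 1, "loader", nullptr, mainWindow);  // last arg = share with main context
    }

    void load_texture_async(GLFWwindow *loaderWindow, const char *filename)
    {
        std::thread([=]() {
            glfwMakeContextCurrent(loaderWindow);  // current only on this worker thread
            GLuint texture = load_map(filename);
            glFinish();            // crude but simple: be sure the upload finished before the main context uses it
            // hand `texture` back to the main thread (queue, atomic flag, ...)
            glfwMakeContextCurrent(nullptr);
        }).detach();               // detached for brevity; a real app would track or join the thread
    }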



You could bypass GLFW and create your own contexts, shared with the GLFW one, but I don't know how to do that.



Or you could skip GLFW and handle contexts and windows with some other library, or on your own.



The point is that shared contexts share textures: you can upload in one context and the texture is available in every context of that share group.



But... there's always a "but"... most current graphics cards show no performance gain from using several contexts. Only a few can upload in several contexts at once, or read and draw simultaneously, so the multi-context advantage is not that great.



What you can do in your multi-threaded app is read each image from disk into RAM in a dedicated thread, and pass it to the GPU once it is ready, on the main thread only.
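
A hedged sketch of that pattern: stbi_load() needs no GL context, so it can run on any thread, and only the glTexImage2D() upload stays on the main thread. The queue and function names here are illustrative, not part of the question's code; GL and stb headers are assumed to be set up as in the question.

    // Worker threads decode images into RAM; the main thread drains the queue
    // and creates the GL textures with its own (current) context.
    #include "stb_image.h"
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    struct PendingImage { std::string name; int w, h; unsigned char *pixels; };

    static std::mutex                g_pendingMutex;
    static std::vector<PendingImage> g_pending;   // filled by workers, drained by the main thread

    void load_pixels_async(const std::string &filename)       // may be called from any thread
    {
        std::thread([filename]() {
            int w, h, n;
            unsigned char *pixels = stbi_load(filename.c_str(), &w, &h, &n, STBI_rgb);
            if (!pixels) return;                               // load failed; nothing to queue
            std::lock_guard<std::mutex> lock(g_pendingMutex);
            g_pending.push_back({filename, w, h, pixels});
        }).detach();
    }

    void upload_pending_textures()                             // call once per frame, main thread only
    {
        std::lock_guard<std::mutex> lock(g_pendingMutex);
        for (PendingImage &img : g_pending) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.w, img.h, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, img.pixels);
            stbi_image_free(img.pixels);
            // store `tex` under img.name in whatever texture registry the app uses
        }
        g_pending.clear();
    }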






answered Nov 22 at 20:21 by Ripi2

  • I got the point, big thanks. For future wisdom seekers: I split the logic and moved the GLFW calls back to the main thread.
    – alfetta
    Nov 23 at 15:07