WebGL - wait for texture to load

The easiest way to fix that is to make a 1x1 texture at creation time.

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
              new Uint8Array([255, 0, 0, 255])); // red

Then when the image loads you can replace the 1x1 pixel texture with the image. No flags needed and your scene will render with the color of your choice until the image has loaded.

var img = new Image();
img.src = "http://someplace/someimage.jpg";
img.onload = function() {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);

  // then either generate mips if the image uses power-of-2 dimensions or
  // set the filtering correctly for non-power-of-2 images.
  setupTextureFilteringAndMips(img.width, img.height);
};

Just to save people the trouble of the next problem they are most likely going to run into: WebGL either requires mips, or it requires filtering that doesn't use mips. On top of that, it only allows mips on textures whose dimensions are a power of 2 (i.e. 1, 2, 4, 8, ..., 256, 512, etc). So, when loading an image you'll most likely want to set up the filtering to handle both cases correctly.

function isPowerOf2(value) {
  return (value & (value - 1)) == 0;
}

function setupTextureFilteringAndMips(width, height) {
  if (isPowerOf2(width) && isPowerOf2(height)) {
    // the dimensions are a power of 2 so generate mips and turn on
    // tri-linear filtering.
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
  } else {
    // at least one of the dimensions is not a power of 2 so set the filtering
    // so WebGL will render it.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  }
}

Textures not loading in WebGL: reading and manipulating from 2D canvas, storing in Array

Your problem is that, for example, the line

   imageArray[0].src = "./data/data0000.png";

initiates an image load, but does not wait for it. So when you immediately call toSagitaal() the images are not yet loaded.

Typically you have to specify a handler that does something when the image has loaded. To do this you assign a function to the onload property.

If you refactor so that you have a function toSagitaal(i) that processes the ith image, then you might add a line like

   imageArray[0].onload = function() { toSagitaal(0); };

after the imageArray[0].src assignment and similarly for the other two images.

The reason that your present example works on reload is that the browser has by that time already loaded the images, so the fact that you aren't waiting doesn't matter.
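The same idea extends to all three images. A minimal sketch: a shared countdown fires toSagitaal only once every image has loaded. The helper name afterNLoads is invented here; imageArray and toSagitaal are the names from the question.

```javascript
// Returns a callback that invokes `done` only after it has been
// called `n` times (once per loaded image).
function afterNLoads(n, done) {
  var remaining = n;
  return function () {
    remaining -= 1;
    if (remaining === 0) done();
  };
}

// var onEachLoad = afterNLoads(3, toSagitaal);
// for (var i = 0; i < 3; i++) {
//   imageArray[i].onload = onEachLoad;
//   imageArray[i].src = "./data/data000" + i + ".png";
// }
```

Note that onload must be assigned before src so no load can slip through unobserved.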

WebGL textures not working on some textures

Images are loaded asynchronously over the network; you have to wait for them to load before using them.

function loadFileAsIMG(filename, callback) {
  const img = new Image();
  img.onload = () => callback(img);
  img.src = filename;
}

function funcToCallAfterImageLoads(img) {
  ...
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
  ...
}

loadFileAsIMG('url/for/some/image', funcToCallAfterImageLoads);

See this article

You can also make an image loader that returns a Promise and then use async functions to await on the image being loaded

function loadImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  });
}

async function main() {
  const img = await loadImage('url/to/image');
  ...
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
  ...
}

main();
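If you need several textures, the same loadImage function from the answer can feed Promise.all so the images download in parallel. A minimal sketch (the URLs are placeholders):

```javascript
// Load several images in parallel and resolve when all have loaded.
// Relies on the loadImage function defined above.
function loadImages(urls) {
  return Promise.all(urls.map(loadImage));
}

// async function main() {
//   const [diffuse, normal] = await loadImages(['url/a.png', 'url/b.png']);
//   ...
// }
```

Promise.all rejects as soon as any one image fails to load, which is usually what you want for required assets.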

Also see this

Measure WebGL texture load in ms

The only way to measure any timing in WebGL is to figure out how much work you can do in a certain amount of time. Pick a target speed, say 30fps, use requestAnimationFrame, keep increasing the work until you're over the target.

var targetSpeed  = 1/30;
var amountOfWork = 1;

var then = 0;
function test(time) {
  time *= 0.001; // because I like seconds

  var deltaTime = time - then;
  then = time;

  if (deltaTime < targetSpeed) {
    amountOfWork += 1;
  }

  for (var ii = 0; ii < amountOfWork; ++ii) {
    doWork();
  }

  requestAnimationFrame(test);
}
requestAnimationFrame(test);

It's not quite that simple because the browsers, at least in my experience, don't seem to give a really stable timing for frames.
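One way to cope with that noise, as a sketch: average deltaTime over a sliding window before comparing against the target. The helper name makeSmoother and the window size of 60 are arbitrary choices here.

```javascript
// Returns a function that records a sample and returns the running
// average over the last `size` samples.
function makeSmoother(size) {
  var samples = [];
  return function (value) {
    samples.push(value);
    if (samples.length > size) samples.shift();
    var sum = 0;
    for (var i = 0; i < samples.length; ++i) sum += samples[i];
    return sum / samples.length;
  };
}

// var smooth = makeSmoother(60);
// ... inside test():
// if (smooth(deltaTime) < targetSpeed) amountOfWork += 1;
```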

Caveats

  1. Don't assume requestAnimationFrame will be at 60fps.

    There are plenty of devices that run faster (VR) or slower (low-end hd-dpi monitors).

  2. Don't measure from the time you start emitting commands to the time
     you stop.

    Measure the time since the last requestAnimationFrame. WebGL just
    inserts commands into a buffer. Those commands execute in the driver,
    possibly even in another process, so

    var start = performance.now();        // WRONG!
    gl.someCommand(...);                  // WRONG!
    gl.flush(...);                        // WRONG!
    var time = performance.now() - start; // WRONG!
  3. Actually use the resource.

    Many resources are lazily initialized, so just uploading a resource
    but not using it will not give you an accurate measurement. You'll
    need to actually do a draw with each texture you upload. Of course
    make it small: a 1-pixel, 1-triangle draw with a simple shader. The
    shader must actually access the resource, otherwise the driver
    may not do any lazy initialization.

  4. Don't assume different types/sizes of textures will have proportional
    changes in speed.

    Drivers do different things. For example some GPUs might not support
    anything but RGBA textures. If you upload a LUMINANCE texture the
    driver will expand it to RGBA. So, if you timed using RGBA textures
    and assumed a LUMINANCE texture of the same dimensions would upload
    4x as fast, you'd be wrong.

    Similarly, don't assume different size textures will upload at a
    speed proportional to their sizes. Internal buffers in drivers
    and other limits mean that different sizes might take different
    paths.

    In other words, you can't assume a 1024x1024 texture will upload
    4x as slow as a 512x512 texture.

  5. Be aware even this won't guarantee real-world results

    By this I mean, for example, if you're on tiled hardware (iPhone
    for example) then the way the GPU works is to gather all of
    the drawing commands, separate them into tiles, cull any
    draws that are invisible, and only draw what's left, whereas
    most desktop GPUs draw every pixel of every triangle.

    Because a tiled GPU
    does everything at the end it means if you keep uploading
    data to the same texture and draw between each upload it will
    have to keep copies of all your textures until it draws.
    Internally there might be some point at which it flushes and
    draws what it has before buffering again.

    Even a desktop driver wants to pipeline uploads, so if you upload
    contents to texture B, draw, upload new contents to texture B,
    draw, and the driver is in the middle of doing the first drawing,
    it doesn't want to wait for the GPU so it can replace the contents.
    Rather, it just wants to upload the new contents somewhere else
    not being used and then, when it can, point the texture at the new
    contents.

    In normal use this isn't a problem because almost no one uploads
    tons of textures all the time. At most they upload 1 or 2 video
    frames or 1 or 2 procedurally generated textures. But when you're
    benchmarking you're stressing the driver and making it do things
    it won't actually be doing normally. In the example above, since
    the driver assumes a texture is unlikely to be uploaded 10000 times
    a frame, you'll hit a limit where it has to freeze the pipeline until
    some of your queued-up textures are drawn. That freeze will make
    your result appear slower than what you'd really get in normal
    use cases.

    The point being, you might benchmark and get told it takes 5ms
    to upload a texture when in truth it only takes 3ms; you just
    stalled the pipeline many times, which is unlikely to happen
    outside your benchmark.
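Caveat 3 can be sketched as follows: after each upload, draw a single point with a trivial shader that samples the texture, which forces the driver to finish any lazy initialization. This is a sketch assuming a WebGL1 context; the function name and the u_tex uniform are made up for illustration.

```javascript
// Draw one point that samples the given texture so the driver can't
// defer initializing it. Assumes `gl` is a WebGL1 rendering context.
function drawOnePixelWith(gl, tex) {
  var vs = 'void main() { gl_Position = vec4(0, 0, 0, 1); gl_PointSize = 1.0; }';
  var fs = 'precision mediump float;\n' +
           'uniform sampler2D u_tex;\n' +
           'void main() { gl_FragColor = texture2D(u_tex, vec2(0.5)); }';

  function compile(type, src) {
    var s = gl.createShader(type);
    gl.shaderSource(s, src);
    gl.compileShader(s);
    return s;
  }

  var prog = gl.createProgram();
  gl.attachShader(prog, compile(gl.VERTEX_SHADER, vs));
  gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fs));
  gl.linkProgram(prog);

  gl.useProgram(prog);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.uniform1i(gl.getUniformLocation(prog, 'u_tex'), 0);
  gl.drawArrays(gl.POINTS, 0, 1); // the sample is what forces initialization
}
```

In a real benchmark the program would be compiled and linked once up front and reused; only the bind and draw belong inside the timed loop.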

WebGL renderable texture not rendering. RENDER WARNING: texture bound to texture unit 0 is not renderable

if (!mesh.texture) {   // isn't it 'undefined'?
  mesh.initTexture(ctx);
}

Maybe it's better to have an appropriate flag for the whole thing.

// when creating mesh
mesh.texInit = false;

And when loading:

Mesh.prototype.initTexture = function(ctx) {

  this.texture = ctx.createTexture();
  this.texture.image = new Image();

  // ctx.bindTexture(ctx.TEXTURE_2D, this.texture);
  // ctx.texImage2D(ctx.TEXTURE_2D, 0, ctx.RGBA, 1, 1, 0, ctx.RGBA, ctx.UNSIGNED_BYTE, new Uint8Array([255, 0, 0, 255])); // red

  mesh.texture.image.onload = function () {
    // Upon callback, 'this' is different, so we use the global variable for now
    mesh.handleLoadedTexture(mesh.texture, mesh.texture.image);
    mesh.texInit = true;
    // note ^ here
  };
  this.texture.image.src = "/path/to/images/nehe.gif";
};

And for rendering:

if(mesh.texInit) { doMagic(); } // aka render

