THREE.js generate UV coordinate

To my knowledge there is no automatic way to calculate UVs.

You must calculate them yourself. Calculating UVs for a plane is quite easy; this site explains how: calculating texture coordinates

For a complex shape, I don't know how. Maybe you could detect planar surfaces.

EDIT

Here is some sample code for a planar surface in (x, y), with z = 0:

geometry.computeBoundingBox();

var max = geometry.boundingBox.max,
    min = geometry.boundingBox.min;
var offset = new THREE.Vector2(0 - min.x, 0 - min.y);
var range = new THREE.Vector2(max.x - min.x, max.y - min.y);
var faces = geometry.faces;

geometry.faceVertexUvs[0] = [];

for (var i = 0; i < faces.length; i++) {

    var v1 = geometry.vertices[faces[i].a],
        v2 = geometry.vertices[faces[i].b],
        v3 = geometry.vertices[faces[i].c];

    geometry.faceVertexUvs[0].push([
        new THREE.Vector2((v1.x + offset.x) / range.x, (v1.y + offset.y) / range.y),
        new THREE.Vector2((v2.x + offset.x) / range.x, (v2.y + offset.y) / range.y),
        new THREE.Vector2((v3.x + offset.x) / range.x, (v3.y + offset.y) / range.y)
    ]);
}
geometry.uvsNeedUpdate = true;
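Note that the snippet above uses the legacy THREE.Geometry API (vertices, faces, faceVertexUvs), which was removed from the three.js core in r125. With a modern BufferGeometry the same planar projection can be written as a plain function over the flat position array. This is only a sketch, and the name planarUVs is made up for illustration:

```javascript
// Planar (XY) UV projection over a flat position array
// [x0, y0, z0, x1, y1, z1, ...] -> [u0, v0, u1, v1, ...].
// With three.js you would pass geometry.attributes.position.array
// and attach the result with
// geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));
function planarUVs(positions) {
  let minX = Infinity, minY = Infinity;
  let maxX = -Infinity, maxY = -Infinity;
  for (let i = 0; i < positions.length; i += 3) {
    minX = Math.min(minX, positions[i]);
    maxX = Math.max(maxX, positions[i]);
    minY = Math.min(minY, positions[i + 1]);
    maxY = Math.max(maxY, positions[i + 1]);
  }
  const rangeX = maxX - minX;
  const rangeY = maxY - minY;
  const uvs = new Float32Array((positions.length / 3) * 2);
  for (let i = 0, j = 0; i < positions.length; i += 3, j += 2) {
    // Normalize x and y into [0, 1] relative to the bounding box.
    uvs[j] = (positions[i] - minX) / rangeX;
    uvs[j + 1] = (positions[i + 1] - minY) / rangeY;
  }
  return uvs;
}
```

This assumes the geometry spans a non-zero range in both x and y; a degenerate bounding box would divide by zero.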

Programmatically generate simple UV Mapping for models

You need to be more specific. Here, I'll apply UV mapping programmatically:

for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2(0, 0),
        new THREE.Vector2(0, 0),
        new THREE.Vector2(0, 0)
    ]);
}

Happy?

There are an infinite number of ways of applying UV coordinates. How about this:

for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2(Math.random(), Math.random()),
        new THREE.Vector2(Math.random(), Math.random()),
        new THREE.Vector2(Math.random(), Math.random())
    ]);
}

There's no RIGHT answer. Whatever you want to do is up to you. It's kind of like asking how to apply pencil to paper.

Sorry to be so snarky, just pointing out the question is in one sense nonsensical.

Anyway, there are a few common methods for applying a texture.

  • Spherical mapping

    Imagine your model is translucent, there's a sphere inside made of film, and inside the sphere is a point light, so that it projects (like a movie projector) from the sphere in all directions. You then do the math to compute the correct UVs for that situation.

    To get a point on the sphere, multiply your points by the inverse of the world matrix for the sphere, then normalize the result. After that, though, there's still the problem of how the texture itself is mapped to the imaginary sphere, for which again there are an infinite number of ways.

    The simplest way is, I guess, the Mercator projection, which is how most 2D maps of the world work. They have the problem that lots of space is wasted at the north and south poles. Assuming x, y, z are the normalized coordinates mentioned in the previous paragraph, then

    U = Math.atan2(z, x) / Math.PI * 0.5 + 0.5;
    V = 0.5 - Math.asin(y) / Math.PI;
  • Projection Mapping

    This is just like a movie: you have a 2D image being projected from a point. Imagine you pointed a movie projector (or a projection TV) at a chair.

    Computing these points is exactly like computing the 2D image from 3D data that nearly all WebGL apps do. Usually they have a line in their vertex shader like this

    gl_Position = matrix * position;

    Where matrix = worldViewProjection. You can then do

    clipSpace = gl_Position.xy / gl_Position.w

    You now have x,y values that go from -1 to +1. You then convert them
    to 0 to 1 for UV coords

    uv = clipSpace * 0.5 + 0.5;

    Of course normally you'd compute UV coordinates at init time in JavaScript but the concept is the same.

  • Planar Mapping

    This is almost the same as projection mapping, except imagine the projector, instead of being a point, is the same size as the area you want to project onto. In other words, with projection mapping, as you move your model closer to the projector the picture being projected will get smaller, but with planar mapping it won't.

    Following the projection mapping example the only difference here is using an orthographic projection instead of a perspective projection.

  • Cube Mapping?

    This is effectively planar mapping from 6 directions. It's up to you
    to decide which UV coordinates get which of the 6 planes. I'd guess
    most of the time you'd take the normal of the triangle to see which
    plane it most faces, then do planar mapping from that plane.

    Actually I might be getting my terms mixed up. You can also do
    real cube mapping where you have a cube texture but that requires
    U,V,W instead of just U,V. For that it's the same as the sphere
    example except you just use the normalized coordinates directly as
    U,V,W.

  • Cylindrical mapping

    This is like sphere mapping, except assume there's a tiny cylinder projecting onto your model. Unlike a sphere, a cylinder has an orientation, but basically you move the points of the model into the orientation of the cylinder. Then, assuming x, y, z are now relative to the cylinder (in other words, you multiplied them by the inverse of the matrix that represents the orientation of the cylinder):

    U = Math.atan2(x, z) / Math.PI * 0.5 + 0.5;
    V = y;
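To make the spherical case concrete, here is a sketch of the Mercator-style formulas as a plain function of a normalized direction. The name sphericalUV is made up; in three.js you would transform each vertex into the imaginary sphere's space, normalize it, and write the results into the geometry's uv attribute:

```javascript
// Mercator-style spherical mapping: (x, y, z) must be a normalized
// direction. Longitude drives U, latitude drives V, both in [0, 1].
function sphericalUV(x, y, z) {
  const u = Math.atan2(z, x) / Math.PI * 0.5 + 0.5; // longitude
  const v = 0.5 - Math.asin(y) / Math.PI;           // latitude
  return [u, v];
}
```

A point on the equator facing +x maps to (0.5, 0.5), while the poles collapse to v = 0 and v = 1, which is exactly the wasted-space problem mentioned above.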

2 more solutions

  1. Maybe you want Environment Mapping?

    Here's one example and here's another.

  2. Maybe you should consider using a modeling package like Maya or Blender that have UV editors and UV projectors built in.
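Coming back to the projection-mapping math above (matrix multiply, perspective divide, clip space remapped to [0, 1]), it can be sketched in plain JavaScript. The matrix is assumed to be a 4×4 in column-major order, which is how three.js stores Matrix4.elements; projectUV is an illustrative name, not a library function:

```javascript
// Projection mapping: run a vertex (w = 1) through a column-major
// 4x4 world-view-projection matrix, do the perspective divide, and
// remap clip space [-1, 1] to UV [0, 1].
function projectUV(m, x, y, z) {
  // gl_Position = matrix * position
  const cx = m[0] * x + m[4] * y + m[8]  * z + m[12];
  const cy = m[1] * x + m[5] * y + m[9]  * z + m[13];
  const cw = m[3] * x + m[7] * y + m[11] * z + m[15];
  // clipSpace = gl_Position.xy / gl_Position.w, then * 0.5 + 0.5
  return [cx / cw * 0.5 + 0.5, cy / cw * 0.5 + 0.5];
}
```

For planar mapping you would feed in an orthographic matrix instead of a perspective one, so cw stays 1 and the result no longer depends on distance to the projector.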


