Transform the modelMatrix

When rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix and the projection matrix. Finally, the projected scene is mapped to the viewport.

Model coordinates (Object coordinates)

Model space is the local coordinate system in which a mesh is defined. The vertex coordinates of the mesh are given in model space.

[figure: model coordinates]

World coordinates

World space is the coordinate system of the scene. Different models (objects) can be placed in world space multiple times to form a scene together.

Model matrix

The model matrix defines the location, orientation and the relative size of a model (object, mesh) in the scene. The model matrix transforms the vertex positions of a single mesh to world space for a single specific positioning. There are different model matrices, one for each combination of a model (object) and a location of the object in the world space.

The model matrix looks like this:

( X-axis.x, X-axis.y, X-axis.z, 0 )
( Y-axis.x, Y-axis.y, Y-axis.z, 0 )
( Z-axis.x, Z-axis.y, Z-axis.z, 0 )
( trans.x, trans.y, trans.z, 1 )

e.g.:

(  0.0, -0.5,  0.0,  0.0 )
(  2.0,  0.0,  0.0,  0.0 )
(  0.0,  0.0,  1.0,  0.0 )
(  0.4,  0.0,  0.0,  1.0 )
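A minimal plain-Python sketch (illustrative only) of what this example matrix does to a vertex: each printed row above is one column of the matrix in OpenGL memory order, i.e. the model's axis vectors plus the translation.

```python
# The example model matrix above, stored as its basis vectors plus translation.
x_axis = (0.0, -0.5, 0.0)
y_axis = (2.0,  0.0, 0.0)
z_axis = (0.0,  0.0, 1.0)
trans  = (0.4,  0.0, 0.0)

def model_transform(v):
    """world = v.x * X-axis + v.y * Y-axis + v.z * Z-axis + translation"""
    return tuple(v[0] * x_axis[i] + v[1] * y_axis[i] + v[2] * z_axis[i] + trans[i]
                 for i in range(3))

# The model's local point (1, 1, 0) ends up at (2.4, -0.5, 0.0) in world space.
print(model_transform((1.0, 1.0, 0.0)))
```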

[figure: model to world]

View space (Eye coordinates)

View space is the local coordinate system defined by the point of view onto the scene.
The position of the view, the line of sight and the upward direction of the view define a coordinate system relative to the world coordinate system. The objects of a scene have to be drawn in relation to the view coordinate system in order to be "seen" from the viewing position. The inverse of the matrix of the view coordinate system is called the view matrix.

In general, world coordinates and view coordinates are Cartesian coordinates.

View matrix

The view coordinate system describes the direction and position from which the scene is looked at. The view matrix transforms from world space to view (eye) space.

If the coordinate system of the view space is a right-handed system, then the X-axis points to the right, the Y-axis up and the Z-axis out of the view (note: in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
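A plain-Python sketch (illustrative, not a library API) of how such a view matrix can be built from an eye position, a center point and an up vector:

```python
import math

def look_at(eye, center, up):
    """Build a right-handed view matrix (rows: camera X, Y, Z axes).
    It is the inverse of the camera's coordinate frame."""
    sub   = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot   = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    norm  = lambda a: [c / math.sqrt(dot(a, a)) for c in a]

    z = norm(sub(eye, center))   # the line of sight is -Z
    x = norm(cross(up, z))       # X points to the right
    y = cross(z, x)              # Y points upwards
    # Rotating by the camera axes and subtracting the eye position
    # moves the world into view space.
    return [x + [-dot(x, eye)],
            y + [-dot(y, eye)],
            z + [-dot(z, eye)],
            [0.0, 0.0, 0.0, 1.0]]

view = look_at([2.5, -1.5, 3.5], [2.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# The eye position itself maps to the origin of view space:
eye_vs = [sum(view[r][c] * [2.5, -1.5, 3.5, 1.0][c] for c in range(4))
          for r in range(3)]
print(eye_vs)
```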

[figure: view coordinates]

Clip coordinates

Clip space coordinates are Homogeneous coordinates. In clip space the clipping of the scene is performed.

A point is in clip space if the x, y and z components are in the range defined by the inverted w component and the w component of the homogeneous coordinates of the point:

-w <=  x, y, z  <= w.
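This containment test can be written directly, e.g. in Python:

```python
def inside_clip(p):
    """p = (x, y, z, w): homogeneous clip-space coordinates."""
    x, y, z, w = p
    return all(-w <= c <= w for c in (x, y, z))

print(inside_clip((0.5, -0.25, 0.9, 1.0)))  # True: all components within [-w, w]
print(inside_clip((1.5,  0.0,  0.0, 1.0)))  # False: x > w, so the point is clipped
```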

Projection matrix

The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from view space to clip space. The coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.

e.g.:

look at: eye position (2.5, -1.5, 3.5), center (2, 0, 0), up vector (0, 1, 0)

perspective projection: field of view (y) of 100°, near plane at 0.1, far plane at 20.0

[figure: perspective projection]
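A sketch of how the perspective matrix for these parameters could be built (plain Python, row-major with column vectors; the aspect ratio of 1.0 is an assumed value, since the example does not state one):

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Right-handed perspective matrix mapping view space to clip space;
    depth ends up in [-1, 1] after the perspective divide."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

# field of view (y) of 100 degrees, near plane at 0.1, far plane at 20.0
proj = perspective(100.0, 1.0, 0.1, 20.0)

def project(m, v):
    clip = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = clip[3]
    return [clip[0] / w, clip[1] / w, clip[2] / w]   # perspective divide -> NDC

# A point on the near plane lands at NDC z = -1, one on the far plane at z = +1:
print(project(proj, [0.0, 0.0, -0.1, 1.0])[2])
print(project(proj, [0.0, 0.0, -20.0, 1.0])[2])
```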

Normalized device coordinates

The normalized device coordinates are the clip space coordinates divided by the w component of the clip coordinates. This is called the perspective divide.

[figure: normalized device coordinates]
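The perspective divide itself is just a component-wise division by w, e.g.:

```python
def perspective_divide(clip):
    """Clip-space (x, y, z, w) -> normalized device coordinates (x, y, z)."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

print(perspective_divide((2.0, -1.0, 0.5, 2.0)))  # (1.0, -0.5, 0.25)
```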

Window coordinates (Screen coordinates)

The window coordinates are the coordinates of the viewport rectangle. The window coordinates are finally passed to the rasterization process.

Viewport and depth range

The normalized device coordinates are linearly mapped to the window coordinates (screen coordinates) and to the depth for the depth buffer.
The viewport is defined by glViewport. The depth range is set by glDepthRange and is [0, 1] by default.
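The linear mapping can be sketched as follows (plain Python; the parameters mirror glViewport(x, y, width, height) and the default glDepthRange of [0, 1]):

```python
def ndc_to_window(ndc, vp_x, vp_y, vp_w, vp_h, depth_near=0.0, depth_far=1.0):
    """Map normalized device coordinates in [-1, 1] to window coordinates
    and to the depth range."""
    x, y, z = ndc
    wx = vp_x + (x + 1.0) * 0.5 * vp_w
    wy = vp_y + (y + 1.0) * 0.5 * vp_h
    wz = depth_near + (z + 1.0) * 0.5 * (depth_far - depth_near)
    return (wx, wy, wz)

# An 800x600 viewport at the origin:
print(ndc_to_window((-1.0, -1.0, -1.0), 0, 0, 800, 600))  # (0.0, 0.0, 0.0)
print(ndc_to_window(( 1.0,  1.0,  1.0), 0, 0, 800, 600))  # (800.0, 600.0, 1.0)
```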

Compositing a transformation in glm and OpenGL

The matrix multiplication order is wrong: the example code calculates T * S instead of S * T. Matrix multiplications are not commutative, so the result differs from what you expect.

The following code should produce the result you need:

glm::mat4 modelMatrix(1.0f);
modelMatrix = glm::scale(modelMatrix, glm::vec3(1.5f, 1.5f, 1.0f));
modelMatrix = glm::translate(modelMatrix, glm::vec3(0.5f, 0.5f, 0.5f));
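To see why the order matters, here is a small plain-Python check (illustrative; 4x4 row-major matrices with column vectors) using the scale and translation values from the snippet above:

```python
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

S = scale(1.5, 1.5, 1.0)
T = translate(0.5, 0.5, 0.5)
p = [1.0, 0.0, 0.0, 1.0]

# S * T: the point is translated first, then scaled -- the translation is scaled too.
print(apply(matmul(S, T), p))  # [2.25, 0.75, 0.5, 1.0]
# T * S: the point is scaled first, then translated.
print(apply(matmul(T, S), p))  # [2.0, 0.5, 0.5, 1.0]
```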

How to transform (translate, rotate, scale) an object correctly using GLM or JOML matrices for modern OpenGL

Solution:

Matrix4f moveOriginMat = new Matrix4f();
Vector3d centroid = getPickedObjectLocalCentroid();
moveOriginMat.translation(
    -(float)centroid.x * (scaleMat.m00() - 1),
    -(float)centroid.y * (scaleMat.m11() - 1),
    -(float)centroid.z * (scaleMat.m22() - 1));
modelMatrix.set(moveOriginMat);
modelMatrix.mul(scaleMat);
modelMatrix.translate((float)centroid.x, (float)centroid.y, (float)centroid.z);
modelMatrix.mul(zRotationMatrix);
modelMatrix.mul(yRotationMatrix);
modelMatrix.mul(xRotationMatrix);
modelMatrix.translate(-(float)centroid.x, -(float)centroid.y, -(float)centroid.z);
modelMatrix.mul(translationMatrix);

I just needed to add a translation to the origin before the rotation and a translation back afterwards. The signs of the translations also confused me.
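The translate-rotate-translate-back pattern can be sketched like this (plain Python, rotation about the Z axis; names are illustrative):

```python
import math

def rotate_about_pivot(p, pivot, angle_deg):
    """Rotate point p about the Z axis through pivot:
    translate to the origin, rotate, translate back."""
    a = math.radians(angle_deg)
    # translate so the pivot sits at the origin
    x, y, z = p[0] - pivot[0], p[1] - pivot[1], p[2] - pivot[2]
    # rotate about Z
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    # translate back
    return (rx + pivot[0], ry + pivot[1], z + pivot[2])

centroid = (2.0, 3.0, 0.0)
# The pivot itself stays fixed, while points around it orbit it:
print(rotate_about_pivot(centroid, centroid, 90.0))
print(rotate_about_pivot((3.0, 3.0, 0.0), centroid, 90.0))
```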

How to do matrix multiplication with `ModelMatrix` object in Julia?

If you check the source of ModelMatrix, you can see that the object has a property m, which is the underlying matrix. You can pull it out using mm.m (where mm is a ModelMatrix).

Example:

Generating a ModelMatrix:

julia> using DataFrames

julia> df = DataFrame(X = randn(4), Y = randn(4), Z = randn(4))
4×3 DataFrames.DataFrame
│ Row │ X │ Y │ Z │
├─────┼──────────┼────────────┼──────────┤
│ 1 │ 0.766271 │ 0.669007 │ 0.232803 │
│ 2 │ 2.08208 │ 0.239115 │ 0.855068 │
│ 3 │ -1.48009 │ 0.00220079 │ 0.105638 │
│ 4 │ -1.57438 │ 0.650456 │ 0.557467 │

julia> mf = ModelFrame(Z ~ X + Y, df)
DataFrames.ModelFrame(4×3 DataFrames.DataFrame
│ Row │ Z │ X │ Y │
├─────┼──────────┼──────────┼────────────┤
│ 1 │ 0.232803 │ 0.766271 │ 0.669007 │
│ 2 │ 0.855068 │ 2.08208 │ 0.239115 │
│ 3 │ 0.105638 │ -1.48009 │ 0.00220079 │
│ 4 │ 0.557467 │ -1.57438 │ 0.650456 │
...

julia> mm = ModelMatrix(mf)
DataFrames.ModelMatrix{Array{Float64,2}}(4x3 Array{Float64,2}:
1.0 0.766271 0.669007
1.0 2.08208 0.239115
1.0 -1.48009 0.00220079
1.0 -1.57438 0.650456 ,[0,1,2])

Using the ModelMatrix:

julia> m = mm.m
4x3 Array{Float64,2}:
1.0 0.766271 0.669007
1.0 2.08208 0.239115
1.0 -1.48009 0.00220079
1.0 -1.57438 0.650456

julia> m * rand(3,1)
4x1 Array{Float64,2}:
1.9474
3.08515
-0.522879
-0.371708

Why transform normals with the transpose of the inverse of the modelview matrix?

Take a look at this tutorial:

https://paroj.github.io/gltut/Illumination/Tut09%20Normal%20Transformation.html

You can imagine that when the surface of a sphere stretches (e.g. the sphere is scaled along one axis), the normals of that surface all 'bend' towards each other. To keep the normals perpendicular to the surface, you need to apply the inverse of the scale to them. This is the same as transforming them with the inverse transpose matrix. The link above shows how to derive the inverse transpose matrix.

Also note that when the scale is uniform, you can simply pass the original matrix as the normal matrix. Imagine the same sphere being scaled uniformly along all axes: the surface does not stretch or bend, and neither do the normals.
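A small numeric check of this (plain Python, 3x3 matrices, non-uniform scale along X): the transformed normal stays perpendicular to a transformed surface tangent only when the inverse transpose is used.

```python
# Model matrix: non-uniform scale, 2x along X (3x3 part only).
# For a pure scale, the inverse transpose is just the reciprocal scale.
M      = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
M_invT = [[0.5, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

normal  = [1.0,  1.0, 0.0]   # normal of a 45-degree surface
tangent = [1.0, -1.0, 0.0]   # lies in the surface, perpendicular to the normal

t = apply(M, tangent)                 # tangents transform with M itself
print(dot(apply(M, normal), t))       # 3.0: transforming the normal with M breaks perpendicularity
print(dot(apply(M_invT, normal), t))  # 0.0: the inverse transpose keeps it perpendicular
```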

Transformations of different models not working

Even though you're not building an explicit scene graph, there's always some kind of simple scene graph involved when you have more than one object to render.

You have to reset to the identity matrix and do the matrix multiplications separately for each object.

void initialize()
{
    modelA.load("file.obj");
    modelB.load("file2.3ds");
    // compile and link shaders
    // init lighting
    // init camera

    modelB.translate(glm::vec3(-10.f, 0.f, 0.f));
}

void passUniforms()
{
    auto MVP = projectionMatrix * viewMatrix * modelMatrix;

    GLint MVP_id = glGetUniformLocation(programHandle, "MVP");
    glUniformMatrix4fv(MVP_id, 1, GL_FALSE, glm::value_ptr(MVP));

    // pass other uniforms
}

void render()
{
    modelMatrix = glm::mat4(1.0);
    modelMatrix *= modelA.getTransformationMatrix();
    passUniforms();
    modelA.render(programHandle);

    modelMatrix = glm::mat4(1.0);
    modelMatrix *= modelB.getTransformationMatrix();
    passUniforms();
    modelB.render(programHandle);
}

There's no need to calculate inverses.

Calculating a transformation matrix to place an object on a sphere in glsl

I found a solution to the problem that allows me to place objects on the surface of a sphere, facing in the correct direction. Here is the code:

mat4 m = mat4(1);

vec3 worldPos = getWorldPoint(sphericalCoords);

// Add a small offset to the world position, then normalize it so that it is a
// point on the unit sphere slightly different from the world position. The
// vector between them is a tangent. Change this offset to rotate the object
// once placed on the sphere.
vec3 xAxis = normalize(normalize(worldPos + vec3(0.0, 0.2, 0.0)) - normalize(worldPos));

// The planet is at (0,0,0), so the world position can be used as the normal,
// and therefore as the y axis.
vec3 yAxis = normalize(worldPos);

// Cross the y and x axes to generate a bitangent to use as the z axis.
vec3 zAxis = normalize(cross(yAxis, xAxis));

// This is our rotation matrix.
mat3 baseMat = mat3(xAxis, yAxis, zAxis);

// Fill it into a 4x4 matrix.
m = mat4(baseMat);

// Translate m by the radius along the y axis to put it on the surface.
mat4 m2 = transformMatrix(mat4(1), vec3(0, radius, 0));
m = m * m2;

// Multiply by the MVP to project correctly.
m = mvp * m;

// Draw an instance of your object.
drawInstance(m);

