The Model-View Transform

In a simple OpenGL application, one of the most common transformations is the one that takes a model from model space into view space so that it can be rendered. In effect, we move the model first into world space (that is, we place it relative to the world's origin) and then from world space into view space (placing it relative to the viewer). This process establishes the vantage point of the scene. By default, the point of observation in a perspective projection is at the origin (0, 0, 0), looking down the negative z axis (into the monitor or screen). This point of observation is moved relative to the eye coordinate system to provide a specific vantage point. When the point of observation is located at the origin, as in a perspective projection, objects drawn with positive z values are behind the observer. In an orthographic projection, however, the viewer is assumed to be infinitely far away on the positive z axis and can see everything within the viewing volume.

Because this transform takes vertices from model space (which is also sometimes known as object space) directly into view space, effectively bypassing world space, it is often referred to as the model-view transform, and the matrix that encodes it is known as the model-view matrix. The model transform essentially places objects into world space. Each object is likely to have its own model transform, which will generally consist of a sequence of scale, rotation, and translation operations. Multiplying the positions of vertices in model space by the model transform yields a set of positions in world space; this transformation is therefore sometimes called the model-world transform. The view transform allows you to place the point of observation anywhere you want and look in any direction. Determining the view transform is like placing and pointing a camera at the scene.
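For example, a typical model transform might scale an object, rotate it, and then translate it into position in the world. The following is a minimal sketch, assuming the vmath.h header that ships with the book's sample code; the specific angles and distances are placeholder values.

#include "vmath.h"   // math header used by the book's sample code (assumed available)

// Model transform: scale first, then rotate, then translate into the world.
// With column vectors, the transform applied first appears last in the product,
// so a model-space vertex v lands at (translate * rotate * scale) * v in world space.
vmath::mat4 model_world =
    vmath::translate(0.0f, 0.0f, -10.0f) *     // place the object 10 units along -z
    vmath::rotate(45.0f, 0.0f, 1.0f, 0.0f) *   // rotate 45 degrees about the y axis
    vmath::scale(2.0f, 2.0f, 2.0f);            // double the object's size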
In the grand scheme of things, you must apply the viewing transformation before any other modeling transformations. The reason is that it appears to move the current working coordinate system with respect to the eye coordinate system; all subsequent transformations then occur based on the newly modified coordinate system. The transform that moves coordinates from world space to view space is sometimes called the world-view transform. Concatenating the model-world and world-view transform matrices by multiplying them together yields the model-view matrix, that is, the matrix that takes coordinates all the way from model space to view space. There are a couple of advantages to doing this. First, there are likely to be many models in your scene and many vertices in each model; using a single composite transform to move the model into view space is more efficient than moving it first into world space and then into view space. The second advantage has more to do with the numerical accuracy of single-precision floating-point numbers: the world could be huge, and computation performed in world space will have different precision depending on how far the vertices are from the world origin. If you perform the same calculations in view space, however, precision depends on how far the vertices are from the viewer, which is probably what you want. A great deal of precision is applied to objects that are close to the viewer, at the expense of precision for objects very far away.
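As a sketch of that concatenation, still assuming the book's vmath.h and reusing the model_world matrix from the previous sketch, the composite matrix is built once per object and then applied to every vertex; the eye, center, and up values and the mv_location uniform location are placeholders for illustration.

// world -> view: position and aim the camera
vmath::mat4 world_view = vmath::lookat(vmath::vec3(0.0f, 2.0f,  5.0f),   // eye position
                                       vmath::vec3(0.0f, 0.0f, -10.0f),  // point of interest
                                       vmath::vec3(0.0f, 1.0f,  0.0f));  // up direction

// model -> view: concatenate once; each vertex then needs only one matrix-vector multiply
vmath::mat4 model_view = world_view * model_world;

// Upload the single composite matrix to the vertex shader
// (mv_location is a hypothetical uniform location queried elsewhere).
glUniformMatrix4fv(mv_location, 1, GL_FALSE, model_view);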

The Lookat Matrix

Given an eye (or camera) position, a point of interest, and a direction that we consider to be up, we want to construct a matrix that represents a rotation pointing the camera in the correct direction and a translation that moves the origin to the center of the camera. This matrix is known as a lookat matrix and can be constructed using only the math covered in this chapter so far.

First, we know that subtracting two positions gives us a vector that would move a point from the first position to the second, and normalizing that result gives us its direction. So, if we take the coordinates of a point of interest, subtract from that the position of our camera, and then normalize the resulting vector, we have a new vector that represents the direction of view from the camera to the point of interest. We call this the forward vector.

Next, we know that if we take the cross product of two vectors, we get a third vector that is orthogonal (at a right angle) to both input vectors. We have two vectors: the forward vector we just calculated, and the up vector, which represents the direction we consider to be upward. Taking the cross product of those two vectors results in a third vector that is orthogonal to each of them and points sideways with respect to our camera; we call this the sideways vector. However, the up and forward vectors are not necessarily orthogonal to each other, and we need a third orthogonal vector to construct a rotation matrix. To obtain this vector, we can simply apply the same process again, taking the cross product of the forward vector and our sideways vector to produce a third vector that is orthogonal to both and that represents up with respect to the camera. These three vectors are of unit length and are all orthogonal to one another, so they form a set of orthonormal basis vectors and represent our view frame. Given these three vectors, we can construct a rotation matrix that will take a point in the standard Cartesian basis and move it into the basis of our camera. In the following math, e is the eye (or camera) position, p is the point of interest, and u is the up vector. Here we go. First, construct our forward vector, f:
f = (p - e) / |p - e|
Next, take the cross product of f and u to construct a side vector, s:
s = f × u
Now, construct a new up vector, u′, in the reference frame of our camera:
u′ = s × f
Finally, construct a rotation matrix representing a reorientation into our newly constructed orthonormal basis:

    |  s.x   s.y   s.z  0 |
R = |  u′.x  u′.y  u′.z 0 |
    | -f.x  -f.y  -f.z  0 |
    |  0     0     0    1 |
Finally, we have our lookat matrix, T, which is this rotation combined with a translation that moves the eye position to the origin. If this seems like a lot of steps to you, you're in luck: there is a function in the vmath library that will construct the matrix for you:

template <typename T>
static inline Tmat4<T> lookat(const vecN<T,3>& eye, const vecN<T,3>& center,
                              const vecN<T,3>& up) { ... }
The matrix produced by the vmath::lookat function can be used as the basis for your camera matrix, which is the matrix that represents the position and orientation of your camera. In other words, this can be your view matrix.
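For completeness, here is a from-scratch sketch that follows the derivation above step by step. It is not vmath's actual implementation; it assumes the vmath.h header from the book's sample code and that helpers such as vmath::normalize, vmath::cross, vmath::translate, vmath::mat4::identity, and column access through operator[] are available as named, so treat the exact calls as assumptions.

// A sketch of the lookat construction derived above (not vmath's own implementation).
vmath::mat4 my_lookat(const vmath::vec3& eye,
                      const vmath::vec3& center,
                      const vmath::vec3& up)
{
    const vmath::vec3 f = vmath::normalize(center - eye);        // forward vector
    const vmath::vec3 s = vmath::normalize(vmath::cross(f, up)); // sideways vector (normalized
                                                                  // in case up is not exactly
                                                                  // perpendicular to f)
    const vmath::vec3 u = vmath::cross(s, f);                    // camera-relative up

    // Rotation that re-expresses world-space points in the camera's basis.
    // Its rows are s, u, and -f, so the camera looks down its own -z axis.
    vmath::mat4 R = vmath::mat4::identity();
    R[0] = vmath::vec4(s[0], u[0], -f[0], 0.0f);   // first column
    R[1] = vmath::vec4(s[1], u[1], -f[1], 0.0f);   // second column
    R[2] = vmath::vec4(s[2], u[2], -f[2], 0.0f);   // third column

    // Finish with a translation that moves the eye position to the origin.
    return R * vmath::translate(-eye[0], -eye[1], -eye[2]);
}

In practice you would simply call vmath::lookat(eye, center, up) and use the result as your view matrix, exactly as described above.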
