I'm trying to make an object render independently of the camera position, just like skybox cubemaps. I found an OpenGL article which said I need to build a matrix that contains only the upper-left 3x3 corner of the view matrix (convert the 4x4 matrix to a 3x3 matrix and then back to a 4x4). This has the effect of zeroing the last row and last column. I know Unity uses DirectX rather than OpenGL, but a camera matrix is a camera matrix, so I tried:

Code (CSharp):
Matrix4x4 view = Matrix4x4.LookAt(...);
view.SetColumn(3, Vector4.zero);
view.SetRow(3, Vector4.zero);

The affected object is no longer visible. I then thought that after zeroing the last row and column I have to set the bottom-right element to 1, like the identity matrix has:

Code (CSharp):
Matrix4x4 view = Matrix4x4.LookAt(...);
view.SetColumn(3, Vector4.zero);
view.SetRow(3, new Vector4(0, 0, 0, 1f));

The object is still not appearing.

It's completely unclear where or how you use this matrix ^^. Note that the camera matrix is adjusted based on the underlying graphics API. Unity generally uses the OpenGL convention in shaders, so inside the shader pretty much everything is right-handed while Unity itself is left-handed. That is actually corrected through the camera matrix. Also note that the camera matrix is the inverse of the camera's transform, not the camera's transform itself: the view matrix should bring world space into camera space, not camera space into world space. From your snippets it's completely unclear how you create or use this matrix.

I think you posted in the wrong thread. I understand that. When a cubemap is rendered, a first-person camera moving through the world doesn't affect its rendering; if it did, you would be able to walk outside the skybox. The rotation of the camera is still accounted for when the cubemap renders, because otherwise it would just be a still background, yet we are able to look around at the skybox's sides. Here: https://learnopengl.com/Advanced-OpenGL/Cubemaps it says "taking the upper-left 3x3 matrix of the 4x4 matrix. We can achieve this by converting the view matrix to a 3x3 matrix (removing translation) and converting it back to a 4x4 matrix:"

Code (CSharp):
glm::mat4 view = glm::mat4(glm::mat3(camera.GetViewMatrix()));

I'm trying to do this in Unity. I created a shader for this object, sent the matrix to the shader, and applied it to the vertex position. Does this mean that in the shader forward is (0, 0, -1) while in Unity forward is (0, 0, 1)? Or does it mean that in the shader the multiplication is with column vectors while in Unity it's with row vectors, and the matrices are transposed for the shader?

I completely understood your intention. However, you used Matrix4x4.LookAt in your code, which produces an object-to-world matrix and not the required inverse. Also, from your latest reply it's still not clear how you did that and whether you used the matrices inside the shader correctly. Again, there are simply too many unclear variables. There's no real need to convert between 4x4 and 3x3 just to get rid of the translation: all you have to do is set m03, m13 and m23 to 0f. You may have a look at my matrix crash course for reference. As I said, the first thing you need is a valid camera / view matrix in the first place. Note that in Unity the camera / view matrix is provided through worldToCameraMatrix. Make sure you read the documentation carefully.
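To see what zeroing just m03, m13 and m23 does, here is a minimal sketch of the matrix math in plain Python (no Unity involved; the rotation angle and translation below are made-up example values, column-vector convention):

```python
import math

def mat_vec(m, v):
    # Multiply a 4x4 matrix (row-major list of rows) by a column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

angle = math.radians(90)
c, s = math.cos(angle), math.sin(angle)

# A view-like matrix: rotation about Y plus a translation of (5, 0, 0).
view = [
    [ c, 0.0, s, 5.0],
    [0.0, 1.0, 0.0, 0.0],
    [-s, 0.0, c, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

p = [0.0, 0.0, 1.0, 1.0]  # a point (w = 1 picks up the translation)
print(mat_vec(view, p))   # rotated AND translated

# Zero only the translation entries (m03, m13, m23); the bottom-right 1 stays.
for r in range(3):
    view[r][3] = 0.0
print(mat_vec(view, p))   # rotated only -- the camera position no longer matters
```

Zeroing the whole last row or column (as in the first post) also destroys the `w = 1` element, which is why the object disappeared.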

I tried like this:

Code (CSharp):
using UnityEngine;

public class RemoveCameraPosition : MonoBehaviour
{
    public Camera camera;
    Matrix4x4 Position;
    Matrix4x4 Rotation;
    Matrix4x4 Scale;
    Matrix4x4 Object;
    Matrix4x4 View;
    Matrix4x4 Projection;

    void Update()
    {
        Position = Matrix4x4.Translate(transform.position);
        Rotation = Matrix4x4.Rotate(transform.rotation);
        Scale = Matrix4x4.Scale(transform.localScale);
        Object = Position * Rotation * Scale; // I tried Scale * Rotation * Position also
        View = Matrix4x4.LookAt(camera.transform.position, camera.transform.position + camera.transform.forward, Vector3.up);
        Projection = Matrix4x4.Perspective(camera.fieldOfView, camera.pixelWidth / (float)camera.pixelHeight, camera.nearClipPlane, camera.farClipPlane);
        GetComponent<Renderer>().material.SetMatrix("_object", Object);
        GetComponent<Renderer>().material.SetMatrix("_view", View);
        GetComponent<Renderer>().material.SetMatrix("_proj", Projection);
    }
}

Shader:

Code (CSharp):
...
float4x4 _object;
float4x4 _view;
float4x4 _proj;

v2f vert (float4 pos : POSITION, float2 uv : TEXCOORD0)
{
    v2f o;
    float4x4 OVP = mul(mul(_object, _view), _proj); // I also tried _proj * _view * _object
    o.pos = mul(pos, OVP); // I also tried mul(OVP, pos)
    o.uv = uv;
    return o;
}

I thought the matrix that converts the vertices from object space is the object matrix or transform matrix, that LookAt produces the camera/view matrix which converts from world space to camera space, and that the projection matrix converts from camera space to screen space.

Is that right? I have a vertex at (0, 1, 0). The position is defined in local/object space relative to the mesh's center. I want to place the mesh at (0, 7, 2), so I create a position matrix using Matrix4x4.Translate(new Vector3(0, 7, 2)), which generates something like this:

Code (CSharp):
1 0 0 0
0 1 0 0
0 0 1 0
x y z 1

or (for column vectors):

1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1

If I have a rotation and a scale, I multiply all the transformation matrices into one object matrix. For this example only the position changes. If I multiply the object matrix with the vertex position (0, 1, 0), I end up with (0, 8, 2), which is the position of the vertex in world space. It is like having an object childed to another object in Unity and using:

Code (CSharp):
Vector3 world_pos = child.transform.localPosition + child.transform.parent.position;

So I got from vertexLocalPosition (0, 1, 0) to vertexWorldPosition (0, 8, 2) using the object matrix. To make the vertex react to camera movement I use the lookat/camera/view matrix via Matrix4x4.LookAt(CameraPosition, CameraPosition + CameraForward, WorldUp). I place the camera at (1, 0, 0), looking at position (1, 0, 0) + cameraForward (0, 0, 1) = (1, 0, 1). Since the scene moves while the camera stays still, it is like moving the vertex by -CameraPosition and rotating it to match the camera forward (I set the camera forward to world forward so I don't have to worry about that for now). Then Matrix4x4.LookAt() generates a matrix that converts the vertex from world space (which the object matrix produced) to camera space. So if I apply both matrices my vertex becomes (0, 8, 2) + -(1, 0, 0) = (-1, 8, 2), and that's the vertex's position relative to the camera. It would also accumulate the rotation from the object matrix and from the camera forward direction, but I set those to identity to keep this example simple.
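The objectToWorld step in this example can be checked with a small plain-Python sketch (column-vector convention; the only transformation is the translation from the example above):

```python
# Translating the local vertex (0, 1, 0) by the object position (0, 7, 2)
# should land at the world position (0, 8, 2).
def mat_vec(m, v):
    # Multiply a 4x4 matrix (row-major list of rows) by a column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

translate = [
    [1, 0, 0, 0],
    [0, 1, 0, 7],
    [0, 0, 1, 2],
    [0, 0, 0, 1],
]
vertex = [0, 1, 0, 1]  # w = 1 so the translation column is applied
world = mat_vec(translate, vertex)
print(world[:3])  # [0, 8, 2]
```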
Then we have to project the vertex onto the screen, so we use Matrix4x4.Perspective or Matrix4x4.Ortho, which creates a matrix that converts from camera space (which the view matrix produced) to screen space, and now the pixel shader can tell the GPU: please make the monitor's pixel at this screen position green. So basically:

Object/Transform matrix = ObjectToWorld (Matrix4x4.Translate * Matrix4x4.Rotate * Matrix4x4.Scale)
Camera/View/LookAt matrix = WorldToCamera (Matrix4x4.LookAt)
Projection matrix = CameraToScreen (Cg/HLSL: unity_CameraProjection) (Matrix4x4.Perspective)
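The CameraToScreen step can also be sketched outside Unity in plain Python, using a standard OpenGL-style perspective matrix (column vectors, camera looking down -z; the fov/aspect/clip values below are made up for the example):

```python
import math

def perspective(fov_deg, aspect, near, far):
    # Standard OpenGL perspective projection matrix.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, v):
    # Matrix * column vector, then the perspective divide to get NDC.
    clip = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = clip[3]
    return [clip[0] / w, clip[1] / w, clip[2] / w]

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
# A point straight ahead of the camera projects to the center of the screen.
print(project(proj, [0.0, 0.0, -10.0, 1.0]))  # x = 0, y = 0
```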

No, I think you confused Matrix4x4.LookAt with the GLU method gluLookAt because of the similar name. They are not the same thing. Unity's LookAt method, as you can read in the documentation, just creates a transformation matrix. Matrix4x4.LookAt simply does this:

Code (CSharp):
Matrix4x4.TRS(from, Quaternion.LookRotation(to - from, up), Vector3.one);

The view matrix is just a transformation matrix as well, but it does the opposite. Every object is transformed into world space, and after that from world space into the object space of the camera, which is simply called camera space. So the view matrix is simply the inverse transform of the camera object itself. Though as mentioned above, worldToCameraMatrix in addition converts from Unity's left-handed system into the OpenGL right-handed system by inverting the z axis. Unity's LookAt method creates a transformation that makes the object it is applied to be located at "from" and look at "to" (by "looking at" we mean the positive z axis points towards that target). The view matrix has to do the opposite thing. So if you want to calculate your "view matrix" yourself with LookAt, you have to take the inverse of that matrix. You probably also need to flip the z axis manually with an additional scale matrix like this:

Code (CSharp):
1 0 0 0
0 1 0 0
0 0 -1 0
0 0 0 1

Though it seems weird that you want to calculate the view matrix yourself. Is there a reason you don't want to use a Camera? Just take the view matrix from your camera and remove the position / offset. This could even be done inside a shader, especially when you already have a specialized shader for your skybox / cube map. Note that there are many other solutions that don't even require any extra transformation, just a proper texture lookup, like in this doom style skybox. That type of skybox did not have any bottom or top face, as in Doom you couldn't look up or down.
However, doing a cube map texture lookup is just as possible in the fragment shader.
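The "view matrix is the inverse of the camera's transform" point can be sketched in plain Python. For a rigid transform (rotation R plus translation t) the inverse is simply (R transposed, -R transposed * t), so no general 4x4 inversion is needed; the camera position and matrices below are made-up example values:

```python
def mat_vec(m, v):
    # 4x4 matrix (row-major list of rows) times a column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    # 4x4 matrix product a * b.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

def rigid_inverse(m):
    # Transpose the upper-left 3x3 rotation, then apply -R^T * t.
    inv = [[m[c][r] for c in range(3)] + [0.0] for r in range(3)]
    inv.append([0.0, 0.0, 0.0, 1.0])
    for r in range(3):
        inv[r][3] = -sum(inv[r][c] * m[c][3] for c in range(3))
    return inv

# A camera sitting at (1, 2, 3) with no rotation.
cam = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
]
view = rigid_inverse(cam)

# The z-flip converting between left- and right-handed conventions.
z_flip = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
view_gl = mat_mul(z_flip, view)

# The camera's own position must land at the origin of camera space,
# with or without the handedness flip.
print(mat_vec(view, [1.0, 2.0, 3.0, 1.0]))
print(mat_vec(view_gl, [1.0, 2.0, 3.0, 1.0]))
```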

So let me see if I understood:

Code (CSharp):
S R R 0
R S R 0
R R S 0
T T T 1

S - scale, R - rotation, T - position (objectToWorld)
View matrix (not the same as Matrix4x4.LookAt) (worldToCamera)
Perspective projection matrix (Matrix4x4.Perspective) (cameraToScreen)

It is confusing because OpenGL's LookAt() generates its matrix directly from Eye = from and At = to, and I don't understand why Unity's LookAt generates something else.

I ran some tests and here is what I found. I called LookAt with the same parameters in 3 APIs.

DirectX 11 (SharpDX):

Code (CSharp):
Matrix view = Matrix.LookAtLH(new Vector3(1, 2, 3), new Vector3(3, 2, 4), Vector3.Up);
for (int i = 0; i < 16; i++)
{
    Console.Write(String.Format("{0:0.00}", view.ToArray()[i]) + " ");
    if ((i + 1) % 4 == 0) Console.WriteLine(); // 3, 7, 11, 15
}

Output:

0.45 0.00 0.89 0.00
0.00 1.00 0.00 0.00
-0.89 0.00 0.45 0.00
2.24 -2.00 -2.24 1.00

Unity:

Code (CSharp):
Matrix4x4 view = Matrix4x4.LookAt(new Vector3(1, 2, 3), new Vector3(3, 2, 4), Vector3.up);
string text = null;
for (int i = 0; i < 4; i++)
    text += view.GetRow(i).ToString() + "\n";
print(text);

Output:

(0.4, 0.0, 0.9, 1.0)
(0.0, 1.0, 0.0, 2.0)
(-0.9, 0.0, 0.4, 3.0)
(0.0, 0.0, 0.0, 1.0)

OpenGL (GLM):

Code (CSharp):
glm::mat4 viewMatrix = glm::lookAt(glm::vec3(1, 2, 3), glm::vec3(3, 2, 4), glm::vec3(0, 1, 0));
const float *pSource = (const float*)glm::value_ptr(viewMatrix);
for (int i = 0; i < 16; i++)
{
    printf("%4.2f ", pSource[i]);
    if ((i + 1) % 4 == 0) printf("\n"); // 3, 7, 11, 15
}

Output:

0.45 0.00 -0.89 2.24
0.00 1.00 0.00 -2.00
0.89 0.00 0.45 -2.24
0.00 0.00 0.00 1.00

So the standard DX LookAt is the same as the OpenGL lookAt (just transposed); Unity changed what LookAt produces.
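The transpose relationship between the DirectX and GLM outputs above is just the row-vector vs column-vector storage convention, which a quick plain-Python sketch can confirm: multiplying a row vector by M gives the same result as multiplying M transposed by the same values as a column vector. M below is the SharpDX output from the test; v is an arbitrary made-up vector:

```python
M = [
    [0.45, 0.00, 0.89, 0.00],
    [0.00, 1.00, 0.00, 0.00],
    [-0.89, 0.00, 0.45, 0.00],
    [2.24, -2.00, -2.24, 1.00],
]

def row_times_mat(v, m):
    # Row-vector convention (Direct3D style): result[c] = sum_r v[r] * m[r][c].
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def mat_times_col(m, v):
    # Column-vector convention (OpenGL style): result[r] = sum_c m[r][c] * v[c].
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

T = [[M[c][r] for c in range(4)] for r in range(4)]  # transpose of M

v = [1.0, 2.0, 3.0, 1.0]
print(row_times_mat(v, M))
print(mat_times_col(T, v))  # identical: same transform, different convention
```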

Why are you still comparing those different LookAt methods? As I said in my last post, Unity's LookAt method does not create a view matrix. The view matrix is just the inverse object matrix of the camera. Your tests are hard to follow since you use odd positions and an odd target location; it's almost impossible to tell what the result should be in a right-handed or left-handed system. Look at this example:

Code (CSharp):
public Camera cam;
public Vector3 pos;
public Vector3 target;
public Vector3 upRef = Vector3.up;

void Start()
{
    cam.transform.position = pos;
    cam.transform.LookAt(target, upRef);
    var view1 = cam.worldToCameraMatrix;
    var view2 = Matrix4x4.LookAt(pos, target, upRef);
    view2 = view2.inverse;
    view2.SetRow(2, -view2.GetRow(2));
    PrintMatrix("cam.worldToCamera", view1);
    PrintMatrix("view2", view2);
}

void PrintMatrix(string aName, Matrix4x4 aMat)
{
    Debug.Log(aName + "\n" + aMat.GetRow(0) + "\n" + aMat.GetRow(1) + "\n" + aMat.GetRow(2) + "\n" + aMat.GetRow(3) + "\n");
}

Here you can see I set the camera to "pos" and let its transform look at the target position with the same up reference, then grab worldToCameraMatrix, i.e. the view matrix. Second, I create view2 based on Matrix4x4.LookAt. As I said, the view matrix is the inverse of that. Also, as I already said, the view matrix in the shader is expected in the OpenGL format and therefore right-handed, so we have to negate the third row to invert the z axis. The two log statements give us:

Code (CSharp):
cam.worldToCamera
(0.4, 0.0, -0.9, 2.2)
(0.0, 1.0, 0.0, -2.0)
(-0.9, 0.0, -0.4, 2.2)
(0.0, 0.0, 0.0, 1.0)

view2
(0.4, 0.0, -0.9, 2.2)
(0.0, 1.0, 0.0, -2.0)
(-0.9, 0.0, -0.4, 2.2)
(0.0, 0.0, 0.0, 1.0)

As you can see the result is identical. I used your values, so pos is (1, 2, 3) and target is (3, 2, 4).

I just wanted to point out that the OpenGL and DirectX LookAt methods generate the view matrix directly, without requiring the inversion that Unity's does. I understand now that Unity's LookAt matrix is not the same as the view matrix. To clarify what I understood:

Code (CSharp):
WorldToCamera (view) = Matrix4x4.LookAt(...).inverse // Unity
WorldToCamera (view) = glm::lookAt(...)              // OpenGL - no inverting
WorldToCamera (view) = Matrix.LookAtLH(...)          // Direct3D 11 (SharpDX) - no inverting

SharpDX math is based on the DirectXMath library, so it behaves the same as regular C++ Direct3D 11. Anyway, I found out why the cubemap disappeared after removing the translation: without the translation the camera ended up inside the cubemap, and since I used a modified regular shader to render it, the inside faces were culled. After adding Cull Off to the shader, the cubemap works.