OpenGL ray tracing using inverse transformations
I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement ray picking that selects the object I'm clicking on, by transforming the ray origin and direction by the inverses of those matrices.
When I only had a model matrix (no view or projection) in the vertex shader, I used
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I then added view and projection matrices and changed the code to
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing works anymore. What am I doing wrong?
c++ opengl raytracing
edited Nov 25 '18 at 15:31 by Rabbid76
asked Nov 25 '18 at 11:42 by LivingRobot
1 Answer
If you use a perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen, mapped to the range [-1, 1]: the bottom-left corner is (-1, -1) and the top-right corner is (1, 1). The window (mouse) coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0f * mouse_x / window_width - 1.0f;
float y_ndc = 1.0f - 2.0f * mouse_y / window_height; // window y axis is flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1.0f, 1.0f); // z near = -1
Vector4f p_far_ndc  = Vector4f(x_ndc, y_ndc,  1.0f, 1.0f); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this, each point is a homogeneous coordinate, which can be converted to a Cartesian coordinate by the perspective divide:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by a point r and a normalized direction d, finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized();
I'm trying this and it's still not quite intersecting. It seems to be off by a large factor: the ray_origin components are all > 1, whereas previously they were all between 0 and 1.
– LivingRobot
Nov 27 '18 at 1:42
answered Nov 25 '18 at 15:30, edited Nov 25 '18 at 15:35 by Rabbid76