After estimating an object's pose with OpenCV, I wanted to render it with OpenGL using the camera intrinsic matrix.
I computed the projection matrix from the intrinsics, and then worked through how, in an OpenGL setup, the final pixel coordinate of a 3D point in the render window is actually computed.
I ended up with the code below; the OpenGL coordinate transformation pipeline I followed is described at this link:
http://www.songho.ca/opengl/gl_transform.html
The author is clearly an expert; the article is very well written and correct.
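In short, the chain described there is (my notation: M is the modelview matrix, P the projection matrix, W x H the viewport size, (x0, y0) the viewport offset):

p_clip = P * M * p_object
(x_ndc, y_ndc) = (x_clip / w_clip, y_clip / w_clip)
x_w = (x_ndc + 1) * W / 2 + x0
y_w = (y_ndc + 1) * H / 2 + y0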
The code below contains both the OpenGL-style calculation and the direct calculation via the pose transform:
#include <opencv2/core.hpp>
#include <iostream>

void calcPixelCoor(const cv::Mat& poseMat)
{
    // Homogeneous 3D point in object coordinates
    cv::Mat point3d = cv::Mat(4, 1, CV_64FC1);
    point3d.at<double>(0) = 41.1071;
    point3d.at<double>(1) = 8.41076;
    point3d.at<double>(2) = 36.0432;
    point3d.at<double>(3) = 1;

    // Expand the 3x4 pose [R|t] into a 4x4 matrix whose last row is 0, 0, 0, 1
    cv::Mat _poseMat = cv::Mat::zeros(4, 4, CV_64FC1);
    for (int i = 0; i < 3; i++)
    {
        for (int j = 0; j < 4; j++)
        {
            _poseMat.at<double>(i, j) = poseMat.at<double>(i, j);
        }
    }
    _poseMat.at<double>(3, 3) = 1;

    // 1) Modelview transform: object coordinates -> camera (eye) coordinates
    cv::Mat mvPt = _poseMat * point3d;

    // 2) Projection transform built from the camera intrinsics
    double farPlane = 2500.0;
    double nearPlane = 1;
    double width = 640;
    double height = 480;
    double alpha = 747.72457662;   // fx
    double beta = 744.939228;      // fy
    double cx = 332.835123;
    double cy = 252.998865;
    cv::Mat project = cv::Mat::zeros(4, 4, CV_64FC1);
    project.at<double>(0, 0) = -2 * alpha / width;
    project.at<double>(0, 2) = -(2 * (cx / width) - 1);
    project.at<double>(1, 1) = -2 * beta / height;
    project.at<double>(1, 2) = 1 - 2 * cy / height;
    project.at<double>(2, 2) = (farPlane + nearPlane) / (farPlane - nearPlane);
    project.at<double>(2, 3) = -2 * farPlane * nearPlane / (farPlane - nearPlane);
    project.at<double>(3, 2) = -1;
    cv::Mat proPt = project * mvPt;

    // 3) Perspective division: clip coordinates -> normalized device coordinates
    double x_ndc = proPt.at<double>(0) / proPt.at<double>(3);
    double y_ndc = proPt.at<double>(1) / proPt.at<double>(3);

    // 4) Viewport transform (viewport offset x = y = 0)
    int xw = (x_ndc + 1) * (width / 2) + 0;
    int yw = (y_ndc + 1) * (height / 2) + 0;
    std::cout << "xw " << xw << " yw " << yw << std::endl;

    // --- Direct pinhole projection with the intrinsic matrix, for comparison ---
    double params_LOGIC[] = { 747.72457662,   // fx
                              744.939228,     // fy
                              332.835123,     // cx
                              252.998865 };   // cy
    cv::Mat A_matrix = cv::Mat::zeros(3, 3, CV_64FC1);
    A_matrix.at<double>(0, 0) = params_LOGIC[0];   // [ fx  0 cx ]
    A_matrix.at<double>(1, 1) = params_LOGIC[1];   // [  0 fy cy ]
    A_matrix.at<double>(0, 2) = params_LOGIC[2];   // [  0  0  1 ]
    A_matrix.at<double>(1, 2) = params_LOGIC[3];
    A_matrix.at<double>(2, 2) = 1;

    // Homogeneous 2D point [u v 1]': A * [R|t] * X
    cv::Mat point2d_vec = A_matrix * poseMat * point3d;

    // Normalization of [u v]'
    cv::Point2f point2d;
    point2d.x = (float)(point2d_vec.at<double>(0) / point2d_vec.at<double>(2));
    point2d.y = (float)(point2d_vec.at<double>(1) / point2d_vec.at<double>(2));
    std::cout << "xw2 " << point2d.x << " yw2 " << point2d.y << std::endl;
}
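For context, here is a minimal sketch of how the 3x4 [R|t] matrix expected as poseMat above could be assembled from a pose estimated with cv::solvePnP. The point lists, distortion coefficients and the wrapper function projectWithPose are placeholders of mine, not part of the original code:

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

void projectWithPose(const std::vector<cv::Point3f>& objPts,
                     const std::vector<cv::Point2f>& imgPts,
                     const cv::Mat& A_matrix, const cv::Mat& distCoeffs)
{
    cv::Mat rvec, tvec;
    cv::solvePnP(objPts, imgPts, A_matrix, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);                      // rotation vector -> 3x3 rotation matrix

    cv::Mat poseMat = cv::Mat::zeros(3, 4, CV_64FC1);
    R.copyTo(poseMat(cv::Rect(0, 0, 3, 3)));     // left 3x3 block = R
    tvec.copyTo(poseMat(cv::Rect(3, 0, 1, 3)));  // last column = t

    calcPixelCoor(poseMat);
}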
glViewport performs exactly the viewport computation above; here x and y are taken as 0, i.e. the lower-left corner of the viewport is the origin.
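For the 640x480 image used above, the matching call is simply (assuming the render window has the same size as the image):

// Window coordinates produced by OpenGL for a viewport at (x, y) of size w x h:
//   xw = (x_ndc + 1) * w / 2 + x
//   yw = (y_ndc + 1) * h / 2 + y
glViewport(0, 0, 640, 480);   // x = 0, y = 0: origin at the lower-left corner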
The code that builds the projection matrix from the camera intrinsics is:
// Projection transform from the camera intrinsics
double farPlane = 2500.0;
double nearPlane = 1;
double width = 640;
double height = 480;
double alpha = 747.72457662;   // fx
double beta = 744.939228;      // fy
double cx = 332.835123;
double cy = 252.998865;
cv::Mat project = cv::Mat::zeros(4, 4, CV_64FC1);
project.at<double>(0, 0) = -2 * alpha / width;
project.at<double>(0, 2) = -(2 * (cx / width) - 1);
project.at<double>(1, 1) = -2 * beta / height;
project.at<double>(1, 2) = 1 - 2 * cy / height;
project.at<double>(2, 2) = (farPlane + nearPlane) / (farPlane - nearPlane);
project.at<double>(2, 3) = -2 * farPlane * nearPlane / (farPlane - nearPlane);
project.at<double>(3, 2) = -1;
I searched online for a long time for this way of building the matrix and could not find it, only some close variants; in the end I arrived at it by trial and error.
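As a sanity check on the matrix: writing (X, Y, Z) for the camera-space point mvPt, multiplying out and dividing by the clip-space w gives x_ndc = 2 * (alpha * X / Z + cx) / width - 1, so the viewport transform reduces x_w to exactly the pinhole projection u = alpha * X / Z + cx (and likewise y_w to v = beta * Y / Z + cy). That is why the two printouts in calcPixelCoor agree, up to the integer truncation of xw and yw.

One practical detail when handing this matrix to OpenGL (a sketch of mine, assuming the legacy fixed-function pipeline): cv::Mat stores its elements row-major, while glLoadMatrixd expects a column-major array, so the matrix has to be transposed first.

// Load the intrinsics-based projection matrix into the fixed-function pipeline.
// cv::Mat is row-major; glLoadMatrixd wants column-major, hence the transpose.
cv::Mat projectT = project.t();
glMatrixMode(GL_PROJECTION);
glLoadMatrixd(projectT.ptr<double>(0));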