
Is my transformation between the world, vehicle (ego), and camera coordinate systems wrong?

Asked on 2023-12-31 03:05:31
Answers: 0 · Follows: 0 · Views: 120

A point in the world coordinate system needs to be mapped into the camera coordinate system, and there are two ways to do it. I defined the following function:

Code language: python
import numpy as np
from pyquaternion import Quaternion


def get_obj3d_from_annotation(ann, ego_pose, calib_data):
    obj_ann = dict()

    if ann['category_name'] == 'vehicle.car':
        obj_type = 'car'
        obj_id = ann['instance_token']

        # object center in the global frame
        center_true = np.array(ann['translation'])
        temp_center = center_true.copy()

        global_orientation = Quaternion(np.array(ann['rotation']))
        x, y, z = temp_center[0], temp_center[1], temp_center[2]
        w, l, h = ann['size']
        # eight box corners in the object frame: x along length l, y along width w, z along height h
        x_corners = l / 2 * np.array([-1, 1, 1, -1, -1, 1, 1, -1])
        y_corners = w / 2 * np.array([1, 1, -1, -1, 1, 1, -1, -1])
        z_corners = h / 2 * np.array([-1, -1, -1, -1, 1, 1, 1, 1])
        # rotate the corners into the global frame and translate them to the center
        global_box3d = np.vstack((x_corners, y_corners, z_corners))
        global_box3d = np.dot(global_orientation.rotation_matrix, global_box3d)
        global_box3d[0, :] = global_box3d[0, :] + x
        global_box3d[1, :] = global_box3d[1, :] + y
        global_box3d[2, :] = global_box3d[2, :] + z

        # global -> ego: subtract the ego translation, then apply the inverse ego rotation
        ego_quat = Quaternion(ego_pose['rotation']).inverse
        temp_center -= np.array(ego_pose['translation'])
        temp_center = np.dot(ego_quat.rotation_matrix, temp_center)

        # ego -> camera: subtract the sensor translation, then apply the inverse sensor rotation
        temp_center -= np.array(calib_data['translation'])
        sensor_quat = Quaternion(calib_data['rotation']).inverse
        temp_center = np.dot(sensor_quat.rotation_matrix, temp_center)
        # compose the object orientation into the camera frame with the same inverse rotations
        orientation = ego_quat * global_orientation
        orientation = sensor_quat * orientation
        print('sensor0:', temp_center)

        # rebuild the box corners directly in the camera frame
        sensor_center = temp_center.copy()
        sensor_x, sensor_y, sensor_z = sensor_center
        sensor_x_corners = l / 2 * np.array([-1, 1, 1, -1, -1, 1, 1, -1])
        sensor_y_corners = w / 2 * np.array([1, 1, -1, -1, 1, 1, -1, -1])
        sensor_z_corners = h / 2 * np.array([-1, -1, -1, -1, 1, 1, 1, 1])
        sensor_box3d = np.vstack((sensor_x_corners, sensor_y_corners, sensor_z_corners))
        sensor_box3d = np.dot(orientation.rotation_matrix, sensor_box3d)
        sensor_box3d[0, :] = sensor_box3d[0, :] + sensor_x
        sensor_box3d[1, :] = sensor_box3d[1, :] + sensor_y
        sensor_box3d[2, :] = sensor_box3d[2, :] + sensor_z

        obj_ann['type'] = obj_type
        obj_ann['global_box3d'] = global_box3d
        obj_ann['global_center'] = np.array(ann['translation']).reshape(3, 1)

        obj_ann['sensor_box3d'] = sensor_box3d
        obj_ann['sensor_center'] = sensor_center

        obj_ann['id'] = obj_id

        # compose the chained extrinsic (global -> camera): x_cam = R2 @ (R1 @ x + t1) + t2
        R1 = Quaternion(ego_pose['rotation']).inverse.rotation_matrix
        t1 = -np.dot(R1, ego_pose['translation'])
        R2 = Quaternion(calib_data['rotation']).inverse.rotation_matrix
        t2 = -np.dot(R2, calib_data['translation'])

        extrinsic = np.eye(4)
        extrinsic[:3, :3] = np.dot(R2, R1)
        extrinsic[:3, 3] = np.dot(R2, t1) + t2

        obj_ann['extrinsic'] = extrinsic

        # homogeneous global-frame center for the matrix route
        center2 = center_true.reshape((3, 1))
        center2 = np.vstack((center2, np.array([1])))


        # note: this line adds an extra rotated t1 term and modifies `extrinsic`
        # in place (obj_ann['extrinsic'] references the same array)
        extrinsic[:3, 3] += np.dot(extrinsic[:3, :3], t1)
        sensor_center2 = np.dot(extrinsic, center2)
        print('sensor2:', sensor_center2[:3])

The first print outputs the object center center_true transformed from the global frame into the camera frame step by step: first subtract the ego pose's translation and apply the inverse of the ego rotation (global frame to vehicle frame), then subtract the camera's extrinsic-calibration translation and apply the inverse of the sensor rotation (vehicle frame to camera frame); the object's orientation is composed into the camera frame with the same inverse rotations. What is printed is the object center in the camera coordinate system.
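For reference, here is that step-by-step chain pulled out into a minimal standalone sketch. global_to_camera is a hypothetical name of my own; it assumes pyquaternion and nuScenes-style records with 'translation' and 'rotation' fields, exactly as the function above uses them.

Code language: python

import numpy as np
from pyquaternion import Quaternion

def global_to_camera(point_global, ego_pose, calib_data):
    # global -> ego: subtract the ego translation, rotate by the inverse ego rotation
    p = np.array(point_global, dtype=float) - np.array(ego_pose['translation'])
    p = np.dot(Quaternion(ego_pose['rotation']).inverse.rotation_matrix, p)
    # ego -> camera: subtract the sensor translation, rotate by the inverse sensor rotation
    p = p - np.array(calib_data['translation'])
    p = np.dot(Quaternion(calib_data['rotation']).inverse.rotation_matrix, p)
    return p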

The second print outputs the same global-frame center center_true mapped into the camera frame through the extrinsic matrix. Here the extrinsic computed inside the function is applied directly; it is supposed to map a point from the world frame to the camera frame. Why do the two printed results differ so much? Where did I go wrong? Any advice would be appreciated.
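For comparison, a minimal sketch of the one-shot route under the same assumptions (make_extrinsic is likewise a hypothetical name). Algebraically, x_cam = R2 @ (R1 @ x + t1) + t2 = (R2 @ R1) @ x + (R2 @ t1 + t2), so a 4x4 matrix composed this way and applied unmodified to a homogeneous point should reproduce the step-by-step result.

Code language: python

import numpy as np
from pyquaternion import Quaternion

def make_extrinsic(ego_pose, calib_data):
    # global -> ego
    R1 = Quaternion(ego_pose['rotation']).inverse.rotation_matrix
    t1 = -np.dot(R1, np.array(ego_pose['translation']))
    # ego -> camera
    R2 = Quaternion(calib_data['rotation']).inverse.rotation_matrix
    t2 = -np.dot(R2, np.array(calib_data['translation']))
    T = np.eye(4)
    T[:3, :3] = np.dot(R2, R1)
    T[:3, 3] = np.dot(R2, t1) + t2
    return T

# usage: p_cam = np.dot(make_extrinsic(ego_pose, calib_data), np.append(p_global, 1.0))[:3]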
