US 12,169,232 B2
Radar and camera data fusion
Amin Ansari, San Diego, CA (US); Sundar Subramanian, San Diego, CA (US); Radhika Dilip Gowaikar, San Diego, CA (US); Ahmed Kamel Sadek, San Diego, CA (US); Makesh Pravin John Wilson, San Diego, CA (US); Volodimir Slobodyanyuk, San Diego, CA (US); Shantanu Chaisson Sanyal, San Diego, CA (US); and Michael John Hamilton, San Diego, CA (US)
Assigned to QUALCOMM Incorporated, San Diego, CA (US)
Filed by QUALCOMM Incorporated, San Diego, CA (US)
Filed on May 10, 2021, as Appl. No. 17/316,223.
Prior Publication US 2022/0357441 A1, Nov. 10, 2022
Int. Cl. G01S 13/86 (2006.01); G01S 13/89 (2006.01); G06N 20/00 (2019.01); G06T 7/11 (2017.01); G06T 7/521 (2017.01)
CPC G01S 13/867 (2013.01) [G01S 13/865 (2013.01); G01S 13/89 (2013.01); G06N 20/00 (2019.01); G06T 7/11 (2017.01); G06T 7/521 (2017.01); G06T 2207/10028 (2013.01)] 36 Claims
OG exemplary drawing
 
1. A method for processing image data, the method comprising:
obtaining a radar point cloud and one or more frames of camera data;
determining depth estimates of one or more pixels of the one or more frames of camera data;
generating a pseudo lidar point cloud using the depth estimates of the one or more pixels of the one or more frames of camera data, wherein the pseudo lidar point cloud comprises a three-dimensional representation of at least one frame of the one or more frames of camera data, and wherein generating the pseudo lidar point cloud comprises transforming the one or more frames of the camera data from camera coordinates to world coordinates; and
determining one or more object bounding boxes based on the radar point cloud and the pseudo lidar point cloud.
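The pseudo lidar generation step recited above (per-pixel depth estimates back-projected into 3D, then transformed out of camera coordinates) can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model; the intrinsics matrix `K`, the transform `T_world_from_cam`, and the function name are hypothetical and not taken from the patent.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, K, T_world_from_cam):
    """Unproject a per-pixel depth map into a 3D "pseudo lidar" point
    cloud and transform it from camera to world coordinates.

    depth:             (H, W) depth estimates, e.g. from a monocular
                       depth network, in meters
    K:                 (3, 3) pinhole camera intrinsics (assumed)
    T_world_from_cam:  (4, 4) homogeneous camera-to-world extrinsic
                       transform (assumed)
    Returns an (H*W, 3) array of world-coordinate points.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous image coordinates (u, v, 1)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)
    # Back-project: X_cam = depth * K^-1 @ [u, v, 1]^T
    pts_cam = np.linalg.inv(K) @ pix * depth.reshape(-1)
    # Camera coordinates -> world coordinates via homogeneous transform
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_world = (T_world_from_cam @ pts_h)[:3].T  # (H*W, 3)
    return pts_world
```

The resulting point cloud has the same form as a lidar scan, so it can be concatenated or otherwise fused with the radar point cloud before a 3D detector produces the object bounding boxes of the final step.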