LiDAR sensors mounted on aircraft or drones are becoming widely used for mapping, and they are often perceived as competing with standard cameras. In fact, the opposite is true: the two technologies complement each other by capturing data of a different nature, and merging them yields richer information about the terrain. While LiDAR describes the topography at high resolution, imagery captures the appearance of ground features. Integrating the two sources therefore usually means using the images to colorize the LiDAR point cloud; it can also mean using the LiDAR as elevation data to generate orthophotos.
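As a concrete illustration of the colorization step, here is a minimal NumPy sketch that projects LiDAR points into a single image with a pinhole camera model and samples a color for each point. The function name `colorize_points` and the camera parameters are assumptions for this example, not part of any specific product's API; a real pipeline would also handle lens distortion, occlusion, and blending across overlapping images.

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Assign an RGB color to each 3-D LiDAR point by projecting it
    into an oriented image with a simple pinhole camera model.

    points : (N, 3) array of points in the world frame
    image  : (H, W, 3) array of RGB values
    K      : (3, 3) camera intrinsic matrix
    R, t   : rotation (3, 3) and translation (3,) from world to camera
    """
    # Transform points into the camera frame, then project to pixels.
    cam = points @ R.T + t                 # (N, 3) camera-frame coordinates
    uvw = cam @ K.T                        # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)

    h, w = image.shape[:2]
    # Keep only points in front of the camera that land inside the image.
    ok = (cam[:, 2] > 0) & (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[ok] = image[rows[ok], cols[ok]]
    return colors, ok
```

Points that fall outside the image (or behind the camera) are flagged so that a later pass over other overlapping images can fill them in.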
The main challenge in integrating LiDAR with imagery is achieving accurate co-registration: both datasets must be aligned precisely. The traditional approach is to use ground control points (GCPs) to ensure absolute accuracy in each dataset separately, so that the same ground features line up when the data are subsequently fused. A major drawback of that method is that the GCPs must be tagged manually twice, first in the LiDAR and then in the imagery.
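To make the control-point idea concrete, the sketch below estimates the least-squares rigid transform that maps one set of matched control points onto another, using the SVD-based Horn/Umeyama method. This is an illustrative alignment primitive, not the workflow of any particular software; production co-registration additionally weights points, rejects outliers, and may solve for scale and datum shifts.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    e.g. aligning a dataset to surveyed ground control points.
    SVD-based Horn/Umeyama solution, without a scale factor."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Given at least three well-distributed, non-collinear GCPs, this recovers the rotation and translation exactly in the noise-free case and in the least-squares sense otherwise.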
A significantly more efficient method removes the need to control the imagery with GCPs. Instead, the LiDAR data serves directly as the reference, and the imagery is registered automatically through bundle adjustment. Tie points are first created by matching corresponding spatial features in the imagery to the LiDAR intensity map. Overall, this approach makes it possible to quickly combine LiDAR with data from camera systems and produce high-quality outputs.
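The tie-point creation step can be sketched with a simple template-matching routine: locate a small image patch in the LiDAR intensity raster by exhaustive normalized cross-correlation (NCC). The function `match_patch` is a hypothetical name for this example; real systems use far faster, more robust feature matchers, and the resulting correspondences then feed the bundle adjustment as observations.

```python
import numpy as np

def match_patch(template, intensity):
    """Find the position of a small image patch inside a LiDAR intensity
    raster by exhaustive normalized cross-correlation (NCC).
    Returns ((row, col), score) of the best-scoring window."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_rc = -np.inf, (0, 0)
    H, W = intensity.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            win = intensity[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * t_norm
            if denom == 0:           # flat window: correlation undefined
                continue
            score = float((t * w).sum() / denom)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
```

An NCC score near 1.0 indicates a confident match; low-scoring candidates are discarded before the tie point enters the adjustment.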