A Scalable Pipeline for Processing Massive Point Clouds


Visualizing and processing point clouds is no easy task. Point clouds, sets of data points in a coordinate system representing the external surface of an object, can be collected with a laser scanner (LiDAR), generated by a photogrammetry pipeline, or produced by some similar method of 3D capture. These data sets are typically very large in order to capture sufficient detail of the 3D model, in many cases up to terabytes in size. Rather than visualizing the points themselves, it is often more interesting to visualize the object or the surface the points represent. As such, point clouds are typically an intermediate format, transformed into another kind of representation that yields more visually appealing and otherwise more convenient results. However, there are situations where it is useful to visualize the raw point cloud data, such as assessing the quality of a 3D scan before any transformation into another format.

Direct visualization of point clouds is not exactly a trivial matter, especially when the data sets are measured in terabytes. First, the data needs a significant amount of storage, and the rendering application needs access to it. In practice, storing data sets this vast locally is an option only if you sacrifice mobility, so the data has to be streamed dynamically onto the device, or at least from disk into memory and onto the GPU. The application also has to employ some form of level-of-detail (LOD) mechanism, because generally not all the points inside the viewport can be streamed in, let alone rendered. Furthermore, due to measurement error, point cloud data is typically somewhat noisy, leaving unwanted bumps, cracks and holes in the scan; some form of filtering on the input data is necessary to improve the visual quality. Finally, you’re still rendering points instead of solid surfaces, which for many intents and purposes is not ideal, from either a visual or an interaction standpoint.
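As a rough illustration of the filtering and LOD steps, here is a minimal sketch using the open-source Open3D library; the input file name, the outlier-filter thresholds and the voxel sizes are placeholder assumptions, not values from any particular pipeline:

```python
# Sketch: outlier filtering and naive LOD levels with Open3D.
# File name and all parameter values are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # hypothetical input scan

# Drop points whose mean distance to their 20 nearest neighbors deviates
# by more than 2 standard deviations from the global average.
filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Build coarse-to-fine LOD levels by voxel downsampling; a renderer could
# stream the coarsest level first and refine as bandwidth allows.
lod_levels = [filtered.voxel_down_sample(voxel_size=v) for v in (0.64, 0.16, 0.04)]
for level, cloud in enumerate(lod_levels):
    print(f"LOD {level}: {len(cloud.points)} points")
```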

Figure 1 - Top left: raw point cloud data; top right: unoptimized transformation into a triangle mesh; bottom left: optimized triangulation; bottom right: fine surface detail packed into normal maps

On the other hand, you typically wish to further transform your points into something more easily renderable and perhaps more visually appealing or otherwise more convenient. An obvious and common choice is to reconstruct a 3D polygon mesh from the point cloud data. Needless to say, when data sets are measured in terabytes, the reconstruction requires heavy processing power and copious amounts of memory. The algorithm suffers from essentially the same limitations described above for rendering, so forget mobile devices. In practice, this kind of reconstruction of any non-trivial point cloud has to be done in the cloud, where you can scale up the processing power based on the complexity of the input data and the desired resolution of the output. But even then, the reconstructed model will be either very low in detail, or too complex to be efficiently rendered or even transmitted back to the device. Many photogrammetry solutions solve the first part of the pipeline by transmitting the individual images into the cloud and creating the point cloud (and the subsequent triangle mesh) there, but even in that case, the problematic size of adequately detailed output data remains.
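To give a concrete flavor of what such a reconstruction step involves, here is a minimal sketch using Open3D’s Poisson surface reconstruction followed by decimation to a render-friendly triangle budget; the file names and all parameter values are illustrative assumptions, not a description of any specific product:

```python
# Sketch: point cloud -> triangle mesh, then decimate for real-time rendering.
# File names and parameter values are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_filtered.ply")  # hypothetical input

# Poisson reconstruction needs oriented normals on the input points.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Fit an implicit surface to the points and extract a triangle mesh from it.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Reduce the triangle count to something a GPU can render in real time;
# in a full pipeline, detail lost here would be baked into normal maps.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=200_000)
o3d.io.write_triangle_mesh("reconstruction.obj", mesh)
```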

What would an ideal pipeline look like, then? For starters, it would be great if you didn’t have to store the point clouds at all: when scanning, you should be able to stream your points directly off the device into a location with effectively unlimited storage and convenient access for further processing in the cloud. The pipeline would then take those points, or rather chunks of points that can be processed in parallel for scalable processing and fast turnaround times, and reconstruct a triangle mesh from them. A key requirement for the reconstruction algorithm would of course be to retain as much of the detail of the input data as possible. Specifically, the reconstruction should be optimized for real-time rendering on the target GPUs by keeping the triangle count relatively low, while still packing surface detail into normal maps, for instance. The algorithm would also apply filtering and attempt to patch holes and gaps in the input, yielding a visually appealing and topologically robust reconstruction of the input model. The pipeline would then package the reconstructed mesh into a number of streamable units, each with varying levels of detail, decoupling the complexity of the input entirely from the fidelity of the output data. Finally, these reconstructions could be streamed back onto any device at a bandwidth chosen by the application based on the device’s capabilities, network utilization, desired visual fidelity, and so forth.
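To make the chunking idea concrete, the toy sketch below buckets points into axis-aligned grid cells and processes each cell independently in a local worker pool. The cell size, the worker count and the reconstruct_chunk stand-in are all hypothetical; a real pipeline would distribute this work across cloud machines rather than local processes:

```python
# Sketch: split a point cloud into spatial chunks and process them in parallel.
# Cell size, worker count and reconstruct_chunk are hypothetical stand-ins.
from multiprocessing import Pool

import numpy as np


def split_into_chunks(points: np.ndarray, cell_size: float) -> list[np.ndarray]:
    """Bucket an (N, 3) point array into axis-aligned grid cells."""
    cells = np.floor(points / cell_size).astype(np.int64)
    # Map every point to the index of its unique cell.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    order = np.argsort(inverse)
    boundaries = np.flatnonzero(np.diff(inverse[order])) + 1
    return np.split(points[order], boundaries)


def reconstruct_chunk(chunk: np.ndarray) -> dict:
    """Stand-in for per-chunk filtering, meshing and LOD packaging."""
    return {"points_in": len(chunk)}  # placeholder result


if __name__ == "__main__":
    points = np.random.rand(1_000_000, 3) * 100.0  # stand-in for streamed input
    chunks = split_into_chunks(points, cell_size=10.0)
    with Pool(processes=8) as pool:
        results = pool.map(reconstruct_chunk, chunks)
    print(f"processed {len(chunks)} chunks independently")
```

Because each cell depends only on its own points, the same structure scales out horizontally: cells can be queued to as many machines as the size of the input demands.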

Figure 2 - Pipeline for efficient point cloud streaming

If you’re having trouble visualizing your point clouds because you don’t have a pipeline like the one described above, we should talk!

Umbra has built a platform that makes it easy to render all kinds of 3D data in real time, including point clouds, regardless of size and complexity.


Interested in seeing some actual point cloud results? Check out our post on umbrafying the San Simeon data set.
