Earlier this month I attended the Collision conference in New Orleans, Louisiana. For those not familiar with the event, it’s a massive meetup where hundreds of start-up companies have the opportunity to pitch their ideas to dozens of venture capitalists and thousands of other attendees.
Artificial Intelligence was, not surprisingly, one of the hottest topics of conversation, perhaps second only to blockchain technology. In an environment such as Collision, where 15-minute pitches are given in open spaces and deeply technical conversations happen in real time on the show floor, it was easy to eavesdrop on some of these discussions. During my three days there, I heard companies talking about dozens, if not hundreds, of use cases for AI.
In the end, there were so many AI use cases that the term “Artificial Intelligence” was almost devoid of tangible meaning. The term itself, now over 60 years old, is a bit of a misnomer: there is nothing “artificial” about AI. It is the result of programs and algorithms designed by real human beings to make machines act as human as possible. Some of these programs are designed to let the machine learn from its own behavior, hence the term “machine learning.” But I digress…
It struck me that many of the AI use cases discussed at Collision were *not* about making machines act more human. In fact, AI was largely discussed in the context of making computers more efficient and effective at tasks they are already quite good at: intelligent security and authentication, audio and video recognition, home automation, and real-time custom UX design, to name a few.
An area that my own company, Umbra, has started to look at with increasing interest is how AI can be used to improve the quality of complex visualizations. The first of two key use cases relevant to our core business of optimizing massive 3D data sets is using AI to improve the quality of point cloud data. For those unfamiliar with the concept, a number of scanning technologies, including aerial or satellite photography, lidar, radar, and laser scanning, can be used to recreate 3D representations of objects or geographic areas. These captures are converted into individual data points – millions or even billions of them – that together form a visualization that looks a bit like a Seurat painting in 3D.
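For readers who like to see the idea in code, a point cloud is, at its simplest, just a big array of XYZ coordinates, often with a color per point. A minimal sketch (illustrative only, not Umbra’s actual data format):

```python
import numpy as np

# A point cloud is essentially an N x 3 array of XYZ coordinates,
# often paired with an N x 3 array of RGB colors, one per point.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(100_000, 3))              # 100k sample points
colors = rng.integers(0, 256, size=(100_000, 3), dtype=np.uint8)  # RGB per point

# Real scans reach billions of points, which is why streaming and
# optimization of these data sets matter in the first place.
print(points.shape, colors.shape)  # (100000, 3) (100000, 3)
```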
All scanned data images property of Umbra © 2018
As one could imagine, some of these technologies work better than others depending on the subject being scanned. For instance, while aerial photography taken from planes or drones can do a fair job of mapping very large areas in a relatively short period of time, the resulting data, once converted to a point cloud, can be somewhat inaccurate in both location precision and visual fidelity. Sharp corners and straight walls can look as though they are warped or melting due to the restricted angles at which the aerial cameras can photograph the surfaces. In some cases, entire areas can be missed altogether, leaving gaping holes in the landscape.
This is where AI can have a huge impact on the quality of the scanned data, whether in point cloud form or in 3D geometry. Through the use of computer vision (AI applied to visual data), computers can intelligently interpret each point in a scan, or triangle in a model, and understand its relationship to all the other points or triangles in the dataset. This gives us the ability to automatically fill in the blank spots, straighten out warped walls, and even understand depth more effectively to recreate a more realistic relationship between substrates and the objects that sit on top of them. Ultimately, AI paired with improved scanning techniques will not only allow us to see a car’s license plate from space, but even render the ridges of that license plate’s letters and the reflections of its paint.
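As a toy illustration of the “fill in the blank spots” idea (a deliberately simple stand-in, not Umbra’s pipeline, which would use learned models rather than plain interpolation), missing cells in a rasterized height map can be recovered from their valid neighbors:

```python
import numpy as np

def fill_holes(height, iterations=50):
    """Fill NaN cells of a 2D height map by averaging their valid 4-neighbors."""
    h = height.copy()
    for _ in range(iterations):
        if not np.isnan(h).any():
            break  # no holes left
        padded = np.pad(h, 1, constant_values=np.nan)
        # Stack the four neighbors (up, down, left, right) of every cell.
        neighbors = np.stack([
            padded[:-2, 1:-1], padded[2:, 1:-1],
            padded[1:-1, :-2], padded[1:-1, 2:],
        ])
        estimate = np.nanmean(neighbors, axis=0)  # nanmean ignores NaN neighbors
        hole = np.isnan(h)
        h[hole] = estimate[hole]  # only holes are overwritten
    return h

grid = np.full((5, 5), 2.0)
grid[2, 2] = np.nan           # a missed scan return
filled = fill_holes(grid)
print(filled[2, 2])           # 2.0, recovered from its neighbors
```

Real hole filling also has to respect edges and depth discontinuities, which is exactly where learned models outperform naive averaging like this.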
Coming back down to earth (or virtual earth, as it may be), AI also has great potential in the augmented reality space, particularly in enterprise use cases. Umbra provides technology that allows the construction industry to visualize 3D models by overlaying them directly on the jobsite. One interesting feature we continue to improve is anchoring a virtual model to a geographic location. Without a significant number of physical AR triggers, hardware sensors, and software that can infer where a user is (even when location-based services are unavailable), it is extremely challenging to keep a virtual model from drifting as the user walks around the environment.
AI could play a huge role in solving this issue once and for all, combining the ability to scan an environment in real time, recognize a specific location from that data, and make real-time adjustments to both location information and the associated visualization. By intelligently combining traditional computer vision, modern scanning, and real-time 3D AR rendering, traditional location-based technology could be replaced with a far more accurate AI-driven solution for industries that require fine-grained precision when overlaying augmented 3D models on top of the real world.
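One building block of such a re-anchoring step can be sketched concretely: once the system recognizes known anchor points in a fresh scan, it needs the rigid rotation and translation that snaps the drifted view back onto the reference positions. A minimal sketch using the classic Kabsch algorithm (the data and function names are illustrative, not Umbra’s API):

```python
import numpy as np

def rigid_align(scanned, reference):
    """Best-fit rotation R and translation t mapping scanned points onto
    reference points (Kabsch algorithm via SVD), e.g. to correct AR drift."""
    cs = scanned.mean(axis=0)
    cr = reference.mean(axis=0)
    H = (scanned - cs).T @ (reference - cr)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cs
    return R, t

# Known anchor positions on the jobsite vs. where the headset currently
# sees them after drift (rotated 5 degrees and shifted).
reference = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.deg2rad(5.0)
drift_R = np.array([[np.cos(theta), -np.sin(theta), 0.],
                    [np.sin(theta),  np.cos(theta), 0.],
                    [0., 0., 1.]])
scanned = reference @ drift_R.T + np.array([0.2, -0.1, 0.05])

R, t = rigid_align(scanned, reference)
corrected = scanned @ R.T + t     # apply the correction to every point
print(np.allclose(corrected, reference))  # True
```

In practice the correspondences come from recognizing features in the live scan rather than being given, and the transform is re-estimated continuously as the user moves, but the geometric core is this alignment step.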
We’re just beginning to scratch the surface of where AI can help improve existing technologies, even in Umbra’s niche of 3D optimization and visualization. You’ll see more thoughts from us in the coming weeks and months about additional use cases in our space that have become relevant and impactful to our customers. Stay tuned!
Trial our Composit platform for free and optimize your own design files. Learn more about complex 3D visualizations and see our latest point cloud model here.