What is ‘occlusion’ in Augmented Reality?

Occlusion came up when we announced the release of our SDK. In this post, we examine occlusion in augmented reality in more detail and show how the SDK can help your application handle it better. The term “occlusion” describes when one object blocks another from a particular vantage point. Occlusion in AR is crucial for immersive experiences: virtual items should only be visible when there are no real-world obstructions between them and the camera. Unfortunately, today’s mobile AR applications largely lack this crucial capability.

Occlusion is one of the biggest and trickiest components of the augmented reality puzzle. Put simply, it is the ability to conceal virtual items behind physical ones. This article explains why occlusion in augmented reality is so challenging and why deep learning may hold the key to a future solution.

When you observe a world, real or virtual, you tend to accept its “rules” and suspend disbelief as long as it adheres to some fundamental principles of reality, such as gravity, lighting, and shadows. When these principles are violated, something “doesn’t look right,” and you notice it immediately. That is why people grimace at subpar special effects in movies.

What is occlusion in AR?

To understand occlusion, we must first consider which aspects of your space are “captured” by your smartphone. First, there are the surfaces that AR items can be attached to or placed on: floors, tabletops, ceilings, and walls. How accurately these are measured depends on the quality of each smartphone’s sensors. The lighting conditions in the real room are also captured, so that the reflections on the virtual AR object match those on a real object in the same room. This is crucial for a realistic and natural appearance.

The surfaces and the light are, in a sense, the fundamental prerequisites for using augmented reality. Today, however, if a person passes between the camera and the virtual AR object, the virtual object still covers the person, because the virtual content is simply drawn on top of the camera image. This should not be the case: a real object in front of a virtual one should hide it, not the other way around. When this happens, the illusion breaks, and the viewer’s sense of AR immersion is greatly diminished.

Ways to implement occlusion in apps

Applications can implement occlusion using a variety of techniques, often in combination. Because this is a relatively new capability, producing realistic occlusion is not simple.

1. Depth map:

The depth map is a technique used in computer graphics and 3D rendering to record how far away visual elements are from the user’s point of view. It is a remedy for the “visibility problem.” A depth map can be produced from the views of multiple cameras or obtained directly from a dedicated depth camera. At least two cameras are generally required, since relying on a single one results in significant inaccuracies and low-resolution depth images. Once the map is obtained, the distances must be applied to the 3D scene, which makes it possible to determine which objects will be blocked. Depth maps also serve as the foundation for more sophisticated techniques.
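The depth-comparison idea above can be sketched in a few lines. This is a minimal, hypothetical example (the depth values and the `no_data` convention are made up for illustration, not taken from any real sensor): a virtual pixel is drawn only where the virtual object is closer to the camera than the nearest real surface.

```python
def occlusion_mask(real_depth, virtual_depth, no_data=0.0):
    """Return True where the virtual pixel should be drawn.

    real_depth    -- 2D list of distances (meters) to the nearest real surface
    virtual_depth -- 2D list of distances to the virtual object
                     (float('inf') where the object covers no pixel)
    no_data       -- sensor value meaning "no depth reading"; draw by default
    """
    mask = []
    for r_row, v_row in zip(real_depth, virtual_depth):
        mask.append([
            v < r or r == no_data   # virtual object is closer, or no reading
            for r, v in zip(r_row, v_row)
        ])
    return mask

real = [[2.0, 2.0],
        [0.5, 0.0]]          # 0.5 m: a hand in front; 0.0: no reading
virt = [[1.0, float('inf')],
        [1.0, 1.0]]          # virtual object sits 1 m from the camera

print(occlusion_mask(real, virt))
# [[True, False], [False, True]]
```

Note the pixel where the hand (0.5 m) sits in front of the virtual object (1 m): the mask is False there, so the real hand correctly covers the virtual content.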

2. 3D Reconstruction:

3D reconstruction is the process of rebuilding a scene or object in 3D from 2D photos or other data sources. Its goal is to produce a virtual representation of an object or scene that can be used for many tasks, including visualization, animation, simulation, and analysis. It is applied in areas such as robotics, virtual reality, and computer vision.
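A common first step in such pipelines is back-projecting depth pixels into 3D points using the pinhole camera model. The sketch below assumes hypothetical camera intrinsics (`fx`, `fy`, `cx`, `cy`) chosen purely for illustration:

```python
def backproject(depth, fx, fy, cx, cy):
    """Turn a 2D depth map into a list of (X, Y, Z) camera-space points."""
    points = []
    for v, row in enumerate(depth):          # v: pixel row
        for u, z in enumerate(row):          # u: pixel column, z: depth
            if z <= 0:                       # skip missing readings
                continue
            x = (u - cx) * z / fx            # pinhole model: u = fx*X/Z + cx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

depth = [[0.0, 2.0],
         [2.0, 2.0]]
cloud = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud)
# [(1.0, -1.0, 2.0), (-1.0, 1.0, 2.0), (1.0, 1.0, 2.0)]
```

The resulting point cloud is the raw material that surface-reconstruction and meshing algorithms then turn into a usable 3D model of the room.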

3. Photogrammetry:

Photogrammetry is the scientific and technical field of determining the shape, size, location, and other attributes of objects from camera photographs. The technique is used to build three-dimensional spaces and objects in video games and the film industry. In practice, it is often worthwhile to model only a space’s primary components by hand; the rest can be captured by 3D scanning. Photogrammetry is also effective for texturing.
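The core geometric relation behind stereo photogrammetry can be shown with a tiny sketch. For two parallel cameras separated by a baseline B, with focal length f (in pixels), a feature seen at pixel disparity d between the two images lies at depth Z = f · B / d. The numbers below are illustrative only:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (meters) of a point seen by two parallel cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A feature seen 50 px apart by cameras 0.1 m apart, f = 1000 px:
print(depth_from_disparity(1000.0, 0.1, 50.0))  # 2.0 meters
```

Repeating this measurement for many matched features across many photos is, in essence, how photogrammetry recovers the shape and size of a scene.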

How does occlusion in AR work?

When designing AR scenes, occlusion aims to maintain the laws of line-of-sight. This means that any virtual object that is hidden or “occluded” by a real object should appear that way on screen. How is this carried out in AR? Using some understanding of the 3D structure of the real world, we can stop particular areas of the virtual image from rendering on the screen.

Three key tasks are involved in doing this:

  • Detecting the 3D geometry of the physical world.
  • Creating a world model in 3D using digital technology.
  • Creating an invisible mask version of the model that conceals virtual items behind it.
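The three tasks above can be sketched as a per-pixel compositing rule: keep the camera pixel wherever the real world is closer, otherwise draw the virtual pixel. This is a hypothetical illustration; the color names and depth values are invented, and a real renderer would do this on the GPU:

```python
def composite(camera_px, real_depth, virtual_px, virtual_depth):
    """Blend one row of pixels; *_px are color labels, depths in meters."""
    out = []
    for cam, rd, vir, vd in zip(camera_px, real_depth,
                                virtual_px, virtual_depth):
        if vir is not None and vd < rd:   # virtual surface is in front
            out.append(vir)
        else:                             # real world occludes (or no virtual)
            out.append(cam)
    return out

camera  = ["wall", "hand", "wall"]
r_depth = [3.0, 0.4, 3.0]                 # hand is 0.4 m from the camera
virtual = ["cube", "cube", None]          # virtual cube covers two pixels
v_depth = [1.0, 1.0, float("inf")]

print(composite(camera, r_depth, virtual, v_depth))
# ['cube', 'hand', 'wall']
```

The middle pixel shows the point of occlusion: the hand at 0.4 m correctly hides the virtual cube at 1.0 m, instead of the cube being drawn on top of everything.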


Occlusion is currently an unreliable technology. It requires sophisticated hardware, developer skill, and further refinement. Issues such as occlusion by hands, insufficient lighting, and shadows remain unresolved. Even so, it is a significant step toward a user experience that is both thrilling and realistic, and it offers many possibilities for research and for developing fresh application strategies.