Joe Wilkins | Software Engineer

Photogrammetry with Reality Capture

[Image and video carousel]

In April 2024, Epic Games made their photogrammetry tool, Reality Capture, free for developers making less than $1 million in gross revenue. It's an incredible tool that lets you effectively scan real-life objects by taking lots of photographs with a digital camera, generating highly detailed models for use in a variety of software applications, including game engines such as Unreal and Unity.

Process

During some downtime between contracts, I decided to give photogrammetry a try. My main aims for the test were to understand how best to photograph a subject in terms of number of shots, lighting, angles and detail, and to explore the technique's potential. I took my camera out to a nearby woodland where I knew there was a derelict pillbox that I thought would make a good test subject. This structure has good clearance all around, which would let me cover every angle at a range of distances, and between the concrete construction, graffiti and foliage it presented an interesting mix of textures and surfaces that I thought would help demonstrate Reality Capture's capabilities.

On a slightly overcast afternoon, I made my way out to the pillbox and captured around 720 images using a mirrorless digital camera with a 28mm lens. As an initial configuration to build on, I set the shutter speed to 1/60 s (a denominator roughly double the focal length, a handy rule of thumb for reducing motion blur when shooting handheld), the aperture to ƒ/8 and the ISO to 400. The location was fairly open, so there was a good level of light available, and although it was an autumn afternoon and the light was fading, the images came out fairly well exposed for the most part.
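The shutter-speed rule of thumb can be sketched as a quick calculation. The 2× multiplier is just the conservative figure used here, not a fixed standard:

```python
# Handheld shutter-speed rule of thumb: the shutter denominator should be
# at least roughly double the focal length (in mm) to reduce motion blur.
# The 2x multiplier is an assumption; photographers vary it to taste.

def min_shutter_denominator(focal_length_mm: float, multiplier: float = 2.0) -> int:
    """Return the slowest recommended shutter denominator, e.g. 56 -> 1/56 s."""
    return round(focal_length_mm * multiplier)

print(min_shutter_denominator(28))  # 28mm lens -> 56, so 1/60 s is a safe pick
```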

Once I'd captured what I thought would be a decent number of images, with plenty of overlap between each pass, I headed back to begin the alignment process. After some experimentation with the tools and features within Reality Capture, I ended up with the model shown in the images and video above.
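As a rough sketch of why the shot count climbs so quickly, here's a back-of-the-envelope estimate of how many shots a single circular pass around a subject needs for a given frame-to-frame overlap. The 80% overlap target and the full-frame 36mm sensor width are assumptions for illustration, not measured values from this shoot:

```python
import math

# Rough estimate of shots per circular pass around a subject. Assumes the
# camera orbits the subject and advances by the non-overlapping fraction of
# its horizontal field of view between frames (an approximation).

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal field of view of a rectilinear lens, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def shots_per_orbit(focal_length_mm: float, sensor_width_mm: float = 36.0,
                    overlap: float = 0.8) -> int:
    """Shots needed for one full circle at the given frame-to-frame overlap."""
    fov = horizontal_fov_deg(focal_length_mm, sensor_width_mm)
    step = fov * (1 - overlap)  # angular advance between consecutive frames
    return math.ceil(360 / step)

print(shots_per_orbit(28))  # ~28 shots per ring at 80% overlap with a 28mm lens
```

A few rings at different heights and distances, plus close-up detail passes, quickly add up to hundreds of images, consistent with the roughly 720 captured here.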

Taking the process further, I exported the model from Reality Capture, imported it into Unreal Engine and created several levels of detail (LODs), so that progressively simpler versions of the model are shown as you move towards and away from it, improving the model's scalability within an interactive piece.
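The idea behind LOD switching can be sketched as picking a simpler mesh once the camera passes a distance threshold. The thresholds and names below are purely illustrative (Unreal actually selects LODs by projected screen size rather than raw distance):

```python
# Minimal conceptual sketch of LOD selection: further away -> simpler mesh.
# Thresholds (in metres) and mesh names here are illustrative assumptions.

LODS = [
    (0.0,  "LOD0_full_detail"),  # closest: full-resolution scan
    (15.0, "LOD1_reduced"),      # mid-range: decimated mesh
    (40.0, "LOD2_low_poly"),     # far: lowest-poly version
]

def select_lod(distance_m: float) -> str:
    """Return the last LOD whose distance threshold the camera has passed."""
    chosen = LODS[0][1]
    for threshold, name in LODS:
        if distance_m >= threshold:
            chosen = name
    return chosen

print(select_lod(5.0))   # LOD0_full_detail
print(select_lod(25.0))  # LOD1_reduced
```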

Overall, I found the process really interesting and enjoyable to work on, and I'm now thinking about how it could be applied to future projects. One use case that particularly interests me is using Reality Capture scans as a method for archiving historic structures: high-resolution models of objects and buildings could be viewed and studied in detail, virtually, without needing to reach them physically.

Below are some of the key takeaways and snags that I came across in the process.

Takeaways

  • Reality Capture generates a huge amount of cache data, so I frequently had to clear unused files and cache data to avoid running out of disk space while processing models. RAM was sometimes an issue too: even with a fairly decent 64GB, I occasionally had to restart my machine to free memory whilst trying different things out.
  • I took over 700 shots and, whilst this produced a decent model, it is clear where I captured more overlap: the texture and fidelity of the model are far greater in those areas. This is most visible around the small portholes, where I tried to cover more detail, particularly where they contained objects such as logs or discarded fireworks. Conversely, I didn't capture enough data around the foliage at the base of the pillbox, and a small area of ivy and brambles on one side of the structure isn't very well resolved in the model.
  • In hindsight, I definitely should have taken a drone to capture the top surfaces of the pillbox, as it was a little higher than I could reach. Failing that, an extended tripod held overhead with a shutter remote might have helped me reach the higher areas.
  • As mentioned, I went out a little too late in the day for reliable and consistent lighting, so in future I'd aim to head out earlier to ensure I have enough time to capture data. I can also see how weather and time of year would have a significant impact on scan quality, and I'll definitely bear this in mind in future.
  • There are very apparent glitches in the model on top of the pillbox, most likely due to poor data coverage there. These could easily be removed using 3D modelling software such as Blender, but my main aim in this example was to focus on the data capture and alignment process. The key point the glitches illustrate, though, is that the quality of the initial data capture is absolutely key to a high-quality end product, and this is true of many situations!
