More Photogrammetry with Puppy, and Cleanup in Blender

Revisiting a post from a couple of months ago: one of my personal pet projects has been to create a 3D model of Puppy, the great flower-covered statue outside the Guggenheim Museum. Something I learned through that attempt, and a few others, is that you will have the best luck taking photos in diffuse light, which avoids shadows that create unintended asymmetric lighting.

I also learned from my first attempt that I needed to capture a few more angles, particularly in the “pitch” axis (as I would call it in aerospace). While I picked up the “yaw” axis by encircling Puppy, “pitch” is harder, as I can’t gather much variation in vertical angle when my specimen is a 40ish-foot-tall statue. I hypothesized that I could pick up a bit more vertical detail by varying my distance, taking close shots that are effectively “under” Puppy, and distant shots that are closer to being horizontally aligned.
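To put rough numbers on that hypothesis, the camera’s vertical angle up to the top of the statue is just the arctangent of height over distance. A quick sketch (the ~12 m height is the post’s “about 40 feet”; the 10 m and 40 m standoff distances are assumptions just for illustration):

```python
import math

def pitch_deg(height_m: float, distance_m: float) -> float:
    """Vertical angle (degrees) from the camera up to the top of the statue."""
    return math.degrees(math.atan2(height_m, distance_m))

height = 12.0            # Puppy is roughly 40 ft (~12 m) tall
close, far = 10.0, 40.0  # assumed standoff distances for the two laps

steep_view = pitch_deg(height, close)    # close lap: looking well "under" Puppy
shallow_view = pitch_deg(height, far)    # distant lap: nearly horizontal view
```

So the two laps plausibly span a few tens of degrees of pitch, which is the variation the close/far strategy is after.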

With that in mind, I took to the streets on March 5, a fairly cloudy day in Bilbao, and gathered a new set of 27 photos for my second attempt. This time, I did two laps during my photo taking: one roughly as distant as I could get in the plaza where Puppy is situated, and one as close as I could get while still keeping the entire statue in the photo frame. For those who are interested in the photos and my current iteration of the script incorporating Apple’s RealityKit, I have uploaded these to Google Drive. The script is also on GitHub, though be aware that the file paths and some other settings would need modification for your own use.

So how did it work out? Well, you can see the initial result, the “raw” output from the photogrammetry tool, in the three images below: examples of what you see if you open the full-resolution .usdz file right inside Xcode. The first two show the left and right sides of Puppy, and look to be nearly photo quality from those angles. That is to be expected, of course, as there are literally photos in the dataset from almost those same angles. I am happy to see that the underside of Puppy is better captured in a geometric sense, without some of the gaps that I saw in my first attempt. I attribute that to the better lighting and the “two lap” approach I took in gathering the photos.

The third image is where things start to get interesting. As I mentioned, Puppy is about 40 feet tall, and surrounded by a flat urban plaza, so there is no good way to get a photo from above. This means when you rotate the model to look at the top of his head, you see … a blank spot. This spurred me to start scheming over the past few weeks just how I would obtain a photo to fill in the blanks. Hire a teenager with a drone? Make friends with one of the rich people with a balcony across the street?

I may come back to one of those ideas at some point in the future (and, if you know anybody across the street from the Guggenheim who would let me hang out on their balcony, please do let me know). For the time being, however, I have opted to do it the old-fashioned way: digital fakery in Blender. I was aided in this endeavor by a great blog post on Sketchfab on cleaning up a 3D scan, which I applied with great success to Puppy.

The next three images show a few of the first steps I applied in this case (follow the Sketchfab link if you want more detail on how this is done). The first image shows the cleanup of floating geometry: artifacts of the photogrammetry process, nodes that end up detached from the main model. Next, I rotated the model from the position shown in the second image, such that its base aligned with the horizontal plane. Then, I used the bisect tool to remove the portion of Puppy’s pedestal that was retained, resulting in the third image.
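Conceptually, that floating-geometry cleanup amounts to finding the connected pieces of the mesh and discarding everything that isn’t attached to the main body (in Blender itself this is just selecting loose geometry and deleting it). A toy sketch in plain Python, with made-up vertex/edge data standing in for a real mesh:

```python
from collections import defaultdict, deque

def largest_component(n_verts, edges):
    """Return the set of vertex indices in the biggest connected piece."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, best = set(), set()
    for start in range(n_verts):
        if start in seen:
            continue
        # flood-fill one connected component
        comp, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in comp:
                    comp.add(w)
                    queue.append(w)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

# main body (vertices 0-3, all connected) plus a detached floating triangle (4-6)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 4)]
keep = largest_component(7, edges)
drop = set(range(7)) - keep  # the floating artifact to delete
```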

Next is where I really started learning something new about Blender: using the Clone Tool to fill in gaps in the texture. Referring back to the last couple of galleries, you can see the complete lack of texture data on the top of Puppy’s head, and even where it’s not blank, you see smearing of the texture due to the very shallow angle of the camera relative to the surface. You can, however, see that in the front/right section of his face there are orange flowers, and elsewhere you pick up purples, pinks, and whites.

Conveniently, while we don’t have good resolution of the flower plantings on the top of Puppy’s head, we do know that the same orange, purple, pink, and white flower textures can be found at numerous other locations on his body. The Clone Tool is a sort of 3D copy/paste, letting us digitally re-plant those flowers to fill in the gaps and add fidelity to the smeared sections. I used the Clone Tool from the Texture Paint panel in Blender: Control-click on a section of the model that has “good” texture, then click on the section where you want that texture inserted. The first image shows the Texture Paint panel at the beginning of this process, the second shows the right section of Puppy’s face after “planting” some fresh orange flowers, and the final image shows the result of my efforts.
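Under the hood, cloning is just copying a patch of texels from a source region of the texture image to a destination region. A minimal 2D sketch of that copy/paste in plain Python (the tiny character-grid “texture” is invented purely for illustration):

```python
def clone_patch(texture, src, dst, size):
    """Copy a size x size patch of texels from src (row, col) to dst (row, col)."""
    sr, sc = src
    dr, dc = dst
    for i in range(size):
        for j in range(size):
            texture[dr + i][dc + j] = texture[sr + i][sc + j]
    return texture

# 4x4 toy texture: 'O' = good orange-flower texels, '.' = blank (top of the head)
tex = [list(row) for row in ["OO..", "OO..", "....", "...."]]
clone_patch(tex, src=(0, 0), dst=(2, 2), size=2)  # re-plant the flowers
```

Blender’s brush adds falloff, blending, and the 3D-to-UV mapping on top, but the copy itself is this simple.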

Since I deleted the section of the pedestal that came with the raw photogrammetry model, I wanted to recreate a more complete base for the model. The gallery below shows part of that process. I won’t go over every step in detail, as it’s a work in progress. A few things I did, as seen in the gallery, were to flip Puppy over to view his underside, create a plane, then subdivide and reposition nodes to encompass the outline of his torso and feet. I then performed various extrusion and scaling steps to reach the current shape. I’m still working on finding a set of textures that are to my liking, but maybe that’s a post for another day.

With all that done, it was time to start generating high resolution renders in Blender. First off, I created a render with the sky and a horizon in the background with the Dynamic Sky plugin.

Changing gears, my next idea was to produce a render with the Ikurriña (flag of the Basque Country) in the background. What you see below is five separate plane sections, with white, green, and red materials, at varying elevations so they read as the flag.

To add depth to the render, I created a lighting scheme with seven spotlights: three near the statue to light its base, three distant from the statue in a three-point arrangement illuminating the flag, and one close to Puppy’s face.
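That seven-light setup is easy to describe as data. A sketch of the arrangement with made-up positions and powers (none of these numbers come from my actual scene), plus a simple inverse-square sum to reason about how much each group contributes at a point:

```python
import math

# Assumed positions (meters) and powers (watts), sketching the seven-light setup:
# three low lights near the base, three distant three-point lights, one on the face.
lights = (
    [{"pos": (x, y, 0.5), "power": 100.0} for x, y in [(2, 0), (-1, 2), (-1, -2)]]
    + [{"pos": (x, y, 8.0), "power": 400.0} for x, y in [(15, 0), (-12, 10), (-12, -10)]]
    + [{"pos": (0, 3, 11.0), "power": 150.0}]  # fill light near Puppy's face
)

def irradiance(point, lights):
    """Sum inverse-square falloff contributions from every light at a point."""
    total = 0.0
    for light in lights:
        d2 = sum((p - q) ** 2 for p, q in zip(point, light["pos"]))
        total += light["power"] / (4 * math.pi * d2)
    return total

face_brightness = irradiance((0.0, 2.0, 10.5), lights)
base_brightness = irradiance((0.0, 0.0, 0.5), lights)
```

The point of the dedicated face light is exactly what the falloff math suggests: the distant three-point lights alone leave the face comparatively dim.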

Summing this post up, I learned from my first Puppy photogrammetry attempt to capture more angles, near and far, and to capture in better lighting conditions. To improve on the generated model, I cleaned up some extraneous nodes in Blender, and used the Clone Tool to digitally “plant” flowers on sections of Puppy that I couldn’t photograph. Finally, I added the Basque flag to my scene, and refined my lighting, to reach my final render. I’m going to use the latter as my LinkedIn cover.

I’ll leave you with a few fun shots from one of my favorite features of Apple’s RealityKit, which is being able to take my model with me around town. Here are a couple of shots where Puppy enjoyed a sunset with me at Parque Etxebarria, and one we took while crossing Udaletxeko Zubia.
