At the end of my last post, I referenced a project I am currently collaborating on with a client, where we are using Augmented Reality (AR) to aid in performing measurements and cost estimation in the landscaping industry. We have progressed to a stage where we are building features to display, in real time, what one’s future backyard may look like after completion of the project. This phase will involve creating a database of objects (mostly plants) in Universal Scene Description (USDZ, the “Z” is for zipped) format. Since we haven’t built that database yet, I thought I’d get creative and generate a few sample plant models using photogrammetry.
Along the way, I even learned a bit about how to class up my Blender renders, adding cool photorealistic elements like PBR textures, adaptive subdivision for material displacement, image-based lighting, and depth of field. This spurred a bit of a new hobby for me, where I created a sort of still-life icon style for the plants that I scanned, which you can see below.
The sections that follow are not exactly a tutorial, as they won’t go step by step, but I hope they will provide at least the outline of how you might use these applications to create some cool 3D plant models of your own!
For those who don’t know, our current home city of Bilbao, País Vasco, is one of the more densely populated large cities in Europe. As such, big backyards, like we have in the United States, are practically if not entirely non-existent. So, I don’t exactly have a lot of realistic shrubs or other “yard-like” plants in my immediate vicinity. What we do have is a whole array of potted plants that we keep around the apartment.
Of course, these potted plants aren’t exactly what we wanted for our AR app: they’re too small, and, well, they’re in pots. So, even with a nice photogrammetry scan, there was still a bit of work to do. That’s where Blender comes in, and I’ll show how I used it to remove the pots and certain scanning or modeling artifacts, scale the models up, and export them for use in an iPhone app.
Scanning with Polycam
I’ve tried a few different photogrammetry methods and applications over the past year and a half, including:
- Taking dozens of pictures and processing them on my laptop using the RealityKit Photogrammetry API (a rough sketch of this one follows the list).
- Doing the same, but with the free Reality Composer app in object capture mode on iPhone.
- Creating Gaussian Splats with the free Niantic Scaniverse app.
- Generating models using AI from just a single image.
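
For reference, here is roughly what that first option looks like in practice: a minimal sketch of a macOS command-line tool driving the RealityKit photogrammetry API. The photo folder, output path, and detail level are placeholders, not values from my actual project.

import Foundation
import RealityKit

// Minimal sketch: reconstruct a USDZ model from a folder of photos on macOS.
// The input/output paths and detail level below are placeholders.
@main
struct PlantReconstructor {
    static func main() async throws {
        let photosFolder = URL(fileURLWithPath: "/path/to/plant-photos", isDirectory: true)
        let outputModel = URL(fileURLWithPath: "/path/to/SnakePlant.usdz")

        var configuration = PhotogrammetrySession.Configuration()
        configuration.sampleOrdering = .unordered    // photos were not captured in a strict sequence
        configuration.featureSensitivity = .normal   // bump to .high for low-texture subjects

        let session = try PhotogrammetrySession(input: photosFolder, configuration: configuration)
        try session.process(requests: [.modelFile(url: outputModel, detail: .reduced)])

        // Wait for the reconstruction to finish (or fail) before exiting.
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Model written to \(outputModel.path)")
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            default:
                break
            }
        }
    }
}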
Each of these works well, and they are great free options. So, why did I choose Polycam for what I’m writing about in the blog today? Well, I paid for it (about $150 per year), so I might as well use it. But aside from that, it has a very easy-to-use interface, and I like that it does the post-processing on the server, freeing me to use my device for other things (like scanning more plants). I also found that the plant colors were just a bit more vivid in the Polycam scans than in the Reality Composer app, and the file sizes were a bit smaller as well.
I decided to make the video below after the fact, so this isn’t one of my plants; rather, it’s just the mug that was in front of me as I was writing this. It gives you an idea of the workflow for producing a single, somewhat life-like 3D model of an object.
This isn’t the greatest model, I will acknowledge that fact; there’s a gaping hole in the right-hand side, and a few ripples in the geometry. But I wasn’t really intending to use this model for anything other than the video, so I’m fine with that. If I really wanted a good model, I’d probably do several laps around the mug to try to get >100 different photo angles to fill in all those gaps. Honestly though, it’s a smooth and shiny object, which photogrammetry never handles well; it prefers dull surfaces and sharp edges. Again though, good enough for the blog.
The screenshots here give you an idea of the Polycam process that you see in the video:






- From the library, tapping the (+) button in the lower right opens the create menu.
- Tapping the “Object” option opens the scan interface.
- The scan is started by tapping the camera button in the lower center.
- Continue to scan by moving your camera around the object, capturing as many angles and images as you have patience for (in general, the more the better, but there are some good rules of thumb: try to capture each feature point in at least three images, try to get about 45 degrees of separation between view angles of individual feature points, etc.).
- Tap the done button to move into the processing menu.
- Select photogrammetry (as I use here) or Gaussian splat, choose quality settings, and tap the bottom button to upload to the server to process.
From this point, the Polycam server takes over, and you just wait for a notification that processing has completed. In my experience, a wait time of around 20 minutes or so is fairly typical for scans where I’m uploading about 100 photos. It’s nice to not have to sit with the app open on my phone during that wait, though!
Clean-up with Blender
After creating the model in Polycam, the next steps are to export it into a format that we can subsequently import into our favorite design software (in this case, Blender), and then perform our model refinements. For the demo, I’ll use my favorite of the scans, which is one of our potted snake plants. Exporting is simple: you just tap the icon with a down arrow in the upper right, which brings up a menu for which format to select. I chose “GLTF” (not USDZ, more on that in a moment), and then tapped the export button at the bottom. This brings up a share menu, where I’ll usually tap the AirDrop option, and just like that, I’ve got the file downloaded onto my “MacBook Hair” laptop.



So why exactly would I choose GLTF, and not USDZ, given that my target app on RealityKit will require the USDZ format? Well, as I covered in my previous post on how to transfer Blender models into RealityKit, I’ve found the import/export support in Blender to be a bit less mature for USDZ. I think it’s improving over time, but right now, I’ve had good success with using GLTF for anything that Blender will touch, and then using Apple’s Reality Converter at the end to produce USDZ.
Removing the Pot Using a Boolean Operation
After exporting the file in GLTF format (actually, it should get a .glb extension, as it will be a binary file), we can easily import it into Blender for our editing purposes. Our main task here is to remove the pot, which we accomplish using a Boolean modifier. For the potted plants, I centered the mesh so that the base of the plant is roughly at the world origin. Then, I created a cylinder just larger in diameter than the pot. Going into edit mode, I created two insets on the top face of the cylinder: one to cover the lip of the pot, and another that is offset downward using the “depth” field to form a cup around the base of the plant. The idea is that the bottom of this cup should be just below the soil line, and the lips of the cup should not intersect the plant itself, but should cover up the edges of the pot.
Generally, just the cylinder with two insets is not quite enough to remove the segments of the mesh that I want to remove while leaving the sections of the plant that I want to retain. To get it just right, I did a fair amount of manual modification of the edges and faces of the cylinder mesh to get them positioned the way I want. The final cylinder, prior to applying the Boolean operation, is seen in the video above. You can see that it’s not perfectly round; in fact, I’ve squeezed it in on a couple of the sides, and moved it upwards on one side to get the cut line to run right at the boundary of one of the leaves that was touching the side of the pot.
Fixing the Mesh
Once the Boolean was applied, the result was close to what I wanted, but still with a little bit of clean-up needed on the bottom of the mesh. Anytime we apply a Boolean operation to a mesh that has an image-based material texture, we’ll see that the resulting slices of the mesh on the boundary do not have any meaningful connection to the original UV map. While they will pull UV coordinates from the image, they will look seemingly random in relation to the neighboring geometry. You can see this in the first image below, where the “cup” section of the cutout is mostly white, and also just has that very unnaturally flat and cylindrical appearance.



My solution has a couple of parts:
- Create a “neutral” material and assign it to the section of the mesh on the clipping boundary. In my case, I chose a dark green, with roughness set to its maximum. The choice of color is not totally critical (especially after the second step), just choose whatever will blend in with the surroundings.
- Scale the section around the clipping region to be as small as you can, without overly distorting the surrounding mesh. I found that this takes a sequence of a few operations:
- Selecting the bottom face, and using “Select More” to grab its neighbors.
- Using the scale tool in edit mode to shrink it vertically and horizontally, with proportional editing enabled to selectively scale the neighboring vertices.
- Moving the scaled section downward to make sure it doesn’t intersect the object mesh, again with proportional editing.
- Repeating this a few times until it looks right.
You see the result of these operations in the second image, where the bottom cup is now a much smaller green circle, while leaving the upper plant sections relatively undisturbed.
Augmented Reality
Having applied a variation of the above process to six of our houseplants, I next worked on preparing them for a RealityKit app. The main thing is to take whatever you are working on in Blender and get it converted into USDZ format, which has full RealityKit support. I’ve complained a few times before about Blender’s support for USDZ; however, in the case of the model I was cleaning up in the preceding section, File->Export->Universal Scene Description turned out to work just fine. Just make sure to change the file extension to .usdz so that it bundles everything into a zip archive. I also usually select the “Selection Only” option to make sure only the object is included in the export, not the lights, cameras, or other unnecessary junk.


This is sufficient to bring the newly created model into Apple’s AR Quick Look, which is built into iOS. I usually do this by sharing with myself: finding the USDZ file I just exported, right-clicking, clicking Share, then AirDrop, and sending it to my own iPhone. AR Quick Look should open automatically on your phone, where you can then view your model, either in AR or against a blank background.
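
If you’d rather preview from inside your own app instead of AirDropping the file, the same AR Quick Look experience can also be presented programmatically. Here’s a small sketch; the view controller and file name are just illustrative, not part of my actual project.

import UIKit
import QuickLook
import ARKit

// Sketch: present the system AR Quick Look viewer for a USDZ file bundled
// with the app. "SnakePlant" is a placeholder for whichever model you exported.
final class PlantPreviewViewController: UIViewController, QLPreviewControllerDataSource {

    private let modelURL = Bundle.main.url(forResource: "SnakePlant", withExtension: "usdz")!

    func showPreview() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    // MARK: QLPreviewControllerDataSource

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // Wrapping the URL in ARQuickLookPreviewItem enables the AR tab.
        ARQuickLookPreviewItem(fileAt: modelURL)
    }
}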
I took all of the photogrammetry-based models and created a “Garden” scene in Reality Composer Pro that I import as a bundle inside my app. With the bundle imported, you can select any of the child models for further manipulation and experimentation. Reality Composer Pro projects are fairly easy to add as a package inside your Xcode project to be used in this way:
- In Xcode, select File->Add Package Dependencies to open the package window.
- Click “Add Local” at the bottom of the window.
- Navigate to the folder that was created by Reality Composer Pro, which should include a file named Package.realitycomposerpro.
- Click “Add Package,” and navigate the menus to make sure it gets added to your app target.

With those steps done, you can now use your Reality Composer project as a bundle. In the Swift file where you intend to use it, you import the package:
import GardenAssets // whatever name you gave to your Reality Composer project
Then, you can load individual assets from inside that asset bundle.
let name = "SnakePlant" // provide the name of the asset you want to access
guard let scene = try? await Entity(named: "Garden", in: GardenAssets.gardenAssetsBundle),
      let asset = scene.findEntity(named: name) else {
    return nil
}
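
From there, how you place the asset is up to your app. As one example (a sketch, reusing the bundle and entity names from above, not necessarily how my app does it), you could anchor the loaded plant to a detected horizontal plane in an ARView:

import RealityKit
import GardenAssets // the Reality Composer Pro package from the steps above

// Sketch: load a plant from the "Garden" scene and drop it onto the first
// horizontal plane ARKit detects. Names follow the example above.
@MainActor
func placePlant(named name: String, in arView: ARView) async {
    guard let scene = try? await Entity(named: "Garden", in: GardenAssets.gardenAssetsBundle),
          let plant = scene.findEntity(named: name) else { return }

    let anchor = AnchorEntity(plane: .horizontal)  // resolves once a plane is found
    anchor.addChild(plant)
    arView.scene.addAnchor(anchor)
}

You would call something like this from your AR view setup once the user picks a plant to preview.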
Conclusion

Just for fun, I put together the render above with all of the photogrammetry-scanned plants I’ve got in my current collection, with my current iteration of the DC-Engineer / Bilbao shield logo for good measure. This has been quite the exercise in learning to apply photogrammetry and the Polycam app to creating AR-ready 3D assets, along with expanding my skill set in Blender for mesh editing and rendering. I hope you found this post insightful, and that you can go on to apply it to your own projects!











