How-to: Host 3D USDZ Content in iCloud for an AR App Using CloudKit, RealityKit, and SwiftUI

An upcoming iOS application that I am developing for a client includes content creation features in augmented reality (AR). When I started, I made fairly extensive use of Reality Composer Pro for prototyping and adding example 3D models. As soon as I added enough content to make the AR experience meaningful, however, the size of the build file rapidly ballooned to hundreds of megabytes.

Thus, I decided that I needed a cloud-based host for the app content. As I was already using SwiftData for user data persistence, CloudKit was an inviting option for hosting our provided models without forcing the user to download them at installation time. The benefit of CloudKit, relative to other hosting options, is that it is tightly integrated into iOS, and free up to a fairly high usage limit. Of course, it also ties you to being iOS-only, for the most part, but that’s not a concern with my current project.

If you like this tutorial, you may be interested in my previous explanation of how to create content for your RealityKit project in Blender.

I adapted the core cloud fetching and RealityKit entity display features from that project into a mini-app that I’m calling “Txirimiri,” which I’ve made open source on GitHub. I’ll cover most of the code from that repo in the sections to follow, but you may also clone that project if you want to use or modify some of its features. Make sure to add stars to the repo if you do!

This tutorial post will provide you with a procedure to add CloudKit support to your own app, which you can use to host your own content. This content is in the form of Universal Scene Description (USDZ) files, a standard in the 3D industry that is natively supported by RealityKit. You will learn about the initial setup of your Xcode project, and how to configure your schema in the CloudKit Console. Then, you will add code to fetch models from the cloud, and render them in AR on your iPhone or iPad (or even Apple Vision Pro).

Creating Your Project and CloudKit Container

Start by opening Xcode and selecting the “New Project” dialog. You can select the AR template if you like, though I prefer to start with a generic app template and add the RealityView myself in later steps. Give your project a name; you can choose any that you like. For this tutorial I am using “txirimiri,” one of the forms of the Basque word for a persistent light rain (from, you know, the clouds).

Do make sure to choose a storage option (SwiftData or Core Data), and check the box to “Host in CloudKit.” This will automate some of the setup steps to create the CloudKit container where we will be hosting our models.

Generate a CloudKit Container

Because we created the project with the CloudKit option turned on, an entitlements file was also created; initially, however, the “iCloud Container Identifiers” section will be empty. We need to create our container before it can be used inside our app.

Entitlements file, initially missing the Container ID

To do this, go to the “Signing & Capabilities” tab, and scroll to the iCloud section. Tap the plus (+) button above CloudKit Console to enter the “Add a new container” dialog, in which you should enter something like what is shown below, replacing com.dcengineer.txirimiri with an identifier consistent with your own app. Make sure to keep iCloud. at the beginning of the container identifier.

After you click OK, the new container will be created. It may initially be highlighted in red, as it is not detected instantaneously. After a few moments, you may click the refresh (⟳) button, after which it should no longer be red, as the container is now available in iCloud. Make sure the new container is selected, which you can also verify by returning to the entitlements file to see the new field in the “iCloud Container Identifiers.” Next we continue to the following step where we will add our content.

A final note of importance in the entitlements file is the “APS Environment” field, which will by default be set to a value of “development.” This is the version of the database that will be visible in your debug builds. When you are working in the CloudKit console, you will be able to create records in either your development or production database. Be aware as you are debugging that if you do not see the data you expect, you may need to toggle into your production database, or vice versa; likewise, make sure you have uploaded your records into the proper database for the build of the app that you are testing.
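For reference, the relevant entries in the entitlements file look roughly like the following (the container identifier shown is the one used in this tutorial; substitute your own):

```xml
<!-- Determines which database environment debug builds will see -->
<key>aps-environment</key>
<string>development</string>

<!-- The iCloud container(s) this app may access -->
<key>com.apple.developer.icloud-container-identifiers</key>
<array>
    <string>iCloud.com.dcengineer.txirimiri</string>
</array>
```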

(Optional) Add Permission to Use the Camera in AR

The initial setup of this tutorial will use the non-spatial tracking mode, which does not require camera permission, and is compatible with simulators or macOS, if you choose. However, RealityKit shines with its AR features, and at the end we will add a menu control that allows us to switch into the AR mode. Unfortunately, if you try to switch to the spatial tracking camera, you will encounter a somewhat obscure-looking crash.

Strange crashes are triggered if camera permission is not enabled

If you check the threads on the left side, you are unlikely to find any references to your code, as it is not actually a bug in anything that you have written to this point. Rather, if you expand the error on the right side, you will see that you have violated a camera privacy requirement.

Xcode 26 makes it convenient to add the permission

Thankfully, in Xcode 26, this is a very simple correction: you can simply click the “Add” button and provide your own explanation in the “Signing & Capabilities” tab. This belongs in the field in the lower right, which is initially blank. I have entered a short sentence in that space to explain the usage.

Make your explanation short and simple

Short is the key word here, as you want whatever you write to fit nicely on a line or two in the dialog that is shown to the user on iOS versions below 26 (on the latest, only a default message is shown). Just make sure your reasons are true to your intended usage, for purposes of Apple’s review process.
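Under the hood, the explanation is stored in your target’s Info.plist as the NSCameraUsageDescription key; if you prefer to add it by hand, the entry looks roughly like this (the description string is just my example wording):

```xml
<key>NSCameraUsageDescription</key>
<string>The camera is used to show your 3D models in augmented reality.</string>
```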

What the user will see in iOS 26

(Optional) Clean Up the Template Files

We’ll close out the project creation section by cleaning out a bit of the template that Apple provided as example code when we created the project. Though you might want to add SwiftData support later to persist the user’s state in your own app, it’s not directly applicable to this tutorial, and so I am going to remove it. Also, there are a couple of example UI controls that are built into the template, but that we won’t be using. Summarizing these cleanup steps:

  • Delete the Item.swift file entirely.
  • In your <YourAppName>App.swift file, delete the sharedModelContainer property, and remove the .modelContainer(sharedModelContainer) modifier from the WindowGroup.
  • In ContentView.swift, delete the lines starting with @Environment and @Query at the top of the file, delete the addItem and deleteItems functions, and remove the .modelContainer modifier in the preview.

Remove the List in the body property to eliminate the remaining compilation errors, or, honestly, just delete the entire NavigationSplitView. We’ll reconstruct the latter in subsequent steps.

Adding Content in the CloudKit Console

Before we continue with writing our app code, we will go to our browser and perform some initial setup steps for our 3D model database. The quickest way is to tap the “CloudKit Console” button inside Xcode, though you may also navigate to https://icloud.developer.apple.com. Once there, look for your newly created container in the drop-down menu in the upper left of the window.

Create a New Record Type for your Model3D

On the left hand panel, find the “Schema” section, and click on “Record Types,” where we will define a format for our model data.

Start in the schema section

Click the plus (+) button to the right of “Record Type,” and give our new type the name “Model3D,” before tapping the “Create Record Type” button.

Name your new record type `Model3D`
New record type, but no fields defined

Next we will create our schema for a 3D model in USDZ format. We include the following fields:

  • name String: A short name for our model. This can include capitalization and spaces, and we will not require this field to be unique, as there will be a separate identifier given to each model when we create their entries.
  • description String: A longer description for our model; a sentence or two is good.
  • extension String: The file extension for our 3D model, which will always be usdz in this tutorial, but could include other formats (obj, glb, stl, …) in your own app.
  • model Asset: The model file itself.
  • thumbnail Asset: A small image file representing our model.

To create our “name” field, tap the plus (+) button to the right of “Record Fields.” A dialog will appear, in which we name our field, and there is a dropdown where we can select its type, in this case choosing “String.”

Choose between asset, string, numeric, or other data types

Repeat with each of the other fields described above, choosing “String” for the description and extension fields, but “Asset” for the model and thumbnail fields.

After all fields created

Make Your Schema Indexable

To make our new schema searchable, we need to also add a basic index. Staying in the schema section, click the “Indexes” button in the left hand panel, and then click the “Add Index” button in the center. This will open a dialog. Choose the Model3D record type in the first dropdown. Skip the “Name” field for a moment, go to the third field, “Type,” and select “QUERYABLE.” This reveals a subsequent dropdown, in which you can select recordName, a unique identifier that exists in every record schema. Returning to the second field, you may give the index whatever name you find most descriptive; I chose recordName here as well, for simplicity.

Deploy to the Production Database

Tap “Save Changes” in the lower right to finalize your Model3D schema. There is one more important step, however: click the “Deploy Schema Changes” button in the lower left, which makes your schema available in the production database, the version that will be visible to your users. This opens the dialog below, in which you may simply click the “deploy” button once you have reviewed the changes.

Confirm your new schema

Upload Your Models

Now we can upload our model files and supporting information to fill out the schema for each. For this tutorial, I will create entries for several of my photogrammetry-scanned models, which I have reduced to low-poly meshes with baked textures, optimized for AR. These are also hosted on my Sketchfab page at https://sketchfab.com/eliott.radcliffe, and I have added thumbnail images that I rendered in Blender.

First of all, make sure the same database version is selected in the dropdown at the top of the screen as you have specified in your entitlements file. You may end up uploading versions of your models to both production and development; just make a note to check, to avoid confusion. Then click the “Records” section on the left hand side. You can tap the plus (+) button next to Records to create your first model. Mine will be a souvenir Basque pelota that I purchased in Hendaia.

The real-world pelota

When the New Record dialog appears, you will see metadata for a new entry on the right hand side. Note that there is a “Name” field at the top, pre-populated with a random UUID. This will, in fact, become the recordName field that we set up for querying. You may choose to keep this UUID; however, I prefer to provide my own name in this field, knowing that I will personally ensure these are unique. This is a different name field than the one we defined in our schema: the metadata name is a permanent, unique identifier used in querying, while the schema name is just a label; it can change and does not need to be unique. If you want to give your own names for the metadata, make sure to do so prior to tapping Save, as you will not be able to change them afterward. I gave my example the name “pelota,” in all lowercase letters.

“name” under Metadata is equivalent to the indexable “recordName” property

The schema fields will be in alphabetical order, not the order in which we created them. Provide the name, description, and extension fields as strings; the first two can be anything you feel is informative, while the third should be exactly usdz, as it is the file extension. For now, usdz is the only format this tutorial will support. The model and thumbnail files can both be selected by tapping the respective “Choose File” fields, then navigating to the file you will upload. You can see that I am uploading a 1.2 megabyte usdz file and a 21 kilobyte jpeg file, representing a “large” and a “small” format for our model data. Click the “Save” button on the bottom right to save your model data, so it can be accessed inside our app.

Rather than repeating myself, I will note here that this is one of several models that I uploaded using roughly this same procedure, which will shortly be visible in our app UI.

Building Code to Query the Database

Back in Xcode, we will create the data model layer: a Swift data structure that matches the schema we created in CloudKit, and a manager class that is responsible for querying for models in this form and providing results to our views.

Create a Model3D Structure

Create a new file from template, select Swift File, and name it Model3D. This is a data structure to receive the CloudKit content, and convert it into the Entity format that can be displayed by RealityKit. Create a new Model3D structure, with the name, description, extension, model, and thumbnail fields as we previously defined in the schema. Also add an identifier field, which will be stored as a string.

struct Model3D: Identifiable, Hashable {
    var name: String
    var description: String
    var ext: String? // "extension" is a reserved word in Swift
    var model: Data?
    var thumbnail: Data?
    
    var id: String
}

Create a Content Manager

Next, create another new file and name it ContentManager. This class is responsible for initializing our CloudKit container in the app, fetching content from the cloud on demand, and storing the results for presentation in the app. Create the skeleton of the ContentManager class as follows, using the @Observable macro so that our SwiftUI views respond to updates.

import Foundation
import CloudKit

@Observable
class ContentManager {
    /* We will fill in our initialization and fetch methods here */
} 

To initialize CloudKit and query our database, we will need to provide an identifier, and then initialize the container and public cloud database associated with that identifier. Add the following inside the ContentManager class:

let database: CKDatabase
init(for identifier: String) {
    let container = CKContainer(identifier: identifier)
    database = container.publicCloudDatabase
} 

Create Template Query Functions

Now add our query methods. We’ll include a method that gathers the lightweight name, description, and extension fields for every model, plus methods that specifically request the thumbnail or model data for a single record. The lightweight query is useful for displaying entries in menus without downloading every model at once. First, define a variable to store models that have already been fetched, and empty asynchronous functions that will perform the work.

/// Store models that have been fetched
var models = [Model3D]()

/// Perform a query for only the `name`, `description`, and `extension` fields for all models available in the database
func fetchLightweightRecords() async {
    
}

/// Perform a query for `thumbnail` data of a specific record
func fetchThumbnail(for name: String) async -> Model3D? {
    return nil
}

/// Perform a query for `model` data of a specific record
func fetchModel(for name: String) async -> Model3D? {
    return nil
}

Build a Lightweight Query Method

We will start by building our code for the fetchLightweightRecords method, in which you can insert the following:

func fetchLightweightRecords() async {
    // Build a query for records of type Model3D
    let predicate = NSPredicate(value: true)
    let query = CKQuery(
        recordType: "Model3D",
        predicate: predicate
    )

    // Gather all of the CKRecord objects for the Model3D schema, in lightweight form
    let records: [CKRecord]
    do {
        records = try await database.records(
            matching: query,
            desiredKeys: ["name", "description", "extension"]
        )
        .matchResults
        .compactMap { (id, result) in
            switch result {
            case .success(let record): return record
            default: return nil
            }
        }
    } catch {
        print("Could not fetch lightweight records: \(error.localizedDescription)")
        return
    }
    
    // Update the stored models given the array of CKRecord
    records.forEach { record in
        let id = record.recordID.recordName
        let name = record.value(forKey: "name") as? String
        let description = record.value(forKey: "description") as? String
        if let index = models.firstIndex(where: { $0.id == id }) {
            models[index].name = name ?? models[index].name
            models[index].description = description ?? models[index].description
        } else if let name, let description {
            models.append(Model3D(
                name: name,
                description: description,
                id: id
            ))
        }
    }
}

Refactor into Reusable Query, Fetch, and Convert Functions

We can see the above can be split into three distinct sections:

  1. Building a query for our model type, with a predicate that determines, when a record is found, if it should be retained in the result that is sent back.
  2. Fetching the records for that query from the CloudKit database.
  3. Converting records into Model3D format, and adding to or updating the stored models.

While our implementation above is specific to the “lightweight” query, it is reasonable to assume that a variation of each of the above steps will be performed in the queries for thumbnails and models as well. Therefore, we should refactor such that we can reuse the common code in a set of three private functions. 

First, create our code to construct a query.

private func buildQuery(for name: String? = nil) -> CKQuery {
    let predicate: NSPredicate
    if let name {
        let recordID: CKRecord.ID = CKRecord.ID(recordName: name)
        predicate = NSPredicate(format: "recordID=%@", recordID)
    } else {
        predicate = NSPredicate(value: true)
    }
    
    return CKQuery(
        recordType: "Model3D",
        predicate: predicate
    )
}

We can see a very similar structure to our original lightweight query, the difference being that we have now provided a name argument. Using this argument, we can retain records for only a given model, particularly when we want to query for the actual usdz-formatted model. Those are large files, and thus we want to download one model at a time, not every model in the database. If no name argument is provided, we gather all records, as we did previously in the lightweight query.

Next, create our code to asynchronously fetch records given that query.

private func fetchRecords(for query: CKQuery, desiredKeys: [String]) async -> [CKRecord] {
    do {
        return try await database.records(
            matching: query,
            desiredKeys: desiredKeys
        )
        .matchResults
        .compactMap { (id, result) in
            switch result {
            case .success(let record): return record
            default: return nil
            }
        }
    } catch {
        print("Could not fetch records for \n - query: \(query.recordType)\n - predicate \(query.predicate)\n - desiredKeys: \(desiredKeys)\n - error: \(error.localizedDescription)")
        return []
    }
}

Here we can see that we moved our desiredKeys into a function argument, instead of hard-coding it into the database.records method call. That method returns a tuple whose matchResults element contains an array of identifier–Result pairs. We unpack the latter using a compact map, retaining only the records where the query was successful. You may want to handle the error cases in your production app; here, however, this is sufficient for demonstration. In the case that an error was thrown, details will be printed, and an empty array is returned.
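The compact-map-over-Result pattern itself is independent of CloudKit, and can be illustrated with plain Swift values (the record names here are made up for the example):

```swift
import Foundation

// A simplified stand-in for the (id, Result) pairs found in matchResults
let matchResults: [(String, Result<String, Error>)] = [
    ("record-1", .success("pelota")),
    ("record-2", .failure(NSError(domain: "demo", code: 1))),
    ("record-3", .success("txapela")),
]

// Keep only the successful values, silently discarding failures
let records = matchResults.compactMap { (id, result) -> String? in
    switch result {
    case .success(let record): return record
    default: return nil
    }
}
// records == ["pelota", "txapela"]
```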

Finally, create our method that we use to update our models array after the array of records are obtained.

private func updateModels(with records: [CKRecord]) {
    records.forEach { record in
        let id = record.recordID.recordName
        let name = record.value(forKey: "name") as? String
        let description = record.value(forKey: "description") as? String
        let ext = record.value(forKey: "extension") as? String
        let model = record.data(forKey: "model")
        let thumbnail = record.data(forKey: "thumbnail")
        
        // Create a new model with all available data
        let existingModel = models.first(where: { $0.id == id })
        let newModel = Model3D(
            name: name ?? existingModel?.name ?? id,
            description: description ?? existingModel?.description ?? id,
            ext: ext ?? existingModel?.ext,
            model: model ?? existingModel?.model,
            thumbnail: thumbnail ?? existingModel?.thumbnail,
            id: id
        )
        
        // If it exists, update the existing model with any new data, otherwise add it
        if let index = models.firstIndex(where: { $0.id == id }) {
            models[index] = newModel
        } else if name != nil, description != nil {
            models.append(newModel)
        }
    }
}

The primary update here is that the records are provided as an argument, and all fields from the Model3D schema are being unpacked from the record, not just name and description. A new record will only be created in the case that name and description were provided, which will always happen in the lightweight query, whereas in the case that other keys were queried, existing models will be updated, retaining their original data if the key fields are nil.
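To make those merge rules concrete, here is a small standalone illustration of the update-versus-append behavior. It uses a trimmed copy of Model3D so the snippet runs on its own; the apply function and sample values are hypothetical stand-ins for a fetched record:

```swift
import Foundation

// A trimmed copy of the tutorial's Model3D, so this snippet runs on its own
struct Model3D: Identifiable {
    var name: String
    var description: String
    var thumbnail: Data?
    var id: String
}

var models = [Model3D(name: "Pelota", description: "A souvenir ball", thumbnail: nil, id: "pelota")]

// Apply one fetched record using the same merge rules as updateModels
func apply(id: String, name: String?, description: String?, thumbnail: Data?) {
    let existing = models.first(where: { $0.id == id })
    let merged = Model3D(
        name: name ?? existing?.name ?? id,
        description: description ?? existing?.description ?? id,
        thumbnail: thumbnail ?? existing?.thumbnail,
        id: id
    )
    if let index = models.firstIndex(where: { $0.id == id }) {
        models[index] = merged              // known id: update in place
    } else if name != nil, description != nil {
        models.append(merged)               // append only fully described records
    }
}

// A thumbnail-only result updates the existing model...
apply(id: "pelota", name: nil, description: nil, thumbnail: Data([0x01]))
// ...while a thumbnail-only result for an unknown id is skipped entirely
apply(id: "mystery", name: nil, description: nil, thumbnail: Data([0x02]))
// models still contains exactly one entry, now carrying the thumbnail
```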

To make the above compile, we also need to add an extension to the CKRecord type which will provide the Data at a given key, if available. This is used with the model and thumbnail fields, which were created as asset types in the schema.

extension CKRecord {
    func data(forKey key: String) -> Data? {
        guard let asset = value(forKey: key) as? CKAsset else { return nil }
        guard let fileURL = asset.fileURL else { return nil }
        return try? Data(contentsOf: fileURL)
    }
}

Complete our Query Methods

Now that we have added these private query functions, we can see that the lightweight query can reduce to just three lines:

func fetchLightweightRecords() async {
    let query = buildQuery()
    let records = await fetchRecords(for: query, desiredKeys: ["name", "description", "extension"])
    updateModels(with: records)
}

Similarly, we can build our query for the model with thumbnail or model data:

func fetchThumbnail(for name: String) async -> Model3D? {
    let query = buildQuery(for: name)
    let records = await fetchRecords(for: query, desiredKeys: ["thumbnail"])
    updateModels(with: records)
    return models.first(where: { $0.id == name })
}

func fetchModel(for name: String) async -> Model3D? {
    let query = buildQuery(for: name)
    let records = await fetchRecords(for: query, desiredKeys: ["model"])
    updateModels(with: records)
    return models.first(where: { $0.id == name })
}

The changes from the lightweight version are that we build our query for a model with a specific name, we fetch with only the single key depending on the data we are requesting, and we return the model after updating.

Make the Content Manager Available to the View

Finally, we will instantiate our content manager in the app and make it available to our views. Find your app entry point, which for my example is txirimiriApp.swift, and insert the following under the struct definition:

let manager = ContentManager(for: "iCloud.com.dcengineer.txirimiri")

Then, add this as an environment variable by adding a modifier to the view, also inside your main app struct:

WindowGroup {
    ContentView()
        .environment(manager)
}

A quick note here: we could equivalently have provided our content manager as an argument to the content view, with supporting initialization code. However, in a real app, the code to fetch data from the cloud may be functionality that you want to access in multiple places in the UI, which makes it a good fit for injection into the environment.

Of course, to make this variable available inside our content view, we need to add the corresponding reference at the top of our view struct, or anywhere we want to access it, which looks like this:

@Environment(ContentManager.self) var manager

By using the environment approach, that line will be available in any of the SwiftUI views where you need it.

Viewing Your Models in RealityKit

Now that we have our content manager in place to gather the model data, we can build our user interface (UI) to view a model in AR. 

Create a Model Selection Item

First we will build our selection interface, which will provide a list of items with the name, description, and thumbnail image of each model found in the database, as shown in the image below. 

Menu content for one model

Create a new file from template, select the SwiftUI View option, and give it the name ModelMenuContent. Build that view as follows:

struct ModelMenuContent: View {
    @Environment(ContentManager.self) var manager
    
    let model: Model3D
    
    var body: some View {
        HStack {
            thumbnailOrPlaceholderImage()
            
            VStack(alignment: .leading) {
                Text(model.name).font(.headline)
                Text(model.description).font(.caption)
            }
        }
    }
    
    private func thumbnailOrPlaceholderImage() -> some View {
        Group {
            if let thumbnail = model.thumbnail,
                let uiImage = UIImage(data: thumbnail) {
                Image(uiImage: uiImage)
                    .resizable()
            } else {
                Image(systemName: "cube")
                    .resizable()
                    .padding(8)
                    .task {
                        let _ = await manager.fetchThumbnail(for: model.id)
                    }
            }
        }
        .scaledToFill()
        .frame(width: 64, height: 64)
        .clipped()
    }
}

This is the content for a single model, with a horizontal stack of the thumbnail image on the left, and name and description on the right. Most of the code is contained in the thumbnailOrPlaceholderImage sub-view, which will either show an image derived from the data that we already fetched in a previous query, or show a placeholder image while requesting a new fetch from the content manager. If thumbnail data is found in that query, the view will automatically update with that thumbnail image replacing the placeholder.

Create the Model Selection List

Next we will create our root view, which presents a list of models that may be loaded and viewed in 3D. Once again, create a new file from template, choose SwiftUI view, and name it ModelSelectionView, with the following code in its body.

struct ModelSelectionView: View {
    @Environment(ContentManager.self) var manager
    
    var body: some View {
        VStack {
            Image(.header)
                .foregroundColor(.green)
            List {
                Section("Select a model from the list") {
                    ForEach(manager.models) { model in
                        NavigationLink(value: model) {
                            ModelMenuContent(model: model)
                        }
                    }
                }
            }
            .refreshable {
                await manager.fetchLightweightRecords()
            }
        }
        .task {
            await manager.fetchLightweightRecords()
        }
    }
}

The body of our view is a VStack, placing an optional header image at the top, and a list of models below. I stress that the header is optional: if you paste in the code and see a compilation error there because you don’t have an equivalent image, go ahead and delete that line, or remove the VStack entirely and make the List the entirety of the body.

You could also simplify by providing the models as an argument to the list, and bypass the Section and ForEach inside of it. However, I have added these because I like the styling of the section header, which appears to me to be more “attached” to the list content, versus placing a simple text label above. That is purely an aesthetic choice, though; style it as you wish.

The menu content itself is wrapped in a NavigationLink, which will broadcast the model as a navigation destination. That navigation itself will be managed in the top-level view, which I’ll return to shortly.

Finally, note the two modifiers, .refreshable and .task, both of which trigger the content manager to fetch lightweight records, in other words, the names and descriptions of the models available in CloudKit. The .task modifier runs when the view first appears, and .refreshable runs if the user drags downward on the list.

Putting it all together, and with a few more models added in CloudKit, the model selection looks like this:

Create the Augmented Reality View

We will be using a RealityView, with SwiftUI overlays to select from our list of models in our CloudKit database. Start by replacing the body in our ContentView with:

var body: some View {
    RealityView { content in 
        /* Handle initialization, "make" the view */
    } update: { content in
        /* Add content that responds to state changes */
    } placeholder: {
        /* Add a temporary view while the model is downloading */
    }
}

We can see that this separates our view into an initialization and an update method, each providing the same content object where we will build our scene. Also, we will include a placeholder, which is important since the USDZ models will tend to be several megabytes and take some time to load; it is good practice to provide some feedback to the user that the download is in progress.

Add a View Model

There are various logical steps required to build the scene content. These could be built directly into our view structure, however, taking a design philosophy of separating our view from our logic, here we will create a view model, which resides in an extension to the view. Start with the basic template below.

extension Model3DView {
    @Observable
    class ViewModel {
        var model: Model3D
        var entity: ModelEntity?
        var isSpatialTracking = false
        
        init(model: Model3D) {
            self.model = model
        }
    }
}

extension Model3DView.ViewModel {
    /* Add methods to gather entity data from the Model3D */
}

The view model will receive a Model3D upon its initialization, and will have the responsibility of generating a RealityKit ModelEntity representation. There is also a boolean for whether the view will use a spatial tracking camera, which we will allow the user to select via a SwiftUI control.

Asynchronously Fetch a Model Entity

Earlier, when building our content manager class, we created an extension on the CKRecord class that would provide a generic Data object representing the file that is stored as an asset in CloudKit. This nicely decouples our Model3D object such that it is constructed purely of generic Swift types, and could be easily serialized for local and/or SwiftData storage, if desired. Unfortunately, at the time of writing, there is no built-in initializer for RealityKit entities from Data. Of course, we can write our own, which I have done below in the view model extension as the getEntity method.

func getEntity(from manager: ContentManager) async -> ModelEntity? {
    // If the entity has already been loaded, return it
    if let entity {
        return entity
    }
    
    // If the entity has not been loaded, get the stored data, or fetch
    var data: Data? = model.model
    if data == nil, let fetchedModel = await manager.fetchModel(for: model.id) {
        data = fetchedModel.model
        model = fetchedModel
    }
    guard let data else { return nil }
    guard model.ext == "usdz" else {
        print("WARNING: \(model.ext ?? "nil") is not a supported file extension for loading an entity, returning nil")
        return nil
    }

    // Write the stored Data to a USDZ file in a temporary directory,
    // and give it an extension so RealityKit recognizes which loader to use
    let tempDir = FileManager.default.temporaryDirectory
    let tempURL = tempDir
        .appendingPathComponent(model.id)
        .appendingPathExtension(model.ext ?? "usdz")
    try? data.write(to: tempURL)

    // Load the entity, clean up the temporary file, and return the entity
    entity = try? await ModelEntity(contentsOf: tempURL)
    try? FileManager.default.removeItem(at: tempURL)
    return entity
}

This method is asynchronous, as it will typically include a fetch call from a remote server, and we don’t want the UI to freeze while waiting for a response. Even in the case where the model is already downloaded, entity initialization can still take a large enough fraction of a second to be noticeable, and we prefer it be handled in the background.

We include a couple of checks to see whether the model data and entity have previously been loaded, in which case we can bypass the fetching step. In the case that we do fetch, we update the stored model, then grab the data variable and verify that it is non-nil before further processing.

RealityKit provides the asynchronous ModelEntity(contentsOf: … ) initializer, which loads an entity from a file, provided that file is in the USDZ format. Our roundabout method for obtaining the entity is to first write the contents of our Data object to a temporary file, which, if Swift is to be trusted, should be the exact same file as previously uploaded to CloudKit. Then, this file is loaded using the standard initializer.

Add the Fetched Entity to the Scene Content

Returning to the Model3DView, we need to initialize our view model using a model provided from the model selection list. Place the following at the top of the view structure, above its body:

@Environment(ContentManager.self) var manager

@State var viewModel: ViewModel

init(for model: Model3D) {
    viewModel = ViewModel(model: model)
}

With this in place, we can add the getEntity call inside the primary closure of the RealityView, also known as its “make” method.

let _ = await viewModel.getEntity(from: manager)

Here I am suppressing the output, which is either the fetched ModelEntity, or nil. A very simple starting point for your app may be to use the entity here, and put a new line directly underneath with content.add(entity). This would, indeed, add the 3D model to the scene. However, in my case I want to add some additional user interactivity, so I will instead use the update method to build the scene. The update closure is triggered by changes to observed properties in the view model, namely the entity (post-fetching) and the isSpatialTracking boolean. Inside the update closure, add the following:

content.camera = viewModel.isSpatialTracking ? .spatialTracking : .virtual
content.entities.removeAll()
let anchor = viewModel.currentAnchorEntity
content.add(anchor)
viewModel.entity?.setParent(anchor)

Here we set the content.camera, a simple conversion of our boolean value to the proper camera structure. Next, we clean up the content by removing any existing entities, which prevents view artifacts from a prior camera configuration from carrying over into our update. Then, we gather an AnchorEntity, which I have delegated to the view model to provide, add it to our content, and set the entity's parent to this anchor. Note that because we fetched the entity in the first closure, the update closure will add it to the scene as soon as it is available in the view model.

Because we support both spatial tracking and virtual camera modes, we need a separate anchoring configuration for each. This anchor is constructed in the view model as follows:

var currentAnchorEntity: AnchorEntity {
    let anchor: AnchorEntity
    if isSpatialTracking {
        anchor = AnchorEntity(plane: .horizontal)
    } else {
        anchor = AnchorEntity(world: .zero)
        let camera = PerspectiveCamera()
        camera.look(at: .zero, from: .one, relativeTo: nil)
        camera.setParent(anchor)
    }
    return anchor
}

In the spatial tracking case, we want the anchor to be attached to a horizontal plane that is detected in the real world, which should appear directly in front of the user as soon as one is detected. In contrast, for the virtual camera, we want the anchor to be at the world origin, and we want to make sure the camera is properly positioned and oriented to look at this anchor.

There are, of course, many application-specific details of how you might want to handle the anchoring scheme. In my case, I also applied scaling and translation to the entity to make sure it is centered and sized properly for whichever model and view mode you choose. That is pretty vanilla RealityKit code, so I won't include it in this tutorial, but you can view the GitHub repository if interested.
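To give a flavor of what such centering and scaling might look like, here is a hedged sketch. The helper name, the targetSize parameter, and the default value are all illustrative assumptions, not the exact code from the repository:

```swift
// Illustrative sketch: scale and center an entity so its bounding box
// fits within a target size (in meters). Names and values here are
// assumptions, not the exact code from the Txirimiri repository.
import RealityKit

func fit(_ entity: ModelEntity, within targetSize: Float = 0.3) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
    guard maxExtent > 0 else { return }
    let scale = targetSize / maxExtent
    entity.scale = SIMD3<Float>(repeating: scale)
    // Shift so the (scaled) bounding-box center sits at the anchor origin
    entity.position = -bounds.center * scale
}
```

A helper like this would be called after the entity finishes loading, before or after parenting it to the anchor.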

Finishing Touches with SwiftUI

The last steps toward making the 3D view visible in our app are completed using SwiftUI, by adding the view to our navigation hierarchy, and adding a few interaction elements. On the former, return to our ContentView.swift file, and replace its body with the following:

NavigationSplitView {
    ModelSelectionView()
        .navigationDestination(for: Model3D.self) { model in
            Model3DView(for: model)
                .navigationTitle(model.name)
                .id(model.id)
        }
} detail: {
    Text("Select a model")
}

This establishes that our selection list is the root view, and adds a navigation destination to our 3D view, which is invoked when the user taps one of the list items. Since this is a NavigationSplitView, if the app is run on an iPad or macOS, it will display with the selection list in the left panel, versus on an iPhone where it will behave like a conventional navigation stack. A key point is adding the .id modifier, which will ensure that the view gets refreshed on navigating to a new model.

Our other main changes are the modifiers that are attached to the RealityView, which are found in Model3DView.swift.

.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
.toolbar {
    ToolbarItem {
        Menu {
            Picker("View Mode", selection: $viewModel.isSpatialTracking) {
                Label("Normal", systemImage: "camera").tag(false)
                Label("AR", systemImage: "arkit").tag(true)
            }
        } label: {
            Image(systemName: viewModel.isSpatialTracking ? "arkit" : "camera")
        }
    }
}

The .realityViewCameraControls(.orbit) modifier adds the orbiting camera control, so that we can rotate our camera around the model, and pinch to zoom. The toolbar adds a menu picker to choose between spatial tracking and virtual cameras. Since we included our scene building content in the update method, any change to the viewModel.isSpatialTracking variable will automatically trigger an update between the two view modes.

The Txirimiri app on macOS

The final result will look like the image above. I have selected a screenshot from a macOS build, just so you can see all of the key features in a single image. On the left hand side is the model selection view, with a header image, and list of models to choose from. The “Puppy” item is highlighted, indicating its selection from the list. On the right hand side we see the corresponding 3D model, rendered in RealityKit. The camera icon control, on the upper right, is where we would select between the spatial tracking or virtual views. Of course, since this screenshot is on macOS, AR views are not available, but you may refer back to the video at the start of this post for that feature.

Conclusion

That’s a wrap on my tutorial on how to create a native iOS app that fetches 3D model content from iCloud to display in AR. Following this approach will help you to greatly reduce the size of your app builds, as your users can now download 3D content on-demand, not all-at-once as part of the build file.

Lessons Learned

Some of the non-obvious things that I hope you may have learned from this tutorial are:

  • How to configure your CloudKit container, including setting up your entitlements file, creating schema in the browser console, and uploading your own models.
  • Fetching from iCloud using modern, asynchronous calls, with query logic in place to manage requests of specific models, not “all-at-once.”
  • How to convert a fetched model asset into Data, and then into a ModelEntity.
  • Using a RealityView, with code in the update closure that responds to changes in view model state when a model has been downloaded.

Possible Improvements

Though this tutorial ballooned to over 6000 words, there are still a handful of nice features that I did not yet attempt to implement, or omitted for brevity. In a future post, I may consider adding some of the following:

  • Better error handling, including notifying the user if a model failed to load, or retrying if there was a network failure.
  • Model data caching, so that a model only needs to be downloaded once.
  • Alternate common file formats and extensions, such as .glb, .obj, or .stl.
  • Importing and sharing of user models.
  • Manipulation of models inside the UI, such as transformation, or changing textures.
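On the caching point, a very simple on-disk cache might look something like the sketch below. The type name and method signatures are hypothetical, and error handling is deliberately minimal; it reuses the id and ext fields from the Model3D type used throughout this tutorial:

```swift
// Sketch of a simple on-disk cache for downloaded model Data, keyed by
// the model's id and file extension. Names here are illustrative.
import Foundation

struct ModelDataCache {
    private let directory = FileManager.default.urls(
        for: .cachesDirectory, in: .userDomainMask)[0]

    private func url(id: String, ext: String) -> URL {
        directory.appendingPathComponent(id).appendingPathExtension(ext)
    }

    // Returns cached data if present, nil otherwise
    func load(id: String, ext: String) -> Data? {
        try? Data(contentsOf: url(id: id, ext: ext))
    }

    // Stores freshly downloaded data for reuse on later launches
    func store(_ data: Data, id: String, ext: String) {
        try? data.write(to: url(id: id, ext: ext))
    }
}
```

The fetch path in getEntity would then consult the cache before falling back to a CloudKit request.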

Want to Collaborate?

I hope you find this useful, and can apply this to your own projects. Of course, if you would like me to bring my expertise directly to your app, I am also currently open to taking on freelance or consulting projects. I bring a multidisciplinary background to the table, including over a decade of experience in the aerospace industry, and a more recent career focus on building 3D and AR experiences on multiple platforms. If you want to work with me, leave a comment, or email me at Eliott.Radcliffe@DC-Engineer.com

Txin Txin!
