Unity just launched support for visionOS.
Unity's support for visionOS was first announced alongside Vision Pro in early June. Acknowledging the existing Unity AR/VR development community, Apple said "we know there is a community of developers who have been building incredible 3D apps for years" and announced a "deep partnership" with Unity.
Porting full Unity VR apps to visionOS - Apple calls these "Fully Immersive Experiences" - is relatively straightforward. You use a similar build chain to iOS, where Unity interfaces with Apple's Xcode IDE. Rec Room, for example, is confirmed as coming to Vision Pro with minimal changes.
But building AR apps in Unity for visionOS is very different, and introduces several important restrictions developers need to be aware of.
Content in the Shared Space - the default visionOS mode where apps run alongside each other - is rendered using Apple's RealityKit framework, not Unity's own rendering subsystem. To translate Unity mesh renderers, materials, shaders, and particles to RealityKit, Unity developed a new system it calls PolySpatial.
PolySpatial only supports certain materials and shaders though. Of Unity's included shaders, it supports Lit, Simple Lit, and Complex Lit for the Universal Render Pipeline (URP), and only the Standard shader for the Built-in Render Pipeline. Custom shaders and material types are supported, but only if built with the Shader Graph visual tool - handwritten shader code is not.
PolySpatial does have a unique advantage though: play mode runs directly on the headset, rather than requiring a full rebuild for each change. Unity says this should significantly reduce iteration time during testing.
The Unity visionOS package requires Unity Pro, which costs $2040 per year. That's a steep ask for indie developers and people wanting to experiment with visionOS, especially after forking out $3500 for the headset itself.
Update November 18: This article has been updated to reflect the Unity visionOS package exiting beta to full release.