When creating plugins, such as my old Shader Forge, or now, Shapes, you generally want them to look and feel native, as well as being user-friendly. Unity has a ton of useful ways of creating and running in-editor scripts purely as a matter of practicality, but sometimes, going that last 20% of polish expected of self-contained plugins turns out to be more difficult than you'd want it to be
Here are two features I was missing from Unity that seem deceptively simple, yet cascaded into a plethora of secondary issues that heavily affected both the quality of Shapes and how much time I had to spend on workarounds
Shapes has a set of components, where each component represents a simple shape, such as a disc, a line, or a rectangle.
I wanted to make my own, self-contained Renderer component acting just like any other renderer in Unity, such as the MeshRenderer, SpriteRenderer, LineRenderer, and so on!
💔 You can’t inherit from Renderer 💔
If you try, your component can't be added to game objects, whether by dragging it there or adding it through code
The one route I can go now is to use a MeshFilter/MeshRenderer pair, alongside my own Shape component
So, each Shape now requires two extra components: MeshFilter and MeshRenderer. The technical bit of making sure they are added is easy enough: slap on a [RequireComponent] attribute and we're good to go, right?
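For reference, that naive setup looks something like this — a minimal sketch, where Disc stands in for any Shape component:

```csharp
using UnityEngine;

// Naive approach: have Unity auto-add the components we depend on.
// The moment a Disc is added to a game object, Unity also adds
// a MeshFilter and a MeshRenderer, and prevents removing them
// while the Disc still exists
[RequireComponent( typeof(MeshFilter), typeof(MeshRenderer) )]
public class Disc : MonoBehaviour {
	// shape properties, mesh/material management, etc.
}
```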
Technically this “works”, but as a user, it’s a pretty horrible experience
Here are the issues that are a direct consequence of not being able to inherit from Renderer
The user should not be able to assign materials or meshes
This should be managed by the Shape component itself. I don't want the user to feel expected to assign these themselves, or even care about these components. On top of that, it's unnecessary and confusing clutter: seeing two components, one of them huge, that you shouldn't (and shouldn't want to) touch, auto-added to your game object every time you add a shape, is confusing, messy, and can let users make mistakes they shouldn't be able to make in the first place
Now, this one can be solved relatively easily by hiding the components using HideFlags.HideInInspector, so we now have a "single component" 👀 (as long as you don't tell anyone about the two hidden extra components, MeshFilter and MeshRenderer)
That being said, this required extra shenanigans to make sure they were actually, completely hidden. Prefabs will un-hide hidden components, so I had to force them to be hidden in more than one place, and it took a while to even figure out how to do that without having them be visible for one frame before disappearing. Hiding them (again) in OnEnable of the Editor (custom inspector) works for the most part, but they still flash for one frame the first time you select a prefab asset with a Shape. I've yet to find a proper solution to this, but in the grand scheme of things, this is minor enough in my book
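The re-hiding trick can be sketched like this — DiscEditor is a hypothetical custom inspector, not the actual Shapes code:

```csharp
using UnityEngine;
using UnityEditor;

// Sketch: hide the helper components when the inspector wakes up,
// in addition to hiding them when they're first created, since
// prefabs will un-hide hidden components
[CustomEditor( typeof(Disc) )]
public class DiscEditor : Editor {

	void OnEnable() {
		Disc disc = target as Disc;
		if( disc == null )
			return;
		// HideInInspector removes them from the inspector view,
		// but they still exist on the game object and render as usual
		MeshFilter mf = disc.GetComponent<MeshFilter>();
		MeshRenderer mr = disc.GetComponent<MeshRenderer>();
		if( mf != null ) mf.hideFlags = HideFlags.HideInInspector;
		if( mr != null ) mr.hideFlags = HideFlags.HideInInspector;
	}

}
```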
You can’t assign a mesh to MeshFilter in OnValidate
This might seem like a small issue, but OnValidate is used everywhere in Shapes to make sure the mesh filter and mesh renderer are using the correct, up-to-date meshes, materials and per-instance material properties. Some Shapes use procedurally generated meshes, which, just like all other properties, need updating when you tweak them, and that updating is done in OnValidate. The consequence for the user is that tweaking any property in the inspector floods the console with warnings, every frame something changes
This is currently marked as “by design” (?) in Unity’s issue tracker.
Oh, and if you select a prefab asset where you assign a mesh in OnValidate, Unity enters whatever fresh refresh-loop hell this is
My workaround for this is admittedly a little ugly: I subscribe to OnPreCull for the Shapes that use procedural meshes, and when I change properties that require the mesh to be updated, I set a meshOutOfDate bool. Then, in OnPreCull, I regenerate the mesh if it's marked as out of date. The nice thing about this method is that it works just fine in both play mode and the editor, and it prevents the mesh from being updated more than once per frame if you're tweaking a lot of properties that all require it to update
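The dirty-flag pattern described above can be sketched like this — a simplified, assumed version, using Camera.onPreCull and made-up property names:

```csharp
using UnityEngine;

public class Disc : MonoBehaviour {

	[SerializeField] float radius = 1f;
	bool meshOutOfDate = true; // dirty flag, never regenerate directly

	public float Radius {
		get => radius;
		set {
			radius = value;
			meshOutOfDate = true; // defer regeneration to pre-cull
		}
	}

	void OnEnable() => Camera.onPreCull += OnCameraPreCull;
	void OnDisable() => Camera.onPreCull -= OnCameraPreCull;

	// no mesh assignment here, just mark it dirty
	void OnValidate() => meshOutOfDate = true;

	void OnCameraPreCull( Camera cam ) {
		if( meshOutOfDate ) {
			meshOutOfDate = false; // runs at most once per frame
			RegenerateMesh(); // safe to touch the MeshFilter here
		}
	}

	void RegenerateMesh() {
		// rebuild the procedural mesh and assign it to the MeshFilter
	}

}
```

Since the flag is cleared on the first camera that pre-culls each frame, tweaking ten properties in one frame still results in a single mesh rebuild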
Deleting a Shape doesn’t delete the MeshFilter & MeshRenderer
You can't "just" remove a Shape component anymore, either in the editor or in code; you now have to clean up the other two components manually, which is annoying, and something you shouldn't have to do in the first place, if only this were a single self-contained renderer component
So, what if I delete them in OnDestroy of the Shape component?
Ah, but you see, I can’t, because my Shape has [RequireComponent] on it, so I can’t delete them. Unity will yell at me if I try 🙃
So now I can't use [RequireComponent] anymore, which means I have to add those components myself, make sure they serialize correctly, and ensure that everywhere I need them, they exist and are ready to be tweaked
This turned out to be necessary in so many more places than I expected, and honestly, it was made much worse by the fact that there is no reliable constructor-like place where I can initialize the mesh filter and mesh renderer
"What about Awake()?" you might ask. Well, Awake is called the first time the object is enabled. If you create a disabled game object, Awake will not be called, even though the component exists. So now, if you tweak a property of a Shape that has never been enabled, the Shape will try to assign per-instance material data to the mesh renderer. You know, the one that doesn't exist yet. So, we get a NullReferenceException
I didn't like this solution, but I essentially had to make sure that everywhere the MeshRenderer or MeshFilter is touched, I check that they exist before doing anything with them. I was mostly concerned with the performance overhead of checking this for all properties, but I think I managed to make it cheaper by using a bool to track initialization state rather than null-checking Unity objects (which can be expensive)
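A sketch of that guard, assuming a hypothetical EnsureComponentsExist helper:

```csharp
using UnityEngine;

public class Disc : MonoBehaviour {

	MeshFilter filter;
	MeshRenderer rnd;
	// plain bool check instead of null-checking UnityEngine.Object,
	// which goes through its overloaded, comparatively expensive ==
	bool componentsInitialized;

	void EnsureComponentsExist() {
		if( componentsInitialized )
			return;
		filter = GetComponent<MeshFilter>();
		if( filter == null )
			filter = gameObject.AddComponent<MeshFilter>();
		rnd = GetComponent<MeshRenderer>();
		if( rnd == null )
			rnd = gameObject.AddComponent<MeshRenderer>();
		componentsInitialized = true;
	}

	public Color Color {
		set {
			EnsureComponentsExist(); // works even if Awake never ran
			// ...assign per-instance material properties to rnd here
		}
	}

}
```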
So how do we delete them now? Well, I ended up with a nightmare of a code snippet, covering all the edge cases I discovered throughout all the bug reports and errors
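The actual snippet isn't reproduced here, but a rough, hypothetical sketch of the general shape of it could look like this; note that the real code has to handle even more edge cases, prefabs in particular:

```csharp
using UnityEngine;

public partial class Disc : MonoBehaviour {

	// Sketch: when the Shape is destroyed, take the two helper
	// components down with it
	void OnDestroy() {
		MeshFilter mf = GetComponent<MeshFilter>();
		MeshRenderer mr = GetComponent<MeshRenderer>();
		if( Application.isPlaying ) {
			// play mode: deferred destruction is fine
			if( mf != null ) Destroy( mf );
			if( mr != null ) Destroy( mr );
		} else {
			#if UNITY_EDITOR
			// edit mode: destroying sibling components during OnDestroy
			// is not allowed, so defer the cleanup by one editor tick
			UnityEditor.EditorApplication.delayCall += () => {
				if( mf != null ) Object.DestroyImmediate( mf );
				if( mr != null ) Object.DestroyImmediate( mr );
			};
			#endif
		}
	}

}
```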
*sips hot chocolate*
It would be nice if we could just inherit from Renderer, wouldn't it? And, you know, not have to do any of the above
Okay, so that was the first issue, moving on to the second one
I want Shapes to be performant, so I have to support some form of batching or instancing. I opted for instancing, because Shapes is very heavy on per-instance properties, and instancing batch-renders renderers that all use the same mesh and shader. This is perfect for Shapes: almost all 2D shapes use just one mesh
However, because Shapes uses a single mesh for each type of shape, usually a quad, the bounding box of each mesh renderer will be based on that mesh. This would be fine if Shapes didn't modify the vertices on the GPU, but Shapes very much does
If you set the radius of a Disc, the final vertex positions on the GPU will not match the vertex positions on the CPU, which the mesh bounds are based on. So the bounding box, from the perspective of all C# code, is no longer representative of the rendered output whenever the shader displaces vertices to be larger or smaller than the mesh bounds. More often than not, there's a huge difference between the two
So, all we have to do is — oh god wait…
💔 You can’t override Renderer bounds 💔
MeshRenderer does not allow you to set bounds to be used instead of the mesh data bounds. This matters because *takes a deep breath*
Frustum culling is based on renderer bounds
The renderer bounds are used to determine whether or not objects should be culled before rendering, so if the bounds are wrong, the frustum culling will be wrong too
Bounds smaller than the render? The object will be culled even when visible
Bounds larger than the render? The object will be rendered even though it’s completely off screen
I can't override what bounds renderers use. However, there is another thing I can do: modify the bounds of the mesh itself. But, remember how instancing relies on grouping objects that use the same mesh?
Guess what? Changing the bounds of a mesh breaks instancing.
I have yet to figure out a solution to this one. My horrible workaround for now has been to make the mesh bounds cover the observable universe, which means Shapes effectively has no frustum culling, because everything is always deemed visible. (There's a setting in Shapes to tweak this threshold; if you know how big your biggest shape is, you can still get some frustum culling out of it)
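The workaround itself is tiny — a sketch, where BOUNDS_SIZE is a made-up constant standing in for the configurable threshold:

```csharp
using UnityEngine;

public static class BoundsInflater {

	// made-up constant; in practice this is a configurable threshold
	const float BOUNDS_SIZE = 99999f;

	// Sketch: inflate the shared mesh bounds so the renderer is
	// never frustum culled. Every Shape using this mesh keeps the
	// same mesh instance, so instancing still works
	public static void InflateBounds( Mesh mesh ) {
		mesh.bounds = new Bounds( Vector3.zero, Vector3.one * BOUNDS_SIZE );
	}

}
```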
Now, instancing is generally extremely performant on its own, so even if lots of things are off screen it’ll perform well, but it’s still a waste to not have frustum culling, and, it’s kinda frustati — *clears throat* frustrating, that there’s just no way I can have per-renderer bounds and keep instancing.
Scene view F-focusing is based on renderer bounds
Pressing F to focus on a game object with a mesh renderer will make the scene view camera frame the bounds of that renderer. Neat, right? Except, remember how
“my horrible workaround for now has been to make the mesh bounds cover the observable universe”
Turns out, there is no way to change this behavior across the board. The only way to override the focus bounds right now, implementing OnGetFrameBounds in a custom Editor, has no multi-select support and only works when that Editor (custom inspector) is visible. On top of that, the huge mesh bounds also seem to break navmesh baking
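For completeness, the OnGetFrameBounds override looks something like this — again a sketch, with a hypothetical DiscEditor and Radius property:

```csharp
using UnityEngine;
using UnityEditor;

[CustomEditor( typeof(Disc) )]
public class DiscEditor : Editor {

	// tell Unity we want to override what F frames in the scene view
	bool HasFrameBounds() => true;

	Bounds OnGetFrameBounds() {
		Disc disc = (Disc)target;
		// return bounds matching the shape as actually rendered,
		// instead of the inflated observable-universe mesh bounds
		return new Bounds( disc.transform.position, Vector3.one * disc.Radius * 2f );
	}

}
```

This only kicks in while this inspector is the one shown, which is exactly the limitation described above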
So for now, this is just an unfixable issue in Shapes