Until now I had one mesh per brush, because I figured that would make it easier for users to do whatever they want with the meshes they generate: they could use individual brushes for navigation or occlusion culling, or give individual brushes different physics materials. The plan was to combine the meshes just before the game is played in the editor, or before the game is published to a particular platform.
Now I've finally hit a situation that clearly shows I need to combine all the meshes somewhere in the editor, in real-time. This is unfortunate, because I'd have to force users to create parent objects for their brushes to hold the combined meshes, and I'm afraid this could be confusing: a user creates a brush (without a parent object) and nothing shows up.
The problems I bumped up against all have to do with Unity's lighting.
Unity doesn't seem to like meshes being split into multiple objects. For some reason, even when the vertices of two objects align exactly (I checked this in the debugger), the seam between them leaks light. Maybe Unity introduces some minor floating-point error somewhere, I have no idea. Still, the light leakage is huge for seams that must be incredibly tiny: no matter how I orient the camera I cannot see any seams, not even tiny flickering pixels.
It is possible to get the lighting seam-free by messing around with each light's bias parameters, but that doesn't seem like a long-term solution to me. I don't want to force users to spend hours messing around with lighting parameters just to make this work.
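For reference, this is the kind of per-light fiddling I mean. A sketch (the values are made up, and what "works" depends entirely on the scene, which is exactly the problem):

```csharp
using UnityEngine;

// Hypothetical helper that lowers the shadow bias on every light in the
// scene to hide the seams between separately rendered brush meshes.
// The numbers are placeholders; every scene needs different ones.
public static class ShadowBiasTweaker
{
    public static void ReduceBias(float bias = 0.01f, float normalBias = 0.1f)
    {
        foreach (var light in Object.FindObjectsOfType<Light>())
        {
            light.shadowBias       = bias;       // depth offset used by the shadow test
            light.shadowNormalBias = normalBias; // extra offset along the surface normal
        }
    }
}
```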
Another, much bigger, problem is that lightmapping individual brushes is problematic, to say the least. Technically it might be possible to join all the brushes and force a re-bake of all the lighting when the game is published to a platform, but the game would then look different at run-time than it does in the editor, and publishing would be SLOW. So I'm pretty sure all I'd accomplish is receiving tons of hate mail and assassination attempts.
So now I'm rewriting some code so that there are higher-level components that capture all the meshes of their child brushes and combine them in the editor, in real-time. One problem is that Unity would show this combined mesh as a single mesh in the editor: select it and you select all the meshes of all the brushes combined into it at the same time.
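The combining itself is the easy part. A minimal sketch of such a parent component, using Unity's CombineInstance API (the Rebuild signature is made up; it assumes the CSG code hands over per-brush meshes and their local-to-parent transforms, and it glosses over per-brush materials):

```csharp
using UnityEngine;

// Sketch of a higher-level component that merges the meshes generated by
// its child brushes into a single mesh for rendering.
[ExecuteInEditMode]
[RequireComponent(typeof(MeshFilter))]
public class CombinedBrushMesh : MonoBehaviour
{
    public void Rebuild(Mesh[] brushMeshes, Matrix4x4[] brushTransforms)
    {
        var combine = new CombineInstance[brushMeshes.Length];
        for (int i = 0; i < brushMeshes.Length; i++)
        {
            combine[i].mesh      = brushMeshes[i];
            combine[i].transform = brushTransforms[i]; // brush local-to-parent
        }

        var combined = new Mesh();
        combined.CombineMeshes(combine, mergeSubMeshes: true, useMatrices: true);
        GetComponent<MeshFilter>().sharedMesh = combined;
    }
}
```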
Fortunately I was already rendering brush outlines with custom code and was forced to implement my own ray-brush intersection / drag & drop code in the editor anyway, so I can actually work around Unity's limitations. It won't be pretty, but it'll work.
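Picking a brush with a ray doesn't need the combined mesh at all: assuming brushes are convex and stored as sets of outward-facing planes (the classic CSG representation), you can clip the ray against the planes directly. A rough sketch of that idea (not my actual code):

```csharp
using UnityEngine;

public static class BrushPicking
{
    // Clips the ray's [tNear, tFar] interval against every brush plane.
    // If the interval survives, the ray enters the brush at tNear.
    public static bool Raycast(Ray ray, Plane[] brushPlanes, out float distance)
    {
        float tNear = 0f, tFar = float.MaxValue;
        foreach (var plane in brushPlanes)
        {
            float denom = Vector3.Dot(plane.normal, ray.direction);
            float dist  = plane.GetDistanceToPoint(ray.origin);
            if (Mathf.Approximately(denom, 0f))
            {
                // Ray parallel to this plane: if the origin is outside, no hit.
                if (dist > 0f) { distance = 0f; return false; }
                continue;
            }
            float t = -dist / denom;
            if (denom < 0f) tNear = Mathf.Max(tNear, t); // entering the half-space
            else            tFar  = Mathf.Min(tFar,  t); // leaving the half-space
            if (tNear > tFar) { distance = 0f; return false; }
        }
        distance = tNear;
        return true;
    }
}
```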
The silver lining is that doing things this way means there won't be a gazillion MeshFilters, MeshColliders and MeshRenderers in the scene anymore. The brush components can also be changed into relatively simple classes that can safely be removed at run-time without side effects. All that will remain at run-time are the meshes that the brushes generated.
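Something along these lines (a sketch; the real brush component obviously does a lot more in the editor):

```csharp
using UnityEngine;

// In a player build the brush component has no job left to do: the combined
// mesh has already been generated, so the component can remove itself.
[ExecuteInEditMode]
public class Brush : MonoBehaviour
{
    void Awake()
    {
        if (Application.isPlaying)
            Destroy(this); // only the generated meshes remain at run-time
    }
}
```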
Update:
I was wrong! The light leakage had to do with the mesh's size in object-space (not the size of the mesh in world-space). Apparently the shadow bias is calculated in the mesh's object-space, but then applied in world-space? So: very large mesh + reasonable bias + downscaling the mesh in world-space = HUGE shadow bias that causes light leakage.
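With some toy numbers (and assuming my guess about what's happening is right), the effect looks like this:

```csharp
// All numbers made up: the bias is sized against the mesh in object-space,
// but the offset is applied in world-space, so it never receives the
// downscale that the geometry itself gets from the transform.
static class BiasExample
{
    const float biasParam  = 0.05f; // a "reasonable" bias setting
    const float worldScale = 0.01f; // huge mesh, scaled down 100x in the scene

    // Relative to the visible geometry the bias acts 1/worldScale too large:
    const float effectiveBias = biasParam / worldScale; // behaves like 5.0
}
```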
I (mostly) implemented the part where all the CSG-generated meshes are joined into one Unity mesh, and got a nice speed boost as well. I guess in retrospect, uploading all those tiny Unity meshes was kind of slow (which makes sense).
... now to fix a weird bug that causes all the brushes to be re-CSG-ed every frame, non-stop. *sigh*
(good thing it's still fast enough)
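One standard way to attack that kind of bug is a dirty check: hash whatever state feeds the CSG pass and only rebuild when the hash changes. A hypothetical sketch (names made up; a real version would also hash brush shapes, not just transforms):

```csharp
using UnityEngine;

public class CSGModel : MonoBehaviour
{
    int lastStateHash;

    void Update()
    {
        // Combine the child brush transforms into a single cheap hash.
        int stateHash = 17;
        foreach (Transform brush in transform)
            stateHash = stateHash * 31 + brush.localToWorldMatrix.GetHashCode();

        if (stateHash == lastStateHash)
            return;           // nothing changed: skip the rebuild

        lastStateHash = stateHash;
        RebuildCSG();         // stand-in for the actual CSG pass
    }

    void RebuildCSG() { /* ... */ }
}
```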