# Intro
Scattering in nature has a reason behind it: there is always a cause that drove an object to end up in a certain spot and state. Often that cause is part of a chain reaction, where the reason one object ended up in a certain position or state is that another object is in a certain position or state. In this sense, natural scattering has a hierarchy to it.
For example: a bunch of leaves is scattered on the ground most likely because a tree stands not far away. Around that tree you can expect to see exposed knots of roots, and the tree has likely provided shade to other plants that needed it. Those plants, in turn, are good places to spawn butterflies and target points for their boid simulation, and so on.
To scatter each of these elements by simply rolling the dice on points across the ground would be to ignore the logical or artist-driven reasons behind their placement, and it creates a very chaotic-looking result. In the current system the user is expected to hand paint the areas in which the placement of these objects would make sense, taking time away from directing the scene at a higher level. A scattering method that takes this kind of cause and effect into account would therefore be extremely beneficial for creating believable natural environments quickly. Luckily, creating the key building block for such complex scattering in the geometry nodes system shouldn't be too difficult. At the core of all those reactions is the idea that, in this hierarchy, objects are likely to be found close to their parent. This design document proposes a point child scattering method.
# Breakdown
The input could be any point(s) in 3D space, whether it be a hand placed object, the vertices on a model, the result of a previous scatter or some data imported into the scene. In this example we will examine a single point.
{F9585018}
For each input point we choose a number of points to place randomly around the provoking point as our output points.
We can easily do this by plotting a random spot on a disk of a defined radius in [[ https://en.wikipedia.org/wiki/Polar_coordinate_system | polar space ]],
```
const float r = random_gen.get_float() * radius;
const float theta = random_gen.get_float() * M_PI * 2.0f;
const float h = 0.0f;
```
then we convert it back to Euclidean space,
```
const float3 offset_position = float3(
    r * cosf(theta),
    r * sinf(theta),
    h);
```
(Since we are working on a flat plane for now, we will skip orientation.) Adding this offset back to the provoking point's position gives us a random spot around the original point.
```
return provoking_position + offset_position;
```
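Putting the three snippets above together, the per-point sampling step can be sketched as one self-contained function. This is a minimal sketch, not the actual patch: it substitutes the C++ standard library's RNG for Blender's `random_gen` and reduces `float3` to a plain struct.

```cpp
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

/* Stand-in for Blender's float3 type. */
struct float3 {
  float x, y, z;
};

/* Scatter `count` child points on a flat disk of `radius` around
 * `provoking_position`, following the steps described above. */
std::vector<float3> scatter_children(const float3 &provoking_position,
                                     const float radius,
                                     const int count,
                                     const uint32_t seed)
{
  std::mt19937 rng(seed);
  std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

  std::vector<float3> points;
  points.reserve(count);
  for (int i = 0; i < count; i++) {
    /* Random polar coordinates on the disk. */
    const float r = uniform(rng) * radius;
    const float theta = uniform(rng) * float(M_PI) * 2.0f;
    /* Convert back to Euclidean space; flat plane for now, so no height. */
    const float3 offset_position{r * std::cos(theta), r * std::sin(theta), 0.0f};
    points.push_back({provoking_position.x + offset_position.x,
                      provoking_position.y + offset_position.y,
                      provoking_position.z + offset_position.z});
  }
  return points;
}
```

Every output point is guaranteed to land within `radius` of the provoking point, which is the property the rest of this document builds on.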
If we repeat this step for each point we want to create around every input point, we end up with something like this:
{F9585022}
(Take note: this node only outputs the newly created points; the original input is displayed with a merge at the end of the network.)
But now we have encountered a problem: many of our points are too close to the original point, and the artist would likely not be satisfied with the results. Luckily the solution is easy: we just need to offset the possible radius away from the center by an artist-defined amount, which is a simple addition. Our random radius now becomes:
```
const float r = random_gen.get_float() * outer_radius + inner_radius;
```
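As a sanity check, this adjusted radius always lands in the annulus between `inner_radius` and `inner_radius + outer_radius`. A minimal sketch, again substituting a standard RNG for `random_gen`:

```cpp
#include <random>

/* Random radius on the annulus between inner_radius and
 * inner_radius + outer_radius. */
float random_annulus_radius(std::mt19937 &rng,
                            const float inner_radius,
                            const float outer_radius)
{
  std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
  return uniform(rng) * outer_radius + inner_radius;
}
```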
As a result we are able to define an inner radius for the scatter.
{F9585046}
Note that the outer radius now extends the total distance at which points are scattered; whether this is intuitive could be a point of debate. You can look at the inner radius as defining the size of the object being scattered around; creating a torus often works in a similar manner. To see the next point more clearly, the settings of the scatter have been tweaked to something more reasonable.
{F9585058}
The scatter has now completed its task. While it doesn't seem like much, this is an extremely important key to creating natural hierarchical scatters. At this point the user is free to work with the attributes, delete points (such as ones that might intersect buildings and/or walkways), or, once simulation nodes come into play, simulate these points (such as rocks falling or leaves blowing). In our example, the results are run into another point scatter to produce another level of scattering. This is where the nested, or hierarchical, element comes into play: notice the green dots are now scattered around our previously created blue dots, which are in turn scattered around the red dot.
{F9585073}
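The nesting itself is nothing more than feeding one scatter's output into the next as its input points. A hypothetical two-level sketch (standard RNG instead of `random_gen`; the counts and radii are made-up example values):

```cpp
#include <cmath>
#include <random>
#include <vector>

struct float3 {
  float x, y, z;
};

/* One scatter level: for every input point, place `count` children on the
 * annulus between `inner` and `inner + outer` around it. */
std::vector<float3> scatter_level(const std::vector<float3> &inputs,
                                  const int count,
                                  const float inner,
                                  const float outer,
                                  std::mt19937 &rng)
{
  std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
  std::vector<float3> outputs;
  outputs.reserve(inputs.size() * count);
  for (const float3 &parent : inputs) {
    for (int i = 0; i < count; i++) {
      const float r = uniform(rng) * outer + inner;
      const float theta = uniform(rng) * float(M_PI) * 2.0f;
      outputs.push_back({parent.x + r * std::cos(theta),
                         parent.y + r * std::sin(theta),
                         parent.z});
    }
  }
  return outputs;
}
```

Chaining calls reproduces the red → blue → green hierarchy above: each level's output becomes the next level's provoking points, and the point count multiplies at every level.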
Now let's look at how this can scale up to give the user control over a large number of input points. Here more than one input point is scattered using Poisson disk distribution. In this design the user can multiply the inner and outer radius using attributes (in this example I picked the radius attribute to control the outer radius and the scale attribute to control the inner, but ideally the user should be able to define their own). Here the inner and outer radii of the first scatter are being controlled.
{F9585079}
{F9585081}
The user can then drive the radii randomly, or with their own math, painting, or textures, to tune the results of the scatter.
{F9585092}
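Per-point attribute control amounts to multiplying the base radii by the attribute values before sampling. A hypothetical sketch (the function and parameter names are illustrative; the attribute choice mirrors the radius/scale example above):

```cpp
#include <cstddef>
#include <vector>

/* Compute per-point radii: base values multiplied by the chosen attributes.
 * Here `radius_attribute` drives the outer radius and `scale_attribute`
 * the inner one, matching the example above. */
void compute_radii(const std::vector<float> &radius_attribute,
                   const std::vector<float> &scale_attribute,
                   const float base_outer,
                   const float base_inner,
                   std::vector<float> &outer_radii,
                   std::vector<float> &inner_radii)
{
  const size_t n = radius_attribute.size();
  outer_radii.resize(n);
  inner_radii.resize(n);
  for (size_t i = 0; i < n; i++) {
    outer_radii[i] = base_outer * radius_attribute[i];
    inner_radii[i] = base_inner * scale_attribute[i];
  }
}
```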
# Results
In this example, a bunch of boulders are scattered randomly around a weight-painted beach. With the help of the point child distribution, rocks are scattered around those boulders, and patches of gravel around those rocks. Ignoring the placeholder art assets, this produces a rather believable looking scene.
{F9585089}
Compare this to the same scene without the point child approach. Here the rocks and gravel are distributed randomly using the same density attribute as the boulders. The resulting image feels more chaotic and no longer has a natural order: the boulders no longer blend into the beach, and the eye gets lost in noise, struggling to pick a subject. Without point child scattering this would require manual artist intervention to correct, and when the layout of the scene changes, so must their fix.
{F9585105}
This forest example helps break down what is achievable with this kind of approach: multiple assets can be populated based on a single point. The result can be taken further by layering more scattering systems and instances to create a fleshed-out forest environment.
{F9585125}
# Issues and workarounds
There are still some problems and subjects that would need to be addressed.
1. So far in this design a fixed number of points is created for each input point. This helps performance, since the size of the final result can be deduced and allocated up front. However, it can create unnatural looking results if assets are too easy to pick out. One approach might be to drive a chance that each new point is skipped, using the density attribute.
2. Unfortunately (whether through an oversight or a misunderstanding of Blender's math), so far I have not been able to find a concrete solution for orienting the distribution disk along the surface. The disk should face the same direction as the point it is emitted from. {F9585140}
3. Even if 2. is addressed, orienting the disk along the orientation of the incoming point isn't always enough (especially if that point was distributed far from the original on bumpy terrain); there has to be a way to re-project the created points back onto the intended geometry. Considering that the original terrain geometry may not be accessible in the current point distribute node (it could be farther upstream in the node network), and that more work could be done before re-projecting (including culling), it might be worth leaving this up to an external "shrink-wrap" node.
4. An alternative solution/workaround might be to let users drive scatter density via a "point density" attribute node. However, this would rely on the mesh tessellation of the geometry users wish to scatter on, which could definitely be an issue for some archviz, background scenery, or low-poly game asset use cases.
5. The point distribute node is currently designed to work only on mesh components as inputs. It will likely have to accommodate things such as volumes and point clouds in the future anyway, so this shouldn't be too big of a concern. However, the approach taken in the prototype patch is hacky and requires code duplication.
6. The inner radius offset doesn't fix all intersections: the scatter is still not aware of neighboring points or of points higher up the chain, which can lead to some obvious overlaps. As with the terrain intersection issue, it seems a bit much to solve this inside the scatter node. One approach might be to provide a node that iteratively "relaxes" a point cloud until fewer (or none) of their radii intersect. A benefit of this approach is that it could be applied to other scattering approaches as well. {F9585159}
7. (If there are any other things that might need addressing please provide info)
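The per-point chance mentioned in 1. could look like a simple rejection step inside the scatter loop. A minimal sketch, where `density` is assumed to be a per-point attribute in [0, 1] and the RNG is again a stand-in for `random_gen`:

```cpp
#include <random>

/* Decide, for each of `count` candidate children, whether the point is
 * actually created: a candidate survives with probability `density`. */
int count_surviving_points(const int count, const float density, std::mt19937 &rng)
{
  std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
  int survivors = 0;
  for (int i = 0; i < count; i++) {
    if (uniform(rng) < density) {
      survivors++;
    }
  }
  return survivors;
}
```

The trade-off is that the fixed up-front allocation from 1. would become an upper bound rather than an exact size.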
# Prototype Patch
(This prototype was made against an earlier version of the codebase and is in no shape to be merged into the current implementation.)
https://developer.blender.org/D10133