The next thing we could consider is flipping or mirroring our source texture (which by now is rotated and nicely distributed along our wall or whatever surface) in each individual triangle. Here I should probably add an image to show what I mean.
So, the first image shows our source texture, and the second one is a nice hexagon, which we created by rotating our source texture.
The third one is more complex. As in the second one, we get its shape and structure by rotating the original source texture, but in addition
to that we mirror, flip, and rotate the source texture after it is already in its correct position; meaning, only after we rotate it.
Again, this is hard to explain in words, so just have a closer look at the third image and try to imagine what actions must be taken to get that
result using only the source image.
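To make the "rotate first, then mirror" order concrete, here is a minimal sketch using 2x2 transform matrices. All names here are hypothetical (the tool's actual internals are not shown in this post); the point is only that the per-tile mirror is composed *after* the placement rotation.

```python
import math

def rotation(deg):
    """2x2 rotation matrix for an angle in degrees."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

# Mirror across the vertical axis (flip horizontally).
MIRROR_X = [[-1, 0], [0, 1]]
IDENTITY = [[1, 0], [0, 1]]

def compose(m2, m1):
    """Matrix product m2 @ m1: apply m1 first, then m2."""
    return [[sum(m2[i][k] * m1[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tile_transform(slot, mirrored):
    """Transform for triangle slot 0..5 of the hexagon:
    rotate the tile into place first, then optionally mirror it."""
    placement = rotation(60 * slot)
    return compose(MIRROR_X if mirrored else IDENTITY, placement)
```

Because the mirror is applied on top of the already-rotated tile, the hexagon's overall structure stays intact while each triangle can still be individually flipped.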
Ok, so let's imagine our new tool is already capable of achieving what we described above. Why stop there? Let's add some more cool stuff.
The next thing would be more textures. So far we used one source texture: we created a triangle out of it, then rotated and copied it until
our imaginary surface was nicely covered.
The result is that every element of our surface, meaning the original texture, is the same everywhere. It is rotated, flipped, mirrored, but essentially it
is the same texture, and it doesn't take long to spot that. Feels rather unreal, right?
So imagine now that we have a set of source textures. They are the same in shape (the image in the bitmap forms the same triangle) and presumably the image itself is the same too,
but with slightly different coloring; or some of the source textures have some cracks or dirt; or maybe they are just made out of different glass,
and therefore reflect light in a different way; or some of them have mirrors where the others have color.
So we need the ability to use different source textures in our final image generation. The choice might be random (for forming cracks) or very precise and
deliberate (say, for constructing other patterns with different materials, while preserving the original image).
The interface for this is still quite an open question. Any input?
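As one possible answer to that interface question, here is a hypothetical sketch that supports both modes described above: weighted random selection of a variant per tile, plus explicit per-tile overrides for deliberate patterns. The class and parameter names are my own assumptions, not part of the tool.

```python
import random

class VariantPicker:
    """Hypothetical interface: choose a source-texture variant per tile.

    variants  -- list of variant ids (e.g. "clean", "cracked", "dirty")
    weights   -- relative probabilities for random placement
    overrides -- explicit (col, row) -> variant mapping for deliberate patterns
    seed      -- fixed seed so the generated surface is reproducible
    """
    def __init__(self, variants, weights=None, overrides=None, seed=0):
        self.variants = variants
        self.weights = weights or [1] * len(variants)
        self.overrides = overrides or {}
        self.rng = random.Random(seed)

    def pick(self, col, row):
        # A deliberate placement always wins over the random choice.
        if (col, row) in self.overrides:
            return self.overrides[(col, row)]
        return self.rng.choices(self.variants, weights=self.weights)[0]
```

With this shape, "mostly clean glass with occasional cracks" is just a skewed weight list, while a precise mosaic of materials is a populated `overrides` map; the two can coexist on the same surface.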
After we implement such an option in our generator, the next logical step would be context awareness. What do I mean by that? Well, we said that
we possibly want cracks or dirt, and we achieve that by randomly placing individual tiles which have some dirt applied in their source image.
The next step would be for our generator to recognise the world. It should assume a rain direction (top-down along the z coordinate, with some random directional shifts).
And this is all because dirt does not appear on walls in totally random places: we can distinguish areas where we will have more of it.
Where the wall reaches the ground, for one. Also, all sorts of dirty water form nice patterns beneath windows. And so on. I will discuss
something very similar in the next post. (Can't link to it before I actually write the next post, can I :) )
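The "more dirt where the wall reaches the ground" idea can be sketched as a simple height-based weighting function. This is only an illustration of the principle under my own assumptions (linear blending, a fixed ground zone); the actual generator could use any falloff curve, and the below-window streaks would need their own term.

```python
def dirt_probability(z, wall_height, base=0.05, ground_boost=0.5, ground_zone=0.2):
    """Probability that a tile at height z gets a dirty variant.

    z           -- tile height above the ground (0 = bottom of the wall)
    wall_height -- total wall height
    Rain is assumed to run top-down, so grime accumulates near the ground:
    within the lowest `ground_zone` fraction of the wall, extra dirt is
    blended in linearly, peaking at the very bottom.
    """
    frac = z / wall_height
    if frac < ground_zone:
        return min(1.0, base + ground_boost * (1 - frac / ground_zone))
    return base
```

Feeding this probability into the random variant selection would turn uniformly scattered dirt into the ground-hugging grime pattern we actually see on real walls.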
more coming in part 3, very soon :)