Test 1: Win8.1, AMD Radeon HD 7750
Test 2: Win7, Intel HD Graphics 2000
Doesn't work in either 2.67 or 2.69
Mirror "threshold" parameter doesn't affect the rendered result. Expected changes in the amount of noise in the reflection.
Here's the result I'm getting regardless of the value of that parameter: http://imgur.com/a/5MdAh
--- Steps for others to reproduce the error (preferably based on attached .blend file) ---
Render the scene with different threshold values.
Event Timeline
Cannot confirm this on Linux (see attached screenshot showing threshold 1.0 and 0.0 - also render times are as expected [longer for 0.0 threshold]).
This is most strange. I cannot seem to reproduce the bug anymore. Or was the change too small to notice at first? Anyway, it appears that I might have mistaken having too much noise in the picture for the described behavior. If so, sorry about that.
Though I'd like to understand more about that noise.
One would expect that with more than 256 samples and a 0% threshold the noise would be (almost) completely invisible, but even with 1024 samples that does not appear to be the case. Judging by the number of distinct colors visible in the render (with antialiasing turned off, of course), far fewer samples seem to actually be used. I'd expect each additional sample to double the number of unique colors generated for the reflection of a single-color object, yet with 16 samples I counted only 5 unique colors. Could that perhaps be worth investigating?
I am not an expert here, but I still think there is nothing wrong.
If you take two samples, you get 3 possibilities:
(1) both samples hit the background (let's say a "black" pixel)
(2) one sample hits the background, one the single-color object (let's say a "grey" pixel)
(3) both samples hit the single-color object (let's say a "white" pixel)
If you take three samples, you get 4 possibilities:
(1) all 3 black
(2) 2 black, 1 white ("dark grey")
(3) 1 black, 2 white ("light grey")
(4) all 3 white
If you take four samples, you get 5 possibilities:
(1) all 4 black
(2) 3 black, 1 white ("dark grey")
(3) 2 black, 2 white ("mid grey")
(4) 1 black, 3 white ("light grey")
(5) all 4 white
So first of all, I wouldn't expect the number of possible unique colors to double with each sample.
And second: these are _possibilities_. You are not guaranteed to get this grey distribution - there is no importance sampling in BI afaik. You could still get only pixels where (a) all samples hit the single-color object or (b) all samples hit the background, so you could still end up with only two "unique colors". It depends on the sampling pattern, where exactly the object lives in the pixel, and so on.
But like I said, I haven't looked into Blender Internal, so I don't know the details; I'm just thinking it might all be working as intended here...
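To make the "possibilities vs. guarantees" point concrete, here is a tiny toy simulation (plain Python, not Blender code; it assumes straightforward uniform sampling of a partially covered pixel, which is only my guess at what BI does here): with n samples per pixel there are only n + 1 possible pixel values, and how many of them you actually see depends on coverage and luck.

```
import random

def pixel_value(n_samples, coverage, object_value=1.0):
    """Average n_samples rays for one pixel; each ray hits the single-color
    object with probability `coverage`, otherwise the black background."""
    hits = sum(1 for _ in range(n_samples) if random.random() < coverage)
    return hits / n_samples * object_value

random.seed(0)
# With n samples there are only n + 1 possible pixel values (0/n, 1/n, ..., n/n);
# which of them actually show up depends on the coverage and on luck.
for n in (2, 3, 4, 16):
    seen = {pixel_value(n, coverage=0.3) for _ in range(10000)}
    print(f"{n:2d} samples -> {len(seen)} unique pixel values (of {n + 1} possible)")
```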
Indeed, you're right about the sampling possibilities; my math was wrong there.
I tried to implement a basic raytracer yesterday and got similar results. Also, the fact that the reflected material is extremely bright considerably increases the visibility of the noise, which is probably why I didn't see noise-free images even with 1024 samples.
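To put a rough number on that, here is a back-of-the-envelope binomial estimate (my own simplification, not taken from Blender Internal): the noise in the averaged pixel scales linearly with the brightness of the reflected object but only with the square root of the sample count, so a very bright material keeps the absolute noise visible even at 1024 samples.

```
import math

def reflection_noise_std(brightness, coverage, n_samples):
    """Standard deviation of the averaged pixel value when each of the
    n_samples rays independently hits a constant-brightness object with
    probability `coverage` (simple binomial model, no importance sampling)."""
    return brightness * math.sqrt(coverage * (1.0 - coverage) / n_samples)

# A 10x brighter reflected object gives 10x the absolute noise; quadrupling
# the samples from 256 to 1024 only halves it.
for brightness in (1.0, 10.0):
    for n_samples in (256, 1024):
        std = reflection_noise_std(brightness, coverage=0.5, n_samples=n_samples)
        print(f"brightness {brightness:4.1f}, {n_samples:4d} samples -> std {std:.4f}")
```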
So I guess there's nothing wrong with the renderer; it's just that a combination of raytracing's weak spots added up to a visual annoyance.