
Cycles: Added generic "Lens Polynomial" camera model
ClosedPublic

Authored by Håkan Ardö (hakanardo) on Sep 29 2021, 1:04 PM.

Details

Summary

This model allows real-world cameras to be modelled by specifying the
coefficients of a 4th-degree polynomial that relates a pixel's distance
(in mm) from the optical center on the sensor to the angle (in
radians) of the world ray that is projected onto that pixel.
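In pseudo-Python, the forward mapping described above might look like the following (a minimal sketch only; the coefficient names k0..k4 and the optical-axis-along-+z convention are assumptions for illustration, not the patch's actual code):

```python
import math

def polynomial_ray_direction(x_mm, y_mm, k):
    """Map a sensor position (in mm from the optical center) to a world ray.

    k = (k0, ..., k4) are the polynomial coefficients; theta is the angle
    (in radians) between the ray and the optical axis (assumed along +z).
    """
    r = math.hypot(x_mm, y_mm)
    theta = k[0] + k[1]*r + k[2]*r**2 + k[3]*r**3 + k[4]*r**4
    phi = math.atan2(y_mm, x_mm)  # azimuth of the pixel around the axis
    sin_t = math.sin(theta)
    # Unit direction vector of the world ray projected onto this pixel.
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))
```

Because theta is an arbitrary polynomial of r, nothing limits it to 90 degrees, which is how the model covers lenses with a FoV beyond 180 degrees.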

This was implemented as a "Panorama Type" of the Panoramic lens type,
since that code path was the closest. But this feature could also be used
to model, for example, lens distortion in projective cameras.

Note that the inverse projection (direction_to_lens_polynom) is
currently untested. Where is it used and how do I best verify it?

Diff Detail

Repository
rB Blender
Branch
lenspoly_squashed2 (branched from master)
Build Status
Buildable 17717
Build 17717: arc lint + arc unit

Event Timeline

Håkan Ardö (hakanardo) requested review of this revision.Sep 29 2021, 1:04 PM
Håkan Ardö (hakanardo) created this revision.

Hi,
this is my first Blender contribution. Could you have a look at this, or point me towards whom I should talk to?

Thanks!
Brecht Van Lommel (brecht) requested changes to this revision.Sep 29 2021, 2:12 PM

I think it's great to have more camera models to match real cameras; however, there should be some consistency on this in Blender. We should implement the lens distortion models from Blender's motion tracking rather than adding another model.

Inverse projection is used with the Window texture coordinates. It can be verified by checking that those coordinates match the result of other projection types.

Better default values, as well as precision and step, would make adjustments easier.

Brecht Van Lommel (brecht) requested changes to this revision.Sep 29 2021, 2:36 PM
This revision now requires changes to proceed.Sep 29 2021, 2:36 PM

How would I reuse a camera model from Blender's motion tracking as an output (render target) camera model?

Note also that the model proposed here is more general than the OpenCV distortion model, which I believe is used in Blender's motion tracking. It can, for example, model fisheye lenses with a FoV larger than 180 degrees.

Would implementing this model in Blender's motion tracking also mean that we need to implement support for estimating the parameters of the model from camera motion tracks?

How would I reuse a camera model from Blender's motion tracking as an output (render target) camera model?

You'd need to look into distortion_models.h in libmv, and figure out how to map these lens distortion models to Cycles.

Note also that the model proposed here is more general than the OpenCV distortion model, which I believe is used in Blender's motion tracking. It can, for example, model fisheye lenses with a FoV larger than 180 degrees.

Blender's motion tracking has a few different models. I'm not sure how exactly they compare to OpenCV or the model proposed here.

Generality is good, but interoperability is important if users want to be able to configure these values to match real world cameras. Is there a name for the model you are proposing? Is it used in any other software, is there some way for users to obtain these parameters?

Would implementing this model in Blender's motion tracking also mean that we need to implement support for estimating the parameters of the model from camera motion tracks?

If it turns out adding this model is useful to Blender's motion tracking, then yes it would have to work for estimation.

The Polynomial model is the closest to what is used in OpenCV; in Blender we are only lacking its tangential coefficients. The Division model is the easiest one for representing wide-angle lenses. However, there are no fisheye models in the motion tracker at this time.

For integrating the models into Cycles, you need to be aware that depending on the model, its analytic form will be either Apply (applying the distortion) or Invert (undoing it), and the reverse operation involves an LM solver. You don't want to run a solver for every sample of a pixel, not to mention that this would not be possible on the GPU.

A good approach would be to mimic the lens distortion compositor node, which first pre-calculates a distortion "grid" that is then interpolated to get sub-pixel distorted (or undistorted) coordinates.
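The pre-calculated-grid idea can be sketched as a 1-D lookup table for the polynomial model, assuming theta(r) is monotonic over the sensor (a hypothetical illustration, not the compositor node's or Cycles' actual code):

```python
import numpy as np

def build_inverse_table(k, r_max_mm, n=256):
    """Precompute a theta -> r lookup over radii [0, r_max_mm].

    k = (k0, ..., k4) are the polynomial coefficients; assumes theta(r)
    is monotonically increasing, as for physically plausible lenses.
    """
    r = np.linspace(0.0, r_max_mm, n)
    theta = k[0] + k[1]*r + k[2]*r**2 + k[3]*r**3 + k[4]*r**4
    return theta, r

def invert_theta(theta_query, table):
    """Invert the lens polynomial by interpolating the precomputed grid,
    avoiding a nonlinear solve per pixel sample (infeasible on the GPU)."""
    thetas, rs = table
    return np.interp(theta_query, thetas, rs)
```

The trade-off is one table build per camera update versus a per-sample solver; the interpolation error shrinks with the grid resolution n.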

The proposed model is what you get when you measure a lens using for example:

https://trioptics.com/products/imagemaster-hr-tempcontrol-universal-image-quality-mtf-testing/

It is supported as a render target by for example NVIDIA Omniverse.

The distortion models in "distortion_models.h" all seem to be based on converting to/from "normalized coordinates", which I guess is what a projective camera with f=1 and principal point (0, 0) would produce. That means we can never model a camera with a FoV > 180 degrees there, right?

How about I implement an export from one (or maybe several) of the models there that would generate the parameters of the model proposed here? That way we would get interoperability (even if it does not utilize the full potential of this model) and still be able to generalize to fancier fisheye models.

If this is a somewhat standard distortion model for fisheye cameras, and these models are generally different than simple perspective camera models, then it seems reasonable to add this to Cycles without the motion tracker.

At least there is a path to consistency then, where motion tracking might add fisheye cameras in the future and Cycles might add distortion models for simple perspective camera models.

However I think the properties should then be set up differently. I think there should be a new fisheye_distortion_model enum with options None and Polynomial. And then there can be properties fisheye_polynomial_k0 to fisheye_polynomial_k4. I suggest k instead of c for consistency in Blender, and because I also see it used elsewhere (e.g. https://docs.nvidia.com/vpi/algo_ldc.html).

I think the distortion can be applied to both Fisheye Equidistant and Equisolid?

Consistency in properties is important, and I'm happy to adjust that.

We do, however, use this model for both fisheye and projective lenses. But I believe you are right that people working only with projective cameras typically use different distortion models, and are thus not interested in this model. In that respect, I suppose it makes sense to call this a fisheye model. So let's go with that if that's how you want it.

As for applying it to both Fisheye Equidistant and Equisolid: this is harder. Note that this model is not a distortion model in the same sense as the distortion models in, for example, "distortion_models.h". It is a full lens model, i.e. the polynomial relates pixels on the sensor to the directions in the world projected onto those pixels. So I'm not sure what it would mean to combine it with other lens models. This distinction is important when you want to handle FoV > 180 degrees, as in that case there is no obvious way to represent an intermediate "undistorted" image. Especially not if you want to handle fisheye and projective lenses in the same way.

I've now renamed things to fisheye. I've also changed the polynomial to be
expressed in degrees instead of radians, so that the parameters produced by
Trioptics can be used directly without scaling them, making it easier to set
the parameters right.

I did not move it to be a "distortion model" of the other fisheye models, as
I'm not sure that makes sense. If you still want it done that way, let me
know and I'll update. I did, however, look closer at the Equidistant model,
and it is not rotationally symmetric, so this model won't be able to produce
that kind of image. I suppose it would be possible to take the math this
model applies to the theta angle and apply it to the theta angle in the
Equidistant model, but I don't see how that would render usable images.

I've also added Python code for calculating the fisheye_polynomial_k*
parameters from some of the other camera models. It is currently only used
to make the default values produce a camera that is the same as the default
projective camera in Blender. The intention is that this could be used to
export the parameters of an estimated camera (with polynomial or division
distortion) from the motion tracking, and use them to set the
fisheye_polynomial_k* parameters. How to design the GUI for doing that is
however unclear to me; maybe that could be a future step?
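The fitting idea can be sketched as follows, using a perspective reference camera with theta(r) = atan(r / f) as the target; the function name and the fitting range are illustrative assumptions, not the actual code in the patch:

```python
import numpy as np

def fit_polynomial_from_perspective(focal_mm, r_max_mm, degree=4):
    """Fit (k0, ..., k4) so the lens polynomial reproduces a perspective
    camera, theta(r) = atan(r / focal_mm), over sensor radii [0, r_max_mm].

    Returns the coefficients ordered k0 first (constant term).
    """
    r = np.linspace(0.0, r_max_mm, 200)
    theta = np.arctan(r / focal_mm)
    # np.polyfit returns the highest-degree coefficient first;
    # reverse so the result reads k0..k4.
    return np.polyfit(r, theta, degree)[::-1]
```

The same least-squares approach would work for any other model that can be sampled as theta(r), which is what makes a parameter export from the motion tracker's polynomial or division models plausible.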

"I've also changed the polynomial to be expressed in degrees instead of radians, so that the parameters produced by Trioptics can be used directly without scaling them, making it easier to set the parameters right."

You can expose degrees to the users while having the values internally being radians. See how we use subtype='ANGLE', for the fisheye_fov for example. This way the properties get the right decorator (°) as well.

I did, however, look closer at the Equidistant model, and it is not rotationally symmetric.

I'm not sure I know what you mean. At the moment the fisheye modes don't support Lens Shift. I wouldn't mind if they did, I just never considered the need for that.

You can expose degrees to the users while having the values internally being radians. See how we use subtype='ANGLE', for the fisheye_fov for example. This way the properties get the right decorator (°) as well.

Nice! I'll try that.

I did, however, look closer at the Equidistant model, and it is not rotationally symmetric.

I'm not sure I know what you mean. At the moment the fisheye modes don't support Lens Shift. I wouldn't mind if they did, I just never considered the need for that.

Hmm, I think I've mixed up the Equidistant and Equirectangular models, sorry. "Equisolid + Lens Polynomial Distortion" and "Equidistant + Lens Polynomial Distortion" would become the exact same model, wouldn't they?

Use radians internally and degrees in GUI.

Add fisheye_lens_polynom_from_equidistant parameter importer math

GPU rendering support

rebased on master

I will look at committing this patch in a few weeks, right now I'm too busy with getting things ready for 3.0. But we should be able to add this in 3.1.

From a quick glance at the code, please use consistent terminology:

  • Use "polynomial" everywhere instead of "polinomial", "polynom" and "poly".
  • Use consistent identifier and UI name, e.g. fisheye_polynomial_k0 should be "Fisheye Polynomial K0" instead of "Lens Poly C0", and can be abbreviated to "K0" in the UI.

I will look at committing this patch in a few weeks, right now I'm too busy
with getting things ready for 3.0. But we should be able to add this in 3.1.

Sounds good.

From a quick glance at the code, please use consistent terminology:

Fixed!

I'll commit this with some minor tweaks to the Python code, so we don't need to import numpy on Blender startup.

This revision is now accepted and ready to land.Dec 7 2021, 8:06 PM