Point3d transform

Hi,

I’m trying to transform a Point3d, but the transformation didn’t succeed; please see the attached picture. Can anyone explain why?

I think it might be necessary to see more of the code. It sure seems like what you’re doing should work. Have you tried transforming rigidBodyData.CenterOfGravity instead? I’ll assume the types are correct; they got cut off in the picture.

Nothing seems to work. The point has been initialized but never added to the doc; could that be the reason?

Is there an exception being thrown? If you’re wondering why rigidBodyData.CenterOfGravity and tempRigidBodyData.CenterOfGravity are the same, it’s because Transform() acts in place and your RigidBodyContainer is probably a class (which is a reference type in C#).

Point3d is a value type (struct). This means that when you use .CenterOfGravity you get a copy of the point struct. You can transform this copy, but you then need to re-assign it to .CenterOfGravity.

This is different from reference types (classes). Many of the geometry primitives like points, vectors, lines, circles, etc. are structs in RhinoCommon.

Just to be clear, what I mean is this:

Point3d cog = tempRigidBodyData.CenterOfGravity; // cog is now a copy of the member
cog.Transform(xFromRotationTransverse); // do the transform on the copy
tempRigidBodyData.CenterOfGravity = cog; // re-assign to the member variable

Thanks for explaining menno :slight_smile: This works.

Menno:

Can you point me to an example of transforming a point to a different coordinate system?

I’m not at all clear about the role of the transform matrix in the process, or how such a matrix is constructed.

To transform something from one coordinate system to another, create a transform object using Rhino.Geometry.Transform.ChangeBasis.
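For example, a minimal sketch (assuming using Rhino.Geometry; the cplane values here are made up for illustration):

Plane world = Plane.WorldXY;
Plane cplane = new Plane(new Point3d(10, 5, 0), Vector3d.XAxis, Vector3d.YAxis);

Transform cb = Transform.ChangeBasis(world, cplane); // world coordinates -> cplane coordinates

Point3d pt = new Point3d(1, 2, 0); // a point given in world coordinates
pt.Transform(cb);                  // pt is now (-9, -3, 0), its coordinates with respect to cplane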

– Dale


Thanks, Dale.

Not being a mathematician, I find transformation matrices reside in a fog. Is there a method for retrieving the two planes that get passed as parameters to the ChangeBasis method?

No there is not.

@dale, I think a little clarification could be in order here. I know this was something of a stumbling block for me when I was learning transforms.

There are basically 3 methods of interest here:

Transform.ChangeBasis(x1,y1,z1,x2,y2,z2)
Transform.ChangeBasis(plane0,plane1)
Transform.PlaneToPlane(plane0,plane1)

So the basic task most people want to do is essentially move their geometry from one cplane to another, maintaining the initial orientation between geometry and cplane. Transform.PlaneToPlane() does this without any surprises; it basically says “align myself to the current cplane, extend my hand out and grab the geometry, then walk to my new cplane origin and align myself with it while keeping my hand held out in front of me.”

The two forms of Transform.ChangeBasis() both do (basically) the same thing, which is different from Transform.PlaneToPlane(). You would think that ChangeBasis(plane0,plane1) would do the same thing as PlaneToPlane(plane0,plane1), but it doesn’t. In simple terms, it sees what the geometry “looks like” in the new coordinate system, and makes the geometry look like that in the current coordinate system. For example, assume that your current coordinate system is the world cplane and your target coordinate system is aligned with the world but moved +100 in x and +100 in y. If you have a point at (0,0,0) in world coordinates and you apply the transform you get from Transform.ChangeBasis(world, target), your point is not going to move to (100,100,0) in world coordinates; it will instead move to (-100,-100,0) in world coordinates. So it actually reverses the operations of the PlaneToPlane method (align with target coordinates, grab the geometry, move to the initial coordinates). The other gotcha is that ChangeBasis(x1,y1,z1,x2,y2,z2) is agnostic of origin, so it can only rotate the coordinates with respect to world coordinates, and cannot make arbitrary translations.
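Here’s a small sketch of that example (assuming using Rhino.Geometry; the target plane is the world plane moved by +100x, +100y as described above):

Plane world = Plane.WorldXY;
Plane target = new Plane(new Point3d(100, 100, 0), Vector3d.XAxis, Vector3d.YAxis);

Point3d a = Point3d.Origin;
a.Transform(Transform.PlaneToPlane(world, target));
// a is now (100, 100, 0): the geometry is physically carried from world onto target

Point3d b = Point3d.Origin;
b.Transform(Transform.ChangeBasis(world, target));
// b is now (-100, -100, 0): the point's numbers are re-expressed as target-plane coordinates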

Anyway, it might be helpful to add a note about this in the RC docs to help users avoid this pitfall.

Here is some background on change of basis transformations.

Like I said, to transform something from one coordinate system to another, create a transform object using Transform.ChangeBasis.

For example, if you needed to convert a 3-D point from world x-y coordinates to Windows screen coordinates, you would use a change of basis transformation.

If you had a point in Rhino’s camera coordinate system and you needed it translated into world x-y coordinates, you would use a change of basis transformation.
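For instance, a minimal sketch of the camera case (assuming using Rhino.Geometry; and a command context with a RhinoDoc named doc; the camera-space point is made up):

var viewport = doc.Views.ActiveView.ActiveViewport;
Plane cameraFrame;
if (viewport.GetCameraFrame(out cameraFrame))
{
    // coordinates with respect to the camera frame -> coordinates with respect to the world
    Transform cameraToWorld = Transform.ChangeBasis(cameraFrame, Plane.WorldXY);
    Point3d pt = new Point3d(0, 0, -10); // 10 units in front of the camera (the camera looks down -z)
    pt.Transform(cameraToWorld);         // pt now holds world x-y-z coordinates
}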

If you had a 3-D point in world x-y coordinates and you wanted to report to the user what that point was in construction plane coordinates (of the active viewport), you would use a change of basis transformation.
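And a minimal sketch of the construction plane case (same assumptions; the world point is made up):

Plane cplane = doc.Views.ActiveView.ActiveViewport.ConstructionPlane();
Transform worldToCPlane = Transform.ChangeBasis(Plane.WorldXY, cplane);

Point3d pt = new Point3d(5, 10, 0); // a point in world coordinates
pt.Transform(worldToCPlane);        // pt now holds the point's construction plane coordinates
Rhino.RhinoApp.WriteLine("CPlane coordinates: {0}", pt);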

If you wanted to orient an object from one plane to another plane, you would not use a change of basis transformation. This is because you are not looking to convert from one coordinate system to another; you are simply looking to orient within the same coordinate system (world x-y).

Does this help?


With this much to digest, I seriously doubt that I get it completely, but here’s a stab at a restatement.

To take a point inside a block that is (let’s say) at (1,2,0) in the block’s coordinate system and find its world coordinates - the center of gravity with respect to the world origin - I would make a copy of that point and get the block’s transform, a Rhino.DocObjects.InstanceObject.InstanceXform object.

What’s not clear in the .PlaneToPlane method is how to get the two plane objects to supply as parameters. Are these planes defined in the transform object?

Let’s say you have a block that contains a point object, and visually the point appears at (1,2,0). You get the block’s instance definition geometry and query the point object for its location, and it is not at (1,2,0). You then transform the point object using the block instance’s transformation. Now the point reports at (1,2,0).
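In code, a minimal sketch (assuming using Rhino.Geometry; and that iref is the Rhino.DocObjects.InstanceObject you already selected):

Point3d ptInDefinition = new Point3d(1, 2, 0); // location as stored in the instance definition
Point3d ptInModel = ptInDefinition;            // Point3d is a struct, so this is a copy
ptInModel.Transform(iref.InstanceXform);       // apply the block instance's transformation
// ptInModel is now where the point visually appears in the model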

Technically speaking, neither block definitions nor block instances (references) have their own coordinate system. Block instances (references) store the transformation applied to the instance definition.

The plane parameters are something you, the developer, need to define. Beyond this, I’m not sure how to help…

I’m quite likely not describing the problem well enough. Here, I’ve attached a model illustrating the problem: forDale.3dm (57.2 KB)

In it, I show the cg of a closed polycurve inside a block and note that the coordinates of that point are (-900.00, -56.27, 124.83) both from inside the block editor and from outside it.

I further posted the following code, which yields different coordinates (CG = -56.40, 124.83, -900.00). The difference between -56.27 and -56.40 is insignificant. I assumed the different position of the -900 coordinate must have come from a transform, stemming from the way the block was defined (?)

I’m trying to reconcile this difference so I can report the proper coordinates of the cg.


using System;
using Rhino;
using Rhino.Commands;

namespace BLT_PartInfo
{
[System.Runtime.InteropServices.Guid("746ecf68-f25b-4f91-9792-48a816d3ed4b")]
public class ListBlockGeometry : Command
{
    static ListBlockGeometry _instance;
    public ListBlockGeometry()
    {
        _instance = this;
    }

    ///<summary>The only instance of the ListBlockGeometry command.</summary>
    public static ListBlockGeometry Instance
    {
        get { return _instance; }
    }

    public override string EnglishName
    {
        get { return "ListBlockGeometry"; }
    }


    private int LayerIndexOf (string s)
    {
        int res;
        foreach (var lyr in RhinoDoc.ActiveDoc.Layers)
        {
            if (lyr.Name == s)
                return lyr.LayerIndex;
        }
        return -1;
    }


    protected override Result RunCommand(RhinoDoc doc, RunMode mode)
    {
        double PosArea = 0;
        double NegArea = 0;
        double PosMomX = 0;
        double PosMomY = 0;
        double PosMomZ = 0;
        double NegMomX = 0;
        double NegMomY = 0;
        double NegMomZ = 0;
        double CGX = 0;
        double CGY = 0;
        double CGZ = 0;

        Rhino.Geometry.AreaMassProperties area = null;
        Rhino.DocObjects.ObjRef objref;
        var rc = Rhino.Input.RhinoGet.GetOneObject("Select instance", false, Rhino.DocObjects.ObjectType.InstanceReference, out objref);
        if (rc != Rhino.Commands.Result.Success)
            return rc;
        var iref = objref.Object() as Rhino.DocObjects.InstanceObject;
        if (iref != null)
        {
            var idef = iref.InstanceDefinition;
            if (idef != null)
            {
                var rhino_objects = idef.GetObjects();
                for (int i = 0; i < rhino_objects.Length; i++)
                {
                    if ((rhino_objects[i].Geometry).ObjectType == Rhino.DocObjects.ObjectType.Curve)
                    {
                        if ((((Rhino.DocObjects.CurveObject)(rhino_objects[i])).CurveGeometry).IsClosed)
                        {
                            if (((Rhino.Geometry.PolyCurve)(rhino_objects[i].Geometry)).IsPlanar())
                            {
                                //Outside cuts are positive geometry
                                if (rhino_objects[i].Attributes.LayerIndex == LayerIndexOf("CUT"))
                                {
                                    area = Rhino.Geometry.AreaMassProperties.Compute(((Rhino.Geometry.PolyCurve)(rhino_objects[i].Geometry)));
                                    PosArea = PosArea + area.Area;
                                    PosMomX = PosMomX + (area.Area * area.Centroid.X);
                                    PosMomY = PosMomY + (area.Area * area.Centroid.Y);
                                    PosMomZ = PosMomZ + (area.Area * area.Centroid.Z);
                                }
                                //Inside cuts (holes) are negative geometry to be subtracted from the part's area.
                                if (rhino_objects[i].Attributes.LayerIndex == LayerIndexOf("HOLE"))
                                {
                                    area = Rhino.Geometry.AreaMassProperties.Compute(((Rhino.Geometry.PolyCurve)(rhino_objects[i].Geometry)));
                                    NegArea = NegArea + area.Area;
                                    NegMomX = NegMomX + (area.Area * area.Centroid.X);
                                    NegMomY = NegMomY + (area.Area * area.Centroid.Y);
                                    NegMomZ = NegMomZ + (area.Area * area.Centroid.Z);
                                }
                            }
                        }
                    }
                }
            }
        }
        double Area = PosArea - NegArea;
        CGX = (PosMomX - NegMomX) / Area;
        CGY = (PosMomY - NegMomY) / Area;
        CGZ = (PosMomZ - NegMomZ) / Area;
        string strCGX = CGX.ToString();
        string strCGY = CGY.ToString();
        string strCGZ = CGZ.ToString();
        string CG = string.Format("{0:0.00}", CGX) + ", " + string.Format("{0:0.00}", CGY) + ", "  + string.Format("{0:0.00}", CGZ);
        Rhino.RhinoApp.WriteLine("Area = {0}   CG = {1} ", string.Format("{0:0.00}",  Area), CG );
        return Rhino.Commands.Result.Success;
    }

}

}

Hi Cliff,

See if the attached helps any.

– D

TestCsCliff.cs.txt (1.9 KB)