How to efficiently extract mesh data?

I am trying to understand how to efficiently extract mesh data by attempting to recreate the Deconstruct Mesh component. My two attempts so far take much longer to compute. Some help understanding why would be much appreciated.

The first component is coded like this:

    protected override void SolveInstance(IGH_DataAccess DA)
    {
        //Pull in Mesh
        Mesh M = new Mesh();
        if (!DA.GetData(0, ref M))
        {
            return;
        }

        List<Point3d> vertices = new List<Point3d>();
        List<MeshFace> faces = new List<MeshFace>();
        List<Color> colors = new List<Color>();
        List<Vector3d> normals = new List<Vector3d>();

        int vCount = M.Vertices.Count;
        int fCount = M.Faces.Count;

        for (int i = 0; i < vCount; i++)
        {
            vertices.Add(M.Vertices[i]);
            normals.Add(M.Normals[i]);               
        }

        if (M.VertexColors.Count > 0)
        {
            for (int i = 0; i < vCount; i++)
            {
                colors.Add(M.VertexColors[i]);
            }
        }

        for (int i = 0; i < fCount; i++)
        {
            faces.Add(M.Faces[i]);
        }

        DA.SetDataList(0, vertices);
        DA.SetDataList(1, faces);
        DA.SetDataList(2, colors);
        DA.SetDataList(3, normals);

    }

And the second component is coded like this:

    protected override void SolveInstance(IGH_DataAccess DA)
    {
        Mesh M = new Mesh();
        if (!DA.GetData(0, ref M))
        {
            return;
        }

        MeshVertexList vertices = M.Vertices;
        MeshFaceList faces = M.Faces;
        MeshVertexColorList colors = M.VertexColors;
        MeshVertexNormalList normals = M.Normals;

        DA.SetDataList(0, vertices);
        DA.SetDataList(1, faces);
        DA.SetDataList(2, colors);
        DA.SetDataList(3, normals);
    }

I expected the second to perform better, because I’m using the data types that the mesh uses, and was surprised that it was so much worse. Insight into why this is would be greatly appreciated.

Does what I am asking not make sense for reasons beyond my understanding? I ultimately wish to utilize this to improve the performance of some components I am creating to work with large meshes, and this seemed like the most distilled way to understand this issue. Is there some way I should be working with DuplicateShallow to avoid creating a copy of the mesh?

Any direction would be appreciated, even if it's just a pointer to where I should read more about how to properly go about this.

thanks

There are conversions between Rhino's datatypes and Grasshopper's datatypes.

This is probably faster:

using System;
using System.Collections.Generic;

using Grasshopper.Kernel;
using Grasshopper.Kernel.Data;
using Grasshopper.Kernel.Parameters.Hints;
using Grasshopper.Kernel.Types;
using Rhino.Geometry;

namespace PancakeAlgo.Geometry
{
    public class pcgDeconMesh : GH_Component
    {
        /// <summary>
        /// Initializes a new instance of the pcgDeconMesh class.
        /// </summary>
        public pcgDeconMesh()
          : base("pcgDeconMesh", "pcgDeconMesh",
              "Description",
              "Pancake", "Geometry")
        {
        }

        /// <summary>
        /// Registers all the input parameters for this component.
        /// </summary>
        protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
        {
            pManager.AddMeshParameter("M", "M", "", GH_ParamAccess.item);
        }

        /// <summary>
        /// Registers all the output parameters for this component.
        /// </summary>
        protected override void RegisterOutputParams(GH_Component.GH_OutputParamManager pManager)
        {
            pManager.AddPointParameter("V", "V", "", GH_ParamAccess.list);
            pManager.AddMeshFaceParameter("F", "F", "", GH_ParamAccess.list);
            pManager.AddVectorParameter("N", "N", "", GH_ParamAccess.list);
        }

        /// <summary>
        /// This is the method that actually does the work.
        /// </summary>
        /// <param name="DA">The DA object is used to retrieve from inputs and store in outputs.</param>
        protected override void SolveInstance(IGH_DataAccess DA)
        {
            DA.DisableGapLogic();
            var treePts = Params.Output[0].VolatileData as GH_Structure<GH_Point>;
            var treeFace = Params.Output[1].VolatileData as GH_Structure<GH_MeshFace>;
            var treeNormal = Params.Output[2].VolatileData as GH_Structure<GH_Vector>;

            Mesh mesh = null;
            DA.GetData(0, ref mesh);

            if (mesh == null) return;

            var zeroPath = new GH_Path(0);
            treePts.EnsurePath(zeroPath);
            treeFace.EnsurePath(zeroPath);
            treeNormal.EnsurePath(zeroPath);

            var listPts = treePts[zeroPath];
            var listFace = treeFace[zeroPath];
            var listNormal = treeNormal[zeroPath];

            listPts.Clear();
            listFace.Clear();
            listNormal.Clear();

            var vertices = mesh.Vertices;
            listPts.Capacity = vertices.Count;

            foreach (var it in vertices.ToPoint3dArray())
                listPts.Add(new GH_Point(it));

            var faces = mesh.Faces;
            var faceCnt = faces.Count;
            listFace.Capacity = faceCnt;

            var faceArray = faces.ToIntArray(false);
            var i = 0;
            var maxIndex = faceArray.Length - 4;
            while (i <= maxIndex)
            {
                var A = faceArray[i];
                var B = faceArray[i + 1];
                var C = faceArray[i + 2];
                var D = faceArray[i + 3];

                if (C == D)
                {
                    listFace.Add(new GH_MeshFace(A, B, C));
                }
                else
                {
                    listFace.Add(new GH_MeshFace(A, B, C, D));
                }

                i += 4;
            }

            var normals = mesh.Normals;
            var normalCnt = normals.Count;
            listNormal.Capacity = normalCnt;

            i = 0;
            var normalArray = normals.ToFloatArray();
            maxIndex = normalArray.Length - 3;
            while (i <= maxIndex)
            {
                var dblA = normalArray[i];
                var dblB = normalArray[i + 1];
                var dblC = normalArray[i + 2];

                listNormal.Add(new GH_Vector(new Vector3d(dblA, dblB, dblC)));

                i += 3;
            }
        }

        /// <summary>
        /// Provides an Icon for the component.
        /// </summary>
        protected override System.Drawing.Bitmap Icon
        {
            get
            {
                //You can add image files to your project resources and access them like this:
                // return Resources.IconForThisComponent;
                return null;
            }
        }

        /// <summary>
        /// Gets the unique ID for this component. Do not change this ID after release.
        /// </summary>
        public override Guid ComponentGuid
        {
            get { return new Guid("12503d63-6a34-42dd-a6d3-a341b6be4f8a"); }
        }
    }
}

Agreed about the casting taking time (GH_Mesh to Mesh), but in your code, @gankeyu, where are you setting the output params? This doesn't seem like a fair time comparison, no?

No. It’s not a fair comparison.

My code is kind of tricky (it cannot deal with multiple inputs) and a little faster, because it avoids one round of list copying. Oh, and the output is done by directly manipulating GH's data structure.

GH_Mesh to Mesh is fast. Point3d to GH_Point, etc, are not.
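
Roughly, as a sketch (not from a benchmark): unwrapping a GH_Mesh is a single reference lookup, while Point3d to GH_Point means allocating one wrapper object per point:

    // Sketch of the asymmetry: GH_Mesh.Value just hands back the wrapped Mesh,
    // so GH_Mesh -> Mesh costs next to nothing regardless of mesh size.
    Mesh mesh = Mesh.CreateFromBox(new BoundingBox(0, 0, 0, 10, 10, 10), 10, 10, 10);
    GH_Mesh ghMesh = new GH_Mesh(mesh);
    Mesh unwrapped = ghMesh.Value;

    // Point3d -> GH_Point is per-element: one GH_Point object is allocated
    // for every vertex, which is what adds up on large meshes.
    var ghPoints = new List<GH_Point>(mesh.Vertices.Count);
    foreach (Point3d pt in mesh.Vertices.ToPoint3dArray())
    {
        ghPoints.Add(new GH_Point(pt));
    }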

Ok, fair enough, you're bypassing the recommended method by using Params.Output[0].VolatileData. Something tells me that might have repercussions someplace, but I'm probably wrong - perhaps I should rewrite some of my components!

Coming back to the original post, @jporter.me, and taking into account the points made around point conversion, I wonder if you would find a speed increase anyway, and slightly simpler code, by setting your list up with the GH_Point datatype to avoid the conversion during SetDataList:

    List<GH_Point> vertices = new List<GH_Point>();
    Point3d[] myArray = M.Vertices.ToPoint3dArray();

    for (int i = 0; i < myArray.Length; i++)
    {
        vertices.Add(new GH_Point(myArray[i]));
    }

    DA.SetDataList(0, vertices);

Sorry, could be errors. I can’t write a fresh C# component to try this out!

John.

You can:

    DA.SetDataList(0, myArray);

Generally you should set the Capacity of a List if you know how many elements will be inserted, so that memory re-allocation is avoided, e.g.:

    vertices.Capacity = myArray.Length;

Good point. As you know the length, it might be better to just have ‘vertices’ as an array rather than a list anyway, I guess.
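
Something along these lines, for instance (an untested sketch, reusing the M and myArray names from the snippet above):

    // Untested sketch: preallocate a GH_Point[] since the length is known up front,
    // so there is no List growth and no conversion work left for SetDataList.
    Point3d[] myArray = M.Vertices.ToPoint3dArray();
    GH_Point[] vertices = new GH_Point[myArray.Length];

    for (int i = 0; i < myArray.Length; i++)
    {
        vertices[i] = new GH_Point(myArray[i]);
    }

    DA.SetDataList(0, vertices);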

Thank you both for taking the time to explain and give examples of improvements on this. I am learning a lot here. My slow response was due to taking some time to explore these options myself, as well as life stuff getting in the way.

I would greatly appreciate a little more information regarding when to use GH datatypes vs RhinoCommon datatypes. I had success improving the efficiency of my component to match the Deconstruct Mesh component by switching to the GH datatypes for the lists being output. But I am left a bit confused about when these conversions are done, or at least when it is important to utilize the GH datatypes. Is it only when using DA.SetDataList() that a conversion occurs?

This further confused my understanding:

The normal list is a list of GH_Vector, but then it is immediately converted back to Vector3d?

As long as the data becomes your component's output, it will be converted into GH's datatype. It is one of GH's mechanisms for representing data in the GH domain. DA.SetData and DA.SetDataList convert the data for you, while DA.SetDataTree doesn't, so you need to do the conversion yourself.
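
For example, a rough sketch (assuming mesh is the Mesh you pulled out of DA.GetData): with SetDataTree the wrapping into GH types is your job, whereas SetDataList accepts the RhinoCommon types directly:

    // Sketch: SetDataTree takes a GH_Structure of GH types, so the
    // Point3d -> GH_Point conversion has to be done explicitly here.
    var tree = new GH_Structure<GH_Point>();
    var path = new GH_Path(0);

    foreach (Point3d pt in mesh.Vertices.ToPoint3dArray())
    {
        tree.Append(new GH_Point(pt), path);
    }

    DA.SetDataTree(0, tree);

    // With SetDataList the same wrapping happens internally, e.g.:
    // DA.SetDataList(0, mesh.Vertices.ToPoint3dArray());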

As for script components, all GH types are converted before RunScript is executed.
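
So inside a C# script component the inputs already arrive as RhinoCommon types; a minimal sketch of what that looks like in the script editor (the parameter names here are just placeholders):

    // Sketch of a script component: by the time RunScript runs, the GH wrappers
    // have already been unwrapped, so M is a plain Rhino.Geometry.Mesh.
    private void RunScript(Mesh M, ref object Vertices)
    {
        Vertices = M.Vertices.ToPoint3dArray();
    }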