# Linear cutting in Grasshopper

Hello, guys,

I think many Grasshopper users who work with fabrication run into the problem of optimally cutting linear stock.

The task is as follows: there is a list of required piece lengths and, for each length, the corresponding quantity of pieces.

There is also a given stock length (the bar from which the linear elements will be cut).

I tried to solve this with Grasshopper's standard components, but it seems impossible.
Has anyone solved a similar problem in Python or C#?
Is there any Grasshopper plug-in for this task?

Linear cutting.gh (3.5 KB)

That’s very easy with C#: a kind of primitive packing, so to speak.

Do you want a sequential process: after packing with respect to a given value (this means either one branch (or many) in the packTree, with the last containing the remainder space [a % b etc etc]) … fill the remainder with a quantity with respect to the next value in the List (or stop if that’s not possible) …

… Or are you after some recursion in order to deal with max-efficiency (sum of remainders = min) matters? I.e. search for values/quantities that can best fit in the remainder space (meaning that a given branch may contain various values [+ quantity info]).
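The sequential flavour is essentially first-fit packing. A minimal sketch (Python here for brevity; the actual definitions in this thread are C#, and all names below are mine):

```python
def pack_first_fit_decreasing(lengths, stock):
    """Greedy 1D packing: sort the pieces descending, then drop each
    piece into the first bar that still has room for it."""
    bars = []  # each bar is one branch of the packTree
    for piece in sorted(lengths, reverse=True):
        if piece > stock:
            raise ValueError(f"piece {piece} does not fit in stock {stock}")
        for bar in bars:
            if sum(bar) + piece <= stock:
                bar.append(piece)
                break
        else:
            bars.append([piece])  # open a new bar
    return bars

bars = pack_first_fit_decreasing([2.5, 2.5, 1.0, 1.0, 1.0, 3.0], stock=6.0)
# -> [[3.0, 2.5], [2.5, 1.0, 1.0, 1.0]]
```

This is the "crude" greedy approach; the recursive variant Peter mentions would instead search combinations of remaining values to minimize the sum of leftovers.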


It’s like linear nesting, as Peter says… It’d be useful to be able to input a stock list, i.e. I have these lengths of billet stock and I need to cut these lengths from it. What is the best way to cut them to make the best use of my material? Perhaps include a cut-width allowance to avoid massive disappointment in the real world!

Probably (definitely) a C# exercise, so I’ll sit back and let Peter do it.


In fact it’s K-Means (or other) clustering on doubles. But since everything related to clustering is, in practice, strictly internal (life sucks) … get this (a bit naive) thingy. Notice however that efficiency doesn’t vary much (obviously it’s relative to the values, the max value, Karma etc etc etc).


Added an internalDemo option (widely spread [rather unrealistic] values => efficiency goes to Mars [~98%]).


Thank you very much for the answer.

Yes, exactly. There are a lot of programs for linear material cutting that solve this task easily.

I tried to solve this task in Grasshopper, but I have a problem with the waste: it doesn’t come out rationally.

In general, for that type of crude clustering, pretty much always the last cluster (the last branch in the tree) is responsible for a big percentage of the total waste (check it with your data).

For instance, for the pass captured, the waste in the last cluster is 40% of the total waste.

So a crude solution (for a crude solution) is … hmm … to forget the last cluster.

Of course, the more widely spread the values and the bigger the container … blah, blah.
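To see the last-cluster effect in numbers, here is a quick waste report (a Python sketch; the function name and toy data are mine) that compares the last cluster's leftover against the total:

```python
def waste_report(clusters, stock):
    """Leftover per cluster, total waste, and the last cluster's share."""
    leftovers = [stock - sum(c) for c in clusters]
    total = sum(leftovers)
    last_share = leftovers[-1] / total if total else 0.0
    return leftovers, total, last_share

# a toy result: two well-filled 6.0 bars and a nearly empty last one
clusters = [[3.0, 2.5], [2.5, 1.0, 1.0, 1.0], [1.5]]
leftovers, total, last_share = waste_report(clusters, stock=6.0)
# leftovers = [0.5, 0.5, 4.5] -> the last cluster carries ~82% of the waste
```

Dropping (or hand-finishing) that last cluster is exactly the "crude solution for a crude solution" above.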

Hi Peter,

I am trying to replicate the following sample with your definition, but without success so far. Could you please advise what the N, Qmin–Qmax, and Dmin–Dmax values are for? Thanks in advance!

Alan

These are used when the internal demo is active: N is the number of loops, Q is the quantity of dupes and D is the value for the doubles: make random numbers (from Dmin to Dmax) that have random duplicates (from Qmin to Qmax). Then the List is shaken, not stirred, for obvious reasons. The algo used for that is the fastest way known to man (the classic Fisher–Yates thingy).
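For reference, the classic Fisher–Yates loop is only a handful of lines. A Python sketch (the C# version is the same backwards loop with `System.Random`; the function name is mine):

```python
import random

def fisher_yates(items, rng=None):
    """Unbiased in-place shuffle: walk the list backwards and swap
    each slot with a uniformly chosen slot at or before it."""
    rng = rng or random.Random()
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)  # inclusive: 0 <= j <= i
        items[i], items[j] = items[j], items[i]
    return items

# e.g. shake the demo doubles before clustering
data = fisher_yates(list(range(10)), random.Random(1))
```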

Note: dList and qList are user-provided Lists of type double and int; in this case the clustering is done with data from these 2.

Note: if dList.Count != qList.Count … then the internal demo is active anyway:

Here’s where all these are done:

With regard to the primitive clustering (“pack” numbers into a “container”, so to speak) … er … hmm … primitive (no // [parallel] stuff and the like) code is used:

But … judging by these questions … it appears to me that you are not speaking C#. Is this the case?

Hi Peter,

Thank you so much for your detailed reply, really appreciated! As you already noticed, I am not a C# speaker; therefore I am still struggling to understand you, regardless of your kind effort.

I was expecting, with the following input -using your script- (please see Pic 1 below), to get a result similar to the next image below (Pic 2). Hope this clarifies my question, many thanks!

Pic 1

Pic 2

But my dear Watson … the max value used in your screen capture is sky-high (16800) … meaning FEWER Clusters with MORE elements. The max value is kind of the container size when we do classic packing with 2/3-dimensional things. For instance, if you try to fit numbers from 1 to 5 into a max value of 20000 … you’ll get some BIG Clusters. But if the numbers are 9000 to 15000 you’ll get Clusters containing 1 to 2 elements (and some BIG waste).
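This effect is easy to reproduce with a plain first-fit packer (a Python sketch with assumed names; the numbers mirror the two cases above):

```python
def first_fit(values, container):
    """Drop each value into the first cluster that still has room."""
    bins = []
    for v in values:
        for b in bins:
            if sum(b) + v <= container:
                b.append(v)
                break
        else:
            bins.append([v])
    return bins

# values tiny relative to the container: everything lands in ONE big cluster
small = first_fit([1, 2, 3, 4, 5] * 4, container=20000)
# values near the container size: one piece per cluster, big waste
big = first_fit([9000, 12000, 15000, 10000, 14000], container=16800)
```

With these inputs `small` comes out as a single cluster of 20 elements, while `big` has five clusters of one element each.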

In a more pragmatic example, imagine packing yummy Ducatis in a container to send/sell them to Mars (Elon has plans to do that). If the container is smaller than a Ducati … the Martians will get null stuff (and their money back).

Vary the container size (the max thingy) and see what gives (and/or get rid of the last branch that in most of cases contains a big percentage of the waste).

Dear Peter,

Now it’s crystal clear, thanks a lot for your help! Really appreciated! Mystery solved, dear Holmes!

Cheers!

Alan

Hi Peter,

Please see below some questions I still have, hope you can help me:

Question #1
How would you recommend handling nested paths within your script? Imagine I have 3 different types of profiles (extrusions), so I will need them to be in different cut lists. Thanks!

Question #2

How can you ensure getting the most efficient solution on the first run? Please see below a couple of different results I got after recomputing the script (F5). Thanks!

Question #3

What would be your recommendation for tagging each piece according to its existing name? Thanks!

Hmm … easy stuff, provided that the goal(s) is/are clear >> time for the trad update: V1C, that is.

Note: I’m going to charge you extra for that: about 1M microdollars.

Post some “sets” of numbers that represent various cases of yours.

Hi Peter,

Please see attached the GH definition with the sets you asked for. Hope they are clear. Thanks in advance!

PS: The price you are asking sounds more than fair =). Also, if you could please reply to my personal email, I will make sure to send you a token of gratitude for your kindness! Please drop me a line at: alan.sketches@gmail.com

OK, here’s the general setup:

1. There’s a List of material IDs and 2 other Lists: L (for Length, OR whatever the clustering criterion is [whatever the name]) and Q (for Quantity). That said, clustering is based on one value (L), thus the linear thingy.
2. But that would mean the pieces per material are the same: so enter L as a tree, where BranchCount equals ID.Count (anyway, some real-life checks are a must - input/user mistakes etc etc). In this case there’s no need for Q, since Branch(x).Count is the Q of pieces per material. So … the only thing worth considering is the easiest way to feed/make these Lists/Trees (are they from Excel? from Nowhere? from Alpha Centauri?).

That said, randomizing the order of L values (randomize List of d in V2B) is a no-no (outside LQ branches, that is) for more than obvious reasons. If it were allowed between ID1 and ID2 … why use the second ID at all? This means that a more efficient way is required for the clustering.

BTW: obviously we need an offset as well (for the width of the cuts), but that’s not an issue at all.
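Putting the pieces together (per-material branches plus the cut offset), the overall setup could be sketched like this; the real definition is C#, and every name here is hypothetical:

```python
def pack_per_material(lengths_by_id, stock, kerf=0.0):
    """Pack each material's pieces independently (never mix IDs);
    each piece reserves `kerf` extra length for the saw cut."""
    result = {}
    for mat_id, lengths in lengths_by_id.items():
        bars = []
        for piece in sorted(lengths, reverse=True):
            need = piece + kerf
            for bar in bars:
                if sum(p + kerf for p in bar) + need <= stock:
                    bar.append(piece)
                    break
            else:
                bars.append([piece])  # open a new bar for this material
        result[mat_id] = bars
    return result

plan = pack_per_material(
    {"SS-316L_4mm": [2.0, 2.0, 1.5], "SS-316L_8mm": [3.0, 2.5]},
    stock=6.0, kerf=0.1,
)
```

Keeping the materials in separate keys (branches) is what makes randomizing across IDs a non-issue: shuffling only ever happens inside one material's list.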

Here we are: an intro on V1C matters (no def included - yet - as it undergoes the final beta tests). Using 2 C# components, as in V1B:

1. The first C# either accepts your data (1 List that contains the material IDs + 1 Tree that contains the lengths per branch [BranchCount must match the List.Count]) or, if the latter is not true (meaning your data is wrong), defaults into a demo mode where you can control the N of materials, the quantity of data and the range of their values.

2. The second C# does the job. You can cheat (by raising the max value [the “container”] in relation to the max value found in a given branch); you can take the data as is, or randomize it, or sort it ascending or descending. You can check the values as well (whether they are indeed doubles and not, say, bananas). Then it does the job and reports the usual stuff:

Report (optional) tells you the good or bad news with regard to the ER ratio. That measure of efficiency can vary greatly for more than obvious reasons … but if you cheat a second time (excluding the last, MOST COSTLY cluster) then you get far better rates. Of course the doubles themselves vs the max value play a great role in the ER rating. Of course there are situations where no solution is possible (the Ducatis-on-Mars case).

The big thing: imagine that we cut pieces from a 4mm wire and we get ER 60%. Then we do the same but with a 40mm wire and we get ER 90%. Which case is the more profitable?

This means that your IDs must take the cost into account as well: SS-316L_4mm and SS-316L_8mm are NOT the same animals (dollar-wise).
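A back-of-the-envelope check (Python; the prices per metre are made up purely for illustration) shows why a higher ER can still be the worse deal:

```python
def waste_cost(stock_len, efficiency, price_per_m):
    """Money lost per bar = unused length times unit price."""
    return stock_len * (1.0 - efficiency) * price_per_m

# hypothetical prices: the 40mm wire costs far more per metre than the 4mm
thin = waste_cost(stock_len=6.0, efficiency=0.60, price_per_m=1.0)   # ER 60%
thick = waste_cost(stock_len=6.0, efficiency=0.90, price_per_m=25.0)  # ER 90%
# thin -> 2.4, thick -> 15.0: the "better" ER still wastes more money
```

So an ER percentage only becomes comparable across materials once it is weighted by the material's cost.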

Of course if you feed nothing you get ER as the negative Infinity (LOL).

The data is coming from the Rhino geometry’s attribute user text. All data is flattened; no trees at this point.

Hey Peter, I would like to ask whether the sorting of the names also considers the similarity between the pieces at some point (without changing the order of the script’s output). Say one cluster has 2x 10" and 3x 8": they would still have the same nesting within the cluster, but the name order would consider proximity by piece type (L1, L2, L3). For instance, if we have a list of fruits and the basket can hold up to 4 max, after sorting them out, would they be ordered randomly, or would each be kept close to others of its same type? Please see the sample below:

Original List (before the nesting):
Banana-01
Banana-02
Banana-03
Apple-01
Apple-02
Apple-03
Apple-04
Apple-05
Orange-01
Orange-02
Orange-03
Orange-04

Final List 1 (after the nesting) not that efficient:
Cluster 1 (4 pcs)
Banana-01
Apple-03
Orange-01
Orange-02

Cluster 2 (4 pcs)
Apple-01
Apple-02
Orange-03
Orange-04

Cluster 3 (4 pcs)
Banana-02
Banana-03
Apple-04
Apple-05

Or it can consider the ordering based on their proximity to each type:

Final List 2 (after the nesting) The efficient one:
Cluster 1 (4 pcs)
Banana-01
Banana-02
Banana-03
Apple-01

Cluster 2 (4 pcs)
Orange-01
Orange-02
Orange-03
Orange-04

Cluster 3 (4 pcs)
Apple-02
Apple-03
Apple-04
Apple-05

Well …

Assume that we have 2 materials banana_01 and banana_02 in the material List (a) and we aim to make a yummy banana split.

This means getting pieces. Assume that we spec 12 pieces for 01 and 5 for 02. By spec I mean providing 12 doubles and 5 doubles. Have in mind that the stupid part (the computer) doesn’t know whether 01 can indeed yield 12 pieces - nor does it care. Why? Because banana_01 is an abstract thingy: it is NOT a banana … it’s just the ID of a material.

This means that we get a Tree of doubles (b) with 2 branches and 12/5 items.

Assume that the values are OK and the container is OK (max >= the max double per branch) … then we get, say, clusters:

{0;0}, {0;1} … {0;n} and {1;0}, {1;1} … {1;m}, where the first dim is the index of a given banana in the banana List, the second dim is the (unpredictable) index of the cluster, n is the number of clusters for 01 and m … blah, blah.

So, do you want - prior to the clustering - to sort the List of materials (a) in sync with the Tree of doubles (b)?
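What that sync sort amounts to, sketched in Python (branches as a list of lists running parallel to the ID List; the names are mine):

```python
def sync_sort(ids, branches):
    """Sort the material IDs alphabetically and reorder the branches
    of doubles with the same permutation, so branches[i] still
    belongs to ids[i] after the sort."""
    order = sorted(range(len(ids)), key=lambda i: ids[i])
    return [ids[i] for i in order], [branches[i] for i in order]

ids, branches = sync_sort(
    ["banana_02", "theLord_666", "banana_01"],
    [[5.0, 5.0], [9.9], [1.0, 2.0, 3.0]],
)
# ids -> ['banana_01', 'banana_02', 'theLord_666'], branches follow along
```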

NOTE: my bean counter said: you Idiot Lord, you asked 1M microdollars for that? Meaning ONE %^%\$ dollar? I said: well, that was a typo: I had a 1Z megadollars figure in mind.

best, The Lord (of mega dollars)

Study what happens with material theLord_666 prior to and after the sync sort: is this what you want? (I’ll charge you extra for that, mind.)

PS: Feel free to deal with the mega dollars issue using my Keyman islands account:

User name: “The Greedy Lord”, Password: “More Is Better”;