Sunday, November 20, 2016

Kangaroo Physics Simulation for Grasshopper

This topic is an introduction to the Kangaroo plug-in for Grasshopper. Kangaroo is a set of Grasshopper components for form-finding, physics simulation, and geometric constraint solving.

Important Note: This post is for the previous version of Kangaroo. The new, integrated version inside Rhino 6 is documented here: Kangaroo Physics in Rhino 6.


Kangaroo is fantastic - but it is something of a moving target. It's under active development and has gone through a total rewrite. The new version, which is a huge step forward, is still supplemented in some definitions by components from the earlier version. So, as of this writing (11/2016), you still want to install both. This gives you the ability to explore the largest set of sample files.

You can download the various versions of Kangaroo here:

Grab both the latest version, currently 2.14, and the earlier 0.099. Then follow the installation instructions in each Zip file:

With Kangaroo installed you are ready to build some definitions. As an easily accessible reference here's the Kangaroo PDF Help File.

First Example - Project to Plane

As a first example let's look at simply moving a set of vertices onto a plane. You don't need Kangaroo for this - because you can do it in Rhino. But it's a good example to get going and explain some basics.

The first two components on the left, Box 2Pt and Populate 3D define a box and populate it with random points.

There are two Kangaroo components in this definition. One is a Goal and the other is the Solver.

The goal in this case is OnPlane. This moves a point to a given plane (and keeps it there). If you supply many points they are all pulled down to the plane. There are many goals supplied by Kangaroo. In general, they take the current positions of some points and output target positions they "want" those points to move to. You can supply as many goals as you like, and Kangaroo will adjust the points to meet all the goals.

The other component here is the Kangaroo Solver. This is the one which collects all the goals and solves the system. It outputs the "solved" vertex locations. In this particular example it outputs the points as moved to the plane.

There is also a Button component wired into the Reset socket. This resets the system and begins solving it again. The solver runs iteration after iteration until it converges on a solution, then stops automatically. In some cases, such as this simple example, it converges so fast it appears to stop immediately - projecting points onto a plane can be calculated directly. Other goals take many iterations; making a set of connected polygons all planar, an example we'll look at later, often takes a few seconds to converge. Because the solver loops toward a solution, you can watch it gradually solve the system.
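The loop-until-converged behavior can be sketched outside of Grasshopper. Below is a toy stand-in (plain Python, not Kangaroo's actual solver): a goal proposes target positions, and the solver repeatedly blends each point toward its targets until the points stop moving:

```python
def on_plane_goal(points, plane_z=0.0):
    """OnPlane-style goal: propose each point projected onto the plane z = plane_z."""
    return [(x, y, plane_z) for (x, y, z) in points]

def solve(points, goal, step=0.5, tol=1e-6, max_iter=1000):
    """Blend points toward the goal's targets until movement falls below tol."""
    for _ in range(max_iter):
        targets = goal(points)
        moved = 0.0
        new_points = []
        for p, t in zip(points, targets):
            q = tuple(a + step * (b - a) for a, b in zip(p, t))
            moved = max(moved, max(abs(a - b) for a, b in zip(p, q)))
            new_points.append(q)
        points = new_points
        if moved < tol:
            break  # converged - stop automatically, like the Kangaroo Solver
    return points

pts = [(0.0, 0.0, 3.0), (1.0, 2.0, -1.5), (4.0, 1.0, 0.7)]
solved = solve(pts, on_plane_goal)
# all z coordinates end up (numerically) on the plane z = 0
```

The real solver weighs many goals against each other at once; this sketch has just one, so it converges in a couple of dozen iterations.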

In the simple definition above the Vertices output from the solver are wired into a Delaunay Mesh component which generates a mesh between all the points. If you press and hold the Reset button you'll see the mesh drawn through the original points. When you release the button the system is solved and the mesh is drawn through the planar points.

Meshed points before solution

Meshed points after being moved onto the ground plane

Catenary Mesh

This definition is slightly more complex but much more useful. It's a form-finding technique used to generate structures in pure tension or compression. The term catenary refers to the curve formed by a string or chain hanging freely from two points, forming a U shape.

This can be expanded to process the vertices of a mesh into a three-dimensional form:

The definition operates on a mesh. In this example the start mesh is a rectangular grid as seen on the ground plane. Shown above it is the resulting catenary form as generated by Kangaroo:

The definition is below. Download Here.

It has three goals:
  • Anchor: This will keep a point in its original location. This is used to lock the corners of the mesh in place. 
  • Length: This goal tries to keep two points (line endpoints) at a given distance from each other. With a higher Strength specified, the points move less. 
  • Load: This goal applies a force to the points. The force is specified as a vector and the length of the vector is the magnitude of the applied load. 
You can add a slider to the Strength component to control the stretch of the form. You can also affect that using the magnitude of the vector wired into the Load component. For example wire a slider into the Factor socket of the Unit Z vector.
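The interplay of the three goals can be sketched as a toy mass-spring relaxation (plain Python - a rough analogue, not Kangaroo's solver). Anchor pins points, Length pulls each segment's endpoints toward a rest distance, and Load adds a constant vector each step:

```python
import math

def relax_chain(n=9, rest=1.0, load=(0.0, 0.0, -0.05), strength=1.0, iters=2000):
    """Hang a chain of n points between two anchored endpoints."""
    pts = [[float(i), 0.0, 0.0] for i in range(n)]
    anchors = {0: pts[0][:], n - 1: pts[-1][:]}
    for _ in range(iters):
        # Length goal: nudge both endpoints of each segment toward the rest length
        for i in range(n - 1):
            a, b = pts[i], pts[i + 1]
            d = math.dist(a, b)
            if d == 0:
                continue
            corr = strength * 0.5 * (d - rest) / d
            for k in range(3):
                delta = corr * (b[k] - a[k])
                a[k] += delta
                b[k] -= delta
        # Load goal: a constant force applied to every point
        for p in pts:
            for k in range(3):
                p[k] += load[k]
        # Anchor goal: pin the endpoints back to their original locations
        for i, orig in anchors.items():
            pts[i][:] = orig
    return pts

chain = relax_chain()
# the midpoint sags below the anchored ends, forming a catenary-like curve
```

Raising `strength` relative to the load magnitude flattens the curve, which mirrors what the Strength slider and the Unit Z Factor slider do in the definition.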

You can bake the O output of the Solver. This generates the set of lines of the form. You can use the Weaverbird Mesh From Lines component to create a Rhino mesh. Then use other tools such as Rhino's OffsetMesh command to turn the result into a mesh solid suitable for 3D printing or rendering.

This basic definition can be modified to use your own mesh. Here's an example of using a few tools to build a base mesh.

Use the Rhino menu Mesh > Polygon Mesh Primitives and choose something like a Truncated Cone.

Explode that mesh and delete the top and bottom faces:

Use the ProjectToCPlane command to make the mesh flat. Then Mirror the mesh and Join the two halves into one.

Press F10 to turn on the mesh vertices, then select two and use the Gumball to scale them with a value of 0. Do this for both sides:

You can use the MeshRepair command to weld the mesh together, leaving only the outside edges naked.

This mesh can be wired into the definition. It works nicely with Weaverbird to subdivide the mesh prior to running. More information on LunchBox and Weaverbird can be found in Working with Meshes in Rhino and Grasshopper.

This results in a structure like this:

Note that when you Bake the output of the Kangaroo Solver you get only lines. It doesn't output a mesh.

To convert it to a mesh you can use a Curve component to collect the lines then the Weaverbird Mesh From Lines (Weave Back) component to generate the mesh:

If you bake the output of the Mesh From Lines you'll get a proper Rhino mesh.

Tensile Forces

Similar to the example above, but without using a Load, you can explore tensile forces on meshes.

By using a few anchor points on triangle meshes you can experiment with a surface that behaves a little like bending fabric. You can make a few Point entities snapped onto the mesh vertices. Kangaroo will find the matching mesh vertices and anchor those. If you add the Grab component you can pull the points interactively to shape the form.
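The matching step - taking freely snapped Point entities and finding the corresponding mesh vertices - amounts to a closest-point lookup within a tolerance. A minimal sketch (plain Python on coordinate tuples, not Kangaroo's internal code):

```python
def closest_vertex_indices(anchor_pts, mesh_verts, tol=1e-3):
    """For each anchor point, find the index of the nearest mesh vertex
    within tol; points that match no vertex are skipped."""
    matches = []
    for p in anchor_pts:
        best_i, best_d2 = None, tol * tol
        for i, v in enumerate(mesh_verts):
            d2 = sum((a - b) ** 2 for a, b in zip(p, v))
            if d2 <= best_d2:
                best_i, best_d2 = i, d2
        if best_i is not None:
            matches.append(best_i)
    return matches

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(closest_vertex_indices([(1.0, 0.0004, 0.0)], verts))  # → [1]
```

This is why snapping the Points onto the vertices in Rhino matters - a point that misses every vertex by more than the tolerance anchors nothing.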

Here's the definition. Download Here. You can download a sample Rhino 3dm file here.

By holding the Alt key down and dragging on vertices in the mesh you can pull them:

Another use for tensile forces is to smooth, or relax meshes. The second definition in the above file does that.

These work best with Quad meshes - meshes composed of four sided polygons rather than the triangle meshes normally generated by Rhino. As of this writing (11/2016) if you are using the Rhino 6 WIP you can use the QuadMesh command to generate these from polysurfaces. Otherwise I'd recommend using a better quad mesh modeler like 3ds max, Maya, modo or ZBrush.

The original mesh is subdivided using Weaverbird, in this case using Catmull-Clark. The Weaverbird Join Meshes and Weld component makes sure there are no duplicate edges. The Weaverbird Mesh Edges component outputs all the edges. These are wired into a Line component to ensure they are line entities. Then the Curve Length, multiplied by a factor, is wired into the Length goal of Kangaroo.

The Weaverbird Naked Boundary component outputs all the open edge curves. These are Exploded, and the End Points component finds the start point of each line. These become Anchor goals. By altering the strength of the Anchor goal the form can remain closer to the original mesh, or become highly relaxed as shown below.
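The relaxation itself is close in spirit to Laplacian smoothing: each free vertex moves toward the average of its edge-connected neighbors while anchored vertices stay put. A rough illustration (plain Python, not the Kangaroo implementation):

```python
def relax(verts, edges, anchored, iters=200, step=0.5):
    """Move each non-anchored vertex toward the average of its neighbors."""
    neighbors = {i: [] for i in range(len(verts))}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    verts = [list(v) for v in verts]
    for _ in range(iters):
        new = []
        for i, v in enumerate(verts):
            if i in anchored or not neighbors[i]:
                new.append(v)  # anchored points never move
                continue
            avg = [sum(verts[j][k] for j in neighbors[i]) / len(neighbors[i])
                   for k in range(3)]
            new.append([v[k] + step * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# a 5-point strip with a bump in the middle; endpoints anchored
v = [[0, 0, 0], [1, 0, 0], [2, 0, 3], [3, 0, 0], [4, 0, 0]]
e = [(0, 1), (1, 2), (2, 3), (3, 4)]
smooth = relax(v, e, anchored={0, 4})
# the bump at index 2 is pulled down toward the anchored line
```

On a full mesh the same idea flattens out noisy vertices while the naked boundary (the anchors) holds the overall shape.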

Circle Packing

This definition is a simple example that uses the SphereCollide Kangaroo component to pack circles onto a surface. The circles can be offset and extruded to achieve an effect similar to the one below. Note that in this definition all the circles have to be a uniform size. That's because the SphereCollide component is optimized for that condition.
Pavilion made of cardboard hoops by students at ETH, Zurich, Switzerland
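The core of a collide step can be sketched as simple pairwise repulsion: whenever two centers are closer than the shared diameter, push both apart along the line between them. A toy 2D version with equal radii (plain Python - the real SphereCollide component uses a much faster neighbor search, which is what the uniform-radius requirement enables):

```python
import math

def collide_step(centers, radius):
    """Push apart any pair of overlapping circles; return the worst overlap seen."""
    worst = 0.0
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            d = math.hypot(x2 - x1, y2 - y1)
            overlap = 2 * radius - d
            if overlap > 0 and d > 0:
                worst = max(worst, overlap)
                push = 0.5 * overlap / d  # each circle takes half the correction
                centers[i] = (x1 - push * (x2 - x1), y1 - push * (y2 - y1))
                centers[j] = (x2 + push * (x2 - x1), y2 + push * (y2 - y1))
    return worst

pts = [(0.0, 0.0), (0.5, 0.0), (0.2, 0.4)]
for _ in range(1000):
    if collide_step(pts, radius=0.5) < 1e-9:
        break  # converged: no pair overlaps any more
# all pairwise distances are now at least the diameter (1.0)
```

Constraining the centers to lie on the target surface at each step is what turns this separation behavior into packing.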

Here's the definition. Download Here.

Here's the same surface with 120 circles:

Here's a sample surface with 1000 circles:

You can add a few components to get a 3D effect. First each circle is offset using the Offset Curve component. The Weave component then creates a list with each circle followed by its offset - the list doubles in length and runs in pairs of circles. The next component, Partition, breaks the list into a tree where each branch contains a pair of inner and outer circles. Then Boundary Surface is used to make a surface for each pair, which contains the hole in the center. This is then Extruded along the normal of each circle pair. The Amplitude component lets you set the thickness.
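The list gymnastics here (Weave, then Partition) can be mimicked in a few lines. Assuming two equal-length lists of circles and their offsets, weaving interleaves them and partitioning into chunks of two recovers the inner/outer pairs:

```python
def weave(a, b):
    """Interleave two equal-length lists: [a0, b0, a1, b1, ...]."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

def partition(items, size):
    """Break a flat list into consecutive chunks of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

circles = ["c0", "c1", "c2"]
offsets = ["o0", "o1", "o2"]
pairs = partition(weave(circles, offsets), 2)
print(pairs)  # → [['c0', 'o0'], ['c1', 'o1'], ['c2', 'o2']]
```

Each chunk corresponds to one branch of the Grasshopper tree, which is exactly what Boundary Surface needs to make an annular surface per circle.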

Planarize Hexagons

This definition will take a surface, panel it with hexagonal cells, and attempt to planarize each cell. The process of taking a surface, paneling it, and making it ready for fabrication is called rationalization. One of the nicest methods is to planarize the panels. They can then be easily manufactured from sheet goods (for example plywood).

Here's an example that used planar panels and robotic fabrication for the edge joints.

Landesgartenschau Exhibition Hall / ICD/ITKE/IIGS University of Stuttgart

Here's the entire definition. Download Here.

This definition relies on LunchBox for the initial hexagon paneling. It generates a flat list of hex cells.

The next section gets things ready for the planarization solving. The Explode component breaks each cell into individual segments; the output is a tree. The End Points component outputs the start and end points of each segment. In this case we are only concerned with the start points.

The key goals used are:
  • CoPlanar: Pulls a set of points towards their best fit plane. This can act on any number of points. 
  • ClampLength: Keeps the distance between 2 points between the specified limits, but applies no force when the distance is within these limits.  
The start points are wired into the CoPlanar goal. 

We need to limit how much the segments are allowed to move. This keeps the segments all reasonably sized relative to one another. The Curve Length and Average components generate the average segment length, which is divided by the constants 0.4 and 2.0 to set the upper and lower length limits. 
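What CoPlanar does each step can be approximated by projecting a cell's corner points onto their best-fit plane. A sketch (plain Python; the normal here comes from Newell's method, a common choice for near-planar polygons - not necessarily what Kangaroo uses internally):

```python
import math

def planarize(poly):
    """Project a closed polygon's vertices onto its best-fit plane
    (plane through the centroid with the Newell normal)."""
    n = len(poly)
    cx = sum(p[0] for p in poly) / n
    cy = sum(p[1] for p in poly) / n
    cz = sum(p[2] for p in poly) / n
    nx = ny = nz = 0.0
    for i in range(n):
        (x1, y1, z1), (x2, y2, z2) = poly[i], poly[(i + 1) % n]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    out = []
    for (x, y, z) in poly:
        # signed distance from the plane, then subtract it along the normal
        d = (x - cx) * nx + (y - cy) * ny + (z - cz) * nz
        out.append((x - d * nx, y - d * ny, z - d * nz))
    return out

# a hexagon with alternating +/- 0.1 wobble in z
hexagon = [(1, 0, 0.1), (0.5, 0.9, -0.1), (-0.5, 0.9, 0.1),
           (-1, 0, -0.1), (-0.5, -0.9, 0.1), (0.5, -0.9, -0.1)]
flat = planarize(hexagon)
# all projected points now lie in a single plane
```

In the full definition this pull happens for every cell simultaneously while ClampLength keeps the shared edges from collapsing or stretching too far.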

If the Kangaroo Solver is able to solve the system it outputs the planarized curves. The Boundary Surface component is then used to surface these polygons. The Is Planar component is used to check if the surfaces are indeed planar. It outputs True for all the surfaces which are; otherwise False.

It's important to note that not all surfaces can be paneled with planar cells - for some it simply doesn't work geometrically. But this definition is remarkably successful.

When the surface is anticlastic (as indicated by negative Gaussian curvature using the CurvatureAnalysis command in Rhino) the hexagons will look more like bowties, as seen above and shown below:

In this example the surface is entirely synclastic and all internal panels are hexagons - although some have very shallow angles and are nearly rectilinear.

General Links to Forums and Component Code

Here's the main discussion forum for Kangaroo topics. This is a great source to find the latest definitions posted by Daniel Piker.

Here are some videos by Daniel Piker using Kangaroo:

Here's the source for additional examples:

Tuesday, November 15, 2016

Robotic Painting with Light

I got interested in trying to "paint" with light. That is, use long exposure photography with a moving light source to generate images. The concept is simple: if you put a camera on a tripod, turn off all the lights, open the camera shutter and keep it open, move a light source around in front of the camera, then close the shutter, you'll wind up with a single image of the light moving in one continuous stream.

If you carefully choreograph the color changes, motion, and turning on and off of lights via software you can generate some interesting "light paintings".

What started me on this idea was some work we do in my Robotics course in Taubman College at the University of Michigan. Here's a post on that: Robot Motion Analysis Using Light.


My plan for this required a robot to move the lights around, and some hardware to control the light. I'm using an 8 x 8 array of lights: NeoPixels from Adafruit. This is a really nice unit and only requires 3 pins from the Arduino.

The lights are really bright (understatement) and the programming of them is very easy. The micro-controller hardware which controls the lights is an Arduino Mega (an Uno would be fine as well) with the WiFi Shield. The image below shows the beginning of the prototype. As you can see the wiring is very simple (one resistor, one capacitor and the light array):

The robot used is a Kuka Agilus KR-6. I used the back robot in this picture.

The design of the tool which mounts to the robot looks like this, in model form (pixels, prototyping board, and Arduinos from top to bottom):

The finished tool ready to mount: 

And mounted to the robot: 


This works using Grasshopper and Kuka|prc to control the robot, sample the image, and send the code to the Arduino to drive the pixels.

My first idea was to take any image (photograph, picture of a painting, etc), sample it at points which exactly match the LED light panel pixel spacing, then use that sampled data to illuminate the light grid. Those lights form one small part of the overall image. The robot moves the light array and shows new colors each time.
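The sampling idea - reading the image only at positions that coincide with the LED spacing - can be sketched as grid lookups into the source image. A toy version (plain Python with a nested-list grayscale "image"; the real sampling happens in the Grasshopper definition):

```python
def sample_panel(image, origin, spacing, size=8):
    """Sample a size x size grid of image values starting at `origin`
    (row, col), stepping `spacing` pixels between LEDs."""
    r0, c0 = origin
    panel = []
    for i in range(size):
        row = []
        for j in range(size):
            r = min(r0 + i * spacing, len(image) - 1)     # clamp at the image edge
            c = min(c0 + j * spacing, len(image[0]) - 1)
            row.append(image[r][c])
        panel.append(row)
    return panel

# toy 32 x 32 "image": brightness ramps left to right
img = [[col for col in range(32)] for _ in range(32)]
panel = sample_panel(img, origin=(0, 0), spacing=4)
print(panel[0])  # → [0, 4, 8, 12, 16, 20, 24, 28]
```

Moving the `origin` between calls is the software analogue of the robot moving the light array to a new patch of the image.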

Single LED light panel (yellow) with 6x6 grid of robot moves (blue):

Sampled points in the image (these are the lower left corner of the LED light panel).

Robot moves in a 6x6 grid with different colors each time:

All the coordination of motion and data is managed through Grasshopper. Here's the definition - as you can see it's pretty simple - not many components. The light blue is the robot control. The light cyan generates the Arduino code:

The Grasshopper definition lets you generate a preview of what the pixelated image will look like with baked Rhino geometry. Here are a few examples:

Van Gogh's Starry Night:

A Self-Portrait:

The robot sequence is this:
  1. Move to a new position. 
  2. Turn on the lights. 
  3. Pause for a bit to expose the image. 
  4. Turn off the lights. 
  5. Repeat until the entire image is covered. 
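The sequence above can be sketched as a control loop (plain Python with stand-in functions - the real control runs through Grasshopper, Kuka|prc, and the Arduino, and the function names here are hypothetical):

```python
import time

def run_light_painting(positions, colors_per_stop, exposure_s=0.5,
                       move=None, set_pixels=None, lights_off=None):
    """Step through every stop: move, light up, hold, black out."""
    log = []
    for pos, colors in zip(positions, colors_per_stop):
        move(pos)                # 1. move to a new position
        set_pixels(colors)       # 2. turn on the lights
        time.sleep(exposure_s)   # 3. pause for a bit to expose the image
        lights_off()             # 4. turn off the lights
        log.append(pos)          # 5. repeat until the image is covered
    return log

# stand-ins so the sketch runs without any hardware attached
events = []
stops = run_light_painting(
    positions=[(0, 0), (1, 0)],
    colors_per_stop=[["red"] * 64, ["blue"] * 64],
    exposure_s=0.0,
    move=lambda p: events.append(("move", p)),
    set_pixels=lambda c: events.append(("on", c[0])),
    lights_off=lambda: events.append(("off",)),
)
```

The important property is that the lights are dark during every move, so only the stationary exposures register on the long-exposure photograph.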
My first pass at this loaded all the color data, for every move, into the Arduino's memory at the start. The very low memory capacity of an Arduino, even a Mega, made only 30 or so moves possible. I wanted more than that, so I changed to using an Arduino WiFi Shield and transmitting the color data to the tool at each motion stop point. This removes any memory limit on the size of the area that can be covered.

Interfacing with the PLC

A robot has something called a Programmable Logic Controller (PLC). This is what allows the robot to interface to external inputs and outputs. Some example inputs are things like limit switches, and cameras. Outputs are things like servo motors and warning lights.

A program can also use the PLC to find out about the state of the robot - the current position and orientation of the tool, joint angles, etc.

For this project the PLC is used to track when the robot is moving. This is done by having the robot program set a bit of memory in the PLC to indicate the robot has arrived at a new position. Then it waits for a short amount of time (as the lights are turned on). Then that memory bit is turned off.

The Grasshopper definition monitors the PLC and informs the Arduino when to turn on and off. It also sends the new color data over WiFi.
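The handshake can be sketched as a polling loop: watch for the PLC's "arrived" bit to go high, send the next frame of colors, and repeat (plain Python with a simulated PLC bit; the real version reads the PLC and talks to the Arduino over WiFi):

```python
def monitor(read_bit, send_colors, frames):
    """Send one frame of colors each time the 'arrived' bit goes high."""
    sent = []
    it = iter(frames)
    was_high = False
    while True:
        high = read_bit()
        if high and not was_high:      # rising edge: the robot just arrived
            frame = next(it, None)
            if frame is None:
                break                  # no frames left: the image is covered
            send_colors(frame)
            sent.append(frame)
        was_high = high
    return sent

# simulate the PLC bit toggling low/high across three stops (plus a final high)
bits = iter([0, 1, 0, 1, 0, 1, 0, 1])
out = []
sent = monitor(lambda: next(bits), out.append, frames=["f1", "f2", "f3"])
print(sent)  # → ['f1', 'f2', 'f3']
```

Detecting the rising edge (rather than the level) is what prevents the same stop from being painted twice while the robot holds position.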

First Test Images

The first attempts were interesting. They prove the concept works - which was very exciting to see for the first time. But they also clearly show there is room for improvement!

Here's the gradient example. I cropped this image wide so you can see the context - robot workcell, robot in the background, window behind, etc. The robot moved the light array 36 times to produce this image. So that's 36 moves times 64 pixels per move or 2304 pixels total. The tool was slightly rotated which results in the grid varying a bit. That's an easy fix - but it's interesting in the image below because it makes it very clear where each move was.

Here's a self portrait. You can see a variation in the color intensity between the moves of the robot. I realized this is because the robot is not staying in each location for the same amount of time. The robot checks whether it should move every 200ms, while the exposure time is only 500ms. So when the robot is triggered to move early, the exposure can be nearly 40% shorter. This is also an easy fix - I'll just sample every 10ms. The PLC also supports interrupt-driven notifications - which would be even better.
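The arithmetic behind that 40% figure: the exposure can be cut short by up to one polling interval, so the worst-case fractional error is the poll interval over the nominal exposure. Shrinking the poll interval shrinks the error proportionally:

```python
def worst_case_variation(exposure_ms, poll_ms):
    """Largest fractional exposure error from polling quantization."""
    return poll_ms / exposure_ms

print(worst_case_variation(500, 200))  # → 0.4  (the ~40% seen in the photo)
print(worst_case_variation(500, 10))   # → 0.02 (sampling every 10 ms)
```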

Here's a portion of Vermeer's Girl with a Pearl Earring. This image again shows the variation in intensity. It also shows that where colors get very dark, the variation in RGB intensity of the pixels can become an issue. Amusingly, she has red-eye! Obviously all the images need more pixels to look good. That's accomplished with more robot moves, but also by moving between the pixels. It's possible to quadruple the resolution of each of these images by simply moving into the spaces between the pixels and updating the color values.

Van Gogh's Starry Night. With many more pixels this could be a nice image. It would also be interesting to experiment with moving the robot back a foot or so, reducing the intensity, and sweeping the robot with the lights on while the colors change. So you'd have an overlay of the two. You could get a layered look, a sense of depth, and a sense of motion.

Next Steps

I'll be doing more with this in the future. One goal is to see if I can make Chuck Close style images. Here's an example of one of his self portraits:

His work is much, much more interesting (more examples are available here). Each cell in the image is multi-colored. Also the rectangular array of cells is rotated 45 degrees rather than vertical and horizontal. His cells are also not all circular - but elliptical and triangular. My plan is to put the correct color in the center of each cell. Then generate a new color and move that in a circle around the center point. His cells have 3 or 4 colors each, and also cross the cell boundaries as he sees fit. That I cannot do. But it'll be interesting to see how far I can get.

It's also possible to leave the LEDs on while moving the robot, or to move the robot through space so that the 2D images become paintings in 3D. I'll be experimenting with many techniques (software changes) and I'll have another updated post in the future.

Other Methods

See this post for some other experiments I've done: Robotic Painting with a Line of Lights.