String art

Introduction

This is a variant of a similar past problem: draw something interesting, using a sequence of joined straight line segments, without ever lifting your pen. Or in this case, with one continuous thread.

As far as I can tell, the first realization of this particular idea was in 2016, when artist Petros Vrellis [1] arranged 200 pins evenly spaced around a 28-inch-diameter circular rim, and wound kilometers of thread stretched in straight line chords from one pin to the next, in a carefully determined order, to yield an image similar to the one shown below (created by me).

My implementation of string art, with one continuous thread wound around 256 pins across 8,927 chords. Full-size output image is 4096×4096 black on white, from 512×512 grayscale input (source Birsak et al. [2]).

Vrellis has not published any details of their algorithm, but since then, numerous others have done similar work. The most common approach has been a sequential greedy algorithm, starting with a grayscale input image and a blank white output image, rendering each thread chord in the output as a pixel-approximated line segment, at each iteration selecting a next pin so that the resulting chord makes the most improvement to the result.

Most of these approaches had a quirk that didn’t appeal to me: they effectively treat the thread as if it were not actually opaque, so that overlapping thread chords increase the darkness of the corresponding pixels in the output image (or in some cases, reduce the “remaining” darkness of the pixels in the input as each new thread overlaps them). That is, dark thread on top of dark thread yields an even darker intersection.

The motivation for this post is to explore the alternative described in 2018 by Birsak et al. [2], who instead upscaled the input image to a higher-resolution output, with each input pixel corresponding to a b×b square block of pixels in the output. When one or more threads (each one output pixel wide, purely black-on-white) intersect a block, the fraction of those b² pixels that are black is a more accurate approximation of the gray level that we can compare with the corresponding single input pixel.
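To make the block comparison concrete, here is a minimal sketch (the names and image representation are my own illustration, not taken from the paper or from the post's source) of computing a block's darkness as the fraction of its b×b output pixels that are black:

```cpp
#include <cassert>
#include <vector>

// Each input pixel maps to a b×b block of output pixels; the fraction of
// black pixels in that block approximates the block's gray level.
// black[y][x] is true if output pixel (x, y) has been covered by thread.
double block_darkness(const std::vector<std::vector<bool>>& black,
                      int block_row, int block_col, int b) {
    int count = 0;
    for (int i = 0; i < b; ++i)
        for (int j = 0; j < b; ++j)
            if (black[block_row * b + i][block_col * b + j])
                ++count;
    return static_cast<double>(count) / (b * b);  // 0 = white, 1 = black
}
```

This fraction is what gets compared against the corresponding single input pixel's target gray level.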

I thought it would be interesting to implement just this upscaling idea, but otherwise to skip the complexity of the other aspects of their algorithm (iteratively solving an integer programming problem, extracting an Eulerian trail from the resulting unordered set of chords, weighting more important regions of the target image, etc.), sticking to the simple sequential greedy approach of selecting consecutive chords. I was pleased with the results: fewer than 200 lines of C++ code (available on GitHub) that generate nice-looking images in just a few minutes.

Implementation

The program takes four arguments: the input image filename, the upscaling block size, the number of pins, and the output image filename. For example, to use 16×16-pixel output blocks per input pixel, with 256 pins:

string_art input.pgm 16 256 output.pgm
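As a sketch of the geometry (my own illustration; the post's source may compute this differently), the pins are evenly spaced around a circle inscribed in the square output image:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Place `pins` points evenly around a circle inscribed in an n×n image,
// returning integer (x, y) pixel coordinates for each pin.
std::vector<std::pair<int, int>> pin_positions(int n, int pins) {
    const double pi = std::acos(-1.0);
    const double r = (n - 1) / 2.0;      // radius and center, in pixels
    std::vector<std::pair<int, int>> p;
    for (int k = 0; k < pins; ++k) {
        double a = 2.0 * pi * k / pins;  // angle of pin k
        p.emplace_back(static_cast<int>(std::lround(r + r * std::cos(a))),
                       static_cast<int>(std::lround(r + r * std::sin(a))));
    }
    return p;
}
```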

Image files use the easy-to-parse Netpbm format, specifically the 8-bit grayscale binary “P5” format.
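For readers unfamiliar with the format, here is a minimal P5 reader sketch (my own, hedged approximation; the actual program's parsing may differ, and this ignores PGM comment lines). The header is "P5", then width, height, and maxval as ASCII decimals, then one raw byte per pixel; validating the dimensions up front also guards against the malformed headers a commenter mentions below:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Read an 8-bit binary "P5" PGM from f; returns pixels row-major, or an
// empty vector on any parse or read failure.
std::vector<unsigned char> read_p5(std::FILE* f, int& w, int& h) {
    int maxval = 0;
    if (std::fscanf(f, "P5 %d %d %d", &w, &h, &maxval) != 3 ||
        w <= 0 || h <= 0 || maxval != 255)
        return {};                       // reject malformed headers
    std::fgetc(f);                       // single whitespace before raster
    std::vector<unsigned char> pix(static_cast<std::size_t>(w) * h);
    if (std::fread(pix.data(), 1, pix.size(), f) != pix.size())
        return {};
    return pix;
}
```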

Initially, the output image “canvas” is all white. Starting at an arbitrary pin, we iteratively select the best next pin to wind the thread across. For each candidate chord, we use Bresenham’s algorithm to visit each pixel on the chord. The score for that chord is simply the average, over each of the intersected blocks that would change as a result, of the difference between the target gray level of the input pixel and the (potential) new fraction of black pixels in the block. Intuitively, we measure how important it is to continue to darken the intersected blocks of pixels.
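The scoring step might be sketched as follows (my own reconstruction of the description above, not the post's actual code; all names are hypothetical). Here target[k] is block k's desired darkness in [0,1], count[k] is the number of black pixels already in block k, and we walk the chord with Bresenham, tallying how many new black pixels would land in each block:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <map>
#include <vector>

// Average improvement (target darkness minus new black fraction) over the
// blocks this chord would change; higher is better, negative is worse.
double chord_score(int x0, int y0, int x1, int y1, int b, int blocks_per_row,
                   const std::vector<double>& target,
                   const std::vector<int>& count,
                   const std::vector<std::vector<bool>>& black) {
    std::map<int, int> added;            // block index -> newly black pixels
    int dx = std::abs(x1 - x0), dy = -std::abs(y1 - y0);
    int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1, err = dx + dy;
    for (;;) {                           // Bresenham line walk
        if (!black[y0][x0])              // only pixels not already black
            ++added[(y0 / b) * blocks_per_row + (x0 / b)];
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
    if (added.empty()) return -1.0;      // nothing would change
    double sum = 0.0;
    for (const auto& [k, a] : added)
        sum += target[k] - static_cast<double>(count[k] + a) / (b * b);
    return sum / added.size();
}
```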

Evaluating the score for each possible next chord, we select the one with maximum score, that is, that makes the largest average improvement to the blocks it would modify. We stop when that best average improvement is negative, that is, when we would make the image worse by continuing.
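The greedy outer loop can then be sketched generically (again my own illustration with hypothetical names, parameterized over a scoring and a drawing callback rather than tied to any particular canvas representation):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// From the current pin, score the chord to every other pin, wind to the
// best one, and stop when even the best chord has negative score (i.e.,
// continuing would make the image worse). Returns the pin sequence.
std::vector<int> greedy_chords(
    int pins, int start,
    const std::function<double(int, int)>& score,   // score(from, to)
    const std::function<void(int, int)>& draw) {    // commit chord to canvas
    std::vector<int> sequence{start};
    int current = start;
    for (;;) {
        int best = -1;
        double best_score = 0.0;
        for (int p = 0; p < pins; ++p) {
            if (p == current) continue;
            double s = score(current, p);
            if (best < 0 || s > best_score) { best = p; best_score = s; }
        }
        if (best < 0 || best_score < 0.0) break;    // stopping criterion
        draw(current, best);
        sequence.push_back(best);
        current = best;
    }
    return sequence;
}
```

Note that scores must be re-evaluated from scratch each iteration, since every committed chord changes the black-pixel fractions of the blocks it crosses.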

Results

The figure below shows the results for several example images. The top row is the 512×512 grayscale input (source Birsak et al. [2]). The middle row is the Birsak algorithm output, using a block size b=8 (and thus a 4096×4096 output resolution) and 256 pins. The bottom row is my implementation using the same settings. In each case, the label indicates the number of thread chords, and the total length of thread required (for a 630-mm diameter canvas).

String art comparison between (a) top row input images, 512×512 grayscale; (b) middle row Birsak et al. output, 4096×4096 with 8×8 blocks and 256 pins; and (c) bottom row showing my algorithm with the same parameters. Labels indicate the number of chords and total thread length in meters (on 630-mm diameter canvas).

For my own project, I used the same 256 pins, but a 384×384 input image, and a higher resolution block size b=16, for a 6144×6144 output image, which looks nice as a 24-inch, 300-dpi print, with the target input image inset as shown below.

Have you ever tried to photograph a cat?

References:

  1. Vrellis, Petros, A new way to knit (2016). http://artof01.com/vrellis/works/knit.html
  2. Birsak, M., Rist, F., Wonka, P., & Musialski, P., String Art: Towards Computational Fabrication of String Images, Computer Graphics Forum, 37(2) 2018, 263–274, doi:10.1111/cgf.13359

4 thoughts on “String art”

  1. I appreciate the simplicity of your approach! It’s portable and easy to follow along. Before even looking at your samples, the first thing I tried was a photo of one of my cats. It’s such a natural inclination.

    I haven’t looked at the Birsak paper, but your program treats intensity linearly. I think you get better results if you transform it with a gamma of 2.2 (or even just square it as an approximation) to more closely match human perception of intensity, a la sRGB.

    score_changed += pow(target, 2.2) - pow(current, 2.2);
    

    Getting rid of all those global variables would be great since it means you could more readily apply OpenMP to the pin loop and speed up image generation, especially for large numbers of pins. It would be easy to work it out in such a way that it works identically without OpenMP.

    I also came up with some “interesting” PGM inputs. 🙂

    $ printf 'P5 -1 -1 255\n' >buffer-overflow.pgm
    $ printf 'P5 65535 65535 255\n' >integer-overflow.pgm
    

    Related: You can fuzz your program without modifications very easily:

    $ afl-g++ -m32 -Os -fsanitize=address,undefined string_art.cpp
    $ mkdir i
    $ touch i/a
    $ afl-fuzz -m800 -i i -o o --args ./a.out /dev/stdin 4 16 /dev/null
    

    Though it seems afl has a lot of trouble penetrating that scanf.

    • Thanks, Chris! Re the perceptual distance model, my initial idea was actually to try to extend this string art approach to color. That is, imagine, say, three different spools (maybe RGB, maybe CMY, etc.), iteratively choosing which thread to wind next… and possibly even whether to wind above or below all of the existing threads (since that could matter with multiple colors). Then each block of pixels is a combination of four colors, maybe approximated as the corresponding linear combination in CIE-XYZ space, and the score could be some transformation of the before-and-after perceptual distances. I looked at that distance behavior for the purely black-on-white grayscale implementation here, in CIEDE2000, and decided to skip it since it made so little difference in the results.

You’re right about the lack of OpenMP-readiness. I did do some parallelization during experimentation/analysis here, but with large numbers of image generations rather than multithreading any single run (which I would have wanted to turn off anyway), playing with various combinations of the input resolution, block size (I thought 16×16 worked better than Birsak’s 8×8), as well as some approaches to randomly staggering the pin locations to try to mitigate Moiré effects, something that I didn’t elaborate on in the article (it’s most visible in my print of our cat).

  2. Pingback: Bresenham’s circles are (always?) optimal | Possibly Wrong
