AGA is a generative system that uses a robotic arm to autonomously or cooperatively create paintings. It uses a range of methods, including Markov chains, openCV, and chance, to design its compositions, translating digital images into real-world paintings on paper or canvas. It is designed to be a portable studio assistant, or an artist in its own right.
produced by: Wesley Talbot
While AGA might be seen as an extension of the artist’s hand, it actually stands somewhere between tool and agent. With the influx of hobby robotics and the growing accessibility of fast computation, our lives, and by extension the art world, have seen monumental changes. Computational works now exist far beyond the antiquated concept of ‘digital art’ (which is not to say these works don’t employ similar content, but that they are now capable of much more).
Interaction with such entities is no longer a given; these pieces no longer need us as participants to become valid. When A.I. is coupled with productive means (like a robotic arm), it enters at least the same sphere as every obscure artist whose works’ existence cannot be denied. In fact, the legacy artists leave behind will become systems of creation rather than the creations themselves; systems that will produce artistic pieces the artist will never see. This should be seen as inherently different from conceptual art such as the work of Sol LeWitt, who left instructions for artistic pieces: the difference lies in the fact that the systems of today will have a chance to grow and learn, while his remain static.
Concept and background research
This project represents the culmination of many influences and revelations for me, both personally and academically. On a scholastic level, the process of designing and constructing a robotic arm, with an interface, homemade hardware, and machine-learning techniques, drew on every course I undertook during my time at Goldsmiths: Creative Coding, Physical Computing, Data and Machine Learning, and Computational Arts-Based Research and Theory. It is also a work that I have personally aspired to for many years. Coming from a background in painting, with a strong interest in technology but no coding skills whatsoever, building a generative art system of even this humble a quality is a level of achievement I wouldn’t have thought possible even a few years ago.
My undergraduate studies were coupled with an apprenticeship to the painting professor at my local community college, an artist who believes strongly in serendipitous actions within the act of painting. From this experience, I built elaborate systems of pictorial assemblage, developing as a painter my own methods of artistic generation that relied on chance and looked for the unexpected mark. For example, I made a series of paintings whose content was derived from photographs taken of a running television set; the product was printed and smeared with paint, only to be re-photographed and finally manipulated in Photoshop. The result of these efforts was a set of images that coincidentally resembled poorly trained generative adversarial network (GAN) output. In essence, I was destined to find machine learning and AI art; the idea of using artificial intelligence was an un-germinated seed.
The core concepts investigated within the scope of this project are chance, agency, and ownership. Initially, what I wanted more than anything was to reinvent the process through which I create paintings; to make more exciting, more beautiful paintings than even I could imagine. Procedurally removing myself from the design process as much as possible offered me a vantage point that is close to unbiased: an opportunity to see the work as a spectator might, in its truest form. Following the procedural path forward led me here, but with this latest iteration of pictorial assemblage I have new questions to contend with. As the program I use becomes more advanced, how much control do I have? How much control do I want to have? If I can relegate the totality of the process to a machine, am I the artist or just a mechanic? And, of course, is it art? We are familiar with the novelty art stories that crop up from time to time, paintings made by children, elephants, dogs, monkeys, and so on, and the timeless argument over what is art. How is this different, and when will the novelty wear off? Furthermore, does it reduce the trade to its simplest form?

Ironically, the mechanization of art is more organic and natural than it may first appear. This was always going to be part of the evolution of art; history has shown us that as the sciences advance, art follows. In painting alone we have seen artists capitalizing on every advancement: anatomy studies in the 1500s, the camera obscura refining working processes, Cubism and its claimed origins in the theory of relativity, and so on. There will come a time when the realm of art does not belong to us alone, and it may even be more insightful, intriguing, and beautiful than anything we could have produced as a collective.
When looking at this project from a contemporary perspective it would be easy to point to other artists who have made robots that paint. My first exposure to robot-made paintings was the work of Pindar Van Arman, yet despite being aware of his work, I don’t see it as conceptually influential on the work AGA generates. More influential are the works of Mario Klingemann and his artificial-intelligence art, whose visuals are far more inspiring for what the landscape of art could entail. It is not simply the act of a machine putting paint on canvas; for that I would much rather point to actual painters. The most influential artist I would cite for this piece as a whole is Albert Oehlen, specifically the works from his Computer Painting series. Both visually and conceptually I am intrigued by his use of current technology to further the medium of painting, which aligns with whatever body of work AGA could produce. Another similar element in Oehlen’s work is his use of found imagery: taking printed advertisements, he paints over the printed image in a coordinated work with technology. Whether or not this was the intention, I see it as a combined effort between his work (the painting) and the computer’s work (the print). This give-and-take of working with technology is what I am investigating. Finally, I wouldn’t be making this art without the inspiration I got from the painter Budge Hyde and his large-scale Cinema Verite series. The importance placed on random chance within that work reveals a cohesive element through the body of work that is both beautiful and natural. As mentioned, achieving this effect was a guiding principle for the work I have made in the past, and specifically within this project.
The program is written in openFrameworks from scratch, with the exception of the StandardFirmata sketch pre-loaded onto the Arduino Uno for communication between the computer and the robot. It uses the openCV library for one of the painting options to find contours, pushing them into an array and painting them as a second element. It uses the webcam and a double for loop to reduce images into pixels that are large enough to be translated onto the real-world canvas. It has been loaded onto a Raspberry Pi via SSH and set to run on boot, so that every time it is turned on it generates a random painting and then paints it. I designed and built the robot myself, using MG 669 and SG ky66 servos, and it connects to the Arduino through a self-made “shield”. The frame of the robotic arm was self-designed, drawn up in Adobe Illustrator, laser-cut, and hand-assembled. The mathematics were solved by applying the principles of inverse kinematics, where the angles are solved for using a series of triangles instead of matrices. I also experimented with RunwayML during the process to see what effects could be generated with generative adversarial networks (GANs).
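The image-reduction step described above can be sketched as a double for loop that steps across the source image in blocks and averages each block into one coarse "mega-pixel" large enough to paint as a single dab. This is a minimal illustration, assuming a row-major grayscale buffer; the function name and block size are hypothetical, not taken from the project code.

```cpp
#include <cstddef>
#include <vector>

// Reduce a w-by-h grayscale image (row-major, one byte per pixel) into
// blocks of size `block`, averaging each block into a single value.
// Illustrative sketch of the double-for-loop reduction described above.
std::vector<unsigned char> reduceToBlocks(const std::vector<unsigned char>& img,
                                          std::size_t w, std::size_t h,
                                          std::size_t block) {
    std::vector<unsigned char> out;
    for (std::size_t y = 0; y < h; y += block) {         // step down in blocks
        for (std::size_t x = 0; x < w; x += block) {     // step across in blocks
            unsigned long sum = 0, count = 0;
            for (std::size_t by = y; by < y + block && by < h; ++by)
                for (std::size_t bx = x; bx < x + block && bx < w; ++bx) {
                    sum += img[by * w + bx];
                    ++count;
                }
            out.push_back(static_cast<unsigned char>(sum / count));
        }
    }
    return out;
}
```

Each output value then maps to one brush position on the canvas, so the block size directly controls how many strokes a painting requires.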
This project suggests a series of future developments that range from simple technical modifications to updates that would expand the scope of its capabilities. The first change I would like to make is to work towards finer accuracy. While the point of this is not to re-invent the printer, I would like for it to be able to paint figures, insignia, and more detailed forms; this would allow a process of randomization that isn't reduced to color choice alone. Ultimately I would like this project to include web scraping, updating photos from its surroundings, GANs, and more. These inputs, combined with an ability to accurately paint its imaginings, would mean that it could generate a piece unique to its time, place, and "current thought". Given its portable nature, it could be as expressive and individual as a human counterpart. I would also like to include variation in brushstrokes by weight of brush, which would mean possibly adding a second grip to rotate between. Another important addition would be the ability to create GANs, which would require a large amount of computing power.
While I am extremely happy with how the project turned out, there are definitely some things that I would have liked to include but didn't. There are also some aspects of its functionality that could be improved.
For starters, I would have liked for it to have been more accurate. Calculating the inverse kinematics was extremely difficult, and some things that worked on paper didn't ultimately work in real life. As the arm rotates from one side of the canvas to the other, the distance from the base of the arm to the flat canvas changes. I'm not sure what to call this effect, and therefore couldn't find an equation to account for it; I did come up with one myself, but it doesn't seem perfect.
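One plausible geometric model of the effect described above: if a flat canvas sits at perpendicular distance d from the base pivot, then when the base swivels by an angle phi away from centre, the straight-line reach to the canvas surface grows with the secant of that angle. This is a hedged sketch of the geometry, not necessarily the correction derived in the project.

```cpp
#include <cmath>

// Assumed model: a flat canvas at perpendicular distance d from the base
// pivot. When the base rotates by phi radians away from the centre line,
// the distance from pivot to canvas surface along the new heading is
// d / cos(phi). Illustrative only; the project's own correction may differ.
double reachToFlatCanvas(double d, double phi) {
    return d / std::cos(phi);
}
```

At centre (phi = 0) the reach equals d, and it grows towards the canvas edges, which matches the behaviour the arm exhibits as it sweeps from side to side.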
There is a jittering effect on the paint-brush that is present when running off the Raspberry Pi, but not my laptop. This could be a lack of direct power to the servos (though I've tried adding batteries), it could be an error when selecting between pulse states within the servo itself (I tried rounding the angle being called), or it could be interference with the pulse-width modulation caused by some leak in the wiring (I tried placing a capacitor to remove any noise). So, not for lack of trying, I never did get rid of that shaking. Fortunately, people seemed to really like that defect, calling it "cute" or saying it looked as if "it was REALLY trying hard".
There were more than a few difficulties in working from home, challenges that had to be overcome in creative and sometimes desperate ways. What I found most damaging was how it affected the timing of the project's overall development. This is in fact the first robot I have ever built, so there was a learning curve to overcome. A crucial aspect of the project was to have the physical robot present as soon as possible, as many aspects of computation and calculation relied on real-world feedback. At the outset of the lockdown, my initial step was to begin with the software in lieu of the hardware, trying to gain as much ground as possible while stuck at home. This led me to work on some components that would ultimately not be included in the final version, because of later time constraints caused by the initial lack of equipment, tools, and materials.
Another challenge in developing the software was that I needed specific measurements to code the calculations for all of the inverse kinematics (i.e. the math that allows me to accurately move a tooltip to a predetermined position). There was also a huge difference between what the math determined and what the real-world translation of that math would actually produce (accuracy of servos, pulse-width modulation differences, power supply, etc.).
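The triangle-based approach to inverse kinematics mentioned earlier can be sketched for a generic planar two-link arm using the law of cosines: the two link lengths and the line from base to target form a triangle, which yields the elbow and shoulder angles directly. This is a minimal illustration under that assumption; the link lengths, function name, and the absence of the real arm's joint offsets are all simplifications, not the project's actual solver.

```cpp
#include <cmath>
#include <utility>

// Two-link planar inverse kinematics via the law of cosines.
// l1, l2: link lengths; (x, y): target tooltip position in the same units.
// Returns {shoulder, elbow} angles in radians (one of the two solutions).
std::pair<double, double> solveIK(double x, double y, double l1, double l2) {
    double d2 = x * x + y * y;  // squared distance from base to target
    // Law of cosines on the triangle formed by the two links and the
    // base-to-target line gives the elbow angle.
    double c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    if (c2 > 1.0)  c2 = 1.0;    // clamp: target out of reach
    if (c2 < -1.0) c2 = -1.0;   // clamp: target too close to the base
    double elbow = std::acos(c2);
    // Shoulder angle: direction to the target minus the triangle's
    // interior angle at the shoulder.
    double shoulder = std::atan2(y, x)
                    - std::atan2(l2 * std::sin(elbow), l1 + l2 * std::cos(elbow));
    return {shoulder, elbow};
}
```

A quick sanity check is to run the result back through forward kinematics (summing the link vectors at the solved angles) and confirm the tooltip lands on the requested point.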
Trying not to fall further behind, I ended up prototyping and eventually building an entire robot from cardboard and found items around the house. This was a particularly difficult and time-consuming step considering I don’t have any tools in my apartment; as an international student I was not able to bring saws, knives, screwdrivers, hammers, etc. Instead, the first prototypes were made using only a Leatherman (a pocket-sized multi-tool) and a pocket ruler. Furthermore, many of the components were made out of household materials: milk jugs, plastic bottles, lots of ruined cardboard, yogurt containers, a shoestring, lots of glue, wooden coffee stirrers, an old student ID, a used-up tape roll, and screws from previously purchased toy pianos. Unfortunately, I was also required to repurpose parts from old Physical Computing projects, which meant destroying them (servos, wires, perf board, etc.) in service of the final project. If I’d had access to the lab tools and materials, this would have been a much faster process.
Some initial variables that I needed to know were how the arm would move, how accurately I could place a point, and, most importantly, the dimensions of everything. So, after creating the finalized version of the robot out of cardboard, I started to do the calculations to move it (consequently, the robot I built once I gained access to the lab had to be an exact replica of the cardboard version). The difficulty with cardboard as a construction material was that it was not especially accurate: when I would set certain parameters they would gradually change, because the cardboard was much too pliable. I had to reconstruct the arms on many occasions because of how fragile it was. To illustrate: because I was figuring out the math on my own, it took some trial and error to get the correct equations, and as a result it wouldn’t take much to overshoot a trajectory and slam the arm into the floor or wall, which would cause the cardboard to instantly buckle (absolutely heartbreaking to watch; I would then build a new part as quickly as possible). There were also issues with the hinges, which wore out very quickly and became inaccurate. This exacerbated how long the calculating stage took. Eventually, after I completed all of the math, I created an Adobe Illustrator file to the exact specifications of the cardboard, knowing those calculations to be correct, and waited only for access to the laser-cutter. Making the final version out of wood had an unforeseen effect: the wood was much heavier, forcing me to purchase new servos, which then needed to be calibrated and the math recalculated to account for the difference.
During the initial prototype process, I did not have ready access to a soldering iron or electronic parts. Thankfully, through a chance encounter with a neighbor, I was able to borrow a soldering iron after some weeks. As fortunate as that was, the tip of the iron was missing the plating that lets solder melt and flow properly, and it was very difficult and time-consuming to use. With this equipment, I began ripping wire and parts from some of my former Physical Computing 2 projects and reused them to finalize the prototype version of this project. The final missing component was a constant power supply. At the time I was using a pack of AA batteries taped together to run the whole project, and so I could only run tests for two minutes before the batteries would die. I was eventually able to obtain a rechargeable battery pack, but still had to recharge it after five minutes of running (which I did endlessly; it eventually melted from overuse). This made it very tedious to run tests. It was only after the lab had opened, and I was able to use the laser cutter and build and wire the project, that I could actually test the robot for the first time. Here I found several issues I was previously unaware of, issues I would have caught earlier if I had been able to run the robot for more than a few minutes. It was then essential to tackle these problems before I could even consider going forward with the software components; it wouldn't have mattered how good the software was if the robot couldn't function properly. Being able to test beforehand would have greatly improved my chances of successfully calculating the inverse kinematics, brush strokes, and more.
In summation, I feel this was a much more time-consuming project than I had realized, even without the additional time constraints added by the Covid-19 pandemic. Most of this project’s success relies on its physical abilities, which I made every attempt to refine with very limited resources, working blindly on code in the hope that it would translate well. I had ignorantly assumed that if the math was accurate on paper, it would provide some level of consistency or reliability in the actual running of the project. This was wrong: with so many variables in the electrical engineering, testing was pivotal to understanding its capabilities. With so many steps stalled or slowed down, I missed out on some of the more refined aspects that would have come later in the process. I regret having to abandon some of the more ambitious ideas, like the inclusion of a GAN, or spending more time refining its precision, because I was laboring over the physical hardware. That being said, I do not intend to abandon this project once submitted; I will continue to work on it, as it is something I am genuinely interested in.