
Recursus

Recursus is a triptych of computational literature composed of ‘WordSquares’, ‘Subwords’ and ‘AIT’, three modules using recursion and neural text generation.

produced by:  Jérémie C. Wenger

Introduction

This piece displays separate facets of a literary practice seeking new ways of intertwining the classical act of writing with computation. The three modules present in Recursus are the following: ‘WordSquares’, squares of letters in which all rows, columns and diagonals are words; ‘Subwords’, decompositions of longer words into smaller ones found within them, without any letter left unaccounted for; lastly, ‘AIT’, prose-like texts written with the help of a neural network. In the case of ‘AIT’, the network was trained on a body of personal texts written over the past five years, a series called ‘it I-VI’, and the generated results were recomposed, corrected and edited as if they were my own drafts.

Concept and background research

My thoughts and development can be traced here. My core reference in all things related to constraint, computational processes and literature is the Oulipo, and Georges Perec in particular. These were the main influences for the type of constraint used in ‘WordSquares’ and ‘Subwords’. When transitioning to neural text generation, other figures came to the fore, although none as literarily or aesthetically relevant as the ones mentioned: what I found was mostly other experiments (by Allison Parrish or Ross Goodwin, for instance, see this post) or toolkits (such as Max Woolf's library textgenrnn), and I am in the process of looking into Continental inspirations as well.

Overall, as more fully developed here, my journey went from a very stringent approach to computational literature, which involved building databases (the space of possibilities generated by a constraint) before mining that space in search of good pieces, to something more subtle, where the constraint still allows for my personal input: the generated output of a neural network, which I require myself not to discard but to improve as much as I can, is flexible both in the first phase, where I choose which texts to feed the network and how the network is designed, and in the final one, where I rework the result to my heart's content.

Technical

For ‘WordSquares’ and ‘Subwords’, the two core technologies used are: the DAWG, or Directed Acyclic Word Graph, which reconfigures an alphabetically ordered list of words into a highly efficient look-up structure; and recursion, whereby one can systematically browse all the possibilities of a given process (when building a database). For both modules, this meant making one step (checking whether a possibility is available for a given slot in the square grid, or at a given letter in the word being decomposed) and, if that step was successful, opening a new space of possibilities (going one layer deeper in the recursion) that depends on the previous step and in which the systematic search can go on. All details are explained in this and that repo.
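
The shape of that recursion can be illustrated on the simpler of the two modules. The sketch below decomposes a word into smaller dictionary words with no letter left over; a plain Python set stands in for the DAWG, and the names (WORDS, decompose) and the toy word list are illustrative, not the project's actual code.

    # Minimal sketch of the recursive search: a plain set stands in for the DAWG.
    WORDS = {"a", "an", "ant", "the", "them", "hem", "anthem"}

    def decompose(word, start=0, parts=None):
        """Yield every way of splitting `word` into dictionary words,
        with no letter left unaccounted for."""
        if parts is None:
            parts = []
        if start == len(word):          # every letter consumed: one solution found
            yield list(parts)
            return
        for end in range(start + 1, len(word) + 1):
            piece = word[start:end]
            if piece in WORDS:          # the step succeeds: open a deeper layer
                parts.append(piece)
                yield from decompose(word, end, parts)
                parts.pop()             # backtrack and try the next split

    for solution in decompose("anthem"):
        print(solution)                 # ['an', 'them'], ['ant', 'hem'], ['anthem']

The same pattern (test one slot, recurse, backtrack) drives the square-building code, with the DAWG making each look-up cheap.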

For ‘AIT’, the bulk of the work, which is still ongoing, was to get to grips with neural networks, and especially LSTMs, or Long Short-Term Memory networks, in order to be able to use them productively. All my thoughts and progress are recorded here. The networks were built with the aforementioned library textgenrnn, which runs on TensorFlow and Keras.
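
In practice the textgenrnn workflow is quite compact. The sketch below shows its general shape; the file name, epoch count and temperature are illustrative assumptions, not the settings actually used for ‘AIT’.

    from textgenrnn import textgenrnn

    textgen = textgenrnn()                      # default character-level LSTM
    textgen.train_from_file('it_corpus.txt',    # placeholder name for the source texts
                            new_model=True,     # train from scratch on this corpus
                            num_epochs=10)
    # Generate drafts at a given temperature (higher = more surprising),
    # to be recomposed, corrected and edited by hand afterwards.
    textgen.generate(5, temperature=0.8)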

Future development

As noted above, there is a rather clear distinction between ‘WordSquares’ and ‘Subwords’ on the one hand, and ‘AIT’ on the other.

For the former, the possible improvements include:

  • Speed the code up further, either by porting it to C++ or by using Cython;
  • Find a way to get access to a multicore computer to search for larger squares (which would take weeks on my laptop);
  • Solve the bug mentioned at the end of the WordSquares Readme: the Python multiprocessing implementation produces different results depending on the number of processes, which was not at all fatal for the show (hundreds of thousands of squares were still available), but remains a major technical issue;
  • Refine my source dictionaries, possibly using web scraping to access other resources than those I have found so far (mostly on GitHub);
  • Develop my command of mining and visualisation tools (Bokeh, TensorBoard), the database itself being vectorised with techniques like Word2Vec before being 'reduced' by PCA or t-SNE, so that my approach to large result databases becomes less crude and more diverse (a minimal sketch of such a pipeline follows this list).
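
To give a sense of what such a pipeline could look like, here is a minimal sketch: each square is embedded as the average of its word vectors and projected to two dimensions for plotting (PCA here, t-SNE or a Bokeh plot being drop-in alternatives). The pretrained vectors, loaded through gensim's downloader, and the toy 'squares' are placeholder assumptions, not the project's actual data.

    import numpy as np
    import gensim.downloader as api
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    wv = api.load("glove-wiki-gigaword-50")     # small pretrained word-vector model

    squares = [                                 # toy stand-ins for real 4x4 squares
        ["fast", "acre", "sere", "test"],
        ["word", "oboe", "role", "deed"],
        ["cold", "area", "lobe", "deed"],
        ["mast", "acre", "sere", "tall"],
        ["tall", "area", "lane", "lard"],
    ]

    def embed(square):
        """Average the word vectors of a square, skipping unknown words."""
        vecs = [wv[w] for w in square if w in wv]
        return np.mean(vecs, axis=0)

    points = PCA(n_components=2).fit_transform(np.array([embed(s) for s in squares]))
    plt.scatter(points[:, 0], points[:, 1])
    plt.show()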


For ‘AIT’, which is my main focus for the future, the task is to improve my knowledge and command of neural networks, the mathematics behind them, and the appropriate libraries (TensorFlow, Keras, PyTorch, etc.), so that I can not only use existing libraries but, hopefully, come up with my own architectures. The question of how to practise machine learning is also on the table (using cloud computing, investing in a GPU...).

Self evaluation

Apart from one clear technical issue in the multiprocessing implementation mentioned above, I see two lines of critique:

Whereas with ‘Subwords’ the results were clear and surprisingly easy to select (the sheer number of lovely pieces, and the time needed to parse the database, being the only impediments encountered so far), with ‘WordSquares’ the very same process proved far more difficult, as expounded here. The issues are:

  • the databases grow huge very quickly (I estimated that the number of 4-squares without diagonals produced with my large dictionaries would be in the hundreds of millions...);
  • when calculating squares with smaller dictionaries, the constraint can be so tight that only very few pieces end up being generated (hence the advantage of building larger databases with more comprehensive dictionaries, at the cost of pushing the difficulty into the mining phase);
  • once the large database is created, it is populated by an unfathomable ocean of meaningless or nonsensical pieces (so many of them use extremely rare words or abbreviations that only make sense after extensive dictionary searches, cf. any post here, with definitions at the bottom), which is mind-numbing to mine;
  • the rather crude method used so far (picking words I like, creating a subset, repeating), on top of relying on chance to find results, is bound to overlook squares that could convey an interesting meaning without using any salient word (one idea to discover them would be to treat each square as a bag of words, turn it into a vector with doc2vec, and look for matches against a large existing corpus, thus finding squares that 'sound like normal sentences'; a sketch of this idea follows the list).
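
As a rough illustration of that last idea, the sketch below trains a tiny Doc2Vec model (gensim 4.x) on a stand-in corpus, infers a vector for each square treated as a bag of words, and prints the nearest corpus sentence; the corpus, squares and parameters are all illustrative assumptions, and a real run would use a much larger corpus.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    corpus = [                                   # tiny stand-in for a large corpus
        "the test was fast and fair",
        "a word in the role of a deed",
        "the acre lay cold under the lane",
    ]
    documents = [TaggedDocument(words=s.split(), tags=[i]) for i, s in enumerate(corpus)]
    model = Doc2Vec(documents, vector_size=50, min_count=1, epochs=40)

    squares = [["fast", "acre", "sere", "test"],  # toy stand-ins for real squares
               ["word", "oboe", "role", "deed"]]

    for square in squares:
        vec = model.infer_vector(square)              # the square as a bag of words
        tag, score = model.dv.most_similar([vec], topn=1)[0]
        print(square, "->", corpus[tag], round(score, 3))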


With machine learning, the core aim yet to be reached is to develop a sense of both command and 'freedom' in handling networks myself (rather than, as this time, a theoretical study and some tweaking of existing networks on the one hand, and the import of a ready-made library on the other). I am quite satisfied to have nearly completed Parag Mital's course on TensorFlow, which is not text-oriented, and I look forward to the day when I can devise my own architectures for text generation.

Finally, there is a potential question regarding the aesthetics of the piece: it is not entirely clear that its hybrid nature, combining two constrained approaches (one mainly recursion- and database-oriented, the other building a neural production pipeline), really works and creates the coherence necessary for an independent work of art. One could have imagined three different pieces, without any overarching theme or structure. The answer to that is circumstantial: for most of this year I worked within the first kind of framework (hard constraint, databases), and had developed squares without presenting them for evaluation, but had started feeling I was hitting a wall and was not progressing as much as I wished. I could have stopped there and presented, say, only ‘WordSquares’ and ‘Subwords’, but that would have meant that most of the work had been done by early June. At the same time, it had become clear that I wanted to explore the possibilities of machine learning, while realising the difficulty of the challenge. It seemed difficult to present only a piece based on these techniques, given the time at hand, yet it also felt impossible not to work on the subject, given its growing importance in my practice. ‘Recursus’ is the result of that transition and, as such, less a unified piece than a display of the passage between two frameworks, the coexistence of various research threads and, ultimately, a snapshot of the practice as a whole (where themes, images and facets of my sensibility can be found across genres and techniques).

References

Ouvroir de Littérature Potentielle on Wikipedia.
Allison Parrish on decontextualize.com.
Georges Perec, Alphabets, Paris: Galilée, 1976.

Rebecca Fiebrink, Machine Learning for Musicians and Artists (Kadenze)
Mikhail Korobov, Python DAWG implementation (Anaconda)
Parag Mital, Creative Applications of Deep Learning with TensorFlow (Kadenze and repo)
Max Woolf, textgenrnn (repo)

Keras
TensorFlow

More references on recursus.co