
4. Experimental animations

Previous sections of this document have described how orthodox approaches to 3D software often focus on control, and involve a comportment which resembles Heidegger’s description of Enframing and challenging-forth. This section describes a practice-led enquiry aimed at developing approaches to 3D software which encourage a comportment of active receptivity, or bringing-forth.

Motivated by the search for particular qualities of practice (i.e. by a particular feeling or experience of making), this research has involved the creation of a large number of short 3D animations. As the research progressed, I began to recognise a number of creative strategies that were used repeatedly. These strategies are interrelated and overlapping, but I refer to them by the following names: Playing with Software, Playing with History, Animation as Relation, Colour as Light, Geometry as Shadow, Working from Sketches and Working from Life. Using these strategies as an organisational device, this section describes a number of experimental animations, creative strategies and custom tools, with a focus on those that reflect significant moments in my enquiry.

Research methods

Reflection on prior practice

This research draws on my previous experience as a 3D animator, and as a visual artist using traditional media. Since graduating with a fine arts degree in 1996, I have worked with a variety of creative media including pencil, charcoal, paint and clay. In 2001 I began using 3D animation software.

Over the past 15 years I have been employed as a 3D animator in the advertising industry, have delivered 3D animation courses at several tertiary institutions and have worked on personally motivated 3D animation projects. In my role as a 3D animator I have engaged with the global 3D community through online forums, courses and tutorials and I have created work that approaches photorealism (e.g. Figure 4.1), as well as work that strives for a sketchy or painterly aesthetic (e.g. Figure 4.2).

Figure 4.1: Still from The Deep (a five minute 3D animation completed in 2009) and a 3D architectural rendering of a bathroom (completed in 2010). These are examples of previous works which approach photorealism.
Figure 4.2: Still from Plant to Plant (a five minute 3D animation completed in 2008) and a development still from Saltram (an animated TVC completed in 2004). These are examples of previous 3D animations which approach a handmade aesthetic.

While I have shown that it is possible for a 3D animation to look like a moving drawing or painting, I have also noticed that digital and analogue media tend to encourage different artistic concerns. Through reflection on my prior practice, as well as my work with Default Whippet, I have identified a number of common 3D animation practices which derive from and encourage a comportment of control:

Against the backdrop of common 3D practices such as those listed above, my research uses a practice-led enquiry to explore a variety of alternative practices in order to develop approaches to 3D software which encourage a comportment of active receptivity. In the discussion that follows I describe a number of these alternative practices with a focus on those which were found to be most successful.

Taking notes during production

My production work has involved the creation of a large number of short 3D animations. While making these works I kept a project diary consisting of written and visual records which were useful during production and also when I later reflected on what (if anything) had been interesting, relevant or surprising to me. These records include images of works in progress, technical notes and snippets of code, as well as observations about my general mood and level of engagement.

At regular intervals throughout the production of each animation, I noted whether the process was engaging, boring, challenging or absorbing. I also noted whether I encountered surprises and, if so, whether they altered the direction of a project. I wrote about my plans for a specific project and, as I worked, I jotted down ideas and opportunities as they arose. Reading back over these notes, it’s clear that my plans continually changed. Selected pages from my project diary, which exists as a series of online documents, have been downloaded in PDF format and appear in Appendix A of this document.

Technical notes

Description of hardware and software

All of the animations created for this research use one of three commercial 3D software packages: Autodesk’s Maya, Side Effects’ Houdini, and Pixologic’s ZBrush. Maya is the primary software used for this research and the package with which I feel most comfortable (having used it for almost 15 years). Houdini is used for a few of the animations and, as its tools and workflows are different from those in Maya, it provides a point of comparison. ZBrush represents a third example of contemporary 3D software; its architecture is different from both Maya and Houdini. As discussed above, ZBrush is commonly considered an “artist friendly” package because its interface design encourages users to think of polygon modelling as “digital sculpting” (Pixologic, 2015). In the past I have used ZBrush to create detailed polygon models; in this research it was used to create photographic textures for Default Whippet.

Figure 4.3: Screenshot of ZBrush being used to texture Default Whippet.

Scripting

Many of the experimental animations described in this section use existing tools in an unorthodox way, or they use custom-made tools created as part of this research. Some of these custom tools take the form of simple scripts (executed from Maya’s Script Editor or via a shelf button) which are used on one or two projects, while others take the form of graphical user interfaces (GUIs or UIs) and are used for a whole series of works.

Some of the scripts simply speed up repetitive tasks, while others enable ways of interacting with the software which would not otherwise be feasible. To undertake this research, it was necessary for me to acquire rudimentary programming skills and I chose to learn the object-oriented programming (OOP) language, Python. A relatively simple language, Python can be used to customise several 3D software packages including Maya and Houdini. Maya’s native scripting language is MEL and Houdini’s is VEX; some scripting in these languages was also undertaken as part of this research.
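As an indication of scale and style, a script of the simpler kind might look something like the following hypothetical example, written for Maya’s Python interface (it is not one of the actual research tools):

import maya.cmds as cmds

# Rename every selected object with a numbered prefix, a repetitive
# task when a scene contains many saved model iterations.
for i, node in enumerate(cmds.ls(selection=True)):
    cmds.rename(node, 'sketch_{:02d}'.format(i))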

Custom tools discussed in this section are available in Appendix B of this document as Python code (which has been converted to PDF format for readability). Rather than contributing to a system of standard software solutions, these tools should be regarded as works in progress.

Description of practice-led enquiry

For Default Whippet (Figure 1.2, above), I was after as much control over the dog's form and movement as possible; but in my experimental animations I wanted to find ways of varying my level of control over the work. This is analogous to the idea, common among painters, that the paint should be given some freedom to do what it wants: to splash, to dribble and to make marks that are not entirely intentional. A painter can vary their level of control of a work by altering the consistency of the paint, throwing paint at the canvas or altering their grip on the brush. And a painter can respond to accidental marks by leaving them, adjusting them or painting over them. In search of equivalent approaches to 3D software, my animations explore the following questions:

How can a user vary their control over the software in order to elicit glitches or surprise outcomes? How can they respond to these surprise outcomes and how can these responses be incorporated into a final animation?

Playing with Software

Many of the animation projects described below aimed to elicit suggestions from the software to which I could respond. In my search for suggested form and suggested movement, I used several strategies repeatedly. The first of these I refer to as Playing with Software.

Playing with Software is different from using software because it is less concerned with what a tool has been designed to do and is more concerned with what the tool actually does.

Cow Reduce

Figure 4.4: Cow Reduce animation.                                         

An example of Playing with Software is Cow Reduce, an early animation which I created by animating Maya's Reduce tool. This tool is primarily used to create a low-resolution version of a high-resolution model. This is especially useful in situations where file sizes need to be kept to a minimum, such as in the production of computer games. The Reduce tool is designed to give the user immediate visual feedback while they work but it is not normally a tool that is animated.
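The basic manoeuvre can be sketched in a few lines of Maya Python (the frame range and values here are illustrative, not the project’s actual settings):

import maya.cmds as cmds

def animate_reduce(mesh, start=1, end=100):
    # Apply the Reduce tool, keeping its node in the construction history.
    reduce_node = cmds.polyReduce(mesh, percentage=0)[0]
    # Keyframe the node's percentage so the mesh decays over the timeline.
    cmds.setKeyframe(reduce_node, attribute='percentage', time=start, value=0)
    cmds.setKeyframe(reduce_node, attribute='percentage', time=end, value=95)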

Wobbling Primitives

Some of my software experiments involved playing with simple polygon primitives and one day I noted in my project diary, “I am just changing the creation parameters of a pyramid primitive ... as an animation it gets a great little wobble as divisions are added” (Appendix A, Project Diary: General, 17/4/13). Intrigued by the “great little wobble” that you can see in the movie at Figure 4.5b, I recorded the steps I had taken in my project diary:

make a polygon pyramid primitive (number of sides = 5), scale it on Y axis, PolySplitRing on each side (split from bottom to almost to the top), move new edges outward (the further you move them the more extreme the wobble), animate the PolyPyramid “subdivisions height” attribute (in this case it goes from 1 to 100). (Appendix A, Project Diary: General, 17/04/13)

These steps are illustrated in the viewport snapshots (Figure 4.5a).

Figure 4.5: a) Three viewport snapshots showing steps taken to create Wobbling Primitives. b) Wobbling Primitives 3D animation.

The animated forms and the great little wobble described above result from characteristics of the software’s architecture; specifically they are the result of construction history. In the context of 3D animation, "history" refers to data stored during the creation of a 3D mesh. When you create a mesh the software stores certain information about how it was made and (in Maya, as well as some other 3D programs) this stored information is known as construction history or simply as history.

An Autodesk manual from 2009 defines history as “A buffer in which the construction steps of an object are stored for later editing” (Derakhshani, 2011, p. 584). As indicated by this definition, in orthodox 3D practice altering the construction history of an object can be a useful way of editing. However, as the list of stored construction steps grows, small changes to a model’s history can yield surprising results. This is because the list of construction steps is entangled, i.e. the result of one step depends on the results of previous steps. In order to avoid unpredictable deformations, we normally delete construction history while we work, but many of the experiments discussed here explore the surprising deformation and movement that result from this historical dependence. Used to elicit suggested form and suggested movement, this strategy is what I refer to as Playing with History.
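Following the pyramid recipe quoted above, the strategy can be sketched in a few lines of Maya Python (the interactive split and edge moves are elided):

import maya.cmds as cmds

# Create a pyramid and keep the creation node in its construction history.
transform, creation_node = cmds.polyPyramid(numberOfSides=5)

# ... interactive steps: scale on Y, split each side, move new edges out ...

# Keyframe the stored creation parameter; every downstream history node
# re-evaluates against the new subdivision count, producing the wobble.
cmds.setKeyframe(creation_node, attribute='subdivisionsHeight', time=1, value=1)
cmds.setKeyframe(creation_node, attribute='subdivisionsHeight', time=100, value=100)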

Playing with history

Hand

Like the wobbling pyramid (Figure 4.5), the hand models (Figure 4.6) were also created by Playing with History. In this case I deliberately made the model on the right and then changed its history to cycle through a variety of forms such as those you can see to the left of this original model. My favourite is the form on the far left because, although evocative of a hand, it is not something that I could have designed in advance or even imagined. In other words, this form says something about hands but it’s not how I would think to describe a hand.

Figure 4.6: A composite image showing stills from Hand.

Blinding Tree

Figure 4.7: Blinding Tree animation.                        

Another example of Playing with History is Blinding Tree, which started as a simple polygon cube. After extruding faces to create branches, I changed properties of the original cube, which resulted in a series of unpredictable mesh deformations. As with the hand and wobbling primitive examples above, I enjoyed cycling through the tree’s glitched forms in the Maya viewport. As in Wobbling Primitives, for Blinding Tree I keyframed attributes on the cube so that they changed throughout the timeline. In this way the cycling is animated, so that what I saw in the viewport is similar to the rendered animation. The movie in Figure 4.7 shows how I also animated the tree’s shader so that light passes over the tree object. This reminds me of the trees and passing cars that I see each evening on my ride home.

Traditional animation principles: Conformity and disruption

In 3D character animation we normally strive for as much control as possible over a character’s movement and its deformation. Figure 4.8a shows how in Default Whippet I carefully positioned the dog’s internal skeleton to gain control over her form, and Figures 4.8b and c show how I meticulously laid keyframes to control her movement.

Figure 4.8: a) Viewport snapshot showing how I carefully positioned joints for Default Whippet; b and c) Viewport snapshots showing how I animated Default Whippet using keyframes.

Default Whippet uses a standard approach to 3D character animation; an approach which is nicely summarised in a 1987 paper by John Lasseter. A highly acclaimed animator and screenwriter, Lasseter is currently chief creative officer at Pixar and Disney studios.

In his 1987 paper “Principles of Traditional Animation Applied to Computer Animation” (Lasseter, 1987), Lasseter describes how principles used to create traditional 2D hand-drawn animation can be applied to 3D computer animation. He stresses that, for animators, knowledge of these principles is essential to the production of quality work. Lasseter also suggests that these principles should inform 3D animation software design and says “an understanding [of traditional animation principles] is important to the designers of the systems used by these [3D] animators” (Lasseter, 1987, p. 35). The animation principles that Lasseter describes were designed primarily for character animation and their main goal is to give a character “personality” (Lasseter, 1987). Traditional animation principles include “Squash and Stretch”, “Anticipation” and “Exaggeration” (to name a few) and they are familiar to many 3D animators, including myself. Having absorbed these principles by watching and creating animations over many years, I regard them as part of the vast background which informs my research. However, rather than adhering to these principles, my experimental animations try to escape from or to disrupt them.

Rather than finding principles or methods that help animators achieve an already familiar style of movement, my research seeks strategies which encourage close attention to styles of movement that emerge.

For Lasseter, Pixar and many traditional animators, “it is important to make the personality [of a character] distinct, and familiar to the audience” (Lasseter, 1987). These animators want to communicate unambiguous stories and images. By contrast I am interested in imagery which is evocative but that might be difficult to define or to describe.

According to Lasseter, “all actions and movements of a character are the result of its thought processes” (Lasseter, 1987). This is an interesting statement because it suggests an emphasis on deliberate, consciously motivated activity. The implication of Lasseter’s statement is that Pixar characters are largely motivated by conscious ideas or thoughts. But what about movements that aren’t consciously motivated? What about feelings or sensations that aren’t attached to thought? And what about thoughts and ideas that arrive through activity? Lasseter emphasises the realm of conscious and deliberate actions, and this emphasis applies to Pixar characters as well as to Pixar animators. He says, “In order to get a thought process into an animation, it is critical to have the personality of the character in mind at the outset” (Lasseter, 1987). In the context of my research, I am wary of Lasseter’s advice because I sense that pre-designing the personality of a character or the style of an animation might result in missed opportunities. Working toward a predefined goal might distract me from paying careful attention to emergent characters.

My research aim is to develop strategies that focus on emergent properties of an image or animation. I think of these as “bottom-up” strategies compared to traditional approaches, such as Lasseter’s, which are more “top-down”.

Spline Whippet

Figure 4.9: Viewport snapshot from Spline Whippet.

With Default Whippet, I wanted to control the dog’s form and movement as much as possible; but in Spline Whippet (Figure 4.9) I was seeking subtle surprises. As the name suggests, the construction of Spline Whippet involved the use of curves (or splines) to guide the extrusion of faces on a mesh. This is not an unusual way to create a character model but normally, after completing the model, we would delete its construction history and bind the mesh directly to joints. In Spline Whippet the construction history was not deleted and the guide curves, rather than the mesh, were bound to the joints. As a result of this workflow there are subtle movements and deformations that I didn’t explicitly design. In Spline Whippet I didn’t entirely relinquish my control over the dog’s form and movement but I reduced it considerably. If my approach to Hand and Blinding Tree was like throwing paint at the canvas, Spline Whippet was more like painting with a brush attached to a long and flexible stick. As well as suggested form, I found that Playing with History can elicit suggested movement.

Library Man

Figure 4.10: The pencil sketch which I referred to when modelling Library Man (Figure 4.11, below).

Library Man is another example of how a character’s personality or disposition can emerge through a process of discovery. After using a pencil sketch (Figure 4.10) to create a figure model, in this project I experimented with different code fragments to drive the rotation of joints. In the course of this project I asked myself simple questions such as “how do I make the amount that the head turns non-constant?” (Appendix A, Project Diary: Library Man, 24/01/13). And, as indicated by the following excerpt from my project diary, I found a variety of answers:

found this to change direction;
if ( frame % 60 == 0) $xDir = rand( -45,45 );
pCube2.rotateY = $xDir;
Found a way of creating random loop!!!!;
if (frame == 1) seed(1);
translateY = rand(5). (Appendix A, Project Diary: Library Man, 24/01/13)
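The excerpt above is Maya expression code (MEL). A loose Python paraphrase of the “random loop” idea (names are illustrative) makes the logic easier to see: reseeding the random number generator at frame 1 means the same sequence of “random” directions is reproduced on every playback.

import random

direction = 0.0

def head_rotation(frame, period=60, seed=1):
    # Reseeding at frame 1 makes the random sequence repeat,
    # so the "random" movement loops identically on every playback.
    global direction
    if frame == 1:
        random.seed(seed)
    if frame % period == 0:
        direction = random.uniform(-45, 45)
    return direction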

Figure 4.11: Library Man animation.

The character of Library Man emerged through a process of continual discovery. In the movie above (Figure 4.11) he looks agitated or nervous but, throughout production, Library Man exhibited a variety of subtly different moods or behaviours.

Sine Whippet

Figure 4.12 shows early animation experiments which attempt to bring characters to life using maths functions such as noise, linstep and sine. In these examples a character or style of movement emerges through a mathematical process which is still playful but is more obviously constrained by self-imposed restrictions.

Figure 4.12: a) Detail from project diary showing working sketches for Sine Whippet and similar experiments; b) Viewport snapshot showing my attempt to animate Default Whippet using maths functions; c) Sine Whippet animation.

Rather than using a function to drive the motion on each part of a character separately, in these experiments I made links between parts (i.e. objects) so that the motion on one object defines the motion of others. For example, in Sine Whippet (Figure 4.12c) it is the movement on the large cube in the dog’s chest that drives the motion of other cubes in her body, neck and head. Using a time offset function, movement of the chest is distributed throughout the dog’s body.
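This relation can be sketched schematically as follows (hypothetical names; the actual expressions varied between experiments). Each driven cube evaluates the same function as the driver, offset in time:

import math

def cube_motion(frame, index, amplitude=1.0, frequency=0.2, delay=5):
    # Each cube evaluates the same sine wave as the chest cube (index 0),
    # but `delay` frames later, so the motion ripples down the body.
    t = frame - index * delay
    return amplitude * math.sin(frequency * t)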

Creating movement by defining the relationship between one object and another was a strategy used in Sine Whippet and also in Library Man. As my appreciation of this animation strategy increased I began to call it Animation as Relation and I would go on to explore it in a variety of ways.

Animation as relation

Experimental animations discussed so far in this document use standard interface tools in an unorthodox manner, but as my research progressed it became apparent that the ability to make my own custom tools would allow me to explore a wider variety of approaches.

After 15 years of using 3D software with no programming knowledge, for this research I needed to learn to program. According to Douglas Rushkoff (2010), we shouldn’t think of programming as the boring stuff; it doesn’t have to be a rote activity or involve the implementation of a preconceived design. Rushkoff is one of many programmers who insist that programming is creative and can be improvisational (Maeda, 2004; Rushkoff, 2010). In accordance with this sentiment, the tools described below were developed in an improvisational fashion. Sometimes I was motivated by a perceived need (“I need to find a way of doing X”), other times I was motivated by a question (“What would happen if I did Y?”), and often I was intrigued and inspired by a surprise outcome (“Wow, look at that! I accidentally did Z”). In the same way that the experimental animations emerged through an improvisational process, so too did the custom tools.

Auto Expression

One of the first custom tools that I created was Auto Expression. This is a custom user interface (UI) that allows the animator to quickly link different object properties using mathematical expressions. There are countless types of properties (or attributes) that can be linked using this custom tool including those that define an object’s colour, size, shape, position or orientation. The screen capture video at Figure 4.13 shows the tool being used to define relationships between elongated cubes.

Figure 4.13: Demonstration video showing my Auto Expression UI in use.

In the video above, the cube to the right of screen has been defined as the driver object. Rotation values of this cube have been animated using keyframes. In the first half of the video the cube on the far left is the single driven object; its location on the Y axis (i.e. its vertical position) has been linked to the driver cube’s rotation value. After defining this relationship, you can see how the driven cube (left of screen) moves up and down when the driver cube (right of screen) rotates. The second half of the video shows what happens when a single object (and this example uses the same driver cube on the right of screen) drives multiple objects. Toward the end of this example you can see the wave effect that results from changing the attribute which I call “child delay”. This attribute offsets the movement of driven objects in time. Using Auto Expression involves defining relationships and then playing through the timeline to view the resulting movement on driven objects. This movement feels like a suggestion from the software, to which the user responds by tweaking relationships between (driver and driven) objects.
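In essence, a tool like this writes Maya expressions on the user’s behalf. The following is a simplified sketch of that idea in Maya Python, not the tool’s actual code; the attribute names and the delay mechanism (sampling the driver’s value a few frames in the past) are illustrative:

import maya.cmds as cmds

def link(driver_attr, driven_attrs, scale=1.0, child_delay=0):
    for i, driven_attr in enumerate(driven_attrs):
        offset = i * child_delay  # the "child delay" time offset
        # Each driven attribute reads the driver's value `offset` frames ago.
        expr = '{d} = `getAttr -time (frame - {o}) {s}` * {k};'.format(
            d=driven_attr, o=offset, s=driver_attr, k=scale)
        cmds.expression(string=expr)

link('pCube1.rotateZ', ['pCube2.translateY', 'pCube3.translateY'],
     scale=0.1, child_delay=3)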

Woman with a Bag

Woman with a Bag (Figure 4.14c) uses the Auto Expression UI to link cubes so that the height of one fence post drives the height of others.

Figure 4.14: a) The pencil sketch which inspired Woman with a Bag; b) A working playblast; c) Woman with a Bag animation.

Woman with a Bag was also the testing site for a second custom tool, which I call Auto Keyframe because it creates many keyframes with the click of a button.

Auto Keyframe

Auto Keyframe is a tool for creating animation which cycles, loops or repeats and, just like Auto Expression, it works by defining relationships.

Figure 4.15: Demonstration video showing my Auto Keyframe UI in use.

To use Auto Keyframe the animator first selects an object which they want to keyframe (again, this is the driven object). They then select another object which has already been keyframed (this object is the driver). After specifying how many keyframes define a loop, the user fills out a spreadsheet in order to define the relationship between existing keys (on the driver) and yet-to-be-created keys (on the driven). After completing a spreadsheet for each driven object, the user clicks a button to create all the keyframes which have been defined. The animation is then viewed and, in response to what they see, the user can move the new keyframes to tweak the movement of objects. Alternatively, they can tweak keys on the driver object or they can adjust the values on the spreadsheet and then press the red button to automatically recreate all keys. This process of watching and tweaking is likely to be repeated many times.

The Auto Keyframe UI is flexible, i.e. the number of tabs and columns expands and contracts according to the number of keyframes in a given cycle. Data defining the relationship between a driven object and its driver is stored as text files on the computer. One option for a user is to create, copy or manually edit these files in a text editor; the files can then be loaded from within Maya to build the interface with user-defined default values.
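Stripped of its UI, the core step can be sketched as follows (Maya Python; the mapping format is illustrative and stands in for the tool’s spreadsheet/text-file data):

import maya.cmds as cmds

def auto_key(driver, driven, attr, mapping):
    # mapping: one (frame_offset, value_scale) pair per driver key in a cycle.
    plug = '{}.{}'.format(driver, attr)
    times = cmds.keyframe(plug, query=True, timeChange=True) or []
    values = cmds.keyframe(plug, query=True, valueChange=True) or []
    # Copy each driver key onto the driven object, shifted and scaled.
    for (t, v), (offset, scale) in zip(zip(times, values), mapping):
        cmds.setKeyframe(driven, attribute=attr, time=t + offset, value=v * scale)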

Of all my custom UIs, this one took the longest to make and it is perhaps the most ambitious. It is also the custom tool that I have used the least. Auto Keyframe was challenging and fun to build but (at least at first) it was less fun to use. When used on Woman with a Bag (to create the woman’s walk cycle), Auto Keyframe felt cumbersome (i.e. unnecessarily complex). However, there was to be a later experiment using Auto Keyframe which was more successful (see Figures 4.16 and 4.57).

Peds Prance

Figure 4.16: Playblast (i.e. series of viewport snapshots) of Peds Prance animation.             

Peds Prance (Figure 4.16) involved several characters (not just one as with Woman with a Bag) and Auto Keyframe worked well in this context because, having defined object relationships once, I could copy and alter these definitions for use with other objects. Auto Keyframe allowed me to quickly lay down a number of keys; pressing play I could then watch a character walk, sway or run. In response to what I saw, I then tweaked the keyframes and sometimes the spreadsheets.

In the past I have always created a character’s walk cycle from scratch, usually with a particular outcome in mind and with assumptions about the movement that would best convey a certain emotion or would best suit a particular character model. Apart from the obvious advantage of saving time, automatically generating keys and then responding to what you see feels different from starting with a clear objective and with a “blank slate” (i.e. with no movement). The difference between these approaches is subtle and difficult to describe but it’s true to say that with Peds Prance there were times when I felt that I was helping characters become what they wanted to be.

Flocking Whippet

A common way of working with movement suggested by 3D animation software is to use computer simulation. This means using a computational model which has been designed to simulate the behaviour of a system. 3D software simulations usually involve the use of particles. A particle is a point in space which, by default, doesn’t render but can be visualised in a number of ways. We can add colour to particles or we can replace them with geometry (as I did for Flocking Whippet, Figure 4.17 and 4.18). 3D animators commonly use particle systems to create things such as “dust, fire, rain, snow, flocking birds, swarming bees, or magic pixie dust” (Beane, 2012, p. 214). For phenomena such as dust and fire, particles are likely to be animated using virtual forces (e.g. virtual wind or turbulence) but for flocking birds or swarming bees, particles might be treated as autonomous characters or autonomous agents (C. W Reynolds, 1999) and this is the approach taken in Flocking Whippet.

Figure 4.17: Flocking Whippet animation with 500 whippets.

In Figure 4.17 you can see 500 whippets running together. The location and orientation of each whippet corresponds to that of a particle. For each frame in this animation the position of the particles is calculated by the software according to algorithms (or rules) which I have written using VEX, Houdini’s native coding language. In a sense, the movement of each whippet has not been directly specified by me, the animator, because, although I have written computer code specifying rules to describe the system, I have not specified the location of each individual particle.

Flocking Whippet was completed while participating in an online Houdini course hosted by computer graphics artist Shawn Lipowski, and it uses an approach to character animation based on the model proposed by Craig Reynolds in his 1987 paper, "Flocks, Herds, and Schools: A Distributed Behavioral Model" (Craig W. Reynolds, 1987). In this paper, Reynolds outlines how the behaviour of individual particles within a system can be defined by a few simple rules. In Flocking Whippet, the behaviour of particles is defined by rules for alignment, separation and cohesion. These algorithms ensure that the orientation of a particle is similar to that of its nearest flockmates; that it doesn’t get too close to its flockmates, and that it doesn’t get too far away. Each particle (or autonomous character) behaves according to these simple rules but complex flocking patterns emerge because the position of one character is determined by the position of others.
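The per-particle update behind these three rules can be paraphrased in Python as follows. The research code was written in VEX inside Houdini; the weights and neighbourhood radius here are arbitrary placeholders:

import numpy as np

def flock_step(positions, velocities, radius=5.0,
               w_align=0.05, w_separate=0.1, w_cohere=0.01):
    new_velocities = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < radius)
        if near.any():
            # alignment: steer toward the mean heading of nearby flockmates
            new_velocities[i] += w_align * (velocities[near].mean(axis=0) - velocities[i])
            # cohesion: steer toward the centre of nearby flockmates
            new_velocities[i] += w_cohere * offsets[near].mean(axis=0)
            # separation: steer away from flockmates that are too close
            new_velocities[i] -= w_separate * (offsets[near] / dists[near, None] ** 2).sum(axis=0)
    # Advance positions by one frame (timestep of 1 assumed).
    return positions + new_velocities, new_velocities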

Figure 4.18: The same Flocking Whippet animation rendered through another camera.

While making Flocking Whippet I never tired of tweaking algorithms and playing through the animation, enjoying the fact that the whippets seemed to have a mind of their own and ran in a new formation every time. As described by Reynolds in his paper, when working with simulated characters, it can feel like you are working with something that is alive. Reynolds notes that “One of the charming aspects of the work reported here is not knowing how a simulation is going to proceed from the specified behaviors and initial conditions; there are many unexpected, pleasant surprises” (Craig W. Reynolds, 1987, p. 27). He also notes that there are times when our lack of control over simulated characters can be frustrating (Craig W. Reynolds, 1987, p. 27). The fact that character movement is not directly controlled by the user prompts Reynolds to suggest that “the person who creates animation with character simulation might not strictly be an animator” (Craig W. Reynolds, 1987, p. 27).

Given his insistence that an animator should precisely define their goal and achieve it by adhering to the 12 principles, Lasseter might agree with this sentiment. Published in the same year (1987), the papers by Lasseter and Reynolds describe different approaches to computer animation. Most contemporary 3D packages are designed to accommodate both these styles of approach, i.e. a “hands on” approach where character movement is set using key frames, and a more “indirect” approach where the user manipulates attributes of a system and the software is left to calculate the details.

Flocking Whippet started off as a flock of birds and ended up as a flock of whippets running frantically in a white void. I didn’t set out to make these characters and their environment but they evolved through a process characterised by discovery and response. I started by exploring the movement of particles and later decided to confine particle movement to a horizontal plane and add whippets. Whether using traditional methods (like those outlined by Lasseter) or using computer simulation, in 3D character animation we usually start with a character model (built in a static pose) and then make decisions about how that character will move. Flocking Whippet differed slightly from this approach because I started with movement and added characters to suit. At the time, the idea of characters emerging through movement intrigued me and the movies below (in Figure 4.19) show how I briefly explored it further.

Force First

Starting with a particle simulation, Force First began in the same way as Flocking Whippet. However, in this experiment, instead of adding whippets, I added simple cubes to the particles and linked some of their properties (e.g. colour and shape) to their movement.

Figure 4.19: Force First playblast (showing particle movement without geometry attached) and rendered animation.

I’m still intrigued by what kind of characters could emerge through this approach, and would like to explore it further. A painter can begin by making abstract marks on a canvas. Prompted by what the marks suggest, the painter can then progressively add marks until recognisable forms emerge. The approach taken in Force First is analogous and suggests that movement, rather than painted marks, can act as a suggestion.

When Reynolds was writing in 1987, he was describing a revolutionary approach to animation; today, simulation tools have become an indispensable part of most 3D animation packages. Knowing how to use these tools is part of being a 3D animator. But knowing how to use simulation tools is not the same as knowing how they work. Flocking Whippet and Force First involved writing simulation algorithms rather than simply using existing tools, as I have often done in the past. This experience shifted my view of computer simulation; I now appreciate these tools as ingenious ways of describing things, not as accurate or definitive explanations.

As my research progressed, I learnt more about computer simulation and computer programming. As well as enabling me to create my own tools, acquiring programming experience shifted my general perception of 3D software. As my skills improved, I began to see that software is composed of packages of code which can be explored and which can potentially be repackaged. I realised that before acquiring these skills I had experienced orthodox 3D tools and procedures as somehow “natural”, settled or fixed. To an extent, I had even assumed they were the final word in accurate representation.

According to Rushkoff, we don’t necessarily need to learn to program but we do need to learn that programming exists (Rushkoff, 2010, p. 8). It was by writing computer code that I gained awareness of its existence.

Modelling as animation

A 3D character is usually created at the centre of the world and in a default pose. And like other 3D objects, it is usually modelled in isolation, removed from any context or environment. “Modelling as Animation” is the name I have given to an unorthodox approach to 3D character animation enabled by a custom UI that I called Modelling as Animation (see Figure 4.20).

Figure 4.20: a) Viewport snapshot taken while coding the Modelling as Animation custom UI; b) Viewport snapshot showing the UI in use.

The Modelling as Animation workflow uses only observational sketches as reference, with no pictures or movies downloaded from the internet. Importantly, this workflow doesn’t require the model to be created in a default pose. Instead the model is created in an observed pose and, as the internal skeleton moves from one pose to the next, the deformed model is amended to fit the new pose. One result of this approach is that the model’s topology can be kept very simple and still capture key pose characteristics. Another result is that the model stretches and deforms in unusual ways.

The Modelling as Animation UI allows an animator to work quickly; one click of the red button completes a collection of tasks that might otherwise take several minutes.

With observational sketches as reference, the user starts by creating a simple polygon model. They then create joints (an internal skeleton) and attach (or bind) the model to the joints, which are then animated to deform the model. Up until this point, the only difference between Modelling as Animation and a typical character workflow is that it uses an observed pose, not a “relaxed”, generic or default pose. As I was to discover, this small change has significant ramifications.

Figure 4.21: Diagram showing steps involved in the Modelling as Animation workflow.

Because the model is simple, moving from one pose to the next requires model alterations. For example, at the start of Conference Figure (viewport snapshots shown above in Figure 4.21 and animation shown in Figure 4.22, below) the man’s hand and his head are one continuous geometric form but, as the man moves his hand from his head, that one form has to be separated into two. Scrolling through the timeline, the character deforms, and the user responds to these deformations by making changes to the model: adding detail, removing detail, joining models, or splitting them in two. As they work, the user periodically clicks the red button to duplicate the model, set visibility keyframes, and bind the model to the joints. I have described the workflow here as a linear set of steps but, after creating the initial joints, these steps are repeated many times, and in any order.
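As a rough indication of what the red button bundles together, the following Maya Python sketch is illustrative rather than the UI’s actual code:

import maya.cmds as cmds

def red_button(mesh, joints):
    frame = int(cmds.currentTime(query=True))
    # Duplicate the working model as a snapshot of this moment.
    snapshot = cmds.duplicate(mesh, name='{0}_f{1}'.format(mesh, frame))[0]
    # Key visibility so the snapshot appears only from this frame onward.
    cmds.setKeyframe(snapshot, attribute='visibility', time=frame - 1, value=0)
    cmds.setKeyframe(snapshot, attribute='visibility', time=frame, value=1)
    # Bind the duplicate to the skeleton so it follows subsequent poses
    # (in the actual workflow the working mesh remains editable).
    cmds.skinCluster(joints, snapshot, toSelectedBones=True)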

Conference Figure

Conference Figure was the first animation made using the Modelling as Animation UI. It took about two hours to make using the tools as described. What you see in Figure 4.22 is a number of different models, appearing and disappearing in quick succession. "Glitchy" moments (when the model deforms in unusual ways) are remnants of the unforeseen outcomes or accidents that have been deliberately kept.

Figure 4.22: Conference Figure animation.

In 3D animation, there are a number of modelling conventions which are aimed at minimising unpredictable deformations; examples include regularly deleting construction history and using three or four-sided faces. Modelling as Animation abandons these conventions and welcomes unexpected forms as suggestions to respond to; to keep, alter, or discard.

Like straight-ahead animation techniques such as paint-on-glass, charcoal or plasticine animation, this workflow encourages improvisation. For example, I began without knowing what this man’s face would look like; Figure 4.22 shows the enjoyable process of adding detail and watching the character emerge. As well as working straight ahead, with this workflow it’s easy to move backwards in the timeline. An animation can be amended at any time by adding or deleting models, and making timing or movement changes.

MasA Whippet

In order to compare Modelling as Animation with a standard approach to 3D character animation, I created MasA Whippet, which uses as its basis the animated skeleton of Default Whippet.

Figure 4.23: MasA Whippet animation.

With Default Whippet, the production process was a means to an end. I was after a particular outcome but exactly how I achieved that outcome didn’t seem important – as long as the process was efficient. The way that I interacted with the software while making Default Whippet (e.g. whether I worked slowly or quickly, or was bored or excited) is not evident in the final animation. Figure 1.6 (above) shows the dog model in various stages of completion, but these iterations of the model do not appear in the finished work. This means that the hundreds of intuitive decisions that I made while modelling, as well as the speed or style in which I worked, are not explicitly evident. By contrast, Modelling as Animation incorporates some of these decisions.

With Modelling as Animation, the process is visible in the outcome. This, along with the continual discovery of surprising forms, made it an engaging experience. Without the need to achieve a single perfect model, I found that there was no temptation to obsess over one particular moment in the model’s evolution. Instead I was compelled to work fast, making many decisions quickly and duplicating the model at regular intervals.

With MasA Whippet, much more than Default Whippet, I sensed that the process itself, not just the outcome, was important.

Working from sketches

Most of the animations described in this research are motivated by everyday things in the world around me. Many of these animations use observational pencil sketches as reference and others involve working directly from life.

Figure 4.24: Sketchbook page showing some sketches of Ginger.

These practices (using observational pencil sketches as reference and working in the physical presence of things) are common among drawers and painters but they are less common among 3D animators. Comments made in the 2012 book “3D Animation Essentials” indicate why this might be the case. In this book the author gives the following advice to budding 3D animators:

Let’s say you need to model a tiger. Taking your desktop computer to the zoo and setting it up in front of the tiger exhibit is not really practical. Of course, you could take your sketchbook with you to the zoo and sketch a tiger – and this is a good way to get reference – but this would be time consuming. You could also take a camera to the zoo and take pictures, which would save you some time, but you’d still have to factor in the travel. As an alternative, the Internet is a great place to find images that you can use for references. It’s fast and you don’t have to leave your studio. (Beane, 2012, p. 85)

The author’s words are pragmatic and sensible and they reflect views dominant within the 3D animation community.

Even without having read this book, most 3D animators (including myself prior to this research) intuitively adhere to this advice. But is there something different about experiencing a tiger face to face instead of learning about tigers through internet images or books? If a 3D animator stayed away from internet images and photographic reference, how would their work be different?

Figure 4.24 shows the page from my sketchbook which inspired Spline Whippet, discussed above. As we can see from these small-scale dog studies, pen or pencil sketches are often indicative and unfinished. This is particularly true when sketching moving things (such as a person or a dog). These sketches are unfinished because I could only put down a few lines describing the dog’s head before she moved to look in another direction. The sketchy quality that we see in these images can be contrasted with 3D software's propensity to depict stable and completed forms.

Phone Figure

Like many of the animations described above, Phone Figure was inspired by a pencil sketch. For other projects (such as Spline Whippet and MasA Whippet) I referred to a full page of drawings, but Phone Figure is based on just one (shown in Figure 4.25a). With this sketch (which is the size of a postage stamp) as my only reference, I created a 3D character model. Rather than using one final model, Phone Figure (like the Modelling as Animation projects described above) cycles through various model iterations.

With Modelling as Animation, the user presses a button to save iterations of a mesh – but for Phone Figure, I developed a script which does this automatically, at prescribed intervals. This amendment meant that duplication of the mesh happened in the background while I worked and I no longer had to make decisions about when to duplicate the mesh. This made it even easier to achieve a state of “flow” (Csikszentmihalyi, 1990), i.e. to get lost in the process of modelling. With mesh duplication happening in the background it now seemed like I was working on a single model – however it felt different from standard polygon modelling because I knew that my working process was being recorded.
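A rough sketch of this background duplication follows (illustrative; the actual script may differ). Here a scriptJob checks the clock whenever Maya is idle:

import time
import maya.cmds as cmds

def start_auto_duplicate(mesh, interval=60.0):
    state = {'last': time.time(), 'count': 0}
    def tick():
        if time.time() - state['last'] >= interval:
            state['count'] += 1
            snapshot = cmds.duplicate(
                mesh, name='{0}_iter{1}'.format(mesh, state['count']))[0]
            cmds.hide(snapshot)  # keep saved iterations out of the way
            state['last'] = time.time()
    # Run the check whenever Maya is idle; returns an id for later cleanup.
    return cmds.scriptJob(event=['idle', tick])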

Figure 4.25: a) Sketch that I referred to while modelling Phone Figure; b) Phone Figure animation.

Inspired by an ambiguous sketch, I took Phone Figure in a number of subtly different directions (Has he got a coat on? Is he wearing glasses?) and, in the end, I was left with a multitude of models which were like crumbs left to mark a journey. The movie (Figure 4.25b) cycles through some of these model iterations.

Throughout this research I have found that Working from Sketches is a good way to avoid illustrating an image or idea that already exists, and instead to let the work itself take over. This is because sketches are “sketchy”; they are incomplete and don’t have the same authority as photographs or anatomy diagrams. Sketches can act as triggers or points of departure and, once the 3D project is underway, attention easily turns from a sketch toward an appreciation of the work on its own terms. In other words, the 3D work (in its current iteration) easily becomes the point of focus and calls for responses that aren’t necessarily planned in advance.

Building Phone Figure

In the foreground of the movie below (Figure 4.26) is Phone Figure, and in the background is a model of a building inspired by an apartment block situated across the road from the studio where I sometimes work. It is not an architectural masterpiece, but this building has often grabbed my attention on a sunny day because of the way that light plays across its facade. One afternoon I decided to work from the front seat of my car and, juggling a laptop computer and a graphics tablet, I looked out the windscreen at the building.

Figure 4.26: Building Phone Figure animation.

For Building I used the same basic approach as for Phone Figure: a script that automatically duplicates models while I work. Phone Figure, like other projects described above, was made in the comfort of my studio – but for Building my working conditions were far from ideal. For example, the laptop screen size was limited and the sun setting behind me made it difficult to see the viewport display. Building was difficult and uncomfortable but it was also absorbing; I found that working from life feels different to working in the studio using sketches or other reference. With projects described above I was mainly responding to digital things (the screen display, the software architecture and the animated work), but Building involved an ongoing response to physical things in front of me. After my first life-modelling session I noted that “I delighted in various shapes and shades that I discovered as my eye moved across the forms, and I wanted to record these shapes and shades quickly” (Appendix A, Project Diary: Plein air Still life, 14/05/14). Despite the inherent discomfort and inefficiency, I decided to further explore Working from Life and made a suite of tools to help. I call this custom toolset Plein air Still life and describe it in detail below.

Working from Life

Figure 4.27: Photo showing the tools I used when Working from Life.

By “working from life” I mean working in the physical presence of the things which I am studying. This is a strategy I have practiced many times in the past with painting, and which I continue to practice with drawing.

For Franck, Monet, Cézanne, and many other artists using pencil or paint, working from life is common practice. As mentioned above, it is not a strategy that I have previously used (or seen used) with 3D animation software.

Plein air Still life

One of the purposes of the Plein air Still life UI is to make a number of Maya’s different tools easily accessible in one location, minimising the need to navigate the software’s many menus. Like other tools discussed above, Plein air Still life automates several processes so that a single button performs multiple actions, enabling workflows that would otherwise be untenable. Like Modelling as Animation, a major feature of Plein air Still life is that it automatically saves iterations of a mesh. The video, diagrams, and description below further explain the tool’s features.

Figure 4.28: Demonstration video showing Plein air Still Life tools in use.

Plein air Still life tools can be conceptually divided into two sets. The first set (which I refer to as “Production Tools”; see Figure 4.29) is most useful during a life modelling session. These tools include things such as a colour palette, which makes it quick and easy to create a shading network and apply it to selected faces, as well as controls to specify how often a mesh is duplicated.

Figure 4.29: Diagram indicating several features of the Plein air Still Life Production Tools.
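As an indication of what a palette click does, the following Maya Python sketch builds a shading network for a chosen colour and assigns it to the selected faces (illustrative, not the tool’s actual code):

import maya.cmds as cmds

def apply_colour(rgb):
    # Build a small shading network for this colour...
    shader = cmds.shadingNode('lambert', asShader=True)
    cmds.setAttr(shader + '.color', rgb[0], rgb[1], rgb[2], type='double3')
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=shader + 'SG')
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
    # ...and apply it to whatever faces are currently selected.
    cmds.sets(cmds.ls(selection=True), forceElement=sg)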

The second set of tools (which I refer to as “Post-production Tools”; see Figure 4.30) allows the animator to easily collate a multitude of models into a single animation by setting keyframes on the visibility of each model.

Figure 4.30: Diagram indicating features of the Plein air Still Life Post-Production Tools.
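At their simplest, the post-production tools key the visibility of each saved iteration so that the models play back in sequence. A minimal, illustrative sketch:

import maya.cmds as cmds

def collate(models, hold=5):
    # Show each saved model for `hold` frames, in creation order.
    for i, model in enumerate(models):
        start = i * hold
        cmds.setKeyframe(model, attribute='visibility', time=start - 1, value=0)
        cmds.setKeyframe(model, attribute='visibility', time=start, value=1)
        cmds.setKeyframe(model, attribute='visibility', time=start + hold, value=0)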

In Plein air Still life experiments I think of the life modelling session (which uses production tools) as a way of collecting data or “raw material”, which the animator then works with to create an animation. Using Plein air Still life post-production tools, the raw material can be arranged in a variety of ways – which means that the same life-modelling session can result in a variety of different outcomes.

Plant

Plant (Figure 4.31) is the result of my first life-modelling session using Plein air Still life tools. Like Building, Plant started as a polygon cube and, while studying the plant in front of me, I altered the cube by adding more and more detail, applying colour to faces of the mesh as I worked.

Figure 4.31: Plant animation.

As well as simply storing iterations of a mesh, Plein air Still life also incorporates the option to blend or morph between mesh iterations. It achieves this by using Maya’s blend shapes (known in other 3D programs as morph targets). Instead of simply showing a number of models in sequence (as was the case with Modelling as Animation, Phone Figure and Building), in Plant the computer is interpolating between plant models. By adding the ability to interpolate between models, I hoped to create animations which were more than simple timelapse modelling videos. What I didn’t predict were the glitches that this addition to the workflow would introduce.

In the movie at Figure 4.31 there are moments when the plant seems to quiver, other moments when it turns itself inside out, and sometimes it almost disappears completely. These glitches of varying intensity all result from the use of blend shapes to morph between model iterations. When model amendments are made by repositioning existing vertices, the software interpolates easily between one model and the next. When amendments involve the addition of new vertices, these newly defined points are left behind while the others slide into position. When amendments involve deleting existing vertices, the software needs to reassign numbers to the vertices on the model, and major glitches occur. With practice, I developed a feel for how particular topology changes would elicit particular styles of glitch, but there were always surprises.
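The morphing step itself uses Maya’s standard blendShape command; the sketch below is an illustrative reconstruction, not the tool’s code. Disabling the topology check is what permits iterations with different vertex counts to be blended, and is where the glitches enter:

import maya.cmds as cmds

def morph(base, targets, hold=10):
    # Add each saved iteration as a blend shape target of the base mesh;
    # topologyCheck=False allows targets whose vertex counts differ.
    blend = cmds.blendShape(targets, base, topologyCheck=False)[0]
    for i, target in enumerate(targets):
        weight = '{0}.{1}'.format(blend, target)  # targets addressable by name
        cmds.setKeyframe(weight, time=i * hold, value=0)
        cmds.setKeyframe(weight, time=(i + 1) * hold, value=1)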

Figure 4.32: Stills from Plant with glitched form on the left.

Making Plant was entirely absorbing and, in many ways, it felt similar to observational pencil sketching, with the addition of a virtual third dimension. As well as enjoying the process, I was intrigued by the outcome. I especially enjoyed the moments of quivering or twitching that occur throughout the animation. These glitching twitches are the result of my intuitive modelling decisions in combination with intricacies of the software’s architecture.

Plein air Still life projects involved paying attention to everyday things in the world around me. With these tools I could build things as I see them. This felt different from approaches I had used in the past, which focussed more on how an object is “known” or assumed to be.

For example, it might seem obvious that a plant comprises leaves and branches or stems, and creating a 3D model of a plant normally involves assessing it as a collection of these (or similar) component parts. This is true whether using a Paint Effects plant or another standard approach to plant modelling. In Plant I wanted to avoid orthodox approaches in order to explore the thing in front of me with fresh eyes. To use Franck’s words, I wanted to explore a particular plant without considering it in terms of “plants in general”. As I worked on Plant, the mesh was often in a chaotic state and, reflecting on an afternoon of modelling, I wrote in my project diary:

resisting the urge to make sense ... i.e. resisting the urge to “wrap it up”, to simplify, to make the model look obviously “plant like”; by this I mean that I resisted extruding a stem from the base and then leaves from the stem/trunk. I tried to sit with it being chaotic. (Appendix A, Project Diary: Plein air Still life, 27/05/14)

While working on Plant, I had to continually resist the urge to “tidy up” my model and see it as a collection of leaves and stems. To borrow Merleau-Ponty’s words, I had to exercise a “tolerance for the incomplete” (Merleau-Ponty, 1993b, p. 88).

Castlemaine and Falls Creek

Inspired by this new approach, I went on to use Plein air Still life tools in a variety of locations, including my car, my house, cafes and public buildings as well as in parks and in the countryside. I also used them at different times of the day and I used them to study a variety of different things (e.g. plants, cars, buildings, people and animals).

Figure 4.33: Castlemaine animation excerpt.                        

For Castlemaine (Figure 4.33) I took a folding chair, a card table and my laptop computer into the countryside. While modelling Castlemaine I was surrounded by natural/organic/chaotic forms and I was somewhat overwhelmed by the complexity of my surroundings. It’s common to model a 3D landscape by creating different types of objects using different tools (e.g. one set of tools might be used for creating grass while another set is used for modelling rocks). As well as using different toolsets, each object is likely to be created separately before being moved into position. You can see in the Castlemaine excerpt (Figure 4.33) how I described many different things, including plants, rocks and trees, using a single mesh.

Figure 4.34: Playblast of Falls Creek animation.                        

Falls Creek (Figure 4.34) uses a similar approach but in this project I modelled architectural forms within the landscape. Although there were shrubs and small trees within sight, I was compelled to focus on a small hut and ski lift pylon because, when polygon modelling, it’s easier to create hard edged geometric forms than subtle, ambiguous or organic ones.

Like most of my life modelling sessions, my work on Falls Creek was interrupted when my laptop battery ran out. At this point I returned to my hotel room and I continued to model what I saw. This is why Falls Creek transitions from an exterior to an interior scene.

Boots

Like many of my experimental animations, Boots avoids breaking a scene into component parts and it also avoids duplication. The four boots in Figure 4.35 are treated as one mesh even though (as two pairs) it would be more convenient to model just two boots, duplicate them, mirror them, and then move them into position. One result of modelling things in context is that they are not aligned to the grid. This makes manipulation tools, such as move, rotate and scale, difficult to control and means that model amendments are always imperfect.

Figure 4.35: Boots Animation.                               

As you can see in the movies above, many of my Plein air Still life projects avoid complex lighting algorithms and they also avoid smooth shading. Instead of defining the value of shading parameters (such as diffuse, reflectivity and specularity) and getting the software to calculate tonal and colour modulation, in these projects I have applied shades of colour directly to objects. I refer to this strategy as Colour as Light and it could be described as a back-to-basics approach to 3D software (discussed above). Like modelling things in context, Colour as Light, by default, produces inaccurate results.

Colour as Light and Geometry as Shadows

Working with Plein air Still life encouraged me to explore dynamic and contextual features of perceptual experience. Focused on the physical world around me, I noticed how things are always moving, and lighting conditions are always changing. I also noticed how shifts in attention and changes in context changed what I saw. Using Colour as Light, I found that adding a colour to a mesh changes the look of existing colours on that mesh by changing their context. For example, adding a bright shade of orange makes existing colours look bluer. I also noticed that, as the hours passed and I became more attuned to the things in front of me (the shoes, the plant, the dog or the building), I could always discern more detail and find new colours.

I created Building (Figure 4.26, above) as the sun set behind me and I noted in my project diary that:

as the light changed I wanted to change the topology of the model (e.g. move verts of shadow faces upward) ... These topology changes were reflected or captured with the duplicated mesh, however the colour changes where [sic] not reflected. (Appendix A, Project Diary: Plein air Still life, 14/05/14)

What I had noticed when making Building was that adjustments made to shaders during production had no impact on the finished work: unlike topology changes, colour changes were not captured by the stored mesh iterations.

Chair

Figure 4.36: Chair animation.                               

The way that a painter refines colours as they work is captured in layers of paint. In my search for something similar, I added an option to Plein air Still life tools which allows the user to save “colour tweaks” (i.e. changes made to shaders during production). With this addition I could use Plein air Still life post-production tools to cycle through saved colours. In Chair (Figure 4.36) colours stored during a life modelling session are cycled through in a variety of ways.
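A minimal sketch of the idea, under my own assumptions rather than the toolset’s actual code: each tweak appends the shader’s current colour to a list, and a post-production step keyframes the shader through the stored values in order:

```python
import maya.cmds as cmds

_colour_tweaks = {}  # shader name -> list of stored (r, g, b) values

def store_colour_tweak(shader):
    """Record the shader's current colour as a 'colour tweak'."""
    rgb = cmds.getAttr(shader + '.outColor')[0]
    _colour_tweaks.setdefault(shader, []).append(rgb)

def cycle_colours(shader, frames_per_colour=12):
    """Keyframe the shader through its stored colours, in order."""
    for i, (r, g, b) in enumerate(_colour_tweaks.get(shader, [])):
        frame = i * frames_per_colour
        cmds.setKeyframe(shader, attribute='outColorR', t=frame, v=r)
        cmds.setKeyframe(shader, attribute='outColorG', t=frame, v=g)
        cmds.setKeyframe(shader, attribute='outColorB', t=frame, v=b)
```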

Shoes

Figure 4.37: Shoes animation.

Cycling through saved colours is also evident in Shoes (Figure 4.37), particularly when shades of white and blue move across the model. Obviously, storing RGB values and cycling through them in a finished animation bears little resemblance to the richness of layered oil paint, but it was still an interesting addition to the Plein air Still life toolset.

Auto Camera

Figure 4.38: Viewport snapshot showing the many cameras created when using my Auto Camera script.

While I enjoyed the process of modelling from life and found the results of my life modelling sessions intriguing, I often wondered how I could turn animated models (which exist as three-dimensional data) into two-dimensional movies. My tendency was to position a virtual camera at a point relative to the mesh which loosely corresponded to my real world position relative to the things I was studying. Alternatively, I positioned the camera so that it framed parts of the mesh that appealed to me. In most projects, the camera’s position moves between these two alternatives.

In order to remedy this arbitrary approach to camera animation, I decided to write a script which automatically saves the working camera (i.e. the viewpoint that I am working from) at regular intervals throughout the modelling session. The idea was that these cameras could then be culled and collated (based on certain criteria) to create an automatic animation. I thought that “taking the decisions away or at least having a starting point [from which] to work” (Appendix A, Project Diary: Plein air Still Life, 13/06/14) might reveal a new approach to camera animation. It took a lot of time to get the script working and ultimately I didn’t find the results interesting so I went back to animating the camera by hand (i.e. using keyframes).
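The mechanics might look something like the sketch below — a minimal version of the idea rather than my actual script — using an “idle” scriptJob that is throttled so the in-focus panel’s camera is duplicated at roughly regular intervals:

```python
import time
import maya.cmds as cmds

_last_save = [0.0]  # mutable container so the callback can update it

def save_working_camera(interval=30.0):
    """Duplicate the active panel's camera, at most once per interval."""
    if time.time() - _last_save[0] < interval:
        return
    panel = cmds.getPanel(withFocus=True)
    if cmds.getPanel(typeOf=panel) != 'modelPanel':
        return  # focus is not on a 3D viewport
    cam = cmds.modelPanel(panel, query=True, camera=True)
    snapshot = cmds.duplicate(cam)[0]
    cmds.hide(snapshot)  # keep the saved cameras out of the way
    _last_save[0] = time.time()

# The scriptJob fires whenever Maya is idle; the timestamp check above
# throttles it to roughly one saved camera per interval.
job = cmds.scriptJob(event=['idle', save_working_camera])
# ...at the end of the session: cmds.scriptJob(kill=job)
```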

Figure 4.39: Still from Shoes animation rendered from a different point of view.

Figure 4.39 shows the same project, Shoes, from above. You can see how different the shoes look when rendered from a different point of view. Arbitrary camera animation remains unresolved in these experimental animations; it would perhaps be more interesting for a viewer to be able to move around and interact with the models, as I can while working. This is something to be explored in future research.

Books

In relation to Plein air Still life I continued to ask myself, “what makes this different from yr [sic] average timelapse modelling movie?” (Appendix A, Project Diary: Plein air Still Life, 30/05/14). I found that one answer to this question was:

the fact that in practicing this technique I actually work (i.e. model and texture) differently ... just like when doing a charcoal animation you might work differently than you would when just doing a charcoal drawing. I’m not necessarily taking the EASY PATH/ the EASY OPTION in terms of making a model. (Appendix A, Project Diary: Plein air Still Life, 30/05/14)

I have explained how this refusal to take the easy option applies to Plant, above. Another example of how Plein air Still life tools changed the way I model is evident in Books, below. In this animation I tried breaking the mesh into parts so that, after working on one part and then another, the stored mesh iterations diverge. Once blend shapes have been added, this divergence results in a dynamic animation. I wondered if this could replicate the way that our eyes move across a form or around a scene.
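One way to interpolate between stored iterations — assuming, as in my projects, that every iteration shares the same topology — is a blendShape node whose target weights are keyed in an overlapping sequence, so the mesh cross-fades from one iteration to the next. The sketch below is illustrative rather than a transcript of the Plein air Still life code:

```python
import maya.cmds as cmds

def blend_iterations(iterations, frames_per_step=24):
    """Cross-fade a base mesh through a list of stored iterations."""
    base, targets = iterations[0], iterations[1:]
    blend = cmds.blendShape(*(targets + [base]), name='iterBlend')[0]
    for i in range(len(targets)):
        w = '{}.weight[{}]'.format(blend, i)
        t = (i + 1) * frames_per_step
        # Ramp this target up, then back down as the next one takes over.
        cmds.setKeyframe(w, t=t - frames_per_step, v=0.0)
        cmds.setKeyframe(w, t=t, v=1.0)
        cmds.setKeyframe(w, t=t + frames_per_step, v=0.0)
    return blend
```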

Figure 4.40: Books animation.

Franck calls his process “Drawing-Seeing” because, for him, drawing and seeing are one activity, not two (Franck, 1973). Similarly, in many of my experimental projects, modelling and seeing became one activity.

Rather than first observing the books in front of me and then modelling (or describing) them, it’s more accurate to say that Books involved observing (seeing and understanding) the pile of books through my use of the software. At the time of making this animation the pile of books had been on my table for weeks: making Books meant seeing them in a new way; exploring the ordinary as extraordinary.

Shadow play

Figure 4.41: Viewport snapshot taken while working on Shadow Whippet, discussed below.

Many of my experimental animations use Colour as Light in conjunction with another back-to-basics approach which I call Geometry as Shadows. This strategy involves describing shadows using geometry instead of using virtual lights. Using Geometry as Shadows means that shadows can be manipulated by moving vertices on a mesh, which feels more direct than adjusting the parameters of a virtual light. Colour as Light results in colour and shading that is not mathematically perfect and, likewise, Geometry as Shadows results in discrepancies between models and the shadows they cast.

Using Geometry as Shadows is often as simple as applying a dark (and sometimes semi-transparent) colour to a flat plane (this has been done in Shoes and Boots, above). But sometimes a shadow needs to move and deform with the main model, and for this I created a custom set of tools called Shadow Play. These tools are accessed via four shelf buttons which are used to create geometry that behaves either like shadows cast on the ground or like shadow rays. The movie at Figure 4.42 demonstrates how these tools can be used.

Figure 4.42: Shadow Play demonstration video.

In conjunction with Colour as Light, Geometry as Shadows provides an alternative to the use of complex lighting algorithms and it allows the user to focus on shadows as much as (or instead of) the objects that cast them.
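As a rough illustration of the simplest case — a shadow cast on the ground — the sketch below duplicates a mesh, flattens the copy onto the ground plane and shades it with a dark, semi-transparent flat colour. It assumes the mesh’s pivot sits at ground level; the names and values are illustrative and this is not the actual Shadow Play code:

```python
import maya.cmds as cmds

def make_ground_shadow(mesh, darkness=0.15, transparency=0.4):
    """Create an editable 'shadow' of a mesh as flattened geometry."""
    shadow = cmds.duplicate(mesh, name=mesh + '_shadow')[0]
    cmds.scale(1, 0.001, 1, shadow)  # flatten (assumes pivot at ground level)
    cmds.move(0.01, shadow, y=True, relative=True)  # lift to avoid z-fighting
    shader = cmds.shadingNode('surfaceShader', asShader=True)
    cmds.setAttr(shader + '.outColor', darkness, darkness, darkness,
                 type='double3')
    cmds.setAttr(shader + '.outTransparency', transparency, transparency,
                 transparency, type='double3')
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True)
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
    cmds.sets(shadow, edit=True, forceElement=sg)
    return shadow
```

Because the shadow is ordinary geometry, it can then be edited vertex by vertex like any other mesh, which is what makes the approach feel direct.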

Shadow whippet

Figure 4.43: Shadow Whippet 1 animation.

Created using Shadow Play, the Shadow Whippet animations in Figures 4.43 and 4.44 indicate how new tools can result in unexpected creative opportunities. Without Shadow Play tools I could never have envisaged the animation at Figure 4.44. In this movie the dog model has been hidden, leaving only the shadow geometry.

Figure 4.44: Shadow Whippet 2 animation.

Green Jumper

Compared to 3D software, paint and pencil are direct and immediate media. The image (or object) that a painter sees is the same image or object that will be before a viewer the moment the artist decides that the work is finished. This is not the case for a 3D user, because a 3D scene (as a digital file) exists as data and software gives us the ongoing capacity to modulate that data in various ways. Throughout the production of a 3D animation, a user has multiple ways to visualise their work; they can change how the viewport displays objects at any time.

For example, throughout production we can view a mesh in smooth-shaded or wireframe mode, view rig controls, or see simulated forces displayed as arrows. The visualisation style that the user chooses depends on the task at hand, and each style differs in various ways from a final render. One of the satisfying things about Colour as Light and Geometry as Shadows is that what you see in the viewport while you work is very similar to a rendered image. By comparing the location photograph with a still from the Green Jumper animation (shown side by side in Figure 4.45), you can see this similarity.

Figure 4.45: Photograph showing my setup and location when making Green Jumper and still from Green Jumper animation.

Francis Bacon states that, “moving – even unconsciously moving – the brush one way rather than the other will completely alter the implications of the image” (Sylvester, 1975, p. 121), and these words indicate that Bacon pays careful attention to the way that changes in one area of a painting alter the painting as a whole.

Using a digital medium, it’s more difficult for a 3D user to appreciate how and when localised changes alter the implications of their work. This is not such an issue when you are working in a modular fashion toward a predefined goal. But if you are interested in improvisation and emergent content, you need to appreciate a working iteration in order to respond to it appropriately. A digital medium is inherently flexible but, despite this, I have found that bringing the viewport image closer to a final render is one way of encouraging an appreciation of the work in progress (i.e. an appreciation of the work’s current iteration).

Cup and Specular Whippet

Cup and Specular Whippet are two works which use auxiliary computer graphics in a final work. I use the term auxiliary to refer to imagery that we normally think of as visual feedback: imagery designed to help the animator complete one of the many steps involved in making a finished work. These are graphics that the software user interacts with but that a viewer does not normally see. An example of such imagery is the Default Whippet texture map mentioned in the introduction to this document (Figure 1.11).

Figure 4.46: Cup animation.

Cup (Figure 4.46) shows the animated texture used to colour the cup model alongside the evolving cup mesh.

Figure 4.47: Specular Whippet animation.                               

Specular Whippet (Figure 4.47) consists of a rendered specular pass, which is normally just one of several components combined to create a realistic image.

Whippet in the Sun

Figure 4.48: A compilation of Whippet in the Sun animations.                               

Along with using Blend Shapes and storing colour tweaks, the addition of transparent trails is another Plein air Still life amendment which I made in order to distinguish my work from timelapse modelling videos.

Figure 4.48 shows a compilation of Whippet in the Sun animations with fading trails. Figure 4.49 shows stills from Whippet in the Sun in which the RGB channels fade at slightly different rates, so the fades fall out of unison.

Figure 4.49: Whippet in the Sun stills.

I found that the addition of fading trails produces some interesting rendered images but it’s an effect that’s difficult to work with because the transparency of objects is often not accurately visible in the viewport.
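One plausible implementation of the trails — a sketch under my own assumptions, not the toolset’s actual code — keys each stored duplicate’s transparency channels from opaque to fully transparent, with each RGB channel given a slightly different end frame so the fades drift out of unison:

```python
import maya.cmds as cmds

def fade_trail(duplicate, start_frame, fade_length=48):
    """Fade a stored duplicate out, one transparency channel at a time."""
    shader = cmds.shadingNode('surfaceShader', asShader=True)
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True)
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
    cmds.sets(duplicate, edit=True, forceElement=sg)
    # Staggered offsets push the R, G and B fades out of unison,
    # producing the colour fringing visible in Figure 4.49.
    for channel, offset in (('R', 0), ('G', 6), ('B', 12)):
        attr = shader + '.outTransparency' + channel
        cmds.setKeyframe(attr, t=start_frame, v=0.0)  # fully opaque
        cmds.setKeyframe(attr, t=start_frame + fade_length + offset, v=1.0)
```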

Shelves

Even with the addition of transparent trails, my animations still showed a simple form becoming more complex over time. In order to move away from this visual logic, I decided to add a function that automatically deforms and reduces the model while I work. With this feature, the deformation and reduction of a mesh is based on the position of a bounding box. By moving the box, as well as setting the frequency and degree of deformation, subtly different styles of deformation can be achieved. Although the effect is very subtle, you may notice this automatic deformation in Shelves (Figure 4.50).
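A rough sketch of this feature, with illustrative names (the box here is an ordinary cube that the user positions by hand): vertices that fall inside the box’s world-space bounds are nudged by a random amount, after which the whole mesh is lightly reduced:

```python
import random
import maya.cmds as cmds

def deform_and_reduce(mesh, box='deformBox', amount=0.2, reduce_pct=5):
    """Nudge vertices inside a bounding box, then reduce the mesh."""
    bb = cmds.exactWorldBoundingBox(box)  # xmin, ymin, zmin, xmax, ymax, zmax
    for i in range(cmds.polyEvaluate(mesh, vertex=True)):
        vtx = '{}.vtx[{}]'.format(mesh, i)
        x, y, z = cmds.pointPosition(vtx, world=True)
        if bb[0] <= x <= bb[3] and bb[1] <= y <= bb[4] and bb[2] <= z <= bb[5]:
            cmds.move(random.uniform(-amount, amount),
                      random.uniform(-amount, amount),
                      random.uniform(-amount, amount),
                      vtx, relative=True)
    cmds.polyReduce(mesh, percentage=reduce_pct)
```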

Figure 4.50: Shelves animation.

Eventually I discovered that modelling moving objects is also an excellent way to avoid the predictable progression from a simple model to a more completed form.

Cafe Figures

As well as Whippet in the Sun (discussed above), Cafe Figures is another example of working with moving objects. The animation in Figure 4.51 combines the results of two modelling sessions which took place in a cafe. Seated inside at a cafe table, I observed and modelled vehicles and pedestrians as they passed or waited at the busy intersection outside.

Figure 4.51: Cafe Figures animation.

I enjoyed working in a cafe so much that I repeated this activity several times. After the first session I wrote:

Had a lot of fun. was sorry when my battery ran out. was very absorbed. It was hard to “get anything down”. At first I thought it was a pointless, impossible task. first attempts were conventional Leggo men; I liked it when the man sat down and I moved the verts into position without trying to make sense ... i.e. move a vert to the position of a foot and let the edges be dragged where they will ... try to capture an aspect of the form while letting other aspects “go to the dogs”. (Appendix A, Project Diary: Plein air Still Life, 30/06/14)

After several working sessions I noted that:

As I use these plein air production tools I can begin to see in a different way; to approach the subject matter differently/ to see/attend to/ notice different aspects. (Appendix A, Project Diary: Plein air Still Life, 30/06/14)

Figure 4.52: Cafe Figures Colour Test animation.                               
Figure 4.53: Cafe Figures MoBlur animation.                               

After working on location (in the cafe) with the Plein air Still life Production Tools, I used the Post Production Tools to iterate between the models and cycle through colour tweaks in different ways. I also experimented with other texturing and render options. The movies in Figure 4.52 and Figure 4.53 show some of these experiments.

There were many occasions throughout this research when I found that a still frame from an animated sequence was intriguing, and often these were images, forms or movements that I didn’t design in advance. A recurring theme throughout my diary is that “THE FORMS I DIDNT BUILD ARE THE MOST INTERESTING” (Appendix A, Project Diary: Plein air Still Life, 30/06/14). Of Cafe Figures I wrote “I like/enjoy/am intrigued by the colours and shapes in these images even though they are not what I would ‘choose’” (Appendix A, Project Diary: Plein air Still Life, 30/06/14).

Figure 4.54: Stills from Cafe Figures animation showing some of the glitched figures that appealed to me.

Similarly, of the last frame of Books (Figure 4.55) I wrote, “For some reason I really like this image (again, it’s an image that is born out of the process; not thought up in advance)” (Appendix B, Project Diary, 30/06/14). Of the image in Figure 4.55, right (another frame from Books), I wrote, “I like the way that this image contains ‘observed colours’. It’s not an image I designed but it’s also not arbitrary” (Appendix A, Project Diary: Plein air Still Life, 30/06/14).

Figure 4.55: Stills from Books animation (described above).

Plein air Still life version 2

Toward the end of the research I made major revisions to the Plein air Still life Post Production tools. Like Modelling as Animation, the first version of Plein air Still life iterated through models by setting visibility keyframes, but this method of showing and hiding models made it difficult to vary the speed of model iteration throughout the timeline. Rather than using keyframes, the revised version of Plein air Still life iterates by connecting objects and attributes using a selection of Maya Utility Nodes.
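One such utility node is Maya’s choice node, which passes through whichever of its inputs a single selector attribute points at. Wired as in the sketch below — an illustration of the principle rather than the toolset’s exact network — the stored iterations all feed the choice node, and the selector becomes a single animatable control for cycling through them:

```python
import maya.cmds as cmds

def connect_iterations(iteration_shapes, display_mesh):
    """Drive a display mesh from stored iterations via a choice node."""
    choice = cmds.createNode('choice', name='iterChoice')
    for i, shape in enumerate(iteration_shapes):
        cmds.connectAttr(shape + '.outMesh',
                         '{}.input[{}]'.format(choice, i))
    display_shape = cmds.listRelatives(display_mesh, shapes=True)[0]
    cmds.connectAttr(choice + '.output', display_shape + '.inMesh',
                     force=True)
    # Keying or scrubbing this attribute now cycles the iterations
    # with real-time feedback, no visibility keyframes required.
    return choice + '.selector'
```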

Figure 4.56: Video demonstrating Plein air Still life version 2.

Cycling through the models now gives real-time feedback and the speed of iteration can be easily varied. These changes significantly alter the “feel” of the medium. If we accept that a life modelling session collects data that is raw material for use in post-production, then we could say that when using Plein air Still life V1 this data is not very pliable; it feels a bit like working with sheet metal. Using Plein air Still life V2, by comparison, feels like working with a far more pliable medium, such as clay or plasticine.

Night Scene

Figure 4.57: Night Scene animation excerpt.         

The movie in Figure 4.57, made using the revised version of Plein air Still life, is the result of an evening spent at the RMIT library, looking out the window and modelling what I saw on the street several storeys below. After about 2.5 hours (the extent of my laptop battery life) I had vehicles, buildings and half-a-dozen figures in various stages of completion. The next day I continued working on some of the models, using Plein air Still life tools to store iterations of a mesh as I worked. As described above, I also used Auto Keyframe for this project and found that, because I was animating multiple characters, it worked well in this context.

Night Building and Whippet in Bed

At the outset of this research I had trouble breaking away from conventional approaches to 3D software because my habits of use were deeply entrenched. Working from life was useful because it sometimes prompted me to intuitively depart from habitual practices and conventions. I found that what I saw sometimes influenced the way that I used the tools, prompting minor adjustments to practices and techniques.

Figure 4.58: Night Building animation.                              

For example, sitting in my car on a cold night in a deserted street I modelled the house in front of me (Figure 4.58). Extruding and colouring faces to describe the architecture was relaxing and familiar, but in response to messy areas of vegetation I started pulling faces through each other. This is an example of a life modelling session in which physical things called for tools to be used in an unorthodox way.

Figure 4.59: Whippet in Bed animation.                             

Departure from habitual practices also occurred because I had to work fast. For example, scrambling to describe the moving dog when making Whippet in Bed (Figure 4.59), I collapsed a number of vertices into one and positioned it to coincide with the dog’s nose. In subsequent projects, such as Whippet in the Sun (Figure 4.48, above), I used this technique (i.e. collapsing vertices and then adding detail) many times.

I’ve used Plein air Still life tools to study and model a variety of things in a variety of contexts, and I’m still finding new ways of using these tools because with each working session I approach them in a subtly different way. I am often compelled to explore new features and have found that new features sometimes suggest new contexts for use. For example, it was the addition of the automatic bounding box deformation described above which prompted me to model my messy bookshelves (Figure 4.50). There are also instances where the particularities of a working context have suggested new features: for instance, after experiencing a change in real-world lighting while making Building (Figure 4.26, above), I added colour tweak controls.

Coloured Whippet

Coloured Whippet is my favourite work from the Plein air Still life series. While making this work the automation controls were set to save the model very frequently, i.e. every couple of seconds. The Coloured Whippet animation in Figure 4.60 uses a large number of mesh iterations with no blend shapes interpolating between them.

Figure 4.60: Coloured Whippet animation.

Working from life can be uncomfortable: it involves working fast and the outcome is always uncertain. This is true when working in an unusual location, and it is especially true when studying things that are moving.

Within a simple repertoire of actions, Working from Life allows physical things in the world around me to call for a subtly different style of response.