Sept 19 2022
Colour Time Verse

I’m really excited to share two new Colour Time works I’ve been developing for an upcoming show with Verse in London.

Colour Time Sync is a 20-minute, live-generated colour sequence that uses clock time to synchronize the work for all viewers, allowing audiences in the gallery and online to inhabit the same temporal frame.
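The synchronization idea fits in a few lines. This is a minimal sketch, not the project's actual code; the function name and the use of Unix time are my assumptions:

```python
import time

LOOP_SECONDS = 20 * 60  # length of the Colour Time Sync sequence: 20 minutes

def loop_position(now=None):
    """Return the position (0.0-1.0) in the shared colour sequence.

    Because the position derives from clock time alone, every viewer,
    in the gallery or online, sees the same frame at the same moment.
    """
    if now is None:
        now = time.time()
    return (now % LOOP_SECONDS) / LOOP_SECONDS
```

Any device with an accurate clock lands on the same point in the sequence without any communication between viewers.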

Colour Time Generative derives from the colour data of Colour Time Sync. Instead of animating the data through time, Generative draws it out as a gradient. There is no fixed size for this generative collection; instead, as pieces are minted, the gradient expands. Each buyer owns a section of a continuum that keeps shifting until the sale closes.

At the core of this series is a fascination with the passage of time: an attempt to replicate its imperceptible flow. Just as we see the effects of time in dust gathering in a corner, change in Colour Time becomes evident through optical effects: complementary-colour after-images creating colours that are not there, blurred edges of form and colour.

Both works play with time. In Sync, the binding to clock time turns the colour series into a sort of abstract timepiece. In Generative, the indicators of this timepiece are flattened into a two-dimensional representation, a dense gradient where time maps from left to right.

This gradient is then split using a novel generative sale mechanism: each token represents a slice of the whole gradient, and with each new sale the gradient expands. By stretching out the decisive moment of a generative sale, the mechanism makes both the buyer and the market co-creators of the works.
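The expanding-slice mechanism can be modelled in a few lines. This is a simplified illustration of the idea described above, not the sale contract; the function name and equal-width slices are my assumptions:

```python
def token_slice(token_id, minted_so_far):
    """Fractional span [start, end) of the gradient held by a token.

    As new tokens are minted the denominator grows, so every existing
    slice narrows and shifts: the continuum keeps moving until the
    sale closes.
    """
    start = token_id / minted_so_far
    end = (token_id + 1) / minted_so_far
    return start, end
```

For example, the first token holds a quarter of the gradient while four tokens exist, but only a fifth once a fifth token is minted.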

I’ve made a video walkthrough of the two projects; jump to 1m40s for the Generative intro.

Below are some outputs from the Generative system, which will be on display in the gallery. The numbers in the titles indicate the token ID and a hypothetical collection size.

In a standard generative release, the decisive moment occurs during minting, a quick roll of the dice determining the output. In Colour Time Generative the seed of randomness is the market as a collective force, turning not just the minter but also the market into co-creators of the work.

The output of a generative work derives value from its relation to the larger generative series; rarity and aesthetics are judged in relation to the collection as a whole. This generative series embodies that relationship, the pieces together forming a continuous band of colour.

The works will go on sale Sept 28th at 6:30pm GMT / 1:30pm EST, and the sale will remain open for four days, until Oct 1. Colour Time Sync will be sold by auction during that time.

More on the show:  
Apr 5 2022
Colour Time Development

Colour Time is a series of twelve on-chain animated SVGs created during a motorcycle trip from Montreal to Los Angeles in November of 2021.

This is the third iteration of Colour Time, which began as a website in the net-art tradition. The second edition of the work was a set of three video-based NFTs created in March of 2021 and minted on OpenSea.

The origins of the project lie in earlier explorations of colour: I had a collection of digital colour selections left over from the development of Colour Calendar, which I put into a video editing program and set to music.

As I played around with the idea of a colour sequence animation, I became curious about slowness, perhaps influenced by my friend Timothy Thomasson’s slow CGI video work.

I set out to make colours shift as slowly as the movement of the sun; movement perceptible over time but difficult to observe as it happens.

As with all simple goals, this was more difficult to achieve than expected; I had various pieces of desktop software that could create interesting effects on my computer, but when I tried to record these animations I ran into issues. Video compression would create jumps between colours, breaking the effect by giving the viewer a perceptible moment of change to hold on to.

I ended up creating a web-based version of my desktop software using WebGL in order to achieve adequate smoothness. To generate the colour sequence I built a timeline editor tool, which I used to create the 20-minute sequence that lives on

What I found in creating this sequence was that the slowness of the transition creates optical effects in the eye; colours appear that are not on screen, and hard edges blur. The change is slow enough that the eye has time to become saturated with colour, generating complementary-colour afterimages of the on-screen colour. The sequences were designed with these afterimages in mind: blues slowly shifting to meet the hallucinated red in the eye.

Presenting this experience online creates a challenge for the viewer: in the browser, where we are used to scrolling through endless content, the viewer must release control and let the work unfold at its own pace.

These new on-chain Colour Time works carry this gesture of slowness into the NFT space, resisting quick assessment. Where Proof of Work makes value easily legible, each Colour Time requires durational viewing for its qualities to be perceived.

Each animation in the series consists of two planes of colour, each shifting between two points of colour. Each shift happens over its own timeframe, e.g. 20 seconds for the background, 17 for the foreground. Each plane of colour loops on its own timeframe, allowing new hues to meet and new rhythms to build. These phasing effects add a generative element to each piece, building complexity over time as the loops move in and out of sync.
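The phasing behaviour is easy to model. A small sketch, taking the 20 s and 17 s periods from the example above (everything else here is assumed, not from the project's code):

```python
import math

def plane_phase(t, period):
    """Phase (0.0-1.0) of a colour plane that loops every `period` seconds."""
    return (t % period) / period

# With a 20 s background loop and a 17 s foreground loop, the combined
# pattern only repeats at the least common multiple of the two periods,
# so the pairing of hues keeps changing for 340 seconds before recurring.
BACKGROUND, FOREGROUND = 20, 17
combined_cycle = math.lcm(BACKGROUND, FOREGROUND)  # 340 seconds
```

Choosing coprime periods maximizes the combined cycle, which is what lets such a simple system keep producing new colour pairings.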

Homage to the Square, 1969, Josef Albers

The works reference the colour work of Josef Albers and the skyspace installations of James Turrell, studies of colour and perception within the frame of the on-chain NFT.

These twelve pieces represent a selection of those produced. After a day of riding, I would produce a series of studies, letting the colours and rhythms of the day work their way into the animations.

As in Proof of Work, there is a hypothesis that intangible experience can be transmitted visually. I see Colour Time as the completion of a year of exploration into the materiality of the NFT with an offer of pure experience.  

Sale opens Thursday April 14 at 10am PDT, 1pm EDT, 5pm GMT. Tokens are priced at 1 ETH and will be sold in a simple sale. View works at

Oct 3 2021
Token, Hash

To buy an NFT is to buy a number in a distributed database. Owning a CryptoPunk is paying to put your wallet address beside a given token ID.

The above statement may be factually correct, but it does not capture the experience of owning a CryptoPunk. In my experience as an artist, NFTs are not just numbers in databases; they are immaterial symbols around which cultural, social and financial value transacts. The feeling of selling an NFT is not the feeling of putting a name in a database, but an experience of joy, validation, gratitude, and possibility.

The desire for a CryptoPunk relates to the cultural position they hold; owning one signals an alignment with the culture of crypto and a degree of wealth, and offers an opportunity for self-expression. But at a practical level, the anchor of this social and cultural utility is still a wallet address beside a number in a distributed database.

CryptoPunks attempt to solidify this tenuous link by embedding an encoded image of all the punks in their contract. This image circulates freely, but the authenticity of any given image can be verified by running the image through a SHA256 cryptographic hash function and comparing the output to the hash encoded in the contract.
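The verification step is simple to sketch. A minimal example (the function name and hex encoding are my choices, not the contract's interface):

```python
import hashlib

def verify_punks_image(image_bytes, onchain_hash_hex):
    """Check a circulating copy of the image against the hash in the contract.

    Any single-byte change to the image produces a completely different
    SHA-256 digest, so a match verifies authenticity.
    """
    return hashlib.sha256(image_bytes).hexdigest() == onchain_hash_hex
```

Anyone holding a copy of the composite image can run this check against the hash published in the contract, without trusting the source they downloaded from.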

Artists such as Deafbeef have taken further steps to strengthen the link between token ID and artwork. Deafbeef encodes the parameters for each audio-visual artwork on-chain, and embeds the scripts used to generate the work as input data on each transaction. These scripts provide the collector with all the code and parameters necessary to re-create the artwork, should the original render linked to the token be lost.

Projects such as Loot take on-chain one step further, generating the output images on-chain as SVGs. This removes the need for a collector to run parameters through scripts, but introduces new challenges.

Randomness is key to generative work, but standard random functions return different values each time they are called. If typical random functions were used in on-chain generative projects, the image for a given token ID would change each time it was requested.

To solve this issue, on-chain artists generate their random values deterministically. Deterministic number generation relies on the same kind of cryptographic hash function that CryptoPunks uses to encode its reference image, the kind of function that secures the entire Ethereum network. A hash function always returns a consistent output for a given input, but importantly, any small change in the input results in a wildly different output.

In the case of Loot, the random value that determines which asset a token receives is derived by feeding a hash function a piece of text such as ‘WEAPON’ in combination with the token ID. The combined value, for example ‘WEAPON56’, is fed into the function, which returns a value. The hash function will return the same value every time ‘WEAPON56’ is input, but will give an entirely different value if ‘WEAPON57’ is entered.

To become useful as an index, the hash value is divided by the number of items in the list, and the remainder is used as the index to retrieve that token ID’s weapon. This approach can be applied to different lists of items, and if the lists are all of varying lengths, a single random hash value will return a different item from each.
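The whole pipeline (concatenate, hash, take the remainder) fits in a few lines. A sketch with one loud caveat: Loot's contract actually uses Ethereum's keccak256 over abi-encoded values; SHA-256 stands in here only to show the mechanism, and the item list is abbreviated:

```python
import hashlib

WEAPONS = ["Warhammer", "Quarterstaff", "Maul", "Mace", "Club"]  # abbreviated list

def deterministic_pick(prefix, token_id, items):
    """Pick an item for a token ID, returning the same item on every call."""
    seed = f"{prefix}{token_id}".encode()            # e.g. b"WEAPON56"
    value = int.from_bytes(hashlib.sha256(seed).digest(), "big")
    return items[value % len(items)]                 # remainder indexes the list
```

Calling `deterministic_pick("WEAPON", 56, WEAPONS)` yields the same weapon forever, while token 57 lands somewhere unrelated in the list, which is exactly the stable-but-random behaviour on-chain projects need.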

Autoglyphs, Loot, Art Blocks et al. each feed different values into their deterministic generation functions, but the central concept remains the same: use a stable input to return a stable but random value.

Images for on-chain NFT projects are drawn anew each time the image is requested, with all randomness derived from the token ID and its hashed value.

In Token Hash, these two values which constitute the generative NFT are laid bare. Stripped of visuals and narrative, the tokens display the scaffolding from which generative images are constructed. The numbers convey the core characteristics of the on-chain generative NFT: rarity, scarcity, symmetry, beauty.

By reducing the on-chain generative NFT to its core elements, Token Hash enables us to look beyond the visual to examine the social and cultural mechanics these values generate.

Token Hash public sale opens Thursday Oct 7 at 10am EST.
1000 tokens will be available for sequential minting, at a price of 0.02 ETH each.

Sept 2021   
Proof of Work Origins

“An interface is not just a portal for access, but a designed extension of the body that then designs the body in reverse.” Rachel Ossip, N+1, 2018

Much of my work is concerned with the relationship between physical and digital worlds; how software reaches into and manipulates the world, and how expression or gesture is modulated as it enters the digital.

In (GUA), a group of participants is led to gesture and move via web-based interfaces on their mobile phones. The project attempts to make evident the power dynamics between system creators and system users by providing users with interactions so limited in scope that they require specific gestures to complete.

Proof of Work came from the idea of turning these systems of software guidance on myself. The first piece of software I created was a manual image generation program which required the entry of 10,000 values to fill a 100x100 pixel image. Scaled down from the broad gestures of GUA, this software induced the small-scale repeated gesture of a keypress.

My first explorations with this application tested how well I could generate random values. Randomness is famously hard to generate, even for a computer. Rafael Lozano-Hemmer has a great artwork on this topic titled Method Random, which visualizes the patterns that occur as randomly generated sequences scale.

Producing 10,000 random values took around 30 minutes. I posted the manually generated image alongside a randomly generated reference and asked viewers to guess which was which.


Most responders thought that the computer-produced image was mine, though those with more computer experience guessed correctly. After this exploration, I was curious to see how different people might generate different images, and asked some friends to produce their own ‘portraits’ through the software.

It was only after these explorations that I entered the world of NFTs. I felt a strong drive to participate, but the raw speculative nature of the market felt off-putting. Not wanting to sell my friends’ productions, I generated a series of five 100x100 random images over one week.

The images show an interesting progression: pattern uniformity starts strong but dips on Wednesday, with consistency returning Thursday and Friday.

Around this time, Beeple’s ‘The First 5000 Days’ sold for a record-breaking $69,346,250. Buyer MetaKovan explained the rationale for his purchase as follows:

“When you think of high-valued NFTs, this one is going to be pretty hard to beat. And here’s why — it represents 13 years of everyday work. Techniques are replicable and skill is surpassable, but the only thing you can’t hack digitally is time.” — MetaKovan, Christie’s press release

This assumption that one metric can be used to determine the value of an artwork seemed a perfect encapsulation of the speculative tendencies of the market, and so I set out to challenge this assumption by embodying it directly.

Adopting Beeple’s production pace, I generated one image per day. To provide varying levels of effort for the market to speculate on, I began each series with a pixel canvas of 1x1 and doubled it each day. A series would end when I could no longer complete one image in a day, my physiological limitations ensuring the scarcity of the series.
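The doubling schedule makes the escalation of effort easy to tabulate. A small sketch (the function names are mine, and the day-eight figure below is arithmetic, not a fact about where the series actually stopped):

```python
def canvas_side(day):
    """Canvas side length on a given day: 1x1 on day one, doubling daily."""
    return 2 ** (day - 1)

def values_required(day):
    """One manually entered value per pixel, so effort quadruples each day."""
    return canvas_side(day) ** 2
```

Day one asks for a single value; by day eight the 128x128 canvas demands 16,384 entries, which is how the series runs into a physiological limit within a couple of weeks.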

While producing these images I was reminded of keystroke dynamics, a field of behavioural biometrics that studies typing rhythm. Researchers have found that the rhythm and pattern of a person’s keystrokes are unique to each user, with the potential to replace passwords as an authentication method.

“[...] typing is a motor programmed skill and [...] movements are organized prior to their actual execution. Therefore, a person’s typing pattern is a behavioral characteristic that develops over a period of time and therefore cannot be shared, lost or forgotten.” Banerjee and Woodard, Journal of Pattern Recognition Research

If we interpret the patterns that appear in the image as a visual representation of this gestural biometric, the images transform from records of effort to minimum viable artworks, the hand of the artist made visible in the digital image.

Sept 9 2021
Took most of August off, biked from Montreal to New York City to visit friends. The trip was a real exercise in listening to small nudges, following through on little insights. Ended up in Provincetown for a week and a half, meeting people and hanging out on the beach. 

Back in Montreal, happy to be home and back at work. Applied to Mars College this morning, excited to see what people I might meet there. Have always been interested in desert living. If they’ll have me, I’d like to get a motorcycle and drive down before the winter fully takes.

There has been new interest in Proof of Work. Blue Duration is basically sold out - three of the last four I minted to my wallet are sold, and I’m holding on to the 1x1 as a sort of artist’s proof. I feel that those are the most emblematic of the project, a single gesture.

I’ve been developing Red Pressure, and am reminded of what artistic work is; a slow refinement of an idea, a pushing away of the fear that the idea is not valid or interesting, a heeding of the desire for refinement.

Red Pressure maps the pressure of a touchscreen tap to intensity of colour. Originally I wanted to use the trackpad on my MacBook, for visual continuity in the documentation. I wrote an application that received trackpad pressure information, but in testing the trackpad revealed itself not to deliver consistent values.

I experimented with force sensing resistors, but these also had their issues, and aesthetically they departed from the visual narrative of human / computer interfaces.

I was looking around to see if I could calibrate the trackpad and came across a website which uses a force-touch capable iPhone to give quite accurate weight estimations for capacitive objects.

I found an OSC controller app with 3D Touch capabilities (Syntien). It’s fully editable and sends granular touch pressure data. There is a slight delay in receiving values with this approach, but the pressure readings are much more reliable.

Colour always takes longer than I think. In Blue Duration I used a single colour value and multiplied it by the elapsed time between keypresses, which resulted in a light/dark modulation of the blue.

Modulating red in this way results in muddy shades which I didn’t like, so instead I’m modulating between two shades of red, a brighter, pinkish hue and a deeper red.
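The two modulation strategies can be contrasted in a few lines. In this sketch the RGB values are invented for illustration; only the structure (scaling one blue versus interpolating between two reds) comes from the text:

```python
def blue_duration(elapsed, max_elapsed, base=(30, 60, 200)):
    """Blue Duration's approach: scale a single blue by elapsed time,
    producing a light/dark modulation of the one hue."""
    t = min(elapsed / max_elapsed, 1.0)
    return tuple(round(c * t) for c in base)

def red_pressure(pressure, bright=(235, 90, 120), deep=(140, 20, 30)):
    """Red Pressure's approach: interpolate between a brighter, pinkish
    red and a deeper red by tap pressure, avoiding muddy darkened shades."""
    t = min(max(pressure, 0.0), 1.0)
    return tuple(round(a + (b - a) * t) for a, b in zip(bright, deep))
```

Scaling toward black is what muddies the reds; interpolating between two chosen endpoints keeps every intermediate shade within a deliberate range.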

The variation between taps is smaller than it was in Blue Duration, and I like how the differences between pixels are almost imperceptible. There seems to be less of a banding effect in the early tests I’ve done, and a more scattered visual effect.

I’m aiming to start production of Red Pressure on Monday Sept 13, and have them up for sale by Sept 24, depending on how long the series runs for.