Sept 19 2022
Colour Time Verse

I’m really excited to share two new Colour Time works I’ve been developing for an upcoming show with Verse in London.

Colour Time Sync is a 20-minute live-generated colour sequence that uses clock time to synchronize the work for all viewers, allowing those in the gallery and online to inhabit the same temporal frame.
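The synchronization can be sketched in a few lines. This is a guess at the mechanism rather than the published implementation; the function name and structure are mine, only the 20-minute duration comes from the text above.

```python
import time

LOOP_SECONDS = 20 * 60  # the 20-minute sequence length

def loop_position(now=None):
    """Current position in the loop as a fraction 0..1.

    Because the position is derived from shared wall-clock time rather
    than from when a page was loaded, every viewer, in the gallery or
    online, sees the same frame at the same moment."""
    if now is None:
        now = time.time()
    return (now % LOOP_SECONDS) / LOOP_SECONDS
```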

Colour Time Generative derives from the colour data of Colour Time Sync. Instead of animating the data through time, Generative lays it out as a gradient. There is no fixed size for this generative collection; as pieces are minted, the gradient expands. Each buyer owns a section of a continuum that keeps shifting until the sale closes.

At the core of this series of work is a fascination with the passage of time, an attempt to render its imperceptible flow. Just as we see the effects of time in dust gathering in a corner, change in Colour Time becomes evident through optical effects: complementary colour after-images creating colours that are not there, blurred edges of form and colour.

Both works play with time. In Sync, the binding to clock time turns the colour series into a sort of abstract timepiece. In Generative, the indicators of this timepiece are flattened into a two-dimensional representation, a dense gradient in which time maps from left to right.

This gradient is then split using a novel generative sale mechanism: each token represents a slice of the whole gradient, and with each new sale the gradient expands. By stretching out the decisive moment of a generative sale, the mechanism makes both the buyer and the market co-creators of the works.
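A minimal sketch of how the slicing might work, assuming each token owns an equal fraction of the gradient; the actual contract logic is not published, and the names here are mine:

```python
def token_slice(token_id, minted_count):
    """Return the (start, end) fraction of the full gradient owned by
    a token, given how many tokens have been minted so far.

    Every new mint widens the collection, so each existing token's
    slice narrows and shifts until the sale closes."""
    if not 0 <= token_id < minted_count:
        raise ValueError("token_id out of range")
    width = 1.0 / minted_count
    return (token_id * width, (token_id + 1) * width)
```

With four tokens minted, token 0 owns the first quarter of the gradient; mint a fifth and it owns only the first fifth.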

I’ve made a video walkthrough of the two projects; jump to 1m40s for the Generative introduction.

Below are some outputs from the Generative system, which will be on display in the gallery. The numbers in the titles indicate the token ID and hypothetical collection size.

In a standard generative release, the decisive moment occurs during minting: a quick roll of the dice determines the output. In Colour Time Generative, the seed of randomness is the market as a collective force, turning not just the minter but also the market into co-creators of the work.

The output of a generative work derives value from its relation to the larger generative series - rarity and aesthetics are judged in relation to the collection as a whole. This generative series embodies this relationship, with each piece together forming a continuous band of colour. 

Project page:
Apr 5 2022
Colour Time Development

Colour Time is a series of twelve on-chain animated SVGs created during a motorcycle trip from Montreal to Los Angeles in November of 2021.

The origins of this series lie in earlier explorations of colour, notably Colour Calendar, which explores the effect of relative contrast on colour perception and our emotional relationship to colour, and a second work exploring complementary colour after-images, presenting a slowly shifting pane of colour. Upon viewing, the eye becomes saturated and begins to generate a complementary colour after-image, which is then met or challenged by the on-screen colour.

These on-chain NFTs are the third iteration of this exploration, and combine the effects of colour relativity and complementary colour after-images. Named for the locations in which they were created, they attempt to express the impossibility of fully capturing and relaying an experience as rich as a motorcycle road trip, instead expressing a tiny slice of colour and rhythm.

Each work consists of two planes of colour, each shifting between two colour points. Each plane shifts at a different speed (e.g. a 20-second loop for the background, a 13-second loop for the foreground), which allows complexity to build, different hues coming into contact as the planes phase in and out of sync.
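The phasing can be sketched as follows. The easing and palette of the actual works are not published, so a plain triangle wave and linear blend stand in here:

```python
def shuttle(t, period):
    """0 -> 1 -> 0 over one period: the plane drifts toward its second
    colour point and back to the first."""
    f = (t % period) / period
    return 1 - abs(2 * f - 1)

def lerp(a, b, f):
    """Linear blend between two colour channel values."""
    return a + (b - a) * f

# With a 20 s background loop and a 13 s foreground loop, the planes
# only realign every lcm(20, 13) = 260 s, so new hue pairings keep
# surfacing as the two planes phase in and out of sync.
```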

For the full experience, these are best viewed in fullscreen mode, which can be accessed from the project website. 

Sale opens Thursday April 14 at 10am PDT, 1pm EDT, 5pm GMT. Tokens are priced at 1 ETH.

Oct 3 2021
Token, Hash

To buy an NFT is to buy a number in a distributed database. Owning a CryptoPunk is paying to put your wallet address beside a given token ID.

The above statement may be factually correct, but it does not capture the experience of owning a CryptoPunk. NFTs are not just numbers in databases; they are immaterial symbols around which cultural, social and financial value transacts.

Desire for a CryptoPunk relates to the cultural position they hold; owning one signals an alignment with the culture of crypto, a degree of wealth, and provides an opportunity for self expression. But at a practical level, the anchor for this desire is a wallet address beside a number in a database.

CryptoPunks attempts to ground this tenuous link between image and ID by embedding an encoded image of all the punks in its contract. This image circulates freely, but the authenticity of any copy can be verified by running it through the SHA-256 cryptographic hash function and comparing the output to the hash encoded in the contract.
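The verification step is simple enough to sketch. The function name is mine, and the on-chain hash is passed in as a parameter rather than hard-coded, since the actual value lives in the contract:

```python
import hashlib

def verify_image(image_bytes, onchain_hash_hex):
    """Check a circulating copy of the composite image against the
    hash stored in the contract.

    Any single-byte difference in the file produces a completely
    different digest, so a match is strong evidence of authenticity."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest == onchain_hash_hex.lower().removeprefix("0x")
```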

Artists such as Deafbeef have taken further steps to strengthen the link between token ID and artwork. Deafbeef encodes the parameters for each audio-visual artwork on-chain, and embeds the scripts used to generate the work in each transaction. These scripts provide the collector with the code and parameters to re-create the artwork, should the original file be lost.

Projects such as Loot take the on-chain approach one step further, generating the artworks on-chain as SVGs. This removes the need for a collector to run parameters through scripts, but introduces new challenges.

Randomness is key to generative work, but typical random functions return different values each time they are called. If these functions were used in on-chain generative projects, the image for a token ID would change each time it was viewed.

To solve this issue, on-chain artists generate their random values deterministically. Deterministic number generation relies on the same kind of cryptographic hash function that CryptoPunks uses to encode its reference image. A hash function always returns the same output for a given input, but, importantly, any small change in the input results in a wildly different output.
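Both properties are easy to demonstrate; SHA-256 is used here because it is the hash function named above, though on-chain projects typically use keccak256:

```python
import hashlib

def h(s):
    """Hex digest of a string."""
    return hashlib.sha256(s.encode()).hexdigest()

# Deterministic: the same input always yields the same digest.
assert h("WEAPON56") == h("WEAPON56")
# Avalanche effect: a one-character change gives an unrelated digest.
assert h("WEAPON56") != h("WEAPON57")
```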

In the case of Loot, the random value that determines which asset a token receives is derived by feeding a hash function a piece of text such as ‘WEAPON’ combined with the token ID. The combined value, for example ‘WEAPON56’, is fed into the function, which returns a value. The hash function will return the same value every time WEAPON56 is input, but will give a randomly different value if WEAPON57 is entered.

To become useful as an index, the hash value is divided by the number of items in a list, and the remainder is used as the index to retrieve the token ID’s weapon. The same approach can be applied to other lists of items, and as long as the lists are of varying lengths, a single random hash value will select a different position in each.
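The whole mechanism fits in a few lines. Loot does this on-chain with keccak256; SHA-256 stands in here so the sketch stays dependency-free, and the item list is abbreviated and illustrative:

```python
import hashlib

# Abbreviated, illustrative item list; Loot's real list is longer.
WEAPONS = ["Warhammer", "Quarterstaff", "Maul", "Mace", "Club", "Katana"]

def pluck(token_id, key, items):
    """Derive a stable 'random' item for a token: hash key + tokenId,
    interpret the digest as an integer, then take the remainder of
    division by the list length as an index."""
    digest = hashlib.sha256(f"{key}{token_id}".encode()).digest()
    rand = int.from_bytes(digest, "big")
    return items[rand % len(items)]
```

Because the digest is deterministic, `pluck(56, "WEAPON", WEAPONS)` always lands on the same weapon, while token 57 rolls independently.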

Autoglyphs, Loot, Art Blocks et al. each feed different values into their deterministic generation functions, but the central concept remains the same: use a stable input to return a stable, random value.

Images for on-chain NFT projects are drawn anew each time the image is requested, with all randomness derived from the token ID and its hashed value.

In Token Hash, these two values which constitute the generative NFT are laid bare. Stripped of visuals and narrative, the tokens display the scaffolding from which generative images are constructed. The numbers contain, in minimal form, the core characteristics of the on-chain generative NFT: rarity, scarcity, symmetry, beauty.

By reducing the on-chain generative NFT to its core elements, Token Hash enables us to look beyond the visual to examine the social and cultural mechanics these values generate.

Token Hash public sale opens Thursday Oct 7 at 10 am EST.
1000 tokens will be available for sequential minting, at a price of 0.02 ETH each.

Sept 2021   
Proof of Work Origins

“An interface is not just a portal for access, but a designed extension of the body that then designs the body in reverse.” — Rachel Ossip, N+1, 2018

Much of my work is concerned with the relationship between physical and digital worlds; how software reaches into and manipulates the world, and how expression or gesture is modulated as it enters the digital.

In (GUA), a group of participants are led to gesture and move via web-based interfaces on their mobile phones. The project attempts to make evident the power dynamics between system creators and system users by providing users with interactions so limited in scope that they require specific gestures to complete.

Proof of Work came from the idea of turning these systems of software guidance on myself. The first piece of software I created was a manual image generation program which required the entry of 10,000 values to fill a 100x100 pixel image. Scaled down from the broad gestures of GUA, this software induced the small-scale repeated gesture of a keypress.
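The core of such a program is tiny. The actual application isn’t published, so this is a minimal stand-in showing only the grid-filling logic:

```python
def build_image(values, size=100):
    """Arrange manually entered values into a size x size grid.

    Filling the grid takes size * size entries: 10,000 keypresses
    for the 100x100 canvas."""
    if len(values) != size * size:
        raise ValueError(f"expected {size * size} values")
    return [values[row * size:(row + 1) * size] for row in range(size)]
```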

My first explorations with this application tested how well I could generate random values. Randomness is famously hard to generate, even for a computer. Rafael Lozano-Hemmer has a great artwork on this topic titled Method Random, which visualizes the patterns that occur as randomly generated sequences scale.

Producing 10,000 random values took around 30 minutes. I posted the manually generated image alongside a randomly generated reference and asked viewers to guess which was which.


Most respondents thought that the computer-produced image was my production, though those with more computer experience guessed correctly. After this exploration, I was curious to see how different people might generate different images, and asked some friends to produce their own ‘portraits’ through the software.

It was only after these explorations that I entered the world of NFTs. I felt a strong drive to participate, but the raw speculative nature of the market felt off-putting. Not wanting to sell my friends’ productions, I generated a series of five 100x100 random images over one week.

The images show an interesting progression; pattern uniformity starting strong but dipping Wednesday, consistency returning Thursday and Friday.

Around this time, Beeple’s ‘The First 5000 Days’ sold for a record-breaking $69,346,250. Buyer Metakovan explained his rationale for the purchase as such:  

“When you think of high-valued NFTs, this one is going to be pretty hard to beat. And here’s why — it represents 13 years of everyday work. Techniques are replicable and skill is surpassable, but the only thing you can’t hack digitally is time.” — MetaKovan, Christie’s press release

This assumption that one metric can be used to determine the value of an artwork seemed a perfect encapsulation of the speculative tendencies of the market, and so I set out to challenge this assumption by embodying it directly.

Adopting Beeple’s production pace, I generated one image per day. To provide varying levels of effort for the market to speculate on, I began each series with a pixel canvas of 1x1, and doubled it each day. A series would end when I could no longer complete one image in a day, my physiological limitations ensuring the scarcity of the series.
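The arithmetic behind the built-in endpoint is worth making explicit. Assuming “doubled” means doubling the side length (my reading, not stated in the text), the number of values per image quadruples daily:

```python
def values_needed(day):
    """Pixel values required on a given day (day 0 = the 1x1 canvas),
    assuming the canvas side doubles daily: (2**day)**2 pixels."""
    side = 2 ** day
    return side * side

# At the roughly 30-minutes-per-10,000-values pace of the earlier
# random-image test, day 10 (1,048,576 values) would take over
# 50 hours: past the point where one image fits in a day.
```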

While producing these images I was reminded of keystroke dynamics, a field of behavioural biometrics. Researchers have found that the rhythm and pattern of keystrokes are unique to each user, with the potential to replace passwords as an authentication method.

“[...] typing is a motor programmed skill and [...] movements are organized prior to their actual execution. Therefore, a person’s typing pattern is a behavioral characteristic that develops over a period of time and therefore cannot be shared, lost or forgotten.” — Banerjee and Woodard, Journal of Pattern Recognition Research

If we interpret the patterns that appear in the image as a visual representation of this gestural biometric, the images transform from records of effort to minimum viable artworks, the hand of the artist made visible in the digital image.

Sept 9 2021
Took most of August off, biked from Montreal to New York City to visit friends. The trip was a real exercise in listening to small nudges, following through on little insights. Ended up in Provincetown for a week and a half, meeting people and hanging out on the beach. 

Back in Montreal, happy to be home and back at work. Applied to Mars College this morning, excited to see what people I might meet there. Have always been interested in desert living. If they’ll have me, I’d like to get a motorcycle and drive down before the winter fully takes.

There has been new interest in Proof of Work. Blue Duration is basically sold out - three of the last four I minted to my wallet are sold, and I’m holding on to the 1x1 as a sort of artist’s proof. I feel that those are the most emblematic of the project, a single gesture.

I’ve been developing Red Pressure, and am reminded of what artistic work is; a slow refinement of an idea, a pushing away of the fear that the idea is not valid or interesting, a heeding of the desire for refinement.

Red Pressure maps the pressure of a touchscreen tap to intensity of colour. Originally I wanted to use the trackpad on my MacBook, for visual continuity in the documentation. I wrote an application that received trackpad pressure information, but in testing, the trackpad revealed itself not to deliver very consistent values.

I experimented with force sensing resistors, but these also had their issues, and aesthetically they departed from the visual narrative of human / computer interfaces.

I was looking around to see if I could calibrate the trackpad and came across a website which uses a force-touch capable iPhone to give quite accurate weight estimations for capacitive objects.

I found an OSC controller app with 3D Touch capabilities (Syntien). It is fully editable and sends granular touch pressure data. There is a slight delay in receiving values with this approach, but the pressure readings are much more reliable.

Colour always takes longer than I think. In Blue Duration I was using a single colour value and multiplying it by the elapsed time between keypresses, which resulted in a light/dark modulation of the blue.

Modulating red in this way results in muddy shades which I didn’t like, and so instead I’m modulating between two shades of red, a brighter/pinkish hue and a deeper red.
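The two-shade approach can be sketched as a simple blend. The actual hues aren’t published, so the RGB values below are illustrative stand-ins:

```python
def mix(c1, c2, f):
    """Blend two RGB colours; f=0 gives c1, f=1 gives c2."""
    return tuple(round(a + (b - a) * f) for a, b in zip(c1, c2))

# Illustrative stand-ins for the two reds, not the actual palette.
BRIGHT_PINK_RED = (230, 80, 110)
DEEP_RED = (140, 20, 30)

def pressure_to_colour(pressure, max_pressure=1.0):
    """Map normalized tap pressure to a point between the two shades,
    rather than scaling a single value toward black (which is what
    produced the muddy results)."""
    f = max(0.0, min(1.0, pressure / max_pressure))
    return mix(BRIGHT_PINK_RED, DEEP_RED, f)
```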

The variation between taps is less than it was in Blue Duration, and I like how the differences in the pixels are almost imperceptible. There seems to be less of a banding effect in the early tests I’ve done, and a more scattered visual effect.

I’m aiming to start production of Red Pressure on Monday Sept 13, and have them up for sale by Sept 24, depending on how long the series runs for.