Token, Hash
Oct 3, 2021 

To buy an NFT is to buy a number in a distributed database. Owning a CryptoPunk is paying to put your wallet address beside a given token ID.

The above statement may be factually correct, but it does not capture the experience of owning a CryptoPunk. In my experience as an artist, NFTs are not just numbers in databases; they are immaterial symbols around which cultural, social and financial value transacts. The feeling of selling an NFT is not the feeling of putting a name in a database, but an experience of joy, validation, gratitude, and possibility. 

The desire for a CryptoPunk relates to the cultural position they hold; owning one signals an alignment with the culture of crypto and a degree of wealth, and offers an opportunity for self-expression. But at a practical level, the anchor of this social and cultural utility is still a wallet address beside a number in a distributed database.



CryptoPunks attempt to solidify this tenuous link by embedding a hash of the composite punks image in their contract. The image itself circulates freely, but the authenticity of any copy can be verified by running it through a SHA-256 cryptographic hash function and comparing the output to the hash encoded in the contract.
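A minimal sketch of that verification step in Python, with the image path and the contract’s stored hash left as placeholders:

    import hashlib

    # Placeholders: a local copy of the composite punks image, and the hash
    # value stored in the CryptoPunks contract.
    IMAGE_PATH = "punks-composite.png"
    CONTRACT_IMAGE_HASH = "<hash copied from the contract>"

    def verify_image(path, expected_hash):
        # Hash the local file and compare the digest to the on-chain value.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest == expected_hash

    print("authentic" if verify_image(IMAGE_PATH, CONTRACT_IMAGE_HASH) else "not authentic")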

Artists such as Deafbeef have taken further steps to strengthen the link between token ID and artwork. Deafbeef encodes the parameters for each audio-visual artwork on-chain, and embeds the scripts used to generate the work as input data on each transaction. This provides the collector with all the information necessary to re-create the artwork, should the original render linked in the token be lost. 

Projects such as Loot take the on-chain approach one step further, generating the output images on-chain as SVGs. This removes the need for a collector to run parameters through scripts, but introduces new challenges. Randomness is key to generative work, but standard random functions return a different value each time they are called. If typical random functions were used in on-chain generative projects, the image for a given token ID would change dramatically each time it was requested.

To solve this issue, on-chain artists generate their random values deterministically. Deterministic number generation relies on cryptographic hash functions of the same kind that CryptoPunks uses to encode its reference image, and indeed of the kind that secures the entire Ethereum network. A hash function always returns a consistent output for a given input, but importantly, any small change in the input will result in a vastly different output.

In the case of Loot, the random value that determines which asset a token is given is derived by feeding a hash function a piece of text such as ‘WEAPON’ with the token ID appended. This combined value, for example ‘WEAPON56’, is fed into the function, which returns a value. The hash function will return the same value every time WEAPON56 is input, but will give a drastically different value if WEAPON57 is entered.

To become useful as an index, the hash value is divided by the number of items in a predetermined list of weapons, and the remainder is used as the index to retrieve that token ID’s weapon. The same approach can be applied to different lists of items; because each list uses its own text prefix (and often a different length), the same token’s hash lands on a different index in each list.
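A rough sketch of this pluck in Python (Loot itself does this on-chain in Solidity with the keccak256 hash; SHA-256 stands in here, and the weapon list is only illustrative):

    import hashlib

    # Illustrative list only; Loot's real weapon names live in its contract.
    WEAPONS = ["Warhammer", "Quarterstaff", "Katana", "Falchion", "Ghost Wand", "Grimoire", "Tome"]

    def deterministic_pick(prefix, token_id, items):
        # Hash prefix + token ID, then use the remainder as a stable index into the list.
        seed = f"{prefix}{token_id}".encode()                         # e.g. b"WEAPON56"
        value = int.from_bytes(hashlib.sha256(seed).digest(), "big")
        return items[value % len(items)]

    print(deterministic_pick("WEAPON", 56, WEAPONS))  # identical result on every call
    print(deterministic_pick("WEAPON", 57, WEAPONS))  # a tiny change in input, likely a different item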

Autoglyphs, Loot, Art Blocks et al. each feed different values into their deterministic generation functions, but the central concept remains the same: use a stable input to return a stable random value. A token’s image is drawn anew each time it is requested, built from the token ID and its hashed value.

In Token Hash, these two values which constitute the generative NFT are laid bare. Stripped of visual and story, the tokens display the scaffolding from which generative images are constructed. In their raw form the numbers convey the core characteristics of the on-chain generative NFT: rarity, scarcity, symmetry, beauty.

By reducing the on-chain generative NFT to its core mechanics, Token Hash enables us to look beyond the visual to examine the social and cultural experiences these numbers generate.


Token Hash public sale opens Thursday Oct 7 at 10 am EST.
1000 tokens will be available for sequential minting, at a price of 0.02 ETH each.



Proof of Work Origins
Sept 23 2021

“An interface is not just a portal for access, but a designed extension of the body that then designs the body in reverse.” Rachel Ossip, N+1, 2018

Much of my work is concerned with the relationship between physical and digital worlds; how software reaches into and manipulates the world, and how expression or gesture is modulated as it enters the digital. 

In www.grindruberairbnb.exposed (GUA), a group of participants are led to gesture and move via web-based interfaces on their mobile phones. The project attempts to make evident power dynamics between system creators and system users, by providing users with interactions so limited in scope that they require specific gestures to complete.



Proof of Work came from the idea of turning these systems of software guidance on myself. The first piece of software I created was a manual image generation program which required the entry of 10,000 values to fill a 100x100 pixel image. Scaled down from the broad gestures of GUA, this software induced the small-scale, repeated gesture of a keypress.
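A minimal sketch of that mechanic in Python, not the original application: one value is entered per keypress, ten thousand entries to fill the 100x100 grid.

    # Sketch of the mechanic only, not the original application.
    WIDTH, HEIGHT = 100, 100

    pixels = []
    for i in range(WIDTH * HEIGHT):
        entry = input(f"pixel {i + 1} of {WIDTH * HEIGHT} (0-255): ")
        pixels.append(int(entry) if entry.isdigit() else 0)

    # Reshape the flat run of entries into the rows of the image.
    image = [pixels[row * WIDTH:(row + 1) * WIDTH] for row in range(HEIGHT)]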

My first explorations with this application were tests of how well I could generate random values. Randomness is famously hard to generate, even for a computer. Rafael Lozano-Hemmer has a great artwork on this topic titled Method Random, which visualizes the patterns that occur as randomly generated sequences scale.


Producing 10,000 random values took around 30 minutes. I posted the manually generated image alongside a randomly generated reference and asked viewers to guess which was which.


Most responders thought that the computer-produced image was my production, though those with more computer experience guessed correctly. After this exploration, I was curious to see how different people might generate different images, and asked some friends to produce their own ‘portraits’ through the software.

It was only after these explorations that I entered the world of NFTs. I felt a deep drive to participate, but the raw speculative nature of the market felt off-putting. Not wanting to sell my friends’ productions, I generated a series of five 100x100 random images over one week.


The images show an interesting progression: pattern uniformity starts strong but dips on Wednesday, with consistency returning on Thursday and Friday.

Around this time, Beeple’s ‘Everydays: The First 5000 Days’ sold for a record-breaking $69,346,250. Buyer MetaKovan explained his rationale for the purchase as follows:

“When you think of high-valued NFTs, this one is going to be pretty hard to beat. And here’s why — it represents 13 years of everyday work. Techniques are replicable and skill is surpassable, but the only thing you can’t hack digitally is time.” — MetaKovan, Christie’s press release

This assumption that one metric can be used to determine the value of an artwork was a perfect encapsulation of the speculative tendencies of the market, and so I set out to challenge this assumption by embodying it directly. 

Adopting Beeple’s production pace, I generated one image per day. To provide varying levels of effort for the market to speculate on, I began each series with a pixel canvas of 1x1 and doubled it each day. A series would end when I could no longer complete one image in a day, my physiological limitations ensuring the scarcity of the series.


While producing these images I was reminded of keystroke dynamics, a field of behavioural biometrics. Researchers have found that the rhythm and pattern of a person’s keystrokes are unique to them, with the potential to replace passwords as an authentication method.

“[...] typing is a motor programmed skill and [...] movements are organized prior to their actual execution. Therefore, a person’s typing pattern is a behavioral characteristic that develops over a period of time and therefore cannot be shared, lost or forgotten.” Banerjee and Woodard, Journal of Pattern Recognition Research

If we interpret the patterns that appear in the image as a visual representation of this gestural biometric, the images transform from records of effort to minimum viable artworks, the hand of the artist made visible in the digital image.


https://proofofwork.jonathanchomko.com/
https://opensea.io/collection/proof-of-work-v1




Blogpost Sept 9 2021 

Took most of August off, biked from Montreal to New York City to visit friends. The trip was a real exercise in listening to small nudges, following through on little insights. Ended up in Provincetown for a week and a half, meeting people and hanging out on the beach. 

Back in Montreal, happy to be home and back at work. Applied to Mars College this morning, excited to see what people I might meet there. Have always been interested in desert living. If they’ll have me, I’d like to get a motorcycle and drive down before the winter fully takes.

There has been new interest in Proof of Work. Blue Duration is basically sold out - three of the last four I minted to my wallet are sold, and I’m holding on to the 1x1 as a sort of artist’s proof. I feel that those are the most emblematic of the project, a single gesture. 

I’ve been developing Red Pressure, and am reminded of what artistic work is; a slow refinement of an idea, a pushing away of the fear that the idea is not valid or interesting, a heeding of the desire for refinement. 

Red Pressure maps the pressure of a touchscreen tap to the intensity of colour. Originally I wanted to use the trackpad on my MacBook, for visual continuity in the documentation. I wrote up an application that received trackpad pressure information, but in testing the trackpad revealed itself not to deliver very consistent values.

I experimented with force sensing resistors, but these also had their issues, and aesthetically they departed from the visual narrative of human / computer interfaces. 

I was looking around to see if I could calibrate the trackpad and came across http://touchscale.co/, a website which uses a force-touch capable iPhone to give quite accurate weight estimations for capacitive objects.

I found an OSC controller app with 3D Touch capabilities (Syntien). Fully editable, and it sends granular touch pressure data. There is a slight delay in receiving the values using this approach, but the pressure readings are much more reliable.
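A minimal sketch of the receiving side in Python, assuming the python-osc library and a hypothetical /pressure address configured in the Syntien layout; the port is arbitrary:

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    # Assumed address: whatever path the Syntien layout is set to send
    # its pressure values on would be mapped here.
    PRESSURE_ADDRESS = "/pressure"

    def on_pressure(address, *args):
        # args[0] is expected to be a float pressure reading from the touch.
        pressure = float(args[0])
        print(f"{address}: {pressure:.3f}")

    dispatcher = Dispatcher()
    dispatcher.map(PRESSURE_ADDRESS, on_pressure)

    # Listen on the local network for messages sent from the phone.
    server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
    server.serve_forever()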

Colour always takes longer than I think. In Blue Duration I was just using a single colour value and multiplying it by the elapsed time between keypresses, which resulted in a light/dark modulation of the blue.

Modulating red in this way results in muddy shades which I didn’t like, and so instead I’m modulating between two shades of red, a brighter/pinkish hue and a deeper red.
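A rough sketch of that blend in Python, with placeholder RGB values for the two shades and pressure assumed to be normalised to 0 to 1 (which shade sits at which end of the range is a guess):

    # Placeholder RGB values; the real shades used in Red Pressure are not specified here.
    BRIGHT_RED = (255, 105, 120)   # brighter, pinkish hue
    DEEP_RED = (160, 20, 30)       # deeper red

    def red_for_pressure(pressure):
        # Blend between the two shades; pressure assumed normalised to 0.0-1.0.
        t = max(0.0, min(1.0, pressure))
        return tuple(round(a + (b - a) * t) for a, b in zip(BRIGHT_RED, DEEP_RED))

    print(red_for_pressure(0.1))   # near the brighter, pinkish shade
    print(red_for_pressure(0.9))   # near the deeper red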

The variation between taps is less than it was in Blue Duration, and I like how the differences in the pixels are almost imperceptible. There seems to be less of a banding effect in the early tests I’ve done, and a more scattered visual effect.

I’m aiming to start production of Red Pressure on Monday Sept 13, and have them up for sale by Sept 24, depending on how long the series runs for.




Blogpost July 9 2021

Came across this clip today, in Michael Connolly’s blog. In it, Vera Molnar speaks about why she uses randomness in her art, saying that the old idea was that the artist would create from a place of intuition, but that introducing randomness into a process would allow a machine to create variations beyond what intuition could produce.



I like this way of seeing the computer; it also makes sense when thinking of art made with machine learning and AI. The discourse in the news is often that the AI created the work, but the truth is usually that an artist or programmer finessed and sorted the output, selecting and compiling the best results.

The generative approach makes a lot of sense when dealing with visuals, because the eye can quickly scan a grid of generated images and pick out the most visually striking ones. A generative process makes less sense for audio or video work; time-based work doesn’t scan the same way visuals do, requiring a much deeper time commitment to interpret.

I think my lack of interest in generative processes is partly because I’m not interested in creating visual experiences. My interest is more in the realm of creating systems which generate experience. 

For example, the development of Shadowing was focused on creating a system which would evoke a sense of exploration and play. The visual of the shadow on the sidewalk was only important as a method of communicating the action of the system, not as a visual in itself. 

The visual output of Proof of Work holds a similar place in the project, a result of my engagement with an interactive system. The image is produced not by asking the computer for randomness within a system, but by setting a task for the artist, the difficulty and repetitiveness of which generates visual variation. 

Perhaps in a world where computers were external workstations, the idea of outsourcing the intuitive possibilities of art felt more exciting than at this current moment, where the computer feels internalized and all-encompassing.

Proof of Work embodies my approach to the generative image, where instead of celebrating the potential for the computer to add to the breadth of the artist’s practice, I see the computing experience as an encompassing set of systems which seek to modify behaviour and impede the flow of production and intuition.


Blogpost July 6 2021

Played around today with a random generator for Colour Time. I’m thinking of releasing a series of on-chain Colour Time animations, but I’m as yet undecided whether I will make them manually or generatively.

The advantage of generative is that the scale can be larger, and it speaks to the nature of the collectible NFT market.

But the problem with generative is that there is very little possibility for artistic expression, and you’re often just wrangling the generative system, bounding it in enough so that it creates images that are pleasing.

I don’t really find that type of work very interesting, and this is partly what I’m expressing in the Proof of Work series; that there is some specific value to art that is the direct result of the hand of the artist.

We can see the generative process as a sort of artistic tool, akin to AI, but I feel often the tool itself becomes revered rather than seen as just a tool.

A generative release does invite the audience into the artistic process; they work directly with the tool, often generating a random output. The artist and audience discover the series of works together, and the market makes judgements around which images are the most valuable.

Perhaps we can see the generative process as a way of adding variability to the concrete nature of digital creation; a way of adding a bit of randomness into a quite specific vision.

If we look at Fidenza, this perspective makes sense. All the outputs belong clearly to the same family, and we can understand the bounds of the system by looking at the outputs as a whole.

For Colour Time, because the visual structure is so minimal, I think these explorations are showing me that I should be making each one manually.