Ludwig Wittgenstein was an Austrian philosopher who worked in the first half of the twentieth century.
His two masterworks, “Tractatus Logico-Philosophicus” and “Philosophical Investigations,” focused on how we communicate ideas to one another, and why these communications frequently go sideways.
Wittgenstein was, and still is, a sensation. He is probably the last philosopher to earn the title “greatest living philosopher.” He was all that.
Wittgenstein believed that language existed to paint pictures inside the minds of others. How well we paint determines how well we communicate. Communication frequently fails because we are bad painters.
Today, whether we realize it or not, we are more frequently painting in the mind of AI. We call these paintings prompts. So the question becomes: how do we paint inside the mind of a synthetic intelligence such as DALL-E, ChatGPT, or Midjourney? In the world of image generation, everyone needs to be a poet, or a prompt engineer.
Learning how to “speak” to Midjourney, for instance, requires first understanding how it formed its worldview.
There are three key areas to understand.
Firstly, what data set shaped its worldview? What images form its view of the world? As much as the folks at Midjourney might want us to believe that they scraped the internet at random, their data set was built selectively and trained aesthetically.
Secondly, what words are most suggestive to AI? There are words that Midjourney understands immediately and others it simply doesn't grok.
Finally (and maybe most challenging), how do we convey our spatial intent? How do we describe things like depth and space to AI?
Articulating the world in 3D is not our native tongue. We were raised on flat images.
When I talked with AI artist Darien Davis recently, he told me it took over 30 prompts to get the relationship between a dog owner and his pet just right. The key phrase was “marked space between,” which seemingly prompted Midjourney to place the dog precisely alongside its master.
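To make the point concrete, here is a minimal sketch of how one might iterate on spatial phrasing when drafting prompts. It is illustrative only: Midjourney is driven by typing prompts into Discord rather than by code, and every phrase below except “marked space between” is a hypothetical variant invented for this example.

```python
# Illustrative sketch only: Midjourney has no official public code API; prompts
# are typed into Discord. This just shows one way to iterate on spatial phrasing.

# Each variant states the spatial relationship more explicitly than the last.
# Only "marked space between" comes from the anecdote above; the others are
# hypothetical phrasings to compare against it.
PROMPT_VARIANTS = [
    "a man and his dog in a park",                             # no spatial cue
    "a man standing beside his dog in a park",                 # mild spatial cue
    "a man and his dog in a park, marked space between them",  # explicit cue
]

# A shared style suffix keeps everything but the spatial wording constant,
# so differences in the generated images can be traced to the phrasing.
STYLE_SUFFIX = "full-body shot, natural light, shallow depth of field"

if __name__ == "__main__":
    # Print each variant so it can be pasted into Midjourney one at a time
    # and compared, the way an artist might iterate over 30 attempts.
    for variant in PROMPT_VARIANTS:
        print(f"{variant}, {STYLE_SUFFIX}")
```

The point of the sketch is the discipline, not the code: change one spatial phrase at a time, hold everything else steady, and watch what the model does with it.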
Learning to paint effectively in the minds of other people or in the mind of an AI is really the same thing. It’s one of humanity's evergreen problems.
It will likely always be a work in progress.
In the second half of the 20th century, the art world went through a process that was the opposite of using words to prompt images.
In “The Painted Word,” Tom Wolfe describes how images left the canvas and became abstractions. Theories and words replaced canvas and paintbrushes. Art became the description of art, not the thing itself.
In the end, all that existed was documentation. Today, we are picking up where Wolfe left off, but the words are no longer replacing art; they are creating it.
We all have a stake in better understanding how the machine thinks.
Our existence may well depend upon it. It’s a cruel turning of the tables.
Yet, just as the microscope allowed us to see an invisible world with a new level of granularity, AI will help us see what we call reality with greater fidelity.
If only we can learn to speak better in pictures.
Welcome to Nextness. More than a newsletter, a mindset.