When i tell people i’m writing a book about Generative AI, the first thing they often ask is ‘oh, are you for or against it?’, which feels rather like they are asking if i am in favour of gravity.
Amidst gaslighting by politicians, the hyperbole of the tech bros, and the speed with which people tend to find certainty whilst remaining anchored within legacy paradigms, it’s easy to see why.
There is something almost hysterical about the stance, feel and fragmentation of the dialogue – or, as it may often feel – the disconnected monologues.
I thought i would share a few of the lenses through which i am seeing these new technologies, and the ways in which i seek to understand their impact, both short and longer term.
Firstly, i find it useful to distinguish between ‘change within a system’, and change to the system itself. The first relates to the ways we adapt, but within a scaffold of certainty and existing structure. The second relates to how change may fracture that very structure, and how emergent competitors may come from a different perspective altogether. Another way to consider this is optimisation vs disruption – whilst this language is not perfect, i hope it will illustrate the principle.
Already we are seeing the widespread deployment of optimising applications: essentially those that make things quicker, cheaper, more accurate, or more effective, but substantially within an existing structure. This includes inline ‘copilot’ type applications (helping us to write better – deployed into tools like Word and various email and blog composers etc.), narrative engines (helping us summarise, spot patterns, define actions and even support accountability – deployed into tools like Teams and Zoom), as well as more creative engines (like MidJourney and Firefly, enabling creativity to become democratised, creating assets at speed and scale). Also emergent ‘partners’, like Scribe, supposedly acting as inline critical friends. All of these are either piggybacked into existing software (and hence existing paradigms of operation, procurement, risk and control) or within existing and known ways of working (jobs, companies, legislative structures of copyright etc.).
Alongside these, and perhaps most visibly, sit the direct dialogue engines: ChatGPT, Bard etc., which offer a combination of power plus accessibility: pretty much anyone can find value in these within ten minutes of starting to play. They sit in the flow of our existing dialogue expectations and mechanisms, so we do not have to ‘learn a language’, or even really any very specific vocabulary.
These are the instances that may help us create, or cheat, better (whatever the distinction ends up being).
All of this is still within the familiar.
But beyond optimisation, and probably moving more slowly, although irrevocably, is the disruption. This is distinguished from optimisation in that it may not fit within existing societal expectations and structure. So AI may break ideas around creativity, productivity, profit, and class. It may fracture systems of education and law. Or of conflict and power.
I’m not saying that it will (it will), but it might (it will).
Indeed, it already is. AI-written books, AI-composed songs, AI-generated art and essays: these are all disrupting markets, systems of perception, and systems of control.
Organisations, no matter how good they are at change (and generally they are quite poor at significant change), can only flex and bend so far. In my more speculative work i would argue that our future Organisations will be lighter weight, more reconfigurable (less bound into codified structure), will disaggregate aspects of ‘task’ and ‘role’, will be permeable to expertise, probably held within diverse ecosystems of capability-holding bodies (new Guilds), and led socially, at least partially.
It’s unlikely that all of our Organisations will survive: emergent structures and underlying models will be radically empowered, not to optimise, but to subsume, subvert and re-author markets and services. Things we never knew we needed.
But today, we stand on our certainty: just this week i’ve heard people talk about ‘ethics’ and the ways they are certain that they operate (i am far from certain that they are even real), about ‘capability’, and how AIs will never be able to do certain things (i am uncertain i could identify anything that they won’t be able to do, in time), and a widespread conflation of hope, fear, or desire, with fact.
Someone told me we will not have GeneralAI within a thousand years: i am unsure it’s wise to hold such a long-term view with such certainty. One of the definitions of ambiguity is a breakdown of precedent and prediction.
There’s also widespread confusion about semantics and taxonomies, and what, exactly, counts as real. I’m a pragmatist: as there is no universal definition of intelligence, then conceivably when a system does something that looks, sounds, and smells intelligent, it probably is – at least in the pragmatic view of the real world.
Everyone with an opinion can be a hero, but we tend not to reward the explorers: the people who are willing to be unsure and, specifically, to work very hard to remain uncertain.
Now is a great time to be uncertain: not to fall into a consensus view, but rather to learn, and be willing to build upon that learning. The comfort we may find in certainty may be a cold one.
For all of the polarisation – that AI will escape our control, doom us, is too biased to be trusted, or too basic to be creative – we may fail to spot the most valuable truths of all.
GenerativeAI is here, right now. Millions of people are getting the hang of what it can do. And many of them are imagining new things that it may be able to do. There may be no such thing as a clear future ‘answer’, but rather a protracted period of disruption that will load progressive layers of pressure onto our legacy organisations, especially those who fail to create spaces to experiment, explore, watch and listen.
For all the conversations about the dramatic and often parroted outcomes, we may miss the small but important ones.
The everyday changes, the incremental waves of capability, the erosions of certain legacy structures of power and control, the blurring of certain boundaries, the gradual empowerment or disenfranchisement of whole segments of the population, and potentially both great productivity gains and the loss of certain valuable aspects of what makes us human.