a.i.envisions.

Crisis, What Crisis? Where’s the genAI house of cards collapse?

by | Oct 13, 2024 | Featured, Opinion

This article is adapted from my 2024 Q3 quarterly GenAI for Placemaking newsletter, which went out to subscribers at the start of October and which you can sign up for below.

I was so sure

we’d be seeing the first signs of the genAI cool-off by now, and that the cool-off, as with so many speculative stars, would represent the sudden dip in pressure that causes the whole thing to fall in on itself.

And yet, I’m not seeing it. Not thus far. I am, though, seeing a cooling in the excitement about genAI. I thought that would precipitate the implosion in short order, but the pressure seems to be morphing from one thing into another, or several others, as though the star has found new elements to fuse.

Partly it seems that genuine use cases do exist for genAI; it’s not all hot air. And those use cases, at least in the case of placemaking, are most obviously in the creation of visual media. That this was the first thing anyone alighted on, courtesy of Midjourney, Stable Diffusion, and similar image-generators, doesn’t seem to have undermined the basic value of being able to churn out innumerable pretty pictures.

Does this distract us from seeking deeper value propositions within design and placemaking using genAI? Possibly. That depends on whether there really are any, or whether the probabilistic nature of large language models will always prevent them from being that useful.

Does Midjourney undermine the basic value of art? A year ago, I’d have said yes, but now I’m not so sure.

To keep with the star analogy, column inches written about genAI are a form of stabilising pressure, and so is venture capital investment. However, as yet it’s still too early for any VCs to have taken a bath too deep to climb out of, or for any of the big tech players to have announced they’re taking their toys and going home. If it comes, and it surely will, it will come in a year or two, when the outer layers of ill-conceived bloat start getting ejected into space. At that point, the pressure of media discourse, which keeps everyone’s attention glued on genAI, will shift from a positive pressure to a vacuum. I know that many people are already talking about this, many commentators wishing it to be known that they successfully predicted the collapse, but the capital flowing into genAI is still large enough that it doesn’t matter. Yet.

Interestingly, try Googling “genai bubble” and you’ll get the following results:

That sure does sound pretty concerning for an investor. Certainly, the sort of discourse one would want to be paying attention to, I’d imagine.

Anyway, let’s have a look at Paul Allen’s search results. I mean, let’s have a look at the “genai venture capital” and “genai value” search results:

Ok, so that’s… not great.

I’m not saying that genAI is a black hole that’s going to swallow us all, just as I’m not saying the Internet, whose nascent economy imploded in the year 2000, was a black hole that lacked any kind of actual utility. But still. There’s nary a shred of reticence or caution on show in that second set of results. And who are they from? Consultancy firms who cash big cheques to tell companies gripped by FOMO how they can “transform their business with AI”, and media outlets that rely on hype-driven clicks.

I’ve been saying since last year (yes, I know, along with every other commentator, it turns out) that hype goes both ways – MONEY BEING MADE! and MONEY BEING LOST! are equally attractive as headlines.

Speaking broadly about genAI, issues remain, and they might be foundational.

  • The so-called hallucination problems have been patched up but not eliminated, and this has real implications. Because hallucinations look plausible, the better a job one does of catching them, the longer the ones that slip past the guardrails are likely to survive undetected.

  • The issue of training on synthetic data remains: using AI-generated output to train new models leads to a kind of “inbreeding” – sometimes called model collapse – and, eventually, the degradation of response quality.

  • Related to this, scaling will run out. So far, capability increases have been unlocked every time the engineers made the model bigger – more parameters and more training data. However, there are reasons to believe this has a limit. In fact, many companies are putting out models that are smaller, not bigger. Does this mean smaller models aren’t useful? Not at all. But it does make the claim that AGI, or “artificial general intelligence” (whatever that means), is round the corner, or even viable in the medium to long term, less plausible. And a lot of the sector’s inflated valuation is based on some flavour of AGI mythologising.

All of this suggests to me that this – what we have now in terms of genAI capability – is probably it. OpenAI isn’t hiding another moonshot up its sleeve. AGI isn’t just around the corner. This isn’t the “end of work”, just as the 1990s didn’t turn out to be the “end of history”.

Maybe it doesn’t matter. Let’s just focus on what we’ve got. As things currently stand, the image diffusion models have clear applicability to the realm of placemaking. I’d suggest their use is primarily in the realm of illustration and suggestion, of ideation, although I have also seen good progress over the past year towards using them as a kind of render engine. That may well save time and allow greater flexibility in architectural visualisation. But render speeds have also improved greatly in recent years thanks to ray tracing and denoising technology, and having a person in direct control of the result guarantees there won’t be any little unforeseen mistakes that might prove costly. So it remains to be seen how useful this really is.
