This Week in AI: Google unleashes, EU Regulators take aim & AI predictions are problematic

This week Google unveiled its AI-based plan to retain its dominance in search, whilst wiping out yet more startups by adding AI functionality to its Workspace products. The EU fired some warning shots at the AI companies, and we realised that the heads of the AI companies have no idea how anyone is going to make money with this stuff. Including, in some cases, themselves.

Whew…

BTW you can watch the video of this week’s update over on our YouTube channel here:

Google’s AI Gameplan Finally Revealed

Since being caught on the back foot by ChatGPT in November, Google has been in ‘code red’ mode, figuring out how to wrest the AI crown back.

This week’s I/O developer conference did a decent job of doing just that. The main announcements:

AI Search Results

Called “Search Generative Experience” or SGE, this feature uses generative text answers powered by Google’s new language model, PaLM 2, to give searchers a quick ‘lay of the land’ on complex questions.

The Generative AI answer appears at the top of the page (or just underneath some Shopping ads for a commercial search they demoed). To the right are 3 websites so the user can ‘dig deeper’, and below are suggested follow-up questions or a prompt allowing the user to ask their own follow-up question.

Searchers can click an icon to expand this Generative AI section, which opens up links to more websites.

The websites linked aren’t being referenced in the answer, but are instead chosen to corroborate info given in the answer. This is inevitable, given that large language models can’t always pick a precise ‘source’ for their statements any more than a musician or artist can identify the precise source of inspiration for a lick or melody they wrote.

But clearly, getting your website featured in this space for relevant queries will become a hard-fought SEO battleground.

Elsewhere, there were demos showing products being recommended and some hints from Google about how Ads will feature.

We’ll be releasing a video shortly on the Exposure Ninja YouTube channel breaking down the SEO and marketing implications of these changes in more detail, so subscribe to make sure you don’t miss it.

Overall, this was the functionality that Google needed to drop to reassert itself as the dominant player in search. Provided that the answers are good, this should see it defend its market share and position as the go-to search destination… for now.

Microsoft, your move.

AI integration into Google Workspace tools

Effectively wiping out another tranche of AI startups focussing on generative AI in documents, spreadsheets and presentations, Google demoed its ‘Help me write’ and ‘Help me organise’ functions.

Think ChatGPT (or, more accurately, Google Bard) inside your docs. We have been expecting this since Google teased it in March, but it was great to see it in action.

Rebranding its AI inside Workspace tools as ‘Duet AI’, Google also demoed Sidekick, which reads, analyses and answers questions about documents, emails and sheets.

Google Bard takes on ChatGPT with ‘Tools’

Google Bard has been living in ChatGPT’s shadow: it launched with a really bad demo, suffered inferior performance and got left behind when OpenAI released GPT-4.

But Bard just got tools, and they might just give it an edge over ChatGPT…

For one thing, Google has natively integrated some of its own services:

  • Bard can search the Knowledge Graph to pull images and information

  • Google Lens gives users the ability to throw in images as prompts or as part of prompts

  • Google Maps allows Bard to answer questions with directions

  • Exporting to Google Sheets and Docs makes the answers more usable

There are also some extensions from third-party developers, with Google particularly keen to show off its integration with Adobe Firefly for image generation.

It’ll be interesting to see how Bard + tools stacks up against ChatGPT + plugins. We’ll update you as soon as we get access.

Safety

Google devoted a fair portion of its presentation to touting its safety policies, including reiterating its guiding AI development principles (which include things like “Be socially beneficial”, but make no mention of copyright or IP 🤔).

They teased some image watermarking plans and metadata functionality designed to help distinguish AI-generated images from ‘real’ ones, and revealed a couple of image search enhancements that would allow users to ‘fact check’ images by reverse-searching them to see if they had been debunked as AI-generated.

It’s interesting to see Google trying to carve out a position for itself as the safe and responsible player in this space, possibly in an attempt to demonstrate enough self-regulation to avoid (or at least position itself well for) the tidal wave of government regulation that you’d expect is heading the AI companies’ way.

Speaking of which…

EU Regulation

MEPs voted in favour of the most aggressive set of AI regulations yet. Under the proposals for the Artificial Intelligence Act, developers like OpenAI would have to disclose content generated by AI and publish details of copyrighted data used for training purposes, “so that creators can be remunerated for the use of their work”.

How? Which creators? Who handles and polices the remuneration? Who pays the fees? Who decides how the money is split? Can anyone even attribute training data input to specific outputs?

So many questions, and it does make you wonder how much these folk truly understand the technology they are in charge of regulating. Nevertheless, with 4 federal agencies in the US last week and the EU this week circling AI developers, the intent is clear.

Of course, this is not law yet. Negotiations will now start with other institutions and member states, which could take a long time - it has taken 2 years to get this far. But this is the most aggressive regulatory proposal yet and it’ll be interesting to see how it pans out.

And finally… an observation on just how difficult it is to keep up with AI

Three pieces of evidence from the past 7 days that no one - including the leaders of the companies building this software - has much clue what is going on.

Exhibit A: Sam Altman (OpenAI CEO) can’t predict how you - or anyone - will make money from OpenAI’s tools

The initial narrative about AI models was that you’d have general models that would be used for general output, but the real value would be in fine-tuned models that excelled in specific use cases. You know, build ‘ChatGPT for accountancy’ by feeding it data and fine-tuning it for accountancy scenarios.

Sam was a big proponent of this theory, suggesting this roadmap in an interview just 7 months ago as a way to create an enduring business (timestamped video here). 

Lo and behold, AI startups rushed into creating their own fine-tuned LLMs: BloombergGPT, Harvey the lawyer bot, etc.

Apparently that might not be where things are going though…

This week, in an interview at MIT, Sam said:

“We thought this was going to be a fine tuning story, but people are doing pretty amazing things with the base model and for a bunch of reasons seem to prefer that”

“As the models get better, it does seem like there’s a trend towards less of a need to fine tune and you can do more in the context” (i.e. just write a more detailed prompt into GPT-4 and the answers are great)

Gulp goes every business that built its own fine-tuned GPT model.
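To make Sam’s point concrete, here’s a minimal sketch of the ‘do more in the context’ approach he describes - instead of fine-tuning a custom ‘ChatGPT for accountancy’, you put the domain knowledge and instructions straight into the prompt of the base model. This assumes the OpenAI Python client’s pre-1.0 ChatCompletion interface (current at the time of writing); the model name, accountancy notes and prompt are purely illustrative.

```python
import openai  # pip install openai (pre-1.0 ChatCompletion interface assumed)

openai.api_key = "YOUR_API_KEY"

# Domain knowledge that a startup might previously have baked in via fine-tuning.
# Here it simply goes into the context window of the base model.
accountancy_context = """
You are an assistant for UK small-business accountants.
- VAT registration threshold: £85,000 taxable turnover (illustrative figure).
- Always ask for the client's accounting period before giving specific advice.
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": accountancy_context},
        {"role": "user", "content": "A client's turnover just passed £90k. What should they do about VAT?"},
    ],
    temperature=0.2,  # keep answers conservative for a professional-services use case
)

print(response["choices"][0]["message"]["content"])
```

No training run, no custom model to maintain - the ‘specialisation’ lives entirely in the prompt, which is exactly why a better base model erodes the moat of a business built on fine-tuning.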

Exhibit B: Reid Hoffman (LinkedIn founder, ex-OpenAI board member and co-founder of AI startup Inflection AI)

Reid is a well-connected guy, having been on the board of OpenAI before starting an AI startup with the co-founder of DeepMind. And yet, in this interview, he hadn’t even heard of the OpenAI Code Interpreter - arguably OpenAI’s most impressive tool yet.

This might sound like nitpicking - AI moves fast, and Reid’s a busy guy. Surely we can forgive him for not hearing about the latest AI tool?

But when this is your business area AND you used to be on the OpenAI board AND this plugin took so many in the world of AI by surprise with its capabilities… damn, Reid!

On top of this, he and the co-founder of DeepMind seem to have totally missed the mark with their Pi chatbot. It just doesn’t feel like a viable competitor to OpenAI and Google. The reasoning he gave to justify its existence in this podcast interview just wasn’t defensible. I’m yet to hear a single person comment that they like this thing, despite the company having raised $225 million so far.

Exhibit C: Box founder Aaron Levie also needs to read the PBAI Newsletter

In another episode of the same podcast, Aaron from enterprise Dropbox competitor Box also admitted to not having heard of Code Interpreter. This is despite his having just pivoted his company to focus on AI, and despite Code Interpreter essentially being a direct competitor to Box’s AI offering.

Again, it’s forgivable for an ‘ordinary person’ not to know about this tech. But for a well-connected CEO not to know about a competitor product like this is crazy!

###

I’m not criticising these people - they are busy and running companies.

But this does raise the question: how are businesses and ambitious people supposed to navigate a world that is moving so quickly?

With the pace that AI is moving, builders who want to pitch their tent over a goldmine are trying to hammer their tent pegs in an avalanche.

How do you make sure your industry isn’t wiped out by the next release you didn’t realise was coming?

My advice:

  • Stay on top of what’s going on

  • Stay familiar with the platforms, test and play. We’ll continue to do this too and will share what we find.

  • Don’t just look at the pattern, look at the direction and trends

  • Subscribe to the PBAI newsletter (and recommend your friends and colleagues do the same 😉 - just forward this on!)

If you have any feedback, just hit reply and let us know what you think of the newsletter!