I generally don’t get excited about technology until Google’s released their version of it.
Microsoft integrating OpenAI doesn’t ignite much in my brain.
Google Bard? You have my attention.
What is Google Bard?
Bard is Google’s conversational AI tool – their equivalent of OpenAI’s ChatGPT.
These tools are what get headlines, but Google has been using AI in search for years (for things like serving up videos set to the exact timestamp you need and delivering information you didn’t ask for but clearly need, like crisis intervention). In the wake of so much discourse about ChatGPT, they’ve announced even more AI integration to come.
So, Google Bard: a sexier and more up-to-date chatbot. It’ll do everything ChatGPT does but hopefully with greater accuracy and fluency.
Will Google Bard be better than ChatGPT?
While ChatGPT is trained on data up to 2021, Bard can pull live information from the web. That should cut down on erroneous answers and give it far more cultural awareness.
I also have more faith in Google’s natural language processing, because they’ve been parsing billions of uniquely worded queries every day for decades. That’s an insane amount of data on which to train a bot.
Google has the sense of responsibility that AI needs
While I hesitate to say I ‘trust’ Google, they have vast experience with power vs. responsibility. In their announcement about Google Bard, CEO Sundar Pichai used the phrase ‘bold and responsible’.
When I was first introduced to this kind of language processing a decade ago, my feelings about AI were mixed: amazement, but concern that the advancement would be led by companies with no obligation to put safety processes in place.
Conversational AI is uncanny – and our unease isn’t unfounded. The potential for harm, or at the very least societal upheaval, is massive. Already, I’ve seen a publisher halt submissions due to the overwhelming volume of AI-written books, and multiple universities are struggling to respond to the threat to the status quo. Suddenly, being able to detect AI-written content seems imperative – but we’re also starting to question why essays and homework exist at all if they’re… redundant?!
Further afield, AI image generation is steps away from causing some serious issues for us in terms of news stories. An image is worth a thousand words – words that we’ll no longer pause to read. If I see what looks like an on-the-ground photograph of troops committing war crimes… that’s hard to ignore, and we know many people lack critical media awareness as it is.
Now, in fairness to the AI tools out there, safety nets do exist. MidJourney will refuse a prompt involving violence or sex, and ban you if you keep misbehaving. But we know how fast tech spreads, duplicates and mutates. The fast-moving dupes of MidJourney coming down the pike may not care about moderation – they’ll launch an app, get removed in a couple of months, then launch another. I’ve seen the creepy AI-generated images that are crowingly presented as the end of OnlyFans. People are making those somewhere.
In 2018, Google was one of the first organisations to publish extensive principles for developing AI. They’re worth reading for anyone – whether you’re a consumer of AI products, you’re integrating AI tools into your work practices, or you’re developing AI products yourself.
Educate yourself on the ethics of AI
We live in an amazing and scary time. Technological advancement is brilliant and AI will help so many fields, from medicine to law, but we have to educate ourselves on safety and awareness.
Top tip: you can still spot an AI image by checking how many fingers a person has. The robots haven’t figured out how to do hands yet.