Happy New Year, dear readers! It was quite a year last year, eh? I’m sure that, like me, many of you are glad to move on. More on what we’re moving into in a moment, but first let’s use one of the biggest news stories (not just tech stories) of 2023 – generative AI – to look at the Top 5 news stories of the past year.
I asked both Microsoft’s Bing, powered by GPT-4, and Google’s Bard, powered by Gemini, for their Top 5 picks. [Sidebar: I do think that, going forward, the AI companies will need to go easier on us with all these brand names!]
Their responses were very different and, interestingly, did not include generative AI, which is far from just a tech story and likely will turn out to be an even bigger news story than social media has been for 15+ years. Surely not an indicator of AI modesty!
I have to say, Gemini/Bard’s response seemed more intelligent to me overall – however, G/B did not provide sources, which was a big strike against it. But it put the Top 5 in categories: “Global Politics and Conflict” (2 topics: the escalation of Russia’s war in Ukraine and “US government turmoil”) and “Economics & Technology” (2 topics: “Global Recession and Tech Meltdown,” referring to the massive layoffs in the tech industry around the turn of the last year, and “Climate Change & Environmental Cataclysms”). Its fifth top story was “Societal and Cultural Shifts,” referring to a “resurgence of social movements” such as those calling for racial justice, women’s rights and LGBTQ+ equality.
As for Bing, it did reveal its source and provide a link to it; kind of ridiculous, though, that it was just the one source: Time.com. When I asked it to use more sources and summarize their takes on the Top 5, it told me, “I’m sorry, but I am not capable of searching for news stories from multiple sources at once,” but it did give me a nice list – CNN, the BBC, the Guardian, the New York Times and Reuters – and said I could check them myself. Gee, thanks, Bing, great list, but did you not even want to provide links to those news outlets? Anyway, here’s what Bing said Time said were 2023’s Top 5:
- Bing’s No. 1 was not in G/B’s collection: “The Covid-19 pandemic could be officially over, but how long the economic fallout lingers is another question,” Bing wrote. It linked to a bunch of news stories, but I felt the need to google *who* had declared the pandemic’s end (the World Health Organization, US Centers for Disease Control and other agencies), and I was glad this so-called No. 1 story included the economic angle, which we’ll probably all be experiencing to some degree.
- Ukraine: “Russia’s war in Ukraine grinds on” (also in G/B’s)
- “Iran bends toward democracy – or doesn’t” (not in G/B’s)
- Economics: “Recession hits much of the world” (in one of G/B’s categories)
- “Latin America’s political tide shifts”
Bing’s No. 5 wasn’t in G/B’s, which – for media literacy fans – really illustrates the value of using more than one search tool and certainly of checking multiple sources. Your experience will be different from mine, if you try this, because these were just my first queries of the chatbots. The second and third times were different, partly because my subsequent prompts weren’t worded exactly the same as my first ones and partly because these chatbots compose their answers probabilistically, so even an identical prompt can come back worded differently from one run to the next.
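For the more technically inclined readers, here’s a minimal sketch of that variability, assuming the OpenAI Python client and an API key set in your environment (the model name and the prompt are just examples for illustration – not what Bing or Bard run under the hood):

```python
# A minimal sketch of chatbot variability: send the exact same prompt
# several times and compare the answers. Because these models sample
# their words probabilistically (temperature > 0), responses usually
# differ from run to run.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; "gpt-4" is an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "List the Top 5 news stories of 2023, one sentence each."

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # nonzero temperature = varied wording
    )
    print(f"--- Attempt {attempt + 1} ---")
    print(response.choices[0].message.content)
```

Run that and you’ll likely get differently worded – and sometimes differently ranked – Top 5 lists each time: the same lesson, in code, about not treating any single chatbot answer as definitive.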
A war and humanitarian crisis missing
I guess it’s not surprising the chatbots didn’t agree on the top stories, but I was dismayed that neither included the Israel-Hamas war in Gaza, which the UN calls a humanitarian crisis that “has reached catastrophic proportions.”
That they didn’t may be a function of the chatbots’ inability to be up-to-the-minute – or rather up-to-the-month, even! – another sign that we can’t rely on them too heavily for now. Good old Google News is more current, so if you use Bard, scroll on down past its answer too, just to be sure you’ve got the most current sourcing.
So my subjective takeaway is that the search chatbots, which as we know make stuff up, aren’t bad when you’re looking for a summary of the opinions out there, but people will need to rely heavily on their media literacy skills when seeking facts, and Wikipedia is still better for that kind of project. The second, hopefully obvious takeaway is that the need for media literacy education has grown exponentially in the past year, and it was already huge.
Looking out on 2024
As for 2024, we all know it will not be a cakewalk. Taking politics alone, this is not just an election year. It’s “the biggest election year in history,” according to the Economist, with national elections in 76 countries, including 8 of the world’s 10 most populous countries. Forty-three of them “will enjoy fully free and fair votes,” according to the Economist Intelligence Unit (27 are EU members). My country’s coming vote already feels existential, at least for democracy in America.
Digital technology and media will both document and further dramatize all the politics, and that interplay is quite likely to become one of the year’s Top 5 stories itself. For one thing, regular news consumers can’t yet tell what is fake and what is real in AI-generated text, photos and videos. Gaining that capability means winning a production-vs.-detection arms race between the people creating increasingly sophisticated deepfakes and the researchers upping the sophistication of their detection game. So take political ads with a huge grain of salt. Hopefully, the AI trainers will someday train their LLMs to detect prompts with harmful intentions. But is it any surprise that “authentic” was Merriam-Webster’s word of the year last year?
“Hierarchies of AI”
Meanwhile, one of the most interesting observations about generative AI I’ve seen in the past month is about the “hierarchies of AI” now developing. That’s how Prof. Ethan Mollick at the University of Pennsylvania described it, with categories of generalist vs. specialist AIs emerging – though he wrote that the large language models so much in the news last year (e.g., ChatGPT, Bard and Anthropic’s Claude) will not only always be ahead of the more specialized models but will widen the gap in what they can “know” and create.
As for the sort of media that has become familiar to us – social media, the “public-facing” kind (X/Twitter, Instagram, Pinterest, etc.) – it “is fading away,” Joanna Stern of the Wall Street Journal reported last month. It’ll be gradual, because media users, like all human beings, resist change, but younger ones are less and less interested in living in a fishbowl.
Youth on social media
For Millennials, and maybe even more so for Gen Z-ers, the Internet has gone back to its pre-social days, according to culture reporter Kate Lindsay, who writes that it’s mostly about watching (videos, reels, TikToks) and small-group chat (including DMs, or direct messages). She cites TikTok influencer Taylor Stewart (68k followers) saying Instagram has gotten all “judge-y and filtered and corporate now,” with a whole lot of “ghost-watchers” lurking and with people holding on to what you used to be like. “No one is supporting you” – no likes, no comments, just lurking.
“It’s just icky,” Lindsay writes, adding, “This echoes a wider sentiment I’ve seen creeping into my personal feeds: No one really posts anymore, no one’s having fun, and it’s partly for this reason that no one seems excited about any of the newer apps and features.”
Her take is borne out in the (qualitative) 2023 report on youth and social media from UK regulator Ofcom, and she cites Adam Mosseri, head of Instagram, saying most of the platform’s recent growth has been in DMs and Stories – again, private communication and videos. It’s likely there will always be influencers, guides, tutors and other creators in social media, however it evolves, just as we’ve long had radio, TV and film personalities. Scrolling through video shorts is now just as important to entertainment, media-style, as binging on whole seasons of Netflix series – it’s the new short-form vs. long-form.
Please don’t panic
Meanwhile, generative AI is becoming more social. It’s no longer just humans prompting AIs, 1:1; it’s humans hanging out with them: chatbots in social media groups. Last month Meta announced that “all US users can now message the chatbots in WhatsApp, Messenger and Instagram,” PC Magazine reported – chatbots that look like celebrities such as Tom Brady, Paris Hilton, Snoop Dogg and, perhaps most importantly, MrBeast. The jury is definitely still out on how popular this will be. What appear more promising at the moment, according to academic research, are chatbots for mental healthcare.
Given all that, I think it’s worth considering Kate Lindsay’s words: “AI is a tool for humans, and what makes it innovative depends entirely on how humans employ it” (and, I’d suggest, how humans train it, too). I agree with her that “moral panic is a distraction from the ways humans can (and will) use it for, well, not end-of-the-world evil, but certainly to do things like steal your credit card information” or impersonate a celebrity, extort a minor or bully a peer by sharing fake nudes.
Those are super troubling, and we need to focus more on harm reduction in those areas than on the existential-risk narrative so prominent in the first half of 2023. Lindsay continued, “I don’t care what the AI wrote – but I do care who asked it to.” Or trained it to. We mustn’t panic, but we do need transparency, starting this year, not only on how our data’s being used to train the AIs but on how they’re being trained and used. But what do you think?
Related links
- About US parents’ and kids’ current understanding of AI: On behalf of the Family Online Safety Institute, the Kantar research group conducted a qualitative and quantitative study, unveiled this past November, on teens’ and parents’ understanding of AI and generative AI and the “emerging habits, the hopes and the fears of parents and teens” thereof.
- Top story within a top story (so not a top story but related): You might call it “Effective Altruism vs. Effective Accelerationism.” Normally I hate binaries, especially the good-guys-vs.-bad-guys kind. But this binary helps explain all the end-of-year turmoil in the gen AI field a bit and, more importantly, how fraught AI safety is. “Effective accelerationism” is the new “move fast and break things” mentality often associated with the early days of social media. As the New York Times’ Kevin Roose explains it, it’s “a cheeky response to an older, more established movement — Effective Altruism — that has become a major force in the A.I. world,” a force for safety, to put it simply. At least half of OpenAI’s board – the board that fired CEO Sam Altman in November (four days before he returned) – had deep ties to the Effective Altruism movement. Bloomberg goes in-depth on its history. Around the time Altman returned, the “Altruists” were gone; the “Accelerationists” had come out ahead, the Wall Street Journal reported. Which seemed to confirm “move fast and break things” is back in force at OpenAI, at least. Can we not learn from history? Unfortunately, the Silicon Valley investment ecosystem is not set up for that. All this suggests to me that it’s good that people outside my country are thinking about AI governance. This just in…
- “Governing AI for Humanity”: That’s the title of the preliminary report just released by the UN’s new AI Advisory Body, first convened just over two months ago by Secretary-General António Guterres.
- EAs’ unusual presence in Washington: helpful perspective from Politico – are the Effective Altruists the new fear-mongers or just a counterbalance to Silicon Valley accelerationism (the classic question)? Among other things, Politico points out the argument that some EAs’ obsession with potential existential risk is at the expense of effectively addressing near-term ones such as algorithmic bias and deepfakes like these – which is itself a risk, I think.
- But we wouldn’t be seeing any of these arguments without Silicon Valley’s ability to marshal two things – massive compute power and huge amounts of data – which is what computer scientist Geoffrey Hinton explains at about 18 min. into this video interview from the University of Toronto. Dr. Hinton is one of the creators of deep learning, a technology generative AIs use to learn and create content. He and his colleagues won the 2018 Turing Award, often called the “Nobel Prize of computing.”
- Teens on social media: Here’s a qualitative study British regulator Ofcom published last year on 21 teens’ practices and perceptions.
- Teens’ level of AI use: In quantitative research last year, Ofcom found that UK kids and teens “are driving early adoption of gen AI,” with 79% of online 13-17-year-olds and 40% of 7-12-year-olds in the UK using generative AI tools and services. Snapchat’s My AI is the most popular gen AI tool among 7-17-year-olds, with 51% of them using it.
- Some early (2023) gen AI numbers: ChatGPT racked up over 600 million visits in January [’23]. The bank UBS estimates that it took two months for the software to gain 100 million monthly active users; for comparison, TikTok took nine months, and Facebook took four and a half years, according to a very thorough article at Vox last March.
- Meta’s approach: MIT Technology Review went in-depth on how Meta took the open source route with its large language model, Llama 2, which is great for crowd-sourcing innovation and problem-solving but can also create or accelerate serious problems. To address that, apparently, Meta last month announced its Purple Llama project to provide “open trust and safety tools and evaluations meant to level the playing field for developers to responsibly deploy generative AI models and experiences.”
- The new kid. Well, sorta – more of an upgrade. Last month Google announced Bard’s upgrade, Gemini, which MIT Technology Review says “outmatches GPT-4 in almost all ways – but only a little.”
- X/Twitter’s CSAM nightmare: Stable Diffusion’s too, actually, and of course all platforms’ to some degree. In October, Rolling Stone published an in-depth report on how Elon Musk’s platform “is handling the child abuse content that should be caught with automated security tools” and why the Stanford Internet Observatory says X is “‘definitely more vulnerable’ than other large players in tech.” [Stable Diffusion is a popular AI image generator like OpenAI’s DALL-E.]
- Other posts on gen AI in this blog: on whether kids should learn how to use it (Oct.); a July freeze frame, because so much was going on then; at the end of February, a look at the dark side where kids are concerned (and what we need to teach little ones); and, early in Feb., thoughts on ChatGPT for media literacy training.