A teacher at an international school in Panama asks her students to critique essays “written” by ChatGPT, while others have their students fact-check AI material, according to Time. University of Pennsylvania professor Ethan Mollick requires his students to use generative AI while holding them accountable for the information in whatever they produce with it. These are great ways for students both to learn how to work with AI and to grow their media and digital literacy. But would it actually be helpful for our children to learn how to use this technology?
Apparently so. In “Everyone Is Above Average,” Dr. Mollick has just written about fresh research on the effects of generative AI in the workplace by researchers at Penn, Harvard, MIT and Boston Consulting Group. This statement of his leaped out at me: For people who “have a … learned gift for working with [large language models or LLMs like ChatGPT], AI is a huge blessing that changes their place in work and society.”
It’s not that AI is good at everything in multiple professions. It has “uneven abilities,” meaning it’s unlikely to replace whole jobs (contrary to seemingly everyone’s greatest fear). But “it will free up time to do more valuable, satisfying, and productive work,” Mollick writes, and it will “elevate the skills of the lowest performers across a wide range of fields to, or even far above, what was previously average performance.”
The technology and its applications are evolving fast, but so far the research is indicating that “the new stars of our AI age,” as Mollick puts it, are likely to be “sought out by every company and institution the way other top performers are recruited today.”
He uses a couple of interesting metaphors for working with AI: centaurs and cyborgs. These may be a little off-putting – whether because they seem de-humanizing or because, like me, you hate seeing the term “cyber” applied to anything human. But there’s actually something helpful in these metaphors for students, parents and educators – or anyone interested in what the job market will look like when our kids head into it.
That’s because they describe two different ways the new workplace stars already use AI. Remember the half-human, half-horse centaur from your Greek mythology lessons? That’s the AI worker who knows what they do best and what AI does best, and uses the strengths of each separately. Cyborgs, on the other hand, “blend machine and person, integrating the two deeply,” Mollick writes. What’s important, I think, is his suggestion not to use AI merely as a replacement for tools that already exist, like a thesaurus or search engine. Generative AI is a whole new kind of tool that is best used in new ways. It’s not easy working on a learning curve, but it’s looking more and more like doing so is well worth the challenge – for both educators and students.
So more and more research is indicating that human+AI work is going to be in demand across many professions and workplaces – in some cases more desirable than what either humans or machines produce by themselves. Which reminds me of what Marc Prensky wrote about “digital wisdom” over a decade ago (here’s what I wrote about that then). And it suggests to me now that it would be very helpful for our kids to start learning how to use generative AI yesterday.
- This just in: Two days after I wrote this, Professor Mollick wrote “A FAQ of sorts,” answering a dozen questions, from whether AI writing can be detected to how best to get good at using AI to whether AI will get worse as the Internet fills up with AI-generated content. So I’m adding this later.
- Other posts I’ve written on the subject: my July freeze frame rounding up a range of insights; thinking about little Internet users and gen AI; and my first post on the subject last February on ChatGPT for growing students’ media literacy