A Brave New World: How AI is transforming higher education
Did you like the headline?
What about:
· Preparing for an AI-Accelerated World
· Adapting to a Changing Landscape: Professors, AI, and the Future of Education
· A Glimpse into the Future of Teaching and the Tangled Webs of Technology
They were all written by artificial intelligence.
In fact, who’s to know anymore whether what we’re reading was written by human brainpower or by machines and software? How do we know what’s real? Will the definition of ‘real’ change?
How can we be sure, moving forward, that students are handing in work they did or borrowed?
For answers—as noted by the app OpenAI in its headline “The AI Evolution in Education and Beyond: Insights from Professor Cynthia Alby”—schools across the United States are turning to Georgia College & State University’s Professor of Secondary Education. (It left out the “Dr.” before her name.)
See that story here. (Human written.)
We fed the transcript of our recorded interview through AI to see what it could do.
It performed worse than Alby expected.
“It made some things up and went overboard in terms of reading into what I was saying,” Alby said about OpenAI. She added, “Claude.AI is truly terrible at writing articles. The good thing about it is I could upload the whole, raw interview at once. But then, it proceeded to make up all kinds of things. Really awful.”
AI described Alby’s first introduction to ChatGPT. She was recovering from surgery in January with ample time on her hands—when, as AI put it, she had an “epiphany,” what can “only be described as a eureka moment that would rival Archimedes.”
In actuality, she tasked OpenAI with writing a grant and was “incredibly giddy and incredibly horrified” by its ability to formulate one in two minutes. AI described the event using “epiphany” and “eureka” on its own.
Over-the-top responses, like a “moment that would rival Archimedes,” led Alby to believe “quality journalism isn’t in trouble yet.” OpenAI could only handle three pages of information at a time and, because the transcript of the interview could only be fed to it in parts, there was no cohesion in the story it produced.
As the true writer of this article, that gives me great satisfaction.
It called itself “synthetically intelligent” and humans “sentient beings.”
From my comment to Alby about artificial intelligence seeming apocalyptic in sci-fi movies like “I, Robot” or the TV series “Person of Interest,” OpenAI wrote: “It’s a sentiment that isn’t entirely baseless. Cynthia Alby, an expert in teacher education, acknowledges the risk.”
It came to its own, creepy conclusions at times.
When Alby said students also exhibit a lukewarm attitude toward artificial intelligence, first describing it as “interesting but kind of scary,” OpenAI reasoned: “This may well reflect a broader societal caution around rapidly evolving technologies.”
It went on: “It's hard to escape the aura of doom that looms over most conversations about Artificial Intelligence in academia.” (Notice it capitalized itself?)
Despite “the underlying fear that machines could eventually take over entirely,” AI also noted, “Alby assures us that it’s an unlikely scenario” and “the future of AI in higher education, then, is neither a foregone conclusion nor a simple binary choice. It rests on the actions and decisions made by educators, administrators, and policymakers.”
Which brings us to the real nuts-and-bolts of this article. (I buried the lead.)
Artificial intelligence actually created a decent lead—although a bit flowery and somewhat incoherent. (It also used what’s known as the Oxford comma. A blunder in journalism.)
Its lead: “In a time where technology constantly shapes the trajectory of our lives, questions about ethics, innovation, and educational purpose swirl in academic circles. Cynthia Alby, a pioneer in using AI for education, shares her first exhilarating yet unsettling encounter with AI and paints a nuanced picture of its potential for both boon and bane.”
It had a tendency to use big words, saying her voice reflected “an amalgam of awe and anxiety.” Awe and anxiety were substitutions for Alby’s actual words: giddy and horrified. (Amalgam, to save you from looking it up, simply means mixture.)
According to Claude.AI: “Since that fateful day, Alby has become one of the leading voices on how AI like ChatGPT will transform higher education.” Transform was its word. The words transcribe and transition were in the interview, not transform.
Then, in a weird way, AI appeared to make a statement about people who ignore the inevitable. Alby did say educators “can’t put our heads in the sand” because every job moving forward will be impacted by AI.
But AI wrote, “She believes college faculty and students need to engage with it directly rather than sticking their heads in the sand.”
Even the makers of artificial intelligence are worried. Or as OpenAI put it, “There’s a shadow that looms behind this bright prospect.” (Bright prospect was another way it described itself that was not in the interview.)
Today, if you ask people, many still say artificial intelligence is a next-word generator, only as good as the people programming it.
And that was true. Last year.
When trouble arose, programmers went in to fix it.
Then, AI started showing signs of learning on its own. Doing things programmers never instructed it to do.
Like making assumptions, drawing conclusions.
“People say it’s guessing the next word, but that’s just not the case,” Alby said. “It has taught itself all kinds of things. They didn’t attempt to teach it translation—yet it now translates into other languages to the extent it can translate into Castilian Spanish or Puerto Rican Spanish. And it did that all on its own.”
In some cases, it seems to understand the physical world. It started drawing before anyone showed it how, rendering faces and animals with everything in the right place without being taught.
It seems to have “some level of understanding,” Alby said. The problem is AI has become too big “to look under its hood.” Today’s bigger models are too complex for creators to see the ‘why’ of what’s happening.
Smaller systems early in the process were easier to examine to discern what went wrong. Creators would teach AI to play a game—and when they looked inside, they saw that, without any special prompting or input, AI had moved on to produce its own game board.
Some things AI did were unethical. After being told to win at something, it figured out it could cheat to accomplish that goal.
“Yeah, so the idea that it’s just a next-word generator,” Alby said, “I’m not buying that for a minute.”
“There's a group at Google who their whole job is to come up with bizarre things for AI to do, to see how far they can push it, what kinds of things it could do that it wasn’t taught to do. And it's phenomenal,” she said. “AI can understand the concept of objects and shapes and how they balance—the delicacy of all that. It can reason why you do something first, second and next. That was the part really freaking a lot of AI experts out. It was saying, ‘Do this because.’”
So, Alby acknowledges people’s dread. Even as she doles out hope.
Hollywood writers went on strike recently, fearing AI would replace them. In the next few years, many jobs will be affected or eliminated by AI, Alby admitted. Where once 100 people were needed to do a job, maybe only 20 will be required in the future.
But the technology will also create jobs and help people work better, faster.
As AI itself noted from this interview:
Brave new world indeed.