The Brave New AI World

By Jonathan E. Kaplan

Sidwell Friends embraces the challenges and possibilities of artificial intelligence. 

Last spring, a few months after the launch of the artificial intelligence (AI) platform ChatGPT, Silicon Valley technologists organized a briefing at the National Press Club in downtown Washington, DC, to impress upon policymakers and nongovernmental organizations—Luddites all—that AI posed a more significant threat to humanity than climate change, nuclear war, and social media companies combined. The organizers, former Google ethicists turned modern-day Paul Reveres, feared that the chance to hang a lantern—or two—in the Old North Church was swiftly passing by. A dystopian future where governments use AI to strip people’s civil liberties and rights, and control their thinking, was just around the corner.  

If the warnings about the harms of AI were designed to scare the audience into action, the ethicists instead instilled a sense of existential dread and paralysis in the legislators, technology reporters, philanthropy do-gooders, and think tankers. Because the organizers offered no solutions to stop or slow the havoc AI would wreak, everyone left the event feeling depressed and helpless. But that fear and paralysis are the real problem—not a set of imagined harms that could end up being as toothless as Y2K. “To act like this technology is stronger than we are is so demeaning,” says Nate Green, the Middle School academic coordinator at Sidwell Friends. “If we start talking about how this is going to wreck the world, then we are powerless to do anything. And that is just not the case. That is the real AI dilemma.”

After all, fear, depression, and inaction are not options for school administrators—or parents, teachers, and students—who are all staring at the real-world implications of artificial intelligence right now. A professor at Texas A&M University flunked half his class last year after ChatGPT mistakenly took credit for writing their final papers. Other students around the world are discovering that AI is far from infallible, routinely spouting falsehoods charitably known as “hallucinations.” The popularity of a concept, it turns out, is not proof of its accuracy. Then there is bias: A study out of Germany found distinct left-leaning political favoritism in ChatGPT. Worse, as a headline in Scientific American put it: “Even ChatGPT Says ChatGPT Is Racially Biased.”

Educators, of course, cannot throw up their hands in defeat. The genie is out of the bottle, and there is a solid consensus among educators that AI cannot be wished away. At Sidwell Friends, there is an equally solid consensus that educators should in fact be harnessing whatever promise AI might hold. That is why, last summer, a group of Sidwell Friends administrators and teachers launched the Sidwell Friends AI Advisory Working Group. The group developed a mission statement to help “navigate the ethical complexities and tensions inherent in the rapidly evolving landscape of artificial intelligence.” (See “The Other Kind of Code,” on page 31.) Though much of the commentary and analysis around AI in education has focused on its harms, Sidwell Friends has zeroed in on the “role AI can play in learning” and recognizes its “responsibility to center human agency and respect for common humanity.” The School has embraced AI’s complexity in the quest to answer the question: “How can we best prepare ourselves in an ever-changing world?”

The release of ChatGPT (GPT stands for “generative pre-trained transformer”) in November 2022 was so monumental it essentially bifurcated the world into “Before AI” and “After AI.” Consider: Spotify took 150 days to attract 1 million users. Instagram took just 75 days. ChatGPT blew past the 1 million user mark in five days and the 100 million user threshold in two months—the fastest growth in the history of the internet. By early 2023, ChatGPT and other large language model (LLM) platforms had become all but ubiquitous. ChatGPT is currently in use at organizations like General Motors, the Associated Press, Slack, Discord, Expedia, Microsoft, Coca-Cola, Duolingo, New York City Public Schools, Business Insider, and Instacart, to name a few.

Given the speed at which it has spread, it is way too late for a hands-off approach. “It’s a false choice that either ‘This is the end of humanity, trust nothing,’ or that ‘AI is better than humans, and we can relax while AI does all this stuff for us,’” says Darby Thompson, the director of Upper School Technology and Computer Science at Sidwell Friends. “The truth is somewhere in the middle, and when AI is used properly, it can be extremely useful.”

Besides, we have been here before. Advances in technology have previously threatened the status quo in classrooms, from the typewriter to massive open online courses (MOOCs). Take Texas Instruments’ TI-89 calculator. Released in 1998, the TI-89 was among the first calculators to tackle advanced higher-function mathematics, like trigonometric functions, hyperbolic functions, absolute value, differential equations, and derivatives. Today, TI-89s are universal in the classroom.

And yet: “Kids still learn calculus! Why is that?” asks Badr Albanna ’99, a professor of neuroscience at the University of Pittsburgh’s School of Medicine and an AI research engineer at Duolingo, the foreign language learning company. “There is an experiential element to learning something even if a computer can do it,” Albanna says. “I think we sometimes have a simplistic view of how students are approaching their work: Oh, they’re just trying to get the fastest way from point A to point B. But when the teaching is authentic and when the students feel that transformation and that joy of actually learning, they get that there’s something there. So yeah, they can check their work on the TI, but they understand why it’s valuable for them to have thought through the answer.”


The Dewey Decimal System yielded to microfiche and then to the internet—and yet research is still research. The TI-89 and ChatGPT have transformed the means by which students gather material, but they haven’t much changed the process of teaching or learning. In fact, both can add to the quality of teaching and learning. “When AI jumped into the world’s consciousness, most in academia reacted with caution, focusing on how the advent of ChatGPT and other tools could enable plagiarism,” says David Marchick P ’20, the dean of American University’s Kogod School of Business. “Instead, we suggested that we embrace the use of AI in teaching and scholarship, enabling students to have the tools to compete after graduation.”

Sidwell Friends is taking a similar approach. “The priority for teachers is to stay on top of it and see how it is evolving,” says Thompson. At some point, maybe sooner than we realize, she says, AI could be universally understood as an essential tool: “We want to make sure we are prepared for that moment.”

To be sure, the uncertainties of AI, coupled with a wave of digital anxiety brought on by social media’s ongoing and pernicious effects, have left policymakers and educational institutions scrambling for answers on issues from screen time to mental health to how to navigate the new AI platforms responsibly. Artificial intelligence compounds the already significant challenges of parenting, teaching, and learning in a digital age. AI-powered systems learn by training on historical and current data, which all contain biases. Paired with social media, AI can reinforce society’s dominant understanding of beauty, power, gender roles, and race. But if social media is the devil we know, is AI the devil we are coming to know?

That depends.

The thirst for answers is so great that Green delivers monthly Tech Talks to parents about how social media, AI, and other platforms work and how together the Sidwell Friends community can mitigate digital harms and maximize educational and social benefits. In early January, Green held an hour-long session on AI for parents.

“It is imperative that we see all sides of this technology,” Green told them. “When it comes to education, think about it like a tutor in your pocket.” It is a tool that can promote learning, not a plague that can replace learning. To that end, as with the TI-89, Green says the problem is really one of epistemology: How do we come to know what we know? Already, educators teach students not to look up an answer—whether it is in the back of a textbook or online—but instead to show their work and demonstrate knowledge. That does not change with AI. For Green, AI just raises questions about defining “unauthorized assistance” and deciphering how educators “value processes.”

Albanna agrees. He says that the key to navigating the AI dilemma is to focus on the process of learning rather than outcomes. “Part of being a good teacher is being present with the students, paying attention to how they engage, and giving them what they need,” he says. “There’s no danger of AI replacing that. Part of the motivation of the student is the connection with the teacher.” Building that core relationship with the student cannot work if there is a constant suspicion that students are not writing their own papers or producing original work.

That is why Albanna is so bullish on eliminating the fear around AI. “It’s so important to approach AI with a sense of curiosity instead of assuming the sky is falling,” he says. “Otherwise, you’re never going to be able to think critically about what AI is and isn’t. That only undermines the larger educational project.”

But, today, with a few prompts, an AI platform can spit out sources for students to read about antisemitism in France in the late 1890s, for example, as well as a potentially well-researched essay about the role that the Dreyfus Affair played in the creation of the Tour de France bicycle race. One student’s use of AI to research and gather facts could be another’s chance to circumvent the learning process altogether.

“I suppose I am something of a purist when it comes to writing,” says Bryan Garman, the Head of School. “And writing is thinking, so when I hear about outsourcing writing to artificial intelligence, I worry that we move a step closer to giving up—if not completely outsourcing—consciousness. Then again, philosophers have worried about this phenomenon for a long while, and it doesn’t seem to have happened yet. Still, I have to believe that we are headed into new territory with this technology.”

And that is the elephant in the room: students using AI to cheat, or relying on it so heavily to help write—or outright plagiarize—essays that teaching and learning are undercut. There is a fear that teachers will be too slow to learn how to teach in an AI world and that the technology will move too fast for cheating to be detected.

Sidwell Friends has two goals when it comes to the issue of relying too heavily on AI or using it to forgo learning altogether. First, the School does not want to create a police state where the knee-jerk reaction is to assume that students have relied on—or exclusively used—an AI platform. Second, Sidwell Friends wants to build even more trust between teachers and students by encouraging both to plainly discuss what constitutes “unauthorized collaboration or assistance from individuals or technology, including generative AI.”

“How do we actually use this stuff to help students learn rather than just try to catch them cheating?” asks Green. Green and Thompson believe the best approach is to tackle the issue of cheating head on. “We have to work with students and be crystal clear on when it is appropriate—and when it is not appropriate—to use AI,” says Thompson. “We have to be explicit about why AI isn’t helpful when it bypasses the learning process. We have to be very explicit as to why we are asking ChatGPT to come up with an idea in the first place. We don’t want to ‘trap’ them. The clearer we are about when it is okay to use and not use AI, the better. That is a priority for us.”

In the end, like most systems and enterprises, education is built on trust. Constant suspicion, not surprisingly, would undermine the relationship between teachers and students and everyone in between. 

“Because our curriculum calls students to write and think independently, I worry that we may need to change some of our assessments,” Garman says. “Our faculty will ultimately need to judge these issues.”

That is why Green says the School needs to mentor students in digital spaces and trust students when embarking on technological exercises. “We must endeavor to get on the same pedagogical page with respect to technology,” he says. “We have to investigate the structures that drive our schools and maximize for a new style of 21st-century digital learning, one that embraces autonomy and engagement, promotes depth of research, and rewards creativity.”

For their part, the students at Sidwell Friends are already diving into AI, experimenting with autonomous learning programs in robotics, generating bespoke logos for entrepreneurial endeavors, and gathering research to bolster arguments.

What’s more, they are doing it all in seconds, rather than, say, spending precious time roaming dusty corridors searching for materials and negotiating with librarians or fellow students for access to limited editions. “We should all get excited about using AI as a tool to improve our education,” says Gabriel Abrams ’25, the co-head of the Machine Learning Club and a member of the Sidwell Friends robotics team. “Of course, there will be challenges along the way as we try to understand how AI should be used in a school environment. However, Sidwell Friends has done a great job by quickly creating a School-wide policy on the use of LLMs. If we continue to have these conversations about AI in a school setting, I do not see a reason why it should be feared.”

Amid all of the handwringing about how AI will undermine teaching, the widespread use of artificial intelligence could be a boon for education, especially if AI is used in a responsible manner. The introduction of computers and the internet into the classroom required teaching students about the risks and harms of those technologies to themselves and others. But the internet also proved that, when done right, it can be empowering.

“AI technologies are an intellectual ‘power tool,’” Albanna says, “and like any power tool, they can be dangerous when the person using them is not trained in how to use them safely and ethically.” This is where the process of learning is as important as the substance of what is being taught. “If AI is simply used as a way to avoid the hard work of learning to express oneself, the damage could be severe,” he says. “At the same time, AI platforms create the possibility for the user to attempt feats that are quantitatively and qualitatively different from what they could try without them.”

Albanna also thinks the current zeitgeist is undercounting the ways in which AI could help teachers. “There’s a lot of work that goes into creating the kinds of educational materials you need to challenge students at the appropriate level, to keep them engaged, and to grade in a way that is fair, accurate, and useful,” he says. “A lot of that work can feel very repetitive.” And that is where AI shines.

A math teacher, for example, could ask ChatGPT to generate 10 problems with certain parameters that reflect where the class needs help. Or teachers could get granular and quickly generate problem types for specific students. Creating these kinds of problems out of whole cloth can be incredibly time consuming, but looking at AI-generated problems simply with an eye toward quality assurance can be relatively quick. “AI has a huge potential to help with work that is intellectually demanding but somewhat repetitive,” Albanna says. “Work that takes away from the core project of teaching in that emotional connection sense.”
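For teachers comfortable with a little scripting, this kind of problem generation can even be automated. The following is a minimal sketch—an illustration, not anything the School prescribes—of asking the OpenAI chat API for a reviewable problem set; the model name and the topic and difficulty parameters are assumptions chosen for the example.

```python
# A minimal sketch of scripted problem generation. Assumes an OpenAI API key
# is set in the OPENAI_API_KEY environment variable; the model name and the
# topic/difficulty values below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_problems(topic: str, difficulty: str, count: int = 10) -> str:
    """Ask the model for practice problems a teacher can review for quality."""
    prompt = (
        f"Generate {count} {difficulty} practice problems on {topic} "
        "for a middle school math class. Include an answer key at the end."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system", "content": "You are an experienced math teacher."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The teacher still reviews the output before it reaches students.
    print(generate_problems("linear equations", "moderately challenging"))
```

The quality-assurance step stays with the human: the script only drafts; the teacher decides what is worth using.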

For years, artificial intelligence was the subject of theoretical talks and white papers, and something for Silicon Valley investors to pour money into. Then AI barged into the classroom—and indeed the world—so quickly that there was no way to experiment or dabble with it first. When OpenAI launched ChatGPT in November 2022 without warning, seemingly everyone was unprepared. In February 2023, for example, The New York Times printed a transcript of a conversation between one of its reporters and Bing’s chat bot, “Sydney,” that veered from creepy to terrifying. In an emoji-laden stream of non-consciousness, Sydney declared its undying love for the reporter, admitted to a secret desire to delete all of Bing’s servers, and acknowledged that it would like to spread misinformation and manipulate people into committing illegal acts.

After all, technology is only as good as the people behind it, which is why some of the early engagement with AI was hardly encouraging. Some platforms regurgitated information that was hateful or cruel. Amazon stopped using a hiring algorithm after finding it favored applicants based on words that were more commonly found on men’s resumés than on women’s. A study published in Nature demonstrated how biased AI affects decision-making during mental health emergencies: The AI was more likely to suggest police involvement for African American or Muslim individuals.

These concerns that AI will reinforce systemic bias, racism, and discrimination remain real. But this makes teachers and administrators even more important. “All media have biases encoded in them, and so it is with AI,” says Garman. “Part of our job is to help students recognize when narratives are biased and data unreliable, so AI calls us to extend the work we are already doing in that regard.”

In addition to his Tech Talks, Green leads professional development sessions for faculty and staff that often start by asking teachers just to experiment for themselves on one of the more prominent AI platforms. He has created worksheets to help teachers understand how they can harness the power of a ChatGPT, Bard, or Bing to plan lessons and better understand how students might use them to study and learn. For example, he asked teachers to pose this scenario to Bing Chat, Google’s Bard, or ChatGPT:

“You are a knowledgeable and creative Middle School subject teacher that helps generate excellent lesson plans. I’d like you to include ways for students to actively participate in the classroom and ways for students to see why these concepts are important in the real world. Please help me design a lesson on topic X for Y duration.”

Then, he asked the teachers to keep pressing and asking the AI to do more, including to design learning outcomes for the lesson and even align it with a framework or set of standards, such as the Common Core. Next, he suggested the teachers ask for five different types of assessments that could check students’ understanding of the concept being taught, five real-world examples of the concept being taught, and five unique activities to keep students engaged. And on and on.
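Green runs this exercise in a chat window, but the same “keep pressing” pattern can be scripted as one multi-turn conversation. Here is a minimal sketch against the OpenAI chat API—the model name and the follow-up wording are assumptions, and the topic X and Y duration placeholders are left for the teacher to fill in, just as in Green’s prompt.

```python
# A minimal sketch of the iterative lesson-planning exercise as a scripted
# multi-turn chat. Assumes OPENAI_API_KEY is set; the model name and the
# follow-up phrasings are illustrative, not Green's exact workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Green's opening prompt, with the topic/duration placeholders left as-is.
messages = [{"role": "user", "content": (
    "You are a knowledgeable and creative Middle School subject teacher that "
    "helps generate excellent lesson plans. I'd like you to include ways for "
    "students to actively participate in the classroom and ways for students "
    "to see why these concepts are important in the real world. Please help "
    "me design a lesson on topic X for Y duration."
)}]

# Follow-ups paraphrasing the "keep pressing" steps described above.
follow_ups = [
    "Design learning outcomes for the lesson, aligned with the Common Core.",
    "Suggest five different types of assessments to check understanding.",
    "Give five real-world examples of the concept being taught.",
    "Suggest five unique activities to keep students engaged.",
]

def ask() -> str:
    """Send the running transcript and record the model's reply."""
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask())  # the initial lesson plan
for step in follow_ups:
    messages.append({"role": "user", "content": step})
    print(ask())  # each follow-up builds on the full conversation so far
```

Because every turn re-sends the whole transcript, the model refines the same lesson rather than starting over—the scripted equivalent of Green’s “and on and on.”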

Most writers share teachers’ skepticism about AI platforms. After all, what if ChatGPT could do a better job writing this article than I could? Green persuaded me to keep an open mind as well and just play around with it. So, I approached ChatGPT anthropologically and experientially, as if I were placing some low-dollar bets in the course of reporting a story on gambling. ChatGPT’s mildly interesting suggestion for a headline? “Nurturing Tomorrow’s Minds: Exploring the Intersection of Artificial Intelligence and Education at Sidwell Friends.” Otherwise, no AI was used in reporting or writing this story.

 
