RICHMOND — Gov. Glenn Youngkin recently issued an executive directive that emphasized the looming impact of artificial intelligence, though higher education is only beginning to grapple with how to use AI.
The directive aims to ensure AI is used responsibly, ethically and transparently in state government, job creation and education.
A survey released earlier this year found that 60% of college students polled have not been taught how to use AI tools ethically or responsibly by higher education instructors. The same percentage of students also think AI tools will become the new normal, according to the BestColleges survey.
A U.S. Department of Education policy report published in May expressed support for using AI to improve teaching and learning. The department also stressed the need to develop clear policy for AI use, and cautioned that anticipated risks and unintended consequences must be addressed.
ChatGPT was released to the public less than a year ago. The chatbot uses language models to mimic human writing and dialogue. It can respond to questions and generate various written content, including emails, prompts and articles. The chatbot is a form of generative AI, which can also create images, videos, songs and code.
Educators at every level are now faced with how to appropriately address the new technology.
Like many universities, Virginia Commonwealth University faculty and staff continue to discuss AI’s role and how to guide professors moving forward, according to Mangala Subramaniam, the university’s senior vice provost of faculty affairs.
VCU will solicit feedback from faculty on Sept. 26 to learn how AI has impacted their classrooms. The university will create an advisory council of faculty who are familiar with AI, and who can provide updated guidance for professors.
Faculty at VCU are either fearful of the technology or willing to experiment with it, according to Subramaniam.
The university held two forums earlier this year focused on the potential challenges and opportunities of AI, including ChatGPT. Professors have the freedom to decide if they want to use AI in their classroom and are advised to make expectations clear in the syllabus about its use, according to Subramaniam.
Educators may face problems with AI, including plagiarism and how to detect whether a student has used it. Students may be uncertain about what use is acceptable and allowed. VCU describes AI plagiarism and copyright as a “difficult topic” and advises professors to make clear to students that they will be punished if they submit AI-generated work as original content, according to the university’s learning tool guide.
Educators and businesses need clear ways to detect AI-generated work, which has driven an industry response.
The software Turnitin allows educators to check student work for originality and plagiarism. It can now detect 97% of ChatGPT and GPT-3 writing, according to its website.
VerifiedHuman is a relatively new company that seeks to differentiate human-made media versus AI-generated media, according to its founder Micah Voraritskul.
VerifiedHuman is conducting a study in which the company will collect a thousand writing samples from college and high school students across the globe to determine whether each was written by a human, written by AI, or put through an AI scrubber, according to Voraritskul. A scrubber is intended to modify AI-generated text to make it appear more human.
“I think what we’re trying to do is help institutions of higher learning have some kind of policy,” Voraritskul said.
Teachers are nervous about AI because their job is to assess student learning, he said.
“It’s hard to assess student learning … if 90% of assessment is done in writing and you can’t determine whether or not the student wrote that, you don’t know what the student has actually learned,” Voraritskul said.
Student and faculty reaction to AI use depends on the assignment, the outcome and the standards of learning. Arielle Andrews is a VCU interdisciplinary studies student, with a focus on media studies, sociology and creative writing. She is a contributing writer for the independent student newspaper The Commonwealth Times.
“I think the best thing to do for students is instead of teaching them to fear, or like have a disdain for AI, is to more teach them how to work alongside it and use it ethically,” Andrews said.
AI can be a beneficial tool and better used for things that are not “super impactful to the learning process,” Andrews said.
“If an assignment can easily be completed by AI, then it’s not testing those human traits of writing that it should,” Andrews said.
Voraritskul is “pro AI.” The tools can help students do better work in the future, he said. But he sees the potential danger of AI’s influence on critical thinking and on understanding difficult concepts.
“When teachers are asking students to figure hard things out they want them to use their brains,” Voraritskul said. “They want them to exercise their brain muscle so they can figure out what’s going on in this problem.”
Although the BestColleges survey found students were concerned about AI’s impact on their education, more students were concerned about the impact of AI on society at large.
Voraritskul recalled that math teachers all over the world were concerned students would not learn how to add or subtract when Texas Instruments mass produced the first affordable calculators.
“Well that wasn’t true,” Voraritskul said. “And what are you going to do? Stop the calculator? Stop the computer? Stop the internet? Stop AI? No, you can’t. You have to adjust.”
Capital News Service is a program of Virginia Commonwealth University’s Robertson School of Media and Culture. Students in the program provide state government coverage for a variety of media outlets in Virginia.