Artificial intelligence can offer quick results and ideas, but that speed can come with biased or misleading outcomes. San Francisco State University professors warn that AI could propagate fake news and misinformation.
On the @SFSU Discord server, computer science major Zachary Weinstein said one of his favorite things to do with AI is to use the Bing AI image generator to make fake video game titles with covers and quote-unquote gameplay footage.
For example, Weinstein said someone made a fake Halo beach volleyball video game. The cover showed the series' main protagonist, Master Chief, playing volleyball in his bulky suit of body armor.
“There’s a screenshot of a first-person perspective of him on the beach playing volleyball and it’s just really funny and absurd,” Weinstein said.
When Weinstein was a mentor for introductory computer science classes, he would see many students do well while others struggled.
While mentoring students over those couple of years, Weinstein said one common issue was blatant plagiarism. Sometimes he would see two students submit nearly identical, if not the same, code. Weinstein argues we are reaching a point where it's difficult to tell the difference between something that is AI-generated and something human-created.
“At one point, I even had a student who didn’t even erase the name of who they stole it from,” Weinstein said.
Weinstein feels that if someone uses a program like ChatGPT for introductory classes, they're only hurting themselves, because ChatGPT is inadequate for the more complicated problems students encounter in upper-division classes.
“If you don’t have a strong grasp of those easier problems and these easier concepts because you use ChatGPT to solve them, then it’s just going to make all of these other things that ChatGPT can’t do much, much harder,” Weinstein said.
Weinstein compared the situation to the release of Photoshop decades ago, when people feared doctored images would be used to spread fake news. Many people now have the same fear about AI, he argues.
“There is a genuine concern with something like DALL-E 3 or any of the other image generators that are far easier to do,” Weinstein said. “Because with something like Photoshop, you still had to have the knowledge and the skills to use that software.”
Weinstein advises caution with ChatGPT because it will only be as creative as the person prompting it. Given a generic prompt to write an essay about a historical event, like WWII or D-Day, ChatGPT will produce something generic, short and repetitious, and even a refined prompt yields output that still needs substantial revision before it is usable.
“If you’re not very specific with your prompts and not very creative, you’re going to get an uncreative result back,” Weinstein said. “Even then, the result that you’re going to get is going to need a lot of fine-tuning before it’s ready for any essay or anything like that.”
English professor Jennifer Trainor leads a CSU-funded AI in higher education program with SFSU’s Center for Equity and Excellence in Teaching and Learning to create teacher resources and learning circles.
Trainor is concerned about bias and racism in AI programs due to their reliance on online sources.
“These large language models are built from data from the internet, which is largely English-speaking, largely white,” Trainor said. “All of that seeps into the output. Because the input reflects all of the racism and problems in our society, the output is going to, too.”
Trainor worries that the widespread use of ChatGPT may cause language homogenization, leading to people sounding similar to ChatGPT.
“So we sound like flight attendants of the airline, speaking that very formal kind of rigid language,” Trainor said.
Even if you tell ChatGPT to write in a more natural, human-sounding voice, the results will still be homogenized and carry racist undertones, because the AI draws on what appears on the internet, according to Trainor.
“If you ask ChatGPT to try to make it sound more voiced or more authentic, it goes quickly into stereotypes,” Trainor said.
Because the technology is still evolving, SFSU faculty members remain concerned about the use of ChatGPT. Trainor said faculty lack consensus on using it, but there is also little contention because the situation is shifting and fluid. Faculty members worry about AI search capabilities and want students to interrogate the answers AI programs provide and do their own research rather than copy and paste responses.
“How can we make academic work meaningful to students to deter any kind of plagiarism? That’s something we always work on and we always care about and we care about it now,” Trainor said.
Trainor compared the line between AI and plagiarism to citing sources in a paper. ChatGPT helps writers learn and build ideas, but citing sources, acknowledging methods, and being transparent about text construction are essential. However, ChatGPT doesn’t provide sources, adding another layer to the process. Trainor said ChatGPT can still be a tool for research, brainstorming, and editing.
“Nobody wants a cut-copy approach, where you just put in your question and copy the answer and don’t think about it anymore,” Trainor said. “Now, that’s not the point of learning. It’s not the point of writing. It’s not how any of it works.”
Many professors have reported students for turning in AI-generated assignments, according to Larry J. Birello, the manager for Student Rights and Responsibilities.
“It is becoming a serious problem,” Birello said. “There are many cheating, plagiarism cases now. Many excuses revolve around, ‘I just got lazy, ran out of time to write the paper or do the work, and didn’t want to not turn anything in.'”
When Birello hears that excuse, he informs the students that it is better to get a zero or points deducted for turning in work late than to get a zero and a conduct record or suspension/expulsion.
Birello said if a professor catches a student using AI in an assignment without explicit permission, the student will be disciplined. All professors must submit a report so that there is a record of the student cheating or plagiarizing.
“Submitting an AI-generated work is no different than a student having a friend write a paper, then turning it in as their own,” Birello said.
Birello said professors have varying attitudes about AI.
“Some more artsy, creative professors encourage the use as a muse or starting point for some of their classes,” Birello said. “Other professors are vehemently against it.”
Birello stated that the Office of Student Conduct does not have the authority to affect a student’s grades if they are caught using AI. It is the professor’s responsibility to do so.
“We would sanction anywhere from a warning letter to expulsion, depending on what the situation is or the student’s conduct record,” Birello said.
SFSU professors have complained to Birello about AI, asking if he can proactively do something to stop it.
“I tell them I can only react, not prevent people from breaking policy,” Birello said. “I believe ChatGPT is able to modify its own code now, so keeping ahead of it is almost impossible at this point.”
David Gill has been a lecturer in the Department of English Language & Literature at SFSU for the past 17 years.
Gill said our ideas and voice are the only things we have in life, especially if we come from an underprivileged environment. Like other professors, Gill feels that ChatGPT is creating an environment where people are becoming voiceless.
“In our culture, your voice is already being asked to be a different voice,” Gill said. “It’s really important that students not use ChatGPT to the point where it’s doing the thinking for them.”
One ethical question about AI, Gill said, is that the programs are trained on the manuscripts of authors who aren’t compensated for that use. He said they also perpetuate biased ideas about writing, which to him is very problematic.
“There’s no democracy to what these AIs are trained on, which makes them prone to the same kinds of colonialist, oppressive, discriminatory ideas about writing that are problematic in regular writing,” Gill said.
Gill assigns his English 114 students to write about a movie or an album that is culturally, historically or aesthetically significant enough to be preserved. Give ChatGPT that essay prompt, Gill said, and it will produce an impressive essay on the topic.
“It can write pretty much an A (grade) essay, like a much better essay than I’ll get from an average student with very little prompting and that’s really scary,” Gill said.
According to Gill, English teachers don’t have a reliable tool to detect AI use; Turnitin, for example, produces false positives.
“In my understanding, the false positives are so common that if you do it in a single class of 25 students, you will accuse, on average, one student of using ChatGPT falsely,” Gill said.
Gill came across ChatGPT detection software while searching the internet. He said the same vendor that promised to detect ChatGPT quickly also offered to sell an app that could scramble its language to avoid detection.
“From an English perspective, what we’re scared of is the student comes through a year of English classes in San Francisco State, and they never write a word,” Gill said.
According to Gill, the fear of AI extends beyond English. He cited a physics class as an example: students can enter their physics problems into ChatGPT and get the answer without effort.
“Then you’re not just talking about going through a class without writing; you’re talking about going through an entire university without thinking,” Gill said. “That is terrifying.”
Gill said that prospect is terrifying because educators, regardless of discipline, are there to inspire people to use their brains; if students aren’t doing that, something is wrong.
AI is being developed under capitalism, which predetermines how the programs will function, according to Gill. Because of this, AI programs are trained on various works without the authors’ permission and are taking jobs away from people. They are designed and marketed as time-saving, efficiency-boosting technologies that benefit the owner class, the bourgeoisie. Gill argues that they may help workers in the short term and make their lives easier, but at a certain point they will make the work so easy that the workers aren’t there anymore.
AI programs are also being trained for marketing, which raises ethical issues about how they manipulate people. Gill said we must consider how the product is developed, marketed and sold.
“That was Philip K. Dick’s point in his story, the AIs are creating propaganda. They’re creating these speeches that make these people want to make more robots. Make the workers work more. The point he was making was that it is really easy to mechanize something like propaganda or manipulation,” Gill said.