San Francisco State University has been offering training and events for faculty on various AI-powered technologies. However, the implementation of these tools has sparked a range of reactions from faculty and students.
As part of the recently announced AI Initiative, the California State University system has signed an 18-month contract with OpenAI, set to run until July 2026 with an option for an extension. This agreement will be funded by the chancellor’s office, according to Nish Malik, senior AVP and chief information officer at SFSU.
Malik, along with other SFSU staff members including Michelle Anolin, Robyn Ollodort and Andrew Roderick, led an informational meeting for faculty and staff about the OpenAI ChatGPT Edu licenses, which are available to campus at no cost until July 2026 under the agreement. The licenses include enterprise commercial data protection, interactions, multi-platform access and custom GPTs.
“Our goal is to launch OpenAI ChatGPT Edu by the end of this month for all active faculty and staff, and the student rollout is still being determined,” Anolin said. “We’ve conducted two pilot phases. We completed the first phase which ended on March 7th, and we’re currently in pilot phase two.”
In the first phase, a select group of users tested user management, custom GPT development and access control.
In phase two, a group of users is now testing automated provisioning and deprovisioning. If the pilot phase is successful, SFSU will send a campus-wide email with access details and ChatGPT account setup instructions for faculty and staff by today.
A question that staff and faculty raised during the Zoom meeting was, “What happens after July 2026?”
“We don’t want to build a bunch of functions for the university for something that might not be here past that date,” Roderick said. “That’s something we’re really looking [at] as to whether it gets continued funding.”
With the recent university budget cuts, there is uncertainty about whether the rollout of AI integration will be in full effect after July 2026.
“Like many SF State faculty, I’m concerned about the use of AI in higher education,” said Martha Lincoln, an associate professor of anthropology at SFSU. “It is very important for students to develop the ability to think critically and independently — and many uses of generative AI actually undermine that skill. Though there are probably good uses of AI applications in some research activities, I don’t think it has good uses in most of the teaching and learning we do on our campus.”
Lincoln also said AI use has negative effects on the world.
“It has very significant environmental costs, especially in terms of its energy and water use,” Lincoln said. “A number of AI companies have also made use of very cheap, exploited labor in the Global South to train AI tools.”
The objectives of the faculty training are to summarize what generative AI is and how it works, describe the accuracy and ethical considerations associated with AI, identify task-related opportunities for generative AI and provide prompts that produce practical outputs.
Ali Kashani, a political philosophy professor at SFSU, fears for the future of faculty and students now that AI is being introduced into schooling. Kashani’s concerns are not only for students’ critical thinking abilities but also for the future of faculty and their jobs.
“There was no meeting, it was just surprisingly announced,” Kashani said. “There are several ethical and political issues with this. They didn’t do enough research because this is a new technology, because there’s not enough research on harms and whether it’s accurate.”
Kashani said that based on his research for a book project, he fears the implementation of AI could reduce students’ cognitive abilities.
“I do not use it,” Kashani said. “It doesn’t generate any original knowledge. Whatever knowledge is out there is human knowledge. Therefore, if there are any biases, it also goes into the system. We as faculty are concerned about both the harm on us and the students.”
Satvik Verma, the president of the AI Club at SFSU, regularly studies the effects of AI and the details that operate behind the scenes. Verma believes this new technology is something people have to come to terms with.
“I’ve held some workshops myself as well, just getting people up to speed because AI is the buzzword everybody has been using,” Verma said. “It’s probably what everyone will be working on in maybe five to 10 years.”
Mark Kim, a computer science graduate student at SFSU, researches the potential drawbacks of using AI in the education system when it is not used appropriately.
“I’m a firm believer in the right tool for the right task and I think that in industry and in general AI is overused,” Kim said. “I think that they’re trying to apply it in places that may not be appropriate and part of the reason why I believe that is because it’s expensive. Incorporating AI into everything is really, really expensive.”
Kim is currently working on an early-stage project for transfer students through Microsoft, whose software SFSU is using for generative AI. The project will equip transfer students with knowledge of which courses they need to take, which courses are eligible for credit and what their academic track looks like from the time they start at SFSU to when they graduate.
Staff and faculty at SFSU are concerned about the ethics of introducing a corporation’s technology to vulnerable populations.
“It’s all about money. These corporations don’t really care about the ethics and politics of this,” Kashani said. “They care about the politics of it as far as how they can maximize profit.”