As the sun began to set last Wednesday, two dozen students gathered in the J. Paul Leonard Library’s Events Room to hear about SF State’s newly created Ethical Artificial Intelligence curriculum, the first triple-discipline graduate certificate in the country that focuses on AI.
The certificate’s three creators took turns presenting what each of their departments offers students and how the three interact: Dragutin Petkovic, professor of computer science; Denise Kleinrichert, interim associate dean of the College of Business; and Carlos Montemayor of the philosophy department.
Petkovic, Kleinrichert and Montemayor came together to create the first certificate of its kind to explore the issues of ethics in relation to artificial intelligence.
“Other universities teach AI courses or have AI institutes and AI seminars but they don’t offer an interdisciplinary certificate,” Montemayor said. What sets this certificate apart is that it addresses ethical issues related to artificial intelligence, such as fairness and bias.
The promise of AI was that it could be free from bias, thus making more rational decisions than a person, decisions based on data rather than emotion. As Petkovic pointed out, that hasn’t been the case yet.
Last year, Amazon scrapped its AI-powered recruiting tool because it was biased against women. The tool was supposed to vet every resume it received and suggest the top candidates for each position. Instead, it penalized women because the majority of the resumes it had been trained on came from male applicants.
The certificate will consist of three classes plus a research and reflection paper to show potential employers what students have learned. Because the certificate is multidisciplinary and not everyone has taken computer science classes, students will have the option of taking “Philosophy and Current Applications of Artificial Intelligence” to satisfy the AI Technologies and Applications section of the program. None of the philosophy or business courses in the program have prerequisites.
“We’re asking the questions from the perspective of the producers and the manufacturers of the technology, but also from the perspective of society who will receive the benefits,” Montemayor explained. “They will be the main benefactors, but also the people at risk of whatever goes wrong with the technology.”
To illustrate the need for the program, Petkovic described a hypothetical medical business buying the right to use an AI that diagnoses patients with nearly 100% accuracy. His concern is that few, if any, people of color might have been included in the data used to develop the AI. An AI trained that way is not attuned to the needs of minority patients, which could harm those patients as well as open the business up to a lawsuit.
“The ethical part is that intelligence is supposed to be related to rationality and rationality to responsibility, and responsibility to political and larger norms in society that really frame labor, for example, and legal systems and the economy,” Montemayor said.
The program will focus not just on the effects AI will have on business but also on its effects on the workforce and society at large. For now, Petkovic is skeptical that an AI free from bias can be created. After all, what is ethical in one country may be unethical in another.
“So it’s not trivial, the deployment, and no one knows how to make adjustable AI,” Petkovic explained. “They give you a black box and say, ‘Oh, it works with 90% accuracy.’ What does that mean?”
While the U.S. uses AI mainly for automation and business, militaries around the world are looking into creating autonomous drones to replace live soldiers in combat. According to The New York Times, the Chinese government uses AI to comb through surveillance camera footage to find criminals and, more controversially, to track the country’s Uighur Muslim minority.
The problem with building an ethical AI is that coding is purely mathematical, and there are no equations that solve for fairness and diversity. The certificate is not focused on creating AI but on understanding how an AI can lack context for the data it uses to make decisions. Students will come out of the program with the skills to help guide businesses through the tricky future that artificial intelligence brings.
The graduate certificate program, which begins in the spring semester, is currently taking applications through the Cal State Apply website.