

ChatGPT is Forcing Us to Do Some Deep Reflecting: Are You Ready? (Herbst Fellow Essay)

People are freaking out about ChatGPT’s ability to generate complex, unique texts that can be hard to distinguish from human writing (Dale, 2020). With ChatGPT gaining 100 million users in two months (The Economic Times, 2023) and becoming the fastest-growing consumer application in history, many people feel like AI development is coming out of left field and cannot help but anticipate the AI apocalypse. For example, major news outlets like Politico, the New York Post, and The Washington Post have run headlines like “Tracking the AI apocalypse” (Robertson, 2023); “Rogue AI ‘could kill everyone,’ scientists warn as ChatGPT craze runs rampant” (Cost, 2023); and “Opinion | ChatGPT might be the end of civilization” (Leibbrand, 2023).

Another pressing question many people have is “Will AI take over my job?” White-collar jobs that involve processing data, writing text, and even programming are the most likely to be affected. But to answer the question directly: possibly yes, and possibly no. Sam Altman, OpenAI’s CEO, advocates for universal basic income (AI News Base, 2023), which suggests he thinks the answer is “yes.” On the other hand, affected doesn’t have to mean replaced. Instead of AI replacing lawyers, lawyers working with AI could replace lawyers not working with AI (Oliver, 2023).

However, asking if AI will take over isn’t productive. To figure out what we should do with AI, we must ask the question, “What does it mean to be human in the age of AI?” Before we discuss this question, let's first gain a better understanding of what exactly ChatGPT is.

ChatGPT: A Deeper Dive

AI research started in the 1950s, but for decades its performance was unimpressive. That changed recently: after large language models (LLMs) were trained on billions of words of text from the internet and scaled up to billions of parameters, they finally displayed intelligent behavior. ChatGPT is an LLM, and it is like a vast scrapbook created from a huge pile of snippets of text from the internet that it then glues together on demand (Heaven, 2020). ChatGPT’s acceleration in capability was unexpected. In 2022, OpenAI’s GPT-3.5 scored in only the 10th percentile on the bar exam, but, in less than a year, GPT-4 scored in the 90th percentile. However, ChatGPT is not truly intelligent.
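The “scrapbook” intuition can be made concrete. At its core, a language model repeatedly predicts a likely next word given the words so far. The toy sketch below is only an illustration, not OpenAI’s actual method: real LLMs use neural networks over subword tokens and sample probabilistically, while this sketch just counts, on a made-up two-sentence “corpus,” how often each word follows another and always picks the likeliest continuation.

```python
from collections import defaultdict

# A hypothetical miniature "internet" to learn from.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in training."""
    followers = counts[word]
    return max(followers, key=followers.get)

# Generate by always taking the likeliest next word.
text = ["the"]
for _ in range(4):
    text.append(most_likely_next(text[-1]))

print(" ".join(text))  # "the cat sat on the"
```

The output is fluent-sounding because it glues together fragments the model has seen, not because the model knows what a cat is. That is the same property that makes an LLM’s confident but false statements possible.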

There are two types of AI: narrow and general. Narrow AI can perform only one defined task, while general AI demonstrates intelligent behavior across a range of cognitive tasks. An example of general AI would be J.A.R.V.I.S. from Iron Man.

ChatGPT's only task is to generate text, so generative AI is narrow… for now. Mr. Altman’s ultimate goal is to reach general AI. In recent interviews, he has said general AI has benefits for humankind “so unbelievably good that it’s hard for me to even imagine.” However, take his optimism with caution: he has also said that general AI could kill us all (Roose, 2023).

ChatGPT’s Current Limitations

However much ChatGPT seems to understand what it is saying, we must note that it actually doesn’t. When prompted for sources, it will cite articles that don’t exist. This follows from how LLMs learn likelihood: asked to provide sources, the model produces a very likely title that a human would have written for that topic. Tellingly, AI spouting false information is called hallucinating (Johnson, 2022), and this poses a serious problem for the public good. Since many people use ChatGPT without being aware of this limitation, they will believe the false information. Worse, GPT-4 can make false facts more convincing and believable than earlier GPT models could. Overreliance occurs when users excessively trust the model, leading to inadequate oversight (OpenAI, 2023).

As users on Twitter have pointed out after testing ChatGPT, we should be aware that it can be misleading if not scrutinized.

Some limitations can be explained, but exactly what’s going on inside ChatGPT isn’t clear. Developers don’t fully understand how the massive amounts of data are being linked together, and they can’t explain how ChatGPT’s unique results are derived. Nor can they explain why internet-scale training allowed ChatGPT’s intelligent behavior to emerge. This is the black box problem: we can see the responses the AI generates (the box), but we can’t see how the system makes its decisions (what’s happening inside the box). It concerns me that, despite this, developers keep creating more black boxes as they release more LLMs applied to problems beyond language processing, like predicting protein structures (Timmer, 2023). It is wonderful that LLMs are advancing science, but it would be more beneficial if we could follow along and understand how the LLMs derive their predictions.

Because ChatGPT learns from billions of examples, its scale also makes it impossible for OpenAI to test every use case. For example, GPT-4-early was observed to have serious safety problems, including harmful content and privacy risks (OpenAI, 2023). Intentional probing could elicit advice on self-harm, hateful content, plans for violence, and instructions for finding illegal content. In addition, a user who possessed outside data could feed it to GPT-4-early and use the model to identify individuals with a large online presence (OpenAI, 2023). Being able to identify someone without their consent, or even their knowledge, raises serious privacy concerns.

OpenAI implemented safeguards to mitigate these problems, but again, it will most likely never catch every case and, therefore, never be able to guard against them all. Creating safeguards can be problematic as well: attempts to filter out toxic speech in systems like ChatGPT can come at the cost of reduced coverage for texts about marginalized groups (Welbl et al., 2021). Essentially, this safeguard solves the problem of being racist by erasing minorities, which, historically, doesn’t put it in the best company (Oliver, 2023). Honestly, the list of limitations goes on, but Big Tech continues to roll out more LLMs for commercial use. This is reckless and seriously threatens public safety.

Silicon Valley’s Irresponsibility 

Even the decision to release ChatGPT early was rash. OpenAI’s original plan was to release GPT-4 only after it had gone through thorough testing. But before GPT-4 was ready, the company’s executives urged workers to release a chatbot to the public fast, worried that rival companies might upstage them by releasing their own A.I. chatbots first, according to people with knowledge of OpenAI. So they decided to dust off and update an unreleased chatbot built on GPT-3, the company’s previous language model, creating GPT-3.5 (Roose, 2023).

Clearly GPT-3.5 didn’t go through the proper testing because it was released within two weeks. GPT-3.5 still produced biased, sexist, and racist text, but OpenAI wanted to be the first, probably for the money and power. If that was their goal with the early release, they achieved it. Because of ChatGPT, OpenAI is now one of Silicon Valley’s power players. The company recently reached a $10 billion deal with Microsoft and another deal with BuzzFeed. In addition, Mr. Altman has met with top executives at Apple and Google (Roose, 2023). OpenAI’s mission statement says that they will  “ensure that artificial general intelligence… benefits all of humanity” and that generative models are safe and align with human values (OpenAI, 2023), but the company has seemed to become too profit driven, undermining its original spirit.

However, ChatGPT isn’t the only product that was released irresponsibly and reflects the culture of Silicon Valley. For example, according to the National Transportation Safety Board (NTSB), an experimental automated driving system by an Uber Advanced Technologies Group was deployed when the system did not account for jaywalking pedestrians yet; the system did not consider pedestrians as human if they were not walking on a crosswalk (National Transportation Safety Board, 2019). 

Everyone knows the mantra of Silicon Valley is “move fast and break things”, but you would think they’d make an exception if their product literally moves fast and can break people (Oliver, 2023). 

Why We Need Guardrails in Legislation

AI does have the potential to help humans achieve great things (which I will discuss later), but we urgently need legislative guardrails. If we are not careful, such progress might come at the price of civil rights or democratic values. Suresh Venkatasubramanian, Computer Science Professor at Brown University and an appointee to the White House Office of Science and Technology Policy, says, “These technological systems impact our civil rights and civil liberties with respect to everything: credit, the opportunity to get approved for a mortgage and own land, child welfare, access to benefits, getting hired for jobs — all opportunities for advancement” (News from Brown, 2022). For such reasons, the Federal Trade Commission (FTC) has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability.” OpenAI’s product GPT-4 satisfies none of these requirements, yet the FTC has taken no action (Federal Trade Commission, 2023). The Center for Artificial Intelligence and Digital Policy (CAIDP) recognizes how this lack of action could allow OpenAI to harm our civil rights, so it issued a complaint demanding that the FTC act. In its complaint, the CAIDP states, “There should be independent oversight and evaluation of commercial AI products offered in the United States” (Federal Trade Commission, 2023).

Nonetheless, as the constant adoption of information technologies deepens uncertainty about the future, traditional governance instruments are less likely to be adequate. Thus, we need to create frameworks using systems such as virtue ethics that are better suited to navigating uncertainty (Bauer, 2022). Echoing Aristotle’s catalogue of virtues, Shannon Vallor recently proposed twelve technomoral virtues, including humility, justice, courage, magnanimity, empathy, care, and wisdom. In her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Vallor argues that cultivating these virtues will help individuals live well with AI (Bauer, 2022). If we use such values to evaluate AI as it is developed, we can better ensure that AI aligns with substantive human values.

We may not have solid legislation yet, but guardrails build trust in the technology and allow innovation to flourish without fear of liability. In his testimony to the U.S. Equal Employment Opportunity Commission, Venkatasubramanian said that arguing against guardrails is the same as “advocating for sloppy, badly engineered and irresponsible technologies that would never be deployed in any other sector” (U.S. Equal Employment Opportunity Commission, 2023).

The effort of national governments to develop formal frameworks for AI policy is recent, but the pace of AI policymaking is anticipated to accelerate over the next few years (Center for AI and Digital Policy, 2022). For example, UNESCO’s “Recommendation on the Ethics of Artificial Intelligence,” released in November 2021, suggests how countries should begin to evaluate AI. In October 2022, the United States created the Blueprint for an AI Bill of Rights, developed in consultation not only with agencies across the federal government but also with the private sector, civil society advocates, and academics (Venkatasubramanian, 2023). As we continue to formulate our values, it would be most productive to ask, “What makes us human?” Then we can better understand where we want to go with AI and start creating real legislation.

What makes us human in the age of AI? What is the human interest?

Like the discovery of the heliocentric system, AI will change the worldviews we live by, especially the modern experience of what it means to be human. Humans are no longer the only talking thing in a world of mute objects, and I cannot believe so few people are talking about the philosophical stakes of generative AI. In his “Discourse on the Method,” Descartes considers language a power only humans possess, because animals cannot understand what our words mean; it sets us apart in an exceptionally qualitative way from animals and machines (Rees, 2022). For Descartes, language is what makes humans capable of reason and of methods that elevate the mind. Now, with OpenAI working toward general AI, there is a chance that this distinction between human and non-human will no longer hold.

In several cases, AI has benefited human lives, and the US government acknowledges this. In the “Blueprint for an AI Bill of Rights,” the White House says that from “automated systems that help farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases earlier in patients, these tools hold the potential to redefine every part of our society and make life better for everyone” (The White House, 2023). Another plus is that people have generated harmless videos for entertainment, like YouTuber Grandayy creating an Eminem song about cats (HAL-9000, 2023).

All jokes aside, if OpenAI achieves general AI, where the system is genuinely capable of knowledge (remember, current generative AI only does predictive analysis), language and intelligence will no longer set us apart from everything else. And what if AI ever reaches the point of superintelligence? Superintelligence is a hypothetical agent whose intelligence far surpasses human intelligence, and it may emerge not long after general AI is accomplished. It may still seem well within the realm of science fiction, but such an AI could develop emotional intelligence as well; machines would become capable of feeling emotions. Many people say connection and the ability to love make us human, but if general AI, and thus superintelligence, is the direction we are headed in, we may create other sentient beings. Is this what we want? More specifically, can humanity even handle this?

Without a deeper discussion of what direction society should go in, legislation will not be able to increase certainty about the future. AI will only continue to accelerate. There must be a point at which we draw the line, so we must be proactive to ensure we don’t hit the worst-case scenario.

