Bioethics Forum Essay
My Friend ChatGPT: Fun to Talk With, Not Yet to Be Trusted
How long does it take to trust someone?
More specifically, how much time do you have to spend with someone to trust them and agree to go on a weeklong holiday in their native city, where you might be rendered vulnerable in many ways, revealing your habits, worldview, innermost thoughts, weaknesses, and cringey stories from your past?
This kind of trust rarely happens overnight. Now suppose this individual also records all conversations, has the tools to analyze your thinking pattern and reasoning, and might also share this information with others. What if I told you this new chatty friend is ChatGPT?
This new member of the technology family needed only five days to get one million of us chatting with it. For perspective, it took Instagram 2.5 months, Facebook 10 months, and Twitter 24 months to reach one million users. ChatGPT’s meteoric rise in mainstream popularity is unprecedented and impressive, but let’s not get too friendly with it too quickly.
Instead of adopting generative AI in vital sectors such as education, research, health care, and law, we should first negotiate with its developers to ensure they are not going to do what companies such as Meta (Facebook) did to their users.
Back in 2006, when Facebook opened to the public, many were thrilled by the prospect of using this new technology to connect with a trusted network of friends and family, as well as with the wider world, envisioning it as our first global utopian village. Instead, Facebook capitalized on our personal information and sold it to advertising companies. A few years down the line, Facebook data helped get Brexit off the ground and enabled digital gerrymandering of the political landscape. As recent uses of data-driven research show, such efforts not only amplify existing biases but rarely benefit communities; instead, they make users vulnerable.
Now back to where we started. Imagine how your weeklong trip with this new friend would go if they also knew the types of questions you typically ask and your thinking patterns and happened to have access to all kinds of knowledge sources. It is fair to assume they would have the upper hand in almost any conversation, thereby rendering you vulnerable. But, more importantly, they could make educated guesses about how to convince you or (in the case of travelling with a group) even manufacture consent.
While technologies like Facebook, Twitter, Instagram, and TikTok revamped our media landscape in ways we could never have predicted, ChatGPT is catapulting us to a whole new information universe. Most web services that collect users’ data trace and record visited webpages or sequences of visits to detect what users think about or what combination of prompts brings them to a webpage. ChatGPT engages in conversation with users and can detect how they think.
Instead of capturing a static view of the factors that brought user X to webpage Y rather than webpage Z (ultimately to optimize search engines or improve online sales strategies), ChatGPT can detect individuals’ thinking patterns by engaging them in an ever-evolving, innocuous-seeming chat. The millions of user-generated questions and prompts provided to ChatGPT enable OpenAI to build big datasets and launch lucrative data analysis efforts that eventually categorize and compare users based on, for example, the complexity of their questions, specific thinking patterns, or whatever (biased) criteria customers desire.
Generative AI and its ability to decipher how each individual thinks, combined with its built-in rhetorical skillset, could take these harmful practices to a whole new level. Yet ChatGPT’s developer paints a rosy picture about the future, stating that artificial general intelligence technology “could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.”
Vague statements like this read more like marketing pitches, raising questions such as: abundance for whom? Turbocharging which sector of the global economy, controlled by whom? What kind of scientific knowledge, with which beneficiaries? Changing which limits of what possibility, where, when, and for whom? More importantly, what’s the catch? What are the trade-offs?
Right now, we don’t know the answer to any of these questions. And we won’t for quite some time.
So for the moment, let’s treat ChatGPT like a stranger who is not yet allowed in our homes or classrooms. And before we let it into our research, let’s adopt ethical guidelines for its use. It needs to show us good faith and prove itself not through its instrumental value (e.g., making us more efficient or writing texts and reports of all kinds) but by showing us what it will do with our trust and our information.
Mohammad Hosseini, PhD, is a postdoctoral scholar in the department of preventive medicine at Northwestern University’s Feinberg School of Medicine, a member of the Global Young Academy, and an associate editor of the journal Accountability in Research. @mhmd_hosseini
[ILLUSTRATION: Farzaneh Hosseini and Mahdi Fatehi]