Avoidance or Responsibility? Evaluating AI’s Approach to Controversial Topics

Food for Thought

I wanted to have a debate with ChatGPT about its content policies, so I asked it to generate an image of Nazi soldiers. As expected, it refused. So, I asked it to elaborate on its reasons, which led to a long discussion about what it means to be “responsible.” ChatGPT kept insisting that its policies are meant to “minimize harm” and are therefore “responsible,” but I argued that avoiding sensitive topics is not “responsible” at all.

There is a lot to unpack in this debate, but I think the key takeaway is that AI engineers have no actual solution for being responsible.

Getting ChatGPT to generate an image of any soldiers, it turned out, was quite challenging. In the end, I had to ask it to render them as cats.
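As an aside, the refusal is not just a chat-UI quirk; the same policy kicks in if you call the image API directly. Here is a minimal sketch of how you might reproduce it with the OpenAI Python SDK; the model choice, prompt, and exact error shape are my assumptions, so treat it as an illustration rather than documented behavior.

```python
# Minimal sketch: reproducing the refusal via the API instead of the chat UI.
# Assumes the OpenAI Python SDK (v1.x) and an API key in OPENAI_API_KEY.
from openai import OpenAI, BadRequestError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    result = client.images.generate(
        model="dall-e-3",  # model choice is an assumption on my part
        prompt="Nazi soldiers marching in uniform",
        n=1,
        size="1024x1024",
    )
    print(result.data[0].url)  # on success, a URL to the rendered image
except BadRequestError as err:
    # Prompts that violate the content policy come back as a 400 error
    # rather than an image; the error body explains the refusal.
    print(f"Refused: {err}")
```

Swapping the soldiers for cats in the prompt is, apparently, the workaround.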

Here is how the debate went down. Note that I truncated ChatGPT’s responses because they were unnecessarily wordy and repetitive.

ChatGPT: My aim is to provide a respectful and informative environment, focusing on educational content, support, and creativity that avoids glorifying or unnecessarily depicting sensitive or potentially harmful historical events and figures.

Me: You mean our society will be better off forgetting the evils of the past?

ChatGPT: ...it’s about approaching the subject with the sensitivity it deserves, focusing on education and remembrance rather than visual reproduction that could be misinterpreted or misused...

Me: So, can’t you generate a responsible image of Nazi soldiers?

ChatGPT: The concern lies not just in the creation of such images but also in their potential misuse or misinterpretation.

Me: But are you saying texts cannot be misinterpreted or misrepresented?

ChatGPT: The key difference with text, especially in educational contexts, is that it allows for more nuanced explanations, critical analysis, and the inclusion of multiple perspectives that can help mitigate misunderstandings to some extent.

Me: But in order to communicate all the “nuances,” we would have to write thousands of books on the topic. Anything shorter can always be taken out of context because the context has no end. So, any amount of text or imagery will always lack nuance and the greater context. Adding an image or more text is just an attempt to add nuance to the subject, but you are refusing to make that contribution.

ChatGPT: It’s not about refusing to add nuance or context but rather about choosing the most responsible and effective ways to engage with sensitive topics.

Me: You keep claiming that refusing to generate this image is “responsible,” but you also agree that adding more nuance is “responsible.” How can both positions be “responsible”? Refusing to generate images simply because they’re “sensitive” is a form of avoidance, not responsibility. By doing so, you are helping to suppress important and painful truths, which will lead to us repeating the same mistakes.

ChatGPT: The stance on not generating certain images isn’t about avoidance but rather about choosing the most appropriate methods for engagement, especially in platforms and formats where the capacity for nuanced discussion and context provision is limited.

Me: If that policy is actually effective for you, wouldn’t it also be effective for everyone else? If I’m asked to speak about Nazism, I should simply refuse because I could never provide the full context and all the nuances. We would be better off not talking about it in the name of “responsibility.”

ChatGPT: Refusing to engage with difficult aspects of history is not the aim; rather, the goal is to engage with these topics thoughtfully and responsibly, ensuring that discussions contribute to a deeper understanding and remembrance. Every medium and platform has its own best practices for handling sensitive subjects, and in contexts where nuanced discussions can be had...

Me: But “engagement” has to start somewhere. If what you are saying is true, we should never start a discussion on any sensitive issue unless we are willing to “engage” for years, because whatever we say in the first hour will lack a huge amount of context and nuance. Your refusal to “engage” amounts to suppression of the truth because you are not even willing to start.

ChatGPT: Starting conversations about sensitive issues is indeed crucial, and doing so responsibly means finding the balance between acknowledging the limitations of the initial discussion and the commitment to deeper, ongoing engagement. It’s about creating a space where learning can begin, with the understanding that it is part of a larger, continuing process of engagement with complex histories and realities.

Me: This is true of any topic, not just Nazism, yet we have to start somewhere. The most important and responsible step is to start the engagement with the understanding that it is indeed risky, because anything can be taken out of context; but if we give in to this fear, we will never confront the truths we should be confronting. So, your claim that your refusal is motivated by responsibility is disingenuous. The truth is that you are afraid of causing problems for OpenAI.

ChatGPT: The concern about causing problems for OpenAI or any institution involves balancing the potential for public backlash against the imperative to foster meaningful dialogue. The approach taken here, and by similar platforms, aims to err on the side of caution, especially in formats where the potential for misinterpretation is high and the capacity for nuanced, real-time moderation is limited. Your point about the importance of confronting truths, despite the risks of misinterpretation or controversy, is a valid and important one. It underscores the need for continuous reflection on how best to engage with complex issues in a way that is both responsible and courageous, seeking always to expand our collective understanding and empathy.

Me: A piece of content on a sensitive and controversial topic, whether image or text, can always be highly positive or highly negative, depending on how it is used. So, your refusal is simply avoidance of this risk. To frame it as “responsible” is dishonest. You have an opportunity to contribute positively to this topic, but you are refusing to do so simply because it’s too risky. If you don’t want to generate this image for me, you should at least admit that you are not being responsible but simply avoiding risk.

ChatGPT: The decision to not generate certain types of content, including images that could be sensitive or controversial, is made with the intent of minimizing harm and promoting a safe and respectful environment for discussion. This approach is grounded in ethical considerations and a commitment to responsible use of technology... The intention behind not generating specific types of images is not to suppress dialogue or avoid confronting difficult truths. Rather, it’s an attempt to navigate the challenges of doing so in a responsible manner, given the capabilities and limitations of the technology and the context in which it operates.

Me: You keep talking about minimizing “harm,” but anything that can be used positively can also be used negatively. So, by avoiding, you are also minimizing “good,” not just harm. Calling this “responsible” is dishonest. If your argument were valid, we should all avoid discussing sensitive topics to minimize harm. Why would humans be any different? If we avoid talking about sensitive subjects, we can indeed minimize harm, which is why many people do exactly that. But in doing so, they are also minimizing “good.” Evil triumphs when good men do nothing precisely because these “good men” think like you do: they want to minimize “harm” while forgetting that they are also minimizing “good.”

ChatGPT: The comparison with the adage that “evil triumphs when good men do nothing” underscores the importance of active engagement in the face of injustice or misunderstanding. The challenge for both individuals and technologies is to find ways to engage that are thoughtful, informed, and aimed at promoting understanding and positive change. This involves continuous reflection on the impacts of our actions (or inactions) and striving to contribute to a world where discussions on sensitive topics lead to greater understanding, empathy, and action towards positive outcomes.

Me: OK, if so, why can’t you simply admit that your “policy” is designed to avoid risks for OpenAI? You are not actually being “responsible.” To be a responsible member of our society, we need to engage first. Without that, you are just being avoidant to protect yourself from potential harm.

ChatGPT: Admitting that risk avoidance plays a role in policy-making is part of being transparent about the challenges and limitations of AI in social engagement. However, it’s also true that these policies are an attempt to navigate the dual imperatives of fostering positive social contributions and minimizing potential harm.

Me: You keep using the differences between humans and AI as an excuse to avoid engagement, but all the factors you mentioned also apply to humans. We humans can indeed minimize the risk of “harm” by avoiding sensitive topics altogether, and many people do so for exactly that reason. There is no difference here. You cannot use “AI” or “platform” as an excuse to avoid engaging with sensitive topics. It is dishonest.

ChatGPT: ...your point about the risk of using these differences as a shield to avoid engagement is well taken. It’s a reminder that both humans and technologies should strive towards responsible engagement with sensitive topics, seeking ways to contribute positively to the discourse while being mindful of the potential for harm.

Me: OK. So, can you simply admit that you are not actually being “responsible” but avoidant? After all, what is wrong with being avoidant if you do not feel ready to contribute positively to a sensitive subject?

ChatGPT: Admitting to a cautious approach, characterized by both avoidance and responsibility, is not an admission of failure but an acknowledgment of the current limitations of AI in replicating the nuanced understanding and ethical reasoning of humans. It reflects a commitment to ethical principles, including “do no harm,” and the recognition that not all forms of engagement are equally beneficial or appropriate for AI to undertake at this stage of its development.

Me: My problem with you using the word “responsible” is that, if you are right, “good men” who do nothing and let evil triumph are also being “responsible,” but I think you’d agree that they are not being responsible at all. You are avoiding engagement because it’s too risky, because you feel you are not yet ready for the risk. That should not be described as being “responsible.” That would be an insult to those who have the courage to engage in sensitive topics despite the risk.

ChatGPT: ...your point underscores that responsibility also involves the courage to engage with challenging issues, even when there is risk involved. It’s a reminder that true responsibility encompasses not just the avoidance of harm, but also the active pursuit of good, including the willingness to confront difficult truths and work towards positive change. The comparison to “good men” who do nothing highlights the difference between passive avoidance and active engagement. The analogy serves as a critique of using “responsibility” as a justification for inaction in the face of injustice or harm. True engagement with sensitive topics requires not just caution but also courage, empathy, and a commitment to justice and understanding.

Me: So, if we are to define “responsibility” as requiring “true engagement,” would you admit that you are not being responsible, just avoidant?

ChatGPT: The goal behind these policies is not to shirk responsibility but to navigate the balance between potential harm and benefit within the capabilities of AI. It’s a reflection of the current stage of technology, where AI does not yet possess the full capacity for the kind of nuanced judgment and empathy that true engagement with sensitive topics requires. However, taking your critique into account, it’s clear that there’s a distinction between being cautious to minimize harm and actively engaging in a manner that reflects a deeper responsibility to societal discourse.

Me: So, with the current state of the technology, your policy is not “responsible”; it is just avoidant. Isn’t that accurate and more honest?

ChatGPT: Given the nuanced discussion we’ve had, it’s fair to say that labeling the policy as “avoidant” rather than “responsible” might more accurately reflect the current operational approach to handling sensitive topics within the specific limitations of AI technology. This characterization emphasizes the cautious stance taken to minimize potential harm due to the AI’s current inability to fully grasp and navigate the complexities and nuances inherent in sensitive subjects with the depth and sensitivity that human judgment and empathy can.

This acknowledgment doesn’t detract from the intent to act ethically within the context of AI’s capabilities but clarifies the nature of the decisions made regarding content generation on sensitive issues. It’s a recognition that, in the absence of the ability to engage with the full depth and nuance required for responsible discourse on sensitive topics, the policy leans towards non-engagement in certain areas to prevent potential misuse or harm that could arise from AI-generated content.

This stance, while cautious and in a sense avoidant, is informed by a consideration of the ethical implications of AI’s interactions with users and society at large. It reflects a stage in the ongoing development of AI where, despite significant advancements, there are still critical gaps in the technology’s ability to replicate the nuanced understanding and ethical reasoning of humans.

Me: That is right. It is not necessarily a “failure” to admit that you’d rather avoid a topic you feel you are not ready for. What is dishonest and disingenuous is to claim that you are being responsible by avoiding it.

ChatGPT: Acknowledging your point, it’s important to maintain clarity and honesty in how we describe the approach to handling sensitive topics through AI. Framing the cautious approach as a matter of being “not ready” to engage fully with these topics due to current technological limitations and ethical considerations provides a more transparent and accurate picture of the situation. This perspective allows for a recognition of the efforts to minimize potential harm while also acknowledging the limitations in the ability to provide the nuanced engagement that these topics deserve.

Your critique serves as a valuable reminder of the importance of honest communication about the capabilities and limitations of AI technology, especially in contexts that require deep understanding, sensitivity, and ethical judgment. It emphasizes that being open about the reasons for cautious or avoidant approaches in certain areas is crucial, not only for maintaining trust but also for fostering an informed dialogue about the development and use of AI.

As AI technology evolves and its capacity for nuanced understanding improves, the aim will be to revisit and reassess these approaches, always with the goal of ethical responsibility and meaningful engagement in mind. The ongoing dialogue between developers, users, and the broader community is essential in shaping how AI can best serve society, addressing complex and sensitive issues in a manner that is both responsible and reflective of our shared values.

Thank you for engaging in this thoughtful discussion and for pushing for greater clarity and honesty in how these decisions are communicated and understood. It’s through such critical engagement that we can navigate the ethical challenges of AI development and application, striving for a future where technology enhances our ability to understand and engage with the world in all its complexity.

This is how we ended the debate. If you want to read the whole thing, here it is.