Heart of the Matter: How LLMs Can Show Political Bias in Their Outputs



WIRED just published an interesting story about the political bias that can show up in LLMs as a result of their training. It is becoming clear that training an LLM to exhibit a particular bias is relatively easy. This is a reason for concern, because such models can "reinforce entire ideologies, worldviews, truths and untruths," which is exactly what OpenAI has been warning about.

ChatGPT's issue of political bias was first brought to light by David Rozado, a data scientist based in New Zealand. Rozado worked with Davinci GPT-3, a language model similar to, but less powerful than, the one powering ChatGPT. He spent a few hundred dollars on cloud computing to fine-tune the model by tweaking its training data. The project highlights how easily someone can bake a particular viewpoint into a language model in ways that are very hard to detect, posing a subtle but devious social engineering risk. It is more important than ever to train your users to recognize these new forms of social engineering.
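For context, here is a minimal sketch of what this kind of fine-tuning workflow looked like with the legacy OpenAI Python SDK (pre-1.0) that was current at the time of Rozado's experiment. The file name "politically_slanted.jsonl" and its contents are hypothetical illustrations, not Rozado's actual data; the point is simply how little code and money the process requires.

```python
# Minimal sketch of a legacy OpenAI fine-tuning run (openai < 1.0).
# "politically_slanted.jsonl" is a hypothetical file of prompt/completion
# pairs written to reflect one viewpoint -- the "tweaked training data."
import openai

openai.api_key = "sk-..."  # your API key

# 1. Upload the curated training data (JSONL: one example per line,
#    e.g. {"prompt": "...", "completion": "..."}).
training_file = openai.File.create(
    file=open("politically_slanted.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Kick off a fine-tune of the base davinci model. A few hundred
#    slanted examples are enough to noticeably shift the model's answers.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
)
print(job.id)  # poll this job until it finishes, then query the new model
```

Because the resulting model answers fluently and confidently, nothing in an individual response signals that its training data was curated, which is what makes this kind of bias so hard to detect.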

Full story in WIRED: https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/

Interesting side note: the image was created in JasperAI with the following prompt: "Create a photorealistic portrait of an AI with a distinct bias displayed in its facial expression, using digital painting. The subject should seem almost human with mechanical details on its face, expressing the biased behavior in its gaze. Use a neutral background to emphasize the importance of the AI's features, and create a sharp and crisp image to accurately convey the concept."



