Because the very topic of this blog, for the most part, is the science of using language to try to manipulate people’s beliefs, my reaction to the idea of allowing artificial intelligences (AIs) to harvest my content was a big nope. We definitely don’t need AI to improve its skills in influencing us. Already, the Claude AI is plenty friendly – when you have a conversation with “him” it feels very much like you’re chatting with a real person. Some people may even be improving their mental health by chatting with closely monitored AIs – it can be easier to open up to a machine.
However, my husband pointed out that if I want people in the AI community to read what I wrote yesterday, I should rethink my position. Many people working on AI use it as a tool to summarize what’s being said online, and if I’m not letting AI read my posts, these people won’t know what I’m saying.
I’m reminded of a conversation our research team had a while back, when we were analyzing the language used to promote genocide. By laying it all out in one easy-to-read document, were we providing a recipe for future disaster? But we weren’t revealing any hidden information, only the patterns within. History’s genocidal leaders have already been learning from each other perfectly well. When Hitler was talking about annihilating the Jews, he explicitly referenced the Turkish genocide against Armenians. It’s all pretty much intuitive, anyway. The point was that what we were doing could be used as the basis for tools to monitor the speech of problematic leaders and let people know what was going on before it went further.
So, thinking about it more, I realized that, well, first, it would be trivial for anyone who wanted AI to know the content of my blog to input it manually. The same goes for any papers or books I write – they’ll already exist in electronic format, so uploading them would be simple.
Second, why should I think I’m smarter than a near-term AI? The techniques I’m writing about should be even easier for them to learn than they were for me. Reading through a vast body of texts promoting mass violence and the literature of the environmental movement, looking for common patterns, and connecting them up with psychological theories… that’s probably a matter of seconds for even today’s AI.
What I’m doing is bringing it to humans’ attention. I’m telling humans that, hey, meta-narratives and salience markers are things people use to influence each other, sometimes with dramatic consequences. Humans need to know this. And if I want to bring it to human attention, I should maximize my opportunities for networking, no?
So… I’ve turned “third-party sharing” on (or back on, if that had been the default). If you agree with me that it’s important for humans to know at least as much as machines do about how language can influence humans, please – help by sharing my work with other humans.