Three Signs You’re Dealing with Corporate Disinformation
What the political class calls disinformation has become an important topic of conversation lately. Modern networks of communication and old tricks of propaganda make it easy to spread sophisticated lies. In the hands of masters, the lies are woven, sometimes deliberately and sometimes subconsciously, into frames that transform people’s world views.
Disinformation doesn’t only happen in the political sphere. Big businesses and business/political alliances run disinformation campaigns, too. Right now, Republicans are arguing that investing according to environmental standards is bad business. Two decades ago, merger advocates were telling us our health care would get better if it was delivered by big companies; in retrospect, I think the drive was actually to get to lower (profitable) standards of care. Today, tech companies and others are telling us that AI is inevitable, the media is telling us it is frightening, and I’m left wondering what’s really driving the surge of AI news.
Here are the red flags that tell me a disinformation campaign is swirling:
2. What I think of as dead language. If you see language that strips the emotion out of human-to-human interactions, beware. Thus you have caregivers, like doctors and nurses, being turned into “providers.” I was a young reporter when hospital mergers first started happening; if I got a press release today that talked about great benefits for “providers,” I’d start looking around the corners for the big-money players. Watch for this as various AI apps are introduced.
3. The presence of fear. This week, we had the introduction of ChatGPT, an AI writing tool, which sent a lot of really good journalists and educators retreating into a terrified corner, wringing their hands about the pointlessness of their work. ChatGPT was both fluid and wildly wrong: because it developed its essays by scraping the Internet, it scraped up a lot of bad information.
The tech world stands to benefit mightily from turning AI into the next big thing, especially as software loses steam as a lever of outsized investment returns. These are the questions on my mind as I look at the possibility of another long, technology-led slide toward lower standards. Is the replacement of humans by AI really inevitable? How could it be, if humans are the ones inventing it? Is ChatGPT really that good? Not if you consider that it has no ethical standards. And then, why are we reframing the idea of quality to exclude ethics?