
I spent some years in the early ’70s in Berkeley, associated with a commune. It was not exactly a crazy cult, but it had peripheral contact with some of the more fanatic tribes of “the movement.” I was more amused than intrigued.
But I did understand how anyone can start with a logical (though usually false or warped) perspective and then use anything that happens as support, automatically dismissing or hating anyone who interprets or evaluates things differently.
Off the deep end, this leads to Manson or Hitler. Avoiding that is the best case for promotion of free speech, in the fond hope that two extremes can somehow synthesize in the middle, even though that rarely happens.
Such Artificial Ignorance, using “facts” to support a previously determined outlook, is what we should most fear from AI. Humans, after all, are complex and can often change their minds. Machines, not so much.
But the more immediate danger is the artificial ignorance of the various tribes formed by background, media, and social computerization. Good examples are cable news, newspaper editorial pages, and the various organizations on one mission or another. They hear what they want and make anything fit their preconceived notions.
The saving grace has been that fanatic obsessions are often (not always) taken over by exploiting cynics, another human trait missing in silicon circuits.
