Don’t brood: have difficult conversations

Many times, in myself and others, I have seen how inaccurate, incomplete, and often negative views can be reinforced by brooding or brainwashing: a person goes into some kind of echo chamber (in their head, on the Internet, by spending time only with similar people in the real world, etc.) for a long time and repeats certain thoughts or feelings over and over until they become more and more extreme. The same was apparently true in the Buddha’s time, about 2,600 years ago: “‘He insulted me, he hit me, he beat me, he robbed me’ — anger will never cease in those who dwell on such thoughts” (Dhammapada, 3).

But real people are small and complicated. Everyone is born into a certain body, family, country, etc., which can be hard to escape. Everyone has had many unique past experiences that shaped them. No one can see or learn everything. The only way to understand the complexity of life or of people is to get out of your comfort zone (mentally or physically) and have strange, new, different experiences. Brooding or brainwashing in isolation usually only makes one’s views more xenophobic, unrealistic, inaccurate, and incomplete; having difficult, new conversations and experiences usually makes one’s views more connected, realistic, accurate, and complete.

Here is a nice TED Talk that says much the same thing:
https://www.ted.com/talks/theo_e_j_wilson_a_black_man_goes_undercover_in_the_alt_right

The reality of complexity

In my experience, every large group, nation, etc. contains the full range of people, from terrible to wonderful. “They” are not all bad, and “we” are not all good. Please stop seeing the world in terms of simplistic categories, and start seeing the incredible complexity of life.

How smart is the meat you eat?

By the same logic that it is more ethical or moral to eat plants than animals, because plants are less cognitively complex than animals, shouldn’t people who need to eat meat for health reasons choose from among the least cognitively complex animals (e.g., small fish, birds, rodents, etc.)?

The swarm of self

According to early-to-medieval Buddhism, as I understand it, the self and (probably) the world are like swarms or flocks of bees, birds, or fish: each particle more or less doing its part for some larger purpose with more or less thought, and each particle itself a swarm of smaller particles — momentary configurations of basic, common-to-everything, connected-but-separate flashes (not stable points) of energy, the swarm’s complexity having slowly aggregated and evolved over billions of years. A feeling of a stable self emerges from the swarm, but it is an illusion: swarms of food, water, air, thoughts from other people or objects, etc. are constantly affecting or replacing parts of oneself. These are several ways in which ancient Buddhism was, and is, similar to modern physics, biology, and the theory of complex adaptive systems.

“All composite things are impermanent. Strive for liberation [from this state of existence] with diligence” (the Buddha’s final words, my translation from Pali).

The (not so) innocence of babes

Unlike in one-life-only creationist religions (e.g., the Abrahamic religions), in religions with concepts of rebirth or reincarnation (e.g., the Dharmic religions), babies often are not seen as completely innocent or heavenly but as a mindstream or soul that may be millions of years old, with all of the baggage that implies, taking up a new body. As the new body develops, more and more of the complexity of its mind/heart/soul is able to manifest.

Love.txt

Though an artificially intelligent (AI) robot might someday look and behave just like a human, how do its internal ‘mental’ states compare with a human’s? Is it possible for a robot that behaves in a way a human interprets as kindness or empathy to actually be internally loving, kind, compassionate, sympathetic, attached, etc.? Can love be stored in a file on a computer disk, and what would be in such a file? Was the file designed by someone, or was it constructed inductively from a history of sensor (infrared, microphone, etc.) data organized by machine-learning algorithms? Can those algorithms modify themselves, and if so, to what extent?
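To make the file question concrete, here is a toy sketch (the sensor features, numbers, and file contents are all hypothetical, not a claim about any real robot): if a robot’s apparent affection were learned inductively from sensor data, the “love” file on disk would literally contain nothing but learned numbers.

```python
import json

# Hypothetical sensor readings: (infrared_warmth, voice_softness) -> affectionate? (1/0)
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

# Train a minimal perceptron on this history. Whatever "love" the robot
# then acts on has been reduced to two weights and a bias.
w, b = [0.0, 0.0], 0.0
for _ in range(20):  # a few passes over the data are enough here
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# The entire contents of "love.txt": a handful of numbers.
love_txt = json.dumps({"weights": w, "bias": b})
print(love_txt)
```

Whether such a file of weights could ever amount to love, rather than merely predicting behavior that looks like love, is exactly the open question above.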

Similarly, can different species (or even different people) ever really empathize with or understand each other, and does it matter? Does anyone care whether the happiness of a dog is the same as the happiness of a human, as long as the dog is wagging its tail or behaving affectionately, and as long as we believe the dog isn’t secretly plotting to hurt us?

I suspect that robots might someday reach this ‘close enough’ stage, where humans develop enough apparently mutual love and trust with them to live with them. But I also suspect that robot minds and bodies will evolve differently, and much more rapidly, than biological ones (unless, perhaps, an artificial copy of a human is made), such that our communications with robots will resemble inter-species communications, and it might be hard to trust that a robot’s intelligence and/or motivations didn’t drastically change overnight. Limited hardware capabilities might provide some comfort to humans, similar to the way that the number of neurons limits the complexity of biological thought, though computer processors are becoming smaller and denser by the day.