I’ve seen other people explain why “everyone is entitled to their own opinion” and “everyone can believe whatever they want to believe” are bad arguments. But I continue to see people use these arguments, so I’m doing it again. And because I’ve seen it come up recently, I want to apply it to people who say that they “just don’t believe in gay marriage.”
A legal designation isn’t even the type of thing you can believe or not believe in to begin with, but we use the word “belief” in very bizarre ways in the 21st century. Let me clear that up first.
A belief is an acceptance that something is true or that it exists. You can’t really say “I don’t believe in gay marriage,” because marriage is a social convention that already exists, and “gay marriage” is just extending that convention to gay people, who also already exist.
An opinion is a judgment that is not necessarily based on fact or knowledge. Opinions are not conclusive and cannot ground an argument. Opinions are things like matters of taste. Here’s an example: I don’t like Jello. That’s an opinion. Jello is not objectively awful. It’s not wrong for other people to like Jello. It’s just not to my taste.
If your response to neo-Nazi rallies or domestic terror attacks is to say something like “love conquers hate” or “hate never wins,” please pause and challenge yourself to dig a little deeper into what those words mean.
If you criticize people who praise Nazi-punching, antifa, or black bloc tactics for defensive acts of violence, saying things like “violence is never the answer” or “hate is hate,” please stop and consider more nuance.
And if you are a white person and have the nerve to point out to black people that Martin Luther King Jr. promoted nonviolence, for the love of God, please just stop.
These seemingly well-meaning slogans don’t address the severity and prevalence of everything that falls under “hate” and “violence.”
“Post-truth.” “Post-fact.” “Fake news.”
These are terms I hear flying around lately, and I feel obligated to step in.
Talk to this guy about formal and objective reality.
There are and have been philosophical debates on truth and reality for centuries. Is truth only what is verifiable? Is it correspondence? Is it coherence? Is reality only the material reality through which the natural sciences are practiced? Is reality always already filtered through a subjective, phenomenological perspective?
Truth and reality are messy concepts, because truth and reality are created, defined, and evaluated by human-made standards.
Truth isn’t one thing. The truth of an event isn’t “what actually happened,” because when anything happens to a person, that particular experience is happening to someone with a perspective. And with any perspective comes bias: bias from the actual limitations of human sensing, pattern recognition, and comprehension, but also bias from socialized beliefs and bias from personal agendas, all of it.
You are biased.
Disclaimer: There are spoilers for Captain America: Civil War in what follows. I also realize I’m being a huge nerd about this, and it is necessary to suspend disbelief to watch superhero movies. Also, I’m a philosopher, not a political scientist, so my understanding of U.N. procedures is rudimentary.
The fictional UN session in Vienna gone awry.
It was curious to me that in Captain America: Civil War the writers decided to use an existing organization, the United Nations, instead of continuing to use fictional groups like the World Security Council, S.H.I.E.L.D., etc.
As I understand it, the purpose of the UN is to do things like mediate and maintain world peace, promote human rights, and protect the environment. So, ideally, it is in the business of promoting humanitarianism.
The UN isn’t the world police, and there’s no such thing as a world army. The UN Security Council can use armed coalition forces to maintain peace and security, but those forces are voluntarily provided by nation-states (and the UN can’t force a nation to send troops). The UN also has an International Court of Justice, but it only hears cases brought by nation-states against other nation-states (and its jurisdiction over them depends largely on their consent).
So, to have a UN panel that would determine when a group of superheroes would–what? be used as “peacekeepers”?–is dubious to begin with.
Why do people do such horrible things to each other? When will people stop fighting? When will the threat of terrorism no longer exist?
I see the laments every time a terrorist attack happens in a Western nation, and my own response is: why are these the questions we ask?
What follows isn’t criticism, it isn’t an argument, it’s just a reflection. It’s just another way to ask why.
(Would love to credit this properly.)
“Baby Hitler” was trending on Twitter on Friday. After investigating, I found that New York Times Magazine had posed this question:
Dylan Matthews wrote this response, “The philosophical problem of killing Baby Hitler, explained,” over at vox.com. He takes up the classical responses to posing such a hypothetical problem, and he makes good points about time travel and consequentialism. I want to go further and explore the only thing I’ve ever gotten out of such thought experiments–the further affirmation that philosophy, and ethics in particular, doesn’t (and shouldn’t) happen in a vacuum.
Wibbly wobbly timey wimey stuff.
Granted, it can be kind of fun to think about hypothetical situations, especially about time travel. And maybe thought experiments reveal something about our intuitions. Ethical thought experiments can show the basic idea behind consequentialism, and perhaps they can make you reflect on how you would act differently if faced with an ethical dilemma. The problem, of course, is that you are never going to be in a situation where there are five people tied to a trolley track and your mother tied to another. Just like you are never going to be able to go back in time and kill baby Hitler.
The theme of artificial intelligence keeps cropping up in things I read and watch, enough that I’m compelled to write a post about it. In the last week and a half:
- I saw Ex Machina. It’s a film wherein a young computer programmer goes to spend a week at the estate of an eccentric genius (the creator of the world’s most used search engine). There, he participates in a Turing test with a robot programmed to look and act like a human woman, resulting in unfortunate consequences.
- I saw the full preview for Jurassic World. It appears as though scientists at Jurassic Park thought it would be a good idea to create a dinosaur with an advanced level of intelligence, the Indominus rex, that goes on a rampage and starts killing for sport.
- I saw Age of Ultron (twice). Ultron’s goal is to save the planet by destroying human life as we know it and replacing it with intelligent metal beings—the next evolution of man!
- I re-read an article in which Stephen Hawking and Elon Musk were quoted as being worried about A.I. and another in which Bill Gates echoed the sentiment.
I’m sorry, Dave.
Regarding that last item, Stephen Hawking said recently: “The development of full artificial intelligence could spell the end of the human race,” and, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Elon Musk referred to A.I. as “our biggest existential threat.” Bill Gates “[doesn’t] understand why some people are not concerned” with the threat superintelligence poses.
This all sounds an awful lot like a fear that robots will take over while human beings can still survive on Earth. Sure, the human species will eventually die out. We’re already intent on creating a planetary environment that won’t be able to sustain human life. But will we be superseded by artificial beings that we create?
The thing that I can’t quite wrap my mind around is that in all of this discourse, the common theme is that advanced artificial intelligence is somehow inherently dangerous: A.I. will be intent on harming humanity in some way, purposefully superseding it or even destroying it.