A good presentation on the topic of Pseudoscience and bullshit, titled:

Why Bother? The Nature of Pseudoscience, How to Fight It, and Why It Matters | Massimo Pigliucci

Why bother? I just checked on the Skeptical Inquirer website. I've written
166 articles for Skeptical Inquirer. The first one came out in 1999 and it was a
skeptical look as a biologist at the origin of life. So once you do this kind
of stuff for many years, like Jim and like Barry and like a lot of others have
done, at some point, inevitably, you do ask yourself, like, why? Why are we here?
Why bother considering especially how much nonsense and bullshit there is in
the world that doesn't seem to go down by a single iota? In fact, it seems to
multiply. So I'm gonna give you a skeptical pep talk, and since it's skeptical it ain't gonna be much of a pep talk. Deal with it, okay? All right.

So, first of all, let's start with the basics.
We heard last night that you shouldn't be referring to yourselves as skeptics, but as
skeptical inquirers.
Bad idea.
I mean, I like the general idea, but the thing is, as some people here know, because I've heard from some people about this, the word skeptic actually comes, not surprisingly, from the Greek: skeptikoi, skeptikos in the singular.
And it just means inquirer, which means that if you refer to yourself as a skeptical inquirer, like our beloved magazine, what you're saying is inquirer, inquirer.
You're like an ATM machine.
It's a little redundant.
But it's fine.
Most people don't know that, so you can fake it.
It's all right.
But the point is, it's right there in the word.
Skeptics do have this reputation for being, you know, nihilists who don't believe in anything, that sort of stuff.
But in fact, the very word means inquirer: somebody who, with as open a mind as is possible for a human being, actually looks into things, as opposed to just dismissing them out of hand.
And it's a very, very, very old tradition.
Skeptics have been around for at least two and a half millennia that we know of in the Western tradition.
This guy that you're looking at here, Marcus Tullius Cicero, was a Roman skeptic.
He actually used the word to refer to himself.
And he wrote a book on divination, which is basically on astrology and other ways to predict
the future.
And it is the first treatise on pseudoscience in the Western tradition.
It's more than 2,000 years old.
And he said at the beginning of that book, to hasten to give assent to something erroneous
is shameful in all things.
So Cicero, 2,000 years ago, not only wrote for the first time about what we today call pseudoscience; he connected the topic with ethics.
It's shameful to believe in things for which you don't have any evidence.
This is not just an epistemological problem, as the philosophers say.
It's actually an ethical one.
There are consequences in believing in bullshit.
That same debate on the ethics of belief flared up much later between these two gentlemen, the mathematician William Clifford and the philosopher William James.
Clifford famously said that it is wrong always, everywhere, and for anyone to believe anything on insufficient evidence.
Again, he's making a moral point here, an ethical point.
This isn't just a question of, oh, well, your opinion, my opinion, what's the harm in believing
this or that?
There is harm.
And he goes into why.
If you haven't read it, check out the original essay, The Ethics of Belief; it's available for free on the internet, and it makes a very compelling argument.
It's a compelling case, to which William James responded with an equally famous, probably even more famous, essay, The Will to Believe, in which he says: my first act of free will shall be to believe in free will.
In other words, I want to believe in bullshit, and who the hell are you to tell me not to?
I think you can tell which side I'm on in that debate.
Now let's take a look at what works against us.
What is it we're facing here, and why is it so difficult?
One reason why our task is so difficult, and seems never-ending, is something informally referred to as Brandolini's Law, named after Alberto Brandolini, an Italian engineer who published this famous thing on X, of all places.
So you know that it's reliable.
He called it the bullshit asymmetry principle: the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
If you have ever been in a debate with a ufologist, an astrologer, a creationist, I've done a
number of them with creationists, intelligent design people, and all that sort of stuff,
you know exactly what Brandolini is talking about, because they will rapid-fire at you
a number of claims that are clearly unsubstantiated.
But it will take you 10 or 15 minutes to respond to just one of them, and the audience, at the end of the debate, will come away with the impression that, yeah, sure, you've answered some of those things, but the rest you didn't even address, so obviously there must be something there.
So the Brandolini principle is something to take seriously.
If you ever decide to engage directly, as opposed to other ways, with a purveyor of
pseudoscience, beware.
The Brandolini principle says that you're going to have to work 10 times as much, and you're
probably not going to have the time to do it, which means you probably shouldn't do
it.
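To make the asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python; the numbers are my own illustrative assumptions, not figures from the talk.

```python
# Toy illustration of Brandolini's bullshit asymmetry principle.
# All numbers are assumptions chosen for illustration only.
claims = 10                # rapid-fire claims made in a debate
t_produce = 30             # seconds to assert one unsupported claim
t_refute = 10 * t_produce  # ~an order of magnitude more effort to refute it

print(f"Producing all {claims} claims: {claims * t_produce / 60:.0f} minutes")
print(f"Refuting all {claims} claims:  {claims * t_refute / 60:.0f} minutes")

# With a 15-minute rebuttal slot, you can properly answer only:
answerable = (15 * 60) // t_refute
print(f"Claims you can actually rebut in 15 minutes: {answerable} of {claims}")
```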
I stopped doing debates a long time ago.
I also recommend, if you're interested in fighting the good fight, this book that came out a few years ago; it became a bestseller, not just among philosophers.
It's called On Bullshit, by Harry Frankfurt, a recently deceased philosopher at Princeton.
And the book starts out with this sentence, one of the most salient features of our culture
is that there is so much bullshit.
I hear you, brother.
Now, the book launches into a philosophical analysis of the concept of bullshit, so much so that there is now actually a little cottage industry in philosophical circles called bullshitology, the study of bullshit.
And Frankfurt distinguishes between a bullshitter and a liar.
He says that liars have to be aware of, and to some extent even respectful of, the truth, because otherwise they're not effective liars.
But the bullshitter doesn't really care about the truth per se; it doesn't matter to him.
He has an agenda, political, ideological, whatever it is, or simply personal gain, and it is the agenda that matters.
Which means that what he or she, usually it's a he, will do is mix and match stuff more or less at random: some of it completely made up, some of it half true, some of it actually true.
He doesn't necessarily even know which is which, and he doesn't care, because that's not the point.
The point is to bullshit you into believing whatever serves the agenda he wants to pursue.
So the prevalence of bullshit and the Brandolini principle are already two strikes against us.
But we're not done, unfortunately.
What also works against us, and I'm sure you're aware of this, is the set of cognitive biases that come with the human mind.
The human mind is a product of evolution, regardless of what my creationist friends would argue.
It evolved over a long period of time, and all sorts of messy stuff happened along the way.
Natural selection kept what was working for whatever immediate purposes and threw the rest together in a more or less chaotic fashion.
The result is that the human brain is a beautiful piece of biological machinery, but it also has a lot of problems.
Some of these problems manifest themselves as what psychologists now refer to as cognitive biases.
What you're looking at here is a very nice graphic summary of all the major cognitive biases that people have discovered so far.
Cognitive biases are essentially heuristics: ways in which the brain automatically jumps to certain conclusions.
And these heuristics actually work a lot of the time, which is probably why they're there in the first place.
But sometimes they backfire.
Sometimes they lead us to think in directions that are not useful, that are not truthful.
And it's very difficult to get out of them.
The research shows that even when people are aware of a cognitive bias, they have a really hard time overcoming it.
The only sure way to fight a cognitive bias is to engage in critical conversation with somebody else, with other people pointing out to you: you know, what you're doing there is selecting the evidence, or something like that.
You may also be familiar with the concept of logical fallacies, which comes from philosophy.
These are especially the so-called informal logical fallacies: things like the straw man, the ad hominem, all that sort of stuff.
The nice thing is that there is actually a fairly good correspondence between logical fallacies, which have been described by philosophers since Aristotle, and cognitive biases as discovered more recently by psychologists.
It turns out that the reason people engage in logically fallacious thinking is their underlying cognitive biases.
So familiarizing yourself with these is actually a good idea.
One thing that I wouldn't suggest is a shortcut that I've seen even a few fellow skeptics using, which is pointing somebody to a fallacy and saying, this is what you're doing: oh, that's a straw man, that's an ad hominem, et cetera, et cetera.
It doesn't help.
If you just point people at things and tell them they're making a mistake, that's not going to convince anybody.
Not only that, but the other side has learned to play the same game, right?
So whenever I quote, let's say, a colleague, a physicist or an astronomer, somebody on the other side, meaning a purveyor of pseudoscience, will say: aha, you're committing the fallacy of appeal to authority.
But it's not a fallacy.
I'm citing an authority because these are the people who actually know what they're talking about.
When you have a toothache and you go to the dentist, you're not committing a fallacy.
You'd be stupid if you didn't go to the dentist.
And that's the problem with informal logical fallacies: whether something is fallacious depends on the situation.
Under certain conditions these moves are not fallacious at all; they actually work.
They're good heuristics, again, just like the cognitive biases.
So it's a little more complicated than just saying, oh, that's a cognitive bias, that's a logical fallacy.
Now, we are a reality-based community, presumably.
And the reality, as I've briefly summarized it for you, is not pretty: the likely outcome is that we're never going to actually win the fight that we are fighting.
It's fine.
That's the situation as it is.
So now relax and enjoy the mildly good news for the rest of this talk.
For one thing, for instance, you may have heard some social psychologists like Jonathan
Haidt claiming that the human brain basically almost always, if not always, engages in rationalization
rather than rational thinking.
If that were true, that would be a really big issue.
It would mean that whatever we're trying to do is completely doomed from the beginning.
Worse, it would mean that we ourselves, without realizing it, are actually engaging in rationalization, confabulation, and things like that.
Now one reason you shouldn't believe everything that Haidt writes is because if he's right,
that applies also to his own papers.
And I'm pretty sure he would argue that he doesn't.
Now he is, of course, correct that there is a lot of rationalizing going on in the human
mind.
We all do it to different degrees and extents.
But we do it under very specific and fairly well-understood circumstances, which means
that we can become aware of it when we do it ourselves or when others do it.
For instance, we do it when we do not have good quality, high-quality information.
If somebody asks you,
you know, why are you doing this, or why do you think this, and you don't actually have access to a good explanation, you'll make something up, no matter how ridiculous it may be, because you don't want to be embarrassed by not having an explanation, not having a reason for doing certain things.
So the counter to that is not to argue with the person that the explanation is incorrect, fallacious,
and all that sort of stuff.
It's just to give them better information that hopefully, over time, will bring them to better reasoning.
Another time when we do this kind of thing, when we rationalize, is when we engage in
motivated reasoning.
That is, when we have a more or less conscious or subconscious ideological agenda, and we
all do it.
Don't ever think that you don't have an ideological agenda, that everybody else does but you don't.
We all have certain things that we prefer.
We have a worldview.
We have a framework.
We have a way of thinking about things.
And that's where motivated reasoning comes in.
We look for things that support that way of looking at the world, and we tend to discard things that oppose it.
So we know when rationalization and confabulation happen, and what we have to do is make people aware of those circumstances, and better equipped to defend themselves against them.
The idea here is not to have a fight.
It's to have a conversation.
It's to help people.
Think of purveyors of pseudoscience, bad thinkers, and so on and so forth, as people who are, in a certain sense, sick.
They need our help.
They're not the enemy.
They're somebody in a situation that requires help.
Also, we do actually have a number of strategies that counter, at least to some extent, the
problems that I just outlined.
I know that rhetoric has a bad reputation, because it tends to be associated with lawyers and politicians.
But in fact, I think it's a good thing.
Rhetoric is a very old tradition.
It's about persuasion based on logic and evidence.
I don't know how many people here have ever picked up a book on rhetoric, but if you haven't, I strongly encourage you to pick up one of these two, or both.
One is called The Socratic Method by Ward Farnsworth.
It's a really fun book to actually read.
And the other one is The Ancient Art of Thinking for Yourself by Robin Reames.
They will actually teach you techniques for how to engage constructively with other people.
One of the things we know, for instance, is that when you explain something to people by lecturing them, essentially what I'm doing here, you're actually wasting your time unless the audience is already receptive.
I'm counting on you people to be receptive to what I'm saying.
People don't respond well to being lectured.
People don't respond well to being shown, by facts or by well-constructed arguments, that they're wrong. However, what you can do instead is to engage in
what is sometimes referred to as the Socratic method. If you actually read any
of the Socratic dialogues, what you'll see is that Socrates just asks questions.
And the point of asking these questions is to actually, literally, generate confusion in the other person. The term in Greek is aporia; it literally means confusion. Because the beginning of wisdom is when people are less certain of what they believe and begin to be confused about stuff and say: wait a minute, hold on, I thought I knew this thing. In most of the dialogues you will see that there is a recurring pattern. It's not that Socrates is just asking random questions; he's asking leading questions, the kind of questions where he wants to bring people in a certain direction. It's like the comedian Jordan Klepper. Have you ever seen his segments where he talks to people at political rallies? Watch him: he's using the Socratic method. What he does is start out with one question and have the person in front of him say, yeah, this is what I think.
And then three questions later, he asks another question.
The person gives an answer.
And then Jordan pauses and says, but wait a minute.
Five minutes ago, you said something
that seems to be in tension with what you just said.
You yourself told me two things that don't actually
go together.
And people pause.
Because Jordan has done just what Socrates did two and a half millennia ago:
he has generated cognitive dissonance.
And cognitive dissonance is very uncomfortable.
And people try to get out of it.
And that is the time when you actually want to walk out.
OK, now you deal with it.
Try it out.
It's actually fun.
Now another thing you hear often is:
bah, it's a well-known thing that intelligence and language
evolved according to the so-called Machiavellian hypothesis.
That's Machiavelli over there.
The Machiavellian hypothesis means that language and reason
actually evolved in order to manipulate other people
in a social environment.
That's why we have all these cognitive biases
and engage in logical fallacies, et cetera, et cetera.
That's why we rationalize all the time.
So who are we to fight against evolution?
Well, the reality is that it's not well-known at all.
Nobody knows why large brains, intelligence, and language
evolved.
If anybody tells you otherwise, that they have a very good idea about it, they're bullshitting you, even if they're scientists.
Nobody really knows, because it's hard to imagine what kind of evidence, fossil evidence for instance, would even count as a test of these kinds of hypotheses.
Biologists have come up with a number of hypotheses
for why we developed language and intelligence.
They're probably all true, or at least partially true.
There are probably many reasons.
Certainly, one reason is, in fact, to engage with other people in a social group to our advantage.
No question about it, because other people are part of our environment.
But another is to track truth, in the sense that getting the facts right actually helped us survive and reproduce, which is what natural selection cares about.
So no, it isn't well-known at all.
It's OK to keep thinking that rationality is a thing.
We don't just rationalize; we actually reason.
We are capable of using our brains to discover truth.
We are capable of doing all sorts of things.

And there is a new kid on the problematic block,
and that's so-called artificial intelligence.
It's artificial, for sure.
Whether it's intelligence is much more debatable.
Here's Noam Chomsky, for instance, who, a little bit provocatively, recently said to the New York Times: the human mind is not, like ChatGPT, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.
On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
Let's stop calling it artificial intelligence and call it what it is: plagiarism software.
It doesn't create anything; it copies existing works by existing artists and alters them enough to escape copyright laws.
It's the largest theft of property since Native American lands were taken by European settlers.
There may be a slight degree of hyperbole there, but I think Noam is onto something.
In fact, philosophers are particularly interested, of course, in the new phenomenon of ChatGPT and artificial intelligence in general.
They've been thinking and writing about artificial intelligence for a long time; there's a whole branch of philosophy of mind devoted to it.
One of the interesting articles, which I'd suggest you check out if you have the time and inclination, came out recently in the journal Ethics and Information Technology.
It's entitled ChatGPT is Bullshit.
The analysis by its three authors is that ChatGPT is, in fact, not directly a bullshitter, because it doesn't have any consciousness, any intentions; the intentions are on the programmers' side.
But it is a bullshit-generating machine.
Why?
Because if it doesn't know the answer to a question, it makes one up: the famous hallucinations, right?
That doesn't mean the answers are wrong all the time.
It just means you'd better check.
Never, ever ask ChatGPT something and then copy the answer straight into your paper, because that's a bad idea.
It could be an example of bullshit.
This is going to be a major problem, I think, moving forward.
People will come up with all sorts of deepfakes about UFOs.
They probably already have.
This will be an additional challenge, and we're just beginning to recognize its broad outlines.
So be ready, because this is already happening.
And we don't know where it's going.
One of the problems, for instance, that ChatGPT is already apparently running into is that it has already trained itself on most of what's out there on the internet, which creates a problem: where does the new information come from?
It's not as if we replace an entire internet's worth of information every year or something like that.
So it's already plateauing, in some sense.
But in fact, it's even worse than plateauing, because an increasing fraction of the information that you find on the internet is now itself produced by ChatGPT.
So it's now feeding on its own output: a bullshit generator that feeds into itself.
You can connect the dots, extrapolate from there, and see where that is going.
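As an aside, here is a minimal toy sketch of that feedback loop, under my own simplifying assumptions: the "model" is just a Gaussian fitted to the data and then retrained on its own samples. It illustrates the general idea, sometimes called model collapse, not how ChatGPT is actually trained.

```python
import numpy as np

# Toy sketch of a model repeatedly trained on its own output.
# Assumption: the "model" is a Gaussian fitted to the current data.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)   # original "human" data

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()          # fit the model to current data
    data = rng.normal(mu, sigma, size=50)        # next round trains on model output
    if generation % 25 == 0:
        print(f"generation {generation:3d}: fitted spread = {sigma:.3f}")

# In expectation the fitted spread shrinks a little at every refit,
# so over many generations the synthetic data loses its diversity.
```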
We are not going to have an easy time moving forward.
Now, I mentioned rhetoric a few minutes ago.
And some of you may be aware of this,
but I think it's worth bringing it up again for a minute.
So the first author in the Western tradition
to write about rhetoric was Aristotle, who was also
the founder of logic.
And Aristotle taught us that the way in which most people, most
of us, especially skeptics, scientists,
approach things in terms of rhetoric is wrong.
I've done it myself, and especially when I was younger.
And I've seen lots of my colleagues do it.
You walk on a stage, let's say for a debate, and your thinking is: I'm going to crush this, because the other guy doesn't know anything about science.
I'm a scientist.
I have the facts.
Surely, all I have to do is to explain them to the audience.
And by the end of the debate, they will carry me on their shoulders in triumph.
Or not.
So Aristotle said that rhetoric, of course,
is about persuasion, right?
And you don't persuade people just by arguments and facts.
This may come as a shocking surprise to you,
but that is the case.
So he says there are three components to persuasion.
And unfortunately, we tend to pay attention
to only one of them.
And in fact, a lot of us actively despise the third one.
And we should really consider changing our mind about it.
What we tend to do is to focus on the logos.
The logos means the evidence and the arguments.
And we certainly should make sure,
in terms of intellectual honesty,
that we do have the right arguments and facts.
Otherwise, we are ourselves the bullshitters, right?
So that's fine.
But that's a necessary, not sufficient condition
to even begin to persuade other people.
Then there are the other two components.
One is the ethos.
That is, you have to establish your credentials
in front of the audience.
Many of my colleagues, and again I've done the same myself, think that ethos is just: hey, I've got a PhD.
That's enough to establish my credentials with an audience.
Now, in fact, it sometimes even undermines you in front of an audience, depending on what the audience is.
Ethos means establishing a rapport with the audience that
makes them, reassures them that you are on their side,
that you can be trusted.
And that's much more difficult, especially
to do impromptu with an audience that doesn't know you.
OK.
And then finally, the really tricky one is the pathos.
Pathos means emotion.
And this is about connecting with your audience
at an emotional level.
Make them laugh.
You guys have been kind enough to laugh a couple of times already during this presentation.
Make some kind of connection.
Make them feel like you care about the same kinds of things that they care about.
And that's why they should be taking you seriously.
If you don't do all three, Aristotle says, you're doomed; you're not going to be successful.
Good luck doing just the logos.
So don't focus just on the logos.
A few final words, if you don't mind.
And that is, we as skeptics, of course,
tend to spend a lot of time trying to correct others,
to point out to others that they're
engaging in logical fallacies, motivated reasoning, blah, blah,
blah.
All that sort of stuff.
Great.
We also need occasionally to pay attention, however,
to cleaning our own house and doing things well ourselves.
This is just a matter of professional ethics,
so to speak.
What we need to do is to engage in what philosophers call virtue epistemology.
Epistemology, of course, is the study of truth, the study of knowledge.
Virtue means doing things properly, doing them in the right way.
So, for instance, we should actively be mindful of, and practice, the epistemic virtues that you see on the left, and try to stay away from the epistemic vices that you see on the right, as opposed to just telling other people: oh, you're engaging in this epistemic vice, and you should actually be virtuous.
So epistemic virtues include things like attentiveness,
benevolence, conscientiousness, creativity, curiosity,
and all the other ones that you see on that list.
Epistemic vices: close-mindedness, dishonesty, dogmatism.
Now, nobody wants to be told that they're dogmatic.
But I'm sure you know the other side does accuse us of dogmatism.
And sometimes that accusation may be on the mark, uncomfortably close to the mark.
So, you know, let's clean up the house before we go out there
and tell other people to clean up their house.
For instance, as an exercise in practical virtue epistemology, the next time you engage somebody in a discussion on, you know, X, Facebook, or wherever you get your pain from, try to pause and ask yourself the following questions.
Did I carefully consider my opponent's arguments
and not dismiss them out of hand?
Truly carefully consider them.
Did I interpret what my opponent said in the most charitable way possible
before mounting a response?
Or was I just waiting for him to finish so that I could post my response?
Did I seriously entertain the possibility that I may be wrong?
Or am I too blinded by my own preconceptions?
Am I an expert on this matter?
If not, did I consult experts,
or did I just conjure my own unfounded opinion out of thin air?
In other words, was I actually bullshitting?
Did I check the reliability of my sources
or just search online for whatever was convenient to throw at my opponent?
I mean, I love it, people sometimes do this: they take the topic, go on Google, and search for the topic plus skeptic, or the topic plus criticism.
And then they copy and paste the first link or two that come up, without reading them.
It's like, here, go and read that.
It's like, really?
That's not a conversation.
That's just wasting my time. And your time.
Having done my research, do I actually know what I'm talking about?
Or am I simply repeating someone else's opinion?
Plato, two and a half millennia ago, said that knowledge is justified true belief.
Justified. True. Belief.
Take belief: if you claim that you know something, you ought to believe it.
It would be really weird if you said, I know that the Earth goes around the sun, but I don't believe it.
You'd be intellectually schizophrenic.
As for true, well, I could talk to you about truth for another hour, but let's say: as true as we can reasonably assume or verify it to be.
So if I say, you know, Saturn has rings around it, there are a number of ways I can actually verify that this is true.
The tricky part is the first one: justified.
As it turns out, when people ask you, how do you know this, they are asking you for the justification of your belief.
Most of us will simply say: I read it somewhere.
Or: an expert told me.
Which means you don't actually know it.
It just means you've heard it.
It's hearsay, right?
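To make the three conditions explicit, here is a trivial sketch in Python; this is my own schematic rendering of the tripartite analysis, not something from the talk.

```python
def knows(is_true: bool, believes: bool, justified: bool) -> bool:
    """Plato's tripartite analysis: knowledge is justified true belief.

    All three conditions must hold; drop any one of them and the
    claim no longer counts as knowledge.
    """
    return is_true and believes and justified

# "Saturn has rings," read somewhere and never checked: true and believed,
# but not justified. On a strict reading it is hearsay, not knowledge.
print(knows(is_true=True, believes=True, justified=False))  # False
```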
So it turns out we actually all know far less than we think we know.
And that should be a source of humility, a reason to say: okay, let me begin by admitting my own ignorance.
Let me begin by saying, you know, I don't actually know as much as I think.
I have sources for a lot of what I think is true, but I don't actually know it myself.
That makes us more charitable toward other people as well.
Finally, I think we should never forget
Carl Sagan's modest suggestion.
One of the best books, in my opinion,
that Sagan ever put together
was The Demon Haunted World.
And the subtitle of that book was
Science as a Candle in the Dark.
The general metaphor was that science and reason are like a candle in the dark, surrounded by darkness.
That means that the darkness is never going to go away.
Our job is not to expand the candle
so that everything is lit by the light of truth,
because that's not going to happen.
That's unrealistic.
Our job is, at the very least, to keep that candle alive and present, and maybe light a second one, or a third.
And if we lower our expectations reasonably, based on, you know, our experience, on Brandolini's law, on the prevalence of bullshit and all that sort of stuff, then I think we can be far more optimistic about what, as a community, we can accomplish.
Thank you very much.
