
Thursday, September 5, 2019

Relevance and Sufficiency

The relevance and sufficiency conditions of a good argument appear to be an expansion of the validity concept in classical logic. Splitting the validity condition into two allows for more nuanced judgements about the degree of support the premises provide.(1)

I. Relevance
Relevance refers to premises that provide some evidence, offer reasons that support the conclusion, or can be arranged in a way from which the conclusion can be derived. Relevance can be categorized as positive relevance, negative relevance, and irrelevance.

A. Positive Relevance, Negative Relevance & Irrelevance
1. Positive Relevance
When assessing an argument, we would say that statement A is positively relevant to statement B if the truth of A counts in favor of the truth of B. In other words, A provides some evidence or reason to believe that B is true.

In the following case, the first statement is positively relevant to the second:
P. Smith has appendicitis, gout, and cancer of the bladder.
C. Smith is not healthy enough to run the 26-mile Boston Marathon.

Here the first statement provides evidence for the second statement, because it describes adverse aspects of Smith’s health, and good health is required to run a marathon.(2)

2. Negative Relevance
Statement A is negatively relevant to statement B if the truth of A counts against the truth of B. So if A is true, it provides some evidence or reason to believe that B is not true.

Consider the following example of negative relevance:

P. Jogging often results in knee injuries.
C. Jogging improves a person’s general health.

Here, the first statement is negatively relevant to the second, because frequent knee injuries count against having good general health.(2)

3. Irrelevance
Statement A is irrelevant to statement B if it is neither positively relevant nor negatively relevant to B. In other words, when statement A does not logically support or logically undermine statement B, we would say it is irrelevant.

P. Natural catastrophes such as earthquakes are beyond human control.
C. Human beings have no freedom of choice concerning their actions.

Here, the first statement cites natural events that are beyond human control, while the second statement concerns human choices about their own actions. The truth of the first statement would not count as any reason to accept or reject the second, which is why this pair exemplifies irrelevance.(2)

B. Argument Analysis
If the premises of an argument, considered together, are irrelevant to its conclusion, or are negatively relevant, the argument is not cogent. Any case in which the relevancy condition of argument adequacy is not satisfied will be a case in which sufficiency is not satisfied either. (If premises are not even relevant to the conclusion, they cannot provide sufficient support for it.)(2)


II. Sufficiency
Sufficiency refers to the degree of support the premises provide for the conclusion. Whereas relevance is a property of individual premises, sufficiency is a judgement made about all of the premises taken together. To be considered sufficient, the premises must provide enough support to make accepting the conclusion reasonable.

This is perhaps the most difficult principle to apply because we have no clear guidelines to help us determine what constitutes sufficient grounds for the truth or merit of a conclusion. Most argumentative contexts are different and thus create different sufficiency demands. 

The feature of the sufficiency principle that is most difficult to apply is the assignment of weight to each piece of supporting evidence.  Some sciences have well-developed sufficiency criteria in place. Statisticians, for example, have determined what constitutes a proper sample from which to draw defensible conclusions. But in informal discussion, it is usually very difficult to determine when enough evidence or evidence of the appropriate kind has been presented.





(1) Stanford Encyclopedia of Philosophy: Fallacies
(2) A Practical Study of Argument: Govier
Informal Logic and Logic: Blair
Attacking Faulty Reasoning: Damer


Wednesday, May 8, 2019

The Map Is Not the Territory

The Map Is Not the Territory by Shane Parrish



The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.

“The map appears to us more real than the land.”

— D.H. Lawrence

The Relationship Between Map and Territory

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.

In Korzybski’s words:


A.) A map may have a structure similar or dissimilar to the structure of the territory.

B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.

C.) A map is not the actual territory.

D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. A map with the scale of one mile to one mile would not have the problems that maps have, nor would it be helpful in any way.

To solve this problem, the mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will frequently use an incorrect model simply because we feel any model is preferable to no model. (Reminding one of the drunk looking for his keys under the streetlight because “That’s where the light is!”)

Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)

With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, a good idea and the human mind act something like the sperm and the egg — after the first good idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.

This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.

Let’s check out an example.

***

By most accounts, Ron Johnson was one of the most successful and sought-after retail executives by the summer of 2011. Not only was he handpicked by Steve Jobs to build the Apple Stores, a venture which had itself come under major scrutiny – one retort printed in Bloomberg magazine: “I give them two years before they’re turning out the lights on a very painful and expensive mistake” – but he had been credited with playing a major role in turning Target from a K-Mart look-alike into the trendy-but-cheap Tar-zhey by the late 1990s and early 2000s.

Johnson’s success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.

“(History) offers a ridiculous spectacle of a fragment expounding the whole.”

— Will Durant

With that success, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JC Penney. The situation of the department store was dour: Between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.

Their core position was a no-brainer, though. JC Penney had immensely valuable real estate, anchoring malls across the country. Johnson argued that their physical mall position was valuable if for no other reason than that people often parked next to them and walked through them to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the ’50s, ’60s, and ’70s, the heyday of the mall building era, rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JC Penney was making (some) money. There was cash in the register to help fund a transformation.

The idea was to take the best ideas from his experience at Apple (great customer service, consistent pricing with no markdowns and markups, immaculate displays, world-class products) and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh, and avoid the creeping staleness of retail.

Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “But wait, there’s more!” product launches at Apple. He was persuasive: JC Penney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.

The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JC Penney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.

What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JC Penney’s. Apple had a rabid, young, affluent fan-base before they built stores; JC Penney’s was not associated with youth or affluence. Apple had shiny products, and needed a shiny store; JC Penney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JC Penney was taking away discounts given prior, triggering massive deprival super-reaction.

“All models are wrong but some are useful.”

— George Box

In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JC Penney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JC Penney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)

The main issue was not that Johnson was incompetent. He wasn’t. He wouldn’t have gotten the job if he was. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer that tried to tackle a grand rapid, and the model he used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances, but not in others. The terrain had changed, but the old idea stuck.

***

One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series – Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.

Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid example I can recall is his firm criticism of a financial model called Value-at-Risk, or VAR. The model, used in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the confidence level, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state the firm has $1 billion at risk. 99.99% might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
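
To make those mechanics concrete, here is a minimal sketch, in Python, of a historical-simulation style VAR calculation. Everything in it is invented for illustration: the portfolio size, the fat-tailed return series, and the confidence levels are stand-ins, and real VAR systems are far more elaborate. The only point is to show how the reported loss figure balloons as the confidence level is pushed further into the tail.

import numpy as np

# Hypothetical setup: a $1 billion portfolio and ten years of made-up daily
# returns drawn from a fat-tailed Student t distribution (df=3).
rng = np.random.default_rng(0)
portfolio_value = 1_000_000_000
daily_returns = rng.standard_t(df=3, size=2520) * 0.01

for confidence in (0.95, 0.99, 0.999):
    # Historical-simulation VAR: the loss at the chosen percentile of the
    # observed return distribution, scaled to the portfolio size.
    var_return = np.percentile(daily_returns, 100 * (1 - confidence))
    var_dollars = -var_return * portfolio_value
    print(f"{confidence:.1%} one-day VAR: ${var_dollars:,.0f}")

Note that the jump from 99% to 99.9% rests on just a handful of extreme days in the sample, which is exactly where the historical record is thinnest.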

Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.

The problem, in Nassim’s words, is that:


A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.

In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution in order to predict the future. For example, if we could take 100 million human beings and analyze their height and weight, we could then predict the distribution of heights and weights on a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analyzing something with very small and predictable deviations from the average.

But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.

Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability. Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.

We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z.” (Although risk managers treat it that way.) What VAR actually says is “the risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. Deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.

In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would have not been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.

Here’s an easy illustration. On October 19, 1987, the stock market dropped by 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then and remains the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity — there are several other “Black Mondays” in history.) But here we see Nassim’s point: On October 18, 1987, what would the models have used as the worst possible case? We don’t know exactly, but we do know the previous worst case was a drop of 12.82%, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
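
As a rough back-of-the-envelope check on that claim, the sketch below asks what a normal-distribution model would have said about a one-day drop of 22.61%. The assumed 1% daily standard deviation is a made-up stand-in, not a measured figure; only the order of magnitude matters here.

from scipy.stats import norm

daily_std = 0.01           # assumed typical daily volatility of about 1% (illustrative)
drop = -0.2261             # the October 19, 1987 one-day return

z = drop / daily_std       # roughly -22.6 standard deviations
probability = norm.cdf(z)  # chance of a move this bad under a normal model

print(f"z-score: {z:.1f}")
print(f"probability under a normal model: {probability:.3e}")

A model that assigns odds like that to something that actually happened says more about the model than about the market.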

But the tails are very fat in finance — improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)

A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:


There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.

I find that those professional risk managers whom I heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “it works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader, according to the Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes”. According to the same standards, he would be, “in general”, and “on average” a good risk manager.

This is like a GPS system that shows you where you are at all times but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.

It was this type of naive trust of models that got a lot of people in trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: A world where home prices only went up. (Lewis Carroll would have approved.)

This was navigating Tulsa with a map of Tatooine.

***

The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? This is its own discussion for another time, and Taleb has gone to great pains to try and address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.

The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.

If one were to instead assume that there were no precisely accurate maps of the financial territory, they would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)

In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?

The approach requires not only assuming a future worst case far more severe than the past, but also dictates building an institution with a robust set of backup systems, and margins-of-safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.

When map and terrain differ, follow the terrain.

The trade-off, of course, is short-run rewards much less great than those available under more optimized models. Speaking of this, Charlie Munger has noted:


Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.

For Berkshire at least, the trade-off seems to have been worth it.

***

The salient point then is that in our march to simplify reality with useful models, of which Farnam Street is an advocate, we confuse the models with reality. For many people, the model creates its own reality. It is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes, it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even if they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)

How do we do better? This is fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant by stepping back to understand the context in which a map is useful, and where the cliffs might lie. Until we do that, we are the turkey.

Wednesday, September 16, 2015

Denying the Antecedent

Denying the Antecedent is a formal logical fallacy which consists of a conditional premise, a second premise that denies the antecedent of the conditional and a conclusion which denies the consequent of the conditional. The general form of the argument is:

P1. If P, then Q
P2. Not P
C. Therefore, not Q

Since P was asserted only as a sufficient condition for Q, not a necessary one, Q could still be true for other reasons even when P is false. Therefore, the argument is deductively invalid.

For example:

P1. If Queen Elizabeth is an American citizen, then she is a human being
P2. Queen Elizabeth is not an American citizen
C. Therefore, Queen Elizabeth is not a human being

With this example, both premises are true statements yet the conclusion is false. This, of course, is because being an American citizen is not a necessary condition for being a human being; she can be human without it.
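
The invalidity of the form can also be shown by brute force. The short sketch below (a minimal illustration, not part of the original example) checks every combination of truth values for P and Q and reports any row where both premises are true and the conclusion is false.

from itertools import product

# Enumerate all truth-value assignments for P and Q and look for a
# counterexample: premises true, conclusion false.
for p, q in product([True, False], repeat=2):
    premise1 = (not p) or q   # "If P, then Q" (material conditional)
    premise2 = not p          # "Not P"
    conclusion = not q        # "Therefore, not Q"
    if premise1 and premise2 and not conclusion:
        print(f"Counterexample: P={p}, Q={q}")

The counterexample it finds (P false, Q true) is exactly the Queen Elizabeth case: not an American citizen, yet a human being.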


Monday, September 14, 2015

Affirming the Consequent

Affirming the consequent is a formal logical fallacy which consists of a conditional premise, a second premise that asserts the consequent of the conditional, and a conclusion which asserts that the antecedent of the conditional is true. The general form of the argument is:

P1. If P then Q.
P2. Q
C. Therefore P.

Since P was never asserted as the only sufficient condition for Q, other factors could account for Q. Therefore, in terms of deductive logic, the argument form is invalid.

For example:

P1. If Bill Gates owns Fort Knox, then he is rich.
P2. Bill Gates is rich.
C. Therefore, he owns Fort Knox.

Obviously the consequent that Gates is rich is the result of factors other than owning Fort Knox.

It is important to understand that though affirming the consequent is a fallacy in terms of deductive reasoning, it can be used as a perfectly acceptable form of inference when used inductively or abductively. This of course is due to deductive reasoning's requirement that with a valid argument, if the premises are true then the conclusion must be true. On the other hand, induction and abduction do not have this certainty requirement and instead make inferences based on probability and plausibility.

For example:
P1. If the baby is hungry she will cry
P2. The baby is crying
C. The baby is hungry

This is not deductively valid since there are other reasons the baby may be crying. Perhaps she needs a diaper change or maybe she hit her head. On the other hand, depending on the circumstances, this could be considered a strong abductive argument. If we add more information through additional premises, the strength of the argument becomes less ambiguous, as the sketch following the expanded argument below suggests.

P1. If the baby is hungry she will cry
P2. The baby is crying
P3. The baby eats about every three hours
P4. The baby last ate about three hours ago
P5. The baby does not have a dirty diaper
C. The baby is hungry
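
To give a rough sense of how the added premises sharpen the inference, here is a small Bayesian-style sketch. All of the probabilities are invented for illustration; the point is only that evidence which undercuts the rival explanations (the clean diaper, the feeding schedule) raises the relative plausibility of the hunger hypothesis.

# Invented prior plausibilities for three rival explanations of the crying.
priors = {"hungry": 0.4, "dirty diaper": 0.4, "hit her head": 0.2}

# Invented likelihoods: how strongly each hypothesis predicts the added evidence
# "last fed about three hours ago and the diaper is clean".
likelihoods = {"hungry": 0.9, "dirty diaper": 0.05, "hit her head": 0.3}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalized.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: round(v / total, 2) for h, v in unnormalized.items()}

print(posteriors)  # hunger ends up far more plausible than the alternatives

With the extra evidence folded in, hunger accounts for most of the remaining plausibility, which mirrors the informal judgement above.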



Wikipedia: Affirming the Consequent 09/10/15

Reasoning: K. P. Mohanan and Tara Mohanan


Tuesday, August 11, 2015

Poisoning the Well

Poisoning the well is the use of a preemptive abusive or circumstantial ad hominem attack against an opponent with the purpose of discrediting or ridiculing everything they are about to say. It generally has the following form:

1. Unfavorable information (true or false) about person A is presented.
2. Therefore, (explicitly or implicitly) any claims about to be made by person A should be dismissed. 

Examples:

"Don't listen to anything Steve may tell you, he's a socialist."

or 

"Before you listen to my opponent, may I remind you that he has been to prison."

Practically speaking, poisoning the well is a form of ad hominem, and as such, one should follow the guidelines for analyzing an ad hominem to determine if it is being used in a fallacious manner. This essentially means questioning the relevance of the attack to the claims presented by the person against whom the attack was directed.



Wednesday, July 8, 2015

Abductive Arguments (Inference to the Best Explanation)

An abductive argument (also known as an inference to the best explanation) is an argument in which a hypothesis is inferred from some data on the grounds that it offers the best available explanation of that data.1 Though it may appear as a special type of induction, many philosophers view it as a separate type of inference.

The following example is useful in drawing the distinction between deduction, induction and abduction:

Deductive Reasoning: Suppose a bag contains only red marbles, and you take one out. You may infer by deductive reasoning that the marble is red.

Inductive Reasoning: Suppose you do not know the color of the marbles in the bag, and you take out a handful and they are all red. You may infer by inductive reasoning that all the marbles in the bag are red.

Abductive Reasoning: Suppose you find a red marble in the vicinity of a bag of red marbles. You may infer by abductive reasoning that the marble is from the bag.

Hence we can say that with a deductively valid inference, it is impossible for the premises to be true and the conclusion false. With an inductively strong inference, it is improbable for the premises to be true and the conclusion false. In an abductively weighty inference, it is implausible for the premises to be true and the conclusion false.

Abduction is essentially a kind of guessing by forming the most plausible explanation for a given set of facts or data. Its inference consists of three steps. First, it begins with the observation of the data, evidence, facts, etc. Second, it forms various explanations that could be given to account for the observations in the first step. Third, it selects the best explanation and draws the conclusion that the selected explanation is acceptable as a hypothesis. Here is the process in standard form:

P1. D exists.
P2. H1 would explain D. 
P3. H1 would offer the best (available) explanation of D. 
C. Therefore, probably, H1.

Abductive arguments are commonly used in many areas including law, archaeology, history, science, and medical diagnosis. A medical example is a doctor who examines a patient with certain symptoms and tries to reason from those symptoms to a disease or condition that would explain them. A legal example is a police detective who gathers evidence and then forms a hypothesis as to who committed the crime.

Evaluating Abductive Arguments
The strength of an abductive argument depends on several factors:
1. How decisively H1 surpasses the alternatives.
2. How good H1 is by itself, independently of the alternatives (we should be cautious about accepting a hypothesis, even if it is clearly the best one we have, if it is not sufficiently plausible in itself).
3. Judgments of the reliability of the data.
4. How much confidence there is that all plausible explanations have been considered (how thorough was the search for alternative explanations).

Additional factors to consider are:
1. Pragmatic considerations, including the costs of being wrong and the benefits of being right.
2. How strong the need is to come to a conclusion at all, especially considering the possibility of seeking further evidence before deciding.

1. A Practical Study of Argument

2. Abductive, presumptive and plausible arguments

Tuesday, July 7, 2015

Bradford Hill Criteria for Causation (epidemiology)

The Bradford Hill criteria for causation are a group of criteria or guidelines used to help determine if an observed association is potentially causal. They were established in 1965 by the English epidemiologist Sir Austin Bradford Hill.

Research to determine the cause of disease is a principal aim of epidemiology. As most epidemiological studies are observational rather than experimental, a number of possible explanations for an observed association must be considered before a cause-effect relationship can be inferred. In his 1965 paper The environment and disease: association or causation, Hill proposed the following nine guidelines to help assess if a causal relationship exists:


1. Strength: (effect size): A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.

2. Consistency: (reproducibility): Consistent findings observed by different persons in different places with different samples strengthens the likelihood of an effect.
3. Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. The more specific the association between a factor and an effect, the bigger the probability of a causal relationship.

4. Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).

5. Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.

6. Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge).

7. Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that "... lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations".

8. Experiment: "Occasionally it is possible to appeal to experimental evidence".

9. Analogy: The effect of similar factors may be considered.


Friday, June 19, 2015

Mill's Methods

Mill's Methods
The nineteenth century philosopher John Stuart Mill devised five methods for reasoning about cause and effect. Though they have serious limitations, they are still useful and widely taught today.

1. The Method of Agreement - Mill wrote "If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or the effect) of the given phenomenon." In other words, if there is a single circumstance that is present in all positive instances, then we can conclude that this circumstance is the cause of the phenomenon. Note that in textbooks this is often referred to as the direct method of agreement; it looks only at positive instances of the effect in question.

For example, let's say four students dined together at the cafeteria and two of them became ill with food poisoning. The students were questioned about what they ate, which resulted in the following list:

STUDENT   STEAK?   FRIES?   PASTA?   BEANS?   FOOD POISONING?
Carla     No       Yes      Yes      Yes      Yes
John      Yes      No       No       Yes      Yes
Tom       Yes      Yes      No       No       No
Mary      No       Yes      Yes      No       No

Based on the above information, we can conclude that it was the beans that gave Carla and John food poisoning as this was the only potential cause that was present in both instances.

Though not listed by Mill, some textbooks also refer to what is called the Inverse Method of Agreement (or Negative Method of Agreement). The Inverse Method of Agreement allows one to conclude that a certain circumstance is the cause of the phenomenon under investigation if this circumstance is the only circumstance (of those considered) that is absent in all negative instances.

Using the above example, the inverse method of agreement would lead us to look at the negative instances of Tom and Mary not getting food poisoning. Here we find the beans to be the only potential cause that was absent in both cases, and can thus conclude them to be the cause.
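
Both forms of the method of agreement lend themselves to a short mechanical check. The sketch below encodes the cafeteria table as Python sets and looks for the circumstance common to every positive instance (direct method) and the circumstance missing from every negative instance (inverse method); the data are simply the example above.

# Each student's meal, plus whether they got food poisoning (the table above).
meals = {
    "Carla": {"fries", "pasta", "beans"},
    "John":  {"steak", "beans"},
    "Tom":   {"steak", "fries"},
    "Mary":  {"fries", "pasta"},
}
poisoned = {"Carla", "John"}

positives = [meals[s] for s in meals if s in poisoned]
negatives = [meals[s] for s in meals if s not in poisoned]

# Direct method of agreement: circumstances present in every positive instance.
agreement = set.intersection(*positives)

# Inverse method of agreement: circumstances absent from every negative instance.
all_items = set.union(*meals.values())
inverse = all_items - set.union(*negatives)

print("Direct method of agreement:", agreement)   # {'beans'}
print("Inverse method of agreement:", inverse)    # {'beans'}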

2. The Method of Difference - "If an instance in which the phenomenon under investigation occurs and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former, the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon." In other words, if there is a positive and a negative instance where the presence or absence of all possible causes is the same except for one cause, which is present in the positive instance and absent in the negative instance, then that cause can be concluded to be the cause of the phenomenon. Note that the method of difference looks at both positive and negative instances of the effect in question.

Using the food poisoning example above there are two relevant instances where the method of difference can be applied:

STUDENT   STEAK?   FRIES?   PASTA?   BEANS?   FOOD POISONING?
Carla     No       Yes      Yes      Yes      Yes
Mary      No       Yes      Yes      No       No

Since the only potential cause in which they differ is present in the positive instance and absent in the negative instance, we can conclude it was the beans that caused the food poisoning.

3. The Joint Method of Agreement & Difference - "If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon." There seems to be a fair amount of controversy over this method among the scholars who examine such things. The biggest criticisms seem to be that the joint method (or indirect method) is not really a combination of the method of agreement and the method of difference, and that the definition above as provided by Mill is restrictive in that it does not allow full achievement of the intended purpose of the joint method. A more usable, amended joint method of agreement & difference is provided by Skorupski:

"If two or more instances in which the phenomenon occurs have a circumstance in common, while in two or more instances in which the phenomenon does not occur that circumstance is absent, and if there is no other circumstance or combination of circumstances which is present in all the instances in which the phenomenon occurs, and absent in all the instances in which it does not occur, then the given circumstance is the effect, or the cause, or an indispensable part of the cause, of the phenomenon."

This can be summarized as the circumstance which alone is present in all the positive instances and absent in all the negative instances.

Here is a modified version of the food poisoning example which demonstrates the amended joint method:

STUDENT   STEAK?   FRIES?   PASTA?   BEANS?   FOOD POISONING?
Carla     No       Yes      Yes      Yes      Yes
Ann       Yes      Yes      No       Yes      Yes
Doug      Yes      No       No       No       No
Byron     No       Yes      No       No       No

With this example, the method of agreement does not give a unique answer, since there are two circumstances (fries and beans) present in both positive instances. The method of difference also does not provide an answer, since there is no pair of positive and negative instances in which all circumstances are the same except for a single one that is present in the positive instance and absent in the negative instance. However, using the amended joint method we find that the beans are the cause, as they are the only circumstance which is present in all positive instances and absent in all negative instances.
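
The amended joint method can be checked the same way: look for the circumstance that is present in every positive instance and absent from every negative instance. The sketch below applies that test to the modified table above.

# The modified food poisoning table used for the joint method.
meals = {
    "Carla": {"fries", "pasta", "beans"},
    "Ann":   {"steak", "fries", "beans"},
    "Doug":  {"steak"},
    "Byron": {"fries"},
}
poisoned = {"Carla", "Ann"}

positives = [meals[s] for s in meals if s in poisoned]
negatives = [meals[s] for s in meals if s not in poisoned]

# Joint method: present in all positive instances AND absent in all negatives.
present_in_all_positives = set.intersection(*positives)   # {'fries', 'beans'}
absent_in_all_negatives = present_in_all_positives - set.union(*negatives)

print("Joint method of agreement and difference:", absent_in_all_negatives)  # {'beans'}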

4. The Method of Residue - "Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents."

5. The Method of Concomitant Variation - "Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation." 

Friday, June 12, 2015

Causal Inductive Arguments

A causal inductive argument is an inductive argument in which the conclusion claims that one event (or set of events) causes another.

Causality
Causality is the relationship between an event (the cause) and a second event (the effect), where the second event is understood to be the consequence of the first.


Types of Causes
The term "cause" can be used in several different ways:

1. Necessary Cause - A necessary cause (or condition) is one that is required to be present for the effect to occur. This relationship can be written as, C is the cause of E in the sense that C is a necessary condition of E. That is to say, without C, E will not occur. This relationship implies that the presence of E necessarily implies the presence of C. The presence of C, however, does not imply that E will occur.

For example, if a professor says that one can pass his class only by completing all the assignments, then completing the assignments is a necessary cause of the effect of passing the class. It should be noted that completing the assignments won't guarantee passing as there are other things (causes) that must happen such as having scores that average out to a passing grade.

2. Sufficient Cause - A sufficient cause is one that by itself is enough for the effect to occur. This relationship can be written as, C is the cause of E in the sense that C is a sufficient condition of E. That is to say, given C, E will occur. However, another cause may alternatively cause E. Thus the presence of E does not imply the presence of C.

For example, boiling a potato is a sufficient condition for cooking a potato, but it is not a necessary condition since there are many ways of cooking potatoes, such as baking or frying them.

3. Necessary & Sufficient Cause - A necessary and sufficient cause leads to an effect that always occurs when the condition is met and never occurs unless the condition is met. This relationship can be written as, C is the cause of E in the sense that C is a necessary and sufficient condition of E. That is to say, without C, E will not occur, and with C, E will occur.

For example, being a male sibling is necessary and sufficient for being a brother.

4. Contributing Cause - Commonly, when we speak of one event causing another we are referring to it being a contributing cause. This relationship can be written as, C is causally relevant to E. It is a condition that makes E more likely to occur than it would be were C not there.

A contributing cause is neither necessary nor sufficient in and of itself to bring about a certain effect.

For example, being physically inactive is a general contributing causal factor to being overweight. It is not a necessary condition, as some overweight people are physically active. Nor is it a sufficient condition, as some physically inactive people are not overweight. Nevertheless, it is causally relevant, being one of a number of contributing factors.


Distinguishing Between Correlation and Causation
A correlation is an association between two variables. When judging an association between variables, three possibilities exist:

1) Positive correlation - if a higher proportion of Qs than non-Qs are H, then there is a positive correlation between being Q and being H. In other words, Q and H increase and decrease in synchrony (parallel).
2) Negative correlation - if a smaller proportion of Qs than non-Qs are H, then there is a negative correlation between being Q and being H. In other words, Q tends to increase when H decreases and vice versa.
3) No correlation - if about the same proportion of Qs as non-Qs are H, then there is no correlation between being Q and being H.

The phrase 'correlation does not imply causation' is commonly used in science and statistics to emphasize that a correlation does not necessarily imply that one event causes the other. The reason for this is that a positive correlation generally allows for the existence of four possibilities:

1. Q is a cause of H. 
2. H is a cause of Q. 
3. The positive correlation of Q and H is a coincidence. 
4. Some other factor, X, is a cause of both Q and H.

To automatically infer that a positive correlation between Q and H means that Q causes H is to disregard the other three possibilities. This is why correlation alone is generally thought to be insufficient grounds to establish cause.
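
The fourth possibility, a common cause, is easy to simulate. In the made-up example below, a hidden factor X drives both Q and H; Q and H end up strongly correlated even though neither causes the other. All of the numbers are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# A hidden common cause X drives both Q and H; neither influences the other.
x = rng.normal(size=n)
q = 2.0 * x + rng.normal(size=n)
h = 1.5 * x + rng.normal(size=n)

correlation = np.corrcoef(q, h)[0, 1]
print(f"Correlation between Q and H: {correlation:.2f}")  # roughly 0.7 here

Reading that correlation as "Q causes H" would be exactly the mistake described above.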

Though a correlation alone is not enough evidence to establish causation, the absence of a correlation does establish the absence of a causal relationship.  This is true since correlation is a necessary aspect of causation even though it is not sufficient for it. The general form of this argument is:

P1. If Q is a cause of H, Q must be positively correlated with H. 
P2. It is not the case that Q is positively correlated with H. 
C. Therefore, it is not the case that Q is a cause of H.

Cogent Causal Arguments
We've established that correlation is a necessary condition for arguing causality, yet on its own it is not sufficient evidence. To establish a cogent causal argument premised on a positive correlation, it is necessary to provide evidence which seeks to exclude the other possibilities that correlation allows for. There are various methods from diverse fields of science and philosophy available to help investigate causal claims, some of which are listed below.

1. Mill's Methods
2. Bradford Hill criteria for causation




A Practical Study of Argument

Logical Reasoning

Khan Academy: Fundamentals: Necessary and Sufficient Conditions

A Preferred Treatment of Mill's Methods

Mill (Arguments of the Philosophers)

The Logic of Causal Conclusions: How we know that fire burns, fertilizer helps plants grow, and vaccines prevent disease

Causality and Causation: The Inadequacy of the Received View
A Short History of ‘Causation’
Causation

http://changingminds.org/explanations/research/conclusions/inferring_cause.htm

http://psych.cf.ac.uk/home2/white/white%20bjsp%202000.pdf

http://www.skeptic.com/insight/the-logic-of-causal-conclusions-how-we-know-that-fire-burns-fertilizer-helps-plants-grow-and-vaccines-prevent-disease/

http://science.jrank.org/pages/8545/Causality-Inus-Conditions.html

http://see.library.utoronto.ca/SEED/Vol4-2/Hulswit.htm

https://en.wikipedia.org/wiki/Bradford_Hill_criteria

https://en.wikipedia.org/wiki/Causal_reasoning

http://www.skepticalob.com/2011/02/if-correlation-is-not-causation-what-is.html

https://books.google.com/books?id=vaU0AAAAQBAJ&pg=PA189&dq=douglas+walton+causation&hl=en&sa=X&ei=v6iSVeuBMMimNpKAgvgF&ved=0CEAQ6AEwBg#v=onepage&q=douglas%20walton%20causation&f=false

Thursday, June 4, 2015

Appeal to Authority

An appeal to authority is an argument that something is true because someone of authority says it is true. The basic form of the argument is:

P1. Person X has asserted claim P
P2. Person X is an authority on subject K
C. Therefore P is acceptable

In practice, there are many instances where it is reasonable to accept inductive arguments in which an authority is used to support a claim. This is something I believe most would find intuitively true, given that we rely on the advice and counsel of experts all the time.

Whether an appeal to authority is legitimate or fallacious generally depends on whether the authority being cited is an expert on the matter under consideration, whether there is general agreement among experts in that area of knowledge, and whether the area of knowledge itself is a credible one.

Govier provides the following form of an acceptable appeal to authority:

1. Expert X has asserted claim P
2. X is a reliable and credible person in this context 
3. P falls within area of specialization K
4. K is a genuine area of knowledge
5. X is an expert, or authority, in K. 
6. The experts in K agree about P 
Therefore, 
7. P is acceptable

Given that the above conditions describe an acceptable appeal to authority, a violation of one or more of them leads to what is commonly referred to as a fallacious appeal to authority. Some ways an appeal to authority can go wrong or be weakened include:

1. The authority cited is not really an expert or is not an expert in the area pertaining to the issue at hand.
2. The authority is an "expert" in an area which is not a genuine area of knowledge (An "expert" in homeopathy promoting a treatment does not carry weight as homeopathy is not a genuine area of knowledge).
3. The authority's opinion is unrepresentative of what the majority of experts believe to be true about the subject.
4. There is widespread disagreement among experts on the subject.


Introduction to Logic and Critical Thinking 
A Practical Study of Argument
Fallacy Files: Appeal to Authority

To review later:
http://www.dougwalton.ca/papers%20in%20pdf/89reasoned.pdf

Thursday, May 28, 2015

Statistical Syllogism

A statistical syllogism is an inductive argument in which a statistical generalization is applied to a particular case. For example:

Most surgeons carry malpractice insurance.
Dr. Jones is a surgeon.
Therefore, Dr. Jones likely carries malpractice insurance.

This sort of argument can be written in the general form:

P1. Most A's are B
P2. x is an A
C. Therefore, probably x is a B

When the proportions are known the form can be written as:

P1. Z percent of A's are B
P2. x is an A
C. Therefore, it is probable to the .Z degree that x is B

In the general forms presented above, A is called the reference class,  B the attribute class and x is the individual object.

We often use informal versions of the statistical syllogism in everyday reasoning. For instance, if you read in the New York Times that the President is visiting China and you believe it to be true, on what basis do you justify this belief? Most people understand that you can't believe everything you read in a newspaper, but recognize that certain kinds of reports published in certain newspapers tend to be true. This is one of those kinds of reports, so it is likely true.

Strength/Weakness of a Statistical Syllogism
There are two primary standards which determine the strength of a statistical syllogism. First is the strength condition: the closer the proportion of the reference class that falls within the attribute class is to 100%, the greater the confidence we can have in the truth of the conclusion. Conversely, the closer that proportion is to 0%, the weaker the argument.

Second is the available evidence condition (also called the rule of total evidence), which requires using all available evidence when constructing or assessing such arguments. With statistical syllogisms this essentially means asking whether there is additional relevant information available concerning the individual object (x) that has not been included in the premises. Another way of saying this is that the individual object must be included in the reference class most specifically relevant to the conclusion. Failure to use all available evidence is commonly referred to as the Fallacy of Incomplete Evidence.

For example:

P1. Sixty percent of students at the University believe in God.
P2. Fred is a student at the University.
C. It is sixty percent probable that Fred believes in God.

But if we also know that Fred is a history major and that only forty percent of history majors believe in God, then it would not be appropriate to use the reference class in the example above, since it excludes this relevant information.
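
The point about reference classes can be put numerically. The sketch below just encodes the two proportions from the example; choosing the most specific relevant reference class changes the probability we should attach to the conclusion about Fred.

# Proportions taken from the example above.
believers = {
    "university students": 0.60,          # the broad reference class
    "university history majors": 0.40,    # the more specific reference class
}

def statistical_syllogism(reference_class: str) -> float:
    """Probability assigned to 'Fred believes in God' given the reference class."""
    return believers[reference_class]

# Using the broad class ignores relevant information about Fred.
print(statistical_syllogism("university students"))        # 0.6
# The rule of total evidence says to use the most specific relevant class.
print(statistical_syllogism("university history majors"))  # 0.4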

Due care must be taken when judging individuals using statistical syllogisms as their misuse can contribute to stereotyping and prejudice.


A Practical Study of Argument

Critical Thinking: An Introduction to Basic Skills

Critical Reasoning and Philosophy: A Concise Guide to Reading, Evaluating and Writing Philosophical Works




Wednesday, May 20, 2015

Analogical Arguments

An analogical argument uses a comparison between two or more things which share some similarity, and on this basis infers that they share some other property. The central topic about which we want to draw a conclusion is often referred to as the primary subject, and the thing(s) to which the primary subject is compared are called the analogues. The things the analogues and primary subject have in common are referred to as the shared attributes. The attribute which the analogues possess and that is being inferred of the primary subject is called the target attribute.

As described by Govier "An argument based on analogy begins by using one case (usually agreed on and relatively easy to understand) to illuminate or clarify another (usually less clear). It then seeks to justify a conclusion about the second case on the basis of considerations about the first. The grounds for drawing the conclusion are the relevant similarities between the cases, which show a commonality of structure."

The general form of an analogical argument is:

P1. A has properties p, q, r
P2. B has properties p, q, r
P3. A has property s
C. Therefore B probably has property s

where,

Analogue = A
Primary Subject = B
Shared Attributes = p, q, r
Target Attribute = s

For example:

P1. John's brother and parents smoked two packs of cigarettes a day and ate fatty foods.
P2. John smoked two packs of cigarettes a day and ate fatty foods.
P3. John's brother and parents all died prematurely of heart attacks.
C. Therefore, John will probably die prematurely of a heart attack.

Here is another example in non-standard language:

Tom goes to Las Vegas for the first time. He goes into a huge casino with lots of slot machines, gambling tables, bars, and an all-you-can-eat buffet. He then goes into a second huge casino that also has lots of slot machines, gambling tables, and bars. He becomes hungry, remembers the first casino had an all-you-can-eat buffet, and concludes that this casino probably has one as well.

Evaluating Analogical Arguments
The strength or weakness of an analogical argument depends upon a number of considerations:

Similarity - Verify that the properties proposed as being shared among the comparison group (shared attributes) do indeed exist. As analogical arguments are rarely actually presented in the form above, it may even be necessary to first list just how it is assumed the comparison groups are similar. Here is a simple example. "John is like Mike. Mike is smart. Therefore John must be smart". In this example none of the assumed similarities between John and Mike have been presented. Before the argument can carry any weight these similarities must be listed and verified.

Relevance - The more relevant the shared attributes are to the target attribute, the stronger the argument. Here is an example of an analogical argument which lacks relevance. "Book A and Book B both have a hardbound cover, pages, words on the pages and numbers at the bottom of the pages. Book A is a boring story. Therefore we can assume that Book B has a boring story." Though I have given a number of similar properties between Book A and Book B, none of these properties are relevant and thus do nothing to increase the probability that Book B is boring.

Number - The more shared attributes the primary subject and analogues share in common with each other, the stronger the argument. This is based on the notion that the more two things are alike, the more likely they also share the property stated in the conclusion. As stated above, relevance plays a key role in determining how much weight these similarities are given.

Disanalogy - Relevant disanalogies or dissimilarities must also be considered when determining the strength or weakness of an analogy. For example if I say, "I have known three people who have had surgery at this hospital with the same surgeon and they have all turned out successfully. Therefore Jane's surgery will also be a success." But what if the three success stories all had minor surgery and Jane is scheduled for a high risk procedure? This of course would be a very relevant disanalogy.


Critical Thinking Web: Analogical Arguments

Monday, May 18, 2015

Hasty Generalization

The hasty generalization is an informal fallacy in which an inductive generalization is made from a sample that is inadequate to support the generalization in the conclusion. As discussed in the post on inductive generalization, this may be because the sample is too small or biased.

Hasty generalizations often result from anecdotal arguments, which are short stories typically taken from the personal experience of the arguer. Generally, these anecdotal arguments describe only one or a few episodes which are then used to generalize about the population.

For example:

"Acupuncture works. My friend Tom tried it and he said it cured his back pain.".

or

"Smoking isn't harmful. My dad smoked a pack a day and lived until 95."

When countering an argument which uses a hasty generalization, the book "Attacking Faulty Reasoning" suggests using an absurd example to demonstrate the flaw. For example,

“Faculty kids are real brats. I babysat with one the other night, and he was spoiled, rude, and uncontrollable.”

If necessary, put the argument into standard form:
Since the faculty child I babysat for recently was a brat, (premise)
[and one faculty child is a sufficient sample of faculty children to determine what is true of all faculty children,] (implicit premise)
Therefore, all faculty children are brats. (conclusion)


The Nizkor Project: Hasty Generalization
Fallacy Files: Hasty Generalization
Attacking Faulty Reasoning


Thursday, May 14, 2015

Inductive Generalization

An inductive generalization is an argument that moves from particular premises to a generalized claim. As defined by Trudy Govier "In inductive generalizations, the premises describe a number of observed objects or events as having some particular feature, and the conclusion asserts, on the basis of these observations, that all or most objects or events of the same type will have that feature."

Example:
P1 - Pavlovian conditioning caused the dog Fido to salivate when a bell rang.
P2 - Pavlovian conditioning caused the dog Rover to salivate when a bell rang.
P3 - Pavlovian conditioning caused the dog Spot to salivate when a bell rang.
P4 - (etc.)
C - Therefore, Pavlovian conditioning causes all dogs to salivate when a bell rings.

It seems intuitive that the strength of the example above depends largely on how many instances of Pavlovian conditioning producing a salivating dog have been observed. A thousand instances of a salivating dog would make a stronger argument than only ten instances. This leads us to the concept of a sample.

Sample
"In inductive generalizations, features that have been observed for some cases are projected to others. Following established practice in statistics and in science, we call the observed cases the sample and the cases we are trying to generalize about the population." Statistical sampling methodologies are beyond the scope of this post but the basic idea is that the strength of an inductive generalization largely depends on sample size and how representative it is.  

In general, increasing the sample size decreases sampling error, because a larger sample is more likely to represent the population (though with diminishing returns). For more on why this is so, see the law of large numbers and the central limit theorem.
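As a rough illustration of this point, here is a small Python sketch (the population proportion and sample sizes are made up for the example) that estimates a population proportion from random samples of increasing size; on average, the error shrinks as the sample grows.

```python
import random

# Illustrative sketch: estimate the proportion of "smokers" in a made-up
# population from random samples of increasing size.
random.seed(42)

TRUE_PROPORTION = 0.25
population = [1] * 25_000 + [0] * 75_000  # 1 = smoker, 0 = non-smoker

for n in (10, 100, 1_000, 10_000):
    sample = random.sample(population, n)
    estimate = sum(sample) / n
    error = abs(estimate - TRUE_PROPORTION)
    print(f"sample size {n:>6}: estimate = {estimate:.3f}, error = {error:.3f}")
```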

A representative sample is one in which the selected segment closely parallels the whole population in terms of the characteristics that are under examination (for example, if one third of the population has relevant characteristic X, then one third of the sample should have characteristic X). We try to make samples representative by choosing them in such a way that the variety in the sample will reflect variety in the population.

Sampling methods include Random Sampling, Stratified Sampling, Systematic Sampling, Convenience Sampling, Quota Sampling and Purposive Sampling.
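For a sense of how two of these methods differ, here is a minimal, illustrative Python sketch (the population, the two strata and their sizes are invented) contrasting simple random sampling with stratified sampling, which samples each subgroup in proportion to its share of the population.

```python
import random

# Illustrative sketch: simple random sampling vs. stratified sampling.
# The two strata ("undergrad" and "grad") and their sizes are invented.
random.seed(0)

undergrads = [("undergrad", i) for i in range(8_000)]
grads = [("grad", i) for i in range(2_000)]
population = undergrads + grads

def simple_random_sample(pop, n):
    # Every member of the population has an equal chance of selection.
    return random.sample(pop, n)

def stratified_sample(strata, n):
    # Sample each stratum in proportion to its share of the population.
    total = sum(len(stratum) for stratum in strata)
    sample = []
    for stratum in strata:
        sample.extend(random.sample(stratum, round(n * len(stratum) / total)))
    return sample

for name, sample in [("simple random", simple_random_sample(population, 100)),
                     ("stratified", stratified_sample([undergrads, grads], 100))]:
    grad_count = sum(1 for group, _ in sample if group == "grad")
    print(f"{name:14}: {grad_count} graduate students out of {len(sample)}")
```

With the stratified method the sample always contains exactly 20 graduate students (their share of this made-up population), whereas the simple random sample will only approximate that figure.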


Monday, May 11, 2015

Induction: Inductively Strong & Inductively Cogent

Induction is the process of reasoning from one or more premises to reach a conclusion that is likely, though not certainly, true. Hence, an inductive argument is one in which, if the premises were true, the conclusion would be likely to be true.

"In the most general sense, Inductive reasoning is that in which we extrapolate from experience to what we have not experienced. The assumption behind inductive reasoning is that known cases can provide information about unknown cases."1 Govier goes on to describe inductive arguments as having the following characteristics:

1. The premises and the conclusion are all empirical propositions.
2. The conclusion is not deductively entailed by the premises.
3. The reasoning used to infer the conclusion from the premises is based on the underlying assumption that the regularities described in the premises will persist.
4. The inference is either that unexamined cases will resemble examined ones or that evidence makes an explanatory hypothesis probable.


Inductively Strong Arguments - An inductively strong (forceful) argument is one in which, if the premises were considered to be true, the conclusion is probably true. In other words, if we assume the premises are true, the likelihood that the conclusion of an inductively strong argument is true is greater than 50%. If the probability that the conclusion is true is 50% or less, then the argument is inductively weak.

Inductively Cogent Arguments - An inductively cogent argument is one which is inductively strong and all of its premises are actually true. 

To fit into our informal logic model, instead of requiring that the premises be true we would require that they be acceptable. This, of course, is due to the difficulty often encountered in establishing with certainty whether or not something is true. 

Types of inductive arguments include inductive generalization, statistical syllogism and analogical arguments.



1 A Practical Study of Argument

Critical Reasoning and Philosophy, A Concise Guide to Reading, Evaluating, and Writing Philosophical Works 


Probability and Induction

Thursday, May 7, 2015

Outlines

An outline is a method of presenting the main and subordinate ideas of a document by organizing them hierarchically.

Alphanumeric outline
An alphanumeric outline uses Roman numerals, capitalized letters, Arabic numerals and lowercase letters as prefix headings. 

The Chicago Manual of Style (CMS) uses the following outline format:

I. (Roman numeral) 
     A. (Capital letter) 
          1. (Number) 
               a) (Lowercase letter followed by closing parenthesis) 
                    (1) (Number enclosed in parenthesis) 
                         (a) (Lowercase letter enclosed in parenthesis) 
                              i) (Lowercase Roman numeral followed by a closing parenthesis)

The Modern Language Association (MLA) uses essentially the same method, except that the first lowercase letter is followed by a period instead of a closing parenthesis.

Decimal outline
The decimal outline uses only numbers as prefix headings, making it easier to see how every item relates to the rest of the hierarchy. It uses the following outline format:

1.
     1.1
          1.1.1
               etc.
Here is a sample decimal outline:

1.0 Choose Desired College
     1.1 Visit and evaluate college campuses 
     1.2 Visit and evaluate college websites 
          1.2.1 Look for interesting classes 
          1.2.2 Note important statistics
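As a side note, decimal numbering is easy to generate programmatically. Here is a small, illustrative Python sketch (the outline content mirrors the sample above, and the simple top-level numbering "1" rather than "1.0" is my own simplification) that prints decimal prefixes from a nested list:

```python
# Illustrative sketch: print a decimal outline from a nested list of
# (heading, children) pairs. Content mirrors the sample outline above.

def print_decimal_outline(items, prefix=""):
    for i, (heading, children) in enumerate(items, start=1):
        number = f"{prefix}{i}"
        print(f"{'     ' * number.count('.')}{number} {heading}")
        print_decimal_outline(children, prefix=number + ".")

outline = [
    ("Choose Desired College", [
        ("Visit and evaluate college campuses", []),
        ("Visit and evaluate college websites", [
            ("Look for interesting classes", []),
            ("Note important statistics", []),
        ]),
    ]),
]

print_decimal_outline(outline)
```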

Friday, February 6, 2015

Propositional Logic / Sentential Logic

I. Introduction
Propositional logic (also called propositional calculus, sentential calculus, sentential logic, etc.) is a branch of logic that studies ways of joining simple (atomic) propositions to form more complicated propositions using logical connectives, and the logical relationships of these propositions.

To highlight the difference between term logic and propositional logic, think of the sentence "All dogs are mammals". Using term logic, we would say the fundamental units of the sentence are the categories of dogs and mammals. In contrast, with propositional logic the fundamental unit is the entire statement or proposition "All dogs are mammals". It is important not to equate propositions with sentences, since a single sentence can contain more than one proposition. For instance, the sentence “All dogs are mammals, and all cats are mammals too” clearly contains two propositions.


II. Propositions
A proposition can be defined as a declarative sentence, or part of a sentence, that is capable of having a truth-value of either true or false, but not both. For the purposes of this post, the terms "proposition" and "statement" are used interchangeably.

For example, "Paris is the capital of France." is a proposition with a truth value of "true" while the sentence, "Everyone born on Monday has purple hair" has a truth value of "false."

Some sentences are not propositions, such as commands like "Close the door" or questions like "Is it hot outside?"

The smallest indivisible units in propositional logic are statements referred to as simple or atomic propositions. These are statements that are either true or false and cannot be broken down into simpler statements. For example, "The dog ran" is an atomic proposition. Atomic propositions can be combined to form complex propositions using connective words (see connectives below). For example, "The dog ran and the cat hid" is a complex proposition that combines two atomic propositions with the connective word "and".

III. The Language of Propositional Logic
Classical truth-functional propositional logic uses a simple symbolic language to represent propositions expressed in a natural language such as English. Simple (atomic) statements are represented by capital letters 'A', 'B', 'C', etc. The logical signs '∧', '∨', '→', '↔', and '~' are used in place of the truth-functional operators "and", "or", "if... then...", "if and only if", and "not", respectively. Parentheses are used to group propositions, much as they are in algebra and arithmetic.
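To make the idea of a truth-functional operator concrete, here is a short, illustrative Python sketch that encodes the five connectives as Boolean functions; the function names (neg, conj, disj, cond, bicond) are my own informal shorthand, not standard notation.

```python
# Illustrative sketch: the five basic connectives as truth functions.
# The names neg, conj, disj, cond and bicond are informal shorthand.

def neg(p):        return not p         # ~p
def conj(p, q):    return p and q       # p ∧ q
def disj(p, q):    return p or q        # p ∨ q (inclusive "or")
def cond(p, q):    return (not p) or q  # p → q: false only when p is true and q is false
def bicond(p, q):  return p == q        # p ↔ q: true when p and q have the same truth-value

# Example: evaluate "A ∧ ~B" for A = True, B = False.
A, B = True, False
print(conj(A, neg(B)))  # True
```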

A. Connectives (logical operators)
Connectives (logical operators) are words or phrases used either to modify a statement or join simple statements together to form a complex statement. Though there are many connectives, the five basic ones are:

NOT, AND, OR, IF_THEN (or IMPLY), IF AND ONLY IF.

Here is a chart showing each logical operator (the connective's formal name), its symbol, and its natural-language usage:

Negation: ~ ("not")
Conjunction: ∧ ("and")
Disjunction: ∨ ("or")
Conditional: → ("if... then...")
Biconditional: ↔ ("if and only if")

As there are no absolute standards regarding the symbols used in propositional logic, different authors use different symbols. This link provides a list of alternate symbols you may come across.

1. Negation
Not-P.     For example, “It is not green.”

The negation of statement P, simply written "~P" in language PL, is regarded as true if P is false, and false if P is true. Unlike the other operators, negation is applied to a single statement. The corresponding chart can therefore be drawn more simply as follows:
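A minimal, purely illustrative Python sketch prints that two-row table:

```python
# ~P: true when P is false, false when P is true.
for P in (True, False):
    print(f"P = {P!s:5}  ~P = {not P}")
```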


Though the word "not" is generally thought of as the English equivalent of "~", it can be expressed in many ways such as:

It is not the case that...
It would be false to say that...
...failed to...


2. Conjunction 
p and q.     For example, “It is wet and it is cold.”

Conjunction is a truth-functional connective similar to "and" in English and written "∧" in PL. When dealing with a conjunction, you must consider both p and q. That is, a conjunction is true if and only if both conjuncts are true.
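For illustration, the conjunction table can be printed with a few lines of Python:

```python
from itertools import product

# p ∧ q: true only when both conjuncts are true.
for p, q in product((True, False), repeat=2):
    print(f"p = {p!s:5}  q = {q!s:5}  p ∧ q = {p and q}")
```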


Though "and" is generally thought as the English equivalent to "∧" there are many ways it can be expressed. Some include:

both...and...
...as well as...
...but...
...however...
...though...
...nevertheless...
...still...
...while...
...despite the fact that...

Though these may not seem similar to the word "and", it is important to remember that propositional logic treats as a conjunction any sentence that is true only when both conjuncts are true and false when either or both conjuncts are false. For instance, the sentence "John loves Mary even though she barely tolerates him" is true only if both propositions "John loves Mary" and "she barely tolerates him" are true. If either or both are false, the whole proposition is false.

3. Disjunction
p or q (or both).     For example, “It is wet or it is cold.”

Disjunction is a truth-functional connective similar to "or" in English. The disjunction of two statements p and q, written in PL as "(p ∨ q)", is true if either p is true or q is true, or both p and q are true, and is false only if both p and q are false.

Though we say that "or" is the rough English equivalent to PL "∨", it should be noted that "∨" is used in the inclusive sense. More often than not when the word "or" is used to join together two English statements, we only regard the whole as true if one side or the other is true, but not both, as with the statement "Either we can buy the toy robot, or we can buy the toy truck; you must choose!" This is called the exclusive sense of "or". However, in PL, the sign "v" is used inclusively such as with the statement "Her grades are so good that she's either very bright or studies hard" which does not exclude the possibility of both.


Though the word "or" is generally used (imperfectly) as the English equivalent to "∨", there are other ways it can be expressed such as:

either...or...
...and/or...

4. Conditional (material implication)
If p, then q.     For example, “If it is green, then it is heavy.”

The conditional is a connective similar to "if..., then..." statements in English and is generally represented as "→" in PL. The first simple statement in a conditional is referred to as the antecedent, and the second is known as the consequent. For example, in the statement "If you flip the light switch, the lights will go out", the antecedent is "you flip the light switch" and the consequent is "the lights will go out".

A conditional statement asserts that if the antecedent p is true, the consequent q will be true as well. In other words, a conditional statement is only false if the antecedent p is true and the consequent q is false.
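Here is an illustrative Python sketch of the conditional's truth table; note that the only false row is the one where p is true and q is false:

```python
from itertools import product

# p → q: false only when the antecedent p is true and the consequent q is false.
for p, q in product((True, False), repeat=2):
    print(f"p = {p!s:5}  q = {q!s:5}  p → q = {(not p) or q}")
```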


5. Biconditional (material equivalence)
p if and only if q.    For example, "A triangle is equilateral if and only if it has three equal sides."

The biconditional is a connective similar to "if and only if" statements in English and is generally represented as "↔" in PL. A biconditional is regarded as true if its two component statements are either both true or both false, and as false if they have different truth-values.
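And an illustrative Python sketch of the biconditional's table:

```python
from itertools import product

# p ↔ q: true when p and q have the same truth-value, false otherwise.
for p, q in product((True, False), repeat=2):
    print(f"p = {p!s:5}  q = {q!s:5}  p ↔ q = {p == q}")
```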



B. Scope, Parentheses and the Main Connective
Whenever more than one connective is used in a statement, there is a chance of ambiguity. Consider the statement S∨C∧T, where S is "I will show you my stamps", C is "I will make you some coffee" and T is "I will give you $1000". In English it would read, "I will show you my stamps or make you coffee and give you $1000".

As it is presented, the statement is ambiguous and could be interpreted in two different ways:

1) "I will show you my stamps or make you coffee but in any event, I will give you $1000".

2) "Either I will show you my stamps or make you coffee and give you $1000".

The problem here is that we don't know what the scope is of either of the two connectives used in the statement. Does the disjunction in S∨C∧T connect S to C or does it connect S to C∧T? Does the conjunction in S∨C∧T connect C to T or does it connect S∨C to T?

To deal with this problem, parentheses are used to establish scope. So going back to our two interpretations of S∨C∧T, we would write:

1) "I will show you my stamps or make you coffee but in any event, I will give you $1000" as ((S∨C)∧T).

2) "Either I will show you my stamps or make you coffee and give you $1000" as (S∨(C∧T)).

*Note that in actual practice you will often find the outermost parentheses are omitted. 

In propositional logic, the main connective is the connective with the greatest scope in a statement. For example, with ((S∨C)∧T) we see that the scope of ∨ is limited to S and C while the scope of ∧ encompasses the entire statement. Therefore, in this example ∧ is the main operator.
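To see that the two bracketings really are different claims, here is a small, illustrative Python sketch that evaluates both on one arbitrarily chosen assignment of truth-values:

```python
# Illustrative sketch: the two bracketings of S∨C∧T come apart when, say,
# S is true but C and T are false (I show the stamps, but no coffee and no $1000).
S, C, T = True, False, False

reading_1 = (S or C) and T   # ((S∨C)∧T): "...but in any event, I will give you $1000"
reading_2 = S or (C and T)   # (S∨(C∧T)): "Either I will show you my stamps, or..."

print(f"((S∨C)∧T) = {reading_1}")  # False: the $1000 was promised unconditionally
print(f"(S∨(C∧T)) = {reading_2}")  # True: showing the stamps is enough
```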






Internet Encyclopedia of Philosophy: Propositional Logic
Introduction to Logic (second edition): Harry Gensler
Critical Thinking: An Appeal to Reason by Peg Tittle: Supplemental Chapter: Propositional Logic
Wikipedia: Negation
Wikipedia: Logical conjunction
Wikipedia: Logical disjunction
Logic Self-Taught: A Workbook

https://proofwiki.org/wiki/Definition:Scope_(Logic)
http://www.bu.edu/linguistics/UG/course/lx502/_docs/lx502-propositional%20logic.pdf

Monday, December 29, 2014

Thinking Skills

Thinking Skills (Average Joe Edition/Addition)

The following is an attempt to organize my posts related to clear thinking, problem solving and decision making into a coherent, easy-to-use format. As the title suggests, I'm just an average guy trying to get a better understanding of these topics and, as such, have no doubt oversimplified or misunderstood some of these concepts. Perhaps these posts may be of some benefit to someone out there, but please do your own research.

I. Fundamentals of Informal Logic

  A. Informal Logic in a Nutshell: Argument & Argument Analysis

  B. Deductive Arguments
      1) Categorical Logic
      2) Propositional Logic

  C. Inductive Arguments
      1) Inductive Generalization
      2) Statistical Syllogism
      3) Analogical Arguments
      4) Causal Arguments
          a. Mill's Methods
          b. Bradford Hill criteria

  D. Abductive Arguments

  E. Logical Fallacies
     -Ad Hominem
     -Appeal to Authority
     -Appeal to Popularity
     -Argument from Ignorance
     -Begging the Question/Circular Reasoning
     -Equivocation
     -False Analogy
     -False Dichotomy
     -Guilt by Association
     -Hasty Generalization
     -Non-Sequitur
     -Post Hoc Ergo Propter Hoc
     -Red Herring
     -Tu Quoque
     -Straw Man


II. Cognitive Psychology

  A. Dual Process Model of Thinking
  B. Cognitive Bias
     -Confirmation Bias and Positive Test Strategy
     -Dunning-Kruger Effect
     -Framing Effect
     -Gambler's Fallacy
     -Outcome Bias


III. Problem Solving
  A. Problem Solving Process


IV. Information/Knowledge Management
  A. Outline