Propositional logic, or 'sentence logic' in plainer terms, is the gentlest form of logic you can learn. It also forms the basis for every other part of classical logic, whether that's predicate logic, set theory, the lambda calculus, or all of standard mathematics. In short, if you want to start anywhere, start with sentence logic.
There are several mathematical conventions I wish I'd known when I first began to seriously study sentence logic. I'll give the most important of those conventions here.
Mathematical conventions (or logical ones; they really stem from the same source) are things that we've all agreed to adhere to. A convention doesn't have to be adhered to, though. It just makes mathematical and logical communication easier if you do.
For example, though classical logic has no causality (see below), there are in fact other forms of logic that do, and more generally there are logics that disregard one or more of these conventions (e.g., fuzzy logic, causal logic, intuitionistic logic, modal logic, and so on).
However, if you want to learn the kind of logic that gets computers to run, gets rockets to re-land, and meshes with physics, then classical logic is the way to go. And learning sentence logic first will give you a sense of how it all hangs together: predicate logic is really just sentence logic with a few extras.
This last fact has been made explicit in Paul Teller's amazing Modern Formal Logic Primer (see Volume 2, pp. 20–21, available here for quick reference).
This book, which the author has been kind enough to make available for free, is one of the best books on the topic I have ever read. If you prefer a hardcopy, you can very probably still get the out-of-print title at Amazon.com.
If you're just starting, read through the first three chapters of Volume 1. Do all the exercises. Then read through this webpage again, and finally read the third chapter again. My hope is that doing this will give you a solid basis for sentence logic. You can extend this basis well into predicate logic, set theory, and discrete (and even continuous) mathematics.
And always remember that nothing ever happens overnight.
Here, then, are the most important mathematical conventions implicit in the study of logic:
Truth Values. Every sentence letter is semantically either true or false, not both ('mutual exclusivity') and not neither ('exhaustivity': every part of the system, without exception, must have a truth value). This holds if and only if ('iff') you include semantics, that is, truth values. Logic consists of syntax (the sentence letters and the connectives joining them) and semantics (the truth values). It is possible to do syntax alone, without semantics; Natural Deduction is one of the (many) systems for doing so.
If a sentence letter is both true and false, you have a contradiction. If it is neither, you are no longer doing logic at all: to be within the system of logic, a sentence must be either true (T, \(\top\)), false (F, \(\bot\)), or both (a contradiction).
Law of the Excluded Middle. There are no shades or grades of truth and falsity in classical logic: either the sentence letter is completely true, or it is completely false. (Strictly speaking, this two-valuedness is the principle of bivalence; the excluded middle proper is its syntactic twin, \(A \lor \lnot A\), but introductory texts commonly run the two together.)
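If it helps to see these conventions concretely, here is a minimal sketch in Python (the language choice and the names `valuation` and `truth_value` are mine, for illustration only): a semantics is just an assignment of exactly one of two values to each sentence letter.

```python
# A valuation assigns exactly one truth value to each sentence letter:
# never both (mutual exclusivity), never neither (exhaustivity).
valuation = {"A": True, "B": False, "C": True}

def truth_value(letter: str) -> bool:
    # Asking for a letter outside the valuation raises a KeyError:
    # a sentence with no truth value is outside classical semantics.
    return valuation[letter]

print(truth_value("A"))  # True
print(truth_value("B"))  # False
```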
There are other, less important conventions, but these are the ones that cause the most trouble for beginners, especially because, apart from the Law of the Excluded Middle, they aren't mentioned anywhere that I have seen.
The logical if-then is not the causal if-then of natural languages like Japanese or Swahili. We tend to think 'if event A happens, then event B will happen because of event A; if event A had not occurred, event B would not have occurred either', and natural languages the world over reflect this causal reading.
But this is not what is meant by the logical 'if-then'.
In logic, the following statement is a logical truth: 'If my grandmother owns a set of wingèd snakes, then 2+2=4'. It is true under all circumstances, provided we go by the standard conventions of mathematics (e.g., '+' means addition, the digits are base 10, etc.). To see why, recall how the final truth value of the if-then connective ('\(\longrightarrow\)') is determined.
Given two inputs \(A\) and \(B\) (each can be any sentence whatever), the logical connective '\(\longrightarrow\)' (the logical if-then) is completely defined as follows:
| \(A\) | \(B\) | \(A \longrightarrow B\) |
|---|---|---|
| \(\top\) | \(\top\) | \(\top\) |
| \(\top\) | \(\bot\) | \(\bot\) |
| \(\bot\) | \(\top\) | \(\top\) |
| \(\bot\) | \(\bot\) | \(\top\) |
In the above table, \(\top\) = 'true' and \(\bot\) = 'false'. I wrote 'true' and 'false' in this way to reinforce that any symbols can stand for truth and falsity: {1, 0}, {⋆, ♪}, {♀, ♂}, etc. Furthermore, although these two symbols are in 1:1 correspondence with actual truth and actual falsity, in themselves, the symbols mean absolutely nothing.
{\(\top\), \(\bot\)} are conventional, as are {0, 1}, {F, T}, and {t, f}. The only reason we think of them as being true and false is that our natural language gives meaning to everything.
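Because the connective is completely defined by its table, it is easy to compute. Here is a minimal sketch in Python (the function name `implies` is my own label, not standard notation); it reproduces the table above row for row:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: false only when the hypothesis is true
    and the conclusion is false; true in every other case."""
    return (not a) or b

# Reproduce the truth table for A --> B, row for row.
for a, b in product([True, False], repeat=2):
    print(f"{a!s:<6} {b!s:<6} {implies(a, b)}")
```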
\(A\) is known as the Antecedent or Hypothesis. \(B\) is known as the Consequent or Conclusion. (I much prefer referring to them as 'Hypothesis \(\longrightarrow\) Conclusion' rather than 'Antecedent \(\longrightarrow\) Consequent', especially because 'consequent' in English very strongly implies causality, which classical logic does not have.)
Notice that any time the hypothesis \(A\) is false, the whole if-then statement is true. And if the conclusion \(B\) is true, then no matter what the antecedent is, the entire if-then statement is again true. This latter fact is the reason 'If my grandmother owns a set of wingèd snakes, then 2+2=4' is a logical truth. The statement '2+2=4' is true under every circumstance: it is logically true. So what we're really saying is: 'If my grandmother owns a set of wingèd snakes, then \(\top\)'. More abstractly: \(A \longrightarrow \top\), and looking at the table, anything \(\longrightarrow \top\) is true, making \(A \longrightarrow \top\) a logical truth.
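A quick way to convince yourself of this, in the spirit of the Python sketch above (again, the names are my own choices, not part of the logic):

```python
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# The conclusion '2+2=4' is logically true, so the whole conditional
# holds whatever truth value the grandmother-owns-snakes claim takes.
consequent = (2 + 2 == 4)  # evaluates to True
assert all(implies(antecedent, consequent) for antecedent in (True, False))
```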
One last thing: \(A \longrightarrow B\) is a statement whose main connective is implication ('\(\longrightarrow\)'). Whenever you see \(A \longrightarrow B\), if you also have \(A\) on its own line in natural deduction, you can immediately write down \(B\) on a new line (this rule is called modus ponens). The other way around, having \(B\) and concluding \(A\), is not allowed. Officially, it's a fallacy: 'affirming the consequent', which means treating the consequent, the conclusion of your argument, as though it were the hypothesis, the point from which you must reason to reach that conclusion.
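Here is a toy sketch of the valid inference and the fallacy side by side (the representation of a conditional as a pair of strings, and the function name, are my own devices for illustration):

```python
def modus_ponens(conditional: tuple[str, str], fact: str) -> str:
    """From A --> B together with A, conclude B; anything else is no inference."""
    hypothesis, conclusion = conditional
    if fact == hypothesis:
        return conclusion
    raise ValueError("invalid: affirming the consequent (or an unrelated fact)")

print(modus_ponens(("A", "B"), "A"))  # prints 'B': valid (modus ponens)
# modus_ponens(("A", "B"), "B")       # raises: affirming the consequent
```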
The biconditional ('\(\longleftrightarrow\)') is much closer to what we mean by 'if-then' in natural language. That's just a coincidence: the truth values of the biconditional work out so that if \(A\) is true, \(B\) is also true, and if \(A\) is false, \(B\) is also false. However, remember that classical logic (and by extension mathematics, set theory, lambda notation, etc.) has no causality at all. The definition of the biconditional operator just happens to be in 1:1 correspondence with our causal reading, but the operator itself belongs to a system without causality.
Here is the table defining the biconditional:
| \(A\) | \(B\) | \(A \longleftrightarrow B\) |
|---|---|---|
| \(\top\) | \(\top\) | \(\top\) |
| \(\top\) | \(\bot\) | \(\bot\) |
| \(\bot\) | \(\top\) | \(\bot\) |
| \(\bot\) | \(\bot\) | \(\top\) |
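As its name suggests, the biconditional is a conditional running in both directions: \(A \longleftrightarrow B\) has the same truth table as \((A \longrightarrow B) \land (B \longrightarrow A)\). A minimal Python check, continuing the sketch style from above (function names mine):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def iff(a: bool, b: bool) -> bool:
    """Biconditional: true exactly when a and b share a truth value."""
    return a == b

# The biconditional is implication running in both directions.
for a, b in product([True, False], repeat=2):
    assert iff(a, b) == (implies(a, b) and implies(b, a))
```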
I don't think any better book has been published than Paul Teller's to ease you into the study of logic. Universities I'm familiar with usually use either Copi (Copi, I., Cohen, C., and McMahon, K. (2010). Introduction to Logic. 14th edition. Routledge, New York) or The Logic Book (Bergmann, M., Moor, J., and Nelson, J. (2013). The Logic Book. 6th edition. McGraw-Hill Education, Columbus, OH).
In my opinion, they're missing out, if only for this reason: the language in both of the texts just mentioned is academic, formal, dry, at times even pompous. That is no way to speak to anyone learning logic for the first time. Teller's book, on the other hand, reads like being with him one-on-one. It has its flaws, of course (all things do), but within its pages there is no sense that logic is impenetrable, or a kind of knowledge that you either get or you don't. He shows that logic is something innate in being human. And you may as well try Teller's book: it's free.