This is a really good introduction to computational complexity if you've found yourself shying away from it. There are two other sites that I think might be helpful as well: here and here.



The rest of this post assumes you have some set-theoretic background, and that you are comfortable with the standard logical operators (especially '\(\rightarrow\)' and '\(\leftrightarrow\)') and the set-membership operator '\(\in\)'; whenever you see it, you know you're dealing with set-theoretic notation.



Let's take one very typical example: \(O(n) = O(n^2)\).



The problem here is with the '=' sign. It is being used in a very different sense from the one we learned in our youth.



Normally, the '=' sign means the same thing as a biconditional ('\(\leftrightarrow\)') in logic: either side of the equation can be substituted for the other. It doesn't matter whether it's the right-hand side (RHS) or the left-hand side (LHS) of the equation: see one, substitute the other; they are entirely interchangeable, because they say the same thing. The two sides may be syntactically different, for example '2' and '1+1', but their semantic meaning is identical: the concept of 'two'.



In the case of \(O(n) = O(n^2)\), the RHS of the 'equation' can be substituted for the LHS, but not the other way around: every function in \(O(n)\) is also in \(O(n^2)\), so in set terms \(O(n) \subseteq O(n^2)\), but not conversely. By standard mathematical convention, then, the equals sign in this example is not a true equals sign: it does not express identity.



One explanation for this oddity comes from Donald Knuth, in a draft of a letter he wrote to the American Mathematical Society, available here in \(\TeX\) format (I believe as a tribute to him). The final published letter omits the following passage, which sheds some light on why the '=' was never meant to be symmetric:



"I would begin my ideal calculus course by introducing a simpler '\(A\)-notation,' which means 'absolutely at most.' For example, \(A(2)\) stands for a quantity whose absolute value is less than or equal to 2. This notation has a natural connection with decimal numbers: Saying that \(\pi\) is approximately \(3.14\) is equivalent to saying that \(\pi = 3.14+A(.005)\). Students will easily discover how to calculate with \(A\) [...] I would of course explain that the equality sign is not symmetric with respect to such notations; we have \(3=A(5)\) and \(4=A(5)\) but not \(3=4\), nor can we say that \(A(5)=4\). We can, however, say that \(A(0)=0\)."



Let's take a look at the following formal definition of our favourite \(O(g(n))\):



\((1) \ \ O(g(n)) := \{f(n) \mid (\exists c > 0)(\exists n_0 > 0)(\forall n \geq n_0)[\,0 \leq f(n) \leq c \cdot g(n)\,]\}\)



'\(O(g(n))\)' denotes a set whose elements are functions, each with a rate of growth that depends directly on the \(n\)umber of data items the abstract algorithm processes. '\(g(n)\)' and '\(f(n)\)' are variables that get instantiated with concrete functions, e.g., '\(T(n)\)' (the relative \(T\)ime it takes for the algorithm to complete, that is, relative to other algorithms). '\(c\)' in the above formula just means \(c\)onstant: a factor absorbing things like the hardware and compiler/interpreter you're using. '\(n_0\)' is the threshold beyond which the bound must hold.
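Reading definition \((1)\) operationally, here is a small Python sketch; the function names and the concrete numbers are mine, for illustration. A program can only spot-check finitely many values of \(n\), so this verifies a proposed witness pair \((c, n_0)\) up to a cutoff rather than proving the universal claim:

```python
def in_big_o(f, g, c, n0, up_to=10_000):
    """Spot-check the witness (c, n0) for f ∈ O(g):
    does 0 <= f(n) <= c*g(n) hold for all n0 <= n <= up_to?"""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, up_to + 1))

# A concrete (hypothetical) running time T(n):
T = lambda n: 3 * n + 7

print(in_big_o(T, lambda n: n,     c=4, n0=7))   # True: T(n) ∈ O(n)
print(in_big_o(T, lambda n: n**2,  c=4, n0=7))   # True: T(n) ∈ O(n²) as well
```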



A brief digression into humanizing the set notation: very simply, \(O(n) \equiv c \times n\), a \(c\)onstant (your hardware and compiler speed) times the \(n\)umber of inputs to the algorithm. \(O(1)\), then, is \(c \times 1\): the \(n\)umber of elements no longer makes any difference; only your hardware/compiler speed does. That is why we call \(O(1)\) \(c\)onstant time: we have eliminated the \(n\)umber of items as a factor, leaving only the \(c\)onstant.
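As a concrete (and idealized) illustration of this digression, compare a linear scan with a hash-table lookup. The examples and numbers are my own, and a Python dict lookup is \(O(1)\) only in the average case:

```python
def linear_search(items, target):
    """O(n): the number of steps grows with the number of items."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def hash_lookup(index, target):
    """O(1) on average: steps are independent of how many items exist."""
    return index.get(target, -1)

data = list(range(1_000_000))
index = {x: i for i, x in enumerate(data)}

# Doubling n roughly doubles the work of linear_search, while
# hash_lookup stays flat: only the constant c (hardware, interpreter)
# remains as a factor.
print(linear_search(data, 999_999), hash_lookup(index, 999_999))
```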



In \((1)\) above, the two sets \(O(g(n))\) and \(\{\ldots\}\) are in fact identical. But in conventional computer science notation, we gloss over the fact that an element of a set is not identical with the set itself, and that subsets are not necessarily identical with each other. We just write '=' in both situations. Yet an equivalence relation, in any of its guises ('=', '\(\leftrightarrow\)', '\(\Longleftrightarrow\)', '\(\equiv\)', etc.), has three defining properties: reflexivity, symmetry, and transitivity. The '=' as used in computational complexity is not symmetric, and is therefore not a relation of identity.
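The failure of symmetry can be seen concretely: \(n \in O(n^2)\), but \(n^2 \notin O(n)\), because for any proposed constant \(c\) the bound \(n^2 \leq c \cdot n\) collapses as soon as \(n > c\). A small sketch (my own illustrative code) makes the point:

```python
def first_failure(c, n0=1, limit=10**6):
    """Return the first n >= n0 where n*n <= c*n breaks, or None."""
    for n in range(n0, limit):
        if n * n > c * n:
            return n
    return None

for c in (10, 1_000, 100_000):
    print(f"c={c}: n^2 <= {c}*n first fails at n={first_failure(c)}")

# Every candidate c is defeated at n = c + 1, so no witness (c, n0)
# exists for n^2 ∈ O(n): the '=' in O(n) = O(n^2) reads
# left-to-right only.
```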



As we have seen in Knuth's quote above, it was never supposed to be: the extra '\(A\)' notation is there precisely to signal that we are talking about '\(A\)bsolutely at most'.