
Yes, I know, it sounds crazy, but I'm serious:
Can anyone prove that 1 plus 1 equals 2?
infinisa wrote:  biljap wrote:  Ahh… I will never forget my first class, where we got a long and confusing proof that one plus one equals two… 
Really? I'm impressed! I never saw such a proof at school, but once read a book (in German!) in which this result was proudly presented as theorem no. 50! 
Good luck to you all!
Hello All
The idea here is to understand that concepts that we normally take for granted, and so apparently don't need defining, actually do need defining if we are to prove such "obvious" results as 1 + 1 = 2.
So, just to get you started (on the right track ), what do we mean by "1", "2" and "+"?
Good luck to you all!
Hmmmm... I'm going to have to think about this for a while.
Can you tell me if I'm starting off correct?
I think we'll have to define a set that's a subset of the natural numbers. We can then show that it forms an injection with the set containing the natural numbers which means that the set is countable.
We then need to show that this set contains the empty set. *axiom*
We then need to find a way to show that every element in the set has a successor.
From this, we may be able to define "1" by somehow showing that it's a successor of the empty set?
Maybe I need to visit the TA or professor's office hours for my set theory class... If you can confirm that I am at least headed in the right direction, then I'll put some more effort into it and then visit office hours... I hate visiting a professor empty handed or when I'm in the completely wrong direction...
We standardized the symbol "1" as the number one. The first natural number.
Then people said, hey what if we put together two 1's? And so, they invented addition.
So any number greater than 1 was expressed as 1+1, 1+1+1, 1+1+1+1, ... But those lazy people just thought: hey, why don't we shorten it? Maybe we could use the symbol "2" to denote 1+1, and "3" to denote 1+1+1, and so on.
And it was deemed practical. And everyone lived happily ever after.
Well it's easy.
We define the natural numbers. We define addition and multiplication. We define 1 as (for example) the identity of multiplication. We know 1+1 exists, because this is one of the axioms of addition in N. We define 2 as 1+1.
it's a hard job i think...
And in decimal, binary, or base 6?
No, I can't yet.
But I know that you prove it using Linear Algebra concepts.
Something like this: http://mathforum.org/library/drmath/view/51551.html
Enjoy it.
It's proven by this equation.... 1+1=2
It's that simple and you are trying to make it into something it isn't.
1+1=2 because that's just how it is... these are the basic building blocks of algebra; how can you prove these building blocks with other complex concepts that rely on them? Simple answer is you can, but it's pretty stupid.
coreymanshack wrote: 
It's proven by this equation.... 1+1=2
It's that simple and you are trying to make it into something it isn't.
1+1=2 because that's just how it is... these are the basic building blocks of algebra; how can you prove these building blocks with other complex concepts that rely on them? Simple answer is you can, but it's pretty stupid. 
AFAIK, Set Theory doesn't rely on Algebra.
Hello All
Thanks for all the feedback on this problem!
When I set this problem, I could think of three different kinds of approaches to a solution that might be tried:
1.) The pseudo-axiomatic approach: 1 + 1 = 2 by definition!
2.) The axiomatic approach, based on the Peano axioms (more about this later)
3.) The constructive approach, based on a construction of the natural numbers as sets
I hoped to get attempts at a solution of type 2, expected to get type 1, but didn’t expect to get type 3. In fact, within a couple of weeks I got types 1 & 3, but no type 2...
I’ll discuss each type of approach in a separate post to follow, but for now, I’ll make the following comments:
Solution type 1.) "1 + 1 = 2 by definition" is not a satisfactory solution as it stands, as it assumes the answer as the definition of 2  obviously too simple! And yet it contains the idea behind a correct approach as we shall see later.
Solution type 2.), based on the Peano axioms, is the kind of solution I was looking for.
Solution type 3.) based on a construction of the natural numbers as sets, is an advanced approach, and although perfectly valid, makes for a much harder approach than type 2.).
Stubru Freak wrote:  Well it's easy.
We define the natural numbers. We define addition and multiplication. We define 1 as (for example) the identity of multiplication. We know 1+1 exists, because this is one of the axioms of addition in N. We define 2 as 1+1. 
As remarked in my last post this approach is not a satisfactory solution as it stands, as it assumes the answer as the definition of 2.
And yet it contains the idea behind a correct approach, because it encourages us to look more carefully for a more appropriate definition of 2.
In fact, when we count 1, 2, 3, ... we know that we start with 1, and have a "recipe" to keep on getting from one number to the next.
So the definition of 2 is actually that it is the number after 1.
You might think (and some of you did!) that since we get from one number to the next by adding one, this means 2 = 1 + 1 by definition.
What is wrong here is that we are using a sledgehammer (addition) to crack a nut (finding the next number), and yet everyone learns to count before they learn how to add!
So we need a way to say that 2 is the number after 1 without using addition.
Let’s use the notation S(n) to mean the next number after n, where S stands for successor.
Using this notation we say that the definition of 2 is:
2 = S(1).
Looked at in this way, we count by starting at 1, then the next number is 2=S(1), followed by 3=S(2), and so on.
So far, so good. But if we look at the statement:
1 + 1 = 2
how do we define the rest of the terms?
We still need to define 1 (one) and + (addition), given that = (equals) just means "the same as".
To get this sorted out, we need a proper definition of the natural numbers (0, 1, 2, 3, ...), which leads us to the Peano axioms...
As I said in my last post, the way to get down to business with this problem is to define our terms (natural numbers), and their most basic properties, from which we can derive all their other familiar properties.
Mathematicians have been operating like this for thousands of years, most notably in Euclidean geometry, which was developed in Ancient Greece, and famously written up in Euclid's Elements about 300 BC – a standard textbook until the 20th century!
The most basic properties of a system (it could be numbers, or the geometrical plane, for example) are called axioms – these are unquestioned and (hopefully) self-evident truths. We then figure out everything else we need to know from these axioms using rules of deduction (i.e. logic).
Let’s see what these most basic properties (or axioms) of counting (or natural) numbers, N, might look like. By the counting (or natural) numbers we mean the numbers used for counting (1, 2, 3,...) together with zero (since the answer to the question “how many ...?” can be zero).
First of all, as I pointed out in my last post, counting involves starting somewhere (at 1) and knowing what the next number is – we used the notation S(n) for the number after n.
Of course, 1 follows zero, so we also have S(0) = 1. On the other hand, there is no natural number before 0: it’s the first.
Also, the counting procedure never repeats: every time we find the next number, it’s different from any other number we’ve had before – otherwise we’d get to a point where we couldn’t count any further – and the power of counting is that (in theory, anyway) it can be carried on forever. In other words, there are infinitely many numbers.
Finally, if we carry out the counting procedure, any number we can think of is eventually reached.
And that’s it – we have the Peano axioms for the natural numbers! Put more formally, here they are (in simple form):
1. 0 is in N.
2. If n is in N, then its "successor" S(n) is in N.
3. There is no n such that S(n) = 0.
4. If n is not 0, then there is a m in N such that S(m) = n.
5. If A is a subset of N, such that 0 is in A, and whenever n is in A, S(n) is also in A, then A = N.
Note: Axiom 5, known as "mathematical induction" is the hardest to grasp, but won’t be needed here!
But this is just the starting point: we now know what 1 and 2 are: 1=S(0) and 2=S(1).
So now we need to define addition: What do we mean by m + n where m and n are any two natural numbers?
Let’s try defining addition by defining the operation + n for increasing values of n (0, 1, 2, ...)
For any value of m, we know:
m + 0 is just m
m + 1 is the number after m, S(m)
m + 2 is the number after that, S(m+1) and so on
In fact, if we know what m + n is, we can define m + (n+1), or more correctly m + S(n), as follows:
m + S(n) = m + (n+1) = (m + n) + 1 = S(m + n)
So, if we can add n, we can also add (n + 1), i.e. S(n).
This means, that by repeating the procedure often enough, we can add any natural number n.
Putting this all together, we can now define addition (by any natural number) in terms of natural numbers and the successor operation, S.
So now we have defined all our terms, and so should be able to prove that 1 + 1 = 2.
Now that I’ve given all this help, it’s up to you guys to finish this off.
If you want to take a short cut, you can look up the solution already referenced by davividal
However, that solution is very terse; I hope that most of you will have found my presentation easier to understand.
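As an aside for readers who like to experiment: the definitions above (0, the successor S, and the two recursive rules m + 0 = m and m + S(n) = S(m + n)) can be transcribed almost literally into a short program. This is a sketch of my own in Python; the names `ZERO`, `S` and `add` are just labels for the notions in the posts above:

```python
# Model the naturals purely structurally: 0 is the empty tuple,
# and S(n) wraps n in a one-element tuple.
ZERO = ()

def S(n):
    """Successor: the number after n."""
    return (n,)

def add(m, n):
    """Addition by the two recursive rules:
    m + 0 = m, and m + S(k) = S(m + k)."""
    if n == ZERO:
        return m
    return S(add(m, n[0]))  # here n = S(k), and k = n[0]

ONE = S(ZERO)   # 1 = S(0)
TWO = S(ONE)    # 2 = S(1)

# The sought-after theorem: 1 + 1 = 2.
assert add(ONE, ONE) == TWO
```

Unwinding the recursive call reproduces the textbook derivation exactly: 1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = 2.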
Afaceinthematrix wrote:  Hmmmm... I'm going to have to think about this for a while.
Can you tell me if I'm starting off correct?
I think we'll have to define a set that's a subset of the natural numbers. We can then show that it forms an injection with the set containing the natural numbers which means that the set is countable.
We then need to show that this set contains the empty set. *axiom*
We then need to find a way to show that every element in the set has a successor.
From this, we may be able to define "1" by somehow showing that it's a successor of the empty set?
Maybe I need to visit the TA or professor's office hours for my set theory class... If you can confirm that I am at least headed in the right direction, then I'll put some more effort into it and then visit office hours... I hate visiting a professor empty handed or when I'm in the completely wrong direction... 
As I said in an earlier post, this is an advanced approach, which I didn’t expect anyone to try.
However, since the natural numbers are a specific object and not a class of objects (such as triangles in the plane), it is reasonable to try to actually construct them, rather than state their basic properties (axioms). The usual construction is based on sets. The idea is to construct a set representing the number n containing exactly n elements.
For 0, this is easy: the only set containing 0 elements is the empty set: Ø = {}.
For 1, we need a set containing 1 element: so we choose {0}.
For 2, we need a set containing 2 elements: so we choose {0, 1}.
By now, you get the idea: having defined the sets that represent the natural numbers up to n, we define the set representing n + 1 (or S(n)) to be the set {0, 1, 2, ..., n}, which, of course, has exactly (n +1) elements.
From here, you need to show that these numbers have a successor operation, and that they satisfy the Peano axioms.
First of all, the successor operation S is easy to define:
S(n) = {0, 1, 2, ..., n} = {0, 1, 2, ..., n − 1} U {n} = n U {n}.
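For the curious, this construction (the standard von Neumann one) can be played with directly using Python's `frozenset`; the code and its names are my own illustration, not part of the construction itself:

```python
# Von Neumann naturals: 0 is the empty set, and S(n) = n U {n}.
ZERO = frozenset()

def S(n):
    """Successor: n U {n}."""
    return n | frozenset([n])

ONE = S(ZERO)    # represents {0}
TWO = S(ONE)     # represents {0, 1}
THREE = S(TWO)   # represents {0, 1, 2}

# Each number n is a set with exactly n elements...
assert len(TWO) == 2
# ...containing exactly the numbers before it.
assert ZERO in TWO and ONE in TWO
```

Using `frozenset` rather than `set` matters: only hashable (immutable) sets can themselves be members of other sets, which is exactly what this construction requires.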
I don’t expect anyone to go to the trouble of showing that the Peano axioms hold for these numbers – this explanation is just to give you a flavour of the construction method – so let’s assume that this has been done.
The problem now, if you think about it, is that this whole business of defining numbers in terms of sets just takes the definition problem back a stage further – you still need to define sets!
In fact, in modern mathematics, we use sets (and their basic properties, or axioms) as the basic building blocks for (practically) everything else.
The starting point for modern mathematics is called Zermelo–Fraenkel set theory. This is supposed to be an axiomatic system strong enough to let us prove or disprove anything we wish in Mathematics.
Unfortunately, as it turns out (thanks to Gödel's incompleteness theorems), it doesn't!
In particular, Gödel shows that there is a true statement about natural numbers that cannot be proved using the Peano axioms. Worse still, if you add in this true statement as another axiom, then there is another true statement about natural numbers that cannot be proved using the new set of axioms, and so on. This problem is unfixable! That is, there is no (consistent) finite set of axioms about the natural numbers that is strong enough to allow you to prove all true statements and disprove all false ones!
As a small consolation, you can prove 1 + 1 = 2!
Take an apple out of a bag and put it on an empty table.
Take another apple out of the bag and put it, too, on the table.
Now look at those happy apples sitting there together and ask yourself how many apples are on that table. In English a sane person would answer there are two apples. What else would you call them?
Conclusion: 1 + 1 = 2.
Dennise wrote:  Take an apple out of a bag and put it on an empty table.
Take another apple out of the bag and put it, too, on the table.
Now look at those happy apples sitting there together and ask yourself how many apples are on that table. In English a sane person would answer there are two apples. What else would you call them?
Conclusion: 1 + 1 = 2. 
Problems:
1) It requires a linguistic concept of 'one' and 'apple'
2) It requires a semantic agreement that a certain entity is, in fact, an apple. *
3) It is not a proof – a proof must be true under all circumstances (within the postulates). You have demonstrated, not proved.
* Why, for example, do we call a Cox and a Pippins both apples? What characteristics must an apple have? Are there a set of fixed rules that distinguish an apple from any other entity? Try to think of such a set of rules if you can...
so let me get this straight: you've got to prove 1+1=2 without using a plus sign? then why are you allowed an equals sign?
the solution posted is: S(n) = {0, 1, 2, ..., n} = {0, 1, 2, ..., n − 1} U {n} = n U {n}
and that's REALLY supposed to be easier to understand? i see brackets and commas, algebra etc..
c'mon, seriously? some of you guys and these maths people must have been seriously bored to even bother doing this. there is no need to prove this; i have proved many equations and have been allowed to use a plus sign in the final solution, so why can't it be used for something so simple?
1+1=2
2=2
(1)
1=1
it cannot be simpler.
Nemesis234 wrote:  so let me get this straight: you've got to prove 1+1=2 without using a plus sign? then why are you allowed an equals sign?
the solution posted is: S(n) = {0, 1, 2, ..., n} = {0, 1, 2, ..., n − 1} U {n} = n U {n}
and that's REALLY supposed to be easier to understand? i see brackets and commas, algebra etc..
c'mon, seriously? some of you guys and these maths people must have been seriously bored to even bother doing this. there is no need to prove this; i have proved many equations and have been allowed to use a plus sign in the final solution, so why can't it be used for something so simple?
1+1=2
2=2
(1)
1=1
it cannot be simpler.  If you use a symbol it is assumed that the symbol is defined. Since the object is to define plus, then obviously you cannot use a plus sign – that is just common sense...
What you have probably done is demonstrate some equations – not at all the same thing. Proofs are rigorous and must apply in all legal cases – I seriously doubt that you have done a proper proof before.
You will find that the proof provided above by infinisa is rigorous (i.e., given the original postulates, it works in all cases). THAT is a proof.
Bikerman wrote:  Nemesis234 wrote:  so let me get this straight: you've got to prove 1+1=2 without using a plus sign? then why are you allowed an equals sign?
the solution posted is: S(n) = {0, 1, 2, ..., n} = {0, 1, 2, ..., n − 1} U {n} = n U {n}
and that's REALLY supposed to be easier to understand? i see brackets and commas, algebra etc..
c'mon, seriously? some of you guys and these maths people must have been seriously bored to even bother doing this. there is no need to prove this; i have proved many equations and have been allowed to use a plus sign in the final solution, so why can't it be used for something so simple?
1+1=2
2=2
(1)
1=1
it cannot be simpler.  If you use a symbol it is assumed that the symbol is defined. Since the object is to define plus, then obviously you cannot use a plus sign – that is just common sense...
What you have probably done is demonstrate some equations – not at all the same thing. Proofs are rigorous and must apply in all legal cases – I seriously doubt that you have done a proper proof before.
You will find that the proof provided above by infinisa is rigorous (i.e., given the original postulates, it works in all cases). THAT is a proof. 
hmm, still not buying it.
if we have to assume everyone is so dumb they cannot understand what a plus sign is, how can they read? how could they see a difference when using algebra, how could they multiply as this was the method posted.
i may have missed the point with this, but if you are saying you have to prove everything, then you need to start with the english language, then onto shapes and signs.
i know im being a douche and you all think im a moron, but tbh im finding this thread amusing.
it's the definition of addition. all the other rules of addition come from that single fact.
Nemesis234 wrote:  hmm, still not buying it.  So what? Buy what you like, it is a fairly free country. It doesn't affect basic facts...
Quote:  if we have to assume everyone is so dumb they cannot understand what a plus sign is, how can they read? how could they see a difference when using algebra, how could they multiply as this was the method posted.  For someone who doesn't know what a proof is, you are pretty quick to throw words like 'dumb' around.....
No, you don't need English – or did you think that there are no non-English-speaking mathematicians?
The only language you need is maths. You start by defining your number system.
I'll do it in English because the symbols are not available on Frih..
1. For each natural number x, x = x (reflexive).
2. For all natural numbers x and y, if x = y, then y = x (symmetric).
3. For all natural numbers x, y and z, if x = y and y = z, then x = z (transitive).
4. For all a and b, if a is a natural number and a = b, then b is also a natural number (closed under equality).
Now you have defined the number series, you can start to define the operations...
and so it goes...
Not everyone is capable of doing a 'proof' – mathematical proofs need to be extremely rigorous, since maths is the one area where we can be certain(ish) about Right and Wrong – unlike science, or any other discipline. Unfortunately Maths is actually not complete – there are some exceptions to the 'universal' truths – but that is another story.
If a theorem is true in maths, it will ALWAYS be true. Not like a scientific theory which is definitely NOT true and will one day be bettered.
Try as you might, you will not find a flaw with Pythagoras' theorem in any Euclidean geometry. The quickest way to immortality is to prove something in maths and do it elegantly, so the proof remains long after you are gone. Pythagoras was nearly 3000 years ago, but people still know him better than any king, pharaoh, emperor....
Euler's identity is a perfect example – e^(i*pi) + 1 = 0
(looks terrible without the proper symbols though...)
Nemesis234 wrote:  hmm, still not buying it.
if we have to assume everyone is so dumb they cannot understand what a plus sign is, how can they read? how could they see a difference when using algebra, how could they multiply as this was the method posted.
i may have missed the point with this, but if you are saying you have to prove everything, then you need to start with the english language, then onto shapes and signs.
i know im being a douche and you all think im a moron, but tbh im finding this thread amusing. 
Bikerman's response is correct, but I have something to add.
The problem seems to arise from the way you (and everyone else) were taught mathematics. When you first learn about mathematics, you rely a lot on "common sense". You use analogies like "one apple and one apple equals two apples". This is a good way to learn about mathematics, because it's very simple and easy to understand.
But when you learn more advanced mathematics, you will see that a lot of things you think are common sense can actually be derived from other things you think are common sense. So you're assuming too much: you could be assuming less, and still get the same results.
That's what axioms are about: you define a set of absolute truths, that is as simple as possible, and where none of the absolute truths can be derived from the others. (Note that you define these things. They don't necessarily have to correspond to reality, but it's useful if they do, like the branches of mathematics that are used by regular people.)
Of course, most people don't need to do that. It's something for mathematicians. My mathematical education isn't bad at all, and I still rely on common sense for addition, without a strict axiomatic definition.
But to claim that such a basic thing "shouldn't be proven" reduces mathematics to an inductive science, rather than a deductive science (i.e., you rely on experiment to show things, instead of rigorous proofs). And the inductive methodology, while certainly often useful where the deductive methodology isn't, gives much weaker results (we're not 100% certain about anything in physics, just 99% certain or more, while we're absolutely 100% certain about most of mathematics).
Stubru Freak wrote:  Nemesis234 wrote:  hmm, still not buying it.
if we have to assume everyone is so dumb they cannot understand what a plus sign is, how can they read? how could they see a difference when using algebra, how could they multiply as this was the method posted.
i may have missed the point with this, but if you are saying you have to prove everything, then you need to start with the english language, then onto shapes and signs.
i know im being a douche and you all think im a moron, but tbh im finding this thread amusing. 
Bikerman's response is correct, but I have something to add.
The problem seems to arise from the way you (and everyone else) were taught mathematics. When you first learn about mathematics, you rely a lot on "common sense". You use analogies like "one apple and one apple equals two apples". This is a good way to learn about mathematics, because it's very simple and easy to understand.
But when you learn more advanced mathematics, you will see that a lot of things you think are common sense can actually be derived from other things you think are common sense. So you're assuming too much: you could be assuming less, and still get the same results.
That's what axioms are about: you define a set of absolute truths, that is as simple as possible, and where none of the absolute truths can be derived from the others. (Note that you define these things. They don't necessarily have to correspond to reality, but it's useful if they do, like the branches of mathematics that are used by regular people.)
Of course, most people don't need to do that. It's something for mathematicians. My mathematical education isn't bad at all, and I still rely on common sense for addition, without a strict axiomatic definition.
But to claim that such a basic thing "shouldn't be proven" reduces mathematics to an inductive science, rather than a deductive science (i.e., you rely on experiment to show things, instead of rigorous proofs). And the inductive methodology, while certainly often useful where the deductive methodology isn't, gives much weaker results (we're not 100% certain about anything in physics, just 99% certain or more, while we're absolutely 100% certain about most of mathematics). 
He's probably still in high school.
I'm not saying I understand all of this, nor can I write a formal proof, but you definitely get a lot more exposure to it in college. I know a lot of people taking advanced mathematics do some crazy stuff. It's really interesting.
In my opinion, such a fact doesn't need to be proved; it is definitely one of the most basic rules of mathematics. Mathematics needs some basic rules to support all the further calculations.
For a quite simple example, can you prove that what we called water is water?
I don't think it necessary to prove what I stated just now as well. It's one of the basic rules of a language. It's water, just so simple.
inuyasha wrote:  In my opinion, such a fact doesn't need to be proved; it is definitely one of the most basic rules of mathematics. Mathematics needs some basic rules to support all the further calculations.
For a quite simple example, can you prove that what we called water is water?
I don't think it necessary to prove what I stated just now as well. It's one of the basic rules of a language. It's water, just so simple.  You are mixing up science with maths. Mathematics relies on proofs. Science relies on disproofs. Of course it needs to be shown that 1+1=2 – what else do you do? Just believe it? That way lies madness. Do you believe that?
Is it true? How are you going to demonstrate that if you cannot even demonstrate that 1+1=2?
If you are going to build a rigorous system of expression then you need to start with as few assumptions (axioms) as possible and prove everything based on those axioms.
Are you so confident that 1+1 always = 2? Does it work when the unit quantities are imaginary quantities such as the root of −1? Can you say that SQRT(−1) + SQRT(−1) = 2 SQRT(−1)?
You can if it can be rigorously shown that 1+1=2, but otherwise you are blowing smoke...
Yeah, there are entire sections of mathematics devoted to explaining and deriving simple principles such as this. Entire books have been written.
At certain points in my studies, I don't think I would have been terribly surprised to see this as an exam question.
So yes, it is possible.
Well done davividal, this is exactly the solution I was looking for, based on Peano's axioms. (Too bad you didn't find the proof yourself!)
Sorry I overlooked this post in my posts dated Mon Dec 28/29, 2009.
Peano's axioms allow us to state the "obvious" properties of counting numbers, from which the concept of addition can be defined, so it now makes sense to ask for a proof of a statement like 1 + 1 = 2.
To show you really got it, how about a proof that 2 + 2 = 4?
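As a hint for the 2 + 2 = 4 challenge: the same two rules unwind completely mechanically. The little tracer below is my own sketch (using ordinary Python integers to stand in for the Peano numerals), recording each rewriting step as it applies m + 0 = m and m + S(k) = S(m + k):

```python
def add(m, n, steps):
    """Apply m + 0 = m and m + S(k) = S(m + k), recording each step."""
    if n == 0:
        steps.append(f"{m} + 0 = {m}")
        return m
    steps.append(f"{m} + S({n - 1}) = S({m} + {n - 1})")
    return add(m, n - 1, steps) + 1  # the outer S(...)

steps = []
result = add(2, 2, steps)
assert result == 4
# steps now records the derivation:
#   2 + S(1) = S(2 + 1)
#   2 + S(0) = S(2 + 0)
#   2 + 0 = 2
# i.e. 2 + 2 = S(2 + 1) = S(S(2 + 0)) = S(S(2)) = S(3) = 4.
```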
I don't think that need to be proved. Just look up the definition of two. Nothing is more natural.
inuyasha wrote:  I don't think that need to be proved. Just look up the definition of two. Nothing is more natural. 
You are obviously unclear on the concept. Just look up the definition of "formal axiomatic systems."
I don't think we really need to show that 1 plus 1 is two. It's like a theorem (or something else; I think there is a more appropriate word for it) that doesn't need any proof at all.
Oh if you are interested in some crazy mathematics this should interest you
http://www.frihost.com/forums/viewtopic.php?t=104254&start=0
metalfreek wrote:  I don't think we really need to show that 1 plus 1 is two. It's like a theorem (or something else; I think there is a more appropriate word for it) that doesn't need any proof at all. 
Someone else unclear on the concept. You might look up the definition of a "theorem" (which is something that requires proof, and has been proven). You're thinking of "axiom", which "1+1 = 2" is most certainly not. It can be proven, and has been proven, as you can discover if you look up "Peano postulates" or "formal arithmetical systems".
If I have 1 carrot and then add another carrot to it, then I have 2 carrots. Everyone else is just overthinking things... That's the problem with today's world...
So the question shouldn't be is 1+1=2 it should be why isn't it 2.
An elegant proof for a mathematician is one that is fully rigorous, yet as simple as possible.
For posting on a forum with a general audience, I think it's appropriate to drop the full rigour, and not expect all axioms to be explicitly stated. So if it gets from point A (simple pre-mathematical ideas assumed but not precisely stated) to point B (what we want is shown) with minimal handwaving, and is clear, it might be considered an elegant fuzzy demonstration.
I would start with the concept of "unity", which I will dare to give the name 1, with no formal definition.
"Increment" is the next concept, the "next" number. I would not consider zero, as I think that is a more advanced concept from the commonsense point of view. I know of professional mathematicians who hesitate to make statements about empty sets, so I may be onto something there.
Addition can be defined as repeated incrementation, grouping the operations into two sets.
Multiplication can be defined as repeated addition in a similar way.
Powers can be defined as repeated multiplication in a similar way.
I consider "towers" of numbers to be an additional extension of the idea. In that case, as with powers, the operation is no longer commutative so you have to be careful about the order of operations.
Now if only I knew how to write this up clearly, I'd be able to teach 4 year olds addition, 5 year olds multiplication, and 7 year olds exponents.
Comments and/or implementation of this type of demonstration are welcomed.
edit: I think the terms "one" and "one more" would be more natural.
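The ladder of definitions sketched in the post above (increment as the single primitive, addition as repeated incrementation, multiplication as repeated addition, powers as repeated multiplication) is easy to prototype. Here is a rough Python rendering of that idea; the function names are my own choice:

```python
def inc(n):
    """The single primitive: "one more"."""
    return n + 1

def add(m, n):
    """Addition as n repeated incrementations of m."""
    return m if n == 0 else inc(add(m, n - 1))

def mul(m, n):
    """Multiplication as n repeated additions of m."""
    return 0 if n == 0 else add(m, mul(m, n - 1))

def power(m, n):
    """Powers as n repeated multiplications by m (not commutative!)."""
    return 1 if n == 0 else mul(m, power(m, n - 1))

assert add(1, 1) == 2
assert mul(3, 4) == 12
assert power(2, 5) == 32
```

Note how each level of the ladder only ever calls the level below it plus itself: exactly the "repeated operation" structure the post describes, and also why commutativity is lost at the top (power(2, 3) is 8 while power(3, 2) is 9).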
there are some concepts which have to be taken for granted. If you ask me what do I mean by 1 + 1 =2, I'll ask you to look up and find out who invented the numerals 1 and 2. The man who did it always knew that this was going to happen, as adding one takes you to a number's successor, and that is what is happening!
infinisa wrote:  Yes, I know, it sounds crazy, but I'm serious:
Can anyone prove that 1 plus 1 equals 2? 
One plus one equals three.
I thought everyone knew that!
... guess not
The proof is a bun in the oven.
Bluedoll wrote:  infinisa wrote:  Yes, I know, it sounds crazy, but I'm serious:
Can anyone prove that 1 plus 1 equals 2? 
One plus one equals three.
I thought everyone knew that!
... guess not
The proof is a bun in the oven. 
Nope. That just demonstrates (doesn't prove) that 1/2 + 1/2 = 1.
The concept of adding two numbers depends on the base we use while adding (you can also write it as a subscript). The number system we use for adding is the decimal number system, i.e. every number is given 10 as its base, so the counting actually looks like this:
(1)base10
(2)base10
(3)base10 ... and so on.
This notation explains why 1 + 1 = 2. Writing the problem with explicit bases gives: "(1)base10 + (1)base10 = (2)base10".
You might be thinking: why give a base to each number when there seems to be no use for it? The point is that when we write the base, we are actually dividing each number by the base.
Let's take an example: adding 0 to a number gives you the same answer, like 4 + 0 = 4.
Writing the above problem with explicit bases:
(4)base10 + (0)base10 = (4)base10
See what happens when you divide the number by the base 10 (use integer division, don't carry on into decimals):
4 divided by 10 gives you remainder = 4 and quotient = 0.
The remainder is the answer digit, and the quotient is the carry given to the next digit, if one is present.
Now take the problem 1 + 1 = 2:
1 divided by 10 + 1 divided by 10 = (2)base10
Explanation:
1 divided by 10 gives you remainder = 1 and quotient = 0;
1 divided by 10 gives you remainder = 1 and quotient = 0.
The two remainders are simply added: 1 + 1 = (2)base10.
In everyday life we ignore the base 10 for simple calculations, and since this addition produces only one digit, there is no carry.
Reply if not satisfied.
There are other number systems too, but with them calculation becomes more difficult, like hexadecimal counting, whose base is 16.
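Just to make the remainder/carry procedure above concrete, here is a minimal sketch of it in Python (the function name and structure are my own illustration, not from the post): each digit sum is divided by the base, the remainder becomes the answer digit, and the quotient becomes the carry.

```python
def add_digits(a, b, base=10):
    """Add two non-negative integers digit by digit, making the
    remainder (answer digit) and quotient (carry) explicit, as
    described above. Returns the digits of the sum, most
    significant first."""
    result = []
    carry = 0
    while a or b or carry:
        total = a % base + b % base + carry
        carry, digit = divmod(total, base)  # quotient = carry, remainder = digit
        result.append(digit)
        a //= base
        b //= base
    return result[::-1] or [0]

print(add_digits(1, 1))        # [2]      -- 1 + 1 = 2, no carry needed
print(add_digits(15, 1, 16))   # [1, 0]   -- 0xF + 0x1 = 0x10 in hexadecimal
```

As the post notes, 1 + 1 in base 10 never produces a carry, which is why everyday arithmetic can ignore the base entirely.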
vineet wrote:  This concept of adding two numbers come from the base which we use while adding [remainder deleted] 
No, it does not. Addition is a concept from set theory. Number bases are completely irrelevant. What you posted was a prescription for how to perform addition, not a proof of addition as a valid mathematical concept.
I suggest you read through the rest of the thread, and perhaps also look up some of the terms "set theory," "number theory," "axiomatic arithmetic," "Peano's postulates". Come back to us if you have specific questions about any of what you read while conducting that research.
One plus one equals two, or 1 + 1 = 2. This is true because of the "+" operation, the plus sign, which is used for adding numbers or anything countable. The other thing that makes 1 + 1 = 2 work is the equals sign, which indicates that the operation equals the answer: 1 + 1 is equal to 2. But if we use it in another way, like "cat + cat equals dog", it is not right, even though 1 is not 2 just as cat is not dog. Counting is not comparison, so if you want to count, I guess your fingers can help.
Numbers developed along with human civilization, human feelings, and human necessities. The complexities developed later. Since those complexities are built on that simple logic, those findings and discoveries, the simple things can obviously be proved using the complex logic developed later. It is something you rely on, something you are standing on.
See... I have one thing here, and I add one thing to it, and I have two things. 1+1=2, proven. Sorry to be so simplistic about it, but the answer is pretty simple to me...
Tuvitor wrote:  See... I have one thing here, and I add one thing to it, and I have two things. 1+1=2, proven. Sorry to be so simplistic about it, but the answer is pretty simple to me... 
That's not a proof, it's an assertion. You might consider reading the rest of the thread, rather than repeating what's already been posted. Or not.
Well, from what I have understood,
there's no proof that 1 + 1 = 2.
It's an axiom.
When we have one object and another one object, we say it is two objects.
That's just a supposition that 1 + 1 = 2 and not 3 or 4.
Arrogant wrote:  Well, from what I have understood,
there's no proof that 1 + 1 = 2.
It's an axiom.
When we have one object and another one object, we say it is two objects.
That's just a supposition that 1 + 1 = 2 and not 3 or 4. 
No, it's not a supposition, nor is it an axiom. It is a theorem which can be proven from the Peano postulates (specifically, from axiom 1 and two successive applications of axiom 6). It really would help if you actually read the postings in the thread rather than posting redundant and wrong responses.
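For anyone curious what that Peano-style argument looks like when written out, here is a toy sketch in Python (my own encoding for illustration, not a formal proof system): natural numbers are built from zero by a successor function, 1 and 2 are defined as S(0) and S(S(0)), and addition recurses on its second argument.

```python
# Toy Peano arithmetic: a natural number is either ZERO or ("S", n).
ZERO = "0"

def S(n):
    """Successor of n."""
    return ("S", n)

def add(a, b):
    """Peano addition, defined by recursion:
         a + 0    = a
         a + S(b) = S(a + b)"""
    if b == ZERO:
        return a
    return S(add(a, b[1]))

one = S(ZERO)        # 1 is defined as S(0)
two = S(S(ZERO))     # 2 is defined as S(S(0))

# 1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = S(S(0)) = 2
assert add(one, one) == two
print("1 + 1 = 2 holds in this encoding")
```

The one-line comment above the assertion is essentially the whole proof: one application of the recursive case and one of the base case, exactly the "two successive applications" mentioned above.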
Infinisa: This was surely thought provoking. Liked reading your posts.
Take one finger on one hand, and another finger on the other hand.
Now we know counting: 1 2 3 4 ...
So count all the upright fingers on the left hand (that's one), then continue from where you left off and count all those on the right hand: you get two.
I agree with the intent of the original poster.
Look harder. Somewhere along the line, there are things being taken for granted.
How could any of this be discussed, if there wasn't some implied agreement about the meanings of words, numbers and symbols?
Nevertheless, useful things can be said within a limited context.
I can prove that 1 plus 1 equals 2.
Solution:
1 + 1 = 2. Simple mathematics.
zimmer wrote:  I can prove that 1 plus 1 equals 2.
Solution:
1 + 1 = 2. Simple mathematics.

Indeed, so very simple that it's hard to break it down and analyze the meaning. I suppose you think that 2 + 3 = 5, etc. But there has to be a pattern of simpler concepts for that to make any sense. My understanding is that addition is an abstraction describing repetitive counting operations.
Let me suggest that there is more than one way to "count", and therefore more than one way to "add", and that both ways make sense and are actually used in our world. The addition operator "+" has its conventional arithmetic meaning in ordinary use, but I will use the word "plus" in another sense, one that those who have studied music will understand clearly.
In music, a fourth plus a fifth is an octave. A fifth is two thirds, and a third is two seconds. The theory of musical intervals is not normally taught by treating their combination as a kind of addition, somewhat like arithmetical addition but with a different rule; yet if you count two thirds as 1 2 3 and 3 4 5, and do the same with all interval combinations, you will quickly see how it works out.
Music is not the only field that uses this form of "addition", but most of the others are probably somewhat obscure.
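The counting rule described above can be sketched as code. Because interval names count both endpoints, the shared middle note is counted twice when two intervals are stacked, so a p-th plus a q-th gives a (p + q - 1)-th rather than a (p + q)-th. (The function name here is my own illustration.)

```python
def combine_intervals(p, q):
    """Stack a q-th on top of a p-th. Interval names count both
    endpoints (a third spans notes 1 2 3), so the shared middle
    note is counted twice; subtracting 1 corrects for that."""
    return p + q - 1

print(combine_intervals(4, 5))  # fourth + fifth  = octave (8)
print(combine_intervals(3, 3))  # third  + third  = fifth  (5)
print(combine_intervals(2, 2))  # second + second = third  (3)
```

This is exactly the "1 2 3 and 3 4 5" counting from the post: the two runs overlap on one note, giving five notes in total, not six.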
One plus one equals two
واحد زائد واحد يساوي اثنين (Arabic: one plus one equals two)
一加一等於二 (Chinese: one plus one equals two)
אחד ועוד אחד שווה שתיים (Hebrew: one plus one equals two)
ஒரு பிளஸ் ஒன்று அல்லது இரண்டு சமம் (Tamil: one plus one equals two)
один плюс один равняется двум (Russian: one plus one equals two)
Mathematics is the only language in which anyone in the world can look at the first statement and know what it means, independent of their own language. It is true because it is an accepted constant in the language we use to represent quantity.
For this, you first have to know why 1 is one, 2 is two, and so on.
http://binscorner.com/pages/w/whyoneis1andtwois2.html
Go to this website to understand why 1 is one.
Then you will know: it's because the glyph has one angle in it. 0 is zero because it doesn't have any angles.
1 + 1 means the answer must have two angles in it (as 1 has one angle in it).
So which number can be drawn with two angles in it? The number 2.
So now we know that 1 + 1 = 2.
