author: niplav, created: 2019-03-20, modified: 2022-07-23, language: english, status: in progress, importance: 2, confidence: likely
“Naive Set Theory” by Paul Halmos is a short introduction to set theory. Here, I present solutions to the explicitly stated exercises and problems in that book.
It seems like there is no way one could use insetting (putting a given set into another set) and pairing, or pairing on two different inputs, to obtain the same set. However, if one allows pairing a set with itself, then pairing with would result in , which is also the result of insetting .
Proof:
Insetting and pairing must have different results because insetting will always result in a set with 1 element, and pairing will always result in a set with 2 elements. Therefore, they can't be the same set.
Pairing the sets a, b and c, d can't result in the same set unless and or and . Otherwise, would contain at least one element not in .
□
I am not exactly sure what I'm supposed to do here. I guess "observe" means "prove" here, so "prove that the condition has nothing to do with the set B".
Proof:
The last statement is trivially true.
□
Proof:
□
Proof:
□
Proof:
□
Proof:
□
Proof:
is true because and and .
□
Proof:
This is the case because (since intersections with are subsets of ), and the union with doesn't change the equation.
□
To be shown: The power set of a set with $n$ elements has $2^n$ elements. Proof by induction.
Proof:
Induction base: The power set of the empty set contains 1 element:
Induction assumption:
Induction step:
To be shown: .
contains two disjoint subsets: and . Those are disjoint because every element in contains (), but there is no element of that contains . Also, it holds that , because elements in the power set either contain or not; there is no middle ground. It is clear that , therefore .
□
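As a quick sanity check (my own addition, not part of the book), one can enumerate the power sets of some small sets and compare their sizes against $2^n$:

```python
from itertools import chain, combinations

def power_set(s):
    """Return the power set of s as a set of frozensets."""
    s = list(s)
    subsets = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    return {frozenset(c) for c in subsets}

# Check |P(S)| = 2^|S| for a few small sets.
for s in [set(), {1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}]:
    assert len(power_set(s)) == 2 ** len(s)
```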
To be shown:
Proof:
If , then . Therefore, and and thereby and . This means that .
If , then a very similar proof can be written: and , so and . Then and therefore .
□
To be shown:
Proof:
If , then or . Since it is true for any set that , it is true that (similar argumentation if ).
□
A reasonable interpretation for the introduced notation: If , then
Similarly, if , then
The symbol still stands for the power set.
To be shown:
Proof by induction.
Induction base:
Induction assumption:
Induction step:
The last step uses , since is also just a set.
□
To be shown:
Proof by induction.
Induction base:
Induction assumption:
Induction step:
The last step uses , since is also just a set.
□
To be shown:
Proof:
. Furthermore, . Since , it holds that .
□
And "E is always equal to (that is ), but that the result of applying and to in the other order is a set that includes E as a subset, typically a proper subset" (p. 21).
I am not entirely sure what this is supposed to mean. If it means that we treat as a collection, then . But that doesn't mean that : If , then , and .
If we treat simply as a set, then , and it is of course clear that , as for all other subsets of .
□
"find an intrinsic characterization of those sets of subsets of A that correspond to some order in A"
Let be the set of all possible orderings of . In the case of , would be .
Some facts about every element :
, where is the smallest element in the ordering , the element in for which there is no other element so that (which means that ).
. must therefore be in and be the biggest element (no element so that ).
For all elements except there exists at least one element so that .
Similarly, for all elements except there exists at least one element so that .
For every except and , there exist two unique elements so that is the only set in for which it is true that .
For every , there must exist two sets so that (except for ). This means that the size of is the size of .
These conditions characterise intrinsically and are the solution to the question.
(i) To be shown:
Proof:
□
(ii) To be shown:
□
(iii) To be shown:
Two-sided proof by contradiction:
1.
Let . Then , and . Suppose . Then . Then . Then and . But if , then ! Contradiction.
2.
Let . Then and . Because must be in , and there is no flexibility there, . Suppose . Since necessarily , . But if , must be an element of . Then , and there is a contradiction.
□
Reflexive, but neither symmetric nor transitive (symmetry violation: , transitivity violation: ):
Symmetric, but neither reflexive nor transitive (reflexivity violation: , transitivity violation: ):
Transitive, but neither reflexive nor symmetric (reflexivity violation: , symmetry violation: ):
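Since such relations are easy to check mechanically, here is a small sketch with my own example relations on $\{1, 2, 3\}$ (they need not be the ones intended above), verified programmatically:

```python
X = {1, 2, 3}

def reflexive(r):
    return all((x, x) in r for x in X)

def symmetric(r):
    return all((y, x) in r for (x, y) in r)

def transitive(r):
    return all((x, w) in r for (x, y) in r for (z, w) in r if y == z)

# Reflexive, but neither symmetric nor transitive:
r1 = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3)}
# Symmetric, but neither reflexive nor transitive:
r2 = {(1, 2), (2, 1)}
# Transitive, but neither reflexive nor symmetric:
r3 = {(1, 2)}

assert reflexive(r1) and not symmetric(r1) and not transitive(r1)
assert symmetric(r2) and not reflexive(r2) and not transitive(r2)
assert transitive(r3) and not reflexive(r3) and not symmetric(r3)
```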
We shall write for the set of all equivalence classes. (Pronounce as “X modulo R,” or, in abbreviated form, “X mod R.” Exercise: show that is indeed a set by exhibiting a condition that specifies exactly the subset of the power set ).
— Paul Halmos, “Naïve Set Theory”, p. 38, 1960
To be honest, I'm not quite sure what exactly I am supposed to do here. has been defined as being a set; how can I prove a definition?
But I can try and construct from :
□, I guess?
Basically, the question is "Which projections are one-to-one", or, "Which projections are injective"?
The answer is: A projection is injective iff . Or, simpler: Every element of occurs at most once in the relation. This can be extended easily to relations composed of more than 2 sets.
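A minimal sketch of this criterion (my own formulation): the projection of a relation onto its first coordinate is injective exactly when no first coordinate repeats.

```python
def projection_to_domain_is_injective(relation):
    """The projection (x, y) -> x is injective iff each x occurs at most once in the relation."""
    first_coordinates = [x for (x, _) in relation]
    return len(first_coordinates) == len(set(first_coordinates))

assert projection_to_domain_is_injective({(1, 'a'), (2, 'b')})
assert not projection_to_domain_is_injective({(1, 'a'), (1, 'b')})
```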
(i) To be shown:
1. (the empty set is a function from to ).
This is true because is a relation so that , and . Or: is a set of pairs that maps all elements in to , and therefore a function from to .
2. Assume
Then , and . But can only be , but it was assumed that . Therefore, no such can exist.
(ii) To be shown:
Assume . Then (or: maps all elements of to an element in ). However, there are no elements in the empty set (that I know of), so can't exist.
However, if , then (i) applies. So .
Here, the reader is asked to formulate and prove the commutative law for unions of families of sets.
For context, the associative law for unions of families of sets is formulated as follows:
Suppose, for instance, that is a family of sets with domain , say; write , and let be a family of sets with domain . It is then not difficult to prove that
Okay, this is all fine and dandy, but where am I going with this? Well, I have probably misunderstood this, because the way I understand it, the correct way to formulate the commutative law for unions of families does not hold.
Let's say is a set indexed by , or, in other words, is a family. Let's then define a new operator index-union to make these expressions easier to read: as .
The associative law then expanded reads as
or, simpler, as .
Then, simply, the commutative version of the law would be .
Then there is a very simple counterexample:
, . Then a family from A to B could be and a family from B to A could be . Then and , and those two are different sets.
Despite my obvious love for unnecessarily inventing new notation, I'm not a very good mathematician, and believe (credence ) that I have misunderstood something here (the rest is taken up by this being an editing/printing mistake). I am not sure what, but would be glad about some pointers where I'm wrong.
Other ways of interpreting the exercise also have obvious counter-examples.
If are two families in (as functions: ), then the counterexample is
If are two families of sets in (as functions: ), then the counterexample is
To be shown:
(i):
1.
Let . Then there exists an so that and a so that . Then , and furthermore . Since , is contained in there as well.
Only in this case will I attempt to write out the proof more formally:
2.
Let . Then there exists so that . But then and , which means that is also in their intersection.
□
(ii):
1.
Let . Then is an element of all of or an element of all of (or both). Since that is the case, is always an element of . Then is also an element of the intersection of all of these unions .
This can be written down more formally as well:
2.
Let . Then for every and , . That means that for every , : if there is just one so that , for the last sentence to be true, must compensate for that. Or, if not an in every , then this must be true for every (with a similar reasoning as for ). But if in every or every , it surely must be in their union.
□
To be proven:
One can say that because there must be at least one so that , and because all combinations of elements are chosen from and .
□
To be proven:
□
To be proven:
Proof:
□
(i) To be shown:
If , then there exists an for which . Then also (since the same set of indices is iterated over).
If otherwise , then there is also an for which . Then is at least once an element of .
(ii) To be shown:
Example:
Then , but .
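One concrete instance of such an example (my own choice of a non-injective function, not necessarily the one intended above):

```python
# f maps both 1 and 2 to 0, so it is not injective.
f = {1: 0, 2: 0}
A, B = {1}, {2}

def image(func, s):
    return {func[x] for x in s}

assert image(f, A & B) == set()           # f(A ∩ B) = f(∅) = ∅
assert image(f, A) & image(f, B) == {0}   # f(A) ∩ f(B) = {0}, strictly larger
```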
A necessary and sufficient condition that map onto is that the inverse image under of each non-empty subset of be a non-empty subset of . (Proof?)
But that's—false? Unless I understand something different by "map" (which I just take as relates from one set to another, possibly in a many-to-one relation).
Example:
. , etc. for all subsets of $X$. Then .
To be proven: $h(gf)=(hg)f$
Proof:
$((hg)f)(x)=(hg)(f(x))=h(g(f(x)))=h((gf)(x))=(h(gf))(x)$
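A quick numerical check of the same identity, with toy functions of my own choosing:

```python
f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x ** 2

def compose(p, q):
    return lambda x: p(q(x))

# h∘(g∘f) and (h∘g)∘f agree on every input we try.
for x in range(10):
    assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)
```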
what do , , , and mean?"
: is the father of , : is the brother of . $xRSy$: $x$ is a nephew of $y$. $xR^{-1}S^{-1}$.
To be proven:
Proof:
Let . Then . Then . Then and , and then .
is there a connection among I, , and ?
Shouldn't ? No. Because if and , then . But , and . I think also that , and (which is true by the previously proven theorem). I don't think there are any further connections here.
(i) Given and , then is "one-to-one" and " maps onto ".
Judging from what I understand, " is one-to-one" would mean that is injective, and " maps onto " just means that is surjective?
Wikipedia agrees with these hunches.
, because if for , then and $f(y)=x_2$, which is not possible.
$g$ is surjective, since every element in has exactly one corresponding element in $X$ through $g$, but elements in must also be mapped to (it could be that , but that is not guaranteed).
□
(ii) To be proven: iff for all subsets of and , injective.
First case: injective .
Let , then there exists exactly one so that . That means . Then .
Second case: injective.
More formal:
.
Let then , so that . Let then so that and .
Then , but . So those can't exist, therefore f is injective.
□
(iii) To be proven: f injective for all subsets of A.
First case: f inj.
Let and . Then so that , and there is no (since f is injective). But since (by definition) and , e must be in .
Second case: inj.
Assume so that . Let then and . Then (since ). But since , . Contradiction; f must be injective.
□
(iv) To be shown:
First case: f surj.
Assume and . If , then so that . But since f is surjective, there must be an x so that and ! Contradiction.
Second case: .
Assume . Then for all A, but (since y is not in the range). Contradiction of the assumption; such a y can't exist.
Since the intersection of every (non-empty) family of successor sets is a successor set itself (proof?)
Let be a family of successor sets.
Every is a successor set, so it is guaranteed that . One can assume that , which implies that . But since all are successor sets, must also be an element of all of them. Then , and the intersection of the family is a successor set.
□
However, remember the definition of a family: a family of sets is a function that maps from an index set into the powerset of a set, so this function is .
In this case, we could set and , with . That would make f a non-empty family, but , which is definitely not a successor set. Since the solution I presented above seems exactly like the thing Halmos would want me to do, I guess I have misunderstood the definition of "family of sets".
Prove that if $n$ is a natural number, then
For , :
Let be the number such that , and assume that .
Then for to be equal to , would need to be equal to or . That would contradict the assumption, so .
□
This would be much easier to prove if the Axiom of Regularity had been introduced: if we know that it can't be that , then would need to be for to be equal to , which could only be the case if , violating the axiom.
Prove that if is a natural number, […] if , then for some natural number .
This feels true almost by definition?
On pg. 44 a natural number is defined as "an element of the minimal successor set ". Then, if , there must be an element of so that (by the second Peano axiom and the condition that can contain no superfluous elements, which would be the third Peano axiom).
Prove that is transitive.
I don't understand what is being asked from me here. is not a relation since it's not a set of ordered sets with two elements (tuples), so this question is underspecified.
Prove that if is a non-empty subset of some natural number, then there exists an element in such that whenever is an element of distinct from .
Trying to decode this first into first-order logic and then natural language again, I get that if , then . Decoding into natural language, this roughly means that every non-empty subset of the natural numbers has a minimum.
Statement: for all .
Basis: For .
Assume this holds for .
Step . , so .
Assume such a did not exist. Then for every , it would hold that there is some so that .
But then it must hold that (it can't be that and for natural numbers). If that isn't the case (so ), then there must exist some so that . But since contains only finitely many elements, this leads to a cycle (which is not possible, since that would mean that and ). So such a must exist.
The discovery and establishment of the properties of powers, as well as the detailed proofs of the statements about products, can safely be left as exercises to the readers.
To be shown: .
Induction over .
Induction base: .
Induction assumption: .
Induction step:
To be shown: .
First, because , and if , then .
Second, for , because (as per distributivity).
Induction over .
Induction base: If , then .
Induction assumption: .
Induction step: (as per definition of the product).
To be shown: .
Induction over .
Induction base: If , then (since ).
Induction assumption:
Induction step: (using distributivity).
As for the properties of the power, I don't really want to spend much time and energy proving them. Let it suffice to say that the powers are neither associative (counterexample: ) nor commutative (counterexample: ) nor distributive over either addition (counterexample: ) or multiplication (counterexample: ).
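Concrete numbers (my own choice, not necessarily the counterexamples intended above) for the non-associativity and non-commutativity claims:

```python
# Exponentiation is not associative ...
assert (2 ** 3) ** 2 == 64
assert 2 ** (3 ** 2) == 512

# ... and not commutative.
assert 2 ** 3 == 8
assert 3 ** 2 == 9
```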
They have two interesting properties: , and .
For commutative hyperoperators, see Ghalimi 2019.
To be shown: if , then
Induction over
Induction base: If , then
Induction assumption:
Induction step:
We know that .
We can prove that :
In the base case, , in which clearly holds. Assume that . Then .
Therefore, we know that and therefore and therefore .
To be shown: if and , then .
Induction over . Base case: (I don't think the text proves that , but I also don't think it would be that useful for me to do that here).
Induction assumption: .
Induction step: It must hold that . In general, if and , then because (two applications of the theorem above, and using the commutativity of addition), so .
Therefore, .
To be shown: Every non-empty set has a minimum.
(I tried to prove this constructively, but the result was some weird amalgam of a constructive and an inductive proof. Oh well.)
Let be any element in . Then if , then there can be no element of smaller than , and is smaller than any other natural number.
If , then for any , if we know that and , then is the number so that no natural number smaller than , since we know that all numbers are not in .
Assume that there exists a bijection between and some finite natural number . Then , but , since . Then, by the pigeonhole principle, can't exist: there is at least one element too many in to be matched to .
Therefore, such an can't exist, and is infinite.
The proof here is the same as the one in exercise 2.
To be shown: the union of a finite set of finite sets is finite (wouldn't this be better if it were a finite family of finite sets? Whatever.).
Proof: Let be our set, and be the union of the finitely many elements of .
If , then the union is the empty set , which is clearly finite (equivalent to ).
If , then the resulting set is just the only element of , which is per definition finite.
Assume that , and that is finite. Let then , where is a finite set. Then . Since we know that both and are finite, we know that is therefore also finite, per the statement in the text that the union of two finite sets is also finite.
But we can also prove that is finite if and are finite: If , then is still finite (either or ). We can then set and . Since we can repeat this only finitely many times, the resulting must still be finite when (this is one of my very few constructive proofs, be gentle please).
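A sketch of that iterative construction (using Python sets as stand-ins for finite sets; my own rendering, not the book's):

```python
def union_of_finite_collection(sets):
    """Union a finite collection of finite sets by merging one set at a time."""
    result = set()
    for s in sets:
        result = result | s  # the union of two finite sets is finite
    return result

assert union_of_finite_collection([{1, 2}, {2, 3}, set()]) == {1, 2, 3}
```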
To be shown: if is finite, then is finite, and .
Proof: This was shown in Exercise 2 in Section 5 (I use the notation of for the size of a set there). Since we can construct the size of , it is equivalent to some natural number, and therefore finite.
To be shown: If is a non-empty finite set of natural numbers, then there exists an element in such that for all in .
We start and set . We then choose an element from , and set . If , we set . Else we do nothing. We know this process is repeated finitely many times, and once we end up with , we have created a such that for all .
(Is this a constructive proof? It feels like one, but also like writing code.)
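It does translate almost directly into code; a sketch under the assumption that the finite set is given as a non-empty Python set of natural numbers:

```python
def maximum(e):
    """Return k in e such that n <= k for all n in the non-empty finite set e."""
    remaining = set(e)
    k = remaining.pop()   # start with an arbitrary element
    while remaining:
        n = remaining.pop()
        if n > k:         # "if n > k, set k = n; else do nothing"
            k = n
    return k

assert maximum({3, 1, 4, 5}) == 5
```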
Totality:
Antisymmetry:
A set may have no lower bounds or upper bounds at all, or it may have many; in the latter case it could happen that none of them belongs to . (Examples?)
An example of a set without lower or upper bounds is (where always ). If and the partial order on is and and , then the lower bounds are . If , then every element of is a lower bound of (since the empty all-quantification is true), but none of them are in .
To be proven: The Cartesian product of a finite family of sets , at least one of which is empty, is necessarily and sufficiently also empty.
Base case: As suggested by the book, I start the induction at 1: In this case , the set containing the empty set. The Cartesian product of with no other set is also the empty set.
Induction assumption: If is a family of sets with elements, at least one of which is empty, then .
Induction step: If is a family of sets with elements, at least one of which is the empty set, then either is the empty set, or the empty set is among the first elements.
Let . Then in the former case we know that (as per the text). In the latter case we know that , so in both cases we get the empty set as a result.
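For what it's worth, `itertools.product` agrees on small instances (my own check, not part of the proof):

```python
from itertools import product

# One empty factor makes the whole Cartesian product empty.
assert list(product({1, 2}, {3}, set())) == []
# Without an empty factor, the product is non-empty.
assert len(list(product({1, 2}, {3}))) == 2
```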
To prove:
there exists a function with domain such that if , then
and
if is a collection of pairwise disjoint non-empty sets, then there exists a set such that is a singleton for each in
are equivalent to the axiom of choice:
Axiom of choice. The Cartesian product of a non-empty family of non-empty sets is non-empty.
We need to prove two directions: and .
For , let . Let be any element of (Can we do that? Or does this then circularly depend on a choice function on ?). Then we can define by (since is a tuple).
For , we can construct at least one element of the Cartesian product of by constructing the element . This tuple must be an element of the Cartesian product of the elements in , therefore that Cartesian product can't be empty.
This is actually a special case of I: we just define . This gives us the results from above.