Proof: a and n are relatively prime iff ax ≡ b (mod n) has a solution modulo n

The lemma on p. 231 of the book “A book of abstract algebra” by Charles C. Pinter states that if $a$ and $n$ are relatively prime, then the equation $ax\equiv b(\mbox{mod } n)$ has a solution modulo $n$. It is also stated below the lemma that if $a$ and $n$ are not relatively prime, then the equation $ax\equiv b(\mbox{mod } n)$ has no solution modulo $n$. By contraposition, the latter statement means that if the equation $ax\equiv b(\mbox{mod } n)$ has a solution modulo $n$, then $a$ and $n$ are relatively prime.

Generally, less emphasis is placed on the converse of the lemma in the literature. The purpose of this post is to clarify that the equivalence holds. To start with, $a$ and $n$ are relatively prime if and only if their greatest common divisor satisfies $\mbox{gcd}(a, n)=1$. Along these lines, the lemma can be stated in the following way: $\mbox{gcd}(a, n)=1$ if and only if the equation $ax\equiv b(\mbox{mod } n)$ has a solution modulo $n$.

Proofs of the fact that $\mbox{gcd}(a, n)=1$ implies that the equation $ax\equiv b(\mbox{mod } n)$ has a solution modulo $n$ are available in Pinter’s book and elsewhere in the literature, so the interest here is in proving the converse. It will be shown that if the equation $ax\equiv b(\mbox{mod } n)$ has a solution modulo $n$, then $\mbox{gcd}(a, n)=1$.
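Before turning to the proof, the equivalence can be confirmed by brute force for small moduli. The snippet below is an illustrative sketch (names are my own), with “a solution modulo $n$” read as a unique solution in $\mathbb{Z}_n$:

```python
from math import gcd

def solutions(a, b, n):
    """All x in {0, ..., n-1} with a*x ≡ b (mod n), found by brute force."""
    return [x for x in range(n) if (a * x - b) % n == 0]

# Illustrative check: the congruence has exactly one solution in Z_n
# precisely when gcd(a, n) = 1 (when gcd(a, n) = d > 1 the solution
# count is 0 or d, never 1).
for n in range(2, 30):
    for a in range(1, n):
        for b in range(n):
            unique = len(solutions(a, b, n)) == 1
            assert unique == (gcd(a, n) == 1)
```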

If the congruence $ax\equiv b(\mbox{mod } n)$ has a solution, then it has a solution modulo $m$, where $m = n/\mbox{gcd}(a,n)$ (see for example theorem 2, p. 231, in Pinter’s book). Obviously, $m\le n$.
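This theorem can likewise be checked numerically. In the sketch below (a hypothetical example of mine), the solutions of $6x\equiv 9(\mbox{mod } 15)$ in $\mathbb{Z}_{15}$ form a single residue class modulo $m=15/\mbox{gcd}(6,15)=5$:

```python
from math import gcd

def solutions(a, b, n):
    """All x in {0, ..., n-1} with a*x ≡ b (mod n), found by brute force."""
    return [x for x in range(n) if (a * x - b) % n == 0]

# Illustrative example: gcd(6, 15) = 3 divides 9, so the congruence is
# solvable, and its solutions in Z_15 are one residue class modulo m = 5.
a, b, n = 6, 9, 15
d = gcd(a, n)
m = n // d
sols = solutions(a, b, n)
assert sols == [sols[0] + k * m for k in range(d)]
```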

It will be shown that $m = n$. Towards this end, assume to the contrary that $m < n$. By assumption, the congruence $ax\equiv b(\mbox{mod } n)$ has a solution modulo $n$; in particular it has a solution, and therefore it has a solution modulo $m$.

Let $x\equiv c(\mbox{mod } m)$ be the solution modulo $m$ and $x\equiv r(\mbox{mod } n)$ the solution modulo $n$. Since $x = c$ is a solution of the congruence $ax\equiv b(\mbox{mod } n)$, it follows that $c\equiv r(\mbox{mod } n)$. The congruences $x\equiv r(\mbox{mod } n)$ and $c\equiv r(\mbox{mod } n)$ lead to $x\equiv c(\mbox{mod } n)$.

So, $x\equiv c(\mbox{mod } n)\Leftrightarrow x\equiv c(\mbox{mod } m)$. Notice now that $x\equiv c(\mbox{mod } n)$ implies $x-c\equiv 0(\mbox{mod } n)$, hence the coset equality $\overline{x-c}=\overline{0}_n$ holds in $\mathbb{Z}_n$, where $\overline{0}_n=\{\dots,-2n,-n,0,n,2n,\dots\}$. Similarly, $x\equiv c(\mbox{mod } m)$ yields $\overline{x-c}=\overline{0}_m$ in $\mathbb{Z}_m$, where $\overline{0}_m=\{\dots,-2m,-m,0,m,2m,\dots\}$. It has thus been concluded that $\overline{0}_n=\overline{0}_m$, which is a contradiction since for example $m\in\overline{0}_m$, while $m\not\in\overline{0}_n$ because $m<n$. Thereby, $m=n$.

Combining $m=n$ with $m = n/\mbox{gcd}(a,n)$ gives $\mbox{gcd}(a, n)=1$, which completes the proof.

An example of how to select ideals to obtain quotient rings with desirable properties

The following self-contained post demonstrates how to construct quotient rings with desirable properties by elaborating on an example taken from chapter 19, p. 194, of the book “A book of abstract algebra” by Charles C. Pinter.

The fundamental homomorphism theorem for rings employs the kernel $K$ of a homomorphism $f:A\rightarrow B$ to construct a quotient ring $A/K$ isomorphic to the homomorphic image $f(A)$. Conversely, passing to a homomorphic image of a ring $A$ is useful for factoring out unwanted features of $A$. Selecting an appropriate ideal $J$ is the key to acquiring a quotient ring $A/J$ with the desired properties.

In some cases, picking a suitable ideal $J$ can be done methodically by tracing the desired property of $A/J$ back to an analogous property of $J$. To demonstrate the process of selecting $J$, the simple example of obtaining a quotient ring $A/J$ that doesn’t contain any divisors of zero will be considered.

The example under consideration is stated as follows. Let $A$ be a commutative ring with unity. An ideal $J$ of $A$ is prime if and only if $A/J$ is an integral domain (see chapter 19, p. 194, in Charles Pinter’s book “A book of abstract algebra”).

Briefly recall the associated definitions. An integral domain is a commutative ring with unity that doesn’t have any divisors of zero. An ideal $J$ of a commutative ring $A$ is prime if $\forall a,b\in A$ with $ab\in J$ it follows that $a\in J$ or $b \in J$.

It is straightforward to prove the equivalence stated in this example. To prove that the quotient ring $A/J$ has no divisors of zero, it suffices to observe that $a\in J \Leftrightarrow J=J+a$. Emphasis will instead be shifted towards understanding why one would think of using a prime ideal to retain the cancellation property in $A/J$, i.e. to ensure that there are no divisors of zero in $A/J$.

Think first how the absence of divisors of zero is stated for $A/J$. The product of two cosets $J+a$ and $J+b$ in $A/J$ is $(J+a)(J+b)=J+ab$. Moreover, the zero element of $A/J$ is the coset $J=J+0$, so saying that $A/J$ has no divisors of zero means that $J+ab=J\Rightarrow J+a=J$ or $J+b=J$.

Recall also that for any $x\in A$, $x\in J \Leftrightarrow J+x=J$. It then becomes obvious that the property $J+ab=J\Rightarrow J+a=J$ or $J+b=J$ in $A/J$ corresponds to the property $ab\in J\Rightarrow a\in J$ or $b\in J$, which is the defining property of a prime ideal $J$.
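The correspondence can be observed concretely by taking $A=\mathbb{Z}_{12}$ and letting $J$ range over the ideals generated by the proper divisors of 12. The sketch below (function names are my own) checks that $J$ is prime exactly when the quotient, isomorphic to $\mathbb{Z}_d$, has no divisors of zero:

```python
from itertools import product

n = 12
Zn = range(n)

def ideal(d):
    """The ideal of Z_12 consisting of the multiples of a divisor d of 12."""
    return {x for x in Zn if x % d == 0}

def is_prime_ideal(J):
    """Check the defining property: ab in J implies a in J or b in J."""
    return all(a in J or b in J
               for a, b in product(Zn, repeat=2) if (a * b) % n in J)

def quotient_has_zero_divisors(d):
    """A/J is isomorphic to Z_d; look for nonzero a, b with ab = 0 in Z_d."""
    return any((a * b) % d == 0 for a, b in product(range(1, d), repeat=2))

# Illustrative check over the proper ideals of Z_12.
for d in (2, 3, 4, 6):
    assert is_prime_ideal(ideal(d)) == (not quotient_has_zero_divisors(d))
```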

So it appears that in some cases it is easy to choose an ideal $J$ whose properties are conveyed to a quotient ring $A/J$.

Proof: any element of a group of prime order generates the group

The following self-contained post clarifies the proof of theorem 4, chapter 13, p. 129, of the book “A book of abstract algebra” by Charles C. Pinter.

Let $G$ be a group of prime order $|G|=p$. It will be shown that for any $a\in G$ with $a\ne e$ it holds that $\langle a\rangle=G,$ where $\langle a\rangle$ is the set generated by $a$ and $e$ is the identity element of $G$.

Since the order of group $G$ is $|G|=p<\infty$, the order $\mbox{ord}(a)$ of element $a$ is also finite, i.e. $\mbox{ord}(a)=n<\infty$ for some positive integer $n$.

Note that the set $\langle a\rangle=\left\{a^0,a^1,\ldots,a^{n-1}\right\}$ generated by $a$ is a subgroup of $G$, since $\langle a\rangle$ is closed with respect to multiplication and with respect to inverses. Indeed, for any $a^i,a^j\in\langle a\rangle$, the division algorithm for integers ensures that there exist integers $q,r$ with $0\le r<n$ such that $i+j=nq+r$, whence $a^i a^j = a^{i+j}=a^{nq+r}=(a^n)^q a^r=e a^r=a^r\in \langle a \rangle$. Moreover, the inverse of any $a^i\in \langle a \rangle$ is $(a^i)^{-1}=a^{n-i}\in \langle a \rangle$.

The order $|\langle a \rangle|$ of the cyclic subgroup $\langle a \rangle$ of $G$ is equal to the order of generator $a$, that is $|\langle a \rangle|=\mbox{ord}(a)=n$.

By Lagrange’s theorem, $n$ is a factor of $p$. Since $p$ is a prime, $n=1$ or $n=p$. It follows from $a\ne e$ that $n\ne 1$, so $n=p$, which means that $|\langle a \rangle|=|G|$.

Since $\langle a \rangle\subseteq G$ and $|\langle a \rangle|=|G|<\infty$, it follows that $\langle a \rangle=G$, which completes the proof.
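The theorem can be seen in action on the additive group $\mathbb{Z}_p$, a concrete group of prime order chosen here only for illustration: every non-identity element generates the whole group.

```python
def generated(a, p):
    """The cyclic subgroup <a> of the additive group Z_p, by brute force."""
    return {(k * a) % p for k in range(p)}

# Illustrative check: in a group of prime order p, every nonzero element
# generates all of Z_p.
for p in (2, 3, 5, 7, 11, 13):
    for a in range(1, p):
        assert generated(a, p) == set(range(p))
```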

Clarification on the proof of isomorphism of finite cyclic groups

The following self-contained post clarifies the proof of theorem 1, chapter 11, p. 113-114, of the book “A book of abstract algebra” by Charles C. Pinter.

In order to show that any two cyclic groups of finite order $n$ are isomorphic to each other, it suffices to show that every cyclic group $\langle a\rangle=\left\{a^0,a^1,a^2,\ldots,a^{n-1}\right\}$ of finite order $n$ is isomorphic to the group $\mathbb{Z}_n$ of integers modulo $n$ with the operation of addition modulo $n$, where $\mathbb{Z}_n=\left\{0,1,2,\ldots,n-1\right\}$.

Towards this end, it can be shown that the function $f:\mathbb{Z}_n\rightarrow\langle a\rangle$ defined as $f(i)=a^i$ is a group isomorphism. Recalling the definition of group isomorphism, it suffices to show that $f$ is a bijective homomorphism. It is straightforward to check that $f$ is a bijective function. The fact that $f$ is a homomorphism is also easily proven using laws of exponents for groups, as follows: $f(i+j)=a^{i+j}=a^i a^j=f(i)f(j)$.

Proving the homomorphic property of $f$ as above works well if $f$ is taken to be $f:\mathbb{Z}\rightarrow\langle a\rangle,~f(i)=a^i,$ that is from the additive group of integers to the cyclic group $\langle a\rangle=\left\{\ldots,a^{-2},a^{-1},a^0,a^1,a^2,\ldots\right\}$ of infinite order generated by $a$. However, in the case of $f:\mathbb{Z}_n\rightarrow\langle a\rangle$ with $\langle a\rangle=\left\{a^0,a^1,a^2,\ldots,a^{n-1}\right\}$, there is a slight oversight; the addition implied by $f(i+j)$ is addition modulo $n$.

Let’s denote addition modulo $n$ by $\oplus$. The proof of the homomorphic property then takes the form $f(i\oplus j)=a^{i\oplus j}=a^i a^j=f(i)f(j)$. The identity $a^{i+j}=a^i a^j$ is not the same as $a^{i\oplus j}=a^i a^j$; the former is one of the common exponent laws for groups, while the latter is not.

What needs to be shown is that exponentiation gives the same result for ordinary addition and for addition modulo $n$, i.e. $a^{i+j}=a^{i\oplus j}$. The proof of this identity is simple, yet it is useful to be aware of such a potential oversight.

According to the division algorithm for integers, there exist unique integers $q$ and $r$ such that $i+j = nq+r$ and $0\le r < n$. Thus, $a^{i+j}=a^{nq+r}=(a^n)^q a^r$. Since the cyclic group $\langle a\rangle$ is of order $n$, it holds that $a^n=e$, where $e$ is the identity element of $\langle a\rangle$. So, $a^{i+j}=a^r$.

Notice that $r$ coincides with the result of the addition modulo $n$, that is $r=i\oplus j$. It follows that $a^{i+j}=a^{i\oplus j}$, which completes the proof.
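The whole argument can be verified numerically on a concrete cyclic group. In the sketch below (a hypothetical example of mine), $\langle a\rangle$ is the group of units modulo 7 generated by $a=3$, which has order $n=6$:

```python
n = 6
g = 3                          # 3 generates the units mod 7, a cyclic group of order 6
f = {i: pow(g, i, 7) for i in range(n)}   # f(i) = a^i

# f is a bijection from Z_6 onto <3> ...
assert len(set(f.values())) == n

# ... and a homomorphism with respect to addition modulo n,
# i.e. f(i ⊕ j) = f(i) f(j):
for i in range(n):
    for j in range(n):
        assert f[(i + j) % n] == (f[i] * f[j]) % 7
```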

Proof: every cycle expresses as a product of transpositions

The following self-contained post provides a proof of the fact that every cycle can be expressed as a product of one or more transpositions, as stated in chapter 8, p. 83, of the book “A book of abstract algebra” by Charles C. Pinter.

To show that a cycle $(a_1~a_2~\dots~a_{r-1}~a_r)$ can be written as a product of one or more transpositions, it suffices to prove the validity of the identity $(a_1~a_2~\dots~a_{r-1}~a_r)=(a_r~a_{r-1})\dots (a_r~a_2)(a_r~a_1)$, which will be done by mathematical induction on the length $r$ of the cycle.

For the base case $r=2$, the identity reduces to $(a_1~a_2)=(a_2~a_1)$, which holds trivially. Assume now that the identity holds for cycles of length $k$. It will be shown that it also holds for $k+1$, as in $(a_1~a_2~\dots~a_{k-1}~a_k~a_{k+1})=(a_{k+1}~a_k)(a_{k+1}~a_{k-1})\dots (a_{k+1}~a_2)(a_{k+1}~a_1)$.

Start by noticing that $(a_1~a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})=(a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})(a_{k+1}~a_1)$, where the product is applied right to left; this can be verified by applying both sides to each of $a_1,\dots,a_{k+1}$.

Due to the induction assumption, which says that a $k$-length cycle can be decomposed into a product of transpositions, $(a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})$ can be written as $(a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})=(a_{k+1}~a_k)(a_{k+1}~a_{k-1})\dots (a_{k+1}~a_3)(a_{k+1}~a_2)$.

By combining the last two equations, the conclusion follows for $k+1$.
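The identity behind the induction can also be tested mechanically by modelling permutations as dictionaries; the helper names below are my own, and composition is applied right to left as in the text.

```python
def compose(*perms):
    """Compose permutations (given as dicts), applied right to left."""
    def apply_all(x):
        for p in reversed(perms):
            x = p.get(x, x)
        return x
    keys = set().union(*perms)
    return {x: apply_all(x) for x in keys}

def cycle(elems):
    """The cycle (a1 a2 ... ar) as a dict mapping each ai to the next."""
    return {a: elems[(i + 1) % len(elems)] for i, a in enumerate(elems)}

# Illustrative check of (a1 ... ar) = (ar a_{r-1}) ... (ar a2)(ar a1)
# for the cycle (0 1 2 3 4):
c = [0, 1, 2, 3, 4]
transpositions = [cycle([c[-1], a]) for a in reversed(c[:-1])]
assert compose(*transpositions) == cycle(c)
```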

Proof: the product of disjoint cycles of a permutation is unique

Theorem 1, p. 82, in the book “A book of abstract algebra” by Charles C. Pinter states that “Every permutation is either the identity, a single cycle, or a product of disjoint cycles”. The theorem is proved in the book. After the proof, it is stated that it is easy to see that the product of cycles is unique, except for the order of factors. I have seen online proofs of this claim of uniqueness based on the concept of orbit. In what follows, an alternative proof is offered without relying on orbits.

Assume that there exists a permutation $\pi$ which can be written as a product of disjoint cycles in two ways. So, there are two collections, each consisting of disjoint cycles, $C=\left\{c_1,c_2,\dots,c_n\right\}$ and $S=\left\{s_1,s_2,\dots,s_m\right\}$ such that $C\ne S$ and $\pi=\prod_{i=1}^{n}c_i=\prod_{i=1}^{m}s_i$.

$C\ne S\Rightarrow C\not\subseteq S~\mbox{or}~S\not\subseteq C$. Without loss of generality, assume that $C\not\subseteq S$. Thus, $(\exists~\mbox{cycle}~c_i \in C)(\forall~\mbox{cycles}~s_k\in S) c_i\ne s_k$.

Let $a_q$ be some element in the cycle $c_i$. Since $a_q\in c_i$ and the cycles in $C$ are disjoint, $\pi(a_q)=c_i(a_q)\ne a_q$, so $a_q$ is moved by $\pi$. Therefore, there exists a cycle $s_l\in S$ such that $a_q\in s_l$.

The cycles $c_i$ and $s_l$ are disjoint from the rest of cycles in $C$ and $S$, respectively. Thus, $c_i(a_q)=s_l(a_q)=\pi(a_q)$.

Since $c_i\ne s_l$ while both cycles trace the successive images of $a_q$ under $\pi$, following these images must eventually produce a disagreement. Thereby, there exists an element $a_p$ in $c_i$ and in $s_l$ with $c_i(a_p)\ne s_l(a_p)$. This is a contradiction: the cycles $c_i$ and $s_l$ are disjoint from the remaining cycles in $C$ and $S$, respectively, so $c_i(a_p)$ and $s_l(a_p)$ must both equal the unique element $\pi(a_p)$, since the permutation $\pi$ is a function.
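For illustration, the decomposition whose uniqueness is proved above can be computed with the standard orbit-tracing construction (used here purely as a computational device, since the proof above deliberately avoids orbits). Rotating each cycle so its smallest element comes first gives a canonical form independent of the order in which the cycles are found; function names are my own.

```python
def disjoint_cycles(perm):
    """Decompose a permutation (a dict) into disjoint cycles, each rotated
    so its smallest element comes first, returned as a sorted list."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        if len(cyc) > 1:                  # drop fixed points (trivial cycles)
            m = cyc.index(min(cyc))
            cycles.append(tuple(cyc[m:] + cyc[:m]))
    return sorted(cycles)

# Illustrative example: pi = (0 2)(1 3 4), with 5 a fixed point.
pi = {0: 2, 2: 0, 1: 3, 3: 4, 4: 1, 5: 5}
assert disjoint_cycles(pi) == [(0, 2), (1, 3, 4)]
```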