Jun 21 2014

Building a CPU Part I – Implementing logic gates

For a long time now, I have been thinking to myself: “I know how to develop software, and I also have a little practice with drivers and compiling operating systems (I even tried to write my own boot-sector code and run it), but I never really understood how a CPU works.”

So I decided to take matters into my own hands and try to build a CPU. At first I thought about creating it all from scratch, but quickly enough I discovered that the number of transistors I would have to solder is enormous (I will soon show that even a simple NAND gate requires 4 transistors). So I decided to build it on an FPGA, which will make life easier, and instead of creating my own architecture, I decided to try and mimic the 6502 microprocessor, the CPU used by the NES. If everything goes according to plan, I will be able to play Super Mario on my own CPU.

This first post will have nothing to do with FPGAs, as logic gates are provided out-of-the-box when using an FPGA, so it will also have nothing to do with my implementation of the 6502 microprocessor. But in my opinion this part is the most important one, because logic gates are the basic building blocks of any electronic device, and in particular of a CPU.
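As a small taste of why logic gates are such good building blocks, here is a minimal sketch (my own illustration in Python, not part of the FPGA work) of the fact that NAND alone is universal: NOT, AND and OR can all be built out of it.

```python
# A minimal sketch of NAND universality.  `nand` models the gate that,
# as mentioned above, takes 4 transistors to build from scratch.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    # NOT x == x NAND x
    return nand(a, a)

def and_(a, b):
    # x AND y == NOT (x NAND y)
    return not_(nand(a, b))

def or_(a, b):
    # x OR y == (NOT x) NAND (NOT y)
    return nand(not_(a), not_(b))

# check the full truth tables against Python's own bit operators
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert not_(a) == (1 - a)
```

Since every gate reduces to NAND, an entire CPU is, in principle, just a very large pile of this one building block.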


Feb 5 2014

Introduction to Lie Algebras

In my last post I described the structure called an algebra. Although that structure has other uses, my main goal was to use it to define a new structure, called a Lie algebra.

A Lie algebra is a very useful structure, and it is used repeatedly in quantum mechanics and analytical mechanics.

Definition: A Lie algebra \mathfrak{g} is an algebra whose operation (usually denoted by [-,-]) satisfies the following two axioms:

  1. Alternativity: \forall a\in\mathfrak{g},\; [a,a]=0.
  2. Jacobi identity: \forall a,b,c\in\mathfrak{g},\; [a,[b,c]]+[b,[c,a]]+[c,[a,b]]=0.

One can easily show that a subalgebra or a quotient algebra of a Lie algebra is a Lie algebra.

Using the definition we can prove a quick lemma:

Lemma: Let \mathfrak{g} be a Lie algebra (over a field \mathbb{F}), and let a,b\in\mathfrak{g}. Then [a,b]=-[b,a].

Proof: From the bi-linearity of the operation [-,-] we have:

    \[ [a+b,a+b]=[a,a]+[b,b]+[a,b]+[b,a] \]

From the first axiom of Lie algebras we know that:

    \[ [a+b,a+b]=[a,a]=[b,b]=0 \]

so the equality above reduces to [a,b]+[b,a]=0, and therefore:

    \[ [a,b]=-[b,a] \]

as required. ■

(Note that this direction holds in any characteristic. It is the converse, that [a,b]=-[b,a] implies [a,a]=0, which requires the characteristic of \mathbb{F} to not be 2; we will always assume that from now on.)

We can also define the center of a Lie algebra (this can be done for a general algebra as well, but that's not our focus here) as follows:

Definition: The center of a Lie Algebra \mathfrak{g} is the set:

    \[ \mathcal{Z}(\mathfrak{g})=\{z\in\mathfrak{g}\mid\;\forall a\in\mathfrak{g},\;[a,z]=[z,a]\} \]

Note that in the case of a Lie algebra, because of the lemma we've proven, we can conclude that:

    \[ \mathcal{Z}(\mathfrak{g})=\{z\in\mathfrak{g}\mid\;\forall a\in\mathfrak{g},\;[a,z]=0\} \]


Examples:

  1. Take any vector space V, and define the operation (usually called the bracket) to be [-,-]=0. This is a commutative (also called abelian) Lie algebra, and of course the center of \mathfrak{g}=V is \mathfrak{g} itself.
  2. \mathfrak{g}=\mathbb{R}^3 with [a,b]=a\times b (the vector product). In this case the center of \mathfrak{g} is \{0\}.
  3. Let A be an associative algebra with product ab (for example, the algebra of n\times n matrices). Then the space A with the bracket [a,b]:=ab-ba is a Lie algebra, denoted by A_- or by \mathrm{Lie}\,A.
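The last two examples are easy to sanity-check numerically. Here is a small sketch (my own, in Python; the helper names `cross`, `matmul` and `comm` are mine) verifying both axioms for the vector product on \mathbb{R}^3 and for the matrix commutator:

```python
# Sanity-checking the two Lie algebra axioms numerically (pure Python).

def cross(a, b):
    # example 2: the vector product on R^3
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a, b, c = [1, 2, 3], [-4, 0, 5], [2, 7, -1]

assert cross(a, a) == [0, 0, 0]                      # axiom 1: [a,a] = 0
jacobi = [x + y + z for x, y, z in zip(cross(a, cross(b, c)),
                                       cross(b, cross(c, a)),
                                       cross(c, cross(a, b)))]
assert jacobi == [0, 0, 0]                           # axiom 2: Jacobi identity

# example 3: the commutator bracket [A,B] = AB - BA on 2x2 matrices
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

A, B, C = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [0, -1]]

assert comm(A, A) == [[0, 0], [0, 0]]                # axiom 1
assert madd(madd(comm(A, comm(B, C)),
                 comm(B, comm(C, A))),
            comm(C, comm(A, B))) == [[0, 0], [0, 0]] # axiom 2
```

Of course, a run like this only checks particular elements; the axioms themselves hold identically, as one can verify by expanding the brackets.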

Moreover, for the physicist readers: in analytical mechanics you have met the Poisson bracket, which satisfies the Jacobi identity, and in quantum mechanics physicists are always talking about the commutator of two operators (for example, the Hamiltonian and the momentum). Theorems from Lie algebra theory can easily be applied to these subjects, and it's another way of seeing how we can learn facts about the universe simply by playing with math.

I hope I will have time to write a few more posts about the subject (such as the main theorems).


Feb 5 2014


It's been a long time since my last post about mathematics, which is kind of a shame, because I wanted math to be one of the main subjects of this blog.
So I decided to start writing a little more about things I love in mathematics (and hopefully I will keep it up).

In this post, I’m assuming some basic Linear Algebra knowledge. This post is a little boring, but it defines an important algebraic structure that is used in some very beautiful subjects and theorems, so stay tuned.

Algebras and Subalgebras

Definition: An algebra A is a vector space over a field \mathbb{F}, endowed with a bilinear binary operation (*), i.e. \forall a,b,c\in A,\ \lambda,\mu\in\mathbb{F}:

    \[ a*(\lambda b+ \mu c)=\lambda a*b+\mu a*c \]

    \[ (\lambda b+ \mu c)*a=\lambda b*a+\mu c*a \]

For example, the polynomials in n variables, \mathbb{F}[x_1,\dots,x_n], with multiplication as the operation *, form an associative and commutative algebra. This algebra is usually called the polynomial algebra.

But please note, NOT all algebras are commutative! For example, we can look at the vector space of n\times n matrices, M_n(\mathbb{F}), with matrix multiplication as the operation *. M_n(\mathbb{F}) is an associative, non-commutative algebra.
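To see the non-commutativity concretely, here is a tiny sketch (my own) multiplying two 2\times 2 matrices in both orders:

```python
# A and B below satisfy A*B != B*A, so M_2(F) is not commutative.

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

assert matmul(A, B) == [[1, 0], [0, 0]]
assert matmul(B, A) == [[0, 0], [0, 1]]
assert matmul(A, B) != matmul(B, A)
```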

We can also define a subalgebra:

Definition: A subspace A'\subset A is called a subalgebra if \forall a,b\in A', a*b\in A'.

For example, the diagonal matrices are a subalgebra of the matrices algebra.
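A quick sketch (my own) of this closure property on 3\times 3 diagonal matrices:

```python
# The product of two diagonal matrices is again diagonal, so the
# diagonal matrices are closed under * and form a subalgebra.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D1 = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]
D2 = [[7, 0, 0], [0, 1, 0], [0, 0, -2]]

# entrywise: the product of diagonals is the diagonal of products
assert matmul(D1, D2) == [[14, 0, 0], [0, 3, 0], [0, 0, -10]]
```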


And just like with any other algebraic structure, we can define a homomorphism between two algebras as a linear map between the algebras that preserves the operation, that is:

Definition: Let A,B be algebras. A linear map \phi:A\to B is called a homomorphism if:

    \[ \forall a,b\in A\;\phi(a+_Ab)=\phi(a)+_B\phi(b) \]

    \[ \forall a\in A, \lambda\in\mathbb{F}\;\phi(\lambda a)=\lambda\phi(a) \]

    \[ \forall a,b\in A\;\phi(a*_Ab)=\phi(a)*_B\phi(b) \]
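A concrete homomorphism worth keeping in mind: evaluation at a point c, the map \mathbb{F}[x]\to\mathbb{F} given by p\mapsto p(c). A small sketch (my own, with polynomials stored as coefficient lists) checking the additive and multiplicative conditions at sample inputs:

```python
# ev_c : F[x] -> F, p |-> p(c) is an algebra homomorphism.

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (p[i] is the x^i coefficient)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def ev(p, c):
    """Evaluate p at c using Horner's rule."""
    acc = 0
    for coeff in reversed(p):
        acc = acc * c + coeff
    return acc

p = [1, 0, 2]      # 1 + 2x^2
q = [3, -1]        # 3 - x
c = 5

# multiplicativity: ev_c(p * q) == ev_c(p) * ev_c(q)
assert ev(poly_mul(p, q), c) == ev(p, c) * ev(q, c)
# additivity: ev_c(p + q) == ev_c(p) + ev_c(q)  (coefficientwise sum of p and q)
assert ev([1 + 3, 0 - 1, 2], c) == ev(p, c) + ev(q, c)
```

Homogeneity, \phi(\lambda a)=\lambda\phi(a), follows the same way since evaluation scales with the coefficients.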


Definition: A subspace I of an algebra A is a left (resp. right, resp. two-sided) ideal if:

    \[ \forall a\in A, b\in I\;a*b\in I \]

(resp. b*a\in I, resp. a*b,b*a\in I).

It is clear, by the definition, that any ideal is a subalgebra.
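A concrete ideal: in the polynomial algebra \mathbb{F}[x], the polynomials with zero constant term (the multiples of x) form a two-sided ideal. A small sketch (my own) of the closure property:

```python
# I = { p in F[x] : p(0) = 0 } is an ideal: for any a in F[x] and b in I,
# the product a*b again has zero constant term, so a*b is back in I.

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

a = [4, 1, 7]     # arbitrary: 4 + x + 7x^2
b = [0, 2, -3]    # in I: 2x - 3x^2 (constant term 0)

assert poly_mul(a, b)[0] == 0   # a*b is in I
assert poly_mul(b, a)[0] == 0   # b*a too (F[x] is commutative)
```

This works because the constant term of a product is the product of the constant terms.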

Using the ideals, we can look at the quotient space A/I (a space where every vector in I becomes the zero vector, meaning that two vectors that differ only by a vector in I are identified). The quotient space carries a canonical algebra structure given by:

    \[ (a+I)*(b+I)=a*b+I \]

which is called the quotient algebra A/I.
It's easy to see that the canonical map A\to A/I given by a\mapsto a+I is an algebra homomorphism.

Moreover, if \phi:A\to B is an algebra homomorphism, then the kernel \ker\phi is a two-sided ideal of A and the image \mathrm{Im}\,\phi is a subalgebra of B.
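To make the quotient construction concrete, take A=\mathbb{F}[x] and I=(x^2), the ideal of multiples of x^2. Every coset has a unique representative a+bx, so we can store cosets as pairs (a,b), and the rule (a+I)*(b+I)=a*b+I amounts to multiplying the representatives and discarding the x^2 term. A minimal sketch (my own; this quotient is known as the algebra of dual numbers):

```python
# Arithmetic in F[x] / (x^2): cosets are stored as pairs (a, b),
# standing for the coset a + b*x + I.

def qmul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, and x^2 = 0 in the quotient
    return (a * c, a * d + b * c)

# x * x = 0 in the quotient, even though neither factor is zero:
assert qmul((0, 1), (0, 1)) == (0, 0)
# ordinary products survive: (2 + 3x)(4 + 5x) = 8 + 22x + (dropped x^2 term)
assert qmul((2, 3), (4, 5)) == (8, 22)
```

Note that the quotient has zero divisors even though \mathbb{F}[x] has none, so properties of A need not survive the passage to A/I.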