Using nothing but logic gates, I implemented a fully functional Hi-RISC processor. This Highly-Reduced-Instruction-Set Computer was coded in HDL and compiled onto an FPGA. The project gave me a thorough understanding of how processors function at the microarchitecture and instruction level, uncovering the mysteries of these shiny chips.


The beauty of Computer Systems Engineering

The Computer Systems Engineering degree I am enrolled on satisfies my curiosity by falling and rising through levels of computational abstraction like a bungee jumper. One day I learn about the physics of transistors and light waves. The next day I learn about high-level software, data structures, and algorithms, standing on stacks and stacks of technology abstracted away from human observation. And the very next day I learn how a CPU can be built from nothing but 1s, 0s, and logic gates. These same concepts power supercomputers predicting the weather, cars driving autonomously, and antennas providing internet connectivity to thousands of people.

Processor specification

This 16-bit processor had an instruction set of 16 instructions covering bitwise logic, integer arithmetic, and data manipulation. The control unit decoded each instruction and passed control signals to the ALU or the registers, depending on the instruction. There was a main register file that could hold up to 62 sixteen-bit words, a flag register holding 6 flags that were set by the results of operations, and a program counter keeping track of which instruction to execute next.
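
To make that description concrete, here is a minimal sketch of such a datapath. The post only says the design was written in "HDL", so the choice of Verilog, the instruction encoding (a 4-bit opcode plus two 6-bit register fields), the opcode values, and the flag layout are all assumptions for illustration, not the actual design.

```verilog
// Minimal single-cycle datapath sketch. Encoding, opcodes, and flag
// names are illustrative assumptions, not the author's real design.
module hi_risc_sketch (
    input  wire        clk,
    input  wire        rst,
    input  wire [15:0] instr        // current 16-bit instruction
);
    // Program counter: selects the next instruction each cycle.
    reg [15:0] pc;

    // Main register file: 62 sixteen-bit words.
    // (Indices 62-63 of the 6-bit fields are assumed unused.)
    reg [15:0] regfile [0:61];

    // Flag register: 6 flags driven by ALU results (names assumed;
    // only three are updated in this sketch).
    reg [5:0] flags;

    // Decode: a 4-bit opcode distinguishes the 16 instructions.
    // Assumed layout: [15:12] opcode, [11:6] dest reg, [5:0] src reg.
    wire [3:0] opcode = instr[15:12];
    wire [5:0] rd     = instr[11:6];
    wire [5:0] rs     = instr[5:0];

    // ALU: bitwise logic and integer arithmetic, per the spec above.
    reg  [15:0] alu_out;
    reg         carry;
    always @(*) begin
        carry = 1'b0;
        case (opcode)
            4'h0: alu_out = regfile[rd] & regfile[rs];           // AND
            4'h1: alu_out = regfile[rd] | regfile[rs];           // OR
            4'h2: alu_out = regfile[rd] ^ regfile[rs];           // XOR
            4'h3: {carry, alu_out} = regfile[rd] + regfile[rs];  // ADD
            4'h4: {carry, alu_out} = regfile[rd] - regfile[rs];  // SUB (borrow)
            4'h5: alu_out = regfile[rs];                         // MOV
            default: alu_out = 16'h0000;     // remaining opcodes omitted
        endcase
    end

    // Writeback: commit the result, update flags, advance the PC.
    // (For brevity, every opcode writes back here.)
    always @(posedge clk) begin
        if (rst) begin
            pc    <= 16'h0000;
            flags <= 6'b000000;
        end else begin
            regfile[rd] <= alu_out;
            flags[0]    <= carry;               // carry flag
            flags[1]    <= (alu_out == 16'h0);  // zero flag
            flags[2]    <= alu_out[15];         // negative flag
            pc          <= pc + 16'h0001;       // no branches in this sketch
        end
    end
endmodule
```

Even this toy version shows the essential loop: decode the opcode, let the ALU compute, write the result and flags back, and advance the program counter.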

Future improvements

The next step would be pipelining. By improving the control unit, I could have multiple instructions executing at the same time. This would not improve latency, as each instruction would still take the same time to complete, but the throughput (instructions per second) could be greatly increased. The idea is to start the next instruction before the last one has finished, as long as the two do not use the same resources. Branch prediction would need to be implemented too, as otherwise the processor would not know which instructions are (likely) safe to start next; the sketch below illustrates the idea.
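
As a concrete illustration of the overlap, here is a two-stage fetch/execute sketch using the simplest possible branch policy, predict not taken: fetch the fall-through instruction every cycle, and squash it with a bubble when the instruction in execute turns out to be a taken branch. The opcode value, the target field, and treating the branch as always taken are assumptions made for the example, not the planned design.

```verilog
// Two-stage fetch/execute pipeline with predict-not-taken.
// Signal names and the branch encoding are illustrative assumptions.
module two_stage_pipe (
    input  wire        clk,
    input  wire        rst,
    input  wire [15:0] fetched_instr,   // instruction memory output for pc
    output reg  [15:0] pc
);
    reg  [15:0] ex_instr;   // pipeline register: instruction in execute

    // Assumed branch encoding: opcode 4'hF, 12-bit absolute target,
    // taken unconditionally in this sketch.
    wire        take_branch = (ex_instr[15:12] == 4'hF);
    wire [15:0] branch_tgt  = {4'h0, ex_instr[11:0]};

    always @(posedge clk) begin
        if (rst) begin
            pc       <= 16'h0000;
            ex_instr <= 16'h0000;        // treat 0 as NOP
        end else if (take_branch) begin
            // Misprediction: squash the instruction fetched down the
            // fall-through path and redirect the PC to the target.
            pc       <= branch_tgt;
            ex_instr <= 16'h0000;        // insert a bubble (flush)
        end else begin
            // Overlap: fetch pc+1 while the instruction at pc executes.
            pc       <= pc + 16'h0001;
            ex_instr <= fetched_instr;
        end
    end
endmodule
```

The cost of the wrong guess is the one-cycle bubble on every taken branch; a real predictor would earn its keep by making that bubble rare.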

Looking back

Modern CPUs can seem incredibly intimidating, with their complex designs and billions of cycles per second. Breaking them down into their core components and implementing those components myself gave me a deep understanding of how they work.