BSc (Hons)/FdSc Computer Science Computer Systems Assignment 1: Computer Systems Building Blocks and Logical Processes: Evolution, Components, and Features



Table of Contents

Introduction

Task 1: Foundational Elements and Logical Procedures

Task 2: Influence of Important People and Their Roles in Developing Technologies

Conclusion

References

 

Introduction 

Hardware has improved markedly over the past decades, broadening the idea of the computer from a machine for pure computation into a platform on which sophisticated technologies can be built. While software has enjoyed most of the limelight in modern development, hardware realities remain crucial to virtualization, cloud computing, fog computing, and the Internet of Things (IoT). This report examines the building blocks of a computer, namely the processor, memory, registers, and cache, and discusses the logical processes behind their operations. It also evaluates the contributions of key figures who have significantly shaped hardware and its application in emerging technologies.

Task 1: Foundational Elements and Logical Procedures 

The individual components that comprise a computer, including the processor, memory, registers, and cache, are essential to its successful operation (Hosseininia et al., 2024). Each of these components relies on some form of logic, such as signalling, data representation, and instruction handling, which together determine the overall performance of the computer.

The processor, known as the central processing unit or CPU, is the control centre of the computer: it carries out the instructions that drive application programs. The basic structure of a processor has distinct subparts, notably the Arithmetic and Logic Unit (ALU) and the Control Unit (CU). The logical processes within the CPU follow an instruction cycle comprising three key stages: fetch, decode, and execute (Pawar et al., 2024). In the fetch stage the CPU retrieves an instruction from memory; in the decode stage it determines the operation to be performed; and in the execute stage the ALU carries out that operation. Internal communication within the processor is synchronized by a clock signal, and the clock speed indicates how many cycles, and therefore roughly how many instructions, the CPU can perform per second.
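
The fetch-decode-execute cycle described above can be sketched as a toy interpreter. The instruction format and opcodes below are invented for illustration and do not correspond to any real instruction set:

```python
# A toy fetch-decode-execute loop over a unified memory holding
# both instructions and data (hypothetical 2-field instructions).

MEMORY = {
    0: ("LOAD", 10),   # load memory[10] into the accumulator
    1: ("ADD", 11),    # add memory[11] to the accumulator
    2: ("STORE", 12),  # store the accumulator to memory[12]
    3: ("HALT", None),
    10: 7, 11: 5, 12: 0,
}

def run():
    pc, acc = 0, 0                # program counter and accumulator
    while True:
        opcode, addr = MEMORY[pc] # fetch: read the instruction at PC
        pc += 1
        if opcode == "LOAD":      # decode, then execute
            acc = MEMORY[addr]
        elif opcode == "ADD":
            acc += MEMORY[addr]
        elif opcode == "STORE":
            MEMORY[addr] = acc
        elif opcode == "HALT":
            return acc

print(run())  # 12
```

Each loop iteration performs one full cycle: the program counter drives the fetch, the `if`/`elif` chain plays the role of the decoder, and the arithmetic stands in for the ALU.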

Memory, specifically Random Access Memory (RAM), is another core component of a computer. RAM can be described as a workspace where data and instructions are held while the CPU processes them. It is short-term storage: its contents are erased when the computer is switched off (Zahoor et al., 2024). The logical processes associated with RAM are data storage and data retrieval. When a program is executed, its instructions and related data are loaded into RAM so that the CPU can access them quickly.

Memory addressing makes this interaction possible by assigning each storage location in RAM a unique address, represented in standard binary format, which allows fast storage and retrieval of data. Registers are small, high-speed storage units within the CPU that play a key role in the immediate processing of data (Zahoor et al., 2024). Unlike RAM, registers are directly accessible to the processor and hold data only temporarily during processing; for example, operands and results are held in registers during arithmetic operations. The logical processes of registers are tightly synchronized with instruction execution. When the CPU fetches an instruction, data is usually transferred between registers and memory under control signals, which lets the processor keep working without being delayed by accesses to slower memory components.
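
Binary memory addressing can be illustrated with a small sketch; the 4-bit address width here is a hypothetical choice to keep the example tiny:

```python
# An n-bit address selects one of 2**n cells, so a 4-bit address
# space has 16 uniquely addressable locations.

ADDRESS_BITS = 4
ram = [0] * (2 ** ADDRESS_BITS)   # 16 cells, addresses 0..15

def store(address, value):
    ram[address] = value          # write to a unique binary address

def load(address):
    return ram[address]           # read it back by the same address

store(0b1010, 42)                 # 0b1010 is address 10 in binary
print(load(10))                   # 42
```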

Further, cache memory, a type of memory placed close to the processing unit, reduces computation time. The cache sits between the CPU and main memory, holding frequently needed data and instructions (Yaghoobi, 2024). Cache logic uses caching algorithms to decide which data should be cached or replaced, such as Least Recently Used (LRU) and First In, First Out (FIFO), to ensure the cache holds the data most likely to be accessed next by the CPU. To balance speed against storage capacity, the cache is organized hierarchically in levels (L1, L2, and L3). Each of these components, the processor, memory, registers, and cache, works together through intricate logical processes.
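
The LRU replacement policy mentioned above can be sketched in a few lines. This is a software analogy of the hardware mechanism, not how a real cache controller is built:

```python
from collections import OrderedDict

# Minimal LRU cache sketch: a hit moves the entry to the
# "most recent" end; at capacity, the least-recently-used
# entry is evicted to make room.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        self.data[key] = value

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # evicts "b", the least recently used
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

A FIFO policy would differ only in that `get` never reorders entries, so eviction always removes the oldest insertion regardless of how recently it was read.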

Processor (CPU): The Heart of Computing

The processor, memory, registers, and cache are related structures in a computer, and each is responsible for a particular task in the finished product. They are built around programmatic functions such as instruction cycles, data input and output, signalling, and memory organization. This report discusses these components in detail to explain how they interrelate and what impact they have on computational performance. The processor, or central processing unit, is the part that computes, in other words executes operations by following instructions. Modern PCs are built with multiple cores, which means they can perform many instructions at the same time through parallel processing. Single-core processors were a far cry from this design, as they could only perform a single operation at a time. Multi-threading technology has gone a long way in improving CPU workflow, since each core in single- and multi-core CPUs can support multiple threads (Bhutani and Shinde, 2024).

A CPU's instruction set implies logical structures that dictate how operations within the chip are executed. The fetch-decode-execute cycle remains an essential concept in current computing systems. In the fetch stage, the CPU pulls an instruction from memory using the program counter (PC) and memory addressing techniques (Alamen and BenYousuf, 2024). During the decode phase, the instruction decoder converts the instruction into the machine-level operations the processor understands. Last of all comes the execute phase, in which the processor performs the instruction's operation, whether an arithmetic operation or a data transfer. Another logical process that characterizes CPU enhancement is instruction pipelining. By overlapping the fetch, decode, and execute stages of many instructions, the processor can complete more instructions in a shorter period. The stages function individually and simultaneously, like stations on an assembly line. This technique greatly enhances system throughput and keeps the processor idle for only short periods between operations (Jiang et al., 2024).
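
The throughput gain from pipelining can be shown with back-of-envelope cycle counts, assuming an idealized 3-stage pipeline with one cycle per stage and no hazards or stalls:

```python
# Cycle counts for n instructions, with and without pipelining,
# under the idealized one-cycle-per-stage assumption.

STAGES = 3  # fetch, decode, execute

def sequential_cycles(n):
    return n * STAGES        # each instruction runs all stages alone

def pipelined_cycles(n):
    return STAGES + (n - 1)  # after the pipeline fills, one
                             # instruction completes per cycle

print(sequential_cycles(10))  # 30
print(pipelined_cycles(10))   # 12
```

As `n` grows, the pipelined count approaches one cycle per instruction, which is the assembly-line effect described above.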

Memory (RAM): Temporary Storage for Active Tasks

RAM is the computer's temporary data store and working memory, which the CPU needs for quick access. In contrast to hard drives and SSDs, which are long-term storage devices, RAM is a temporary arrangement because its contents are not retained when the computer is shut off. This characteristic makes RAM particularly suited to temporarily storing data, because access to and from RAM is very fast. The logical operations associated with RAM concern data storage and retrieval. During program execution, a program's instructions and data are moved into RAM, and each data location in RAM has a unique memory address through which the CPU can access it (Hon, 2024). These procedures make the CPU's retrieval of information efficient and immediate. The design of RAM modules follows the same logic. SDRAM is synchronous with the CPU clock and can therefore support medium to high transfer rates. DDR SDRAM is the next development: it transfers data on both the rising and falling edges of the clock signal and therefore doubles the data transfer rate. These developments illustrate how the logical operation of memory can be optimized (Chae, 2024).
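
The doubling effect of DDR can be shown with simple arithmetic. The figures below correspond to a DDR4-3200-style module and are illustrative rather than taken from a specific datasheet:

```python
# DDR transfers data on both clock edges, so the transfer rate
# is twice the I/O clock; peak bandwidth is transfers/s times
# the bus width in bytes.

clock_mhz = 1600         # I/O bus clock in MHz
transfers_per_cycle = 2  # DDR: data on rising AND falling edge
bus_width_bytes = 8      # a 64-bit module

mt_per_s = clock_mhz * transfers_per_cycle   # mega-transfers per second
bandwidth_mb_s = mt_per_s * bus_width_bytes  # peak MB/s

print(mt_per_s)        # 3200
print(bandwidth_mb_s)  # 25600
```

Single data rate SDRAM at the same clock would manage only half these figures, which is exactly the advantage the paragraph above describes.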

Registers: High-Speed Storage for Immediate Data Access 

Figure 1: Role of registers

(Source: Aste, 2025)

Registers are small, fast-access storage units embedded within the CPU core. While the CPU uses RAM to store data the processor may later manipulate, registers are a central part of the processor itself (Li et al., 2024). They store data temporarily during computations so that the CPU can get the data it requires at a moment's notice. The processes involving registers are mostly tied to the execution of instructions. For instance, in an addition operation, the data components, that is the operands and the result, are held in registers. Control signals regulate this data transfer, minimizing execution time because the processor does not have to reach out to the slower memory components directly (Merinen, 2024).
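
The operand-in-registers pattern described here can be sketched as follows; the register names (R0, R1) and the micro-steps are illustrative only:

```python
# Data is moved from RAM into registers once, the ALU then works
# only on registers, and the result is written back to memory.

registers = {"R0": 0, "R1": 0}
ram = {100: 6, 101: 9, 102: 0}

# 1. Transfer operands from RAM into registers (the slow step).
registers["R0"] = ram[100]
registers["R1"] = ram[101]

# 2. The ALU operates on registers only (the fast step).
result = registers["R0"] + registers["R1"]

# 3. Write the result back to memory.
ram[102] = result
print(ram[102])  # 15
```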

Cache: Bridging the Speed Gap Between CPU and Memory

Figure 2: Cache Memory

(Source: Putert, 2025)

The cache is a specific type of memory that sits closer to the CPU than main memory (RAM). Its main function is to decrease the time the computer takes to access data from main memory by storing much-used data and instructions. The cache is faster than RAM, allowing the CPU to retrieve data at a much higher rate than RAM's retrieval rate (Lanka et al., 2024). The operations within the cache include the caching algorithms used to select which data should be cached or replaced. These algorithms predict cache usage so that the areas most used by the CPU are retained. For instance, the LRU algorithm favours data that has been used most recently, while FIFO removes the data that has been in the cache longest. Another logical mechanism through which the different components of the computer are interrelated is cache coherency. In multi-processor or multi-core systems, cache coherency guarantees that all the caches in the system hold the most current copy of a data item. This is vital for data consistency across the many nodes of parallel computing systems, as it eliminates synchronization errors (Perera, 2024).
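
One simple way to keep caches consistent is write-invalidate coherency, sketched below. This is a deliberate simplification (no MESI-style states, write-through memory, hypothetical class names) intended only to show why stale copies must be invalidated:

```python
# Two cores share one memory location through a bus; a write on
# one core invalidates any stale copy cached by the other.

class Bus:
    def __init__(self):
        self.cores, self.memory = [], {0: 0}

    def invalidate(self, addr, source):
        for core in self.cores:
            if core is not source:
                core.cache.pop(addr, None)   # drop stale copies

class Core:
    def __init__(self, name, bus):
        self.name, self.bus, self.cache = name, bus, {}
        bus.cores.append(self)

    def read(self, addr):
        if addr not in self.cache:           # miss: fetch from memory
            self.cache[addr] = self.bus.memory[addr]
        return self.cache[addr]

    def write(self, addr, value):
        self.bus.invalidate(addr, source=self)
        self.cache[addr] = value
        self.bus.memory[addr] = value        # write-through, for simplicity

bus = Bus()
a, b = Core("A", bus), Core("B", bus)
a.read(0)           # core A caches address 0
b.write(0, 99)      # core B writes; A's stale copy is invalidated
print(a.read(0))    # 99, re-fetched from memory
```

Without the `invalidate` step, core A would keep returning its stale cached 0, which is precisely the consistency error coherency protocols exist to prevent.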

Task 2: Influence of Important People and Their Roles in Developing Technologies

Hardware and its incorporation into new related technologies cannot be discussed without reference to some key figures. These pioneers created imperative advancements in computing, from the evolution of hardware elements to concepts as complex as cloud computing and the Internet of Things (Ullah et al., 2024). One compelling character in the history of computing is John von Neumann, from whose architecture most modern computers still draw influence. He is credited with the invention of the stored-program computer, in which instructions are stored in the same memory as data, an idea that opened the door to more flexible and efficient computing structures. Von Neumann's work is closely related to the formation of logical processes such as the instruction cycle, which remains at the heart of present-day processors.

Another is Gordon Moore, co-founder of Intel and namesake of Moore's Law. His observation that the number of transistors on a microchip doubles roughly every two years set in motion the ongoing scramble for smaller and faster processors. Moore's Law can be seen in the progress of CPUs; modern processors are capable of performing billions of instructions per second. This exponential growth has also brought innovations to cache memory, resulting in enhanced data retrieval and increased system performance (Verma et al., 2024). Figures such as Oracle Corporation co-founder Larry Ellison have been similarly influential in the area of cloud computing. Ellison's vision of networked computing paved the way for cloud services in which hardware resources are virtualized. Cloud computing uses the principles of virtualization to let users interact with hardware in an effective and scalable manner, and the basic premises of how workloads are distributed and cloud resources divided are embedded in the structure of the underlying hardware systems.

Another key enabling contribution came from Tim Berners-Lee, who invented the World Wide Web. By putting in place an international network for sharing information, Berners-Lee effectively contributed to the growth of IoT devices that connect and exchange information over networks. Data gathering and signalling in IoT systems, and the real-time processing and analytics they involve, are intrinsically linked with the evolution of hardware, for example more powerful processors and memory-efficient storage. These individuals, among others, have contributed to the development of basic hardware systems and to the transformation of such systems into contemporary structures (Tong, 2024). It is crucial, therefore, to recognize that hardware is not just a foundation upon which software is deployed, but a field that drives innovation in computing. Returning to John von Neumann, his architecture introduced the notion of storing both instructions and data in memory, with the stored instructions processed sequentially, revolutionizing computing as we know it today. He laid the foundation for the structure of present-day computers, from the processor to memory and registers, and many of the underlying concepts described by the von Neumann architecture, such as the fetch-decode-execute cycle, still function in modern CPUs (Rech, 2024). Von Neumann's work was crucial in making hardware more flexible and in shaping the type of software that can run on computers of a given architecture.
Also attributable to this line of work is the ability to virtualize hardware and to support more than one operating system on a single computer, a major aspect of modern cloud computing systems (Shilpa et al., 2024).

The history of computers has largely been written by a few select personalities who injected their innovations into hardware design and into developments in emerging technologies such as cloud computing, virtualization, and the Internet of Things (IoT) (Yalli et al., 2024). Their influence extends well beyond the design of the hardware into the software built on top of it; it is hard to imagine a world without hardware and software collaborating to create more powerful and efficient computing systems. Gordon Moore's observation, noted earlier, that the number of transistors on a microchip doubles approximately every two years, meant that the constant miniaturization of transistors drove advances in processing speed and reductions in dynamic energy consumption, facilitating the development of energy-efficient computing. It has also laid the grounds for the logic operations within processors to become faster and more intricate, coupled with the ability to serve real-time systems across distributed storage in the cloud (Grigoryeva and Klimentov, 2024).

Conclusion

The hardware components and their logical processes form the foundation of modern computing systems. This report has discussed the processor, memory, registers, and cache, highlighting how these building blocks of computing greatly influence the successful execution of computation. It is also notable how the visionary contributions of people such as John von Neumann, Gordon Moore, Larry Ellison, and Tim Berners-Lee reflect the hardware revolution's influence on emerging technologies. As the computer continues to evolve, there is more to be learned, and familiarity with hardware remains a requisite.

 

References

Alamen, D. and BenYousuf, A., 2024. Design and Implementation of Five Stages Pipelined RISC Processor on FPGA.

Aste (2025). [online] Fastercapital.com. Available at: https://fastercapital.com/i/Bits-to-Bytes–The-Essence-of-Register-in-Computer-Science–Role-of-Registers-in-Data-Storage-and-Processing.webp 

Bhutani, P. and Shinde, A.A., 2024. Exploring the Impact of Multithreading on System Resource Utilization and Efficiency. International Journal of Innovative Research in Engineering and Management, 11(5), pp.66-72.

Chae, J.H., 2024. High-Bandwidth and Energy-Efficient Memory Interfaces for the Data-Centric Era: Recent Advances, Design Challenges, and Future Prospects. IEEE Open Journal of the Solid-State Circuits Society.

Grigoryeva, M.A. and Klimentov, A.A., 2024. Grid Computing Evolution in Scientific Applications. Supercomputing Frontiers and Innovations, 11(1), pp.4-50.

Hon, K.W., 2024. Hardware. In Technology and Security for Lawyers and Other Professionals (pp. 38-62). Edward Elgar Publishing.

Hosseininia, M., Salahvarzi, A. and Monazzah, A.M.H., 2024. TVTAC: Triple Voltage Threshold Approximate Cache For Energy Harvesting Nonvolatile Processors. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.

Jiang, X., Zhou, Y., Cao, S., Stoica, I. and Yu, M., 2024. Neo: Saving GPU memory crisis with CPU offloading for online LLM inference. arXiv preprint arXiv:2411.01142.

Lanka, S., Konjeti, P.C. and Pinto, C.A., 2024, June. A Review: Complete Analysis of the Cache Architecture for Better Performance. In 2024 Second International Conference on Inventive Computing and Informatics (ICICI) (pp. 768-771). IEEE.

Li, M., Bi, Z., Wang, T., Wen, Y., Niu, Q., Liu, J., Peng, B., Zhang, S., Pan, X., Xu, J. and Wang, J., 2024. Deep learning and machine learning with GPGPU and CUDA: Unlocking the power of parallel computing. arXiv preprint arXiv:2410.05686.

Merinen, V., 2024. Use of AI in System Administration.

Pawar, S., Dhepe, S., Lothe, D., Gebise, P. and Chopde, A., 2024, February. Implementation of FPGA-Based 3 Stage Pipelined RISC-V. In Congress on Control, Robotics, and Mechatronics (pp. 147-159). Singapore: Springer Nature Singapore.

Perera, C., 2024. Optimizing Performance in Parallel and Distributed Computing Systems for Large-Scale Applications. Journal of Advanced Computing Systems, 4(9), pp.35-44.

Putert (2025). [online] Ecomputertips.com. Available at: https://r2.ecomputertips.com/imgs/glossary/cache-memory/cover.webp 

Rech, P., 2024. Artificial neural networks for space and safety-critical applications: Reliability issues and potential solutions. IEEE Transactions on Nuclear Science.

Shilpa, H.K., Girija, D.K., Rashmi, M. and Yogeesh, N., 2024. 6 Assessment Sustainable Developments of ICT for with Reference to Fog and Cloud Computing. Intelligent Systems and Sustainable Computational Models: Concepts, Architecture, and Practical Applications, p.82.

Tong, A., 2024. The Evolution of AI Engineering: Hardware and Software Dynamics, Historical Progression, Innovations, and Impact on Next-Generation AI Systems. Library Progress International, 44(3), pp.19715-19737.

Ullah, I., Khan, I.U., Ouaissa, M., Ouaissa, M. and El Hajjami, S. eds., 2024. Future Communication Systems Using Artificial Intelligence, Internet of Things and Data Science. CRC Press.

Verma, S., Allah, A.J. and Al-Mamun, A., 2024, March. Optimizing Data Retrieval from Secondary Storage with a Proactive Intermediate Cache. In SoutheastCon 2024 (pp. 216-221). IEEE.

Yaghoobi, S., 2024. LEVERAGING MACHINE LEARNING FOR FAST PERFORMANCE PREDICTION FOR INDUSTRIAL SYSTEMS: Data-Driven Cache Simulator.

Yalli, J.S., Hasan, M.H. and Badawi, A., 2024. Internet of Things (IoT): Origin, embedded technologies, smart applications and its growth in the last decade. IEEE Access.

Zahoor, F., Nisar, A., Bature, U.I., Abbas, H., Bashir, F., Chattopadhyay, A., Kaushik, B.K., Alzahrani, A. and Hussin, F.A., 2024. An overview of critical applications of resistive random access memory. Nanoscale Advances.

 
