Assignment 1: Computer Systems Building Blocks and

logical processes: Evolution, Components, and Features

BSc (Hons)/FdSc Computer Science

Computer Systems

Table of Contents

Introduction

Task 1: Building Blocks and Logical Processes

  1. Processor (CPU) and Its Logical Processes
  2. Memory (RAM) and Data Handling Processes
  3. Registers and Instruction Execution
  4. Cache Memory and Optimization of Data Flow

Task 2: Impact of Key Individuals and Their Contributions to Emerging Technologies

  1. Foundational Contributions to Hardware Evolution
  2. Virtualization and Cloud Computing
  3. Internet of Things (IoT) and Edge Computing
  4. Artificial Intelligence and Machine Learning
  5. Quantum Computing and Future Technologies

Conclusion

References

 

Introduction 

Within the current computing landscape, hardware underpins the disciplines of computer science, software engineering, and information systems. Software designers and developers nevertheless often treat hardware as a second-class concern. Yet virtualization-driven technologies such as cloud computing and fog computing, together with the Internet of Things (IoT) that they enable, have demonstrated how important it is to understand the hardware foundations on which these systems rest. This report examines the core building blocks from which a computer is constructed, namely the processor, memory (primarily RAM), registers, and cache; it analyses their logical processes and evaluates their development alongside the key individuals behind emerging technologies.

Task 1: Building Blocks and Logical Processes

1. Processor (CPU) and Its Logical Processes

The central processing unit (CPU), often referred to as the processor, is the principal computational device of a computer system. It executes instructions and controls the flow of information to the other subsystems. In the fetch stage, the processor pulls an instruction from memory at the address held in the program counter. In the decode stage, the control unit interprets the binary instruction and converts it into the control signals needed to carry out the command. Finally, in the execute stage, the ALU or another functional unit performs the intended operation. This fetch-decode-execute cycle illustrates how binary signalling and instruction pipelines work together, and it points to how modern processors achieve such speed through this organisation (Johnsen, 2024).
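The cycle can be illustrated with a short Python sketch. The toy instruction set (LOAD, ADD, STORE, HALT), the list-based memory, and the single accumulator below are illustrative assumptions rather than a real ISA; the sketch only shows how the program counter drives fetch, decode, and execute.

```python
# Minimal fetch-decode-execute sketch (illustrative toy ISA, not a real one).
# Each instruction is an (opcode, operand) pair held in "memory".

memory = [
    ("LOAD", 7),    # load the literal 7 into the accumulator
    ("ADD", 5),     # add 5 to the accumulator
    ("STORE", 0),   # store the accumulator into data slot 0
    ("HALT", None),
]
data = [0]          # a one-slot data store
acc = 0             # accumulator register
pc = 0              # program counter: address of the next instruction

while True:
    instruction = memory[pc]          # FETCH: pull the instruction at the PC
    pc += 1                           # advance the program counter
    opcode, operand = instruction     # DECODE: split into opcode and operand
    if opcode == "LOAD":              # EXECUTE: perform the decoded operation
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        data[operand] = acc
    elif opcode == "HALT":
        break

print(acc, data)  # -> 12 [12]
```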

The speed of a processor also depends on its design and its number of cores, which determines how many operations it can perform concurrently. Most current processors use multi-core designs that allow instructions to execute in parallel. This capability is important for demanding workloads such as real-time data processing in cloud computing systems or AI operations. Hyper-threading goes further by allowing each core to run more than one instruction stream in parallel (He et al., 2024). A CPU's throughput is also linked to its clock frequency, measured in gigahertz (GHz), which defines how many cycles the processor completes per second. Higher clock speeds let the chip process more data in less time, but at the cost of greater power consumption and heat output. Modern techniques such as pipelining and, in particular, speculative execution manage instruction flow within the processor's internal structures; they prevent bottlenecks from forming and so keep the processor continuously fed with instructions (Venkatesha and Parthasarathi, 2024).
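The relationship between clock rate, core count, and instruction throughput can be made concrete with a back-of-envelope calculation. All figures in this Python sketch are illustrative assumptions rather than the specifications of any particular chip:

```python
# Back-of-envelope peak throughput: clock rate x cores x instructions per cycle.
# All figures are illustrative, not measurements of any specific processor.

clock_hz = 3.5e9        # 3.5 GHz clock: cycles per second per core
cores = 8               # physical cores executing in parallel
ipc = 4                 # instructions retired per cycle (superscalar pipeline)

peak_ips = clock_hz * cores * ipc
print(f"{peak_ips:.2e} instructions/second peak")  # -> 1.12e+11
```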

2. Memory (RAM) and Data Handling Processes

RAM (Random Access Memory) is the computer's temporary working storage, holding the data the processor needs for the computation at hand. Its volatile design enables the fast read and write operations that effective system operation depends on. The logical processes within RAM are signalling and addressing: when a program needs information, the processor places an address signal on the bus to the RAM module, the memory controller locates the requested cell, and the data is returned to the processor over the system bus. RAM design is oriented towards speed, with DRAM and SRAM optimised for short access times and low latency. Data in RAM is binary: each memory cell is held at a high or low voltage (Aswini, 2024).

RAM plays a crucial role between the processor and storage systems because it holds the data and instructions relevant to current activity. Unlike long-term storage devices, it delivers data to the processor quickly but loses its contents when power is removed. This speed is possible because data can be retrieved in any order (random access) rather than sequentially. Whenever the processor needs data, it sends the memory address to the memory controller, which decodes the request, retrieves the data from the addressed cell, and forwards it to the CPU over the system bus (Wilson, 2024).
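A minimal Python model of this request path might look as follows. The MemoryController class, its word-addressed read/write interface, and the 1,024-word size are hypothetical simplifications introduced only to show addressing and random access:

```python
# Toy model of the RAM request path: the processor puts an address on the bus,
# the memory controller decodes it and returns the stored word.
# The class, its interface, and the sizes are illustrative assumptions.

class MemoryController:
    def __init__(self, size_words):
        self.cells = [0] * size_words   # each cell models one addressable word

    def read(self, address):
        # Decode the address and drive the data back onto the system bus.
        return self.cells[address]

    def write(self, address, value):
        self.cells[address] = value

ram = MemoryController(size_words=1024)
ram.write(0x2A, 99)          # processor issues a write to address 0x2A
print(ram.read(0x2A))        # a later read returns 99, with no sequential scan
```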

Technological advances have produced different types of RAM, chiefly DRAM and SRAM. DRAM, the most widespread type, stores each bit in a capacitor-and-transistor cell and must be refreshed periodically to avoid losing data. SRAM is much faster and more reliable but also considerably more expensive, and is typically used in CPU caches (Gajaria et al., 2024).

 

3. Registers and Instruction Execution

Registers are small, fast storage units inside the CPU. They hold data and instructions temporarily during processing, making access far quicker than a trip to RAM. During the instruction cycle, registers store operands and results as the hardware performs arithmetic and logic on its inputs and outputs; at the lowest level, data moves between registers through binary logic gates, which allows complex operations to be composed internally. Because registers sit within the processor's architecture, access to them is immediate: they run on the same clock as the CPU, which makes them faster than either RAM or cache. Their primary purpose is to hold the intermediate values, addresses, and instructions a program needs mid-computation (Li et al., 2024).

Registers come in several forms, each with a distinct role in instruction execution. The accumulator holds the results of the ALU's arithmetic and logic operations; the instruction register (IR) holds the instruction currently being executed; and general-purpose registers hold operands or immediate data for arithmetic. Throughout the fetch-decode-execute cycle, registers handle data transfer and manipulation as well as storage. Transmission gates in the CPU let registers read, write, and modify data at the rate determined by the control unit, and registers cooperate with other components such as the cache and RAM to manage data effectively (Lerner and Alonso, 2024).
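A toy Python sketch can show how these registers cooperate during one step of the fetch-decode-execute cycle. The register names follow the text; the two-field instruction format and the tiny program are illustrative assumptions:

```python
# Sketch of the registers used across one fetch-decode-execute step.
# Register names follow the text (PC, IR, accumulator, general-purpose);
# the instruction format is an illustrative assumption.

registers = {
    "PC": 0,        # program counter: address of the next instruction
    "IR": None,     # instruction register: holds the current instruction
    "ACC": 0,       # accumulator: holds ALU results
    "R0": 5,        # general-purpose register holding an operand
}
program = [("ADD", "R0"), ("HALT", None)]

registers["IR"] = program[registers["PC"]]   # fetch into the IR
registers["PC"] += 1
opcode, src = registers["IR"]                # decode from the IR
if opcode == "ADD":                          # execute: the ALU writes the ACC
    registers["ACC"] += registers[src]

print(registers["ACC"])   # -> 5
```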

4. Cache Memory and Optimization of Data Flow

Cache memory is a layer of very fast storage placed between the processor and RAM to bridge the speed gap between them. It holds frequently needed data and instructions, shortening access times and reducing the processor's dependence on the much slower main memory. Caches are organised into levels, L1, L2, and often L3, which differ in capacity and speed (Hässig, 2024). At the management level, caches rely on prediction algorithms and data-replacement policies: exploiting temporal and spatial locality, the processor anticipates which information will be required next and keeps it in the cache. This optimisation improves the flow of data and yields faster results. The gap that cache memory closes has widened over time, as processor speeds have improved far faster than RAM access times; the cache's elegant solution is to keep the most-used data and instructions close to the CPU itself.

Figure 1: Cache Memory

(Source: Putert, 2025)

The cache is usually divided into levels of differing size and speed. The Level 1 (L1) cache resides on the processor core; the Level 2 (L2) cache is larger than L1 but somewhat slower; and the Level 3 (L3) cache serves multiple processor cores, giving them joint access to shared data. All levels use predictive algorithms and replacement policies such as LRU (Least Recently Used) to decide what to keep. Cache operation rests on the principles of temporal and spatial locality: temporal locality keeps recently used data in the cache because the same data tends to be reused, while spatial locality loads nearby blocks of data, which benefits sequential processing. A cache hit occurs when the processor finds the data it needs in the cache, taking almost negligible time; a cache miss occurs when the data must be fetched from RAM or other storage, incurring a delay (Falahati et al., 2024).
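The hit/miss behaviour and LRU replacement described above can be sketched in a few lines of Python. The capacity of two lines and the dictionary standing in for slow main memory are illustrative assumptions:

```python
# Minimal LRU cache sketch: hits are served from the cache; misses fall back
# to "main memory" and evict the least recently used entry when full.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store            # models the slower RAM
        self.lines = OrderedDict()            # insertion order tracks recency

    def read(self, address):
        if address in self.lines:             # cache hit: near-zero cost
            self.lines.move_to_end(address)   # mark line most recently used
            return self.lines[address], "hit"
        value = self.store[address]           # cache miss: slow RAM access
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)    # evict least recently used line
        self.lines[address] = value
        return value, "miss"

ram = {addr: addr * 10 for addr in range(100)}
cache = LRUCache(capacity=2, backing_store=ram)
print(cache.read(1))   # (10, 'miss')
print(cache.read(1))   # (10, 'hit')   temporal locality pays off
print(cache.read(2))   # (20, 'miss')
print(cache.read(3))   # (30, 'miss') evicts address 1
print(cache.read(1))   # (10, 'miss') it was evicted
```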

Task 2: Impact of Key Individuals and Their Contributions to Emerging Technologies

Progress in computing technologies hinges on the influential individuals who have shaped hardware, software, and their new applications. Some contributed the hardware architectures that formed the basis of modern computing systems; others drove cloud computing, the Internet of Things, artificial intelligence, and quantum computing, as the following sections describe.

1. Foundational Contributions to Hardware Evolution 

Several individuals laid the hardware foundations of modern computing, John von Neumann foremost among them. In the mid-twentieth century he defined the stored-program concept, an architecture in which both instructions and data are held in memory. This model made sequential instruction execution possible and can be considered the basis of present-day processors, memory, and registers (Venkatesha and Parthasarathi, 2024). Gordon Moore, co-founder of Intel, forecast the continued increase of transistor counts in integrated circuits, a trend that now bears his name: Moore's Law. This insight spurred the development of better processor architectures and, in turn, faster CPUs. Moore influenced not only the path of processor design but also, indirectly, RAM and cache memory, because rising transistor density enabled next-generation system designs.
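The exponential character of Moore's Law is easy to demonstrate numerically. Assuming a doubling roughly every two years and a starting count of about 2,300 transistors (roughly the early-1970s Intel 4004), a short Python sketch projects the trend; both the starting point and the horizon are chosen only to show the shape of the curve:

```python
# Illustrative Moore's Law projection: transistor counts doubling roughly
# every two years. Starting count and horizon are illustrative assumptions.

start_transistors = 2_300          # roughly the Intel 4004 era (early 1970s)
for years in range(0, 51, 10):     # project 50 years in 10-year steps
    count = start_transistors * 2 ** (years / 2)   # one doubling per 2 years
    print(f"year +{years:2}: ~{count:,.0f} transistors")
```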

2. Virtualization and Cloud Computing

Virtualization and cloud computing, two of the most important developments in contemporary technology, were made possible by figures such as Diane Greene and Mendel Rosenblum of VMware. They developed software that allowed more than one operating system to run on a single physical computer, what is now commonly known as virtualization. By abstracting hardware resources, virtualization optimised their use and transformed the data centre from a fixed environment into a dynamic and highly cost-effective one. This innovation proved critical to cloud computing, where distributed resources depend heavily on memory optimisation and computing capability (Rajagopalan et al., 2024). Within cloud computing itself, Werner Vogels played one of the biggest roles in developing AWS. Vogels pioneered massively scalable server architectures that capitalise on improvements in processor speed and memory. On-demand resource allocation and rapid data access support massive computational loads on cloud platforms while preserving hardware efficiency, which is essential for the optimised use of caches and memory. These contributions have made cloud services practical, usable, and trusted around the world.

3. Internet of Things (IoT) and Edge Computing

Figure 2: IoT and edge computing

(Source: Onat, 2022)

The Internet of Things (IoT) grew into an innovative field with the help of pioneers such as Kevin Ashton, who coined the term in 1999 (Alaba, 2024). Ashton proposed a scenario in which devices communicate and exchange data on their own, enabled by advances in microprocessors and memory that let them process and store large volumes of real-time data. IoT devices require ultra-low-power, high-performance processors and frugal memory organisations so that part of the processing can be done locally before data is sent to the cloud. In a similar vein, Tilera Corporation founder Anant Agarwal drove the multi-core processors that remain critical to edge computing. Edge computing moves data processing close to where data is generated, minimising dependence on central data centres. Agarwal's advances in multi-core architectures improved parallel processing so that edge devices can process data and make decisions in real time. This development illustrates how processor and cache memory design has been tailored to emerging technologies.

4. Artificial Intelligence and Machine Learning

AI and ML have benefited from the efforts of many hardware leaders, including Lisa Su, CEO of Advanced Micro Devices (AMD). Su has been instrumental in steering the company's high-performance processors and graphics processing units (GPUs), both essential to artificial intelligence workloads. GPUs are widely used to train complex AI models because they offer massively parallel processing and high-bandwidth memory access. The hardware advances Su has championed make it possible to process big data in real time, helping to make AI both popular and effective (Chen et al., 2024). Likewise, Andrew Ng, one of the pioneers of AI, has explicitly highlighted the role of hardware in scaling up machine learning. Deep learning frameworks of the kind Ng uses rely on large memory capacity and efficient cache systems to compute on, and store, model parameters. By pairing improved hardware with AI research, he has contributed directly to autonomous systems, natural language processing, and computer vision.

5. Quantum Computing and Future Technologies 

Quantum computing represents the next big leap in the evolution of computation, and John Preskill is a leading figure in the field. Preskill's research concerns quantum processors whose algorithms operate on qubits rather than classical binary bits. While traditional processors perform calculations serially, quantum processors exploit superposition and entanglement to carry out computations at ultra-high speed. These processors demand new configurations and memory-system architectures suited to quantum conditions, quite unlike classical computing. Others, including Dario Gil of IBM, have built quantum systems that interface with cloud frameworks, showing how quantum hardware can co-integrate with scalable cloud structures. Memory and processor design remain essential to new quantum systems, since their advancement will largely determine the scalability and usefulness of quantum computing (Proctor et al., 2025).

Conclusion

The relationship between hardware components and their logical processes remains key to understanding advanced computing systems. Analysing the processor, memory, registers, and cache shows how each component supports modern technologies. The foundational processor and GPU work of Intel and AMD, together with the diverse contributions to emerging fields from cloud computing to IoT, underline how much these technologies depend on hardware as a platform. Going forward, an intertwined understanding of hardware and software will remain essential to driving future computing; such developments will link past achievements to the opportunities of tomorrow.

 

References

Alaba, F.A., 2024. The Evolution of the IoT. In Internet of Things: A Case Study in Africa (pp. 1-18). Cham: Springer Nature Switzerland.

Aswini, V., 2024. Design of a CNTFET based Low Power Ternary Content Addressable Memory.

Chen, Y., Wu, C., Sui, R. and Zhang, J., 2024. Feasibility Study of Edge Computing Empowered by Artificial Intelligence—A Quantitative Analysis Based on Large Models. Big Data and Cognitive Computing, 8(8), p.94.

Falahati, H., Sadrosadati, M., Xu, Q., Gómez-Luna, J., Saber Latibari, B., Jeon, H., Hesaabi, S., Sarbazi-Azad, H., Mutlu, O., Annavaram, M. and Pedram, M., 2024. Cross-core Data Sharing for Energy-efficient GPUs. ACM Transactions on Architecture and Code Optimization, 21(3), pp.1-32.

Gajaria, D., Gomez, K.A. and Adegbija, T., 2024. STT-RAM-based Hierarchical In-Memory Computing. IEEE Transactions on Parallel and Distributed Systems.

Hässig, M., 2024. A fully-functional Cache Control Coprocessor for Enzian (Master’s thesis, ETH Zurich).

He, M., Liu, F. and Do, S.W.S., 2024, May. Heterogeneous Hyperthreading. In 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (pp. 68-78). IEEE.

Johnsen, M., 2024. Computer Engineering. Maria Johnsen.

Lerner, A. and Alonso, G., 2024, May. Data flow architectures for data processing on modern hardware. In 2024 IEEE 40th International Conference on Data Engineering (ICDE) (pp. 5511-5522). IEEE.

Li, M., Bi, Z., Wang, T., Wen, Y., Niu, Q., Liu, J., Peng, B., Zhang, S., Pan, X., Xu, J. and Wang, J., 2024. Deep learning and machine learning with gpgpu and cuda: Unlocking the power of parallel computing. arXiv preprint arXiv:2410.05686.

Onat (2022). [online] Ieee.org. Available at: https://innovationatwork.ieee.org/wp-content/uploads/2019/06/Real-Life-Use-Cases-for-Edge-Computing_1024X684.png

Proctor, T., Young, K., Baczewski, A.D. and Blume-Kohout, R., 2025. Benchmarking quantum computers. Nature Reviews Physics, pp.1-14.

Putert (2025). [online] Ecomputertips.com. Available at: https://r2.ecomputertips.com/imgs/glossary/cache-memory/cover.webp 

Rajagopalan, A., Swaminathan, D., Bajaj, M., Damaj, I., Rathore, R.S., Singh, A.R., Blazek, V. and Prokop, L., 2024. Empowering power distribution: Unleashing the synergy of IoT and cloud computing for sustainable and efficient energy systems. Results in Engineering, p.101949.

Venkatesha, S. and Parthasarathi, R., 2024. Survey on Redundancy Based-Fault tolerance methods for Processors and Hardware accelerators-Trends in Quantum Computing, Heterogeneous Systems and Reliability. ACM Computing Surveys.

Wilson, K., 2024. Computer Jargon: The Illustrated Glossary of Basic Computer Terminology. Elluminet Press.

 
