CS714PE - CLOUD COMPUTING (4th Year – 1st Sem) Prepared by Dr. B. Rajalingam, Associate Professor, Department of Computer Science and Engineering, St. Martin’s Engineering College
Unit-2: Cloud Computing Fundamentals Motivation for Cloud Computing The Need for Cloud Computing Defining Cloud Computing Definition of Cloud computing Cloud Computing Is a Service Cloud Computing Is a Platform Principles of Cloud computing Five Essential Characteristics Four Cloud Deployment Models CC(Unit 1): Dr. B.Rajalingam 3
Unit-3: Cloud Computing Architecture and Management Cloud architecture, Layer Anatomy of the Cloud Network Connectivity in Cloud Computing Applications on the Cloud Managing the Cloud Managing the Cloud Infrastructure Managing the Cloud application Migrating Application to Cloud Phases of Cloud Migration Approaches for Cloud Migration
Unit-4: Cloud Service Models Infrastructure as a Service: Characteristics of IaaS Suitability of IaaS Pros and Cons of IaaS Summary of IaaS Providers Platform as a Service: Characteristics of PaaS Suitability of PaaS Pros and Cons of PaaS Summary of PaaS Providers Software as a Service: Characteristics of SaaS Suitability of SaaS Pros and Cons of SaaS Summary of SaaS Providers Other Cloud Service Models
Unit-5: Cloud Service Providers EMC: EMC IT Captiva Cloud Toolkit Google: Cloud Platform Cloud Storage Google Cloud Connect Google Cloud Print Google App Engine Amazon Web Services: Amazon Elastic Compute Cloud Amazon Simple Storage Service Amazon Simple Queue Service Microsoft: Windows Azure Microsoft Assessment and Planning Toolkit SharePoint, IBM: Cloud Models IBM SmartCloud SAP Labs: SAP HANA Cloud Platform, Virtualization Services Provided by SAP Salesforce: Sales Cloud Service Cloud: Knowledge as a Service Rackspace VMware, Manjrasoft, Aneka Platform
UNIT 1 Computing Paradigms
Unit-1: Computing Paradigms High-Performance Computing Parallel Computing Distributed Computing Cluster Computing Grid Computing Cloud Computing Biocomputing Mobile Computing Quantum Computing Optical Computing Nanocomputing Utility Computing Edge Computing Fog Computing
Computing Paradigm Automatic computing has changed the way humans solve problems and the range of ways in which problems can be solved. Computing has changed our perception, and even the world, more than any other innovation in the recent past.
1) What is high performance computing? High performance computing (HPC) is the ability to process data and perform complex calculations at high speeds. To put it into perspective, a laptop or desktop with a 3 GHz processor can perform around 3 billion calculations per second. While that is much faster than any human can achieve, it pales in comparison to HPC solutions that can perform quadrillions of calculations per second. One of the best-known types of HPC solutions is the supercomputer. A supercomputer contains thousands of compute nodes that work together to complete one or more tasks. This is called parallel processing. It’s similar to having thousands of PCs networked together, combining compute power to complete tasks faster.
Why is HPC important? It is through data that groundbreaking scientific discoveries are made, game-changing innovations are fuelled, and quality of life is improved for billions of people around the globe. HPC is the foundation for scientific, industrial, and societal advancements. As technologies like the Internet of Things (IoT), artificial intelligence (AI), and 3-D imaging evolve, the size and amount of data that organizations have to work with is growing exponentially. For many purposes, such as streaming a live sporting event, tracking a developing storm, testing new products, or analyzing stock trends, the ability to process data in real time is crucial.
How does HPC work? HPC solutions have three main components: Compute Network Storage To build a high performance computing architecture, compute servers are networked together into a cluster. Software programs and algorithms are run simultaneously on the servers in the cluster. The cluster is networked to the data storage to capture the output. Together, these components operate seamlessly to complete a diverse set of tasks. For example, the storage component must be able to feed and ingest data to and from the compute servers as quickly as it is processed. Likewise, the networking components must be able to support the high-speed transportation of data between compute servers and the data storage.
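The split-run-gather pattern described above can be sketched on a single machine. This is an illustrative sketch only, not a real HPC stack: worker threads stand in for networked compute servers, a Python list stands in for the storage component, and the names (process_chunk, n_workers) are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# A large dataset, standing in for data fed from the storage component.
data = list(range(1_000_000))

def process_chunk(chunk):
    """One compute server's share of the work: here, a simple sum."""
    return sum(chunk)

# Split the workload across "compute servers" (worker threads here),
# run the same program on each chunk simultaneously, then gather results.
n_workers = 4
chunk_size = len(data) // n_workers
chunks = [data[i * chunk_size:(i + 1) * chunk_size] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partial_results = list(pool.map(process_chunk, chunks))

# Aggregate the partial outputs, as the cluster's final result.
total = sum(partial_results)
```

A real cluster would run each chunk on a separate node and move the data over a high-speed interconnect; the coordination logic, however, has the same shape.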
HPC use cases Research labs. HPC is used to help scientists find sources of renewable energy, understand the evolution of our universe, predict and track storms, and create new materials. Media and entertainment. HPC is used to edit feature films, render mind-blowing special effects, and stream live events around the world. Oil and gas. HPC is used to more accurately identify where to drill for new wells and to help boost production from existing wells. Artificial intelligence and machine learning. HPC is used to detect credit card fraud, provide self-guided technical support, teach self-driving vehicles, and improve cancer screening techniques. Financial services. HPC is used to track real-time stock trends and automate trading. Manufacturing. HPC is used to design new products, simulate test scenarios, and make sure that parts are kept in stock so that production lines aren’t held up. Healthcare. HPC is used to help develop cures for diseases like diabetes and cancer and to enable faster, more accurate patient diagnosis.
NetApp and HPC Performance. Delivers up to 1 million random read IOPS and 13 GB/sec sustained write bandwidth per scalable building block. Reliability. Fault-tolerant design delivers greater than 99.9999% availability, proven by more than 1 million systems deployed. Easy to deploy and manage. Modular design, on-the-fly (“cut and paste”) replication of storage blocks, proactive monitoring, and automation scripts all add up to easy, fast and flexible management. Scalability. A granular, building-block approach to growth enables seamless scalability from terabytes to petabytes by adding capacity in any increment—one or multiple drives at a time. Lower TCO. Price/performance-optimized building blocks and industry-leading density deliver low power, cooling, and support costs, and 4-times lower failure rates than commodity HDD and SSD devices.
2) Parallel Computing Parallel computing is defined as a type of computing where multiple computer systems are used simultaneously. Here a problem is broken into sub-problems and then further broken down into instructions. The instructions from each sub-problem are executed concurrently on different processors. The parallel computing system consists of multiple processors that communicate with each other and perform multiple tasks over a shared memory simultaneously. The goal of parallel computing is to save time and provide concurrency.
What is Parallel Computing? Parallel computing refers to the process of executing an application or computation on several processors simultaneously. Generally, it is a kind of computing architecture where large problems are broken into independent, smaller, usually similar parts that can be processed in one go by multiple CPUs communicating via shared memory; the results are combined upon completion. It helps in performing large computations by dividing the large problem between more than one processor. Parallel computing also speeds up application processing and task resolution by increasing the available computation power of a system. Most supercomputers operate on parallel computing principles.
Types of parallel computing Bit-level parallelism: The form of parallel computing based on the processor word size. A larger word size reduces the number of instructions the processor must execute to perform an operation on data larger than one word; with a smaller word size, the operation must be split into a series of instructions. Instruction-level parallelism: The processor decides, within a single CPU clock cycle, how many instructions are executed at the same time; independent instructions can be issued together in one cycle. Task parallelism: The form of parallelism in which a task is decomposed into subtasks. Each subtask is then allocated for execution, and the subtasks are executed concurrently by different processors.
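Of the three forms, task parallelism is the one visible at the program level (bit- and instruction-level parallelism live in the hardware), so it can be shown in a short sketch. The helper names below are invented for the example: two distinct subtasks of a larger job are allocated to separate workers and executed concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Two distinct subtasks, decomposed from a larger job as in task parallelism.
def count_evens(numbers):
    """Subtask 1: count the even values."""
    return sum(1 for n in numbers if n % 2 == 0)

def find_maximum(numbers):
    """Subtask 2: find the largest value."""
    return max(numbers)

numbers = list(range(1, 101))

# Each subtask is allocated to its own worker and executed concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    evens_future = pool.submit(count_evens, numbers)
    max_future = pool.submit(find_maximum, numbers)
    evens, maximum = evens_future.result(), max_future.result()
```

Contrast this with data parallelism, where the same operation would run on different slices of the data; here different operations run on the same data.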
Applications of Parallel Computing One of the primary applications of parallel computing is databases and data mining. The real-time simulation of systems is another use of parallel computing. Technologies such as networked video and multimedia. Science and engineering. Collaborative work environments. The concept of parallel computing is used by augmented reality, advanced graphics, and virtual reality.
Fundamentals of Parallel Computer Architecture Multi-core computing: A processor integrated circuit containing two or more distinct processing cores is known as a multi-core processor, which has the capability of executing program instructions simultaneously. Symmetric multiprocessing: In symmetric multiprocessing, a single operating system handles a multiprocessor computer architecture having two or more homogeneous, independent processors, and treats all processors equally. Distributed computing: The components of a distributed system are located on different networked computers. These networked computers coordinate their actions by communicating through HTTP, RPC-like connectors, and message queues. Concurrency of components and independent failure of components are the characteristics of distributed systems.
3) Distributed Computing A distributed computer system consists of multiple software components that are on multiple computers, but run as a single system. The computers that are in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network. A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers, and so on. The goal of distributed computing is to make such a network work as a single computer. Distributed computing systems can run on hardware that is provided by many vendors, and can use a variety of standards-based software components. They can run on various operating systems, and can use various communications protocols.
How distributed computing works Distributed computing networks can be connected as local networks or through a wide area network if the machines are in a different geographic location. Processors in distributed computing systems typically run in parallel. In enterprise settings, distributed computing generally puts various steps in business processes at the most efficient places in a computer network. For example, a typical distribution has a three-tier model that organizes applications into the presentation tier, the application tier and the data tier. These tiers function as follows: User interface processing occurs on the PC at the user's location. Application processing takes place on a remote computer. Database access and processing algorithms happen on another computer that provides centralized access for many business processes.
How distributed computing works (cont.) Client-server architectures. These use smart clients that contact a server for data, then format and display that data to the user. N-tier system architectures. Typically used in application servers, these architectures use web applications to forward requests to other enterprise services. Peer-to-peer architectures. These divide all responsibilities among all peer computers, which can serve as clients or servers.
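The client-server pattern above can be sketched with two components on one machine: a server that returns raw data, and a "smart client" that requests it and formats it for display. This is a minimal sketch under invented assumptions; the "GET temperature" request string and the reply value are hypothetical, not a real protocol.

```python
import socket
import threading

def serve_one(server_sock):
    """Server component: answer a single request with raw data only."""
    conn, _ = server_sock.accept()
    request = conn.recv(1024).decode()
    if request == "GET temperature":
        conn.sendall(b"21.5")  # the server sends data, not presentation
    conn.close()

# Start the server on a free local port (port 0 lets the OS choose).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one, args=(server,)).start()

# Smart client: contact the server for data over the network...
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET temperature")
raw = client.recv(1024).decode()
client.close()
server.close()

# ...then format and display that data to the user.
display = f"Current temperature: {raw} degrees C"
```

In a real deployment the server would run on a different machine and the client would connect to its hostname rather than 127.0.0.1; the division of responsibilities is the same.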
Benefits of distributed computing Performance. Distributed computing can help improve performance by having each computer in a cluster handle different parts of a task simultaneously. Scalability. Distributed computing clusters are scalable by adding new hardware when needed. Resilience and redundancy. Multiple computers can provide the same services. This way, if one machine isn't available, others can fill in for the service. Cost-effectiveness. Distributed computing can use low-cost, off-the-shelf hardware. Efficiency. Complex requests can be broken down into smaller pieces and distributed among different systems. Distributed applications. Unlike traditional applications that run on a single system, distributed applications run on multiple systems simultaneously.
4) Cluster Computing A cluster is a group of independent computers that work together to perform the tasks given. Cluster computing is defined as a type of computing that consists of two or more independent computers, referred to as nodes, that work together to execute tasks as a single machine. The goal of cluster computing is to increase the performance, scalability and simplicity of the system. All the nodes, whether parent or child, act as a single entity to perform the tasks.
What is Cluster Computing? Cluster computing means that many computers connected on a network perform like a single entity. Each computer connected to the network is called a node. Cluster computing offers solutions to complicated problems by providing faster computational speed and enhanced data integrity. The connected computers execute operations all together, creating the impression of a single system (virtual machine); this is termed the transparency of the system. This networking technology performs its operations based on the principles of distributed systems. Cluster computing has the following features: all the connected computers are the same kind of machines; they are tightly connected through dedicated network connections; and all the computers share a common home directory.
Cluster Computing (Cont…) Clusters’ hardware configuration differs based on the selected networking technologies. Clusters are categorized as open and closed. In an open cluster, all the nodes need IPs and are accessed only through the internet or web, which raises security concerns. In a closed cluster, the nodes are concealed behind the gateway node, which offers increased protection.
Types of Cluster Computing Clusters are utilized extensively in line with the complexity of the information, the content to be managed and the anticipated operating speed. Many applications that expect high availability and minimal downtime employ cluster computing. The types of cluster computing are: load-balancing clusters, high-availability clusters, and high-performance clusters.
Cluster Load Balancing Load-balancing clusters are employed in situations of heavy network and internet utilization, where they act as the fundamental factor. This clustering technique offers the benefits of increased network capacity and enhanced performance. The nodes remain cohesive across all instances, so every node is fully aware of the requests present in the network. The nodes do not all operate in a single process; instead, requests are redirected to individual nodes as they arrive, depending on a scheduler algorithm. The other crucial element of the load-balancing technique is scalability, which is accomplished when every server is fully employed.
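The scheduler algorithm mentioned above can be as simple as round-robin: each arriving request is redirected to the next node in turn, so no single node absorbs the whole stream. A minimal sketch, with hypothetical node names:

```python
from itertools import cycle
from collections import Counter

# Hypothetical cluster nodes; any identifiers would do.
servers = ["node-a", "node-b", "node-c"]

# Round-robin scheduler: an endless rotation over the node list.
scheduler = cycle(servers)

# Dispatch nine incoming requests, recording which node got each one.
assignments = [(request_id, next(scheduler)) for request_id in range(9)]

# Count the per-node load to confirm the work is spread evenly.
load = Counter(node for _, node in assignments)
```

Real load balancers use richer policies (least-connections, weighted, health-aware), but round-robin is the baseline that the scalability argument in the slide assumes: every server is kept employed.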
High Availability Clusters These are also termed failover clusters. Computers often face failures, so high availability addresses the growing dependency on computers, which hold crucial responsibility in many organizations and applications. In this approach, redundant computer systems are utilized in the event of any component malfunction. So, even when there is a single point of malfunction, the system remains reliable, as the network has redundant cluster elements.
High-Performance Clusters This networking approach utilizes supercomputers to resolve complex computational problems. Along with managing IO-intensive applications like web services, high-performance clusters are employed in computational models of climate and vehicle breakdowns. More tightly connected computer clusters are built for work that might be considered “supercomputing”.
Cluster Computing Architecture A cluster is a kind of parallel/distributed processing network designed as an array of interconnected individual computers, with the computer systems operating collectively as a single standalone system. A node is either a single-processor or a multiprocessor system having memory, input and output functions and an operating system. In general, two or more nodes are connected on a single line, or every node might be connected individually through a LAN connection.
Advantages of Cluster Computing Cost efficacy – Even though mainframe computers are extremely stable, cluster computing sees more implementation because of its cost-effectiveness and economy. Processing speed – Cluster computing systems offer the same processing speed as mainframe computers, and the speed can also equal that of supercomputers. Extended resource availability – Computers come across frequent breakdowns; to eliminate such failures, cluster computers are built for high availability. Expandability – The next crucial advantage of cluster computing is its enhanced scalability and expandability. Flexibility – Cluster computing can be upgraded to a superior specification or extended through the addition of extra nodes.
Applications of cluster computing Cluster computing can be implemented in weather modelling. It supports vehicle-breakdown and nuclear simulations. It is used in image processing and in electromagnetics too. It is well suited to applications in astrophysics, aerodynamics and data mining. It assists in solving complex computational problems. It holds the flexibility to allocate workloads as small data portions, which is called grid computing. Cluster computing has the capacity to function in many web applications, such as security, search engines, database servers, web servers, proxies, and email.
5) Grid Computing Grid computing is the practice of leveraging multiple computers, often geographically distributed but connected by networks, to work together to accomplish joint tasks. It is typically run on a “data grid,” a set of computers that directly interact with each other to coordinate jobs. Grid computing is defined as a type of computing that constitutes a network of computers working together to perform tasks that may be difficult for a single machine to handle. All the computers on that network work under the same umbrella and are termed a virtual supercomputer. The tasks they work on either demand high computing power or involve large data sets. All communication between the computer systems in grid computing is done on the “data grid”. The goal of grid computing is to solve high-computation problems in less time and improve productivity.
How Does Grid Computing Work? Grid computing works by running specialized software on every computer that participates in the data grid. The software acts as the manager of the entire system and coordinates various tasks across the grid. Specifically, the software assigns subtasks to each computer so they can work simultaneously on their respective subtasks. After the completion of subtasks, the outputs are gathered and aggregated to complete a larger-scale task. The software lets each computer communicate over the network with the other computers so they can share information on what portion of the subtasks each computer is running, and how to consolidate and deliver outputs.
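The assign-gather-aggregate cycle described above can be sketched as the coordinator's bookkeeping. This is a single-machine simulation under stated assumptions: plain function calls stand in for work dispatched to remote machines, and the node names are invented for the example.

```python
def node_compute(numbers):
    """The subtask each grid node runs: a partial sum and item count."""
    return sum(numbers), len(numbers)

data = list(range(1, 1001))
nodes = ["node-1", "node-2", "node-3", "node-4"]  # hypothetical grid members

# Assign: the coordinator gives each participating node one slice of the data.
step = len(data) // len(nodes)
subtasks = {node: data[i * step:(i + 1) * step] for i, node in enumerate(nodes)}

# Gather: collect each node's output (in a real grid, over the network).
outputs = {node: node_compute(chunk) for node, chunk in subtasks.items()}

# Aggregate: consolidate partial outputs into the larger-scale result,
# here a global average computed from per-node sums and counts.
total = sum(partial_sum for partial_sum, _ in outputs.values())
count = sum(partial_count for _, partial_count in outputs.values())
average = total / count
```

Note that each node returns both a sum and a count: partial averages alone could not be merged correctly if the slices had unequal sizes, so the coordinator aggregates the raw components instead.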
How is Grid Computing Used? Grid computing is especially useful when different subject matter experts need to collaborate on a project but do not necessarily have the means to immediately share data and computing resources in a single site. By joining forces despite the geographical distance, the distributed teams are able to leverage their own resources that contribute to a bigger effort. This means that all computing resources do not have to work on the same specific task, but can work on sub-tasks that collectively make up the end goal. For example, a research team might analyze weather patterns in the North Atlantic region, while another team analyzes the South Atlantic region, and both results can be combined to deliver a complete picture of Atlantic weather patterns. While often seen as a large-scale distributed computing endeavor, grid computing can also be leveraged at a local level. For example, a corporation that allocates a set of computer nodes running in a cluster to jointly perform a given task is a simple example of grid computing in action. A specific type of local data grid is an in-memory data grid (IMDG), in which computers are tightly connected via coordination software and a network connection to collectively process data in memory.
Grid Computing In-Memory Data Grids enhance the performance of grid computing by enabling higher throughput and lower latency.
6) Cloud Computing Cloud is defined as the usage of someone else’s servers to host, process or store data. Cloud computing is defined as the type of computing that delivers on-demand computing services over the internet on a pay-as-you-go basis. It is widely distributed, network-based and used for storage. The types of cloud are public, private, hybrid and community, and some cloud providers are Google Cloud, AWS, Microsoft Azure and IBM Cloud.
What is cloud computing? Cloud computing is the delivery of computing services - including servers, storage, databases, networking, software, analytics, and intelligence - over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for the cloud services you use, helping lower your operating costs, run your infrastructure more efficiently and scale as your business needs change.
Benefits of cloud computing Cost: Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site data centers—the racks of servers, the round-the-clock electricity for power and cooling, the IT experts for managing the infrastructure. Speed: Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning. Global scale: The benefits of cloud computing services include the ability to scale elastically. In cloud speak, that means delivering the right amount of IT resources—for example, more or less computing power, storage, bandwidth—right when it is needed and from the right geographic location.
Benefits of cloud computing (Cont..) Productivity: On-site datacenters typically require a lot of “racking and stacking”—hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals. Performance: The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. Reliability: Cloud computing makes data backup, disaster recovery and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network. Security: Many cloud providers offer a broad set of policies, technologies and controls that strengthen your security posture overall, helping protect your data, apps and infrastructure from potential threats.
Types of cloud computing Public cloud: Public clouds are owned and operated by third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser. Private cloud: A private cloud refers to cloud computing resources used exclusively by a single business or organisation. A private cloud can be physically located in the company’s on-site data center. Some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network. Hybrid cloud: Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility and more deployment options, and helps optimise your existing infrastructure, security and compliance.
Types of cloud services: IaaS, PaaS, serverless and SaaS Infrastructure as a service (IaaS) The most basic category of cloud computing services. With IaaS, you rent IT infrastructure - servers and virtual machines (VMs), storage, networks, operating systems - from a cloud provider on a pay-as-you-go basis. Platform as a service (PaaS) Platform as a service refers to cloud computing services that supply an on-demand environment for developing, testing, delivering and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network and databases needed for development. Serverless computing Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs. Software as a service (SaaS) Software as a service is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure and handle any maintenance, like software upgrades and security patching.
Uses of cloud computing Create cloud-native applications: Quickly build, deploy and scale applications—web, mobile and API. Take advantage of cloud-native technologies and approaches, such as containers, Kubernetes, microservices architecture, API-driven communication and DevOps. Test and build applications: Reduce application development cost and time by using cloud infrastructures that can easily be scaled up or down. Store, back up and recover data: Protect your data more cost-efficiently—and at massive scale—by transferring your data over the Internet to an offsite cloud storage system that is accessible from any location and any device. Analyse data: Unify your data across teams, divisions and locations in the cloud. Then use cloud services, such as machine learning and artificial intelligence, to uncover insights for more informed decisions. Stream audio and video: Connect with your audience anywhere, anytime, on any device with high-definition video and audio with global distribution. Embed intelligence: Use intelligent models to help engage customers and provide valuable insights from the data captured. Deliver software on demand: Also known as software as a service (SaaS), on-demand software lets you offer the latest software versions and updates to customers—anytime they need, anywhere they are.
7) What Is Biocomputing? Biocomputing - a cutting-edge field of technology - operates at the intersection of biology, engineering, and computer science. It seeks to use cells or their sub-component molecules (such as DNA or RNA) to perform functions traditionally performed by an electronic computer. The ultimate goal of biocomputing is to mimic some of the biological ‘hardware’ of bodies like ours - and to use it for our computing needs. From less to more complicated, this could include: 1. Using DNA or RNA as a medium of information storage and data processing 2. Connecting neurons to one another, similar to how they are connected in our brains 3. Designing computational hardware from the genome level up
Biocomputing (Cont…) Biocomputing is an emerging field of computer science, biological science and engineering. It is a form of computing that uses DNA and molecular biology instead of traditional silicon-based computer technologies. It is one of the new computational models based on ideas derived from biological research. Using biological computing, problems can be solved in ways that differ from classical computer programming. The concept of using DNA as a support for computation is not exactly a new idea; in fact, it has been speculated upon since the 1950s. The best example of a biocomputer is the human being, where the brain is like the hard drive of a computer: it stores our memory and controls the functions of the body.
8) Mobile Computing Mobile computing is a technology that provides an environment enabling users to transmit data from one device to another without the use of any physical link or cables. Mobile computing allows transmission of data, voice and video via a computer or any other wireless-enabled device without being connected to a fixed physical link. In this technology, data transmission is done wirelessly with the help of wireless devices such as mobiles, laptops, etc. It is only because of mobile computing technology that you can access and transmit data from remote locations without being physically present there. It is one of the fastest and most reliable sectors of the computing technology field. The concept of mobile computing can be divided into three parts: mobile communication, mobile hardware, and mobile software.
Mobile Communication Mobile communication specifies the framework that is responsible for the working of mobile computing technology. In this case, mobile communication refers to an infrastructure that ensures seamless and reliable communication among wireless devices. The mobile communication framework consists of components such as protocols, services, bandwidth, and portals necessary to facilitate and support the stated services, and these components are responsible for delivering a smooth communication process.
Mobile communication can be divided into the following four types: Fixed and Wired Fixed and Wireless Mobile and Wired Mobile and Wireless
Mobile Communication (Cont…) Fixed and Wired: In a fixed and wired configuration, the devices are fixed at a position and are connected through a physical link to communicate with other devices. For example, a desktop computer. Fixed and Wireless: In a fixed and wireless configuration, the devices are fixed at a position and are connected through a wireless link to communicate with other devices. For example, communication towers and WiFi routers. Mobile and Wired: In a mobile and wired configuration, some devices are wired and some are mobile; together they communicate with other devices. For example, laptops. Mobile and Wireless: In a mobile and wireless configuration, the devices can communicate with each other irrespective of their position. They can also connect to any network without the use of any wired device. For example, a WiFi dongle.
Mobile Hardware & Mobile Software Mobile Hardware Mobile hardware consists of mobile devices or device components that can be used to receive or access the service of mobility. Examples of mobile hardware are smartphones, laptops, portable PCs, tablet PCs, personal digital assistants, etc. Mobile Software Mobile software is a program that runs on mobile hardware. It is designed to deal capably with the characteristics and requirements of mobile applications. It is the operating system of the mobile device - in other words, the heart of the mobile system - and is the essential component that operates the device.
Applications of Mobile Computing Web or Internet access. Global Positioning System (GPS). Emergency services. Entertainment services. Educational services.
9) What is a quantum computer? A quantum computer is a type of computer that uses quantum mechanics so that it can perform certain kinds of computation more efficiently than a regular computer can. 23 August 2022 Dr. B.Rajalingam: AI Applications(Unit 1) 60
How a regular computer stores information Now, a regular computer stores information in a series of 0s and 1s. Different kinds of information, such as numbers, text, and images, can be represented this way. Each unit in this series of 0s and 1s is called a bit. So, a bit can be set to either 0 or 1.
Now, what about quantum computers? A quantum computer does not use bits to store information. Instead, it uses something called qubits. Each qubit can not only be set to 0 or 1, but it can also be set to a combination of 0 and 1 at the same time. But what does that mean exactly? Let me explain this with a simple example. This is going to be a somewhat artificial example. But it's still going to be helpful in understanding how quantum computers work.
What is a qubit? A qubit is the basic unit of information in quantum computing. Qubits play a similar role in quantum computing as bits play in classical computing, but they behave very differently. Classical bits are binary and can hold only a value of 0 or 1, but qubits can hold a superposition of all possible states.
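The difference between a bit and a qubit can be sketched with a tiny state-vector simulation. A single qubit is described by two amplitudes (alpha, beta); measuring it yields 0 with probability |alpha|² and 1 with probability |beta|². This is an illustrative toy (the function name `measurement_probs` is this sketch's own), not a real quantum SDK:

```python
import math

# A single qubit as a 2-component state vector (alpha, beta).
# Measurement yields 0 with probability |alpha|^2 and 1 with |beta|^2.
def measurement_probs(alpha: complex, beta: complex):
    norm = abs(alpha) ** 2 + abs(beta) ** 2  # normalize defensively
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# A classical-like state: the qubit is entirely |0>.
print(measurement_probs(1, 0))   # (1.0, 0.0)

# Equal superposition of |0> and |1>: both outcomes equally likely.
h = 1 / math.sqrt(2)
p0, p1 = measurement_probs(h, h)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

The superposition state is genuinely "0 and 1 at once" in the sense that both measurement outcomes carry probability until the qubit is measured.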
What is quantum computing? Quantum computing is a rapidly emerging technology that harnesses the laws of quantum mechanics to solve problems too complex for classical computers. Today, IBM Quantum makes real quantum hardware - a tool scientists only began to imagine three decades ago - available to thousands of developers. Its engineers deliver ever-more-powerful superconducting quantum processors at regular intervals, building toward the quantum computing speed and capacity necessary to change the world. These machines are very different from the classical computers that have been around for more than half a century.
What is quantum computing? Quantum computers harness the unique behaviour of quantum physics - such as superposition, entanglement and quantum interference - and apply it to computing. This introduces new concepts to traditional programming methods.
Why do we need quantum computers? Supercomputers are very large classical computers, often with thousands of classical CPU and GPU cores. However, even supercomputers struggle to solve certain kinds of problems. If a supercomputer gets stumped, that's probably because the big classical machine was asked to solve a problem with a high degree of complexity. When classical computers fail, it's often due to complexity. Complex problems are problems with lots of variables interacting in complicated ways. Modeling the behavior of individual atoms in a molecule is a complex problem, because of all the different electrons interacting with one another. Sorting out the ideal routes for a few hundred tankers in a global shipping network is complex too.
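The shipping-route example gives a feel for this kind of complexity. A brute-force check of every visiting order for n ports requires n! evaluations, and the factorial explodes long before n reaches "a few hundred". The billion-routes-per-second rate below is an assumed figure purely for illustration:

```python
import math

RATE = 1e9  # routes checked per second (illustrative assumption)

def brute_force_years(n: int, rate: float = RATE) -> float:
    """Years needed to evaluate all n! visiting orders at the given rate."""
    return math.factorial(n) / rate / (3600 * 24 * 365)

for n in (10, 15, 20, 25):
    print(f"n={n:2d}: about {brute_force_years(n):.2e} years of brute force")
```

Already at n=20 the exhaustive search takes decades at a billion routes per second, which is why such problems are attacked with heuristics today and are candidates for quantum approaches tomorrow.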
What is quantum AI? Quantum AI is the use of quantum computing for the computation of machine learning algorithms. Thanks to the computational advantages of quantum computing, quantum AI can help achieve results that are not possible to achieve with classical computers.
A.I. and Quantum Computing Quantum computing and artificial intelligence are both transformational technologies, and artificial intelligence is likely to require quantum computing to achieve significant progress. Although artificial intelligence produces functional applications with classical computers, it is limited by their computational capabilities. Quantum computing can provide a computation boost to artificial intelligence, enabling it to tackle more complex problems and move toward AGI.
Why is it important? With the unique features of quantum computing, obstacles to achieving AGI (Artificial General Intelligence) could be eliminated. Quantum computing can be used for the rapid training of machine learning models and to create optimized algorithms. An optimized and stable AI enabled by quantum computing could complete years of analysis in a short time and lead to advances in technology. Neuromorphic cognitive models, adaptive machine learning, and reasoning under uncertainty are some fundamental challenges of today's AI. Quantum AI is one of the most likely solutions for next-generation AI.
What is quantum AI built on? Quantum mechanics is a universal model based on principles different from those observed in daily life. A quantum model of data is needed to process data with quantum computing, and hybrid quantum-classical models are necessary for error correction and the correct functioning of the quantum computer. The key building blocks are: Quantum data. Hybrid quantum-classical models. Quantum algorithms.
Quantum data Quantum data can be considered as data packets contained in qubits. However, observing and storing quantum data is challenging because of the very features that make it valuable: superposition and entanglement. In addition, quantum data is noisy, so machine learning must be applied at the stage of analyzing and interpreting the data correctly.
Applications of Quantum Computing and AI 1. Quantum Computers Solve Complex Problems Quickly 2. Quantum Computers Will Optimize Solutions 3. Quantum Computers Could Spot Patterns in Large Data Sets 4. Quantum Computers Could Help Integrate Data from Different Data Sets 5. Better Business Insights and Models
1. Quantum Computers Solve Complex Problems Quickly Quantum computers will be able to complete calculations within seconds that would take today's computers thousands of years. Google claims to have a quantum computer that is 100 million times faster than any of today's systems. We are going to be able to process the huge amounts of data we generate and solve very complex problems. The key to success is to translate our real-world problems into quantum language. The complexity and size of our data sets are growing faster than our computing resources. While today's computers struggle or are unable to solve some problems, these same problems are expected to be solved in seconds through the power of quantum computing. Quantum computing algorithms also allow us to enhance what's already possible with machine learning.
2. Quantum Computers Will Optimize Solutions Another way quantum computing will facilitate a revolution is in our ability to sample data and optimize all kinds of problems, from portfolio analysis to the best delivery routes, and even to determine the optimal treatment and medicine protocol for every individual. The growth of big data has reached a point where our computer architecture must change, which necessitates a different computational approach to handling big data. Not only is the data larger in scope, but the problems we're trying to solve are very different. Quantum computers are better equipped to solve such optimization problems efficiently. The power they give businesses and even consumers to make better decisions might be just what's needed to convince companies to invest in the new technology when it becomes available.
3. Quantum Computers Could Spot Patterns in Large Data Sets Quantum computing is expected to be able to search very large, unsorted data sets to uncover patterns or anomalies extremely quickly. It might be possible for a quantum computer to access all items in a database at the same time to identify these similarities in seconds. While a comparable search is possible today, it happens with a conventional computer looking at every record one after another, so it takes an incredible amount of time and, depending on the size of the data set, it might never finish.
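The speedup claimed for unstructured search can be quantified. Grover's quantum search algorithm finds one marked item among N unsorted records using on the order of (π/4)·√N oracle queries, versus roughly N lookups for a classical scan. The comparison below is a back-of-the-envelope sketch of the query counts:

```python
import math

def classical_queries(n: int) -> int:
    # Worst case for a scan of unsorted data: inspect every record.
    return n

def grover_queries(n: int) -> int:
    # Optimal number of Grover iterations is close to (pi/4) * sqrt(n).
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**6, 10**9, 10**12):
    print(f"N={n:.0e}: classical ~{classical_queries(n):,} queries, "
          f"quantum ~{grover_queries(n):,} queries")
```

The quadratic gap is the point: at a trillion records, a million or so quantum queries replace a trillion classical lookups, though each quantum query is far from free on real hardware.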
4. Quantum Computers Could Help Integrate Data from Different Data Sets Additionally, quantum computers could be valuable for integrating very different data sets. Although this may be difficult without human intervention at first, human involvement will help the computers learn how to integrate the data in the future. So, if there are different raw data sources with unique schemas attached to them and a research team wants to compare them, a computer would have to understand the relationship between the schemas before the data could be compared. The promise is that quantum computers will allow for quick analysis and integration of our enormous data sets, which will improve and transform our machine learning and artificial intelligence capabilities.
5. Better Business Insights and Models With the increasing amount of data generated in industries like pharmaceuticals, finance, and the life sciences, companies are reaching the limits of classical computing. To build a better data framework, these companies now require complex models with the processing power to represent the most complex situations, and that is where quantum computers play a huge role. Creating better models with quantum technology could lead to better treatments for diseases in the healthcare sector (for example, the COVID-19 research cycle of testing, tracing, and treating the virus), decrease financial implosions in the banking sector, and improve the logistics chain in the manufacturing industry.
10) What is Optical Computing? Optical computing is a computing technology still in the research and theory stage. The idea is to make a computer that relies entirely on light (photons) instead of electricity (electrons) to do computing. The appeal of optical computers is limited because, over short distances, they require more power than electronic computers to do the same computation. Still, optical computing may allow the construction of computers that are physically impossible using electronics. Optical computing is still in the early stages of development: only a few very limited prototypes have been constructed in the lab. An optical computer primarily uses lasers to send signals. Unfortunately, lasers can't interact directly with one another in any meaningful way, so performing computations requires an intermediary in the form of matter. Attempts to make "optical transistors" have tended to revolve around materials that re-emit light selectively in response to the intensity of the incoming light. Putting together these components into a huge web can allow the construction of an optical computer.
Optical Computing (Cont…) Thus far, optics has been enthusiastically adopted for data transmission over long distances, as in fiber optics. Over short distances, however (and this is one of the main downsides of optical computing), the energy loss experienced by the light means that sending a signal requires more power than using electrons over the same distance. Over long distances, light wins out, but part of the point of computers is that they're supposed to be small, and the distances over which light is better (10 ft/3 m or more) are pretty big by the standards of computing. Still, it is conceivable that optic channels could be used in large supercomputers to send data more efficiently than electronics. In theory, optical computing could produce computers tens of thousands of times faster than today's computers, because light can travel that much faster than electric current. In practice, however, the need to use large beams of light to avoid signal loss has precluded that possibility. More recently, researchers at Harvard University found a way to flip a register using only a single photon, a milestone that could open the path to efficient optical computing. The researchers took advantage of plasmons, tiny surface disturbances in a medium that can be created by bombarding it with photons.
Optical Computing (Cont…) Optical computing, like quantum computing, is one of those wild-card technologies: it's one among dozens of approaches being developed in anticipation of running up against physical limits with conventional electronic computing, but it remains to be seen whether it will bear fruit in the longer term. Unless you're working on the technology yourself, all you can do now is wait and watch. All-optical computers offer high speed and high levels of parallelism, but problems of miniaturization and manufacturability must be overcome to move ideas out of the laboratory. Just as fiber-optic technology, with its tremendous speed and bandwidth, has largely replaced electronics in long-distance telecommunication networks, computer scientists have long strived to develop optical, or photonic, versions of electronic computers. Photons have significant potential advantages over electrons: they can interpenetrate each other unaltered, allowing the use of three-dimensional devices that are difficult or impossible to implement electronically; they are far more energy efficient; and, of course, they always move at the speed of light. But researchers trying to turn these potential advantages into realities face very significant challenges: optical computing devices tend to be large and bulky, require careful alignment, and are not easy to miniaturize using existing photolithographic techniques. Such problems are just beginning to be addressed as researchers focus on basic system design ideas, but addressing them is essential to moving the technology out of the laboratory.
Why optical computing? While electronic computers have continued to advance in speed and memory at an exponential rate, doubling their clock rate every few years or so, there are inherent limitations in all electronic devices. First, electrons cannot move through each other, nor can electric currents; they must always be directed through wires of some sort. This means, for example, that three-dimensional interconnections and three-dimensional computers have always been difficult to implement: there would be just too many cross-connecting wires and switches. Yet three-dimensional structures have inherent advantages in density of processing - a 1-cm cubic array of 1-µm transistors could theoretically contain a trillion transistors or the equivalent.
In addition, electrons inevitably generate heat as they move through conductors and semiconductors. This heat must be removed and puts potential limits on the density and speed of chips and multiprocessor computers. And electronic devices operate at speeds far less than the speed of light: a typical clock step today is a few nanoseconds, but the time it takes light to get from one end of a chip to the other is a hundred times less. Optical computers can potentially overcome all three disadvantages. Light can travel through free space without the need for wires or fibers, and photons can travel through each other without alteration. So optical computers can be designed that are inherently three-dimensional and highly parallel. Elements such as three-dimensional holograms can be accessed by many beams simultaneously, and with other interference effects entire memories can be queried instantaneously rather than in serial fashion. Furthermore, energy losses from light traversing free space are negligible, allowing highly energy-efficient devices. And while electro-optic switches can slow down optical computers, some optical computations, again using interference effects, can be performed literally at the speed of light. But optical computing has serious obstacles as well. It is difficult to fabricate optical elements that are very small, so most laboratory systems are bench-top-sized, not chip-sized. New techniques have been developed to implement optical elements with silicon photolithography, but they are still relatively immature. In addition, optical elements need tight tolerances to work. And while optical computing can in some cases do what electronic computing does, only better and faster, the truly unique, purely optical capabilities are just now being developed for computing applications.
Putting optical computing to work Given the strengths and weaknesses of optical computing, the main applications are not in replacing general-purpose electronic computers but in narrower niches in which the optical advantages are greatest. The most obvious of these is in interconnecting conventional electronic computer chips or boards (see photo). Potentially, optical interconnects can vastly increase connectivity and reduce communication times for machines with multiple processors. A second important application area is in neural networks. These were first developed to imitate human and animal neural processing and involve self-learning networks for pattern recognition and image processing. Optical methods are ideal for neural networks because they are highly parallel and rely on every unit interacting with every other one. Similarly, shared memory, often in the form of holographic memories, is another way of exploiting the natural parallelism of optical techniques. Although it is clearly a long-term goal, researchers are also looking at more general systems of optical logic that could execute any arbitrary program implementable on a conventional chip. Even more exciting, a number of groups are looking at new ways of computing, using the quantum and interference properties of light to eliminate many of the intermediate steps involved in conventional Turing-machine computers.
Spatial light modulators A crucial element of virtually any optical-computing system is a way to encode information efficiently on a lightwave. To take advantage of the three-dimensional nature of optical computing, such encoding is typically done by varying the amplitude or phase of a lightwave over a plane consisting of individual modulators or "smart pixels." Such spatial light modulators (SLMs) can be electro-optical, in which an electrical signal is used to modulate the light wave, or purely optical, in which one light signal is used to modulate another. Spatial light modulators use a variety of physical principles, essentially all those that are used in any kind of optical modulator. A simple example is a Mach-Zehnder interferometer, in which a beam is split in two and an electric field is applied to one optical path before the two are recombined. By varying the phase of one leg with the applied voltage, constructive or destructive interference can be selected. Using the Kerr effect, the same result can be obtained by using an input light to change the refractive index of one leg, making an all-optical switch. Similarly, magneto-optical or acousto-optical effects can be used for other electro-optical SLMs.
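The Mach-Zehnder switching behaviour described above follows a simple formula: for an ideal, lossless interferometer, the fraction of input light reaching one output port is cos²(Δφ/2), where Δφ is the phase shift applied to one arm. A minimal numerical sketch:

```python
import math

def output_intensity(phase_shift: float) -> float:
    """Fraction of light at one output port of an ideal Mach-Zehnder
    interferometer, given the phase shift (radians) applied to one arm."""
    return math.cos(phase_shift / 2) ** 2

print(output_intensity(0))        # constructive interference -> 1.0 (switch on)
print(output_intensity(math.pi))  # destructive interference -> ~0 (switch off)
```

Sweeping the phase from 0 to π with the applied voltage (or, via the Kerr effect, with a control light beam) moves the device continuously between fully "on" and fully "off" - exactly the switching action a modulator needs.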
Spatial light modulators (Cont…) The most developed technology for arrays of spatial light modulators is liquid crystals, which are used in ubiquitous displays. In such liquid-crystal displays (LCDs), an electric field alters the polarization properties of the liquid-crystal molecules, either allowing or blocking light transmission. To create an all-optical switch, a photoconductive layer is added to a liquid-crystal cell. The "write" light signal, coming from one side, changes the potential on the photoconductive layer, thus creating a shift in the transmittance of the "read" light wave coming from the opposite direction.
Optical interconnects The most obvious application for optical-computing technology is in board-to-board interconnects on large-scale parallel processors. In such interconnects, the smart-pixel arrays are driven by electronic commands from the parallel processor but use optical switching in free space to do the interconnection. Many of the early versions of such interconnects used a simple system with five elements: a square array of light sources such as LEDs or lasers is focused by a single lens onto a smart-pixel array that is modulated with the output from one board, and a second lens images the output of the first array onto a detector array attached to the second board. This is the optical equivalent of hardwired connections. However, more sophisticated approaches are also being tried. For example, SLMs combined with sets of cylindrical lenses can create crossbar switches, connecting any output in one line with an input in another line. In this approach, a vertical cylindrical lens fans out the light from each modulated spot across a row of SLMs, while a second, horizontal cylindrical lens focuses the outputs of the SLMs onto a row of detectors. In this way, if, say, all but one SLM is closed in a given row, that determines the connection between a single element in the line of emitters and the corresponding element in the line of detectors. An advantage of this device is that data from one output can be linked to several inputs instantly, or the outputs of several sources can be additively combined at a single input.
Neural nets and image processing A second application in which the optical capability for many-to-many interconnections is highly desirable is in neural networks. Neural networks, very roughly based on biological models, are systems in which processing units are interconnected by different weighting functions, which alter in response to inputs. A typical neural network consists of three layers: an input layer, a hidden layer, and an output layer (see Fig. 3). Each processor or node "fires" in response to its inputs from the preceding layer, which are summed according to the system of weights. During training, "correct" responses, for example identifying a given pattern correctly as an aircraft, lead to increased weights. Over time, the neural network adjusts its own weights, training itself to recognize patterns that are similar but not identical.
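The three-layer, weighted-sum-then-fire behaviour described above can be sketched in a few lines. The weights here are arbitrary illustrative numbers, not trained values, and the sigmoid activation is one common choice among several:

```python
import math

def sigmoid(x: float) -> float:
    """Squash a weighted sum into the (0, 1) firing range."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    # Each node sums its weighted inputs, then "fires" through the activation.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

W_hidden = [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]]  # 2 inputs -> 3 hidden nodes
W_output = [[1.0, -1.0, 0.5]]                      # 3 hidden -> 1 output node

hidden = layer([0.9, 0.4], W_hidden)  # input layer -> hidden layer
output = layer(hidden, W_output)      # hidden layer -> output layer
print(output)                         # a single firing level in (0, 1)
```

Training would repeatedly nudge the numbers in `W_hidden` and `W_output` toward values that produce the "correct" output, which is exactly the weight-adjustment process the slide describes; optics is attractive here because every node talks to every node in the next layer.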
11) Nanocomputing Nanocomputing describes computing that uses extremely small, or nanoscale, devices (one nanometer [nm] is one billionth of a meter). In 2001, state-of-the-art electronic devices could be as small as about 100 nm, which is about the same size as a virus. The integrated circuit (IC) industry, however, looks to the future to determine the smallest electronic devices possible within the limits of computing technology. Until the mid-1990s, the term "nanoscale" generally denoted circuit features smaller than 100 nm. Since the IC industry began building commercial devices at such size scales at the beginning of the 2000s, the term "nanocomputing" has been reserved for device features well below 50 nm, down to the size of individual molecules, which are only a few nm. Scientists and engineers are only beginning to conceive new ways to approach computing using extremely small devices and individual molecules.
11) Nanocomputing (Cont…) All computers must operate by basic physical processes. Contemporary digital computers use currents and voltages in tens of millions of complementary metal oxide semiconductor (CMOS) transistors covering a few square centimeters of silicon. If device dimensions could be scaled down by a factor of 10 or even 100, then circuit functionality per unit area would increase 100 to 10,000 times. Furthermore, if a new device or computer architecture were developed alongside such scaling, it might lead to millionfold increases in computing power. Such circuits would consume far less power per function, increasing battery life and shrinking the boxes and fans necessary to cool circuits. They would also be remarkably fast and able to perform calculations that are not yet possible on any computer. Benefits of significantly faster computers include more accuracy in predicting weather patterns, recognizing complex figures in images, and developing artificial intelligence (AI). Potentially, single-chip memories containing thousands of gigabytes of data will be developed, capable of holding entire libraries of books, music, or movies. Modern transistors are engineering marvels, requiring hundreds of careful processing steps performed in ultraclean environments. Today's transistors operate with microampere currents and only a few thousand electrons generating the signals, but as they are scaled down, fewer electrons are available to create the large voltage swings required of them. This compels scientists and engineers to seek new physical phenomena that will allow information processing to occur using mechanisms other than those currently employed for transistor action.
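The scaling numbers on this slide are plain area arithmetic: shrinking each linear device dimension by a factor k fits k² more devices into the same chip area, which is where the quoted 100x and 10,000x gains come from:

```python
def density_gain(k: float) -> float:
    """Devices per unit area gained by shrinking each linear dimension by k."""
    return k ** 2

for k in (10, 100):
    print(f"{k}x smaller devices -> {density_gain(k):,.0f}x more functionality per area")
```

The further "millionfold" figure in the text assumes that a new device or architecture multiplies this geometric gain, rather than following from area scaling alone.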
Future nanocomputers could be evolutionary, scaled-down versions of today's computers, working in essentially the same ways and with similar but nanoscale devices. Or they may be revolutionary, being based on some new device or molecular structure not yet developed. Research on nano-devices is aimed at learning the physical properties of very small structures and then determining how these can be used to perform some kind of computing functions. Current nanocomputing research involves the study of very small electronic devices and molecules, their fabrication, and architectures that can benefit from their inherent electrical properties. Nanostructures that have been studied include semiconductor quantum dots, single electron structures, and various molecules. Very small particles of material confine electrons in ways that large ones do not, so that the quantum mechanical nature of the electrons becomes important. Quantum dots behave like artificial atoms and molecules in that the electrons inside of them can have only certain values of energy, which can be used to represent logic information robustly. Another area is that of "single electron devices," which, as the name implies, represent information by the behavior of only one, single electron. The ultimate scaled-down electronic devices are individual molecules on the size scale of a nm. Chemists can synthesize molecules easily and in large quantities; these can be made to act as switches or charge containers of almost any desirable shape and size. One molecule that has attracted considerable interest is that of the common deoxyribonucleic acid (DNA), best known from biology. Ideas for attaching smaller molecules, called "functional groups," to the molecules and creating larger arrays of DNA for computing are under investigation. These are but a few of the many approaches being considered. 
In addition to discovering new devices on the nanoscale, it is critically important to devise new ways to interconnect these devices for useful applications. One potential architecture is called cellular neural networks (CNN), in which devices are connected to their neighbors; as inputs are provided at one edge, the interconnects cause a change in the devices that sweeps like a wave across the array, producing an output at the other edge. An extension of the CNN concept is that of quantum-dot cellular automata (QCA). This architecture uses arrangements of single electrons that communicate with each other by Coulomb repulsion over large arrays. The arrangement of electrons at the edges provides the computational output. The electron arrangements of QCA are controlled by an external clock and operate according to the rules of Boolean logic. Another potential architecture is that of "crossbar switching," in which molecules are placed at the intersections of nanometer-scale wires. These molecules provide coupling between the wires and supply computing functionality.
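The "wave sweeping across the array" behaviour of cellular architectures can be mimicked with a toy one-dimensional cell array. The update rule here (each cell copies its left neighbour) is a deliberately simple stand-in for real CNN or QCA dynamics; the point is only that purely local, neighbour-to-neighbour coupling carries an edge input across the whole array:

```python
def step(cells):
    # Each cell adopts the state of its left neighbour; the leftmost cell
    # keeps holding the externally applied input.
    return [cells[0]] + cells[:-1]

cells = [1, 0, 0, 0, 0]  # input injected at the left edge
for t in range(4):
    cells = step(cells)
    print(cells)
# The signal sweeps rightward one cell per step until it reaches the far edge.
```

In a QCA array, the analogous local coupling is Coulomb repulsion between single-electron arrangements, with a clock pacing the sweep.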
The fabrication of these nanoscale systems is also a critical area of investigation. Current ICs are manufactured in a parallel process in which short-wavelength light exposes an entire IC in one flash, taking only a fraction of a second. Serial processes, in which each device is exposed separately, were too slow as of early 2002 to expose billions of devices in a reasonable amount of time. Serial processes capable of attaining nanometer, but not molecular, resolution include using beams of electrons or ions to write patterns on an IC. Atomic resolution can be achieved by using currents from very sharp tips, a process called scanning probe lithography, to write on surfaces one atom at a time, but this technique is too slow for manufacturing unless thousands of tips can be used in parallel. It is therefore reasonable to search for nanoscale particles, such as molecules, that do not require difficult fabrication steps. An alternative to the direct patterning of nanoscale system components is self-assembly, a process in which small particles or molecules arrange themselves. Regardless of the method used to create arrays of nanostructures, organizing the nanodevices into useful architectures, getting data in and out, and performing computing are problems that have not yet been solved. In summary, nanocomputing technology has the potential to revolutionize the way that computers are used. However, in order to achieve this goal, major progress in device technology, computer architectures, and IC processing must first be accomplished. It may take decades before revolutionary nanocomputing technology becomes commercially feasible.
Nanocomputing is a term used for the representation and manipulation of data by computers smaller than a microcomputer. Current devices already use transistors with channels below 100 nanometers in length, and the current goal is to produce computers smaller than 10 nanometers. Future developments in nanocomputing will have to resolve the current difficulties of building computing technology at the nanoscale. For example, present nanosized transistors exhibit a quantum tunneling effect, in which electrons "tunnel" through barriers, making them unsuitable for use as a standard switch. The increased computing power offered by nanocomputers would allow the solution of exponentially difficult real-world problems. Nanocomputing also has the advantage that devices can be made to fit into any environment, including the human body, while being undetectable to the naked eye. The small size of the devices would allow processing power to be shared by thousands of nanocomputers. Nanocomputing in the form of DNA nanocomputers and quantum computers will require different technology than current microcomputing techniques but offers its own benefits.
DNA nanocomputing
Nanocomputing can be realized with a number of nanoscale structures, including biomolecules such as DNA and proteins. Because DNA functions through a coding system of four nucleobases, it is well suited to data processing. DNA nanocomputers could solve problems faster through the ability to explore all potential solutions simultaneously, in contrast to conventional computers, which solve problems by exploring solution paths one at a time in a series of steps. Solutions to difficult problems would no longer be constrained by processing time. DNA can provide this level of computing ability at the nanoscale because of the endless possible rearrangements of DNA through gene-editing technology; the large number of random genetic-code combinations can be used to process candidate solutions simultaneously, which is necessary for solving exponentially difficult real-world problems. Practical applications of this theoretical technology will require the ability to control and program DNA flexibly. The earliest applications of DNA to computing will likely be in the form of transistor switches, overcoming current microcomputing problems such as transistor tunneling. Biomolecular switches would control electron flow for computation through a change in the composition of the DNA molecules or by adapting the amount of light scattered by the biomolecules. Alternative transistors have already been developed using DNA for biological nanocomputers. A DNA switch could be genetically programmed to produce or inhibit the production of a protein, allowing the development of biological functions that can compute disease diagnostics.
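The "explore every candidate at once, then filter" strategy described above can only be simulated serially on a conventional machine, but a toy sketch makes the contrast concrete. The subset-sum problem and the values below are invented for illustration; this is a classical simulation of the Adleman-style generate-and-filter idea, not a model of DNA chemistry.

```python
# A DNA computer would encode every candidate solution at once; a conventional
# program can only simulate that "generate every candidate, then filter"
# strategy one candidate at a time. Toy subset-sum example (illustration only).

from itertools import combinations

def generate_and_filter(numbers, target):
    """Generate every subset (as DNA strands would encode every candidate),
    then filter out the ones that do not sum to the target."""
    solutions = []
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:      # the "filtering" step
                solutions.append(subset)
    return solutions

print(generate_and_filter([3, 5, 8, 13], 16))  # → [(3, 13), (3, 5, 8)]
```

The serial loop visits all 2^n subsets one by one, which is exactly the exponential cost that massively parallel DNA encoding is hoped to sidestep.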
With advances in technology, new technology-related terms appear regularly, and one that is used extensively these days is nanocomputing: the representation and manipulation of data by computers smaller than a microcomputer. Today's devices already employ transistors with channels below 100 nanometers in length, and the aim now is to build computers smaller than 10 nanometers. The difficulties of building computing technology at the nanoscale can be addressed through nanocomputing, and real-world problems that were previously intractable can be tackled with the increased computing power of nanocomputers. Because nanocomputers are extremely small, they can fit into virtually any environment, including the human body. Two categories of these extremely small computers deserve particular attention: DNA nanocomputers and quantum computers.
DNA nanocomputers – As with DNA in the human body, nanocomputing can be produced by a number of nanoscale structures, including biomolecules such as DNA and proteins. What sets DNA nanocomputers apart is that they can solve problems faster by exploring all potential solutions simultaneously, whereas conventional computers explore solution paths one at a time in a series of steps. Additionally, by virtue of the endless possible rearrangements of DNA through gene-editing technology, DNA has the potential to provide computing ability at the nanoscale without being constrained by processing time.
Quantum computing – Quantum computing holds the potential to store and manipulate data by exploiting the dynamics of subatomic particles. The abilities of quantum computers surpass those of conventional computers. Governed by the laws of quantum mechanics rather than classical physics, quantum computers can compute solutions to problems with greater speed while requiring less space.
Applications of DNA computing:
Overcoming current microcomputing problems such as transistor tunneling
Transistor switching
Computing disease diagnostics via a DNA switch, which could be genetically programmed to produce or inhibit the production of a protein
Biological nanocomputers
12) Utility Computing
Utility computing is the type of computing in which a service provider supplies the needed resources and services to the customer and charges for them according to usage, as required and demanded, rather than at a fixed rate. Utility computing involves renting resources such as hardware and software depending on demand and requirement. The goal of utility computing is to increase the utilization of resources and to be more cost-efficient.
What is utility computing?
Utility computing is a service provisioning model in which a provider makes computing resources, infrastructure management and technical services available to customers as they need them. The provider then charges the customer for the amount of services they use rather than a flat-rate fee. Like other types of on-demand computing - such as grid computing - the utility model seeks to maximize efficient resource use, minimize associated costs, or both. The word utility draws an analogy to other services, such as electrical power, that meet fluctuating customer demand and charge for resources based on usage. This approach, sometimes known as pay-per-use or metered services, is becoming increasingly common in enterprise computing and is sometimes offered to consumers for internet service, website access, file sharing and other applications.
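The pay-per-use model above can be sketched as simple billing arithmetic. All rates and usage figures below are invented for the example; real providers meter many more dimensions.

```python
# Hypothetical illustration of pay-per-use (metered) billing versus a
# flat-rate fee. Rates and usage numbers are invented for this sketch.

def metered_cost(usage_hours: float, gb_stored: float,
                 rate_per_hour: float = 0.05, rate_per_gb: float = 0.02) -> float:
    """Charge only for what was actually consumed."""
    return usage_hours * rate_per_hour + gb_stored * rate_per_gb

flat_rate = 100.00                      # fixed monthly fee, regardless of usage
light_month = metered_cost(200, 50)     # 200 compute-hours, 50 GB stored
heavy_month = metered_cost(3000, 800)   # heavy usage

print(f"light month: ${light_month:.2f}")   # 200*0.05 + 50*0.02 = $11.00
print(f"heavy month: ${heavy_month:.2f}")   # 3000*0.05 + 800*0.02 = $166.00
```

In a light month the metered customer pays far less than the flat rate, while in a heavy month they pay more: the provider's metering matches charges to actual demand, which is the whole point of the utility analogy.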
Utility computing examples
Virtually any activity performed in a data center can be replicated in a utility computing offering. Services available include the following:
access to file, application and web servers;
infrastructure as a service, software as a service and platform as a service;
virtually unlimited processing power and computation storage space;
support for customer computing applications;
storage space for data, databases and applications;
cloud storage and cloud computing;
utility services, like power, heating, ventilation and air conditioning (HVAC), and communications;
general IT technical expertise; and
specialized expertise, such as ransomware response and application development.
Utility computing benefits
reduction in capital costs to obtain hardware, software and specialized assets -- such as intrusion detection systems and cybersecurity applications -- that support IT operations;
cost savings from cutting down the floor space needed to house equipment racks, power supplies and HVAC systems;
lower costs for power, HVAC and physical security previously required in data centers and other IT facilities that are no longer needed;
the availability of virtually unlimited computing and storage resources to meet unexpected demand;
reduced IT staffing costs, because the managed service provider's employees replace in-house IT staff;
flexibility to deploy resources only when needed; and
competitive advantages from being able to more easily introduce new products and services.
Utility Computing
Utility computing risks
limited access to vendor facilities and resources;
reluctance or refusal of vendors to discuss how they handle customer needs;
inability to actively manage and maintain IT equipment and systems;
vendor data breaches that could impact customer systems and data;
colocation of customer systems and data with other customers;
theft of customer data by rogue vendor employees;
damage to customer systems and applications by rogue vendor employees;
accidents that disrupt customer operations;
reluctance of vendors to honor -- or even agree to -- customer service-level agreements;
reluctance or refusal of vendors to conduct disaster recovery tests of client resources; and
reluctance or refusal of vendors to comply with audit requests.
13) Edge Computing
Edge computing is the type of computing focused on reducing long-distance communication between the client and the server. This is done by running fewer processes in the cloud and moving those processes onto a user's computer, an IoT device, or an edge device/server. The goal of edge computing is to bring computation to the network's edge, which narrows the gap between data producers and processing and results in faster, closer interaction.
What is edge computing?
Gartner defines edge computing as "a part of a distributed computing topology in which information processing is located close to the edge—where things and people produce or consume that information." At its most basic level, edge computing brings computation and data storage closer to the devices where the data is being gathered, rather than relying on a central location that can be thousands of miles away. This is done so that data, especially real-time data, does not suffer latency issues that can affect an application's performance. In addition, companies can save money by having the processing done locally, reducing the amount of data that needs to be sent to a centralized or cloud-based location. Think about devices that monitor manufacturing equipment on a factory floor or an internet-connected video camera that sends live footage from a remote office. While a single device producing data can transmit it across a network quite easily, problems arise when the number of devices transmitting data at the same time grows. Instead of one video camera transmitting live footage, multiply that by hundreds or thousands of devices. Edge-computing hardware and services help solve this problem by providing a local source of processing and storage for many of these systems. An edge gateway, for example, can process data from an edge device and then send only the relevant data back through the cloud.
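The edge-gateway idea above can be sketched in a few lines: process readings locally and forward only the "relevant" ones upstream. The threshold and the notion of relevance are invented for this illustration.

```python
# Sketch of an edge gateway: filter sensor readings locally and forward only
# anomalous values to the cloud; everything else is reduced to a compact
# local summary. Threshold and data are assumptions made for the example.

def edge_filter(readings, threshold=75.0):
    """Keep only readings above the alert threshold; summarize the rest."""
    relevant = [r for r in readings if r > threshold]
    summary = {"count": len(readings),
               "mean": sum(readings) / len(readings) if readings else 0.0}
    return relevant, summary

sensor_readings = [62.1, 64.0, 90.5, 61.7, 88.2, 63.3]
to_cloud, local_summary = edge_filter(sensor_readings)
print(to_cloud)        # only the two anomalous readings cross the network
print(local_summary)   # the rest stays local as a small summary
```

Instead of six raw readings, only two values (plus a tiny summary) would traverse the wide-area link, which is the bandwidth saving the text describes.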
Edge Computing
How does edge computing work?
The physical architecture of the edge can be complicated, but the basic idea is that client devices connect to a nearby edge module for more responsive processing and smoother operations. Edge devices can include IoT sensors, an employee's notebook computer, a smartphone, security cameras or even the internet-connected microwave oven in the office break room. In an industrial setting, the edge device can be an autonomous mobile robot or a robot arm in an automotive factory. In health care, it can be a high-end surgical system that gives doctors the ability to perform surgery from remote locations. Edge gateways themselves are considered edge devices within an edge-computing infrastructure. Terminology varies, so you might hear the modules called edge servers or edge gateways. While many edge gateways or servers will be deployed by service providers looking to support an edge network (Verizon, for example, for its 5G network), enterprises looking to adopt a private edge network will need to consider this hardware as well.
How to buy and deploy edge computing systems
The way an edge system is purchased and deployed can vary widely. On one end of the spectrum, a business might want to handle much of the process on its end. This would involve selecting edge devices, probably from a hardware vendor like Dell, HPE or IBM, architecting a network adequate to the needs of the use case, and buying management and analysis software. That is a lot of work and would require a considerable amount of in-house IT expertise, but it could still be an attractive option for a large organization that wants a fully customized edge deployment. On the other end of the spectrum, vendors in particular verticals are increasingly marketing edge services that they will manage for you. An organization that wants to go this route can simply ask a vendor to install its own hardware, software and networking and pay a regular fee for use and maintenance. IIoT offerings from companies like GE and Siemens fall into this category. This approach has the advantage of being easy and relatively headache-free in terms of deployment, but heavily managed services like this might not be available for every use case.
What are some examples of edge computing?
Just as the number of internet-connected devices continues to climb, so does the number of use cases where edge computing can either save a company money or take advantage of extremely low latency. Verizon Business, for example, describes several edge scenarios including end-of-life quality control processes for manufacturing equipment; using 5G edge networks to create popup network ecosystems that change how live content is streamed with sub-second latency; using edge-enabled sensors to provide detailed imaging of crowds in public spaces to improve health and safety; automated manufacturing safety, which leverages near real-time monitoring to send alerts about changing conditions to prevent accidents; manufacturing logistics, which aims to improve efficiency through the process from production to shipment of finished goods; and creating precise models of product quality via digital twin technologies to gain insights from manufacturing processes. The hardware required for different types of deployment will differ substantially. Industrial users, for example, will put a premium on reliability and low latency, requiring ruggedized edge nodes that can operate in the harsh environment of a factory floor, and dedicated communication links (private 5G, dedicated Wi-Fi networks or even wired connections) to achieve their goals. Connected agriculture users, by contrast, will still require a rugged edge device to cope with outdoor deployment, but the connectivity piece could look quite different – low latency might still be a requirement for coordinating the movement of heavy equipment, but environmental sensors are likely to have both higher range and lower data requirements. An LP-WAN connection, Sigfox or the like could be the best choice there. Other use cases present different challenges entirely.
Retailers can use edge nodes as an in-store clearinghouse for a host of different functionality, tying point-of-sale data together with targeted promotions, tracking foot traffic, and more for a unified store management application. The connectivity piece here could be simple – in-house Wi-Fi for every device – or more complex, with Bluetooth or other low-power connectivity servicing traffic tracking and promotional services, and Wi-Fi reserved for point-of-sale and self-checkout.
What are the benefits of edge computing?
For many companies, cost savings alone can be a driver to deploy edge computing. Companies that initially embraced the cloud for many of their applications may have discovered that bandwidth costs were higher than expected, and are looking for a less expensive alternative. Edge computing might be a fit. Increasingly, though, the biggest benefit of edge computing is the ability to process and store data faster, enabling more efficient real-time applications that are critical to companies. Before edge computing, a smartphone scanning a person's face for facial recognition would need to run the facial recognition algorithm through a cloud-based service, which would take a lot of time to process. With an edge computing model, the algorithm could run locally on an edge server or gateway, or even on the smartphone itself. Applications such as virtual and augmented reality, self-driving cars, smart cities and even building-automation systems require this level of fast processing and response.
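The latency argument above is simple arithmetic: an edge node may have slower hardware than a cloud data center, yet still respond sooner because the network round trip is so much shorter. All numbers below are invented for illustration.

```python
# Back-of-envelope latency comparison: processing on a nearby edge node vs.
# calling a distant cloud service. The figures are hypothetical.

def total_latency_ms(network_rtt_ms, processing_ms):
    """End-to-end response time = network round trip + compute time."""
    return network_rtt_ms + processing_ms

cloud = total_latency_ms(network_rtt_ms=120, processing_ms=30)  # far data center
edge  = total_latency_ms(network_rtt_ms=5,   processing_ms=45)  # nearby, slower HW

print(f"cloud: {cloud} ms, edge: {edge} ms")  # cloud: 150 ms, edge: 50 ms
```

Even though the (hypothetical) edge node takes 50% longer to compute, the response arrives three times sooner, which is why real-time applications favor the edge.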
14) Fog Computing
Fog computing is a decentralized infrastructure that places storage and processing components at the edge of the cloud, where data sources such as application users and sensors exist. Fog computing is the type of computing that acts as a computational layer between the cloud and the data-producing devices; it is also called "fogging." This structure enables users to place resources, data and applications at locations closer to one another. The goal of fog computing is to improve overall network efficiency and performance.
Fog Computing Architecture
Fog Computing Components
Fog Computing
What Is Fog Computing?
1. Physical & virtual nodes (end devices)
End devices serve as the points of contact to the real world, be they application servers, edge routers, end devices such as mobile phones and smartwatches, or sensors. These devices are data generators and can span a large spectrum of technology, which means they may have varying storage and processing capacities and different underlying software and hardware.
2. Fog nodes
Fog nodes are independent devices that pick up the generated information. Fog nodes fall under three categories: fog devices, fog servers, and gateways. Fog devices store necessary data, while fog servers also compute on this data to decide a course of action; fog devices are usually linked to fog servers. Fog gateways redirect information between the various fog devices and servers. This layer is important because it governs the speed of processing and the flow of information. Setting up fog nodes requires knowledge of varied hardware configurations, the devices they directly control, and network connectivity.
3. Monitoring services
Monitoring services usually include application programming interfaces (APIs) that keep track of the system's performance and resource availability. Monitoring systems ensure that all end devices and fog nodes are up and that communication isn't stalled. Sometimes, waiting for a node to free up may be more expensive than hitting the cloud server; the monitor takes care of such scenarios. Monitors can also be used to audit the current system and predict future resource requirements based on usage.
4. Data processors
Data processors are programs that run on fog nodes. They filter, trim, and sometimes even reconstruct faulty data that flows from end devices. Data processors decide what to do with the data — whether it should be stored locally on a fog server or sent for long-term storage in the cloud. Information from varied sources is homogenized by these processors for easy transportation and communication.
5. Resource manager
Fog computing consists of independent nodes that must work in a synchronized manner. The resource manager allocates and deallocates resources to various nodes and schedules data transfer between nodes and the cloud. It also takes care of data backup, ensuring zero data loss. Since fog components take on some of the SLA commitments of the cloud, high availability is a must. The resource manager works with the monitor to determine when and where demand is high, ensuring that there is no redundancy of data or of fog servers.
6. Security tools
Since fog components directly interact with raw data sources, security must be built into the system even at the ground level. Encryption is a must, since all communication tends to happen over wireless networks. In some cases, end users directly ask the fog nodes for data, so user and access management is part of the security effort in fog computing.
7. Applications
Applications provide the actual services to end users. They use the data provided by the fog computing system to deliver quality service while ensuring cost-effectiveness. It is important to note that these components must be governed by an abstraction layer that exposes a common interface and a common set of protocols for communication, usually achieved using web services such as APIs.
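The data-processor component described above (filter, trim, then route data either to local fog storage or to the cloud) can be sketched as follows. The routing rule (recent data stays local, old data goes to the cloud) and the validity range are assumptions invented for this example.

```python
# Illustrative sketch of a fog "data processor": clean incoming records, then
# decide whether each one stays on the local fog server or is forwarded to the
# cloud for long-term storage. Routing rule and bounds are assumed, not from
# any specific fog platform.

import time

LOCAL_RETENTION_S = 3600  # keep the last hour of data on the fog node (assumed)

def process(record, now=None):
    """Filter/trim a raw record, then route it to 'fog', 'cloud' or 'discard'."""
    now = now if now is not None else time.time()
    # Filter: drop records with missing or clearly faulty values.
    if record.get("value") is None or not (-50 <= record["value"] <= 150):
        return ("discard", record)
    # Trim: keep only the fields downstream consumers need.
    trimmed = {"id": record["id"], "value": record["value"], "ts": record["ts"]}
    # Route: fresh data is served locally; older data goes to the cloud.
    dest = "fog" if now - record["ts"] < LOCAL_RETENTION_S else "cloud"
    return (dest, trimmed)

now = 1_000_000
print(process({"id": 1, "value": 22.5, "ts": now - 60, "raw": "..."}, now))
# → ('fog', {'id': 1, 'value': 22.5, 'ts': 999940})
```

In a real deployment this logic would run on each fog node, with the resource manager and monitor deciding where the "cloud" records are actually shipped.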
Examples and Use Cases of Fog Computing
1. Smart homes
One of the most common fog computing use cases is the smart home. A smart home consists of a technology-controlled ventilation and heating system such as the Nest Learning Thermostat, smart lighting, programmable shades and sprinklers, smart intercom systems to communicate with people indoors as well as those at the door, and an intelligent alarm system. Fog computing can be used to create a personalized alarm system. It can also automate certain events, such as turning on water sprinklers based on time and temperature.
2. Smart cities
Smart cities aspire to be automated on every front, from garbage collection to traffic management. Fog computing is particularly pertinent to traffic regulation. Sensors are set up at traffic signals and road barriers to detect pedestrians, cyclists, and vehicles, measure how fast they are traveling, and estimate how likely a collision is. These sensors use wireless and cellular technology to collate the data, and traffic signals automatically turn red or stay green longer based on the information processed from them.
3. Video surveillance
The most prevalent example of fog computing is perhaps video surveillance, given that continuous streams of video are large and cumbersome to transfer across networks. The nature of the data involved results in latency problems and network challenges, and the costs of storing media content also tend to be high. Video surveillance is used in malls and other large public areas and has also been implemented in the streets of numerous communities. Fog nodes can detect anomalies in crowd patterns and automatically alert authorities if they notice violence in the footage.
4. Healthcare
The healthcare industry is one of the most heavily regulated industries, with regulations such as HIPAA being mandatory for hospitals and healthcare providers.
This sector is always looking to innovate and to address emergencies in real time, such as a drop in vitals. One way of doing this is using data from wearables, blood glucose monitors, and other health apps to look for signs of bodily distress. This data should not face any latency issues, as even a few seconds of delay can make a huge difference in a critical situation such as a stroke.
5. Others
Other industries that use fog computing include retail, oil & gas, government & military, and hospitality. Personal assistants such as Siri and Alexa are available across devices and are compatible with most of them, including smartwatches. This flexibility and presence mean that we can count on fog computing to become a crucial part of various industry verticals. Any enterprise that offers real-time solutions will need to incorporate fog computing into its existing cloud infrastructure.