Data flow architecture

Slide Content

A SEMINAR ON: DATAFLOW COMPUTER. PAPER NAME – COMPUTER ARCHITECTURE. PAPER CODE – IT502

CONTENTS:
1. Introduction
2. Objective
3. Dataflow Principle
4. Features of Dataflow Computers
5. Dataflow Graph & Example with an Expression
6. Dataflow Architecture
7. Models of Dataflow Architecture
8. Compiler, Program & Instruction
9. References

INTRODUCTION: Dataflow computers are based on the concept of data-driven computation, which is drastically different from the operation of a conventional von Neumann machine. The fundamental difference is that instruction execution in a conventional computer is under program-flow control, whereas in a dataflow computer it is driven by data (operand) availability. The data-driven concept implies asynchrony: many instructions can be executed simultaneously and asynchronously, so a higher degree of implicit parallelism is expected in a dataflow computer. Because there is no use of shared memory cells, dataflow programs are free from side effects. The Dataflow Principles section reviews the basic principles of the dataflow model. The Dataflow Graphs section gives the representations used in dataflow systems. The Dataflow Architectures section provides a general description of the dataflow architecture, including a comparison of architectural characteristics and the evolutionary improvements in dataflow computing.

OBJECTIVE: Dataflow architecture is a computer architecture that directly contrasts with the traditional von Neumann, or control flow, architecture. It has been successfully implemented in specialized hardware such as digital signal processing, network routing, graphics processing, telemetry, and more recently data warehousing. The main objective and scope of this work is to discuss the principles and uses of dataflow computers.

FEATURES OF DATAFLOW COMPUTERS: Intermediate or final results are passed directly as data tokens between instructions. There is no concept of shared data storage as embodied in the traditional notion of a variable. Program sequencing is constrained only by data dependency among instructions.

DATAFLOW GRAPH: Dataflow graphs can be viewed as the machine language of dataflow computers. A dataflow graph is a directed graph whose nodes correspond to operators and whose arcs forward data tokens. A producing node is connected to a consuming node by an arc, and the point where an arc enters a node is called an input port. The execution of an instruction is called the firing of a node. Data is sent along the arcs of the dataflow graph in the form of tokens, which are created by computational nodes and placed on output arcs.
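As a rough sketch of this representation (the Node and Arc class names and the firing-rule method are invented for illustration, not taken from any particular machine), a dataflow graph can be modelled as operator nodes joined by token-carrying arcs, where a node fires only once every input port holds a token:

```python
# Minimal sketch of a dataflow graph: nodes hold an operator, arcs carry
# data tokens, and a node "fires" (executes its instruction) only when a
# token is present on every one of its input arcs.

class Arc:
    def __init__(self):
        self.tokens = []          # data tokens currently in transit on this arc

class Node:
    def __init__(self, op, inputs, output):
        self.op = op              # operator, e.g. a Python callable
        self.inputs = inputs      # input ports: list of Arc
        self.output = output      # output arc (None for a sink node)

    def ready(self):
        # Firing rule: a token is present on every input port.
        return all(arc.tokens for arc in self.inputs)

    def fire(self):
        # Consume one token per input port, apply the operator, and place
        # the result token on the output arc for the consuming node.
        operands = [arc.tokens.pop(0) for arc in self.inputs]
        result = self.op(*operands)
        if self.output is not None:
            self.output.tokens.append(result)
        return result
```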

EXAMPLE WITH AN EXPRESSION: The figure on this slide illustrates a dataflow graph for evaluating the expression X^2 – 2*X + 3. The subtraction cannot be carried out until the values of X^2 and 2*X are available; as soon as they are computed, the subtraction fires, and its result in turn provides an input to the final addition.
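The same evaluation can be simulated with a small self-contained sketch. The node table, token names, and scheduling loop below are illustrative assumptions, but the firing order matches the slide: the two multiplications fire as soon as X is available, the subtraction fires once both products exist, and the addition fires last.

```python
import operator

# Dataflow evaluation of X**2 - 2*X + 3 for X = 4.
# Each node: (operator, [input token names], output token name). A node fires
# as soon as all of its named inputs have received a value (token).
X = 4
nodes = [
    (operator.mul, ["X", "X"], "X2"),        # X * X
    (operator.mul, ["two", "X"], "twoX"),    # 2 * X
    (operator.sub, ["X2", "twoX"], "diff"),  # fires only after X2 and twoX arrive
    (operator.add, ["diff", "three"], "result"),
]
# Initial tokens on the input arcs. (In a real graph, X would be explicitly
# duplicated by a fork node; the shared dict entry stands in for that here.)
tokens = {"X": X, "two": 2, "three": 3}

fired = set()
while len(fired) < len(nodes):
    for i, (op, ins, out) in enumerate(nodes):
        if i not in fired and all(name in tokens for name in ins):
            tokens[out] = op(*(tokens[name] for name in ins))
            fired.add(i)

print(tokens["result"])   # 4**2 - 2*4 + 3 = 11
```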

DATAFLOW ARCHITECTURE: Dataflow architecture is a computer architecture that directly contrasts with the traditional von Neumann, or control flow, architecture. Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as digital signal processing, network routing, graphics processing, telemetry, and more recently data warehousing.

MODELS OF DATAFLOW ARCHITECTURE: Depending on the way data tokens are handled, dataflow computers are divided into: I. the static model and II. the dynamic model. STATIC DATAFLOW MACHINES: In a static dataflow machine, data tokens are assumed to move along the arcs of the dataflow program graph to the operator nodes. This architecture is considered static because tokens are not labeled, and control tokens must be used to acknowledge the proper timing in transferring data tokens from node to node.
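A minimal sketch of the static firing rule, assuming the common textbook formulation that an arc can hold at most one unlabeled token (the StaticArc and StaticNode names are invented for the sketch):

```python
# Sketch of the static firing rule: at most one (unlabeled) token per arc.
# A node may fire only if every input arc holds a token AND its output arc is
# empty, i.e. the downstream node has already consumed (acknowledged) the
# previous result. Explicit control/acknowledge tokens are left implicit here.

class StaticArc:
    def __init__(self):
        self.token = None                     # at most one token in transit

class StaticNode:
    def __init__(self, op, inputs, output):
        self.op, self.inputs, self.output = op, inputs, output

    def can_fire(self):
        inputs_full = all(arc.token is not None for arc in self.inputs)
        output_free = self.output is None or self.output.token is None
        return inputs_full and output_free    # static-model firing rule

    def fire(self):
        operands = [arc.token for arc in self.inputs]
        for arc in self.inputs:
            arc.token = None                  # consuming acts as the acknowledgement
        if self.output is not None:
            self.output.token = self.op(*operands)
```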

DYNAMIC DATAFLOW MACHINES: A dynamic dataflow machine uses tagged tokens, so that more than one token can exist on an arc. The tagging is achieved by attaching to each token a label which uniquely identifies the context of that particular token. Dynamic dataflow allows greater exploitation of parallelism; however, this advantage comes at the expense of overhead in generating tags, larger data tokens, and the complexity of matching tokens.
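The tag-matching idea can be sketched as follows; the tag layout (destination node plus a context label) and the MatchingUnit name are assumptions for illustration, not a description of a specific machine:

```python
from collections import defaultdict

# Sketch of a dynamic (tagged-token) matching unit. Each token carries a tag
# identifying its destination node and its context (e.g. loop iteration), so
# several tokens for the "same" arc can coexist as long as their tags differ.

class MatchingUnit:
    def __init__(self, arity):
        self.arity = arity                    # operands needed per instruction
        self.store = defaultdict(list)        # tag -> tokens collected so far

    def accept(self, tag, value):
        """Add a token; return the matched operand set once it is complete."""
        self.store[tag].append(value)
        if len(self.store[tag]) == self.arity[tag[0]]:
            return self.store.pop(tag)        # all operands present: fire
        return None

# Example: node "sub" needs two operands; contexts 0 and 1 interleave freely.
mu = MatchingUnit(arity={"sub": 2})
print(mu.accept(("sub", 0), 16))   # None  (waiting for the second operand)
print(mu.accept(("sub", 1), 25))   # None  (different context, kept separately)
print(mu.accept(("sub", 0), 8))    # [16, 8] -> instruction for context 0 fires
```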

COMPILER: Normally, in a control flow architecture, compilers analyze program code for data dependencies between instructions in order to better organize the instruction sequences in the binary output files. Binaries compiled for a dataflow machine contain this dependency information explicitly. A dataflow compiler records these dependencies by creating unique tags for each dependency instead of using variable names.
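A toy illustration of dependency tagging, not a real dataflow compiler: the emit() helper below is hypothetical, assigning a fresh tag to each produced value and recording the tags an instruction consumes in place of variable names.

```python
import itertools

# Toy illustration of dependency tagging: the "compiler" gives every produced
# value a unique tag and records, for each instruction, the tags of the values
# it consumes, instead of relying on named variables in shared memory.

program = []
fresh = itertools.count()

def emit(op, *operand_tags):
    tag = next(fresh)                                   # unique tag for the result
    program.append({"tag": tag, "op": op, "deps": operand_tags})
    return tag

x      = emit("input")            # x supplied from outside the graph
two    = emit("const")            # literal 2
three  = emit("const")            # literal 3
x2     = emit("mul", x, x)        # x * x
twox   = emit("mul", two, x)      # 2 * x
diff   = emit("sub", x2, twox)    # x*x - 2*x
result = emit("add", diff, three) # ... + 3

for ins in program:
    print(ins)
# Each instruction is described purely by dependency tags, so any instruction
# whose dependency tags have produced values may execute.
```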

PROGRAM: Programs are loaded into the CAM (content-addressable memory) of a dynamic dataflow computer. When all of the tagged operands of an instruction become available (that is, output from previous instructions and/or user input), the instruction is marked as ready for execution by an execution unit. Once an instruction is completed by an execution unit, its output data is sent (with its tag) to the CAM. Any instructions that depend on this particular datum (identified by its tag value) are then marked as ready for execution.

INSTRUCTION: An instruction, along with its required data operands, is transmitted to an execution unit as a packet, also called an instruction token. Similarly, output data is transmitted back to the CAM as a data token. The packetization of instructions and results allows for parallel execution of ready instructions on a large scale. Dataflow networks deliver the instruction tokens to the execution units and return the data tokens to the CAM. In contrast to the conventional von Neumann architecture, data tokens are not permanently stored in memory; rather, they are transient messages that only exist while in transit to the instruction storage.
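Putting the last three slides together, here is a simplified, assumed round trip (the "name.port" tag scheme and the send() helper are invented for the sketch): data tokens are matched by tag in a CAM-like store, complete operand sets become instruction tokens for an execution unit, and results return as new data tokens.

```python
# Simplified round trip through a tagged-token machine: data tokens are
# matched in a content-addressable store (CAM), ready instructions are sent
# to an execution unit as instruction tokens, and results return as data
# tokens addressed (by tag) to the instructions that consume them.

PROGRAM = {                      # instruction store: name -> (op, consumer tags)
    "sq":  (lambda a, b: a * b, ["sub.0"]),
    "dbl": (lambda a, b: a * b, ["sub.1"]),
    "sub": (lambda a, b: a - b, ["add.0"]),
    "add": (lambda a, b: a + b, ["out"]),
}

cam = {}                         # instruction name -> operands collected so far
ready = []                       # instruction tokens awaiting an execution unit

def send(tag, value):
    """Deliver a data token; enqueue an instruction token when operands match."""
    if tag == "out":
        print("result:", value)
        return
    name, port = tag.split(".")
    cam.setdefault(name, {})[port] = value
    if len(cam[name]) == 2:                       # both operands present
        operands = cam.pop(name)
        ready.append((name, operands["0"], operands["1"]))

# Initial data tokens for X = 4, evaluating X**2 - 2*X + 3.
for tag, value in [("sq.0", 4), ("sq.1", 4), ("dbl.0", 2), ("dbl.1", 4), ("add.1", 3)]:
    send(tag, value)

while ready:                                      # "execution units" drain the queue
    name, a, b = ready.pop(0)
    op, consumers = PROGRAM[name]
    for dest in consumers:                        # result goes back as data tokens
        send(dest, op(a, b))                      # prints: result: 11
```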

CONCLUSION: The advances made in developing dataflow machines indicate the potential for high-performance computation based on dataflow principles. This is necessary owing to the increased demands of processing complex scientific and technical data. Since such applications require large processing times, dataflow computers may help reduce processing times and thus improve the efficiency and effectiveness of implemented algorithms. However, there are still many issues to be addressed before dataflow computers can be used efficiently.

REFERENCE: International Journal of Networking & Parallel Computing, www.cirworld.com, Volume 1, Issue 2, November 2012.

THANK YOU