The Management Spectrum - The 4 P's: To build a product properly, there is an important concept that everyone involved in software project planning should know. Software project planning rests on four critical components, known as the 4 P's: People, Product, Process, and Project.
People: The most important component of a product and of its successful implementation is human resources. A well-managed team with clear-cut roles defined for each person or sub-team leads to the success of the product, and a good team saves time, cost, and effort. Typical roles in software project planning are project manager, team leaders, stakeholders, analysts, and other IT professionals. Managing people successfully is a tricky process, and it is the job of a good project manager.
Product: As the name implies, this is the deliverable, the result of the project. The project manager should clearly define the product scope to ensure a successful result, coordinate the team members, and handle the technical hurdles that may arise while building the product. The product can be tangible or intangible, for example shifting the company to a new place or introducing new software in the company.
Process: In any plan, a clearly defined process is the key to the success of the product. It regulates how the team will carry out its development work in the given time period. The process involves several phases, such as the documentation phase, implementation phase, deployment phase, and interaction phase.
Project: The last and final P in software project planning is the project itself. Here the project manager plays a critical role: guiding the team members toward the project's targets and objectives, helping and assisting them with issues, keeping an eye on cost and budget, and making sure the project stays on track with the given deadlines.
Metrics for Size Estimation: Lines of Code (LOC) and Function Points (FP).
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project. Common units are:
KLOC - Thousand lines of code
NLOC - Non-comment lines of code
KDSI - Thousands of delivered source instructions
The size is estimated by comparison with existing systems of the same kind: experts predict the required size of the various components of the software and then add them up to get the total size.
It is tough to estimate LOC from the problem definition alone; an accurate LOC count is only available after the whole code has been developed. This makes the metric of little utility to project managers, because project planning must be completed before development activity can begin. Two source files with a similar number of lines may not require the same effort: a file with complicated logic takes longer to create than one with simple logic, so a proper estimate may not be attainable from LOC alone. The metric also differs greatly from one programmer to the next, since a seasoned programmer can write the same logic in fewer lines than a newbie coder.
Advantages:
Universally accepted and used in many models, such as COCOMO.
Estimation is close to the developer's perspective.
Simple to use.
Disadvantages:
Different programming languages need different numbers of lines for the same functionality.
No proper industry standard exists for this technique.
It is difficult to estimate size with this technique in the early stages of a project.
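To make the counting side of this metric concrete, here is a minimal sketch (not part of any estimation standard) that tallies LOC and NLOC for a set of Python source files; the file names and the "#" comment convention are assumptions for illustration only.

```python
# Minimal sketch: count LOC and NLOC (non-comment, non-blank lines)
# for a set of Python source files. The file paths below are
# hypothetical, used only to show the calculation.
from pathlib import Path

def count_loc(paths):
    total_loc = 0    # every physical line
    total_nloc = 0   # lines that are neither blank nor pure comments
    for path in paths:
        for line in Path(path).read_text().splitlines():
            total_loc += 1
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                total_nloc += 1
    return total_loc, total_nloc

if __name__ == "__main__":
    loc, nloc = count_loc(["billing.py", "reports.py"])  # hypothetical files
    print(f"LOC = {loc}, NLOC = {nloc}, KLOC = {loc / 1000:.2f}")
```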
Function Point Analysis: In this method, the number and type of functions supported by the software are used to find the function point count (FPC). The steps in function point analysis are:
1. Count the number of functions of each proposed type.
2. Compute the Unadjusted Function Points (UFP).
3. Find the Total Degree of Influence (TDI).
4. Compute the Value Adjustment Factor (VAF).
5. Find the Function Point Count (FPC).
These steps are explained below.
Count the number of functions of each proposed type: Find the number of functions belonging to the following types:
External Inputs: Functions related to data entering the system.
External Outputs: Functions related to data exiting the system.
External Inquiries: Functions that retrieve data from the system without changing it.
Internal Files: Logical files maintained within the system (log files are not included here).
External Interface Files: Logical files of other applications that are used by our system.
Compute the Unadjusted Function Points (UFP): Categorize each of the five function types as simple, average, or complex based on its complexity, multiply the count of each function type by its weighting factor, and find the weighted sum. The commonly used weighting factors for each type, based on complexity, are as follows:
Function type              Simple  Average  Complex
External Inputs            3       4        6
External Outputs           4       5        7
External Inquiries         3       4        6
Internal Files             7       10       15
External Interface Files   5       7        10
Find the Total Degree of Influence (TDI): Rate each of the 14 general characteristics of the system on a scale of 0-5 to find its degree of influence. The sum of all 14 degrees of influence gives the TDI, so the range of TDI is 0 to 70. The 14 general characteristics are: Data Communications, Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate, On-Line Data Entry, End-User Efficiency, On-Line Update, Complex Processing, Reusability, Installation Ease, Operational Ease, Multiple Sites, and Facilitate Change.
Compute the Value Adjustment Factor (VAF): VAF = (TDI * 0.01) + 0.65
Find the Function Point Count (FPC): FPC = UFP * VAF (see the sketch after the advantages and disadvantages below).
Advantages:
It can easily be used in the early stages of project planning.
It is independent of the programming language.
It can be used to compare projects even if they use different technologies (database, language, etc.).
Disadvantages:
It is not well suited to real-time systems and embedded systems.
Many cost estimation models such as COCOMO use LOC, so the FPC must be converted to LOC.
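To tie the steps above together, the following sketch computes UFP, TDI, VAF, and FPC. The weighting factors are the commonly published simple/average/complex values from the table above, while the function counts and the 14 influence ratings are invented inputs for illustration.

```python
# Sketch of the function point calculation described above.
# Weights follow the commonly published simple/average/complex values;
# the counts and the 14 degree-of-influence ratings are invented.
WEIGHTS = {
    "external_input":     {"simple": 3, "average": 4,  "complex": 6},
    "external_output":    {"simple": 4, "average": 5,  "complex": 7},
    "external_inquiry":   {"simple": 3, "average": 4,  "complex": 6},
    "internal_file":      {"simple": 7, "average": 10, "complex": 15},
    "external_interface": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    """counts: {function_type: {complexity: number_of_functions}}"""
    return sum(WEIGHTS[ftype][cplx] * n
               for ftype, by_cplx in counts.items()
               for cplx, n in by_cplx.items())

def function_point_count(counts, influence_ratings):
    ufp = unadjusted_fp(counts)
    tdi = sum(influence_ratings)      # 14 ratings, each on a 0-5 scale
    vaf = (tdi * 0.01) + 0.65         # value adjustment factor
    return ufp * vaf

# Invented example inputs.
counts = {
    "external_input":     {"simple": 6, "average": 2, "complex": 1},
    "external_output":    {"average": 4},
    "external_inquiry":   {"simple": 3},
    "internal_file":      {"average": 2},
    "external_interface": {"simple": 1},
}
ratings = [3, 2, 4, 1, 3, 2, 3, 1, 2, 0, 2, 3, 1, 2]  # 14 characteristics
print(f"FPC = {function_point_count(counts, ratings):.1f}")
```

Changing any rating or count and re-running shows how strongly the VAF (which ranges from 0.65 to 1.35) can scale the unadjusted count.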
Project Cost Estimation Approaches: Overview of Heuristic, Analytical, and Empirical Estimation. Cost estimation is the process of producing cost estimates, i.e., the financial spend required to develop and test the software. Cost estimation models are mathematical algorithms or parametric equations used to estimate the cost of a product or project. Several techniques, also known as cost estimation models, are available, as described below:
Empirical Estimation Technique – Empirical estimation is a technique or model in which empirically derived formulas are used to predict the data required in the software project planning step. These techniques are based on data collected from previous projects, as well as on educated guesses, prior experience with the development of similar projects, and assumptions. The size of the software is used to estimate the effort. Because an educated guess of the project parameters is made, these models are based on common sense; however, since many activities are involved, the technique is formalized. Examples are the Delphi technique and the Expert Judgement technique.
Heuristic Technique – The word heuristic is derived from a Greek word meaning "to discover". A heuristic technique is a practical method for problem solving, learning, or discovery aimed at achieving immediate goals. These techniques are flexible and simple, allowing quick decisions through shortcuts and good-enough calculations, especially when working with complex data, but the decisions made this way are not necessarily optimal. In this technique, the relationship among different project parameters is expressed using mathematical equations. The most popular heuristic technique is the Constructive Cost Model (COCOMO). Heuristics are also used to speed up analysis and investment decisions.
Analytical Estimation Technique – Analytical estimation is a technique used to measure work. First, the task is divided into its basic component operations or elements for analysis. Second, if standard times are available from some other source, they are applied to each element of the work. Third, if no such times are available, the work is estimated based on experience. Results are derived by making certain basic assumptions about the project, so the analytical estimation technique has a scientific basis. Halstead's software science is based on an analytical estimation model.
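As a rough illustration of the analytical flavour, the sketch below computes Halstead's basic software-science measures (vocabulary, length, volume, difficulty, effort) from operator and operand counts; the counts used here are invented, and this is only a sketch of the standard definitions, not a full estimation procedure.

```python
# Sketch of Halstead's basic software-science measures.
# n1/n2 are the distinct operators/operands in the program,
# N1/N2 are their total occurrences; the numbers below are invented.
import math

def halstead(n1, n2, N1, N2):
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    effort = difficulty * volume
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

print(halstead(n1=12, n2=7, N1=27, N2=15))
```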
COCOMO (Constructive Cost Model) and COCOMO II: Boehm proposed COCOMO (Constructive Cost Model) in 1981. COCOMO is one of the most widely used software estimation models in the world. It predicts the effort and schedule of a software product based on the size of the software.
The necessary steps in this model are:
1. Get an initial estimate of the development effort from an evaluation of thousands of delivered lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate by all the multiplying factors, i.e., multiply the values from step 1 and step 2.
The initial estimate (also called the nominal estimate) is determined by an equation of the form used in static single-variable models, with KDLOC as the measure of size. The initial effort Ei in person-months is given by
Ei = a * (KDLOC)^b
where the values of the constants a and b depend on the project type (see the sketch below).
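A minimal sketch of these three steps, assuming illustrative values for the constants a and b and only three sample multiplying factors standing in for the full set of 15:

```python
# Sketch of the three steps above: nominal effort from size, then
# adjustment by the multiplying factors. The constants and the sample
# multipliers below are illustrative, not the official COCOMO tables.
from math import prod

def nominal_effort(kdloc, a, b):
    # Ei = a * (KDLOC)^b, in person-months
    return a * (kdloc ** b)

def adjusted_effort(kdloc, a, b, multipliers):
    # Step 3: multiply the nominal estimate by all the factors.
    return nominal_effort(kdloc, a, b) * prod(multipliers)

# Example: 32 KDLOC, organic-style constants, three sample cost drivers.
print(adjusted_effort(32, a=2.4, b=1.05, multipliers=[1.15, 0.91, 1.08]))
```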
In COCOMO, projects are categorized into three types: Organic, Semidetached, and Embedded.
Organic: A software project is said to be of organic type if:
The project is small and simple.
The project team is small and has prior experience.
The problem is well understood and has been solved in the past.
The requirements of the project are not rigid.
An example of this mode is a payroll processing system.
Semi-Detached Mode: A software project is said to be of semi-detached type if:
The project has some complexity.
The project team requires more experience, better guidance, and creativity.
The project has an intermediate size and mixed, partly rigid requirements; an example of this mode is a transaction processing system with fixed requirements.
This mode also combines elements of the organic and embedded modes. Examples of such projects are a database management system (DBMS), a new, unknown operating system, or a difficult inventory management system.
Embedded Mode: A software project is said to be of embedded type if:
The project has fixed requirements for resources.
The product is developed within very tight constraints.
The project requires the highest level of complexity, creativity, and experience.
Software in this mode requires a larger team size than the other two modes.
Types of COCOMO Models: COCOMO consists of a hierarchy of three increasingly detailed and accurate forms, and any of the three can be adopted according to our requirements. The three types of COCOMO model are:
Basic COCOMO Model
Intermediate COCOMO Model
Detailed COCOMO Model
Basic COCOMO Model: The first level, Basic COCOMO, can be used for quick and slightly rough calculations of software cost. This is because the model considers only the number of lines of source code, together with constant values determined by the software project type, rather than the other factors that have a major influence on the software development process as a whole. The effort is calculated for the three modes of development: organic, semi-detached, and embedded. The Basic COCOMO estimation model is given by the following expressions:
E = a * (KLOC)^b
D = c * (E)^d
P = E / D
where E is the effort applied in person-months, D is the development time in months, and P is the total number of persons required to accomplish the project. The values of the constants a, b, c, and d for the Basic Model for the different categories of system are:
Software Project   a     b      c     d
Organic            2.4   1.05   2.5   0.38
Semi-Detached      3.0   1.12   2.5   0.35
Embedded           3.6   1.20   2.5   0.32
Example – Consider a software project in semi-detached mode with 300 KLOC. Find the effort, the development time, and the number of persons required.
Solution –
Effort (E) = a * (KLOC)^b = 3.0 * (300)^1.12 = 1784.42 person-months (PM)
Development Time (D) = c * (E)^d = 2.5 * (1784.42)^0.35 = 34.35 months
Persons Required (P) = E / D = 1784.42 / 34.35 = 51.9481 ≈ 52 persons
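The same calculation can be reproduced with a short script using the constant table given above; the mode name strings are just labels chosen for this sketch.

```python
# Basic COCOMO, using the constant table above, reproducing the
# semi-detached / 300 KLOC example.
BASIC_COCOMO = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = BASIC_COCOMO[mode]
    effort = a * (kloc ** b)      # person-months
    time = c * (effort ** d)      # months
    persons = effort / time
    return effort, time, persons

e, t, p = basic_cocomo(300, "semi-detached")
print(f"Effort = {e:.2f} PM, Time = {t:.2f} months, Persons = {p:.0f}")
```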
COCOMO II: COCOMO II is the revised version of the original COCOMO (Constructive Cost Model) and was developed at the University of Southern California. It is a model that allows one to estimate the cost, effort, and schedule when planning a new software development activity.
It consists of three sub-models:
1. End User Programming: Application generators are used in this sub-model; end users write the code using these application generators. Examples: spreadsheets, report generators, etc.
2. Intermediate Sector:
(a) Application Generators and Composition Aids – This category creates largely prepackaged capabilities for user programming. Its products have many reusable components. Typical firms operating in this sector are Microsoft, Lotus, Oracle, IBM, Borland, and Novell.
(b) Application Composition Sector – This category is too diversified to be handled by prepackaged solutions. It includes GUIs, databases, and domain-specific components such as financial, medical, or industrial process control packages.
(c) System Integration – This category deals with large-scale and highly embedded systems.
3. Infrastructure Sector: This category provides the infrastructure for software development, such as the operating system, database management system, user interface management system, networking system, etc.
Stages of COCOMO II:
Stage-I: It supports estimation for prototyping, using the Application Composition Estimation Model. This model is used for the prototyping stage of application generators and system integration.
Stage-II: It supports estimation in the early design stage of the project, when little is known about it, using the Early Design Estimation Model. This model is used in the early design stage of application generators, infrastructure, and system integration.
Stage-III: It supports estimation in the post-architecture stage of a project, using the Post-Architecture Estimation Model. This model is used after the detailed architecture of the application generators, infrastructure, and system integration has been completed.