
ABC Of Estimation

1 WHAT IS ESTIMATION

What do we mean by the term Estimation? How do we measure software? Well, we measure software in terms of its SIZE, which can be expressed in LOC, FP, etc.
Sizing is the prediction of the product deliverables needed to fulfill the project requirements.
Estimation is simply the prediction of the effort (resources) needed to produce those deliverables.
Estimation is not 100% accurate at the early stages of the SDLC; as we move ahead, our estimates become more accurate. We will see more of this in the coming sections.
Below is a BIG picture of Software Estimation:
Scope Defined -> WBS Created (all dependencies and tasks identified) -> Estimate the Software Size -> Estimate the Effort (Cost) and Duration -> Assign Resources -> Schedule the Work.

1.1 ESTIMATION MODELS


In the sections below we will look at some widely used estimation techniques. In this article we will not go into much detail on each technique, but will give you a high-level view of each one. My next set of articles will cover each technique in more detail, so keep checking my site.

1.1.1 LOC – Line of Code

Line of Code (LOC) is a measure of the SIZE of the software. Once we calculate the software size in LOC, we can determine the effort, cost and schedule. We can estimate the LOC using:
• Expert Opinions and Bottom Up Summations
• By Analogy
We will see the details of the above in the next article.
In brief, consider an example where we want to estimate the size of software to be developed in Java. The organization has developed similar software in the past. Based on that past data, they can easily determine the size of the new software in terms of LOC. They might use the PERT model to get Pessimistic, Optimistic and Most Probable estimates (in LOC) from experts and then produce the final estimate.
Let's consider a Pessimistic estimate of 400 LOC, an Optimistic estimate of 200 LOC and a Most Probable estimate of 250 LOC; using PERT we derive the final estimate as
PERT = (O + 4M + P)/6 = (200 + 4×250 + 400)/6 ≈ 267 LOC
If you have a productivity figure for Java in terms of LOC, say one staff member can write 1000 LOC in a month, then you can estimate the duration. That said, productivity measured in LOC is debatable and may not be the right way of determining productivity.
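The calculation above can be sketched in a few lines of Python. The productivity figure here (1000 LOC per staff-month) is the illustrative number from the text, not a real benchmark:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted PERT mean: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Expert three-point LOC estimates from the example above.
size_loc = pert_estimate(200, 250, 400)

# Assumed productivity figure for Java: 1000 LOC per staff-month.
productivity_loc_per_staff_month = 1000
duration_staff_months = size_loc / productivity_loc_per_staff_month
print(f"Size: {size_loc:.0f} LOC, Effort: {duration_staff_months:.2f} staff-months")
```

Note that the raw mean is 266.67 LOC; whether you round it up or down matters little at this level of uncertainty.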
Note: The number of thousands of source lines of code (KSLOC) delivered is a common metric, carried through the estimation of productivity, which is usually expressed as KSLOC/SM or KLOC/SM (where SM = Staff Month).
LOC is a universal metric because all software products ultimately consist of lines of code.
Many organizations measure quality as No. of Defects / No. of Lines of Code, which again is not the correct way of determining quality. What matters is the quality of the code, not its volume.

1.1.2 Function Point (FP)

The function point method is based on the idea that software size is better measured by the number and complexity of the functions it performs than by the number of lines of code that implement it.
FP measures categories of end-user business functions. It is a more methodical approach than LOC counting. A straightforward analogy is that of a physical house to software: the number of square feet is to the house as LOC is to software; the number of bedrooms and bathrooms is to the house as function points are to software. The former looks only at size; the latter looks at size and function.
Here is a quick overview of Function Point process:
1. Count the functions in each category (the categories are: Outputs, Inputs, Inquiries, Data Structures and Interfaces).
2. Classify the complexity of each function as Simple, Medium or Complex.
3. Establish weights for each complexity.
4. Multiply each function by its weight and then sum up to get total function points.
5. Convert FP to LOC using the formula:
LOC = FP × ADJ × Conversion Factor,
where ADJ is an adjustment for the general characteristics of the application.
The Conversion Factor, based on historical data for the application and programming language, represents the average number of lines of code to implement a simple function.
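The five steps above can be sketched as a small tally. The weights, counts, ADJ and conversion factor below are made-up illustrations, not the official IFPUG weight tables or calibrated historical data:

```python
# Illustrative complexity weights (assumed, not the official IFPUG tables).
WEIGHTS = {"simple": 3, "medium": 4, "complex": 6}

# Hypothetical count: (category, complexity, number of functions).
counts = [
    ("inputs",    "simple",  10),
    ("outputs",   "medium",   5),
    ("inquiries", "complex",  2),
]

# Steps 1-4: multiply each function by its weight and sum.
total_fp = sum(WEIGHTS[complexity] * n for _, complexity, n in counts)

# Step 5: LOC = FP x ADJ x Conversion Factor.
adj = 1.0                # general-characteristics adjustment (assumed neutral)
conversion_factor = 50   # assumed average LOC per function point for the language
loc = total_fp * adj * conversion_factor
print(f"{total_fp} FP -> {loc:.0f} LOC")
```

In practice the conversion factor comes from your organization's historical data for the given language, as the text notes.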
There is more to Function point which will follow soon on my next article dedicated to Function Point only.

1.1.3 Three Point Estimate or PERT Model

A Three Point Estimate considers three estimates: OPTIMISTIC (O), MOST PROBABLE (M) and PESSIMISTIC (P). We normally ask the relevant developers or analysts to provide their estimates as three-point estimates; once they do, we calculate the final estimate using PERT. The formulas are:

PERT = (O + 4M + P)/6, SD = (P − O)/6

Let's say we are estimating the size of the software in LOC and ask the experts for their opinions. We get the values below:

O = 200
P = 400
M = 250

PERT = (O + 4M + P)/6 = (200 + (4 × 250) + 400)/6 ≈ 267 LOC
SD = (P − O)/6 = (400 − 200)/6 ≈ 33

The final size estimate would be (267 ± 33) LOC = between 234 and 300 LOC with 68% (1σ) confidence.
With 2σ (95%), 267 ± 67 = between 200 and 334 LOC.
With 3σ (99.7%), 267 ± 100 = between 167 and 367 LOC.
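The three-point calculation and its confidence bands can be sketched as follows; the sigma-to-confidence mapping assumes the estimate is roughly normally distributed:

```python
def three_point(o, m, p):
    """Return (PERT weighted mean, standard deviation) for a three-point estimate."""
    mean = (o + 4 * m + p) / 6
    sd = (p - o) / 6
    return mean, sd

mean, sd = three_point(o=200, m=250, p=400)

# Approximate normal confidence bands at 1, 2 and 3 sigma.
for k, conf in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    print(f"{conf}: {mean - k * sd:.0f} to {mean + k * sd:.0f} LOC")
```

The wider the gap between P and O, the larger the SD, so wide expert disagreement shows up directly as a wide confidence band.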

1.1.4 Feature Points

Feature Points are an extension of the function point method designed to deal with different kinds of applications, such as embedded and/or real-time systems. Feature points are essentially function points that are sensitive to high algorithmic complexity, where an algorithm is a bounded set of rules (executable statements) required to solve a computational problem.
In the Feature Point technique, apart from counting Outputs, Inputs, Inquiries, Data Structures and Interfaces, we also count the number of Algorithms and assign a weight according to each algorithm's complexity.
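In other words, the feature-point total is the ordinary function-point total plus a weighted algorithm count. The numbers below are illustrative assumptions, not a real count:

```python
# Hypothetical inputs: an existing function-point total and an algorithm count.
function_point_total = 62   # assumed result of a prior function-point count
algorithm_count = 4         # bounded rule sets identified in the design
algorithm_weight = 3        # assumed average algorithmic-complexity weight

feature_points = function_point_total + algorithm_count * algorithm_weight
print(f"Feature points: {feature_points}")
```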

1.1.5 Wideband Delphi

This is a disciplined method of using the experience of several people to reach an estimate that incorporates all of their knowledge.
The “pure” approach (pure Delphi) is to collect expert opinions in isolation, feed back anonymous summary results, and iterate until consensus is reached, without any group discussion. Because the pure Delphi approach can take a very long time, Wideband Delphi was introduced to speed up the process. This improved approach adds group discussion.
The steps in conducting Wideband Delphi are:
1. Distribute problem statement and a response form to all the experts.
2. Conduct a Group discussion.
3. Collect expert opinion anonymously.
4. Feed back a summary of results to each expert.
5. Conduct another group discussion.
6. Iterate as necessary until consensus is reached.
Group discussions are the primary difference between pure Delphi and Wideband Delphi.
This process may use the PERT calculation to arrive at the final estimate; it mainly depends on how the organization sets up its Wideband Delphi process.
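The iterate-until-consensus loop can be sketched as a toy simulation. The per-round estimates and the 10% convergence tolerance below are hypothetical, not data from a real session:

```python
# Toy Wideband Delphi loop: each round collects anonymous estimates and stops
# when the spread (max - min, relative to the mean) falls within a tolerance.
rounds = [
    [200, 400, 250, 600],   # round 1: wide disagreement
    [250, 320, 280, 350],   # round 2: after group discussion
    [280, 300, 290, 295],   # round 3: near consensus
]

TOLERANCE = 0.10  # assumed consensus threshold: spread within 10% of the mean

for round_no, estimates in enumerate(rounds, start=1):
    mean = sum(estimates) / len(estimates)
    spread = (max(estimates) - min(estimates)) / mean
    if spread <= TOLERANCE:
        print(f"Consensus in round {round_no}: {mean:.0f} LOC")
        break
```

A real session would of course gather fresh estimates after each discussion rather than iterate over a fixed list, but the stopping criterion is the same idea.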

1.2 ESTIMATION RISK

1.2.1 Risk Associated with Estimation

1. Customer dissatisfaction with an inaccurate estimate.
2. Loss of money on a fixed-price contract due to an over-optimistic estimate.
Some of the Problems with Estimating:
1. Missing Facts while doing estimation.
2. No or Little Historical data upon which to base future estimates.
3. No Standard Estimating Process within an Organization.
4. Stakeholder misconception about estimating.
5. The requirements are not clear, or there is insufficient visibility into other parts of the system.
Inaccurate estimates force adjustments to the schedule; squeezing the work into a shorter time frame almost always results in the introduction of defects.

1.2.2 How to Mitigate Estimation Risk

1. Decompose the WBS to the lowest level possible; smaller components are easier to estimate.
2. Review assumptions with all the stakeholders, including operations, maintenance and support departments.
3. If historical data is not available, collect anecdotal evidence.
4. Update estimates at frequent intervals; estimation accuracy improves over the life cycle.
5. Educate developers in estimation methods.
6. Use multiple size estimating methods to increase confidence.

2 REFERENCES

• Quality Software Project Management by Robert T. Futrell, Donald F. Shafer and Linda I. Shafer.
• Wikipedia

