Tuesday, January 30, 2007

CMMI and ISO

ISO is a standard, and companies have to comply with standards. ISO 9001 has eight principal clauses.

CMMI is a process meta-model. The process model is a guideline for organizations to improve their processes. CMMI is a collection of best practices from the s/w industry; it tells you WHAT to do, while the HOW is left to the choice of the organization implementing it.

CMMI has 25 process areas (including IPPD, SS) as per v1.1

Common metrics for maintenance projects (a rough sketch of how a few of these are computed follows the list):

1. Client satisfaction index
2. Effort metrics
-Overall effort variance
-COQ
-COPQ
-Rework effort index
3. Schedule metrics
-Overall schedule variance
-On-time delivery index
4. Quality metrics
-Overall defect density
-Residual defect density
5. Performance metrics
-PCI of last audit
6. Support metrics
-FTR
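
As an illustration, here is a minimal sketch of how a few of these metrics are commonly computed; the formulas are standard industry definitions and the figures are made up, not drawn from any real project.

# Sketch: a few common maintenance-project metrics (illustrative formulas, hypothetical data)

planned_effort, actual_effort = 320.0, 352.0       # person-hours
planned_duration, actual_duration = 20.0, 23.0     # days
defects_found, defects_post_release = 18, 2
size_fp = 90.0                                     # size in function points

effort_variance = (actual_effort - planned_effort) / planned_effort * 100
schedule_variance = (actual_duration - planned_duration) / planned_duration * 100
overall_defect_density = defects_found / size_fp          # all defects per FP
residual_defect_density = defects_post_release / size_fp  # defects that escaped to the customer

print(f"Effort variance: {effort_variance:.1f}%")
print(f"Schedule variance: {schedule_variance:.1f}%")
print(f"Overall defect density: {overall_defect_density:.2f} defects/FP")
print(f"Residual defect density: {residual_defect_density:.2f} defects/FP")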

Bi-directional Traceability

Traceability is of two kinds:

1. Vertical Traceability
2. Horizontal Traceability

Bidirectional traceability is tracing requirements both forward and backward (horizontally), and both top to bottom and bottom to top (vertically), through all the phases of the software development lifecycle.

Vertical traceability is tracing customer requirements from the requirements phase thru the System Test / UAT phase, and from System Test / UAT back to the customer requirements. These traceabilities can be established if the requirements are well managed. Vertical traceability helps determine that all source requirements have been completely addressed and that all lower-level requirements and selected work products can be traced to a valid source.

Forward traceability ensures that all requirements have been captured, while backward traceability ensures that no additional functionality (gold plating) has been introduced into the system.

If a requirement is added or withdrawn, the development team must be aware of the design elements / development modules / test cases, etc. where the impact would be felt and changes need to be carried out.

How does bidirectional traceability help?

  1. To measure the impact of requirement changes. One requirement may spawn multiple design elements. In this case, each such design element should be traceable backwards to the same requirement.
  2. To know that all requirements have been implemented in the system. (Forward traceability ensures this).
  3. To ensure that no unwanted functionality has been incorporated into the system. (Backward traceability ensures this).

Vertical Traceability: Within a work product (BRS, SRS, HLD, LLD, Test Plan, etc.), the inter-relationships between various requirements are called vertical traceability. Each requirement is related to other requirements by virtue of functionality. Vertical traceability ensures that all requirements have been captured, that gaps in functionality have been identified, and that there is no duplication of requirements.
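
To make the idea concrete, here is a minimal sketch of a bidirectional traceability matrix as a data structure; the requirement and test-case IDs are invented, and a real RTM would normally live in a spreadsheet or requirements tool.

# Sketch: bidirectional traceability between requirements and downstream work products
forward = {                       # requirement -> design elements / test cases
    "REQ-001": ["DES-01", "TC-001", "TC-002"],
    "REQ-002": ["DES-02", "TC-003"],
}

# The backward view is derived by inverting the forward map
backward = {}
for req, items in forward.items():
    for item in items:
        backward.setdefault(item, []).append(req)

# Forward check: every requirement has at least one downstream work product
unimplemented = [req for req, items in forward.items() if not items]

# Backward check: every test case traces back to some requirement (no gold plating)
all_test_cases = ["TC-001", "TC-002", "TC-003", "TC-004"]
untraced = [tc for tc in all_test_cases if tc not in backward]

print("Requirements with no downstream items:", unimplemented)
print("Test cases with no source requirement:", untraced)   # TC-004 gets flagged here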

Monday, January 29, 2007

The Mythical Man-Month

Frederick Brooks, in his 1975 book The Mythical Man-Month, contends that adding more people to a behind-schedule project does not actually speed it up. On the contrary, ramping up resources adds to the communication complexity (an overhead, in fact) within the development group. Inducting new people also requires a certain amount of time and resources for orientation before they can be deployed for production, and diverting time and resources from the project towards training and orientation results in further delay.

Saturday, January 27, 2007

Function Points

Function points are a measure of the size of computer applications and the projects that build them. The size is measured from a functional, or user, point of view. It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application.

In the late seventies, IBM felt the need to develop a language independent approach to estimating software development effort. It tasked one of its employees, Allan Albrecht, with developing this approach. The result was the function point technique.

In the early eighties, the function point technique was refined and a counting manual was produced by IBM's GUIDE organization. The International Function Point Users Group (IFPUG) was founded in the late eighties. This organization produced its own Counting Practices Manual. In 1994, IFPUG produced Release 4.0 of its Counting Practices Manual. While the GUIDE publication and each release of the IFPUG publications contained refinements to the technique originally presented by Albrecht, they always claimed to be consistent with his original thinking. In truth, it is still very close considering the nearly two decades that have elapsed since Albrecht's original publication!

During the eighties and nineties, several people have suggested function point counting techniques intended to substantially extend or completely replace the work done by Albrecht. Some of these will be briefly discussed in this FAQ. However, unless otherwise specified, information in this FAQ is intended to be consistent with IFPUG Release 4.0.
Function points measure size. The fact that Albrecht originally used them to predict effort is simply a consequence of size usually being the primary driver of development effort.

It is important to stress what function points do NOT measure. Function points are not a perfect measure of the effort to develop an application or of its business value, although the size in function points is typically an important factor in measuring each. This is often illustrated with an analogy to the building trades. A three thousand square foot house is usually less expensive to build than one that is six thousand square feet. However, attributes like marble bathrooms and tile floors might actually make the smaller house more expensive. Other factors, like location and number of bedrooms, might also make the smaller house more valuable as a residence.
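
As a rough illustration of how a count rolls up, here is a sketch of an unadjusted function point calculation using the standard IFPUG complexity weights; the component counts and GSC ratings below are entirely made up.

# Sketch: unadjusted function point count using the standard IFPUG complexity weights
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},    # external inputs
    "EO":  {"low": 4, "avg": 5,  "high": 7},    # external outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},    # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},   # internal logical files
    "EIF": {"low": 5, "avg": 7,  "high": 10},   # external interface files
}

counts = {                                       # hypothetical counts for a small application
    "EI":  {"low": 6, "avg": 4, "high": 2},
    "EO":  {"low": 3, "avg": 5, "high": 1},
    "EQ":  {"low": 4, "avg": 2, "high": 0},
    "ILF": {"low": 2, "avg": 1, "high": 0},
    "EIF": {"low": 1, "avg": 0, "high": 0},
}

ufp = sum(WEIGHTS[t][cx] * n for t, by_cx in counts.items() for cx, n in by_cx.items())

# Value adjustment factor from the 14 general system characteristics, each rated 0-5
gsc_ratings = [3] * 14                           # assume "average influence" throughout
vaf = 0.65 + 0.01 * sum(gsc_ratings)

print(f"Unadjusted FP: {ufp}, Adjusted FP: {ufp * vaf:.1f}")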

Read more about Function Points here:
http://ourworld.compuserve.com/homepages/softcomp/fpfaq.htm

PCMM

People Capability Maturity Model® Framework

The People Capability Maturity Model ® (People CMM ® ) is a tool that helps you successfully address the critical people issues in your organization. The People CMM employs the process maturity framework of the highly successful Capability Maturity Model ® for Software (SW-CMM ® ) [Paulk 95] as a foundation for a model of best practices for managing and developing an organization's workforce. The Software CMM has been used by software organizations around the world for guiding dramatic improvements in their ability to improve productivity and quality, reduce costs and time to market, and increase customer satisfaction. Based on the best current practices in fields such as human resources, knowledge management, and organizational development, the People CMM guides organizations in improving their processes for managing and developing their workforce. The People CMM helps organizations characterize the maturity of their workforce practices, establish a program of continuous workforce development, set priorities for improvement actions, integrate workforce development with process improvement, and establish a culture of excellence. Since its release in 1995, thousands of copies of the People CMM have been distributed, and it is used worldwide by organizations, small and large, such as IBM, Boeing, BAE Systems, Tata Consultancy Services, Ericsson, Lockheed Martin and QAI (India) Ltd.

The People CMM consists of five maturity levels that establish successive foundations for continuously improving individual competencies, developing effective teams, motivating improved performance, and shaping the workforce the organization needs to accomplish its future business plans. Each maturity level is a well-defined evolutionary plateau that institutionalizes new capabilities for developing the organization's workforce. By following the maturity framework, an organization can avoid introducing workforce practices that its employees are unprepared to implement effectively.
Courtesy: SEI CMU

Cost of Quality

Cost of Quality (COQ) includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to determine the current COQ, to identify opportunities for reducing it, and to provide a basis for comparison.

Types of COQ

Quality costs may be divided into the following (a small sketch of the breakdown follows the list):

1. Preventive Cost
2. Appraisal Cost
3. Failure Cost
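
As a minimal illustration, assuming the common convention that COQ is the sum of prevention, appraisal, and failure costs while COPQ covers the failure costs alone, a breakdown might be computed like this (the effort figures are invented):

# Sketch: cost-of-quality breakdown (hypothetical effort figures, in person-hours)
prevention = 40.0     # training, process definition, defect prevention activities
appraisal  = 120.0    # reviews, inspections, testing, audits
failure    = 80.0     # rework, defect fixes, re-testing (internal and external failures)
total_project_effort = 1000.0

coq  = prevention + appraisal + failure    # cost of quality
copq = failure                             # cost of poor quality

print(f"COQ:  {coq / total_project_effort:.1%} of total effort")
print(f"COPQ: {copq / total_project_effort:.1%} of total effort")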

QA Vs. QC

Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of, and adherence to, software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition lifecycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done in relation to the applicable standards.

Quality assurance consists of the auditing and reporting functions of management. The aim of quality assurance is to provide management with the data necessary to be informed about whether product quality is meeting its goals.

How is it different from Quality Control?

Quality control is the process of variation control. Quality control is the series of inspections, reviews, and tests carried out throughout the development lifecycle to ensure that each work product meets the requirements placed upon it.

Quality control activities may be fully automated, entirely manual, or a combination of automated tools and human interaction.

Friday, January 26, 2007

PPQA Vs. VER

PPQA provides staff and management with objective insight into the performance and compliance of the defined process. E.g. Audits look into process adherence and process performance.

Verification ensures that the selected work products meet their specified requirements. E.g. Peer review of SRS ensures that all the details captured thru BRS are appropriately addressed in the requirement specs.

These two process areas address the same work product from different perspectives - PPQA from process perspective, and VER from the technical aspect.

Typical Audit Process

  1. Prepare and publish the yearly audit calendar (in consultation with project teams).
  2. Publish quarterly audit schedule.
  3. Issue audit notification to all relevant stakeholders and the final schedule (1 week prior to audits).
  4. Review previous findings and check the project artifacts before the actual audit. Skim thru the VSS folders of each project looking out for discrepancies and note them down (note down variations in naming conventions as well).
  5. Take printouts of audit checklist. (Always better to have a checklist for audit rather than a random non-focused check.)
  6. Actual Audit

a. Check closure of previous NCs.
b. CM audit check (change requests, CIs, ...)
c. Internal audit report check
d. Check against the organization process

  7. Get the affirmation of auditees on the noted NCs.
  8. Send the filled-in checklist to the auditees, giving them scope to come back with their objections, if any.
  9. Prepare the audit report. Do a self review. Check for Process Area mappings.
  10. Get the audit report peer reviewed.
  11. Publish the audit report to project teams, delivery head, and sponsor.
  12. Prepare and conduct follow-up audits.
  13. Conduct a review with higher management once a month and report deviations in process.

Thursday, January 25, 2007

PA Categories


Project Indicators

An indicator is a metric or a combination of metrics that provides insight into the software process, a software project, or the product itself. The insight helps the project manager or software engineers adjust the process or the project and make it more efficient.

Project indicators enable a project manager to:
  • Assess the status of an ongoing project
  • Track potential risks
  • Uncover problem areas before they go critical
  • Adjust workflows or tasks
  • Evaluate the project team’s ability to control the quality of software work products

I made use of the following project indicators for my projects:


Process Institutionalization

Institutionalization is embedding a process within the organization as an established norm or practice. The process is ingrained in the way the work is performed and there is commitment and consistency to performing the process.

The degree of institutionalization is expressed by the names of the Generic Goals. The following table gives an overview of institutionalization:

For a Level 2 organization (staged representation), all projects have their own framework (derived from the org-level policy) for carrying out the processes, but each one differs from the others. For a Level 3 organization, however, the processes are tailored from the organization’s QMS as per the defined tailoring standards. Institutionalization of a process is accomplished at Level 3.
At Level 4, only special causes of variation are addressed, while at Level 5 even common causes of variation are addressed.

Requirements Management & Requirements Development

Requirements Management (RM) is at Level 2, while Requirements Development (RD) is at Level 3 in the staged representation. A frequently asked question is how one can manage requirements without first developing them. Logically, requirements are developed first (both implicit and explicit customer needs are elicited) and managed later on (thru horizontal and vertical RTMs).

However, from a staged representation perspective, an organization at Level 2 needs only to manage requirements. Different projects would take up this activity in their own way (no institutionalization). Also, customer needs are not elicited at this level. So requirements from customers pour in, and the organization merely manages them thru a bidirectional traceability matrix.

For an organization at Level 3, however, requirements ought to be developed first. Both implicit and explicit customer needs are elicited. The requirements are then managed thru a bidirectional traceability matrix. The requirements development and management activities at this level are uniform across all projects in the organization (institutionalization).

Interpretation of Level IV & V in CMMI V1.2

Level IV and V process areas remain the same in CMMI V1.2. The way these process areas should be interpreted, however, has changed.

OPP and QPM are at Maturity Level IV, while OID and CAR occupy Maturity Level V. At Maturity Level III, the focus is on institutionalization: for a company moving from Level II towards Level III, it simply means practicing the same processes across all the projects and bringing in uniformity. Causal analysis at this level is done to identify the causes of any repeating problem, and measures are taken to contain it; this is CAR at Level III.

Through OPF (at Level III), the company defines at a macro level what the EPG responsibilities are; through OPD, the company defines how its QMS is established and functions. Process performance is monitored using M&A. Note that there is no process stability at this stage. Projects are quantitatively managed when the processes driving them are stable.

At Level IV, the project’s quality and process performance objectives are defined, and only those processes are picked up which are stable (using past data… in our case it is the Process Capability Baseline sheet). Sub-processes are defined and monitored (depending on the organization’s and the project’s quality objectives), and variation is tracked using statistical techniques. The statistical tools help in identifying variation and in setting up corrective action.
Spikes (significant deviations from the norm) in the process performance are identified, and those sub-processes are picked up (for improvement) which are vital and contribute to process control and improvement. CAR is done to find out the root cause of the variation, and measures are taken to ensure that it does not recur. [This is Level IV CAR.]
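
A minimal sketch of the kind of statistical check involved, flagging observations that fall outside mean ± 3-sigma control limits as candidate special causes; the baseline figures and observations below are made up, and a real project would take its limits from the organization's process capability baselines.

# Sketch: flag special-cause variation in a sub-process metric against baselined control limits
# Limits derived from the (hypothetical) process capability baseline for review defect density
baseline_mean, baseline_sigma = 0.40, 0.05            # defects per page found in peer reviews
ucl = baseline_mean + 3 * baseline_sigma              # upper control limit
lcl = max(baseline_mean - 3 * baseline_sigma, 0.0)    # lower control limit

# Observed sub-process performance on the current project (made-up data)
observations = [0.42, 0.38, 0.45, 0.40, 0.39, 0.86, 0.41, 0.37]

spikes = [(i, x) for i, x in enumerate(observations) if x > ucl or x < lcl]

print(f"Control limits: [{lcl:.2f}, {ucl:.2f}]")
print("Points needing causal analysis (CAR):", spikes)   # the 0.86 reading is flagged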

Level V CAR is done to shift the mean. Shifting the mean is a process improvement activity. After shifting the mean, the sub-process performance is monitored for stability and is fine tuned.

Verification & Validation


SCAMPI - A

  1. Briefing by lead appraiser to appraisal team, sponsor, delivery head, and appraisal participants.
  2. Forming of mini teams based on PA (Project Management, Engineering, and Support Process Areas)
  3. Document reviews: review of the filled-in PIIDs (Practice Implementation Indicator Descriptions). Gather evidence of direct artifacts and indirect artifacts. Report strengths and weaknesses.
  4. Conduct individual and group interviews, note down affirmations, and tag the notes.
    - Individual interviews for Delivery Head, followed by Project Managers, OT Head, OID Head
    - Group Interviews for Project Teams
  5. Characterize Practice Implementation (Fully Implemented/Largely Implemented/Partially Implemented/Not Implemented/ Not Yet Implemented)
  6. Aggregate practice implementation to OU (Organization Unit) Level.
  7. Report preliminary findings to project teams (excluding sr. management) and address concerns/objections, if any.
  8. Rate Maturity Levels.
  9. Presentation of Final Findings (to all the members including sr. management)
  10. Executive Session
  11. Sign Appraisal Disclosure Statement (as per SCAMPI V1.2)
  12. Fill SCAMPI – A Feedback form in the SEI Appraisal System: http://sas.sei.cmu.edu/AppSys/ (a web application through which SEI sets up and tracks SCAMPI-A appraisals).

Software Quality Gap Analysis

Software Quality Gap Analysis is based on the customer's process improvement requirements mapped to quality models and the desired/targeted state. The customer may want to benchmark against a quality model, or may want to target a specific area like PPQA and restrict the analysis to that area alone. Gap analysis in such a case is performed against the best practices and industry standards in that area.

Gap analysis can be done against models like CMMI, ISO, COBIT, PCMM, ITIL, etc., apart from targeting specific process areas.

Sometimes an in-house quality gap analysis is also carried out. The SEPG studies the process improvement plans and comes up with changes (or, in a few cases, redefines the existing process) or deploys a new process. The feasibility is studied and plans are made for a pilot deployment. Measures are defined, and projects are selected on which the new process will be piloted. After the selected project teams are trained and the process is implemented, metrics are collected regularly to study the process performance. If it is along expected/projected lines, the process is institutionalized.
