Sunday, September 21, 2008

CobiT

Control Objectives for Information and Related Technology (CobiT) is an IT governance control framework that helps organisations meet business challenges in the areas of regulatory compliance, risk management and aligning IT strategy with organisational goals.

Structure of CobiT

CobiT recognises 34 IT processes that are grouped into four domains. The four domains are:

  1. Plan and Organise (PO)
  2. Acquire and Implement (AI)
  3. Deliver and Support (DS)
  4. Monitor and Evaluate (ME)

Each process has a numerical maturity level from 0 to 5, where 0 is non-existent and 5 is optimised. This scale can be used for a number of key evaluations, such as the level of maturity a process is currently at within your organisation, the level the process should be at, the level considered best practice, and the level the best of your competitors or other organisations have achieved.
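
As a simple illustration of such a gap analysis, the Python sketch below compares current, target, and best-practice maturity levels for a few processes. The ratings are invented for illustration; only the process names (PO1, AI6, DS5) come from CobiT itself.

    # Hypothetical CobiT maturity gap analysis.
    # Ratings on the 0-5 scale are invented for illustration.
    maturity = {
        # process: (current, target, best practice)
        "PO1 Define a strategic IT plan": (2, 4, 5),
        "AI6 Manage changes":             (3, 4, 4),
        "DS5 Ensure systems security":    (1, 3, 5),
    }

    for process, (current, target, best) in maturity.items():
        print(f"{process}: current={current}, target={target}, "
              f"best practice={best}, gap to target={target - current}")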

BUSINESS AND IT CONTROLS

The enterprise’s system of internal controls impacts IT at three levels:

• At the executive management level, business objectives are set, policies are established and decisions are made on how to deploy and manage the resources of the enterprise to execute the enterprise strategy. The overall approach to governance and control is established by the board and communicated throughout the enterprise. The IT control environment is directed by this top-level set of objectives and policies.
• At the business process level, controls are applied to specific business activities. Most business processes are automated and integrated with IT application systems, resulting in many of the controls at this level being automated as well. These controls are known as application controls. However, some controls within the business process remain as manual procedures, such as authorisation for transactions, separation of duties and manual reconciliations. Therefore, controls at the business process level are a combination of manual controls operated by the business and automated business and application controls. Both are the responsibility of the business to define and manage, although the application controls require the IT function to support their design and development.
• To support the business processes, IT provides IT services, usually in a shared service to many business processes, as many of the development and operational IT processes are provided to the whole enterprise, and much of the IT infrastructure is provided as a common service (e.g., networks, databases, operating systems and storage). The controls applied to all IT service activities are known as IT general controls. The reliable operation of these general controls is necessary for reliance to be placed on application controls. For example, poor change management could jeopardise (accidentally or deliberately) the reliability of automated integrity checks.

IT GENERAL CONTROLS AND APPLICATION CONTROLS

General controls are controls embedded in IT processes and services. Examples include:

• Systems development
• Change management
• Security
• Computer operations

Controls embedded in business process applications are commonly referred to as application controls; a rough code sketch follows the list below. Examples include:

• Completeness
• Accuracy
• Validity
• Authorisation
• Segregation of duties
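
As an illustration of how such automated application controls might look in code, here is a minimal Python sketch. The field names, roles, and rules are invented for illustration, not taken from CobiT.

    # Hypothetical application controls on a payment transaction.
    # Field names, roles, and rules are invented for illustration.
    AUTHORISED_ROLES = {"manager", "finance_officer"}

    def check_transaction(txn):
        """Return a list of control violations for a single transaction."""
        violations = []

        # Completeness: all mandatory fields must be present.
        for field in ("id", "amount", "approver_role", "entered_by", "approved_by"):
            if field not in txn:
                violations.append(f"missing field: {field}")

        # Accuracy/validity: the amount must be a positive number.
        amount = txn.get("amount")
        if not isinstance(amount, (int, float)) or amount <= 0:
            violations.append("invalid amount")

        # Authorisation: the approver must hold an authorised role.
        if txn.get("approver_role") not in AUTHORISED_ROLES:
            violations.append("approver not authorised")

        # Segregation of duties: the person entering the transaction
        # must not also approve it.
        if txn.get("entered_by") == txn.get("approved_by"):
            violations.append("segregation of duties violated")

        return violations

    print(check_transaction({"id": 1, "amount": 500.0, "approver_role": "manager",
                             "entered_by": "alice", "approved_by": "bob"}))   # []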

Ref: http://www.itgovernance.co.uk/cobit.aspx

Sunday, August 31, 2008

Project Estimations during proposal phase

During impact analysis, the application scope is determined: upstream and downstream applications are identified to establish which applications are in and out of scope. This has a significant bearing on integration and end-to-end testing.

1. Accurate estimates are a prerequisite to successful project management.
2. Accurate project estimates for a proposal ensure that:

  • Estimates are not so high that the proposal is rejected.
  • Effort and schedule estimates are not so low that they cause cost overruns and schedule delays in the project.

When is estimation done?

  • On receipt of an RFP from the client
  • Immediately on project initiation
  • At the beginning of each phase of the project (this varies with the type of project: development, maintenance, conversion)
  • Upon receiving CRs or requests for enhancements

Estimation @ RFP stage:

  • Based on data given in the RFP
  • Less data / knowledge available
  • Low level details are not available
  • May cover the following:
      - An estimate for each business function of the total requirements
      - Overheads for quality assurance and project management
      - Effort estimates for new technology, new tools, etc.
      - Contingency

What to Estimate?

Estimate the Size, which then leads to estimation of schedule and effort.

In what units is Size estimated?

Size of work product is estimated in:

  • KLOC
  • FP
  • Feature Points
  • Number of Requirements
  • Number of Components
  • Estimation based on Use Cases for Object Oriented Projects

How is Effort derived from Size?

Productivity figures are taken into consideration, and a model such as COCOMO is used. Overheads for other activities are then added (for example, user documentation, project management, quality assurance, and user training). Note that in all these cases, the complexity of the system to be designed and the technology used must also be taken into account.
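
A minimal Python sketch of this derivation, assuming an illustrative productivity figure, the basic COCOMO organic-mode formula (Effort = 2.4 × KLOC^1.05 person-months), and invented overhead percentages:

    # Sketch: deriving effort from size. The productivity figure and
    # overhead percentages are illustrative assumptions, not standards.

    def effort_from_productivity(size_kloc, kloc_per_person_month):
        """Base effort (person-months) from size and a productivity figure."""
        return size_kloc / kloc_per_person_month

    def effort_cocomo_organic(size_kloc):
        """Basic COCOMO, organic mode: Effort = 2.4 * KLOC**1.05 person-months."""
        return 2.4 * size_kloc ** 1.05

    def add_overheads(base_effort, overheads):
        """Add percentage overheads (documentation, PM, QA, training, ...)."""
        return base_effort * (1 + sum(overheads.values()))

    base = effort_cocomo_organic(32)        # ~91 person-months for 32 KLOC
    total = add_overheads(base, {"documentation": 0.05, "project_management": 0.10,
                                 "quality_assurance": 0.08, "user_training": 0.04})
    print(round(base, 1), round(total, 1))  # 91.3 116.0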

How is Schedule derived from Size?

T (in months) = y × √(Effort in person-years)

  • y is typically 2.5 to 3.0 for projects with effort ≤ 180 person-months
  • y is typically 2.0 to 2.5 for projects with effort > 180 person-months

While deriving the schedule, take into account leave, training, and contingency.
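
A small Python sketch of the schedule formula above, using the y values and the 180 person-month threshold quoted in the text (taking the midpoints of the y ranges is an assumption):

    import math

    def schedule_months(effort_person_months, y=None):
        """T (months) = y * sqrt(effort in person-years)."""
        if y is None:
            # y is ~2.5-3.0 up to 180 person-months and ~2.0-2.5 beyond;
            # the midpoints used here are an assumption.
            y = 2.75 if effort_person_months <= 180 else 2.25
        return y * math.sqrt(effort_person_months / 12)

    print(round(schedule_months(120), 1))   # ~8.7 months for 120 person-months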

What other things have to be estimated for?

  • Team Size, Critical Computer Resources, Quality Assurance

What factors affect estimations?

  • Programmer experience
  • Project documentation
  • Standards
  • Training Effort
  • Contingencies
  • Assumptions and Dependencies
  • Management time
  • Attrition, etc.

Other things to be factored in

  • Knowledge and experience of the estimator (skill level of the estimator)
  • Level of details available for estimates
  • Time spent on understanding requirements and estimating
  • Method used for estimation

Finally, what steps are involved in Estimation?

  1. Identify the complexity level of each work item, and obtain the effort-ratio and productivity figures for the various phases
  2. Using an appropriate estimation model, finalize the estimate for each phase of the project
  3. Document the methods and calculations used for estimation, the assumptions made, the constraints taken into account, and the loading plan
  4. Consider the experience of the team in the application area and with the programming language, the use of tools and software engineering practices, and the required schedule
  5. Monitor these estimates continuously to track any change in the assumptions, constraints, and loading plan

Elicit and Analyse Requirements

Understand and communicate requirements that align your IT priorities with your business needs.

No process is more fundamental than the process of defining and managing business and technical requirements. It's no surprise that studies cite inaccurate, incomplete, and mismanaged requirements as the primary reason for project failure.

The requirements-engineering process consists of two major domains: definition and management.

Best practices:

Elicit Requirements:

Define the vision and project scope.
Identify the appropriate stakeholders.
Select champions (Voice of the customer).
Choose elicitation techniques (workshops, questionnaires, surveys, individual interviews).
Explore user scenarios.

Analyse Requirements: verify they are complete and achievable

Create analysis models.
Build and evaluate prototypes.
Prioritize requirements.

Specify Requirements:

Look for ambiguities.
Store requirements in a database.
Trace requirements into design, code, and tests.

Validate Requirements:

Review the requirements through a formal peer review.
Create test cases from requirements.

Manage Requirements:

Manage versions.
Adopt a change control process.
Perform requirements change impact analysis.
Store requirements attributes.
Track the status of each requirement.

Applying requirements best practices leads to higher satisfaction for your customers.

Courtesy: http://software-quality.blogspot.com/ and "Achieve Useful Requirements" by Matt Klassens: http://www.ftponline.com/special/alm/mklassen/default.aspx

Tuesday, August 26, 2008

Implementing Configuration Management for Software Testing Projects

To analyze test process performance, testers typically review and analyze the test process artifacts produced and used during a project cycle. However, these testing artifacts, along with their related use cases, evolve during a project cycle and can frequently have multiple versions by project end. Hence, analysis of the process performance from different perspectives requires that testers know exactly which versions of artifacts they used for different tasks. For example, to analyze why the test effort estimates were not sufficiently accurate, testers need the initial versions of use cases, test analysis, and test design specifications they used as a basis for the effort estimation. In contrast, a causal analysis of software defects missed in testing requires testers to have the latest versions of use cases, test analysis, and test design specifications used in test execution.

Read rest of the article here: http://www.stsc.hill.af.mil/crosstalk/2005/07/0507Boycan.html
Courtesy: STSC (Software Technology Support Center)

Sunday, August 24, 2008

S/w program vs. industrial product

One advantage a software program has over an industrial product is that it can be reworked or reverted to a previous state if defects are found later. This is not the case with an industrial product: once a defect is injected, it stays and reduces the product's value.

Lehman's laws of software evolution

In 1980, Lehman and Belady came out with two laws explaining why software evolution could be the longest of the life-cycle processes. They can be summarised as follows:

  1. Law of continuing change: To continue being useful, a system must undergo changes.
  2. Law of increasing complexity: The program structure deteriorates as changes are introduced over time. Eventually, complexity rises to a point where it is more cost-effective to write a new program than to keep maintaining the old one.

Sunday, August 17, 2008

T&M and Fixed Bid - Testing Services

Even before companies raise an RFP for a project or outsourced work (and before initiating any software project), work estimates are prepared at a high level to understand the time it will take to complete the various phases. Typically, the effort distribution across the stages of the software development lifecycle is: design 15%, construction/coding 50%, testing 30%, and documentation 5%.

In the above generic estimate, unit testing is considered part of the development work. The figures tell us that if coding itself consumes 50% of the effort, the rest is the cumulative effort spent on design, testing, and documentation. Therefore, if we estimate 200 man-days of coding effort, the complete project will take roughly another 200 man-days.
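
A quick Python sketch of that arithmetic, using the generic distribution above:

    # Back-of-the-envelope phase breakdown from a coding estimate, using the
    # generic distribution quoted above (design 15%, coding 50%, testing 30%,
    # documentation 5%).
    PHASE_SHARE = {"design": 0.15, "coding": 0.50, "testing": 0.30, "documentation": 0.05}

    def breakdown_from_coding(coding_man_days):
        total = coding_man_days / PHASE_SHARE["coding"]
        return {phase: total * share for phase, share in PHASE_SHARE.items()}

    plan = breakdown_from_coding(200)
    print(plan)                # design 60, coding 200, testing 120, documentation 20
    print(sum(plan.values()))  # 400 man-days in total, i.e. another 200 beyond coding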

We came across a scenario where a client was trying to develop generic principles on T&M and fixed bid for testing services, so that they could be applied directly: say, unit testing done under fixed bid and stress/volume testing under T&M.

However, the problem here is that the client is not confident of the project estimate, so the sub-allocation of tasks within testing services will also change along with the project estimates: 30% of X keeps changing with the value of X. What the client does not realise is that certain factors are unique to each project, so there cannot be a rule of thumb of the kind the client expects. Depending on the time, money, resources, and the quality of work expected from the project (we shall desist from talking about "ALQ", accepted level of quality, as nothing is truly "acceptable"), we define which kinds of tests are critical for the project, given the shipping date. Usually, companies go for fixed bid when the budget is limited, and for T&M when the schedule is the priority or continuous support is needed. To summarise, these considerations are important when deciding the kind of bid a company should choose when offshoring work, especially testing services.

Tuesday, August 12, 2008

On SCM

We had an interesting discussion on SCM today. A PM was arguing that configuration management of artifacts involves only checking in the final versions, working on intermediate versions without bothering to check them in daily. Many people are confused, or rather careless, about checking in their work. The configuration manager must ensure that not only the code but all other client deliverables, including technical design, tech specs, etc., are checked into version control. This helps: in a person's absence, someone else can take up the work and continue on the code and deliverables, ensuring the work is not person-dependent. And since the deliverables are checked in daily, every version is available at any moment, so a branch can be taken and worked upon if required.

Sunday, August 10, 2008

Integration Testing...

Integration testing of code

CM: Branching & Merging...


Consider the example given in the figure. Code was checked out from the production environment on the 1st of August, and Module 1 was worked on till the 5th of August, when it was checked in again.

To fix Module 4, another copy of the code was checked out from the production environment at the same time as Module 1 and worked on till the 10th of August, when the changes were checked back in. Now what happens to the earlier changes made to Module 1?

Our requirement is to carry out changes to both Module 1 and Module 4 and retain both sets of changes, although they were checked out on the same day and checked in on different days. Branching and merging is therefore essential when working on CRs touching different parts of the code. Your CM tool must be able to branch out a version and assimilate (merge) the changes at a later point in time, as the sketch below illustrates.
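
To make the idea concrete, the toy Python sketch below performs a three-way merge: two branches taken from the same base each change a different module, and the merge retains both sets of changes. This is only an illustration of the concept, not how any real CM tool works internally.

    # Toy three-way merge: two branches from the same base each change a
    # different module; merging keeps both changes. Illustrative only.
    base     = {"module1": "v1", "module2": "v1", "module3": "v1", "module4": "v1"}
    branch_a = {**base, "module1": "v2"}   # checked in on 5th August
    branch_b = {**base, "module4": "v2"}   # checked in on 10th August

    def three_way_merge(base, ours, theirs):
        merged = {}
        for module in base:
            ours_changed = ours[module] != base[module]
            theirs_changed = theirs[module] != base[module]
            if ours_changed and theirs_changed:
                raise ValueError(f"conflict in {module}: needs manual resolution")
            # Take whichever side changed the module; otherwise keep the base.
            merged[module] = ours[module] if ours_changed else theirs[module]
        return merged

    print(three_way_merge(base, branch_a, branch_b))
    # {'module1': 'v2', 'module2': 'v1', 'module3': 'v1', 'module4': 'v2'}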

Saturday, February 09, 2008

ITSM – Information Technology Service Management

Definition

ITIL is a "framework of best practice approaches intended to facilitate the delivery of high quality IT services. It outlines an extensive set of management procedures that are intended to support businesses in achieving value for money and quality in IT operations. These procedures are supplier independent and have been developed to provide guidance across the breadth of the IT infrastructure."

ITIL (the IT Infrastructure Library) is essentially a series of documents that are used to aid the implementation of a framework for IT Service Management. This customizable framework defines how Service Management is applied within an organization.

Although ITIL was originally created by the CCTA, a UK Government agency, it is now adopted and used across the world as the de facto standard for best practice in the provision of IT services. Although ITIL covers a number of areas, its main focus is on IT Service Management.

History of ITIL

It was initially developed during the 1980s by the CCTA and was widely adopted in the 1990s. This in turn led to the development of a number of standards.

ISO and ITIL

ISO/IEC 20000 is the international standard for ITSM and is broadly aligned with ITIL. It comprises a 'specification' (part 1) and a 'code of practice' (part 2).

ISO 20000-1: This is the Specification for Service Management, the 'certifiable' element of the pair.

ISO 20000-2: This is the 'Code of Practice for Service Management', which is designed to work with the Specification and provides guidance on meeting its requirements.

These two parts specify service management processes and form a basis for the assessment of a managed IT service.

Part 1 may typically be used by:

Organizations seeking tenders for outsourced services
Organizations that require a consistent approach by all service providers in a supply chain
Existing providers to benchmark their IT service management
As the basis for formal certification; and so on.

Part 2 provides guidance to auditors, implementation staff and others.

The ISO 20000 Toolkit

Implementation of any major quality standard is a complex operation. In addition to the learning curve, which can be steep, the demands of international standards are often rigorous. For this reason, a specific kit has been designed to aid both implementation and understanding.

The contents of the toolkit are both diverse and comprehensive, covering all the major processes. Included are the standards themselves, templates, guides, presentations and checklists.

How is ITIL organized?

ITIL is organized into five core publications that revolve around the service lifecycle. These provide best practice guidance for an integrated approach to IT service management.

The five core titles are:

1. Service Strategy
2. Service Design
3. Service Transition
4. Service Operation
5. Continual Service Improvement

To reflect this practice-based approach, ITIL is now formally known as 'ITIL Service Management Practices'.

Version 2 Background

The previous version of ITIL was organized into a series of sets, which themselves were divided into two main areas:

1) Service support
2) Service delivery

Service Support was the practice of those disciplines that enabled IT Services to be provided effectively.

Service Delivery covered the management of the IT services themselves. It involved a number of management practices to ensure that IT services were actually provided as agreed between the Service Provider and the Customer.

What is the ITIL Toolkit?

The ITIL Toolkit is a collection of resources brought together to accompany ITIL.

The materials included are intended to assist in both understanding and implementation, and are therefore targeted at existing ITIL users and beginners.

The toolkit includes:

A detailed guide to ITIL and service management
The ITIL Fact sheets - 12 two-page documents, serving as a concise summary of each of the ITIL disciplines
A management presentation, inclusive of speaker notes
An ITIL audit/review questionnaire and reporting set based on MS-Excel
Materials to assist in the reporting of the above results (eg: presentation template)

What is the ITIL Triangle?

This is a diagram that describes the relationship between ITIL, the ISO20000 service management standard, and your own in-house procedures.

Essentially, this top-down illustration starts at the highest level with ISO20000, the standard specification for IT service management. The next level presents a code of practice for ITSM (e.g. PD0005, which was produced by the CCTA, ITSMF and DISC/BSI). ITIL itself is the third layer, with in-house procedures represented by the bottom layer.


PPB and PCB...

Process Performance Baseline

Process Performance is a measure of the actual results achieved by following a process.

A PPB is a documented characterisation of the actual results achieved by following a process; it is used as a benchmark for comparing actual process performance against expected process performance. A process performance baseline is typically established at the project level [PPB = project level], although the PCB is used to derive the initial process performance baseline.

The PPB documents the historical results achieved by following a process on a given project. Once a PPB is developed, it is used as a benchmark for comparing actual process performance in a project against expected process performance. The PPB of each project is collected and used to define the PCB (at the organisation level).

Process Capability is the range of results that can be achieved by following a process.

Quantitative process management is achieved by using the Process Database (PDB) and the PCB (Process Capability Baseline). The PCB is derived from the PDB and contains the capability of different processes in quantitative terms.

Process Capability Baseline

PCB is a documented characterization of the range of expected results that would normally be achieved by following a specific process under typical circumstances. A process capability baseline is typically established at an organizational level. [PCB=Organization Level]

PCB specifies, based on data of past projects, what the performance of a process is. That is, what a project can expect by following a process. The performance factors of a process are primarily those that relate to quality and productivity. PCBs define - Productivity, Quality, Effort Distribution, Defect Distribution, Defect Injection Rate, CoQ, etc.

Using the capability baseline, a project can predict at a gross level the effort that will be needed for various stages, the defects likely to be observed during various development activities, and quality and productivity of the project.
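
As a hypothetical sketch, one PCB entry (productivity) could be derived from past projects' PPB data as a mean and an expected range. The Python below uses invented figures.

    # Hypothetical sketch: one PCB entry (productivity) derived from the PPB
    # data of past projects. Figures are invented; a real PCB covers many more
    # parameters (quality, effort distribution, defect injection rate, CoQ, ...).
    import statistics

    productivity_per_project = [11.2, 9.8, 12.5, 10.4, 11.9]   # e.g. FP per person-month

    mean = statistics.mean(productivity_per_project)
    sd = statistics.stdev(productivity_per_project)

    # Express the expected range as mean +/- one standard deviation.
    print(f"PCB productivity: mean={mean:.1f}, "
          f"expected range {mean - sd:.1f} to {mean + sd:.1f}")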

SCAMPI-A & SCAMPI-B Differences...