Thursday, October 31, 2013

Google Code Playground...

"Google Code is Google's site for developer tools, APIs and technical resources. The site contains documentation on using Google developer tools and APIs — including discussion groups and blogs for developers using Google's developer products." - Wiki.

Check the AJAX playground here...
  1. https://code.google.com/apis/ajax/playground/ 
  2. https://code.google.com/apis/ajax/playground/?type=visualization 
You can use this code in SharePoint web parts to create dashboards, as sketched below.
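
As a minimal sketch of what that might look like (not an official Google sample; the data, title and element ID are invented for illustration), here is a Google Visualization API pie chart that could be pasted into a SharePoint Content Editor web part, using the jsapi loader current at the time:

    <!-- Load the Google AJAX APIs loader and the Visualization corechart package. -->
    <script type="text/javascript" src="https://www.google.com/jsapi"></script>
    <script type="text/javascript">
      google.load('visualization', '1', { packages: ['corechart'] });
      google.setOnLoadCallback(drawChart);

      function drawChart() {
        // Invented sample data -- a real dashboard could pull this from a SharePoint list.
        var data = google.visualization.arrayToDataTable([
          ['Status', 'Tasks'],
          ['Open', 12],
          ['In progress', 7],
          ['Done', 23]
        ]);
        var chart = new google.visualization.PieChart(document.getElementById('chart_div'));
        chart.draw(data, { title: 'Task status (sample data)' });
      }
    </script>
    <div id="chart_div" style="width: 500px; height: 300px;"></div>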

Wednesday, October 30, 2013

SPICE - International Standard for Software Process Assessment

Download the PDF from here:

http://www.4shared.com/office/MCRwCCJk/SPICE__International_Standard_.html

The Role of Assessment in Software Process Improvement

The Role of Assessment in Software Process
Improvement - SEI CMU

Download the PDF from here.

http://www.4shared.com/office/jO4USGPo/The_Role_of_Assessment_in_Soft.html

Rethinking the Concept of Software Process Assessment - Tore Dybå, M.Sc, Nils Brede Moe, M.Sc, SINTEF, Trondheim, Norway

This is an exact reproduction of Tore and Nils' paper titled "Rethinking the Concept of Software Process Assessment". All rights reserved with the authors; originally published at http://www.iscn.at/select_newspaper/assessments/sintef_ass.html

Rethinking the Concept of Software Process Assessment

Tore Dybå, M.Sc.
Nils Brede Moe, M.Sc.
SINTEF, Trondheim, Norway

Introduction

Much of the discussion on software process improvement (SPI) during the 1990s has focused on software process assessment and "best practice" models such as the Capability Maturity Model (CMM) for software [11], and ISO/IEC 15504 (SPICE) [5].

In this paper we present a critique of the global "best practice" approach to software process assessment and improvement, focusing on the necessity of exploring the contingencies of individual software organisations. Furthermore, we present some of our experiences in using tailor-made assessments based on a participative approach to focus software process improvement activities in Norwegian software companies.

The participative approach to software process assessment is part of the methodological basis used in a major Norwegian SPI program called SPIQ (Software Process Improvement for better Quality). The objective of SPIQ is to increase the competitiveness and profitability of the Norwegian IT industry through a systematic and continuous approach to process improvement.

The goal of SPIQ is twofold: (1) to establish an environment for process improvement in software companies associated with SPIQ, and (2) to transfer and diffuse the knowledge gained to the rest of the IT industry in Norway through training, seminars, and conferences.

The rest of this paper is organised into five sections. In the first section we discuss the role of assessment in SPI. Next, we present a participative approach to software process assessment, and our experiences with using this approach are then exemplified with two case studies. We then summarise our experiences in terms of lessons learned, and finally make some concluding remarks.

The Role of Assessment in SPI

An increasingly popular way of starting an SPI program is to do an assessment in order to determine the state of the organisation's current software processes, to determine high-priority issues, and to obtain organisational commitment for SPI.

Why Perform Assessments?

Not all software companies are equally skilled at identifying the causes of their problems or at identifying the most rewarding opportunities for future competition. Without a preliminary problem analysis, "solutions" are seldom effective; on the contrary, they are often irrelevant to the underlying causes of the symptoms that are being treated and only add more noise to the system. Consequently, it is of utmost importance that complex problems in software development be thoroughly understood before a solution is attempted.

In many cases, process assessment can help software organisations improve themselves by identifying their critical problems and establishing improvement priorities before attempting a solution [6]. The main reasons to perform a software process assessment are therefore [13]:

  1. To understand and determine the organisation's current software engineering practices, and to learn how the organisation works.
  2. To identify strengths, major weaknesses and key areas for software process improvement.
  3. To facilitate the initiation of process improvement activities, and enrol opinion leaders in the change process.
  4. To provide a framework for process improvement actions.
  5. To help obtain sponsorship and support for action through following a participative approach to the assessment.

As described later, the last point, a participative approach, is crucial for a successful software process assessment.

Ways of assessing software processes

There are three ways in which a software organisation can make an assessment of its development practices:

  1. Benchmark against other organisations.
  2. Benchmark against "best practice" models.
  3. Assessment guided by the individual goals and needs of the organisation.

The first way of doing an assessment is a traditional benchmark exercise used to gain an outside perspective on practices and to borrow or "steal" ideas from best-in-class companies. This type of benchmarking is "an ongoing investigation and learning experience that ensures that best practices are uncovered, analysed, adopted, and implemented." [3]

Hence, benchmarking is a time-consuming and disciplined process that involves (1) a thorough search to identify best practice companies, (2) a careful study of one's own practices and performance, (3) systematic site visits and interviews, (4) analysis of results, (5) development of recommendations, and finally, and most importantly, (6) implementation.

The second way of performing an assessment is to benchmark the company against one or more of the "best practice" models on the market. Over the years, several assessment models have emerged in the software industry, and there is a range of possible assessment models that one can choose from. In addition to the CMM for software and ISO/IEC 15504, further examples of such models are ISO 9001 (including 9000-3), TickIT, the European Quality Award, Bootstrap, Trillium, and ISO/IEC 12207.

The models focus on different aspects of the software processes and the organisation, and they are all associated with specific strengths and weaknesses. However, they share a common set of problems, which mainly have to do with the fact that the models are artificially derived and based on idealised lists of unvalidated practices. In addition, they are associated with both statistical and methodological problems [2].

Furthermore, most of these models also emphasise an improvement approach based on statistical process control (SPC), which is a highly questionable approach for the majority of software companies [10].

The third way of performing an assessment is with a participative approach tailored to the individual needs of the company. This approach is less time-consuming than the traditional benchmark approach, and it is clearly more relevant and valid than the model-based approach.

During the 1990s, "software process assessment" has become synonymous with the model-based approach. In our view it is time to rethink this conception of software process assessment, and to proceed to a tailor-made and participative approach focusing on what is unique to each company and how this uniqueness can be exploited to gain competitive advantage.

A Participative Approach to Software Process Assessment

The objective of our approach to software process assessment is to focus on the necessity of participation for SPI to take place. Basically, there are three reasons for this: (1) Developers and managers alike must accept the data from the assessment as valid, (2) they must accept responsibility for the problems identified, and (3) they must start solving their problems.

General principles for performing model-based software process assessment are given in [6, 7, 11, 13]; specific guidelines are given in [4] for performing CMM-based assessments and in [5] for ISO/IEC 15504 conformant assessments.

Participative Assessment Process

We adopted a general approach for organisational assessment and specialised it to the domain of software development. The process involved researchers and practitioners acting together in a participative approach to diagnose problems in software development, using the basic principles of survey feedback (see e.g. [1, 8]), which is a specialised form of action research.

The assessment process is an adaptation of the evaluation model developed by Van de Ven and Ferry [12] and consists of six steps, as shown in Figure TDNBM.1 and described below:


Figure TDNBM.1: The Assessment Process.


In the first step, assessment initiation, the insiders and outsiders of the organisation should clarify their respective roles and the objectives of the assessment by answering the following questions:

  • What are the purposes of performing software process assessment?
  • Who are the users of the assessment, and how will the results be used?
  • What is the scope of the assessment in terms of organisational units and issues?
  • To what extent is there a commitment to using scientific methods (e.g. psychometric principles) to design and implement the assessment?
  • Who should conduct the assessment, and what resources are available?

It is important that due consideration is given to these questions, since they are crucial for determining whether an assessment is relevant in the first place, and for tailoring the process and content of the assessment to the specific needs of the organisation.

The second step, focus area delineation, is an exploration of the overall issues identified for the assessment in step one. In our experience, most companies do not have a shared understanding of their specific goals. A conscious analysis of commonly used high-level performance goals and focus areas in standards and reference models can, therefore, be useful as a starting point for group discussions in this step. Examples of such focus areas are software processes (e.g. customer-supplier, engineering, support, management and organisation), competitive priorities (e.g. price, quality, flexibility, and service), organisational learning (learning from past experiences, learning from others, and current SPI practices), and perceived factors of success.

In the third step, criteria development, multiple operational criteria are developed for each of the high-level goals. The process of criteria development requires that practitioners select and define concrete characteristics that are to be measured and used as indicators of goal attainment. A decision also has to be taken regarding the use of aggregate or composite measures.

To operationalise the criteria, one question is defined for each characteristic such that the theoretical abstractions can be closely related to everyday work situations. We adopted the format used in the European Software Institute's 1995 Software Excellence Survey, such that two subjective rating scales accompany each question: one to rate the current strength or practice level and one to rate the future importance (see Figure TDNBM.2).

Furthermore, to help the companies, we developed a standard questionnaire that could be used as a starting point for internal discussions and for the development of tailor-made questionnaires. The standard questionnaire is based on our experiences of performing process assessments in six companies during the SPIQ pre-project phase, and includes four sections. The first section, on competitive priorities, is adapted from the aforementioned ESI survey. The second section is adapted from the software process areas in the emerging ISO/IEC 15504 standard. The third section concerns SPI processes and learning from past experiences and the experiences of others. Finally, the fourth section is concerned with finding the most important factors enabling SPI success in the organisation.

Figure TDNBM.2: Typical question format from the questionnaire. Each question is accompanied by two rating scales: one for current strength and one for future importance.
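
To make the format concrete, one answered item can be thought of as a question text plus two ratings. A minimal sketch in JavaScript; the question text and the 1-to-5 range are assumptions for illustration, since the paper does not state the scale used:

    // One answered questionnaire item (question text and scale range are assumed).
    var item = {
      question: 'We make fast design changes',  // hypothetical question text
      currentStrength: 2,                       // rating on an assumed 1-5 scale
      futureImportance: 5                       // rating on an assumed 1-5 scale
    };

    // The difference between the two ratings is the "gap" used in later feedback sessions.
    var gap = item.futureImportance - item.currentStrength;  // 3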

The issues pertinent to step four, assessment design, relate to where the assessment will be conducted (organisational units), the roles of the insiders and outsiders, the time horizon of the assessment, and the unit of analysis, as well as deciding what the sample will be, how the data will be collected, how aggregate concepts will be measured, and how the data will be analysed.

When aggregate or composite measures are used, one should carefully decide on the corresponding psychometric requirements (see e.g. [9]). That is, unless the composite scales in the questionnaires are constructed and evaluated along the lines associated with psychometric tests, they may produce assessment results that are seriously misleading.
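
The paper does not prescribe a particular statistic, but a standard psychometric check for a composite scale, in the spirit of [9], is Cronbach's alpha, which measures how consistently the items of a composite move together. A minimal sketch, with invented ratings:

    // Cronbach's alpha: internal-consistency reliability of a composite scale.
    // scores: one row per respondent, one column per item in the composite.
    function cronbachAlpha(scores) {
      var k = scores[0].length;  // number of items in the composite
      function variance(xs) {
        var mean = xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
        var ss = xs.reduce(function (a, x) { return a + (x - mean) * (x - mean); }, 0);
        return ss / (xs.length - 1);
      }
      // Sum of the variances of the individual items across respondents.
      var sumItemVars = 0;
      for (var j = 0; j < k; j++) {
        sumItemVars += variance(scores.map(function (row) { return row[j]; }));
      }
      // Variance of each respondent's total score across all items.
      var totalVar = variance(scores.map(function (row) {
        return row.reduce(function (a, b) { return a + b; }, 0);
      }));
      return (k / (k - 1)) * (1 - sumItemVars / totalVar);
    }

    // Example: four respondents rating a three-item composite on a 1-5 scale.
    console.log(cronbachAlpha([[4, 5, 4], [3, 3, 2], [5, 4, 5], [2, 2, 3]]));  // ~0.89

A common rule of thumb attributed to [9] is that values around 0.7 or higher indicate acceptable reliability for a composite.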

It is important, however, to note that the more rigorous the assessment design becomes, the greater the time, costs, and other resources expended on the assessment are likely to be. Therefore, one should ask the question at every decision point whether the benefits that result from a more sophisticated design to ensure accuracy, confidence, generalisability, and so on, are worth the investment of more resources.

In step five, assessment implementation, the assessment is implemented according to the procedure decided upon in the previous step. The main considerations during this step are completeness and honesty in data collection procedures and the recording of unanticipated events that may influence the assessment results.

The major concerns during step six, data analysis and feedback, are to provide opportunities for respondents to participate in analysing, interpreting and learning from the results of the assessment, and, furthermore, to identify concrete areas for improvement. There are many ways in which this could be done. We have relied upon half-day workshops in which preliminary findings on initial questions and problems are presented verbally, in writing, and with illustrations.

These workshops begin with a review of the objectives of the assessment, the focus areas and the design and implementation of the assessment. Findings regarding the scores on current strengths and future importance are presented in terms of a gap analysis. Normally, the participants raise a multitude of questions and issues when the findings are presented, and they take part in group discussions and reflections as they review and evaluate the preliminary findings. Some of the questions can be clarified and answered directly with the data at hand, other questions can be answered by reanalysing the data, and finally some issues are raised which cannot be resolved with the current assessment data. In the last case, a decision has to be taken regarding further data collection.

Typically, we use scatter plots, bar charts and histograms to illustrate preliminary findings from an assessment in order to highlight gaps between current levels of practice and future importance, and the dispersion of responses both within and between groups.

Figure TDNBM.3: Illustration of preliminary findings.

Figure TDNBM.3 shows an example of an illustration that was used in a feedback session presenting preliminary findings from an assessment. Some of the information was presented verbally. The extra information for this graph was "100% of the respondents have a gap that is larger than or equal to one".
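
As an illustration of how such a statistic might be produced from the raw answers (the ratings below are invented, not the actual assessment data), the gap for each respondent is simply future importance minus current strength:

    // Gap analysis for one questionnaire item: gap = future importance - current strength.
    var responses = [
      { strength: 2, importance: 4 },
      { strength: 3, importance: 5 },
      { strength: 1, importance: 3 }
    ];

    var gaps = responses.map(function (r) { return r.importance - r.strength; });
    var meanGap = gaps.reduce(function (a, b) { return a + b; }, 0) / gaps.length;
    var shareGapAtLeastOne = gaps.filter(function (g) { return g >= 1; }).length / gaps.length;

    console.log('mean gap: ' + meanGap.toFixed(2));                  // mean gap: 2.00
    console.log((shareGapAtLeastOne * 100) + '% have a gap >= 1');   // 100% have a gap >= 1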

Model-based approach versus participative approach to assessment

In summary, our approach to software process assessment is based on a structured process emphasising participation in each and every step. See Table TDNBM.1 for a comparison of the model-based approach and the participative approach to software process assessment.


  Aspect                             | Model-based Approach                                                         | Participative Approach
  Focus areas and criteria from      | "Best practices" according to the reference model.                           | Tailor-made to the needs of the organisation.
  Data collected from                | Selected group of managers and representatives from specific projects.      | Everyone in the organisation or department.
  Data reported to                   | Sponsor (top management and department managers).                            | Everyone who participated (including management).
  Role of researcher or consultant   | Administration of questionnaires, documenting findings and recommendations. | Obtaining agreement on the assessment approach, joint design and administration of the questionnaire, design of workshops.
  Action planning done by            | Top management.                                                              | Teams at all levels.
  Probable extent of change and SPI  | Low.                                                                         | High.

Table TDNBM.1: Two approaches to software process assessment.

In this section, we take a look at how two companies implemented the participative approach to software process assessment in significantly different ways, and in the next section we present the key lessons learned from these cases.

Company Y

Y is nearly 15 years old, and has grown to become one of the leading consulting companies in Scandinavia. Current customers include large industrial enterprises, government agencies and international organisations. They focus on using an iterative and incremental development method. The company has a flat and open organisation with a process-oriented structure, and employs about 140 persons, over 90% of whom hold an MSc or MBA.

In the first step (assessment initiation), members of the SPIQ project team had an opening meeting with the manager of technology. This meeting resulted in the formulation of two objectives for the assessment:

  1. Get a survey of today's process status.
  2. Get an "outsider's" view of the company to suggest areas for improvement.

The assessment focused mainly on project managers and department leaders, and all the questions were related to software development. The scope of the assessment, in terms of organisational units and issues, was all the process areas of the company. It was decided that the assessment would be conducted by the SPIQ team and the manager of technology.

In steps two and three (focus area delineation and criteria development), Y used the standard questionnaire as a starting point for internal discussions and for the development of tailor-made questionnaires. Somewhat surprisingly, they did not change anything; the reason was a wish for external impulses. No aggregate or composite questions were used; the focus was only on single questions.

In step four (assessment design), the date of the assessment was determined, and it was decided that one of the researchers from SPIQ should hold a presentation for the managers in the company, to be followed by the assessment. The presentation was an introduction to SPI with a focus on general improvement; its purpose was to describe the questionnaire and its intent, and to give a quick walkthrough of the questions.

After a short period of planning, all was set for step five (assessment implementation). After the presentation, the participants (10 persons) from Y filled out the questionnaires in the meeting room. This gave them the opportunity to discuss and agree upon the interpretation of unclear questions in the questionnaire. The participants in the assessment only answered for the unit and those levels that were part of their own area of responsibility. All the information was treated confidentially. Filling out the questionnaire took about 30 minutes.

In the final step (data analysis and feedback), most of the respondents participated in the analysis and interpretation of the preliminary results. A half-day workshop was set up for this event, and the most important results were presented. The participants raised a lot of questions and issues, and a lively discussion arose as they reviewed and evaluated the preliminary findings. Some of the questions were clarified and answered directly, others were answered by reanalysing the data.

The discussion ended in a priority list of four key areas: delivery (time schedule and budget), the customer-supplier relationship, testing and configuration management, and risk control. Some of these results were not expected, while others were obvious to the participants.

The next step for company Y will be an internal discussion of the results, followed by identification of the necessary SPI actions. However, they are first going to co-ordinate this work with the work going on in a parallel project.

The results of the last section of the questionnaire were also of great interest. This section is divided into three sub-sections, and is concerned with finding the most important factors enabling SPI success in the organisation. The most important arguments in favour of SPI in company Y were: continuous adjustment to external conditions, job satisfaction, and vital importance for future competitiveness. The most important arguments against SPI were: increasing workload (resource demanding), SPI suppresses creativity and the sense of responsibility, and it moves focus away from the project.

The most important factors for ensuring successful process improvement in the company were: Management involvement, motivation/employee participation, and well-defined and simple routines.

Company X

X is one of the leading companies in their field. Their products are a combination of software (embedded software or firmware) and hardware. In addition to corporate offices and manufacturing facilities in Norway, X has significant marketing, sales and support operations in the USA, Europe, and the Far East. The company employs about 550 persons in Norway, of which the firmware division employs 30 persons.

During the first step (assessment initiation), members of the SPIQ project team and company X had an opening meeting where the objectives of the assessment were set. X wanted to get a survey of today's process status and potential for improvements. After identifying some key areas of improvement, the intention was to make a plan for SPI actions. The assessment focused on project managers, customers and co-workers in two firmware departments and one hardware department. These were divided into three subgroups: managers, customers and developers. All departments in company X were represented.

It was decided that the assessment would be conducted by the SPIQ team and the people responsible for the assessment at company X.

In steps two and three (focus area delineation and criteria development), X used the standard questionnaire as a starting point for internal discussions and for the development of a tailor-made questionnaire. A committee was put together to evaluate the questions, consisting of two designers, four managers, two customers, and one person from the quality department.

After two meetings the number of questions had doubled. None of the original questions were removed.

We recommended that X reduce the number of questions and make some minor changes to the wording of items in order to make the question text more precise. Some rewording was subsequently done; however, the number of questions was not reduced. No aggregate or composite questions were used; the focus was only on analysing answers to single questions.

During step four (assessment design), it was decided that the assessment co-ordinator in the company should hold a presentation regarding the assessment's role in improving X's software processes.

In step five (assessment implementation), the presentation was held, and most of the participants in the assessment were present. After the presentation they had 30 minutes to fill out the questionnaire. This was too little time, however, so almost everyone had to deliver the questionnaire later. The leader of the assessment personally visited those who did not show up for the presentation, helping them to complete the questionnaire. 32 employees from company X participated; however, four did not deliver their forms. The participants in the assessment only answered for the unit and those levels that were part of their own area of responsibility.

In step six (data analysis and feedback), most of the respondents participated in the analysis and interpretation of the presented results. A half-day workshop was set up for this event, and the most important results were presented. The problem with this session was the lack of discussion. Although this complicated the process, the session ended with a priority list of four key areas: flexibility vs. stability, long-term quality, teamwork, and learning from past experiences.

The next step for company X will be an internal discussion of these results, followed by a process to suggest alternative SPI actions to start with. This work is difficult at the moment because company X is in the middle of a reorganisation.

The results of the last section of the questionnaire were also of great interest. The most important arguments in favour of SPI in the firmware group were: quality, motivation/employee participation, and job satisfaction. The most important arguments against SPI were: "This only creates new procedures and rules" and "a waste of resources (bad priority)."

The most important factors for ensuring successful process improvement in the firmware group were: motivation, focus on the developer/designer, and management involvement.

Discussion of the cases

These cases are from two quite different companies: Y, a pure software company, and X, a combined software and hardware company with its own production. Both had the same method to follow, but the execution differed considerably in many areas. The objectives of the assessments were much the same.

Company Y did not work on the questionnaire template, and let the researchers perform the assessment. The questionnaire was therefore not as tailor-made as one would expect. The reason for this was, as explained before, the wish for purely external input to the assessment. It is too early to conclude whether this way of conducting the assessment was successful.

Company X, on the other hand, made a lot of adjustments and therefore developed a highly tailor-made questionnaire. The problem in this case was the lack of involvement from the researchers' side. Too many questions were produced without any being removed from the template; many questions were too similar, and there were problems interpreting some of them.

With this situation in mind, one could expect a lively discussion of the results from the tailor-made questionnaire, and less discussion of the results from the standard questionnaire. It was a big surprise that the opposite occurred. There could be many reasons for this: at company X over 20 persons participated in the discussion, while at Y there were only 10. Also, the participants at X were a mixture of managers and developers, and it is possible that this prevented people from speaking out.

Another distinction between the two companies is the composition of the groups that participated in the assessment. In company Y the group was homogeneous (only process managers), but in company X there were three different groups. In this kind of assessment, the results are more interesting if a large group answers the questions, and if the participants come from different parts of the company. This was the case at company X.

Comparing data from different groups and between different members of the same group gave interesting results. For example, the project managers were of the opinion that the "current strength" level for the topic "making fast design changes" was low and that this area should be improved significantly. The developers had the opposite opinion: they felt the current level was too high and wanted to decrease it. People from the customer group thought the level was OK.
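
A sketch of this kind of between-group comparison, with invented group labels and ratings, is simply a per-group mean of the "current strength" answers for the topic in question:

    // Mean "current strength" rating per respondent group for one topic.
    var ratings = [
      { group: 'project managers', score: 2 },
      { group: 'project managers', score: 1 },
      { group: 'developers',       score: 4 },
      { group: 'developers',       score: 5 },
      { group: 'customers',        score: 3 }
    ];

    var byGroup = {};
    ratings.forEach(function (r) {
      (byGroup[r.group] = byGroup[r.group] || []).push(r.score);
    });

    Object.keys(byGroup).forEach(function (g) {
      var xs = byGroup[g];
      var mean = xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
      console.log(g + ': mean current strength = ' + mean.toFixed(2));
    });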

The results from the discussions had very little in common; the results from the fourth section of the questionnaire had more in common. Under the category "The most important arguments in favour of SPI", the results tell us that both companies think that SPI activities will improve quality and make the employees more satisfied. SPI activities seem necessary if one wants to achieve success.

In the category "The most important arguments against SPI", the companies shared the opinion that SPI work increases the workload and is a source of new procedures and rules, which costs a lot of resources and moves the focus away from the projects. Comparing the results from these two categories is interesting: the companies argue that SPI is necessary for the company to survive, yet see a lot of burdensome work in doing it. Maybe SPI has a problem with its association with "quality control"!

Under "The most important factors for ensuring successful process improvement", the companies had two factors in common: Employee participation and management involvement.

During the presentation of the data, there were a lot of "expected" conclusions, but also some that the company had never thought about. The conclusions they had expected had, however, never been externalised before; this happened for the first time as a result of the assessments.

Lessons Learned

1. Assessment initiation

Involving more than one group in the assessment opens up the possibility of multiple views and greater interest in the discussion and analysis phase.

2. Focus area delineation and criteria development

A team should be put together in the company performing the assessment in order to construct a tailor-made questionnaire.

The time needed to complete the questionnaire should not exceed 30 minutes (about 60 questions). If this does not cover all the areas identified during initiation, the company should conduct more than one assessment.

There should be close co-operation between the outsiders and the insiders. Companies have problems distinguishing between important and less important questions, and they may also fail to notice when two or more questions express the same concept. In such cases, people from the outside will be able to help.

It is very important for the outsiders to get to know the company well enough to be able to give useful advice. To achieve this, it is critical that they work closely together with the people in the company performing the assessment.

3. Assessment design

Hold a presentation for the persons participating in the assessment. This presentation should both motivate participation and ensure that everybody has the same understanding of the questions and the goals of the assessment.

During the assessment, there will always be discussions regarding the interpretation of questions. It is therefore advantageous to let the participants complete the assessment at the same time.

All information should be treated as confidential. If not, there will always be someone not speaking from the heart.

4. Assessment implementation

Do not wait too long before holding the feedback session; waiting may lead to a loss of SPI focus in the department or company.

If definitions of questions are discussed during implementation, it is wise to document the conclusions of these discussions. During the analysis and feedback session, it is highly likely that these questions will be discussed again, and the participants will not remember how they originally interpreted them.

5. Data analysis and feedback

Do not leave this session without identifying concrete areas for improvement. These areas will be input to the next phase of improvement: action planning.

To have a useful discussion in this session, make sure the group participating is not too large or too small; the ideal size is 8 to 14 people. If the group is big, the assessment leader in the company should prepare the data and suggest some key areas before the half-day workshop.

Maybe the most valuable lesson learned was the need for the assessment to be closely aligned with and tailored to the company's overall strategy. Without this, it will be hard to get acceptance from the managers, which will lead to fewer resources and low priority. A tight time schedule is also necessary; waiting too long between the steps will only lower interest and motivation.

Conclusions

In this paper we have described a participative approach to software process assessment, experiences from two divergent implementations, and the lessons learned. In summary, SPI efforts should be tailored to the goals and needs of the individual organisation, not benchmarked against a "synthetic" model of so-called "best practices". Furthermore, researchers and practitioners should work closely together in a mutually accepted setting of inquiry, action and learning to solve the problems at hand and to explore possibilities of competitive advantage through improved development processes.

References

  1. Baumgartel, H. "Using Employee Questionnaire Results for Improving Organizations: The Survey 'Feedback' Experiment," Kansas Business Review, Vol. 12, pp. 2-6, December 1959.
  2. Bollinger, T. and McGowan, C. "A Critical Look at Software Capability Evaluations," IEEE Software, pp. 25-41, July 1991.
  3. Camp, R.C. Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, Milwaukee, Wisconsin: ASQC Quality Press, 1989.
  4. Dunaway, D.K. and Masters, S. "CMM-Based Appraisal for Internal Process Improvement (CBA IPI): Method Description," Technical Report CMU/SEI-96-TR-007, Software Engineering Institute, 1996.
  5. El Emam, K., Drouin, J.-N. and Melo, W. (eds.) SPICE: The Theory and Practice of Software Process Improvement and Capability Determination, Los Alamitos, California: IEEE Computer Society Press, 1998.
  6. Humphrey, W.S. Managing the Software Process, Reading, Massachusetts: Addison-Wesley, 1989.
  7. Humphrey, W.S. Managing Technical People: Innovation, Teamwork, and the Software Process, Reading, Massachusetts: Addison-Wesley, 1997.
  8. Neff, F.W. "Survey Research: A Tool for Problem Diagnosis and Improvement in Organizations," in A.W. Gouldner and S.M. Miller (eds.) Applied Sociology, pp. 23-38, New York: Free Press, 1966.
  9. Nunnally, J.C. and Bernstein, I.A. Psychometric Theory, 3rd edition, New York: McGraw-Hill, 1994.
  10. Ould, M.A. "CMM and ISO 9001," Software Process: Improvement and Practice, Vol. 2, pp. 281-289, 1996.
  11. Paulk, M.C., Weber, C.V., Curtis, B. and Chrissis, M.B. The Capability Maturity Model: Guidelines for Improving the Software Process, Reading, Massachusetts: Addison-Wesley, 1995.
  12. Van de Ven, A.H. and Ferry, D.L. Measuring and Assessing Organizations, New York: John Wiley & Sons, 1980.
  13. Zahran, S. Software Process Improvement: Practical Guidelines for Business Success, Harlow, England: Addison-Wesley, 1998.


  • Tore Dybå was born in 1961 and received his M.Sc. in Computer Science and Telematics from the Norwegian Institute of Technology (now NTNU) in 1986. He worked as a consultant for eight years both in Norway and in Saudi Arabia before he joined SINTEF in 1994. In addition to being a Research Scientist at SINTEF, he is also a Research Fellow at the Norwegian University of Science and Technology working on a Ph.D. thesis in Software Process Improvement (SPI). The focus of his Ph.D. thesis is an investigation of the key learning processes and factors for success in SPI.

  • Nils Brede Moe was born in 1972 and received his M.Sc. from the Norwegian University of Science and Technology (NTNU) in 1998. His main research areas in the field of Software Process Improvement include measurement-based improvement, assessments, and improvement at an organisational level. Other research areas are Human-Computer Interaction (HCI) and E-commerce.

SQL Essential Training - LinkedIn

Datum: a piece of information; data is the plural of datum. Data are pieces of information such as text, images or video. Database: a collection of data. ...