Short Questions
Number of Questions:150
UNIT 1
1) Define Software Engineering.
Software Engineering :
• The application of a systematic, disciplined, quantifiable
approach
• To the development, operation and maintenance of software
2) What is a Process Framework?
Process Framework :
• Establishes the foundation for a complete software process
• By identifying a small number of framework
activities that are applicable to all software
projects, regardless of their size and complexity
3) What are the Generic Framework Activities?
Generic Framework Activities :
• Communication
• Planning
• Modeling
• Construction
• Deployment
4) Define Stakeholder.
Stakeholder :
• Anyone who has a stake in the successful outcome of the
project
• Business managers, end users, software engineers, support
people
5) How do the Process Models differ from one another?
• Based on flow of activities
• Interdependencies between activities
• Manner of Quality Assurance
• Manner of Project Tracking
• Team Organization and Roles
• Manner in which work products are identified and required
6) Write out the reasons for the Failure of Water Fall
Model?
Reasons For The Failure Of Water Fall Model :
• Real projects rarely follow a sequential flow; iterations are
made in an indirect manner
• Difficult for the customer to state all requirements explicitly
• Customer needs great patience, as a working product is
available only at the deployment phase
7) What are the Drawbacks of RAD Model?
Drawbacks of RAD Model :
• Requires a sufficient number of human resources to create
enough teams
• If developers and customers are not committed, the system
results in failure
• If the system is not properly modularized, building
components may be problematic
• Not applicable when there is a high possibility of technical
risk
8) Why Formal Methods are not widely used?
• Quite time consuming and expensive
• Extensive expertise is needed for developers to apply formal
methods
• Difficult to use as they are technically sophisticated;
maintenance may become a risk
9) What are Cross-Cutting Concerns?
Cross Cutting Concerns :
• When concerns cut across multiple functions, features and
information
10) What are the different Phases of Unified Process?
Different Phases of Unified Process :
• Inception Phase
• Elaboration Phase
• Construction Phase
• Transition Phase
• Production Phase
11) Define the terms :
a) Agility
b) Agile Team
a) Agility :-
• Dynamic, context specific, aggressively change
embracing and growth oriented
b) Agile Team :-
• Fast Team
• Able to Respond to Changes
12) Define the terms:
a) Agile Methods
b) Agile Process
a)Agile Methods :-
• Methods to overcome perceived and actual weaknesses in
conventional software engineering
• To accommodate changes in the environment, requirements and
use cases
b)Agile Process :-
• Focus on team structures, team communication and rapid
delivery of software; de-emphasize the importance of
intermediate work products
13) What is the Use of Process Technology Tools?
Use of Process Technology Tools :
• Help Software Organizations
1. Analyze their current process
2. Organize work task
3. Control And Monitor Progress
4. Manage Technical Quality
14) Define the term Scripts.
Scripts :
• Specific Process Activities and other detailed work
functions that are part of team process
15) What is the Objective of the Project Planning Process?
Objective of the Project Planning Process :
• To provide a framework that enables the manager to make
reasonable estimates of resources, cost and schedule
16) What are the Decomposition Techniques?
Decomposition Techniques :
• Software Sizing
• Problem – Based Estimation
• Process – Based Estimation
• Estimation With Use – Cases
• Reconciling Estimates
17) How do we compute the “Expected Value” for Software
Size?
• The expected value for the estimation variable (size), S, can
be computed as a weighted average of the optimistic (Sopt),
most likely (Sm) and pessimistic (Spess) estimates
• S = (Sopt + 4Sm + Spess) / 6
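As a quick sketch, the weighted average can be computed directly (the estimate values below are illustrative, not from the notes):

```python
def expected_size(s_opt, s_likely, s_pess):
    # Weighted average: the "most likely" estimate counts four times
    return (s_opt + 4 * s_likely + s_pess) / 6

# e.g. optimistic 4.6, most likely 6.9, pessimistic 8.6 (KLOC)
print(expected_size(4.6, 6.9, 8.6))  # ≈ 6.8 KLOC
```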
18) What is an Object Point?
Object Point :
• The count is determined by multiplying the original number of
object instances by a weighting factor and summing to obtain a
total object point count
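A minimal sketch of the count, using hypothetical weighting factors (real values come from published complexity tables):

```python
# Hypothetical weights for simple instances of each object type
WEIGHTS = {"screen": 2, "report": 5, "3gl_component": 10}

def object_point_count(instances):
    # Multiply each instance count by its weighting factor and sum
    return sum(n * WEIGHTS[kind] for kind, n in instances.items())

print(object_point_count({"screen": 4, "report": 2, "3gl_component": 1}))  # 28
```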
19) What is the difference between the “Known Risks” and Predictable
Risks” ?
Known Risks :-
• Risks that can be uncovered after careful evaluation of the
project plan, the business and technical environment
in which the project is being developed
• Example : Unrealistic delivery date
Predictable Risks :-
• Extrapolated from past project experience
• Example : Staff turnover
20) List out the basic principles of software project
scheduling ?
Basic Principles Of Software Project Scheduling :-
• Compartmentalization
• Interdependency
• Time Allocation
• Effort Validation
• Defined Responsibilities
• Defined Outcomes
• Defined Milestones
UNIT 2
21) What are the Classifications of System Engineering?
Classifications of System Engineering :
• Business Process Engineering[BPE]
• Product Engineering
22) List out the Elements in Computer-Based System?
Elements in Computer-Based System :
• Software
• Hardware
• People
• Database
• Documentation
• Procedures
23) What are the Factors to be considered in the System
Model Construction?
• Assumptions
• Simplifications
• Limitations
• Constraints
• Preferences
24) What does a System Engineering Model accomplish?
• Define processes that serve the needs of the view under
consideration
• Represent the behavior of the processes and the assumptions
on which the behavior is based
• Explicitly define exogenous and endogenous inputs
• Represent all linkages that enable the engineer to better
understand the view
25) What Architectures are defined and developed as part of
BPE?
• Data Architecture
• Applications Architecture
• Technology Architecture
26) What is meant by Cardinality and Modality ?
Cardinality :-
• The number of occurrences of one object related to the
number of occurrences of another object
• One to One [1 :1]
• One to Many [1 : N]
• Many to Many [M : N]
Modality :-
• Whether or not a particular Data Object must participate
in the relationship
27) What are the Objectives of Requirement Analysis ?
Objectives of Requirement Analysis :
• Describe what the customer requires
• Establish a basis for the creation of a software design
• Define a set of requirements that can be validated once
the software is built
28) What are the two additional features of the Hatley-Pirbhai
Model?
• User Interface Processing
• Maintenance and Self test Processing
29) Define System Context Diagram[SCD]?
System Context Diagram[SCD] :
• Establishes the information boundary between the system being
implemented and the environment in which the system operates
• Defines all external producers, external consumers and
entities that communicate through the user interface
30) Define System Flow Diagram[SFD]?
System Flow Diagram[SFD] :
• Indicates information flow across the SCD regions
• Used to guide the system engineer in developing the system
31) What are the Requirements Engineering Process
Functions?
• Inception
• Elicitation
• Elaboration
• Negotiation
• Specification
• Validation
• Management
32) What are the Difficulties in Elicitation?
Difficulties in Elicitation :
• Problem Of Scope
• Problem Of Understanding
• Problem Of Volatility
33) List out the Types of Traceability Table?
Types of Traceability Table :
• Features Traceability Table
• Source Traceability Table
• Dependency Traceability Table
• Subsystem Traceability Table
• Interface Traceability Table
34) Define Quality Function Deployment[QFD]?
Quality Function Deployment[QFD] :
• A technique that translates the needs of the customer into
technical requirements
• “Concentrates on maximizing customer satisfaction from
the software engineering process”
35) What are the Benefits of Analysis Pattern?
Benefits of Analysis Pattern :
• Speed up development of the analysis model
• Facilitate transformation of the analysis model into a
design model
36) What is System Modeling?
System Modeling :-
• Important Element in System Engineering Process
• Define Process in each view to be constructed
• Represent Behavior of the Process
• Explicitly define exogenous and endogenous inputs
37) Define CRC Modeling ?
CRC Modeling :-
• Class-Responsibility-Collaborator modeling
• A collection of standard index cards, each divided into 3
sections
1. Name of the class at the top
2. List of class responsibilities on the left
3. Collaborators on the right
• Collaborators are classes that provide the information needed
to complete a responsibility
38) List out the Factors of Data Modeling?
Factors of Data Modeling :
• Data Objects
• Data Attributes
• Relationship
• Cardinality and Modality
39) Define Swim Lane Diagram?
Swim Lane Diagram :
• Variation of the activity diagram
• Allows the modeler to represent the flow of activities
• And the actor responsible for each activity
40) What are the Selection Characteristic for Classes?
Selection Characteristic for Classes :
• Retained Information
• Needed Services
• Multiple Attributes
• Common Attributes
• Common operations
• Essential Requirements
41) Define Steps in Behavioral Model.
Steps in Behavioral Model :
• Evaluate all use cases
• Identify events
• Create a sequence for each use case
• Build a state diagram
• Review the model for accuracy and consistency
UNIT 3
41) Define the terms in Software Designing :
(a) Abstraction
(b) Modularity
(a) Abstraction :
1. Highest Level : Solution is stated in broad term using
language of problem environment
2. Lower Level : More detailed description of solution is
provided
(b) Modularity :
• Software is divided into separately named and
addressable components, called Modules that are
integrated to satisfy problem requirements
42) How the Architecture Design can be represented?
• Architectural Design can be represented by one or more
different models. They are,
1. Structural Models
2. Framework Models
3. Dynamic Models
4. Process Models
43) What is the Advantage of Information Hiding?
Advantage of Information Hiding :
• During the testing and maintenance phases, if a change is
required, it can be made in a particular module without
affecting other modules
44) What types of Classes does the designer create?
• User interface Classes
• Business Domain Classes
• Process Classes
• Persistent Classes
• System Classes
45) What is Coupling?
Coupling :-
• A quantitative measure of the degree to which classes are
connected to one another
• Coupling should be kept as low as possible
46) What is Cohesion?
Cohesion :
• An indication of the relative functional strength of a module
• A natural extension of information hiding
• A cohesive module performs a single task, requiring little
interaction with other components
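A small Python contrast between low (data) coupling and high (common) coupling — the functions and names are hypothetical, purely for illustration:

```python
# Data coupling (low): the function receives exactly the data it needs
def tax(amount, rate):
    return amount * rate

# Common coupling (high): functions communicate through a shared
# global area, so a change in one can silently break the other
_shared = {"amount": 0.0, "rate": 0.0}

def set_order(amount, rate):
    _shared["amount"], _shared["rate"] = amount, rate

def tax_from_global():
    return _shared["amount"] * _shared["rate"]

set_order(200.0, 0.05)
print(tax(200.0, 0.05) == tax_from_global())  # True — same result, worse coupling
```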
47) Define Refactoring.
Refactoring :
• Changing a software system in a way that does not alter the
external behavior of the code, yet improves its internal
structure
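A tiny illustration: the refactored version below changes structure and naming while the external behavior (the output for any input) is unchanged:

```python
# Before: opaque names, manual accumulation
def p(lst):
    t = 0
    for x in lst:
        if x > 0:
            t = t + x
    return t

# After refactoring: same external behavior, clearer structure
def sum_of_positives(values):
    return sum(v for v in values if v > 0)

print(p([3, -1, 4]) == sum_of_positives([3, -1, 4]))  # True (both give 7)
```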
48) What are the Five Types of Design classes?
Five Types of Design classes :
• User Interface Classes
• Business domain Classes
• Process Classes
• Persistent Classes
• System Classes
49) What are the Different types of Design Model? Explain.
Different types of Design Model :
• Process Dimension :
Indicates the evolution of the design model as design tasks
are executed as part of the software process
• Abstraction Dimension :
Represents the level of detail as each element of the analysis
model is transformed into a design equivalent
50) List out the Different elements of Design Model?
Different Elements of Design Model :
• Data Design Elements
• Architectural Design Elements
• Interface Design Elements
• Component Level Design Elements
• Deployment Level Design Elements
51) What are the Types of Interface Design Elements?
Types of Interface Design Elements :
• User Interfaces
• External Interfaces
• Internal Interfaces
52) What Types of Design Patterns are available for the
software Engineer?
Types of Design Patterns :
• Architectural patterns
• Design Patterns
• Idioms
53) Define Framework.
Framework :
• A code skeleton that can be fleshed out with specific classes
or functionality
• Designed to address the specific problem at hand
54) What is the Objective of Architectural Design?
Objective of Architectural Design :
• Model the overall software structure by representing
component interfaces, dependencies, relationships and
interactions
55) What are the important roles of Conventional component
within the Software Architecture?
• Control Components : coordinate the invocation of all
other problem-domain components
• Problem Domain Components : implement complete or
partial functions required by the customer
• Infrastructure Components : responsible for
functions that support the processing required in the problem
domain
56) What are the Basic Design principles of Class-Based
Components?
Basic Design principles of Class-Based Components :
• Open-Closed Principle[OCP]
• Liskov Substitution Principle[LSP]
• Dependency Inversion Principle[DIP]
• Interface Segregation Principle[ISP]
• Release Reuse Equivalency Principle[REP]
• Common Closure Principle[CCP]
• Common Reuse Principle[CRP]
57)What should we consider when we name components?
• Components
• Interface
• Dependencies and Inheritance
58) What are the Different Types of Cohesion?
Different Types of Cohesion :
• Functional
• Layer
• Communicational
• Sequential
• Procedural
• Temporal
• Utility
59) What are the Different Types of Coupling?
Different Types of Coupling :
• Content Coupling
• Common Coupling
• Control Coupling
• Stamp Coupling
• Data Coupling
• Routine Call Coupling
• Type Use Coupling
• Inclusion or Import Coupling
• External Coupling
60) What is Program Design Language [PDL]?
Program Design Language [PDL] :
• Also called Structured English or Pseudocode
• A pidgin language, in that it uses the vocabulary of one
language and the overall syntax of another
UNIT 4
61) What are the Basic Principles of Software Testing?
Basic Principles of Software Testing :
• All tests should be traceable to customer requirements
• Tests should be planned long before testing begins
• The Pareto principle applies to software testing
• Testing should begin “in the small” and progress toward
testing “in the large”
• Exhaustive testing is not possible
• To be most effective, testing should be conducted by an
independent third party
62) List out the Characteristics of Testability of
Software?
Characteristics of Testability of Software :
• Operability
• Observability
• Controllability
• Decomposability
• Simplicity
• Stability
• Understandability
63) List out various Methods for finding Cyclomatic
Complexity ?
• Number of regions of the flow graph
• Cyclomatic Complexity V(G), for flow graph G :
V(G) = E – N + 2, where E is the number of edges and N the
number of nodes
• Cyclomatic Complexity V(G) :
V(G) = P + 1, where P is the number of predicate nodes
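Both formulas can be checked on a small flow graph — here a loop containing an if/else, with hypothetical node numbering:

```python
def cyclomatic_complexity(edges, num_nodes):
    # V(G) = E - N + 2 for a connected flow graph
    return len(edges) - num_nodes + 2

# Nodes 1..6: 2 is the loop test, 3 is the if/else decision
edges = [(1, 2), (2, 3), (2, 6), (3, 4), (3, 5), (4, 2), (5, 2)]
print(cyclomatic_complexity(edges, 6))   # 7 - 6 + 2 = 3
predicate_nodes = [2, 3]                 # the two decision points
print(len(predicate_nodes) + 1)          # P + 1 = 3, same answer
```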
64) Define Smoke Testing ?
Smoke Testing :
• A type of integration testing
• Commonly used when “shrink-wrapped” software products are
being developed
65)What are the Attributes of Good Test?
Attributes of Good Test :
• High probability of finding errors
• Not Redundant
• “Best of Breed”
• Neither too Simple nor too complex
65) Define White Box Testing.
White Box Testing :
• Also called Glass Box Testing
• Test case design uses Control Structure of Procedural
Design to derive test cases
66) Define Basis Path Testing.
Basis Path Testing :
• A white box testing technique
• Enables the test-case designer to derive a logical complexity
measure of a procedural design
• Uses this measure as a guide for defining a basis set of
execution paths
67) Define the terms :
a) Graph Matrices
b) Connection Matrices
Graph Matrices :-
• The data structure used to develop a software tool for basis
path testing
• A square matrix
• Size equals the number of nodes in the flow graph
Connection Matrices :-
• If link weight = 1 => connection exists
• If link weight = 0 => connection does not exist
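A sketch of a connection matrix and the standard way V(G) is read off it (sum, over each row with connections, of connections minus 1, then add 1); the graph here is hypothetical:

```python
def connection_matrix(num_nodes, links):
    # Entry [i][j] = 1 where a flow-graph link i -> j exists, else 0
    m = [[0] * num_nodes for _ in range(num_nodes)]
    for i, j in links:
        m[i][j] = 1
    return m

m = connection_matrix(4, [(0, 1), (1, 2), (1, 3), (2, 3)])
# Each row with k outgoing links contributes k - 1; add 1 at the end
vg = sum(sum(row) - 1 for row in m if sum(row) > 0) + 1
print(vg)  # 2, matching V(G) = E - N + 2 = 4 - 4 + 2
```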
68) What is Behavioral Testing?
Behavioral Testing :
• Also known as Black Box Testing
• Focuses on the functional requirements of the software
• Enables the software engineer to derive sets of input
conditions that fully exercise all functional
requirements of a program
69) What are the Benefits of conducting Smoke Testing?
Benefits of conducting Smoke Testing :
• Integration Risk is Minimized
• Quality of end-product is improved
• Error diagnosis and Correction are simplified
• Progress is easy to assess
70) What errors are commonly found during Unit Testing?
• Misunderstood or incorrect arithmetic precedence
• Mixed mode operations
• Incorrect initialization
• Precision inaccuracy
• Incorrect symbolic representation of an expression
71) What problems may be encountered when Top-Down
Integration is chosen?
• Many tests are delayed until stubs are replaced with actual
modules
• Alternatively, stubs must be developed that perform limited
functions simulating the actual module
• Or the software must be integrated from the bottom of the
hierarchy upward instead
72) What are the Steps in Bottom-Up Integration?
Steps in Bottom-Up Integration :
• Low-level components are combined into clusters that perform
a specific software sub-function
• A driver is written to coordinate test-case input and output
• The cluster is tested
• Drivers are removed and clusters are combined, moving upward
in the program structure
73) What is Regression Testing?
Regression Testing :
• Re-execution of some subset of tests that have already
been conducted
• Ensures that changes have not propagated unintended side
effects
74) What are the Characteristics of “Critical Module”?
Characteristics of “Critical Module” :
• Addresses several software requirements
• Has High Level Of Control
• Complex or error prone
• Has Definite Performance Requirements
75) What are the Properties of Connection Matrices?
Properties of Connection Matrices :
• Probability that a link will be executed
• Processing time expended during traversal of a link
• Memory required during traversal of a link
• Resources required during traversal of a link
76) What is Flow Graph Notation?
Flow Graph Notation :-
• A simple notation for representing control flow
• Drawn only when the logical structure of a component is
complex
77) Define Cyclomatic Complexity?
Cyclomatic Complexity :-
• A software metric
• A quantitative measure of the logical complexity of a program
• Defines the number of independent paths in the basis set of
a program
78)What is Equivalence Partition?
Equivalence Partitions :-
• Divides the input domain of a program into classes of data
from which test cases can be derived
• An equivalence class is present when a set of objects is
linked by relationships that are symmetric, transitive and
reflexive
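A minimal sketch under a hypothetical spec (the function and the range 18–65 are invented for illustration): each equivalence class is covered by one representative value.

```python
# Hypothetical spec: input is valid when 18 <= age <= 65.
# Three equivalence classes: below range, in range, above range;
# one representative value is tested from each class.
def is_eligible(age):
    return 18 <= age <= 65

representatives = {"below": 10, "in_range": 30, "above": 70}
print({name: is_eligible(v) for name, v in representatives.items()})
# {'below': False, 'in_range': True, 'above': False}
```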
79) List out the possible errors of Black Box Testing?
Errors of Black Box Testing :
• Incorrect or Missing Functions
• Interface Errors
• Errors in Data Structures or external databases
• Behavioral or Performance errors
• Initialization or Termination errors
80) Define Data Objects.
Data Objects :
• Represent composite information
• An external entity, thing, occurrence or event, role,
organizational unit, place or structure
• Encapsulates data only
UNIT 5
81) What are the Components of the Cost of Quality?
Components of the Cost of Quality :
• Prevention Costs
• Appraisal Costs
• Failure Costs
82) What is Software Quality Control?
Software Quality Control :
• Involves a series of inspections, reviews and tests
• Used throughout the software process to ensure each work
product meets the requirements placed upon it
83) What is Software Quality Assurance?
Software Quality Assurance :
• Set of auditing and reporting functions
• Assess effectiveness and completeness of quality control
activities
84) What are the Objective of Formal Technical Reviews?
Objective of Formal Technical Reviews :
• Uncover errors in function, logic or implementation for any
representation of the software
• Verify that the software has been represented according to
predefined standards
• Verify that the software under review meets its requirements
• Achieve software that is developed in a uniform manner
• Make projects more manageable
85) What Steps are required to perform Statistical SQA?
• Information about software defects is collected and
categorized
• An attempt is made to trace each defect to its underlying
cause
• Using the Pareto principle, isolate the 20% of causes
responsible for most of the defects (the “vital few”)
• Once the vital few causes are identified, correct the
problems that cause the defects
86) Define SQA Plan.
SQA Plan :
• Provides a road map for instituting SQA
• The plan serves as a template for the SQA activities
instituted for each software project
87) What is Baseline criteria in SCM ?
• A specification or product that has been formally reviewed
and agreed upon
• Serves as the basis for further development
• Can be changed only through formal change control
procedures
• Helps to control change
88) Define Status Reporting ?
• Also called Configuration Status Reporting
• Is a SCM task that answers
1. What Happened ?
2. Who did it ?
3. When did it happen ?
4. What else will be affected ?
89) What is the Origin of changes that are requested for
software?
Origin Of Change :-
• New Business or Market Condition
• New Customer Needs
• Reorganization or business growth/downsizing
• Budgetary or Scheduling constraints
90) List out the Elements of SCM?
Elements of SCM :-
• Component Elements
• Process Elements
• Construction Elements
• Human Elements
91) What are the Features supported by SCM?
Features supported by SCM :
• Versioning
• Dependency tracking and change Management
• Requirements tracking
• Configuration Management
• Audit trails
92) What are the Objectives of SCM Process?
Objectives of SCM Process :
• Identify all items that collectively define the software
configuration
• Manage changes to one or more of these items
• Facilitate construction of different version of an
application
• Ensure that the software quality is maintained
93) What are the issues to be considered for developing
tactics for WebApp Configuration Management?
• Context
• People
• Scalability
94) Define CASE Tools.
CASE Tools :
• Computer-Aided Software Engineering
• System software that provides automated support for software
process activities
• Includes programs used to support activities
such as requirements analysis, system modeling, debugging
and testing
95) How do we define Software Quality?
Software Quality :
• Conformance to explicitly stated functional and
performance requirements, explicitly documented
development standards
• And implicit characteristics expected of all professionally
developed software
96) Define the terms :
a) Quality of Design
b) Quality of Conformance
Quality of Design :
• Characteristics that designers specify for an item
Quality of Conformance :
• Degree to which design specifications are followed
during manufacturing
97) What are the Type of CASE Tools?
Types of CASE Tools :-
• Upper CASE Tools
• Lower CASE Tools
98) Define Software Reliability?
Software Reliability :
• Probability of failure-free operation of computer program
in a specified environment for a specified time
99) How the Registration process of ISO 9000 certification
is done?
• Registration process of ISO 9000 certification has the
following stages
1. Application
2. Pre-assessment
3. Document Review and Adequacy of audit
4. Compliance Audit
5. Registration
6. Continued Surveillance
100) What are the Factors of Software Quality?
Factors of Software Quality :
• Portability
• Usability
• Reusability
• Correctness
• Maintainability
Short Answers
1. Define Software Engineering
The establishment and use of sound engineering principles in order to obtain economically
software that is reliable and works efficiently on real machines.
2. Differentiate Software engineering methods, tools and procedures.
Methods: Broad array of tasks like project planning, cost estimation, etc.
Tools: Automated or semi automated support for methods.
Procedures : Holds the methods and tools together. It enables the timely development of
computer software.
3. Write the disadvantages of classic life cycle model.
Disadvantages of classic life cycle model :
(i) Real projects rarely follow sequential flow. Iteration always occurs and creates
problem.
(ii) Difficult for the customer to state all requirements
(iii) Working version of the program is not available. So the customer must have patience.
4. What do you mean by task set in spiral Model?
Each of the regions in the spiral model is populated by a set of work tasks, called a task set,
that are adapted to the characteristics of the project to be undertaken.
5. What is the main objective of Win-Win Spiral Model?
The customer and the developer enter into a process of negotiation, where the customer may
be asked to balance functionality, performance and other product attributes against cost and time to market.
6. Which of the software engineering paradigms would be most effective? Why?
Incremental / Spiral model will be most effective.
Reasons:
(i) It combines linear sequential model with iterative nature of prototyping
(ii) Focuses on delivery of product at each increment
(iii)Can be planned to manage technical risks.
7. Who is called as the Stakeholder?
Stakeholder is anyone in the organization who has a direct business interest in the system
or product to be built.
8. Write the objective of project planning ?
It is to provide a framework that enables the manager to make reasonable estimates of
resources, cost and schedule.
9. What is Boot Strapping?
A sequence of instructions whose execution causes additional instructions to be loaded and
executed until the complete program is in storage.
10. Write a short note on 4GT.
Fourth Generation Technique. 4GT encompasses a broad array of software tools. Each tool
enables the software developer to specify some characteristics of software at a higher level.
11. What is FP ? How it is used for project estimation ?
Function Point. It is used as the estimation variable to size each element of the software. It
requires considerably less detail. It is estimated indirectly by estimating the number of inputs,
outputs, data files and external interfaces.
12. What is LOC ? How it is used for project estimation?
LOC : Lines of Code. It is used as the estimation variable to size each element of the software. It
requires a considerable level of detail.
13. Write the formula to calculate the effort in persons-months used in Dynamic multi variable
Model?
Software Equation : E = [LOC × B^0.333 / P]^3 × (1 / t^4), where E is effort in person-months, t is
project duration, B is a special-skills factor, and P is a productivity parameter.
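The equation can be evaluated directly; the parameter values below are illustrative only, not taken from the notes:

```python
def software_equation_effort(loc, b, p, t):
    # E = [LOC * B**0.333 / P]**3 * (1 / t**4)
    return (loc * b ** 0.333 / p) ** 3 * (1 / t ** 4)

# Illustrative: 33,200 LOC, special-skills factor 0.28,
# productivity parameter 12,000, duration t = 1.3
effort = software_equation_effort(33_200, 0.28, 12_000, 1.3)
print(round(effort, 2))
```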
14. What is called object points?
It is an indirect software measure that is computed using counts of the number of screens,
reports and components.
15. What are the four different Degrees of Rigor ?
Four different degrees of Rigor are
Casual
Structured
Strict
Quick reaction
16. Write about Democratic Teams in software development. (Egoless Team)
It is egoless team. All team members participate in all decisions. Group leadership rotates
from member to member based on tasks to be performed.
17. What are the two project scheduling methods ?
PERT- Program Evaluation and Review Techniques
CPM- Critical Path Method
18. What is called support risk?
The degree of uncertainty that the resultant software will be easy to correct , adapt and
enhance.
19. What is RMMM?
Risk Mitigation, Monitoring and Management Plan. It is also called Risk Aversion.
20. What are four impacts of the project risk?
Catastrophic, Critical, Marginal, Negligible.
21. List the tools or methods available for rapid prototyping.
Rapid prototyping (Speed)
(i) 4GT
(ii) Reusable software components
(iii) Formal specification and prototyping environments.
22. What is the need for modularity ?
Need for modularity: Easier to solve a complex problem. Can achieve reusability. Overall effort
and complexity are reduced.
23. What are the five criteria that are used in modularity?
Modular Decomposability
Modular composability
Modular understandability
Modular continuity
Modular protection
24. What is Software Architecture?
The overall structure of the software and the ways in which that software provides conceptual
integrity for the system.
25. What are the models are used for Architectural design?
Structural models
Framework models
Dynamic models
Process models
Functional models
26. What is cohesion?
It is a measure of the relative functional strength of a module. (Binding)
27. What is Coupling?
Measure of the relative interdependence among modules.
(Measure of interconnection among modules in a software structure.)
28. List the coupling factors.
Interface complexity between modules
Reference to the module
Data pass across the interface.
29. Define Stamp coupling.
When a portion of a data structure is passed via the module interface, then it is called
stamp coupling.
30. Define common coupling.
When a number of modules reference a global data area, then the coupling is called
common coupling.
31. Define temporal cohesion.
When a module contains tasks that are related by the fact that all must be executed within the
same span of time, then it is termed temporal cohesion.
32. Write a short note on structure charts.
These are used in architectural design to document hierarchical structure, parameters and
interconnections in a system. No Decision box . The chart can be augmented with module by
module specifications of I/P and O/P parameters as well as I/P and O/P attributes.
33. What do you mean by factoring?
It is also called vertical partitioning. It follows a top-down strategy: there
are some top-level modules and low-level modules.
Top-level modules ---- Control functions; little actual processing work
Low-level modules ---- Workers, performing all input, computation and
output tasks.
34. What is Aesthetics?
Aesthetics : It is a science of art and beauty. These are fundamental to software design,
whether in art or technology.
Simplicity, Elegance(refinement), clarity of purpose.
35. What do you mean by common coupling?
Common coupling : When a number of modules reference a global data area , then the
coupling is called common coupling.
36. Write about Real Time Systems.
It provides a specified amount of computation within fixed time intervals. An RTS senses and
controls external devices, responds to external events and shares processing time between tasks.
37. Define Distributed system .
It consists of a collection of nearly autonomous processors that communicate to achieve a
coherent computing system.
38. Compare Data Flow Oriented Design with data structure oriented design
Data flow oriented design : Used to represent a system or software at any level of
abstraction.
Data Structure oriented design : It is used for representing information hierarchy using the
three constructs for sequence, selection and repetition.
39. Define Architectural Design and Data Design.
Architectural Design : To develop a modular program structure and represent the relationships
between modules.
Data Design : To select the logical representations of data objects , data storage and the
concepts of information hiding and data abstraction.
40. What are the contents of HIPO diagrams?
Visual table of contents, set of overview diagrams, set of detail diagrams.
41. What are the aspects of software reuse.
Software development with reuse
Software development for reuse
Generator based reuse
Application system reuse
42. Define Configuration Status Reporting .
What happened ? Who did it?
When it happened? What else will be affected?
It is also called status accounting.
43. What is the need for baseline?
Need for Baseline :
(i) Basis for further development
(ii) Uses formal change control procedure for change
(iii) Helps to control change
44. Define SCM.
It is an umbrella activity that is applied throughout software process. It has a set of tracking
and control activities that begin when a software engineering project begins and terminates
only when the software project is taken out of operation.
45. List the SCM Activities.
(i) Identify a change
(ii) Control change
(iii)Ensure that change is being properly implemented
(iv)Report changes to others who may have an interest
46. What is meant by software reusability?
A software component should be designed and implemented so that it can be reused in many
different programs.
47. What is CASE ?
CASE : Computer Aided Software Engineering
CASE provides the engineer with the ability to automate manual activities and to improve engineering
insight.
48. Write the distinction between SCM and software support.
SCM : It has a set of tracking and control activities that begin when a software engineering
project begins and terminates only when the software project is taken out of operation.
Software support : It has a set of software engineering activities that occur after software has
been delivered to the customer and put into operation.
49. What is the difference between basic objects and aggregate objects used in software configuration?
Basic Objects : Represent a unit of text. E.g. a section of a requirement specification, a source
listing for a component.
Aggregate Objects : A collection of basic objects and other aggregate objects. E.g. a full
design specification.
50. What is Configuration Audit?
Has the change specified in the ECO been made?
Has a formal technical review been conducted?
Have the software engineering procedures for noting the change, recording it, and reporting it been
followed?
Has the SCI been updated?
Essay Type Questions (in Brief)
51. Explain Linear Sequential Model and prototyping model in detail
Linear Sequential Model :
Explanation, Diagram , Advantages, Disadvantages
Prototyping model:
Explanation, Diagram, Advantages, Disadvantages
52. Explain Spiral model and win-win spiral model in detail.
Spiral Model :
Six Task Regions : Customer Communication
Planning
Risk Analysis
Engineering
Construction and Release
Customer Evaluation
Diagram , Details of four circles
Win-Win spiral model:
The customer and the developer enter into a process of negotiation, where the
customer may be asked to balance functionality, performance, and other product attributes against
cost and time to market.
Activities, diagram, explanation
53. Explain incremental model in detail
Explanation of increments in the stages of
Analysis, Design, Code, Test.
54. Discuss about fourth generation techniques.
4GT :
It encompasses a broad array of software tools. Each tool enables the software developer to
specify some characteristics of software at a higher level.
Explanations of : 4GT Tools
4GT Paradigm
Current state of 4GT approaches
55. Explain the Activities of Project Planning
Software scope with an example (Conveyor Line Sorting System)
Resources
Hardware/ Software Tools
56. Explain the cost estimation procedure using COCOMO Model.
It is an algorithmic cost model (one of the empirical estimation models).
COCOMO Model: 10 steps
3 different sizing options
Explanation
57. Explain the following:
(i) Delphi Cost Estimation
(ii) Putnam Estimation model
(iii) Decomposition approach
Ans :
(i) Delphi cost estimation
Procedures to calculate
(ii) Putnam estimation model (Dynamic multi variable model)
Explanation of the software equation
(iii) Decomposition approach
Write an algorithm
58. Explain the organizational structure of the software development.
Explanations of
Project structure
Programming team structure
Management by objectives.
59. Explain the process of ‘ Risk Analysis and Management.’
Risk Identification
Risk Estimation
Risk Assessment
Risk Management and Monitoring
Risk Refinement
60. Explain the following (i) Software requirement specification.
(ii) Specification Review
Ans :
(i) Software Requirement Specification :
Information Description
Functional Description
Behavioral Description
Validation criteria
Bibliography and appendix
Preliminary user’s manual
(ii)Specification Review : Explanation
61. Explain the types of coupling and cohesion.
Coupling : Measure of the relative interdependence among modules.
Types: Data coupling , Stamp coupling, control coupling, External coupling,
Common coupling, Content coupling
Cohesion : It is a measure of the relative functional strength of a module.
Types: Coincidentally cohesive, Logically cohesive, Temporal cohesion,
procedureal cohesion, communicational cohesion, High cohesion, sequential cohesion.
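The two extremes of the coupling scale can be sketched in a few lines. This is an illustrative sketch only; the payroll functions below are hypothetical and not from the text. Data coupling passes only the simple values a module needs, while common coupling lets modules share a global structure that any of them can read and write.

```python
# Data coupling (loose, desirable): modules exchange only simple parameters.
def compute_net_pay(gross: float, tax_rate: float) -> float:
    return gross * (1.0 - tax_rate)

# Common coupling (tight, undesirable): modules share a global structure,
# so a change in one module can silently affect every module that touches it.
payroll_state = {"gross": 0.0, "tax_rate": 0.0}

def set_gross(amount: float) -> None:
    payroll_state["gross"] = amount

def net_pay_from_globals() -> float:
    return payroll_state["gross"] * (1.0 - payroll_state["tax_rate"])
```

In the list above, data coupling is the loosest form and content coupling the tightest; a good design prefers the former.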
62. Explain the various software design concepts
Explanations of Abstraction, Refinement, Modularity, Software Architecture, Control
hierarchy, Structural partitioning, Data structure, Software procedure, Information hiding,
Verification, Aesthetics.
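Information hiding, one of the concepts listed above, can be shown in a minimal sketch (a hypothetical Stack class, assuming Python): clients depend only on push/pop, so the hidden list representation could later be swapped, say for a linked list, without touching any caller.

```python
class Stack:
    """Clients see only the operations, never the representation."""

    def __init__(self):
        self._items = []          # hidden implementation detail

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items
```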
63. Explain Software Design Documentation in detail.
Design Documentation :
(Explanation of the following items and sub items )
Scope
Reference Documents
Design Description
Modules
File Structure and global data
Requirements Cross Reference
Test provisions
Packaging
Special Notes
Appendices
64. Discuss the design procedure for Real time and distributed system software.
Real Time and distributed system design :
Real Time systems : It must provide specified amounts of computation within fixed time intervals. (Explanation)
Distributed system : It consists of a collection of nearly autonomous processors that
communicate to achieve a coherent computing system.
(Explanation)
65. Explain Jackson system development with an example.
Steps are : Entity Action step
Entity Structure step
Initial modeling step
Function step
System Timing step
Implementation step
Example : University with two campuses.
66. Explain Software Design Notations
Explanations of
Data Flow diagram , Structure charts, HIPO diagrams, procedure template, pseudocode,
structured flow chart, Structured English, Decision tables.
67. Explain Data Flow Oriented design in detail.
The objective of this design is to provide a systematic approach for the derivation of program structure.
Design and information flow
Design process considerations
(At least one of the following with an example)
Transform flow and analysis
Transaction flow and analysis
68. Explain programming standards in detail
Explanation of all standards.
69. What is software reuse? Explain the various aspects of software reuse.
A software component should be designed and implemented so that it can be reused in many
different programs.
Explanation of Aspects :
Software development with reuse
Software development for reuse
Generator based reuse
Application system reuse
70. Describe the various software configuration management tasks in detail.
Brief explanations of
SCM Definition
Activities
Process
Baselines
Software Configuration Items
Identification of objects
Version control
Change control
Configuration Audit
Status reporting
71. Write notes on Version Control and Change control
Version control : Description
Representations : (Evolution graph, Object Pool)
Change control : Description
Process of change control
72. What are CASE tools and their usage in Software Engineering ? Discuss each tool in brief.
Business process Engineering tools
Process modeling and management tools
Project planning tools
Risk Analysis tools
Project management tools
Requirements tracing tools
Documentation tools
System software tools
Quality Assurance tools
Database management tools
Software configuration management tools
Analysis and design tools
PRO/SIM tools
Interface design and development tools
Prototyping tools
Programming tools
Web development tools
Integration and testing tools
Static Analysis tools
Dynamic analysis tools
Test management tools
Client/Server testing tools
Re-Engineering tools
73. Explain Integrated CASE Environment in detail.
Explanations of
Integrated CASE Environment
Benefits
Integration Architecture
74. Explain CASE repository in detail
Definition
Functions
Features and content
DBMS features.
Special features of CASE
Repository features.
75. Explain Building blocks for CASE
CASE Tools
Integrated framework
Portability services
Operating system
Hardware platform
Environment Architecture
1. Q: Define the term Software Engineering. How is it different from Computer System
Engineering? [B.E. 2007C, 2008]
OR
Define the term “Software Engineering” and distinguish it from Computer Science. [B.E. 2008C]
Answer:
Software engineering is an engineering approach for software development. We can alternatively view it as a systematic collection of past experience, arranged in the form of methodologies and guidelines. Software engineering discusses systematic, cost-effective, and efficient techniques for software development. Alternatively, we can define software engineering as “a discipline whose aim is the production of quality software: software that is delivered on time, within budget, and that satisfies its requirements.”
In general, we assume that the software being developed will run on some general-purpose hardware platform such as a desktop computer. But in several situations it may be necessary to develop special hardware on which the software will run. Computer systems engineering addresses the development of such systems, requiring development of both software and the specific hardware to run the software. Thus computer systems engineering encompasses software engineering.
2. Q: Identify the two important techniques that software engineering uses to tackle the problem of exponential growth of problem complexity with its size.
Answer:
Software engineering principles use two important techniques to reduce problem complexity: abstraction and decomposition. Abstraction simplifies a problem by omitting irrelevant details, while decomposition breaks the problem into smaller parts that can be solved independently and then combined. In other words, a good decomposition, as shown in fig.1.5, should minimize interactions among the various components.
3. Q: Identify at least two advantages of using high-level languages over assembly languages.
Answer:
Assembly language programs are limited to about a few hundred lines of code, i.e., they are very small in size. With assembly, every programmer develops programs in his own individual style, based on intuition; this type of programming is called exploratory programming. Use of a high-level programming language, by contrast, reduces development effort and development time significantly. Languages like FORTRAN, ALGOL, and COBOL are examples of high-level programming languages.
4. Q: State at least two basic differences between control flow-oriented and data flow-oriented design techniques.
Answer:
Control flow-oriented design deals with carefully designing the program’s control structure. A program’s control structure refers to the sequence in which the program’s instructions are executed, i.e., the control flow of the program. Data flow-oriented design, in contrast, identifies:
• Different processing stations (functions) in a system
• The data items that flow between processing stations
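The data flow-oriented view can be sketched as a small pipeline (hypothetical functions, assuming Python): each function is a "processing station," and the data items flow from one station to the next.

```python
def read_raw(text: str) -> list[str]:
    # Station 1: split the raw input into fields
    return text.split(",")

def validate(fields: list[str]) -> list[int]:
    # Station 2: keep non-blank fields and convert them to integers
    return [int(f) for f in fields if f.strip()]

def summarize(values: list[int]) -> int:
    # Station 3: produce the output data item
    return sum(values)

def pipeline(text: str) -> int:
    # The flow of data: raw text -> fields -> values -> summary
    return summarize(validate(read_raw(text)))
```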
5. Q: State at least five advantages of object-oriented design techniques.
Answer:
Object-oriented techniques have gained wide acceptance because of their:
· Simplicity (due to abstraction)
· Code and design reuse
· Improved productivity
· Better understandability
· Better problem decomposition
· Easy maintenance
6. Q: Differentiate between a program and a Software Product. [2009C]
Answer:
Programs are developed by individuals for their personal use. They are therefore small in size and have limited functionality, whereas software products are extremely large. In the case of a program, the programmer himself is the sole user; in the case of a software product, most users are not involved with the development. A program involves a single developer, whereas a software product involves a large number of developers. For a program, the user interface may not be very important, because the programmer is the sole user. For a software product, on the other hand, the user interface must be carefully designed and implemented, because the developers and the users of the product are totally different people. For a program, very little documentation is expected, but a software product must be well documented. A program can be developed according to the programmer’s individual style of development, but a software product must be developed using accepted software engineering principles.
7. Q: What is the software crisis? Give the problems of the Software Crisis. [2008]
Answer:
The software crisis has been with us since 1970. Since then, the computer industry has progressed at break-neck speed through the computer revolution and, recently, the network revolution triggered and/or accelerated by the explosive spread of the internet and, most recently, the web. While the computer industry has been delivering exponential improvement in price-performance, the problems with software have not been decreasing. Within that period, the software industry unsuccessfully attempted to build larger and larger software products with simply the existing development techniques. Many factors have contributed to the making of the present software crisis: larger problem sizes, lack of adequate training in software engineering, increasing skill shortage, and low productivity improvements. So basically we can define the problems of the software crisis as follows:
· Poor-quality software production
· Development team exceeds the budget
· Late delivery of software
· User requirements not completely supported by the software
· Unreliable software
· High cost of maintenance
8. Q: Illustrate the terms Structured Programming and Unstructured Programming. [2008]
OR
What is structured programming? What are its advantages? [2005]
Answer:
A structured program has two distinct properties. First, a structured program uses three types of program constructs, i.e., sequence, selection, and iteration. Structured programs avoid unstructured control flows by restricting the use of GOTO statements. Secondly, a structured program consists of a well-partitioned set of modules. Structured programming uses single-entry, single-exit program constructs such as if-then-else, do-while, etc. Thus, the structured programming principle emphasizes designing neat control structures for programs.
Unstructured programming is the programming style where the control flow is unstructured because it uses GOTO statements.
The advantages of structured programs are:
· Structured programs are easier to read and understand.
· Structured programs are easier to maintain.
· They require less effort and time for development.
· They are amenable to easier debugging, and usually fewer errors are made in the course of writing such programs.
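The three structured constructs can be sketched in a minimal example (a hypothetical score-classifying function, assuming Python): each construct has a single entry and a single exit, and no GOTO-style jumps are needed.

```python
def classify_scores(scores):
    passed = 0                   # sequence: statements run one after another
    failed = 0
    for s in scores:             # iteration: a single-entry, single-exit loop
        if s >= 40:              # selection: if-then-else
            passed += 1
        else:
            failed += 1
    return passed, failed        # single exit point for the whole function
```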
9. Q: What are the phase Exit and Entry Criteria of the software development process?
Answer:
A software development life cycle has distinct development phases such as: Feasibility Study, Requirements Analysis and Specification, Design, Coding and Unit Testing, Integration and System Testing, and Maintenance. The phase entry and exit criteria mean that each phase has strict rules for entering and exiting it, failing which no one is allowed to enter or exit that phase. For example:
· At the start of the feasibility study, project managers or team leaders try to understand the actual problem by visiting the client side. At the end of that phase, they pick the best solution and determine whether the solution is feasible financially and technically.
· At the start of the requirements analysis and specification phase, the required data is collected. After that, requirement specification is carried out. Finally, the SRS document is produced.
Similar entry and exit criteria are followed for the other phases.
10. Q: What is phase containment of errors? [2005, 2010]
Answer:
Phase containment of errors means detecting and correcting errors as soon as possible. It is an important software engineering principle. A software development life cycle has distinct development phases, and phase containment of errors means detecting and correcting each error within the phase in which it is introduced. That is, a design error should be detected and corrected within the design phase itself rather than being detected in the coding phase. To achieve phase containment of errors, periodic reviews have to be conducted.
11. Q: What do you mean by the Exploratory Style of Programming?
Answer:
The exploratory style of programming is a very informal style of program development, with no set rules or recommendations. Every programmer evolves his own software development techniques, solely guided by his intuition, experience, whims, and fancies. The exploratory style of programming is possible only for small-size software, where the problem domain is initially not clear.
12. Q: What are the notable changes made by Software Engineering over the Exploratory Style of Programming?
Answer:
The notable changes are:
· An important difference is that the exploratory software development style is based on error correction, while software engineering principles are primarily based on error prevention.
· In the exploratory style, coding was considered synonymous with software development. For instance, the exploratory programming style believed in developing a working system as quickly as possible and then successively modifying it until it performed satisfactorily. In the modern software development style, coding is regarded as only a small part of the overall software development activities. There are several development activities, such as design and testing, which typically require much more effort than coding.
· A lot of attention is being paid to requirements specification. Significant effort is now devoted to developing a clear specification of the problem before any development activity is started.
· There is now a distinct design phase where standard design techniques are employed.
· Periodic reviews are carried out during all stages of the development process.
· There is better visibility of design and code. By visibility we mean the production of good-quality, consistent, and standard documents during every phase.
· Projects are now first thoroughly planned. Project planning normally includes preparation of various types of estimates, resource scheduling, and development of project tracking plans.
· Several metrics are being used to help in software project management and software quality assurance.
13. Q: Differentiate between Software Process and Software Development Life Cycle. [2006, 2008]
OR
What do you understand by Software Process? Is it similar to the Software Development Life Cycle? [2005]
Answer:
The Software Process and the Software Development Life Cycle are not the same; there are some slight differences. A software life cycle model (also called a process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken. In other words, a life cycle model maps the different activities performed on a software product from its inception to retirement. A Software Process, also termed a Software Process Model, is the methodology and process followed within the software life cycle; it covers only a single, or at best a few, individual activities involved in the development (for example, a testing methodology or a design methodology). So a software life cycle is, in a nutshell, a superset of the software process model.
14. Q: Explain the problems that might be faced by an organization if it does not follow any software life cycle model.
Answer:
The development team must identify a suitable life cycle model for the particular project and then adhere to it. Without a particular life cycle model, the development of a software product would not proceed in a systematic and disciplined manner. When a software product is being developed by a team, there must be a clear understanding among team members about when and what to do; otherwise it would lead to chaos and project failure. This problem can be illustrated with an example. Suppose a software development problem is divided into several parts and the parts are assigned to the team members. From then on, suppose the team members are allowed the freedom to develop the parts assigned to them in whatever way they like. It is possible that one member might start writing the code for his part, another might decide to prepare the test documents first, and some other engineer might begin with the design phase of the parts assigned to him. This would be one of the perfect recipes for project failure.
15. Q: What is a Software Process? Why and how does a software process not improve? [2003]
OR
What is a Software Process? What elements can prevent software from improving?
Answer:
A Software Process is also termed a Software Process Model. The software process model is the methodology and process followed within the software life cycle. It covers only a single, or at best a few, individual activities involved in the development, for example a testing methodology or a design methodology.
The following can prevent software from improving:
· Imperfect/unclear requirements analysis and specification
· Improper planning
· Wrong estimation of size, cost, and effort
· Incorrect/partial design of the problem domain
· Manpower turnover problem
· Wrong decisions in scheduling
· Immature project staffing
· Lack of knowledge of the developer team in the technical area
· Use of non-standard testing methodology
· Missing or incomplete documentation
16. Q: What do you understand by the expression “Life Cycle Model of Software Development”? Why is it important to adhere to a life cycle model during the development of a large software product? [2008C]
Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called a software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.
Why it is important to adhere to a life cycle model during the development of a large software product:
Software engineering is an engineering approach for software development. We can alternatively view it as a systematic collection of past experience, arranged in the form of methodologies and guidelines. A small program can be written without using software engineering principles. But if one wants to develop a large software product, then software engineering principles are indispensable to achieve good-quality software cost-effectively. Software engineering principles use two important techniques to reduce problem complexity: abstraction and decomposition.
The principle of abstraction (in fig.1.4) implies that a problem can be simplified by omitting irrelevant details. Once the simpler problem is solved, the omitted details can be taken into consideration to solve the next lower level of abstraction. With decomposition, any random splitting of a problem into smaller parts will not help. The problem has to be decomposed such that each component of the decomposed problem can be solved in isolation, and then the solutions of the different components can be combined to obtain the full solution. In other words, a good decomposition, as shown in fig.1.5, should minimize interactions among the various components.
17. Q: Describe the various types of software and their application domains, together with their special significance. [2008C]
Answer:
Software has become an integral part of most fields of human life. Software applications are grouped into areas for convenience as shown in the figure:
· System Software: Infrastructure software comes under this category, like compilers, operating systems, editors, drivers, etc. Basically, system software is a collection of programs that provide services to other programs.
· Real-time Software: This software is used to monitor, control, and analyze real-world events as they occur. An example is software required for weather forecasting. Such software will gather and process the status of temperature, humidity, and other environmental parameters to forecast the weather.
· Embedded Software: This type of software is placed in the “ROM” of a product and controls various functions of the product. The product could be an aircraft, automobile, security system, signaling system, etc.
· Personal Computer Software: The software used in personal computers is covered in this category. Examples are word processors, database management, and account management software.
· Artificial Intelligence Software: Examples are expert systems, artificial neural networks, etc.
· Web-based Software: Examples are CGI, HTML, Java, and Perl.
· Engineering and Scientific Software: Examples are MATLAB, CAD/CAM packages, etc.
18. Q: What do you mean by Software Process? What problems will a software development house face if it does not follow any systematic process in its software development efforts? [2009C]
Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called a software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.
The problems that a software development house faces if it does not follow any systematic process in its software development efforts are as follows:
· Poor-quality software production
· Often fails to reach the required software goal
· Development team exceeds the budget
· Cannot handle the manpower turnover problem
· Late delivery of software
· User requirements not completely supported by the software
· Unreliable software production
· High cost of maintenance
19. Q: What do you mean by Software Life Cycle? Describe the Waterfall Model. Give its advantages and disadvantages.
OR
How does the software life cycle provide information about the software? Explain the waterfall life cycle model. [2003]
OR
What are the limitations of the waterfall model? When is this model useful? [2008]
Answer:
A software life cycle is a series of identifiable stages that a software product undergoes during its lifetime. A software life cycle model (also called a software process model) is a descriptive and diagrammatic representation of the software life cycle. A life cycle model represents all the activities required to make a software product transit through its life cycle phases. It also captures the order in which these activities are to be undertaken.
The waterfall life cycle model is divided into two classes:
· Classical Waterfall model
· Iterative Waterfall model
The classical waterfall model is intuitively the most obvious way to develop software. The classical waterfall model divides the life cycle into the following phases, as shown in fig.2.1:
· Feasibility Study
· Requirements Analysis and Specification
· Design
· Coding and Unit Testing
· Integration and System Testing
· Maintenance
The Iterative Waterfall model follows the same stages, but feedback paths are available to the preceding stages.
Activities in each phase of the life cycle
Activities undertaken during feasibility study: The main aim of the feasibility study is to determine whether it would be financially and technically feasible to develop the product.
· At first, project managers or team leaders try to have a rough understanding of what is required to be done by visiting the client side. They study the different input data to the system and the output data to be produced by the system.
· After they have an overall understanding of the problem, they investigate the different solutions that are possible.
· Then they pick the best solution and determine whether the solution is feasible financially and technically.
Activities undertaken during requirements analysis and specification: The aim of the requirements analysis and specification phase is to understand the exact requirements of the customer and to document them properly. This phase consists of two distinct activities, namely
· Requirements gathering and analysis, and
· Requirements specification
Activities undertaken during design: The goal of the design phase is to transform the requirements specified in the SRS document into a structure that is suitable for implementation in some programming language. In technical terms, during the design phase the software architecture is derived from the SRS document. Two distinctly different approaches are available: the traditional design approach and the object-oriented design approach.
· Traditional design approach
Traditional design consists of two different activities. First, a structured analysis of the requirements specification is carried out, where the detailed structure of the problem is examined. This is followed by a structured design activity, during which the results of structured analysis are transformed into the software design.
· Object-oriented design approach
In this technique, the various objects that occur in the problem domain and the solution domain are first identified, and the different relationships that exist among these objects are identified. The object structure is further refined to obtain the detailed design.
Activities undertaken during coding and unit testing: The purpose of the coding and unit testing phase (sometimes called the implementation phase) of software development is to translate the software design into source code. Each component of the design is implemented as a program module. The end product of this phase is a set of program modules that have been individually tested. During this phase, each module is unit tested to determine the correct working of all the individual modules. This involves testing each module in isolation, as this is the most efficient way to debug the errors identified at this stage.
Activities undertaken during integration and system testing: Integration of the different modules is undertaken once they have been coded and unit tested. During the integration and system testing phase, the modules are integrated in a planned manner. The different modules making up a software product are almost never integrated in one shot; integration is normally carried out incrementally over a number of steps. During each integration step, the partially integrated system is tested, and a set of previously planned modules is added to it. Finally, when all the modules have been successfully integrated and tested, system testing is carried out. The goal of system testing is to ensure that the developed system conforms to the requirements laid out in the SRS document.
System testing usually consists of three different kinds of
testing activities:
§ α – testing:
It is the system testing performed by the development team.
§ β – testing:
It is the system testing performed by a friendly set of customers.
§ acceptance testing:
It is the system testing performed by the customer himself after the product
delivery to determine whether to accept or reject the delivered product.
Activities undertaken during maintenance: Maintenance of a typical software product requires much more effort than the effort necessary to develop the product itself. Many studies carried out in the past confirm this and indicate that the relative effort of development of a typical software product to its maintenance effort is roughly in a 40:60 ratio. Maintenance involves performing any one or more of the following three kinds of activities:
· Correcting errors that were not discovered during the product development phase. This is called corrective maintenance.
· Improving the implementation of the system and enhancing the functionalities of the system according to the customer’s requirements. This is called perfective maintenance.
· Porting the software to work in a new environment. For example, porting may be required to get the software to work on a new computer platform or with a new operating system. This is called adaptive maintenance.
The advantages of the Classical Waterfall model are:
· It follows a rigid structure.
· If we follow it, we can develop an error-free software product.
The disadvantages of the Classical Waterfall model are:
· It is difficult to define all requirements at the beginning of a project.
· A working version of the system is not seen until late in the project.
· It does not scale up well to large projects.
· Real projects are rarely sequential.
The advantages of the Iterative Waterfall model are:
· It does not follow a rigid structure; the feedback paths allow errors to be corrected in the preceding phases.
The disadvantages of the Iterative Waterfall model are:
· It is difficult to define all requirements at the beginning of a project.
· A working version of the system is not seen until late in the project.
· It does not scale up well to large projects.
· Real projects are rarely sequential.
20. Q: Describe the Prototyping Life Cycle Model. Give its advantages and disadvantages.
OR
What is a prototype? Is it always beneficial to construct a prototype model? Does the construction of a prototype model always increase the overall cost of software development? Justify your answer. [2006, 2008]
OR
What is a prototype? When do we need to develop a prototype? [2008]
Answer:
A prototype is a toy implementation of the system. A prototype usually exhibits limited functional capabilities, low reliability, and inefficient performance compared to the actual software. A prototype is usually built using several shortcuts. The shortcuts might involve using inefficient, inaccurate, or dummy functions. The shortcut implementation of a function, for example, may produce the desired results by using a table look-up instead of performing the actual computations. A prototype usually turns out to be a very crude version of the actual system. This model divides the life cycle of a software development process into the phases shown below.
There are several uses of a prototype. An important purpose is to illustrate the input data formats, messages, reports, and the interactive dialogues to the customer. This is a valuable mechanism for gaining a better understanding of the customer’s needs:
• How the screens might look
• How the user interface would behave
• How the system would produce outputs
Advantages:
- A partial product is built in the initial stages. So customers get a chance to see the product early in the life cycle and thus give necessary feedback.
- Requirements become clearer, resulting in a more accurate product.
- New requirements can be easily accommodated.
- Flexibility in design and development is also supported by the model.
Disadvantages:
- Developers in a hurry may build prototypes and end up with sub-optimal solution
- After seeing the early prototype the users may demand the actual system to be delivered soon.
- If the end user is not satisfied with the initial prototype, he may lose interest in the project.
- Poor documentation
No, it is not always beneficial to construct a prototype. If the technical solution is already clear, building a prototype only consumes additional time. It is also not useful for very large projects.
Yes, the construction of a prototype model always increases the overall cost of software development, because building the prototype itself requires effort, time, and money.
21. Q: What is Evolutionary Model? Describe its
merits and demerits.
OR
What
do you mean by Software Life Cycle? Describe Incremental Model. [2003]
Answer:
A
software life cycle is a series of identifiable stages that a software product
undergoes during its lifetime. A software life cycle model (also called Software
process model) is a descriptive and diagrammatic representation of the software
life cycle. A life cycle model represents all the activities required to make a
software product transit through its life cycle phases. It also captures the
order in which these activities are to be undertaken.
This model is also known as the successive versions model and is sometimes termed the Incremental model. In this model, the software is first broken down into several modules which can be incrementally constructed and delivered. The development team first develops the core module of the system. This initial product skeleton is refined into increasing levels of capability by adding new functionalities in successive versions. Each evolutionary version may be developed using an iterative waterfall model of development.
This model
divides the life cycle of a software development process into the phases as
shown below:-
Here A, B, and C are modules of a software product that are incrementally developed and delivered.
Advantages:
- Early delivery of portions of the system even though some of the requirements are not yet decided.
- The core modules get tested thoroughly, thereby reducing chances of errors in the final product
Disadvantages:
- For most practical problems, it is difficult to subdivide the problem into several functional units that can be incrementally implemented and delivered
- Model can be used only for very large problems, where it is easier to identify modules for incremental implementation
22. Q: What is a Meta
Model? List the merits and demerits of Meta
Model [2006]
OR
Why the spiral life cycle model is
considered to be a Meta model? [08C, 09C]
OR
Clearly
explain in brief why spiral model is called Meta
model? [2010]
Answer:
The diagrammatic representation of this model appears like a spiral. The exact number of loops is not fixed. Each phase in this model is divided into four sectors (quadrants). The first quadrant identifies the objectives of the phase and the alternative solutions possible for the phase under consideration. In the second quadrant, the alternative solutions are evaluated to select the best possible solution. For the chosen solution, the potential risks are identified and dealt with by developing an appropriate prototype. A risk is essentially any adverse circumstance that might hamper the successful completion of the software. The third quadrant consists of developing and verifying the next level of the product. The fourth quadrant consists of reviewing and planning the next phase.
The spiral life cycle model is called a Meta model since it encompasses all other life cycle models. However, this model is much more complex than the other models.
23. Q: Compare the Different Life Cycle Models.
OR
Distinguish between Waterfall model and
Spiral Model [2008C]
Answer:
Comparison of
Different Life Cycle Model
The Classical
Waterfall model can be considered as the basic model and all other life
cycle model as embellishment of this model. However, Classical Waterfall model
cannot be used in practical development of the project. This problem is
overcome in the iterative waterfall model. The iterative waterfall model is the most widely used model. It is simple to understand and use.
However, this model is suitable only for well-understood problems; it is not
suitable for very large projects and for projects that are subject to many
risks. The prototyping model is suitable for projects for which either
the user requirements or the underlying aspects are not well understood. This
model is especially popular for development of the user-interface part of the
projects. The evolutionary approach is suitable for large problems which
can be decomposed into a set of modules for incremental development and
delivery. This model is also widely used for object-oriented development projects.
Spiral model is called a Meta model since it encompasses all other life cycle models. Risk handling is inherently built into this model.
24. Q: What is software development life cycle?
Explain different Software development life cycle with their relative merits
and demerits. [2007C]
Answer:
A
software life cycle is a series of identifiable stages that a software product
undergoes during its lifetime. A software life cycle model (also called Software
process model) is a descriptive and diagrammatic representation of the software
life cycle. A life cycle model represents all the activities required to make a
software product transit through its life cycle phases. It also captures the
order in which these activities are to be undertaken.
Many life cycle models have been proposed so far. Each of
them has some advantages as well as some disadvantages. A few important and
commonly used life cycle models are as follows:
· Classical Waterfall Model
· Iterative Waterfall Model
· Prototyping Model
· Evolutionary Model
· Spiral Model
WRITE DOWN BRIEFLY THE ARCHITECTURE, DESCRIPTION, MERITS & DEMERITS OF ALL LIFE CYCLE MODELS.
25. Q: List the major responsibilities of a
software project manager. [2007C]
Answer:
The
major responsibilities of a Software Project Manager:
Software
project managers take the overall responsibility of steering a project to
success. It is very difficult to objectively describe the job responsibilities
of a project manager. The job responsibility of a project manager ranges from
invisible activities like building up team morale to highly visible customer
presentations.
Most
managers take responsibility for
- Project proposal writing
- project cost estimation
- Project Scheduling
- Project staffing
- Software process tailoring
- Project monitoring and control
- Software configuration management
- Project risk management
- Interfacing with clients
- Managerial report writing and presentations, etc.
These activities are certainly numerous, varied and difficult to
enumerate, but these activities can be broadly classified into project
planning, and project monitoring and control activities. The project planning
activity is undertaken before the development starts to plan the activities to
be undertaken during development. The project monitoring and control activities
are undertaken once the development activities start with the aim of ensuring
that the development proceeds as per plan and changing the plan whenever
required to cope with the situation.
26. Q: List the Skill necessary for Software
Project Management.
Answer:
- A theoretical knowledge of different project management techniques is certainly necessary to become a successful project manager.
- However, effective software project management frequently calls for good qualitative judgment and decision taking capabilities.
- In addition to having a good grasp of the latest software project management techniques such as cost estimation, risk management, and configuration management, project managers need good communication skills and the ability to get work done.
- However, some skills such as tracking and controlling the progress of the project, customer interaction, managerial presentations, and team building are largely acquired through experience.
- Nonetheless, the importance of sound knowledge of the prevalent project management techniques cannot be overemphasized.
27. Q: What are the different project planning
Activities? Briefly describe.
Answer:
Once a project is found to
be feasible, software project managers undertake project planning. Project
planning is undertaken and completed even before any development activity
starts. Project planning consists of the following essential activities:
• Estimating the following attributes of the project:
Project size:
What will be problem complexity in terms of the effort and time required to
develop the product?
Cost:
How much is it going to cost to develop the project?
Duration:
How long is it going to take to complete development?
Effort:
How much effort would be required?
The effectiveness of the
subsequent planning activities is based on the accuracy of these estimations.
• Scheduling manpower and other resources: After
the estimations are made, the schedules for manpower and other resources have
to be developed
• Staff organization and staffing plans: Staff
organization and staffing plans have to be made.
• Risk identification,
analysis, and abatement
planning : Risk identification, analysis
and abatement planning have to be done.
• Miscellaneous plans such as quality
assurance plan, configuration management plan, etc.
28. Q: What are
the contents of Software Project Management Plan
(SPMP) document? Briefly describe.
Answer:
Once project planning is
complete, project managers document their plans in a Software Project
Management Plan (SPMP) document. The SPMP document should discuss a list of
different items that have been discussed below. This list can be used as a
possible organization of the SPMP document.
Organization
of the Software Project Management Plan (SPMP) Document
1. Introduction
(a) Objectives
(b) Major Functions
(c) Performance Issues
(d) Management and Technical Constraints
2. Project Estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost, and Project Duration Estimates
3. Schedule
(a) Work Breakdown Structure
(b) Task Network Representation
(c) Gantt Chart Representation
(d) PERT Chart Representation
4. Project Resources
(a) People
(b) Hardware and Software
(c) Special Resources
5. Staff Organization
(a) Team Structure
(b) Management Reporting
6. Risk Management Plan
(a) Risk Analysis
(b) Risk Identification
(c) Risk Estimation
(d) Risk Abatement Procedures
7. Project Tracking and Control Plan
8. Miscellaneous Plans
(a) Process Tailoring
(b) Quality Assurance Plan
(c) Configuration Management Plan
(d) Validation and Verification
(e) System Testing Plan
(f) Delivery, Installation, and Maintenance Plan
29. Q: What is Sliding Window Planning?
Answer:
Especially for large projects, it is difficult to
make accurate plans. A part of this difficulty is due to the fact that the
project parameters, scope of the project, project staff, etc. may change during
the span of the project. In order to overcome this problem, sometimes project
managers undertake project planning in stages. Planning a project over a number
of stages protects managers from making big commitments too early. This
technique of staggered planning is known as Sliding Window Planning. In Sliding
Window Planning, starting with an initial plan, the project is planned more
accurately in successive development stages. At the start of a project, project
managers have incomplete knowledge about the details of the project. Their information
base gradually improves as the project progresses through different phases.
After the completion of every phase, the project managers can plan each
subsequent phase more accurately and with increasing levels of confidence.
30. Q: What are the different Metrics available
for Project size estimation?
Answer:
Accurate
estimation of the problem size is fundamental to satisfactory estimation of
effort, time duration and cost of a software project. In order to be able to
accurately estimate the project size, some important metrics should be defined
in terms of which the project size can be expressed. The size of a problem is
obviously not the number of bytes that the source code occupies, nor the byte size of the executable code. The project size is a measure of the
problem complexity in terms of the effort and time required to develop the
product.
Currently, two metrics are widely used to estimate size:
·
Lines of code (LOC)
·
Function point (FP).
The usage of each of these metrics in
project size estimation has its own advantages and disadvantages.
31. Q: What is LOC? What are its advantages and
disadvantages? [B.E. 2005]
Answer:
LOC
(Lines of Codes) is the simplest among all metrics available to estimate
project size. This metric is very popular because it is the simplest to use.
Using this metric, the project size is estimated by counting the number of
source instructions in the developed program. Obviously, while counting the
number of source instructions, lines used for commenting the code and the
header lines should be ignored.
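The counting rule above (ignore blank lines, comment lines, and header lines) can be sketched in Python. This is an illustrative helper, not a standard tool; `count_loc` is a hypothetical name, and a real counter would also handle block comments and string literals more carefully:

```python
def count_loc(source: str) -> int:
    """Count source instructions, ignoring blank lines and
    full-line comments ('//', '#', and '/* ... */' style lines)."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                      # blank line
        if stripped.startswith(("//", "#", "/*", "*")):
            continue                      # comment / header line
        loc += 1
    return loc

program = """\
# compute factorial
def fact(n):
    # base case
    if n == 0:
        return 1
    return n * fact(n - 1)
"""
print(count_loc(program))  # → 4 (comments and blanks excluded)
```

Note that the same program could be laid out with a different LOC count, which is exactly the coding-style shortcoming discussed below.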
Determining the LOC count at the end of
a project is a very simple job. However, accurate estimation of the LOC count
at the beginning of a project is very difficult. In order to estimate the LOC
count at the beginning of a project, project managers usually divide the
problem into modules and each module into sub modules and so on, until the
sizes of the different leaf-level modules can be approximately predicted. To be
able to do this, past experience in developing similar products is helpful. By
using the estimation of the lowest level modules, project managers arrive at
the total size estimation.
LOC as a measure of problem size has several shortcomings:
- LOC gives a numerical value of problem size that can vary widely with individual coding style – different programmers lay out their code in different ways. For example, one programmer might write several source instructions on a single line whereas another might split a single instruction across several lines. Of course, this problem can be easily overcome by counting the language tokens in the program rather than the lines of code. However, a more intricate problem arises because the length of a program depends on the choice of instructions used in writing the program. Therefore, even for the same problem, different programmers might come up with programs having different LOC counts. This situation does not improve even if language tokens are counted instead of lines of code.
- A good problem size measure should consider the overall complexity of the problem and the effort needed to solve it. That is, it should consider the total effort needed to specify, design, code, test, etc. and not just the coding effort. LOC, however, focuses on the coding activity alone; it merely computes the number of source lines in the final program. We have already seen that coding is only a small part of the overall software development activities. It is also wrong to argue that the overall product development effort is proportional to the effort required in writing the program code. This is because even though the design might be very complex, the code might be straightforward and vice versa. In such cases, code size is a grossly improper indicator of the problem size.
- LOC measure correlates poorly with the quality and efficiency of the code. Larger code size does not necessarily imply better quality or higher efficiency. Some programmers produce lengthy and complicated code as they do not make effective use of the available instruction set. In fact, it is very likely that a poor and sloppily written piece of code might have larger number of source instructions than a piece that is neat and efficient.
- LOC metric penalizes use of higher-level programming languages, code reuse, etc. The paradox is that if a programmer consciously uses several library routines, then the LOC count will be lower. This would show up as smaller program size. Thus, if managers use the LOC count as a measure of the effort put in the different engineers (that is, productivity), they would be discouraging code reuse by engineers.
- LOC metric measures the lexical complexity of a program and does not address the more important but subtle issues of logical or structural complexities. Between two programs with equal LOC count, a program having complex logic would require much more effort to develop than a program with very simple logic. To realize why this is so, consider the effort required to develop a program having multiple nested loop and decision constructs with another program having only sequential control flow.
- It is very difficult to accurately estimate LOC in the final product from the problem specification. The LOC count can be accurately computed only after the code has been fully developed. Therefore, the LOC metric is of little use to project managers during project planning, since project planning is carried out even before any development activity has started. This possibly is the biggest shortcoming of the LOC metric from the project manager’s perspective.
32. Q: What is Function
point? What are its advantages and disadvantages?
Answer:
Function point (FP)
Function
point metric was proposed by Albrecht [1983]. This metric overcomes many of the
shortcomings of the LOC metric. Since its inception in the late 1970s, function
point metric has been slowly gaining popularity. One of the important
advantages of using the function point metric is that it can be used to easily
estimate the size of a software product directly from the problem specification.
This is in contrast to the LOC metric, where the size can be accurately
determined only after the product has fully been developed.
The conceptual idea behind
the function point metric is that the size of a software product is directly
dependent on the number of different functions or features it supports.
Fig. 3.2: System function as a map of input data
to output data
A software product
supporting many features would certainly be of larger size than a product with
fewer features. Each function, when invoked, reads some input data and
transforms it to the corresponding output data. For example, the issue book
feature (as shown in fig. 3.2) of a Library Automation Software takes the name
of the book as input and displays its location and the number of copies
available. Thus, a computation of the number of input and the output data
values to a system gives some indication of the number of functions supported
by the system. Albrecht postulated that in addition to the number of basic
functions that software performs, the size is also dependent on the number of
files and the number of interfaces.
Besides using the number
of input and output data values, function point metric computes the size of a
software product (in units of functions points or FPs) using three other
characteristics of the product as shown in the following expression. The size
of a product in function points (FP) can be expressed as the weighted sum of
these five problem characteristics. The weights associated with the five
characteristics were proposed empirically and validated by the observations
over many projects. Function point is computed in two steps. The first step is
to compute the unadjusted function point (UFP).
UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of inquiries)*4 + (Number of files)*10 + (Number of interfaces)*7
Number
of inputs: Each data item input by the user is counted. Data inputs should be
distinguished from user inquiries. Inquiries are user commands such as
print-account-balance. Inquiries are counted separately. It must be noted that
individual data items input by the user are not considered in the calculation
of the number of inputs, but a group of related inputs are considered as a
single input.
For example, while
entering the data concerning employee to employee pay roll software; the data
items name, age, sex, address, phone number, etc. are together considered as a
single input. All these data items can be considered to be related, since they
pertain to a single employee.
Number of outputs: The
outputs considered refer to reports printed, screen outputs, error messages
produced, etc. While counting the number of outputs, the individual data items within a report are not considered, but a set of related data items is counted as one output.
Number of inquiries: Number
of inquiries is the number of distinct interactive queries which can be made by
the users. These inquiries are the user commands which require specific action
by the system.
Number of files: Each
logical file is counted. A logical file means groups of logically related data.
Thus, logical files can be data structures or physical files.
Number of interfaces: Here
the interfaces considered are the interfaces used to exchange information with
other systems. Examples of such interfaces are data files on tapes, disks,
communication links with other systems etc.
Once
the unadjusted function point (UFP) is computed, the technical complexity
factor (TCF) is computed next. TCF refines the UFP measure by considering
fourteen other factors such as high transaction rates, throughput, and response
time requirements, etc. Each of these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence/essential). The resulting
numbers are summed, yielding the total degree of influence (DI). Now, TCF is
computed as (0.65+0.01*DI). As DI can vary from 0 to 70, TCF can vary from 0.65
to 1.35. Finally, FP=UFP*TCF.
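The two-step computation (UFP first, then TCF) can be sketched in Python. The average weights 4, 5, 4, 10, 7 are taken from the UFP expression above; `function_point` is an illustrative name:

```python
def function_point(inputs, outputs, inquiries, files, interfaces,
                   influence_ratings):
    """Compute FP using average weights (4, 5, 4, 10, 7) and the
    fourteen technical-complexity ratings, each in the range 0..5."""
    assert len(influence_ratings) == 14
    ufp = (inputs * 4 + outputs * 5 + inquiries * 4
           + files * 10 + interfaces * 7)
    di = sum(influence_ratings)           # total degree of influence, 0..70
    tcf = 0.65 + 0.01 * di                # ranges from 0.65 to 1.35
    return ufp * tcf

# The worked example from question 35: all 14 factors rated average (3).
fp = function_point(50, 40, 35, 6, 4, [3] * 14)
print(round(fp))  # → 672
```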
Advantages:
·
This
approach is independent of the language, tools or methodologies used for
implementations
·
Function
points can be estimated from requirement specification or design specification,
thus making it possible to estimate development effort in early phases of
development.
·
Function
points are directly linked to the statement of requirements.
Disadvantages:
A major shortcoming of the
function point measure is that it does not take into account the algorithmic
complexity of software. That is, the function point metric implicitly assumes
that the effort required to design and develop any two functionalities of the
system is the same. But we know that this is normally not true: the effort
required to develop any two functionalities may vary widely. It only takes the
number of functions that the system supports into consideration without
distinguishing the difficulty level of developing the various functionalities.
To overcome this problem, an extension of the function point metric called
feature point metric is proposed.
33. Q: What is Feature point metric? What are its advantages and
disadvantages?
Answer:
Feature point metric
A major shortcoming of the function point measure is that it does not take into account the algorithmic complexity of software. That is, it implicitly assumes that the effort required to design and develop any two functionalities of the system is the same. But we know that this is normally not true: the effort required to develop any two functionalities may vary widely. The function point metric only takes the number of functions that the system supports into consideration, without distinguishing the difficulty level of developing the various functionalities. To overcome this problem, an extension of the function point metric called the feature point metric is proposed.
Feature point metric incorporates an extra parameter: algorithm complexity. This parameter ensures that the size computed using the feature point metric reflects the fact that the greater the complexity of a function, the greater the effort required to develop it, and therefore its size should be larger compared to simpler functions.
34. Q: What is the LOC for the given program?
Answer:
The above program contains
18 lines of Code and one of which is a comment line. So, LOC for the above
given program is 17. i.e. 17 LOC.
35. Q: Consider a project with the following
functional units:
Number of user inputs = 50
Number of user outputs = 40
Number of user enquiries = 35
Number of user files = 06
Number of external interfaces = 04
Assume all complexity adjustment
factors and weighting factors are average. Compute the function point for the
project.
Answer:
We know:
UFP = 50 x 4 +
40 x 5 + 35 x 4 +06 x 10 +04 x 7= 628
TCF = ( 0.65 + 0.01(14 x
3) ) = 1.07
FP = UFP x TCF = 628 x 1.07 = 671.96 ≈ 672
36. Q: What are
the different functional units used in Function Point Estimation?
Answer:
The functional units used in FP
estimation are classified as Low, Average and High based on the complexity of
the software product. The weighting factors are:
Functional Units          | Low | Average | High
--------------------------|-----|---------|-----
Inputs (I)                |  3  |    4    |  6
Outputs (O)               |  4  |    5    |  7
Inquiry (E)               |  3  |    4    |  6
Number of files (F)       |  7  |   10    | 15
Number of interfaces (IF) |  5  |    7    | 10
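The table can be encoded as a simple lookup, so that UFP is computed from each functional unit's assessed complexity level. This is a sketch; `WEIGHTS`, `LEVEL`, and `ufp` are illustrative names:

```python
# Weighting factors from the table above (low / average / high).
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}
LEVEL = {"low": 0, "average": 1, "high": 2}

def ufp(counts, levels):
    """Unadjusted function points: each functional unit's count is
    multiplied by the weight for its assessed complexity level."""
    return sum(counts[u] * WEIGHTS[u][LEVEL[levels[u]]] for u in WEIGHTS)

counts = {"inputs": 50, "outputs": 40, "inquiries": 35,
          "files": 6, "interfaces": 4}
levels = {u: "average" for u in counts}   # matches the worked example above
print(ufp(counts, levels))  # → 628
```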
The 14 TCF factors used in FP estimation are rated on a scale from 0 to 5:
· 0 (No influence)
· 1 (Incidental)
· 2 (Moderate)
· 3 (Average)
· 4 (Significant)
· 5 (Essential)
37. Q: What are
the different project estimation techniques?
Answer:
Estimation of various project
parameters is a basic project planning activity. The important project
parameters that are estimated include: project size, effort required to develop
the software, project duration, and cost. These estimates not only help in
quoting the project cost to the customer, but are also useful in resource
planning and scheduling. There are three broad categories of estimation
techniques:
• Empirical estimation techniques
• Heuristic techniques
• Analytical estimation techniques
38. Q: What is an
empirical estimation technique? What are different empirical estimation
techniques?
Answer:
Empirical estimation techniques are
based on making an educated guess of the project parameters. While using this
technique, prior experience with development of similar products is helpful.
Although empirical estimation techniques are based on common sense, different
activities involved in estimation have been formalized over the years. Two
popular empirical estimation techniques are: Expert judgment technique
and Delphi cost
estimation.
Expert Judgment Technique
Expert judgment is one of the most
widely used estimation techniques. In this approach, an expert makes an
educated guess of the problem size after analyzing the problem thoroughly.
Usually, the expert Estimates the cost of the different components (i.e.
modules or subsystems) of the system and then combines them to arrive at the
overall estimate. However, this technique is subject to human errors and
individual bias. Also, it is possible that the expert may overlook some factors
inadvertently. Further, an expert making an estimate may not have experience
and knowledge of all aspects of a project. For example, he may be conversant
with the database and user interface parts but may not be very knowledgeable
about the computer communication part. A more refined form of expert judgment
is the estimation made by group of experts. Estimation by a group of experts
minimizes factors such as individual oversight, lack of familiarity with a
particular aspect of a project, personal bias, and the desire to win contract
through overly optimistic estimates. However, the estimate made by a group of
experts may still exhibit bias on issues where the entire group of experts may
be biased due to reasons such as political considerations. Also, the decision
made by the group may be dominated by overly assertive members.
Delphi
cost estimation
Delphi
cost estimation approach tries to overcome some of the shortcomings of the
expert judgment approach. Delphi estimation is
carried out by a team comprising a group of experts and a coordinator. In
this approach, the coordinator provides each estimator with a copy of the
software requirements specification (SRS) document and a form for recording his
cost estimate. Estimators complete their individual estimates anonymously and
submit them to the coordinator. In their estimates, the estimators mention any unusual characteristic of the product which has influenced their estimation. The
coordinator prepares and distributes the summary of the responses of all the
estimators, and includes any unusual rationale noted by any of the estimators.
Based on this summary, the estimators re-estimate. This process is iterated for
several rounds. However, no discussion among the estimators is allowed during
the entire estimation process. The idea behind this is that if any discussion
is allowed among the estimators, then many estimators may easily get influenced
by the rationale of an estimator who may be more experienced or senior. After
the completion of several iterations of estimations, the coordinator takes the
responsibility of compiling the results and preparing the final estimate.
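The rounds of anonymous estimation can be illustrated with a toy simulation. The convergence rule here (each estimator moving a fixed fraction toward the coordinator's summary) is invented purely for illustration; in real Delphi estimation, estimators revise based on the rationale in the distributed summary, not on a formula:

```python
def delphi_round(estimates):
    """Coordinator's summary: here simply the mean of the anonymous estimates."""
    return sum(estimates) / len(estimates)

def delphi(initial_estimates, rounds=3, pull=0.5):
    """Toy Delphi simulation: in each round every estimator moves a
    fraction `pull` toward the distributed summary. No discussion is
    modeled, matching the rule that estimators do not confer."""
    estimates = list(initial_estimates)
    for _ in range(rounds):
        summary = delphi_round(estimates)
        estimates = [e + pull * (summary - e) for e in estimates]
    return delphi_round(estimates)   # coordinator compiles the final estimate

# Four estimators' initial size estimates (say, in person-months).
print(delphi([40, 55, 70, 90]))
```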
39. Q: What is a Heuristic estimation
technique? What are different Heuristic
estimation techniques?
Answer:
Heuristic
techniques assume that the relationships among the different project parameters
can be modeled using suitable mathematical expressions. Once the basic
(independent) parameters are known, the other (dependent) parameters can be
easily determined by substituting the value of the basic parameters in the
mathematical expression. Different heuristic estimation models can be divided
into the following two classes: single variable model and the multi variable
model.
Single variable estimation
models provide a means to estimate the desired characteristics of a problem,
using some previously estimated basic (independent) characteristic of the
software product such as its size. A single variable estimation model takes the
following form:
Estimated Parameter = c1 * e^d1
In the above expression, e
is the characteristic of the software which has already been estimated
(independent variable). Estimated Parameter is the dependent parameter
to be estimated. The dependent parameter to be estimated could be effort,
project duration, staff size, etc. c1 and d1 are
constants. The values of the constants c1 and d1 are
usually determined using data collected from past projects (historical data).
The basic COCOMO model is an example of single variable cost estimation model.
A multivariable cost estimation model takes the following form:
Estimated Resource = c1*e1^d1 + c2*e2^d2 + ...
Where e1, e2, … are
the basic (independent) characteristics of the software already estimated, and
c1,
c2,
d1,
d2,
… are constants. Multivariable estimation models are expected to give more
accurate estimates compared to the single variable models, since a project
parameter is typically influenced by several independent parameters. The
independent parameters influence the dependent parameter to different extents.
This is modeled by the constants c1, c2, d1, d2, … .
Values of these constants are usually determined from historical data. The
intermediate COCOMO model can be considered to be an example of a multivariable
estimation model.
40. Q: What is a Halstead Software Science? What are different
estimation techniques?
Answer:
Halstead’s Software
Science – An Analytical Technique
Halstead’s software
science is an analytical technique to measure size, development effort, and
development cost of software products. Halstead used a few primitive program
parameters to develop the expressions for overall program length, potential minimum volume, actual volume, effort, and development time.
For
a given program, let:
• η1 be the number of unique operators used in the program,
• η2 be the number of unique operands used in the program,
• N1 be the total number of operators used in the program,
• N2 be the total number of operands used in the program.
Length and Vocabulary
The length of a program as
defined by Halstead, quantifies total usage of all operators and operands in
the program. Thus, program length N = N1 +N2. The
program vocabulary is the number of unique operators and operands used in the
program. Thus, program vocabulary η = η1 + η2.
Program Volume
The length of a program
(i.e. the total number of operators and operands used in the code) depends on
the choice of the operators and operands used. Thus, while expressing program
size, the programming language used must be taken into consideration:
V = N * log2(η)
Here
the program volume V is the minimum number of bits needed to encode the
program. In fact, to represent η different identifiers uniquely, at least log2(η) bits (where η is the program vocabulary) will be needed. In this scheme, N*log2(η) bits will be needed to store a program of length N. Therefore, the volume V
represents the size of the program by approximately compensating for the effect
of the programming language used.
Potential Minimum Volume
V* = (2 + η2)log2(2 + η2).
The program level L
is given by L = V*/V. The concept of program level L is introduced in an
attempt to measure the level of abstraction provided by the programming
language. Using this definition, languages can be ranked into levels that also
appear intuitively correct.
Effort and Time
The effort required to
develop a program can be obtained by dividing the program volume with the level
of the programming language used to develop the code. Thus, effort E = V/L,
where E is the number of mental discriminations required to implement the
program and also the effort required to read and understand the program. Thus,
the programming effort E = V²/V* (since L = V*/V) varies as the square of the
volume. Experience shows that E is well correlated to the effort needed for
maintenance of an existing program. The programmer’s time T = E/S, where S is the speed of mental discriminations. The value of S has been empirically derived from psychological reasoning, and its recommended value for programming applications is 18.
41. Q: Let us
consider the following C program.
main( )
{
int a, b, c, avg;
scanf(“%d %d %d”, &a, &b, &c);
avg = (a+b+c)/3;
printf(“avg = %d”, avg);
}
Find out the estimated length, and program volume
of the above given program.
Answer:
The unique operators are:
main, ( ), { }, int, scanf, &, ',', ';', =, +, /, printf
The unique operands are:
a, b, c, &a, &b, &c, a+b+c, avg, 3, "%d %d %d", "avg = %d"
Therefore, η1 = 12, η2 = 11
Estimated Length = (12*log2(12) + 11*log2(11))
= (12*3.58 + 11*3.45)
= (43 + 38) = 81
Volume = Length*log2(23) = 81*4.52 = 366
42. Q: What are
the different classifications of Software development?
Answer:
Boehm postulated that any software development
project can be classified into one of the following three categories based on
the development complexity: organic, semidetached, and embedded. Boehm not only
considered the characteristics of the product but also those of the development
team and development environment. Boehm’s
[1981] definition of organic, semidetached, and embedded systems are elaborated
below.
Organic:
A development project can be considered
of organic type, if the project deals with developing a well understood
application program, the size of the development team is reasonably small, and
the team members are experienced in developing similar types of projects.
Semidetached:
A development project can be considered
of semidetached type, if the development consists of a mixture of experienced
and inexperienced staff. Team members may have limited experience on related
systems but may be unfamiliar with some aspects of the system being developed.
Embedded:
A development project is considered to
be of embedded type, if the software being developed is strongly coupled to
complex hardware, or if stringent regulations on the operational procedures
exist.
43. Q: What is
COCOMO? What are different types of COCOMO model? State the Basic COCOMO model.
Answer:
COCOMO
(Constructive Cost Estimation Model) was proposed by Boehm.
According
to Boehm, software cost estimation should be done through three stages: Basic
COCOMO, Intermediate COCOMO, and Complete COCOMO.
Basic
COCOMO Model
The basic
COCOMO model gives an approximate estimate of the project parameters. The basic
COCOMO estimation model is given by the following expressions:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 Months
Where
• KLOC is the estimated size of the
software product expressed in Kilo Lines of Code,
• a1, a2, b1, b2 are
constants for each category of software products,
• Tdev is the estimated time to develop
the software, expressed in months,
• Effort is the total effort required
to develop the software product, expressed in person months (PMs).
The effort estimation is expressed in units of
person-months (PM). It is the area under the person-month plot (as shown in
fig. 11.3). It should be carefully noted that an effort of 100 PM does not
imply that 100 persons should work for 1 month nor does it imply that 1 person
should be employed for 100 months, but it denotes the area under the
person-month curve (as shown in fig. 11.3).
According
to Boehm, every line of source text should be calculated as one LOC
irrespective of the actual number of instructions on that line. Thus, if a
single instruction spans several lines (say n lines), it is considered to be
nLOC. The values of a1, a2, b1, b2 for different
categories of products (i.e. organic, semidetached, and embedded) as given by
Boehm [1981] are summarized below. He derived the above expressions by
examining historical data collected from a large number of actual projects.
Estimation
of development effort : For the
three classes of software products, the formulas for estimating the effort
based on the code size are shown below:
Organic: Effort = 2.4(KLOC)^1.05 PM
Semi-detached: Effort = 3.0(KLOC)^1.12 PM
Embedded: Effort = 3.6(KLOC)^1.20 PM
Estimation of development time: For the three classes of software products, the formulas for
estimating the development time based on the effort are given below:
Organic: Tdev = 2.5(Effort)^0.38 Months
Semi-detached: Tdev = 2.5(Effort)^0.35 Months
Embedded: Tdev = 2.5(Effort)^0.32 Months
44. Q: What are
different problems associated with Basic COCOMO model?
Answer:
Some insight into the basic COCOMO model can be
obtained by plotting the estimated characteristics for different software
sizes. Fig. 11.4 shows a plot of estimated effort versus product size. From
fig. 11.4, we can observe that the effort is somewhat super linear in the size
of the software product. Thus, the effort required to develop a product increases
very rapidly with project size.
The
development time versus the product size in KLOC is plotted in fig. 11.5. From
fig. 11.5, it can be observed that the development time is a sublinear function
of the size of the product, i.e. when the size of the product increases by two
times, the time to develop the product does not double but rises moderately.
This can be explained by the fact that for larger products, a larger number of
activities which can be carried out concurrently can be identified. The
parallel activities can be carried out simultaneously by the engineers. This
reduces the time to complete the project. Further, from fig. 11.5, it can be
observed that the development time is roughly the same for all the three
categories of products. For example, a 60 KLOC program can be developed in
approximately 18 months, regardless of whether it is of organic, semidetached,
or embedded type.
From
the effort estimation, the project cost can be obtained by multiplying the
required effort by the manpower cost per month. But, implicit in this project
cost computation is the assumption that the entire project cost is incurred on
account of the manpower cost alone. In addition to manpower cost, a project
would incur costs due to hardware and software required for the project and the
company overheads for administration, office space, etc. It is important to note that the effort and duration estimations obtained using the COCOMO model are called the nominal effort estimate and nominal duration estimate. The
term nominal implies that if anyone tries to complete the project in a time
shorter than the estimated duration, then the cost will increase drastically.
But, if anyone completes the project over a longer period of time than the
estimated, then there is almost no decrease in the estimated cost value.
45. Q: Assume that the size of an organic type software product has
been estimated to be 32,000 lines of source code. Assume that the average salary of a software engineer is Rs. 15,000/- per month. Determine the effort
required to develop the software product and the nominal development time.
Answer:
We know,
Effort = 2.4 × (32)^1.05 = 91 PM
Nominal development time = 2.5 × (91)^0.38 = 14 months
Cost required to develop the product = 14 × 15,000 = Rs. 210,000/-
46. Q: A project size of 200 KLOC is to be
developed. Software development team has average
experience on similar type of projects. The project schedule is not very tight.
Calculate the effort, development time, average staff size and productivity of
the project.
Answer:
The semidetached mode is the most appropriate, keeping in view the size, schedule and experience of the development team.
Hence,
Effort = 3.0 × (200)^1.12 = 1133.12 PM = E
Nominal development time = 2.5 × (1133.12)^0.35 = 29.3 months = D
Average staff size (SS) = E/D = 1133.12/29.3 = 38.67 Persons
Productivity (P) = 200/1133.12 = 0.1765 KLOC/PM = 176 LOC/PM
47.
Q: Suppose that a project was
estimated to be 400 KLOC. Calculate the effort and development time for each of
the three modes, i.e. Organic, Semi-detached and Embedded.
Answer:
The basic COCOMO equations take the form:
Effort (E) = a × (KLOC)^b
Development time (D) = c × (E)^d
Estimated size of the project is 400 KLOC.
(I). Organic mode
Effort (E) = 2.4 × (400)^1.05 = 1295.31 PM
Development time (D) = 2.5 × (1295.31)^0.38 = 38.07 M
(II). Semidetached mode
Effort (E) = 3.0 × (400)^1.12 = 2462.79 PM
Development time (D) = 2.5 × (2462.79)^0.35 = 38.45 M
(III). Embedded mode
Effort (E) = 3.6 × (400)^1.20 = 4772.81 PM
Development time (D) = 2.5 × (4772.81)^0.32 = 38 M
48. Q: How
Intermediate and Complete COCOMO model concepts works?
Answer:
Intermediate COCOMO model
The
basic COCOMO model assumes that effort and development time are functions of
the product size alone. However, a host of other project parameters besides the
product size affect the effort required to develop the product as well as the
development time. Therefore, in order to obtain an accurate estimation of the
effort and project duration, the effect of all relevant parameters must be
taken into account. The intermediate COCOMO model recognizes this fact and
refines the initial estimate obtained using the basic COCOMO expressions by
using a set of 15 cost drivers (multipliers) based on various attributes of
software development. For example, if modern programming practices are used,
the initial estimates are scaled downward by multiplication with a cost driver
having a value less than 1. If there are stringent reliability requirements on
the software product, this initial estimate is scaled upward. Boehm requires
the project manager to rate these 15 different parameters for a particular
project on a scale of one to three. Then, depending on these ratings, he suggests
appropriate cost driver values which should be multiplied with the initial
estimate obtained using the basic COCOMO. In general, the cost drivers can be
classified as being attributes of the following items:
- Product: The characteristics of the product that are considered include the inherent complexity of the product, reliability requirements of the product, etc.
- Computer: Characteristics of the computer that are considered include the execution speed required, storage space required etc.
- Personnel: The attributes of development personnel that are considered include the experience level of personnel, programming capability, analysis capability, etc.
- Development Environment: Development environment attributes capture the development facilities available to the developers. An important parameter that is considered is the sophistication of the automation (CASE) tools used for software development.
Complete COCOMO model
A major
shortcoming of both the basic and intermediate COCOMO models is that they
consider a software product as a single homogeneous entity. However, most large
systems are made up of several smaller sub-systems. These sub-systems may have
widely different characteristics. For example, some sub-systems may be
considered as organic type, some semidetached, and some embedded. Not only that
the inherent development complexity of the subsystems may be different, but
also for some subsystems the reliability requirements may be high, for some the
development team might have no previous experience of similar development, and
so on. The complete COCOMO model considers these differences in characteristics
of the subsystems and estimates the effort and development time as the sum of
the estimates for the individual subsystems. The cost of each subsystem is
estimated separately. This approach reduces the margin of error in the final
estimate.
The following development project can be considered as an
example application of the complete COCOMO model. A distributed Management
Information System (MIS) product for an organization having offices at several
places across the country can have the following sub-components:
• Database part
• Graphical User Interface (GUI) part
• Communication part
Of
these, the communication part can be considered as embedded software. The
database part could be semi-detached software, and the GUI part organic
software. The costs for these three components can be estimated separately, and
summed up to give the overall cost of the system.
49. Q: What do you
mean by Staffing level estimation? Describe the Putnam’s work for staffing
level estimation.
Answer:
Staffing level estimation
Once the effort required
to develop a software has been determined, it is necessary to determine the
staffing requirement for the project. Putnam first studied the problem of what
should be a proper staffing pattern for software projects. He extended the work
of Norden who had earlier investigated the staffing pattern of research and
development (R&D) type of projects. In order to appreciate the staffing
pattern of software projects, Norden’s and Putnam’s results must be understood.
Putnam’s Work
Putnam studied the problem
of staffing of software projects and found that the software development has
characteristics very similar to other R & D projects studied by Norden and
that the Rayleigh-Norden curve can be used to relate the number of delivered
lines of code to the effort and the time required to develop the project. By
analyzing a large number of army projects, Putnam derived the following expression:
L = Ck * K^(1/3) * td^(4/3)
The various terms of this
expression are as follows:
• K is the total effort expended (in PM) in the product
development and L is the product size in KLOC.
• td corresponds to the time of system and integration
testing. Therefore, td can be approximately considered as the time required to
develop the software.
- Ck is the state of technology constant and reflects constraints that impede the progress of the programmer. Typical values of Ck = 2 for poor development environment (no methodology, poor documentation, and review, etc.), Ck = 8 for good software development environment (software engineering principles are adhered to), Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are used). The exact value of Ck for a specific project can be computed from the historical data of the organization developing it.
Putnam suggested that
optimal staff build-up on a project should follow the Rayleigh curve. Only a
small number of engineers are needed at the beginning of a project to carry out
planning and specification tasks. As the project progresses and more detailed
work is required, the number of engineers reaches a peak. After implementation
and unit testing, the number of project staff falls. However, the staff
build-up should not be carried out in large installments. The team size should
either be increased or decreased slowly whenever required to match the
Rayleigh-Norden curve. Experience shows that a very rapid build up of project
staff any time during the project development correlates with schedule
slippage. It should be clear that a constant level of manpower through out the
project duration would lead to wastage of effort and increase the time and effort
required to develop the product. If a constant number of engineers are used
over all the phases of a project, some phases would be overstaffed and the
other phases would be understaffed causing inefficient use of manpower, leading
to schedule slippage and increase in cost.
50. Q: Describe
the Norden’s work for staffing level estimation. Give the drawback of Putnam’s
work.
Answer:
Norden
studied the staffing patterns of several R & D projects. He found that the
staffing pattern can be approximated by the Rayleigh distribution curve (as
shown in fig. 11.6). Norden represented the Rayleigh curve by the following
equation:
E = (K/td²) * t * e^(−t²/(2*td²))
Where
E is the effort required at time t. E is an indication of the number of
engineers (or the staffing level) at any particular time during the duration of
the project, K is the area under the curve, and td is the time at which
the curve attains its maximum value.
It
must be remembered that the results of Norden are applicable to general R &
D projects and were not meant to model the staffing pattern of software
development projects.
Drawback
of Putnam’s Works:
By analyzing a large
number of army projects, Putnam derived the following expression:
L = Ck * K^(1/3) * td^(4/3)
Where, K is the total effort expended (in PM) in the product
development and L is the product size in KLOC, td corresponds to the
time of system and integration testing and Ck is the state of technology
constant and reflects constraints that impede the progress of the programmer .
Now by using the above expression it is obtained that,
K = L³/(Ck³ * td⁴)
Or, K = C/td⁴
For the same product size, C = L³/Ck³ is a constant.
or, K1/K2 = (td2)⁴/(td1)⁴
or, K ∝ 1/td⁴
or, cost ∝ 1/td⁴
(as project development effort is directly proportional to project development cost)
From the above expression, it can be easily observed that when the
schedule of a project is compressed, the required development effort as well as
project development cost increases in proportion to the fourth power of the
degree of compression. It means that a relatively small compression in delivery
schedule can result in substantial penalty of human effort as well as
development cost. For example, if the estimated development time is 1 year,
then in order to develop the product in 6 months, the total effort required to
develop the product (and hence the project cost) increases 16 times.
51. Q: A software
project is planned to cost 95 PY in a period of 1 year and 9 months. Calculate
the peak manning and average rate of software team build up.
Answer:
Software project cost, K = 95 PY
Peak development time, td = 1.75 years
Peak manning, m0 = K/(td × e^(1/2)) = 95/(1.75 × 1.648) = 33 Persons
Average rate of software team build-up = m0/td = 33/1.75 = 18.8 Persons/Year
52. Q: What are
the different steps to perform project scheduling?
Answer:
Project-task scheduling is
an important project planning activity. It involves deciding which tasks would
be taken up when. In order to schedule the project activities, a software
project manager needs to do the following:
1. Identify all the tasks needed to complete the project.
2. Break down large tasks into small activities.
3. Determine the dependency among different activities.
4. Establish the most likely estimates for the time durations
necessary to complete the activities.
5. Allocate resources to activities.
6. Plan the starting and ending dates for various activities.
7. Determine the critical path. A critical path is the chain of
activities that determines the duration of the project.
The
first step in scheduling a software project involves identifying all the tasks
necessary to complete the project. Next, the large tasks are broken down into a
logical set of small activities which would be assigned to different engineers.
The work breakdown structure formalism helps the manager to
breakdown the tasks systematically. After the project manager has broken down
the tasks and created the work breakdown structure, he has to find the
dependency among the activities. The dependency among the activities is
represented in the form of an activity network. Once the activity
network representation has been worked out, resources are allocated to each
activity. Resource allocation is typically done using a Gantt chart.
After resource allocation is done, a PERT chart representation is
developed. The PERT chart representation is suitable for program monitoring and
control. For task scheduling, the project manager needs to decompose the project
tasks into a set of activities. The time frame when each activity is to be performed is to be determined. The end of each activity is called a milestone.
The project manager tracks the progress of a project by monitoring the timely
completion of the milestones. If he observes that the milestones start getting
delayed, then he has to carefully control the activities, so that the overall
deadline can still be met.
53. Q: What is
Work breakdown structure?
Answer:
Work Breakdown Structure
(WBS) is used to decompose a given task set recursively into small activities.
WBS provides a notation for representing the major tasks need to be carried out
in order to solve a problem. The root of the tree is labeled by the problem
name. Each node of the tree is broken down into smaller activities that are
made the children of the node. Each activity is recursively decomposed into smaller sub-activities until, at the leaf level, each activity requires approximately two weeks of development effort. Fig. 3.7 represents the WBS of MIS
(Management Information System) software. While breaking down a task into
smaller tasks, the manager has to make some hard decisions. If a task is broken
down into large number of very small activities, these can be carried out
independently. Thus, it becomes possible to develop the product faster (with
the help of additional manpower). Therefore, to be able to complete a project
in the least amount of time, the manager needs to break large tasks into
smaller ones, expecting to find more parallelism. However, it is not useful to
subdivide tasks into units which take less than a week or two to execute. Very
fine subdivision means that a disproportionate amount of time must be spent on
preparing and revising various charts.
Fig. 3.7: Work breakdown
structure of an MIS problem
54. Q: How network structure is constructed? What is a critical path
method?
Answer:
WBS representation of a project is
transformed into an activity network by representing activities identified in
WBS along with their interdependencies. An activity network shows the different
activities making up a project, their estimated durations, and
interdependencies (as shown in fig. 3.8). Each activity is represented
by a rectangular node and the duration of the activity is shown alongside each
task.
Fig. 3.8: Activity
network representation of the MIS problem
Managers
can estimate the time durations for the different tasks in several ways. One
possibility is that they can empirically assign durations to different tasks.
This, however, is not a good idea, because software engineers often resent such unilateral decisions. A possible alternative is to let the engineers themselves estimate the time for the activities assigned to them. However, some managers prefer to estimate the time for various activities themselves. Many managers believe that an aggressive schedule motivates the engineers to do a better and faster job. However, careful experiments have shown that unrealistically aggressive schedules not only cause engineers to compromise on intangible quality aspects, but also are a cause for schedule delays. A good way to achieve accuracy in estimating the task durations without creating undue schedule pressures is to have people set their own schedules.
A critical
task is one with a zero slack time. A path from the start node to the
finish node containing only critical tasks is called a critical path.
A critical path is the chain of activities that determines the duration of the
project.
55. Q: For the
activity diagram shown as in figure 3.8, use the CPM to find out the critical
path?
Answer:
From the activity network
representation following analysis can be made. The minimum time (MT) to
complete the project is the maximum of all paths from start to finish.
The earliest start (ES) time of a task is the maximum of all paths
from the start to the task. The latest start time (LS) is the difference
between MT and the maximum of all paths from this task to the finish.
The earliest finish time (EF) of a task is the sum of the
earliest start time of the task and the duration of the task. The
latest finish (LF) time of a task can be obtained by subtracting
maximum of all paths from this task to finish from MT. The slack time (ST)
is (LF – EF) and equivalently can be written as (LS – ES).
The slack time (or float time) is the total time that a task may be delayed
before it will affect the end time of the project. The slack time indicates the
“flexibility” in starting and completion of tasks. A critical task is one
with a zero slack time. A path from the start node to the finish node
containing only critical tasks is called a critical path. These parameters for
different tasks for the MIS problem are shown in the following table. So,
Task               | ES  | EF  | LS  | LF  | ST
-------------------|-----|-----|-----|-----|-----
Specification      |   0 |  15 |   0 |  15 |   0
Design database    |  15 |  60 |  15 |  60 |   0
Design GUI part    |  15 |  45 |  90 | 120 |  75
Code database      |  60 | 165 |  60 | 165 |   0
Code GUI part      |  45 |  90 | 120 | 165 |  75
Integrate and test | 165 | 285 | 165 | 285 |   0
Write user manual  |  15 |  75 | 225 | 285 | 210
So, the critical path is represented
by the dark line in the fig.
56. Q: What is a Gantt Chart? Draw the Gantt Chart for the above network
Activity Diagram as shown in the figure 3.8?
Answer:
Gantt
charts are mainly used to allocate resources to activities. The resources
allocated to activities include staff, hardware, and software. A Gantt chart is
a special type of bar chart where each bar represents an activity. The bars are
drawn along a time line. The length of each bar is proportional to the duration
of time planned for the corresponding activity.
Gantt charts used in software project management are actually an enhanced version of the standard Gantt charts. In the Gantt charts used for software project
management, each bar consists of a white part and a shaded part. The shaded
part of the bar shows the length of time each task is estimated to take. The
white part shows the slack time, that is, the latest time by which a task must
be finished.
The Gantt chart for the activity network diagram in fig 3.8 is as below:
57. Q: What is
PERT Chart? Why it is used?
Answer:
PERT (Program Evaluation and Review Technique) charts consist of a network of boxes
and arrows. The boxes represent activities and the arrows represent task
dependencies. PERT chart represents the statistical variations in the project
estimates assuming a normal distribution. Thus, in a PERT chart instead of
making a single estimate for each task, pessimistic, likely, and optimistic
estimates are made. The boxes of PERT charts are usually annotated with the
pessimistic, likely, and optimistic estimates for every task. Since all
possible completion times between the minimum and maximum duration for every
task has to be considered, there are not one but many critical paths, depending
on the permutations of the estimates for each task. This makes critical path
analysis in PERT charts very complex. A critical path in a PERT chart is shown
by using thicker arrows. The PERT chart representation of the MIS problem of
fig. 11.8 is shown in fig. 11.10. PERT charts are a more sophisticated form of
activity chart. In activity diagrams only the estimated task durations are represented. Since the actual durations might vary from the estimated durations, the utility of the activity diagrams is limited.
Gantt
chart representation of a project schedule is helpful in planning the utilization
of resources, while PERT chart is useful for monitoring the timely progress of
activities. Also, it is easier to identify parallel activities in a project
using a PERT chart. Project managers need to identify the parallel activities
in a project for assignment to different engineers.
Fig. 11.10:
58. Q: What do you
mean by Organization Structure? What are Different Organizational Formats?
Differentiate them.
Or
What do you mean by Functional Format
and Project Format?? Differentiate them.
Answer:
Usually every software development organization
handles several projects at any time. Software organizations assign different
teams of engineers to handle different software projects. Each type of
organization structure has its own advantages and disadvantages so the issue
“how is the organization as a whole structured?” must be taken into
consideration so that each software project can be finished before its
deadline.
Functional
format vs. project format
There are
essentially two broad ways in which a software development organization can be
structured: functional format and project format. In the project format, the
project development staff are divided based on the project for which they work
(as shown in fig. 12.1). In the functional format, the development staff are
divided based on the functional group to which they belong. The different
projects borrow engineers from the required functional groups for specific
phases to be undertaken in the project and return them to the functional group
upon the completion of the phase.
In the functional format, different teams of
programmers perform different phases of a project. For example, one team might
do the requirements specification, another do the design, and so on. The
partially completed product passes from one team to another as the project
evolves. Therefore, the functional format requires considerable communication
among the different teams because the work of one team must be clearly
understood by the subsequent teams working on the project. This requires good
quality documentation to be produced after every activity.
In the project format, a set of engineers is
assigned to the project at the start of the project and they remain with the
project till the completion of the project. Thus, the same team carries out all
the life cycle activities. Obviously, the functional format requires more
communication among teams than the project format, because one team must
understand the work done by the previous teams.
Advantages
of functional organization over project organization
Even
though greater communication among the team members may appear as an avoidable
overhead, the functional format has many advantages. The main advantages of a
functional organization are:
• Ease of staffing
• Production of good quality documents
• Job specialization
• Efficient handling of the problems
associated with manpower turnover.
59. Q: What do you
mean by Team Structure? What are Different Team Formats? Differentiate them.
Or
What do you mean by Democratic and Mixed
Team Structure? Differentiate them.
Answer:
Team structure addresses the issue of organization
of the individual project teams. There are some possible ways in which the
individual project teams can be organized. There are mainly three formal team structures:
chief programmer, democratic, and the mixed team organizations although several
other variations to these structures are possible. Problems of different
complexities and sizes often require different team structures for their
solution.
Chief
Programmer Team
In this
team organization, a senior engineer provides the technical leadership and is
designated as the chief programmer. The chief programmer partitions the task
into small activities and assigns them to the team members. He also verifies and
integrates the products developed by different team members. The structure of
the chief programmer team is shown in fig. 12.2. The chief programmer provides
an authority, and this structure is arguably more efficient than the democratic
team for well-understood problems. However, the chief programmer team leads to
lower team morale, since team-members work under the constant supervision of
the chief programmer. This also inhibits their original thinking. The chief
programmer team is subject to single point failure since too much
responsibility and authority is assigned to the chief programmer.
The chief
programmer team is probably the most efficient way of completing simple and
small projects since the chief programmer can work out a satisfactory design
and ask the programmers to code different modules of his design solution. For
example, suppose an organization has successfully completed many simple MIS
projects. Then, for a similar MIS project, chief programmer team structure can
be adopted. The chief programmer team structure works well when the task is
within the intellectual grasp of a single individual. However, even for simple
and well-understood problems, an organization must be selective in adopting the
chief programmer structure. The chief programmer team structure should not be
used unless the importance of early project completion outweighs other factors
such as team morale, personal development, life-cycle cost, etc.
Democratic Team
The
democratic team structure, as the name implies, does not enforce any formal
team hierarchy (as shown in fig. 12.3). Typically, a manager provides the
administrative leadership. At different times, different members of the group
provide technical leadership. The democratic organization leads to higher morale
and job satisfaction. Consequently, it suffers from less man-power turnover.
Also, democratic team structure is appropriate for less understood problems,
since a group of engineers can invent better solutions than a single individual
as in a chief programmer team. A democratic team structure is suitable for
projects requiring less than five or six engineers and for research-oriented
projects. For large sized projects, a pure democratic organization tends to
become chaotic. The democratic team organization encourages egoless programming
as programmers can share and review one another’s work.
Mixed
Control Team Organization
The mixed
team organization, as the name implies, draws upon the ideas from both the
democratic organization and the chief-programmer organization. The mixed
control team organization is shown pictorially in fig. 12.4. This team
organization incorporates both hierarchical reporting and democratic set up. In
fig. 12.4, the democratic connections are shown as dashed lines and the reporting
structure is shown using solid arrows. The mixed control team organization is
suitable for large team sizes. The democratic arrangement at the senior
engineers level is used to decompose the problem into small parts. Each
democratic setup at the programmer level attempts solution to a single part.
Thus, this team organization is eminently suited to handle large and complex
programs. This team structure is extremely popular and is being used in many
software development companies.
60. What are the
characteristics of a Good Software Engineer? Mention briefly.
Answer:
Characteristics
of a good software engineer
The
attributes that good software engineers should possess are as follows:
- Exposure to systematic techniques, i.e. familiarity with software engineering principles.
- Good technical knowledge of the project areas (Domain knowledge).
- Good programming abilities.
- Good communication skills. These comprise oral, written, and interpersonal skills.
- High motivation.
- Sound knowledge of fundamentals of computer science.
- Intelligence.
- Ability to work in a team.
- Discipline, etc.
61. What do you
mean by a risk? How does the risk management technique work?
Answer:
A risk is
any anticipated unfavorable event or circumstance that can occur while a
project is underway. If a risk becomes real, it can adversely affect the
project and hamper the successful and timely completion of the project.
Risk
management
A software
project can be affected by a large variety of risks. In order to be able to
systematically identify the important risks which might affect a software
project, it is necessary to categorize risks into different classes. The
project manager can then examine which risks from each class are relevant to
the project. There are three main categories of risks which can affect a
software project:
- Project risks. Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project; it is very difficult to control something which cannot be seen. For any manufacturing project, such as the manufacturing of cars, the project manager can see the product taking shape. He can, for instance, see that the engine is fitted, after that the doors are fitted, the car is getting painted, etc. Thus he can easily assess the progress of the work and control it. The invisibility of the product being developed is an important reason why many software projects suffer from the risk of schedule slippage.
- Technical risks. Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specification, incomplete specification, changing specification, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team’s insufficient knowledge about the project.
- Business risks. These risks include building an excellent product that no one wants, losing budgetary or personnel commitments, etc.
Risk
assessment
The
objective of risk assessment is to rank the risks in terms of their damage
causing potential. For risk assessment, first each risk should be rated in two
ways:
• The likelihood of a risk coming true
(denoted as r).
• The consequence of the problems
associated with that risk (denoted as s).
Based
on these two factors, the priority of each risk can be computed:
p = r * s
Where,
p is the priority with which the risk must be handled, r is the probability of
the risk becoming true, and s is the severity of damage caused due to the risk
becoming true. If all identified risks are prioritized, then the most likely
and damaging risks can be handled first and more comprehensive risk abatement
procedures can be designed for these risks.
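The prioritization computation described above can be sketched in a few lines of Python; the risk names and ratings below are invented for illustration, not drawn from any project:

```python
# Rank risks by priority p = r * s, where r is the likelihood of the
# risk coming true and s is the severity of the resulting damage.
# The risk names and (r, s) ratings are illustrative only.
risks = {
    "schedule slippage": (0.7, 9),
    "key personnel leaving": (0.3, 8),
    "ambiguous specification": (0.5, 6),
}

def priority(r, s):
    return r * s

# Handle the most likely and most damaging risks first.
ranked = sorted(risks.items(), key=lambda kv: priority(*kv[1]), reverse=True)
for name, (r, s) in ranked:
    print(f"{name}: p = {priority(r, s):.1f}")
```

Sorting by p puts the risks deserving the most comprehensive abatement procedures at the top of the list.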
Risk
containment
After
all the identified risks of a project are assessed, plans must be made to
contain the most damaging and the most likely risks. Different risks require
different containment procedures. In fact, most risks require ingenuity on the
part of the project manager in tackling the risk.
There
are three main strategies to plan for risk containment:
- Avoid the risk: This may take several forms such as discussing with the customer to change the requirements to reduce the scope of the work, giving incentives to the engineers to avoid the risk of manpower turnover, etc.
- Transfer the risk: This strategy involves getting the risky component developed by a third party, buying insurance cover, etc.
- Risk reduction: This involves planning ways to contain the damage due to a risk. For example, if there is risk that some key personnel might leave, new recruitment may be planned.
Risk
leverage
To choose between the different
strategies of handling a risk, the project manager must consider the cost of
handling the risk and the corresponding reduction of risk. For this the risk
leverage of the different risks can be computed.
Risk
leverage is the difference in risk exposure divided by the cost of reducing the
risk. More formally,
risk leverage = (risk exposure before
reduction – risk exposure after reduction) / (cost of reduction)
62. What do you
mean by Software Configuration Management? Why is it necessary? [B.E. 2010]
Answer:
The results (also called
the deliverables) of a large software development effort typically consist
of a large number of objects, e.g. source code, design document, SRS document,
test document, user's manual, etc. These objects are usually referred to and
modified by a number of software engineers throughout the life cycle of the
software. The state of all these objects at any point of time is called the
configuration of the software product. The state of each deliverable object
changes as development progresses and also as bugs are detected and fixed.
Necessity of software
configuration management
There are several reasons
for putting an object under configuration management. But, possibly the most
important reason for configuration management is to control the access to the
different deliverable objects. Unless strict discipline is enforced regarding
the updating and storage of different objects, several problems appear. The
following are some of the important problems that appear if configuration
management is not used.
- Inconsistency problem when the objects are replicated.
- Problems associated with concurrent access.
- Providing a stable development environment.
- System accounting and maintaining status information. System accounting keeps track of who made a particular change and when the change was made.
- Handling variants.
63. What do you
mean by version, release and revision of a software product?
Answer:
A new version of software
is created when there is a significant change in functionality, technology, or
the hardware it runs on, etc. On the other hand a new revision of software
refers to minor bug fix in that software. A new release is created if there is
only a bug fix, minor enhancements to the functionality, usability, etc. For example,
one version of a mathematical computation package might run on Unix-based
machines, another on Microsoft Windows and so on. As software is released and
used by the customer, errors are discovered that need correction. Enhancements
to the functionalities of the software may also be needed. A new release of
software is an improved system intended to replace an old one. Often systems
are described as version m, release n; or simply m.n. Formally, a history
relation 'is version of' can be defined between objects. This relation can be
split into two sub-relations: 'is revision of' and 'is variant of'.
64. How
Configuration Control is carried out? What are the different activities of
Configuration controls?
Answer:
Configuration management
is carried out through two principal activities:
• Configuration identification involves deciding
which parts of the system should be kept track of.
• Configuration control ensures that changes to a
system happen smoothly.
Configuration identification
Typical controllable
objects include:
- Requirements specification document
- Design documents
- Tools used to build the system, such as compilers, linkers, lexical analyzers, parsers, etc.
- Source code for each module
- Test cases
- Problem reports
Configuration control
Configuration control is
the process of managing changes to controlled objects. Configuration control is
the part of a configuration management system that most directly affects the
day-to-day operations of developers. The configuration control system prevents
unauthorized changes to any controlled objects. In order to change a controlled
object such as a module, a developer can get a private copy of the module by a
reserve operation as shown in fig. 3.15. Configuration management tools allow
only one person to reserve a module at a time. Once an object is reserved, no
one else is allowed to reserve that module until the reserved module is
restored, as shown in fig. 3.15. Thus, by preventing more than one engineer
from simultaneously reserving a module, the problems associated with concurrent
access are solved.
Changes to an object under configuration control are carried out as follows.
The engineer needing to change a module first obtains a private copy of the module
through a reserve operation. Then, he carries out all necessary changes on this
private copy. However, restoring the changed module to the system configuration
requires the permission of a change control board (CCB). The CCB is usually
constituted from among the development team members. For every change that
needs to be carried out, the CCB reviews the changes made to the controlled
object and certifies several things about the change:
1. Change is well-motivated.
2. Developer has considered and documented the effects of the
change.
3. Changes interact well with the changes made by other
developers.
4. Appropriate people (CCB) have validated the change, e.g.
someone has tested the changed code, and has verified that the change is
consistent with the requirement.
Fig. 3.15: Reserve and restore operation in configuration control
The change control board
(CCB) sounds like a group of people. However, except for very large projects,
the functions of the change control board are normally discharged by the
project manager himself or some senior member of the development team. Once the
CCB reviews the changes to the module, the project manager updates the old base
line through a restore operation (as shown in fig. 12.5). A configuration
control tool does not allow a developer to replace an object he has reserved
with his local copy unless he gets an authorization from the CCB. By
constraining the developers’ ability to replace reserved objects, a stable
environment is achieved. Since a configuration management tool allows only one
engineer to work on one module at any one time, problem of accidental
overwriting is eliminated. Also, since only the manager can update the baseline
after the CCB approval, unintentional changes are eliminated.
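The reserve/restore discipline described in this answer can be sketched as a small Python class; the method names mirror the operations in the text, while the CCB review is reduced to a boolean flag for illustration:

```python
# Toy sketch of the reserve/restore discipline. Only one engineer may
# hold a module at a time; restoring the changed copy to the baseline
# requires CCB approval (reduced here to a flag).
class ConfigurationControl:
    def __init__(self):
        self.reserved_by = {}  # module -> engineer currently holding it

    def reserve(self, module, engineer):
        # A second reserve on the same module is refused, which prevents
        # the problems associated with concurrent access.
        if module in self.reserved_by:
            raise RuntimeError(f"{module} already reserved by "
                               f"{self.reserved_by[module]}")
        self.reserved_by[module] = engineer

    def restore(self, module, engineer, ccb_approved):
        # Only the reserving engineer may restore, and only after the
        # CCB has certified the change.
        if self.reserved_by.get(module) != engineer:
            raise RuntimeError("only the reserving engineer may restore")
        if not ccb_approved:
            raise RuntimeError("CCB approval required to update baseline")
        del self.reserved_by[module]

cc = ConfigurationControl()
cc.reserve("billing.c", "alice")
# cc.reserve("billing.c", "bob") would raise: concurrent access prevented
cc.restore("billing.c", "alice", ccb_approved=True)
```

Real tools (e.g. version control systems with locking) implement the same two constraints: exclusive reservation and gated restoration.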
65. What do you
mean by Requirement Analysis and Specification?
Answer:
The goal of the
requirement analysis and specification phase is to study the customer
requirements and to systematically organize the requirements into a
specification document. The requirement analysis and specification phase starts
after the feasibility study is complete.
66. What do you
mean by SRS? What are its different components?
Answer:
The SRS document is the
final outcome of the requirements analysis and specification phase.
The important parts of SRS document
are:
- Functional requirements of the system
- Non-functional requirements of the system, and
- Goals of implementation
Functional
requirements:-
- The functional requirements part discusses the functionalities required from the system. The system is considered to perform a set of high-level functions {f_i}. The functional view of the system is shown in fig. 3.1. Each function f_i of the system can be considered as a transformation of a set of input data (i_i) to the corresponding set of output data (o_i). The user can get some meaningful piece of work done using a high-level function.
Fig. 3.1: View of a system performing a set of
functions
Nonfunctional requirements:-
- Nonfunctional requirements deal with the characteristics of the system which cannot be expressed as functions - such as the maintainability of the system, portability of the system, usability of the system, etc.
- Nonfunctional requirements may include:
# reliability issues,
# accuracy of results,
# human-computer interface issues,
# constraints on the system implementation, etc.
Goals of implementation:-
The
goals of implementation part documents some general suggestions regarding
development. These suggestions guide trade-off among design goals. The goals of
implementation section might document issues such as revisions to the system
functionalities that may be required in the future, new devices to be supported
in the future, reusability issues, etc. These are the items which the
developers might keep in their mind during development so that the developed
system may meet some aspects that are not required immediately.
67. What are the
key properties of a good SRS?
Answer:
The important properties of a good SRS document are the
following:
- Concise. The SRS document should be concise and at the same time unambiguous, consistent, and complete. Verbose and irrelevant descriptions reduce readability and also increase error possibilities.
- Structured. It should be well-structured. A well-structured document is easy to understand and modify. In practice, the SRS document undergoes several revisions to cope with the customer requirements. Often, the customer requirements evolve over a period of time. Therefore, in order to make the modifications to the SRS document easy, it is important to make the document well-structured.
- Black-box view. It should only specify what the system should do and refrain from stating how to do these. This means that the SRS document should specify the external behavior of the system and not discuss the implementation issues. The SRS document should view the system to be developed as black box, and should specify the externally visible behavior of the system. For this reason, the SRS document is also called the black-box specification of a system.
- Conceptual integrity. It should show conceptual integrity so that the reader can easily understand it.
- Response to undesired events. It should characterize acceptable responses to undesired events. These are called system response to exceptional conditions.
- Verifiable. All requirements of the system as documented in the SRS document should be verifiable. This means that it should be possible to determine whether or not requirements have been met in an implementation.
68. What are the
key properties of a bad SRS?
Answer:
The important properties of a bad SRS document are the
following:
- Over specification. It restricts the freedom of the designer in arriving at the design solution.
- Forward references. We should not refer to aspects that are discussed much later in the SRS document. It reduces reliability of the specification.
- Wishful thinking. This type of problem concerns descriptions of aspects which would be difficult to implement.
69. What is a
decision tree? Give an example.
Answer:
A
decision tree gives a graphic view of the processing logic involved in decision
making and the corresponding actions taken. The edges of a decision tree
represent conditions and the leaf nodes represent the actions to be performed
depending on the outcome of testing the condition.
Example:
-
Consider Library Membership Automation
Software (LMS) where it should support the following three options:
- New member
- Renewal
- Cancel membership
New
member option-
Decision:
When the 'new member' option is
selected, the software asks details about the member like the member's name,
address, phone number etc.
Action:
If proper information is entered then a
membership record for the member is created and a bill is printed for the
annual membership charge plus the security deposit payable.
Renewal
option-
Decision:
If the 'renewal' option is chosen, the
LMS asks for the member's name and his membership number to check whether he is
a valid member or not.
Action:
If the membership is valid then
membership expiry date is updated and the annual membership bill is printed,
otherwise an error message is displayed.
Cancel
membership option-
Decision: If
the 'cancel membership' option is selected, then the software asks for member's
name and his membership number.
Action: The
membership is cancelled, a cheque for the balance amount due to the member is
printed and finally the membership record is deleted from the database.
Decision tree representation of the
above example - The following tree (fig. 3.4) shows the
graphical representation of the above example. After getting information from
the user, the system makes a decision and then performs the corresponding
actions.
Fig. 3.4: Decision tree for LMS
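The decision/action pairs above can be mirrored directly as nested conditionals; the option strings and returned action strings are illustrative paraphrases of the text:

```python
# Sketch of the LMS decision logic described above. Each branch mirrors
# one path of the decision tree; the action strings paraphrase the text.
def lms(option, data):
    if option == "new member":
        if data.get("details_ok"):
            return "create record, print membership bill"
        return "display error message"
    elif option == "renewal":
        if data.get("valid_member"):
            return "update expiry date, print membership bill"
        return "display error message"
    elif option == "cancel membership":
        return "print balance cheque, delete membership record"
    return "display error message"

print(lms("renewal", {"valid_member": True}))
```

Each leaf of the decision tree corresponds to one return statement here, which is why decision trees translate so mechanically into code.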
70. What is a
decision table? Give an example.
Answer:
A decision table is used to represent
the complex processing logic in a tabular or a matrix form. The upper rows of
the table specify the variables or conditions to be evaluated. The lower rows
of the table specify the actions to be taken when the corresponding conditions
are satisfied. A column in a table is called a rule. A rule implies that
if a condition is true, then the corresponding action is to be executed.
Example:
-
Consider
the previously discussed LMS example. The following decision table (fig. 3.5)
shows how to represent the LMS problem in a tabular form. Here the table is
divided into two parts, the upper part shows the conditions and the lower part
shows what actions are taken. Each column of the table is a rule.
From the above table you can easily
understand that, if the valid selection condition is false then the action
taken for this condition is 'display error message'. Similarly, the actions
taken for other conditions can be inferred from the table.
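Since fig. 3.5 is not reproduced here, the following sketch shows one way a decision table can be held as data and evaluated; the condition ordering and action strings paraphrase the LMS example and are illustrative only:

```python
# A decision table as data: each rule maps a tuple of condition values
# to an action; None means "don't care". The rules paraphrase the LMS
# table (fig. 3.5 is not reproduced), so treat them as illustrative.
# Conditions: (valid_selection, new_member, renewal, cancel)
rules = [
    ((False, None,  None,  None ), "display error message"),
    ((True,  True,  False, False), "create record, print bill"),
    ((True,  False, True,  False), "update expiry, print bill"),
    ((True,  False, False, True ), "print cheque, delete record"),
]

def decide(conditions):
    # Scan the columns (rules) and fire the first matching action.
    for pattern, action in rules:
        if all(p is None or p == c for p, c in zip(pattern, conditions)):
            return action
    raise ValueError("no rule matches")

print(decide((False, False, False, False)))  # -> display error message
```

Keeping the table as data rather than as nested if-statements makes it easy to check that every combination of conditions is covered by exactly one rule.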
71. What is a
formal language specification? Give an example.
Answer:
Formal technique
A formal technique is a mathematical method to specify a
hardware and/or software system, verify whether a specification is realizable,
verify that an implementation satisfies its specification, prove properties of
a system without necessarily running the system, etc. The mathematical basis of
a formal method is provided by the specification language.
Formal
specification language
A
formal specification language consists of two sets syn and sem, and a relation
sat between them. The set syn is called the syntactic domain, the set sem is
called the semantic domain, and the relation sat is called the satisfaction
relation. For a given specification syn, and model of the system sem, if sat
(syn, sem), as shown in fig. 3.6, then syn is said to be the specification of
sem, and sem is said to be the specificand of syn.
Model-oriented vs. property-oriented approaches
Formal methods are usually classified
into two broad categories: model-oriented and property-oriented
approaches. In a model-oriented style, one defines a system’s behavior directly
by constructing a model of the system in terms of mathematical structures such
as tuples, relations, functions, sets, sequences, etc.
In the property-oriented style, the
system's behavior is defined indirectly by stating its properties, usually in
the form of a set of axioms that the system must satisfy.
Example:-
Let us consider a simple producer/consumer example. In a property-oriented
style, one would probably start by listing the properties of the system,
such as: the
consumer can start consuming only after the producer has produced an item, the
producer starts to produce an item only after the consumer has consumed the
last item, etc. A good example of a producer-consumer problem is CPU-Printer
coordination. After processing of data, CPU outputs characters to the buffer
for printing. Printer, on the other hand, reads characters from the buffer and
prints them. The CPU is constrained by the capacity of the buffer, whereas the
printer is constrained by an empty buffer. Examples of property-oriented
specification styles are axiomatic specification and algebraic specification.
In a model-oriented approach, we start
by defining the basic operations, p (produce) and c (consume). Then we can
state that S1 + p → S, S + c → S1. Thus the model-oriented approaches
essentially specify a program by writing another, presumably simpler program.
Examples of popular model-oriented specification techniques are Z, CSP, CCS,
etc.
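The two rewrite rules S1 + p → S and S + c → S1 amount to a two-state transition system, which can be mirrored directly; this is a sketch, reading S1 as "buffer empty" and S as "buffer full":

```python
# Tiny model of the producer/consumer specification above. State "S1"
# means the buffer is empty, "S" means it is full. The rule S1 + p -> S
# lets the producer act only on an empty buffer; S + c -> S1 lets the
# consumer act only on a full one. Anything else is rejected.
transitions = {("S1", "p"): "S", ("S", "c"): "S1"}

def step(state, op):
    if (state, op) not in transitions:
        raise ValueError(f"operation {op!r} not allowed in state {state}")
    return transitions[(state, op)]

state = "S1"
for op in ["p", "c", "p"]:  # produce, consume, produce: a legal trace
    state = step(state, op)
print(state)  # -> S
```

This illustrates the model-oriented idea of specifying a program by writing another, simpler program: the dictionary of transitions is itself the specification.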
Model-oriented approaches are more
suited to use in the later phases of the life cycle, because even minor
changes to a specification may lead to drastic changes to the entire
specification. They do not support logical conjunctions (AND) and
disjunctions (OR).
Property-oriented
approaches are suitable for requirements specification because they can be
easily changed. They specify a system as a conjunction of axioms and you can
easily replace one axiom with another one.
72. What are the
merits of formal requirements specification?
Answer:
Merits of formal requirements specification
Formal methods possess several positive features, some of
which are discussed below.
• Formal specifications encourage rigour. Often, the very process
of construction of a rigorous specification is more important than the formal
specification itself. The construction of a rigorous specification clarifies
several aspects of system behavior that are not obvious in an informal
specification.
• Formal methods usually have a well-founded mathematical basis.
Thus, formal specifications are not only more precise, but also mathematically
sound and can be used to reason about the properties of a specification and to
rigorously prove that an implementation satisfies its specifications.
• Formal methods have well-defined semantics. Therefore, ambiguity
in specifications is automatically avoided when one formally specifies a
system.
• The mathematical basis of the formal methods facilitates
automating the analysis of specifications. For example, a tableau-based
technique has been used to automatically check the consistency of
specifications. Also, automatic theorem proving techniques can be used to
verify that an implementation satisfies its specifications. The possibility of
automatic verification is one of the most important advantages of formal
methods.
73. What is
axiomatic specification? Give example.
Answer:
Axiomatic specification
In axiomatic specification of a system, first-order logic is
used to write the pre and post-conditions to specify the operations of the
system in the form of axioms. The pre-conditions basically capture the
conditions that must be satisfied before an operation can successfully be
invoked. In essence, the pre-conditions capture the requirements on the input
parameters of a function. The post-conditions are the conditions that must be
satisfied when a function completes execution for the function to be considered
to have executed successfully. Thus, the post-conditions are essentially
constraints on the results produced for the function execution to be considered
successful.
The following sequence of steps can be followed to systematically develop
the axiomatic specification of a function:
• Establish the range of input values over which the function
should behave correctly. Also find out other constraints on the input
parameters and write it in the form of a predicate.
• Specify a predicate defining the conditions which must hold on
the output of the function if it behaved properly.
• Establish the changes made to the function’s input parameters after execution of the function. Pure mathematical functions do not change their input and therefore this type of assertion is not necessary for pure functions.
• Combine all of the above into pre and
post conditions of the function.
Example1: -
Specify the pre- and post-conditions of
a function that takes a real number as argument and returns half the input
value if the input is less than or equal to 100, or else returns double the
value.
f (x : real) : real
pre : x ∈ R
post : {(x≤100) ∧ (f(x) = x/2)} ∨
{(x>100) ∧ (f(x) = 2∗x)}
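The pre- and post-conditions of Example 1 translate directly into runtime assertions; the following is a Python sketch of that idea:

```python
# The function from Example 1 with its pre- and post-conditions checked
# at run time. pre: x is a real number; post: f(x) = x/2 when x <= 100,
# and f(x) = 2*x otherwise.
def f(x):
    assert isinstance(x, (int, float)), "pre-condition violated: x must be real"
    result = x / 2 if x <= 100 else 2 * x
    assert (x <= 100 and result == x / 2) or (x > 100 and result == 2 * x), \
        "post-condition violated"
    return result

print(f(50))   # -> 25.0
print(f(200))  # -> 400
```

Checking the axioms as assertions is exactly the sense in which an implementation can be verified against its axiomatic specification.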
74. Identify
the requirements of algebraic specifications in order to define a system.
Answer:
In the algebraic specification technique an object class or type is specified
in terms of the relationships existing between the operations defined on that
type. Various notations of algebraic specifications have evolved, including
those based on the OBJ and Larch languages. Essentially, algebraic
specifications define a system as a heterogeneous algebra. A heterogeneous
algebra is a collection of different sets on which several operations are
defined. Traditional algebras are homogeneous: a homogeneous algebra consists
of a single set and several operations, e.g. {I, +, -, *, /}. In contrast,
alphabetic strings together with the operations of concatenation and length,
{A, I, con, len}, do not form a homogeneous algebra, since the range of the
length operation is the set of integers. To define a heterogeneous algebra,
one first needs to specify its signature: the involved operations and their
domains and ranges. Using algebraic specification, the meaning of a set of
interface procedures can be defined by using equations. An algebraic
specification is usually presented in four sections.
Types section:- In this section, the sorts (or the data types) being used
are specified.
Exceptions section:- This section gives the names of the exceptional
conditions that might occur when different operations are carried out. These
exception conditions are used in the later sections of an algebraic
specification.
Syntax section:- This section defines the signatures of the interface
procedures. The collection of sets that form the input domain of an operator
and the sort where the output is produced are called the signature of the
operator. For example, PUSH takes a stack and an element and returns a new
stack:
stack x element → stack
Equations section:- This section gives a set of rewrite rules (or equations)
defining the meaning of the interface procedures in terms of each other. In
general, this section is allowed to contain conditional expressions. By
convention, each equation is implicitly universally quantified over all
possible values of the variables. Names not mentioned in the syntax section,
such as 'r' or 'e', are variables. The first step in defining an algebraic
specification is to identify the set of required operations. After having
identified the required operators, it is helpful to classify them as either
basic constructor operators, extra constructor operators, basic inspector
operators, or extra inspector operators. The definition of these categories
of operators is as follows:
- Basic construction operators. These operators are used to create or modify entities of a type. The basic construction operators are essential to generate all possible elements of the type being specified. For example, ‘create’ and ‘append’ are basic construction operators in a FIFO queue.
- Extra construction operators. These are the construction operators other than the basic construction operators. For example, the operator ‘remove’ is an extra construction operator in a FIFO queue because even without using ‘remove’, it is possible to generate all values of the type being specified.
- Basic inspection operators. These operators evaluate attributes of a type without modifying them, e.g., eval, get, etc. Let S be the set of operators whose range is not the data type being specified. The set of basic inspection operators S1 is a subset of S, such that each operator from S − S1 can be expressed in terms of the operators from S1.
- Extra inspection operators. These are the inspection operators that are not basic inspectors.
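The four operator categories can be illustrated with a minimal Python sketch of the FIFO queue mentioned above. The names `create`, `append`, and `remove` come from the text; the inspector `first` is a hypothetical example added here for illustration.

```python
# Illustrative FIFO queue (queues represented as Python lists).
# 'create' and 'append' are basic construction operators: together they can
# generate every possible queue value. 'remove' is an extra construction
# operator: every value it produces could also be built with create/append
# alone. 'first' (a name assumed here, not from the text) is a basic
# inspection operator: its range (an element) is not the queue type itself.

def create():
    """Basic constructor: the empty queue."""
    return []

def append(q, e):
    """Basic constructor: the queue q with e added at the rear."""
    return q + [e]

def remove(q):
    """Extra constructor: the queue q without its front element."""
    return q[1:]

def first(q):
    """Basic inspector: the front element; does not modify the queue."""
    return q[0]

q = append(append(create(), 1), 2)   # the queue [1, 2]
assert first(q) == 1                 # FIFO: oldest element at the front
assert remove(q) == [2]              # same value create/append could build
```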
Example: Let us specify a data type point supporting the operations create,
xcoord, ycoord, and isequal, where the operations have their usual meanings.
Types:
defines point
uses boolean, integer
Syntax:
1. create : integer × integer → point
2. xcoord : point → integer
3. ycoord : point → integer
4. isequal : point × point → boolean
Equations:
1. xcoord(create(x, y)) = x
2. ycoord(create(x, y)) = y
3. isequal(create(x1, y1), create(x2, y2)) = ((x1 = x2) and (y1 = y2))
In this example, there is only one basic construction operator (create) and
three basic inspection operators (xcoord, ycoord, and isequal); hence only
three equations are needed.
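The point specification above can be checked mechanically. The following Python sketch represents a point as a tuple and asserts the three rewrite rules from the Equations section; the function names follow the Syntax section exactly.

```python
# A minimal executable model of the 'point' algebraic specification.
# Points are represented as (x, y) tuples; this representation is an
# assumption of the sketch, not part of the specification itself.

def create(x, y):
    """create : integer x integer -> point."""
    return (x, y)

def xcoord(p):
    """xcoord : point -> integer."""
    return p[0]

def ycoord(p):
    """ycoord : point -> integer."""
    return p[1]

def isequal(p, q):
    """isequal : point x point -> boolean."""
    return xcoord(p) == xcoord(q) and ycoord(p) == ycoord(q)

# Equation 1: xcoord(create(x, y)) = x
assert xcoord(create(3, 4)) == 3
# Equation 2: ycoord(create(x, y)) = y
assert ycoord(create(3, 4)) == 4
# Equation 3: isequal(create(x1, y1), create(x2, y2)) = (x1 = x2) and (y1 = y2)
assert isequal(create(3, 4), create(3, 4))
assert not isequal(create(3, 4), create(3, 5))
```

Any implementation of the abstract data type, whatever its internal representation, must satisfy these same equations.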
75. What are the characteristics of good software design?
Answer:
The characteristics are listed below:
- Correctness: A good design should correctly implement all the functionalities identified in the SRS document.
- Understandability: A good design is easily understandable.
- Efficiency: It should not make wasteful use of system resources such as processor time and memory.
- Maintainability: It should be easily amenable to change.
Possibly
the most important goodness criterion is design correctness. A design has to be
correct to be acceptable. Given that a design solution is correct,
understandability of a design is possibly the most important issue to be
considered while judging the goodness of a design. A design that is easy to
understand is also easy to develop, maintain and change. Thus, unless a design
is easily understandable, it would require tremendous effort to implement and
maintain it.
76. What is cohesion? What are the different types of cohesion?
Answer:
Most researchers and engineers agree
that a good software design implies clean decomposition of the problem into
modules, and the neat arrangement of these modules in a hierarchy. The primary
characteristics of neat module decomposition are high cohesion and low
coupling. Cohesion is a measure of functional strength of a module. A module
having high cohesion and low coupling is said to be functionally independent of
other modules. By the term functional independence, we mean that a cohesive
module performs a single task or function. A functionally independent module
has minimal interaction with other modules.
Classification of cohesion
The
different classes of cohesion that a module may possess are depicted in fig.
4.1.
- Coincidental cohesion: A module is said to have coincidental cohesion, if it performs a set of tasks that relate to each other very loosely, if at all. In this case, the module contains a random collection of functions. It is likely that the functions have been put in the module out of pure coincidence without any thought or design. For example, in a transaction processing system (TPS), the get-input, print-error, and summarize-members functions are grouped into one module. The grouping does not have any relevance to the structure of the problem.
- Logical cohesion: A module is said to be logically cohesive, if all elements of the module perform similar operations, e.g. error handling, data input, data output, etc. An example of logical cohesion is the case where a set of print functions generating different output reports are arranged into a single module.
- Temporal cohesion: When a module contains functions that are related by the fact that all the functions must be executed in the same time span, the module is said to exhibit temporal cohesion. The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit temporal cohesion.
- Procedural cohesion: A module is said to possess procedural cohesion, if the set of functions of the module are all part of a procedure (algorithm) in which certain sequence of steps have to be carried out for achieving an objective, e.g. the algorithm for decoding a message.
- Communicational cohesion: A module is said to have communicational cohesion, if all functions of the module refer to or update the same data structure, e.g. the set of functions defined on an array or a stack.
- Sequential cohesion: A module is said to possess sequential cohesion, if the elements of a module form the parts of sequence, where the output from one element of the sequence is input to the next. For example, in a TPS, the get-input, validate-input, sort-input functions are grouped into one module.
- Functional cohesion: Functional cohesion is said to exist, if different elements of a module cooperate to achieve a single function. For example, a module containing all the functions required to manage employees’ pay-roll exhibits functional cohesion. Suppose a module exhibits functional cohesion and we are asked to describe what the module does, then we would be able to describe it using a single sentence.
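Two of the classes above can be contrasted with a small Python sketch. The function names are illustrative, loosely following the TPS example in the text, and are not from any real system.

```python
# Sequential cohesion: the elements form parts of a sequence, where the
# output of one step is the input to the next (get-input -> validate-input
# -> sort-input, as in the TPS example above).
def process_transactions(raw_records):
    records = [r.strip() for r in raw_records]   # get-input: clean raw lines
    valid = [r for r in records if r]            # validate-input: drop blanks
    return sorted(valid)                         # sort-input: order the rest

# Coincidental cohesion: unrelated tasks grouped purely by accident;
# nothing connects these functions except sharing a module.
def print_error(msg):
    print("error:", msg)

def summarize_members(members):
    return len(members)

assert process_transactions([" b ", "", "a"]) == ["a", "b"]
```

A reader can describe `process_transactions` in one sentence, while the coincidental group can only be described as a list of unrelated tasks, which is exactly the intuition behind the cohesion scale.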
77. What is coupling? What are the different types of coupling?
Answer:
Coupling between two modules is a
measure of the degree of interdependence or interaction between the two
modules. A module having high cohesion and low coupling is said to be
functionally independent of other modules. If two modules interchange large
amounts of data, then they are highly interdependent. The degree of coupling
between two modules depends on their interface complexity.
The
interface complexity is basically determined by the number and types of
parameters that are interchanged while invoking the functions of the module.
Classification of Coupling
Even though there are no techniques to precisely and
quantitatively estimate the coupling between two modules, classifying the
different types of coupling helps to approximately estimate the degree of
coupling between two modules. Five types of coupling can occur between any two
modules. This is shown in fig. 4.2.
- Data coupling: Two modules are data coupled, if they communicate through a parameter. An example is an elementary data item passed as a parameter between two modules, e.g. an integer, a float, a character, etc. This data item should be problem related and not used for the control purpose.
- Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data item such as a record in PASCAL or a structure in C.
- Control coupling: Control coupling exists between two modules, if data from one module is used to direct the order of instructions execution in another. An example of control coupling is a flag set in one module and tested in another module.
- Common coupling: Two modules are common coupled, if they share data through some global data items.
- Content coupling: Content coupling exists between two modules, if they share code, e.g. a branch from one module into another module.
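Three of the coupling classes above can be sketched in Python. The modules and names here are hypothetical, invented for illustration only.

```python
from dataclasses import dataclass

# Data coupling: an elementary, problem-related item (a float) is passed
# as a parameter between two "modules".
def compute_tax(gross_pay):
    return gross_pay * 0.25        # flat 25% rate, purely illustrative

# Stamp coupling: a composite data item (like a C struct or PASCAL record)
# is passed between modules.
@dataclass
class Employee:
    name: str
    gross_pay: float

def print_payslip(emp: Employee):
    print(emp.name, compute_tax(emp.gross_pay))

# Control coupling: a flag set in one module directs the order of
# instruction execution in another.
def report(data, verbose):
    if verbose:                    # flag tested here, set by the caller
        return "items: " + ", ".join(data)
    return str(len(data))

assert compute_tax(100.0) == 25.0
assert report(["a", "b"], False) == "2"
```

Note the grading: `compute_tax` only needs one problem-related value, `print_payslip` depends on the whole `Employee` layout, and `report` lets the caller steer its internal control flow, so each is more tightly coupled than the last.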
78. What do you
mean by functional independence? Why is it needed?
Answer:
A
module having high cohesion and low coupling is said to be functionally
independent of other modules. By the term functional independence, we mean that
a cohesive module performs a single task or function. A functionally
independent module has minimal interaction with other modules.
Functional
independence is a key to any good design due to the following reasons:
• Error isolation: Functional
independence reduces error propagation. The reason behind this is that if a
module is functionally independent, its degree of interaction with the other
modules is less. Therefore, any error existing in a module would not directly
affect the other modules.
• Scope of reuse: Reuse of a
module becomes possible, because each module performs some well-defined and
precise function and its interaction with the other modules is simple and
minimal. Therefore, a cohesive module can be easily taken out and reused in
a different program.
• Understandability: Complexity
of the design is reduced, because different modules can be understood in
isolation as modules are more or less independent of each other.
79. Differentiate between function-oriented and object-oriented design.
Answer:
- The following are some of the important differences between function-oriented and object-oriented design. Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar system, etc.
- In OOD, state information is not represented in a centralized shared memory but is distributed among the objects of the system.
- Function-oriented techniques such as SA/SD group functions together if, as a group, they constitute a higher-level function. On the other hand, object-oriented techniques group functions together on the basis of the data they operate on.
80. Identify three salient features of an object-oriented design approach.
Answer: In the object-oriented design approach, the system is viewed
as a collection of objects (i.e. entities). The state is decentralized among
the objects, and each object manages its own state information. For example,
in a Library Automation Software, each library member may be a separate object
with its own data and functions to operate on these data. In fact, the
functions defined for one object cannot refer to or change the data of other
objects. Objects have their own internal data which define their state.
Similar objects constitute a class; in other words, each object is a member of
some class. Classes may inherit features from a superclass. Conceptually,
objects communicate by message passing.
81. What is a DFD? What are the different elements of a DFD?
Answer.: - The DFD (also
known as a bubble chart) is a hierarchical graphical model of a system that
shows the different processing activities or functions that the system performs
and the data interchange among these functions. Each function is considered as
a processing station (or process) that consumes some input data and produces
some output data. The system is represented in terms of the input data to the
system, various processing carried out on these data, and the output data
generated by the system. A DFD model uses a very limited number of primitive
symbols [as shown in fig. 5.1(a)] to represent the functions performed
by a system and the data flow among these functions.
The different elements
are:
- Function symbol: A function is represented by a circle (also called a bubble); it performs some processing on its input data to produce output data.
- External entity symbol: A rectangle represents an external entity, i.e. a source or sink of data lying outside the system boundary.
- Data flow symbol: A directed arc (arrow) represents a data flow, i.e. data moving between two functions or between a function and a data store.
- Data store symbol: Two parallel lines represent a data store, i.e. a logical file or repository of data; an arrow into or out of it means writing to or reading from the store.
82. When is a DFD said to be synchronous?
Answer:
When two bubbles are directly connected by a data flow arrow, they operate at
the same speed (neither can proceed ahead of the other); such bubbles are said
to be synchronous.
83. When is a DFD said to be balanced?
Answer:
The data that flow into or out of a bubble must
match the data flow at the next level of DFD. This is known as balancing a DFD.
84. What do you mean by the data dictionary of a DFD?
Answer:
A data
dictionary lists all data items appearing in the DFD model of a system. The
data items listed include all data flows and the contents of all data stores
appearing on the DFDs in the DFD model of a system. A data dictionary lists the
purpose of all data items and the definition of all composite data items in
terms of their component data items. For example, a data dictionary entry may
represent that the data grossPay consists of the components regularPay
and overtimePay.
grossPay
= regularPay + overtimePay
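A data dictionary can itself be modeled as a simple lookup table. The sketch below is an assumption of how such entries might be stored (name mapped to a string-valued definition); only the `grossPay` entry comes from the text.

```python
# A toy data dictionary: each data item maps to its definition. Composite
# items are defined in terms of their component data items, as in the
# grossPay example above; the component entries are illustrative.
data_dictionary = {
    "grossPay": "regularPay + overtimePay",   # composite data item
    "regularPay": "integer",                  # component data items
    "overtimePay": "integer",
}

def components(item):
    """Return the component data items named in a composite definition."""
    definition = data_dictionary[item]
    return [part.strip() for part in definition.split("+")]

assert components("grossPay") == ["regularPay", "overtimePay"]
```

Real CASE tools keep exactly this kind of table so that every data flow name on every DFD level can be looked up and checked for consistency.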
85. A software
system called RMS calculating software reads three integer numbers from
the user, each in the range −1000 to +1000, then determines the root mean
square (rms) of the three input numbers and displays it. Draw the DFD for this
software.
Answer:
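The DFD itself is a diagram, but the processing its bubbles would represent can be sketched as code. The function names below (validate, rms, display) are illustrative labels for the bubbles, not prescribed by the question.

```python
import math

def validate(n):
    """validate-input bubble: reject numbers outside [-1000, +1000]."""
    if not -1000 <= n <= 1000:
        raise ValueError("input out of range")
    return n

def rms(a, b, c):
    """compute-rms bubble: square the inputs, average, take the root."""
    return math.sqrt((a * a + b * b + c * c) / 3)

def display(value):
    """display-result bubble: the output data of the system."""
    print("rms =", value)

# Data flow: user input -> validate -> compute-rms -> display.
nums = [validate(n) for n in (3, 4, 5)]
display(rms(*nums))
```

In the level-1 DFD these three functions would appear as separate bubbles, with data flow arrows carrying the three validated numbers between them.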