📘 Complete Textbook · All Levels

Advanced Software Design

CIT316 — A comprehensive, visually rich textbook covering every topic from SDLC to COCOMO II, designed for all skill levels.

8 Core Chapters · 4 Learning Outcomes · 40+ Key Concepts · Real Examples
01
Chapter One
Introduction to Software Development
Lifecycle · Models · Methodologies · Standards · Metrics · Documentation · Training
📌 LO1 — Software Development Lifecycle & Methodologies

What is Software Development?

Software development is the systematic, disciplined process of conceiving, designing, programming, testing, and maintaining applications, frameworks, or other software components. It combines engineering rigor with creative problem-solving to produce reliable, useful systems.

🏗️ Real-World Analogy

Think of software development like constructing a building. You need a blueprint (requirements & design), workers following building codes (methodology & standards), inspectors at each stage (testing), and a maintenance crew after handover (maintenance). Skip any stage and the building — or the software — may collapse.

The Software Development Life Cycle (SDLC)

The SDLC provides a structured framework that defines exactly what needs to happen, in what order, and why — transforming a vague idea into a reliable, delivered product.

🔭 1 Planning → 📋 2 Requirements → ✏️ 3 Design → 💻 4 Implementation → 🧪 5 Testing → 🚀 6 Deployment → 🔧 7 Maintenance

| # | Phase | Key Activities | Key Question |
|---|---|---|---|
| 1 | Planning | Feasibility study, scope definition, risk ID, resource allocation, schedule creation | CAN we build it? |
| 2 | Requirements Analysis | Stakeholder interviews, use cases, user stories, requirements specification document | WHAT should it do? |
| 3 | System Design | Architecture diagrams, database schema, UI wireframes, technology selection | HOW will it work? |
| 4 | Implementation | Source code writing, version control, code reviews, unit test writing | Build it now |
| 5 | Testing & Integration | Unit, integration, system, UAT testing; bug tracking; regression testing | Does it work correctly? |
| 6 | Deployment | Phased rollout, parallel running, user training, go-live | Release to users |
| 7 | Maintenance | Bug fixes, enhancements, performance tuning, security patching | Keep it working |

SDLC Models — Choosing the Right Approach

Different projects need different structures. SDLC models define HOW phases are organized — sequentially, iteratively, or spirally.

Waterfall Model

The oldest, most linear model. Each phase must be 100% complete before the next begins. Best for projects with fixed, well-understood requirements.

✅ Advantages
  • Simple, easy to understand and manage
  • Clear milestones and deliverables
  • Extensive documentation produced
  • Works well for fixed-scope projects
  • Easy for new team members to onboard
❌ Disadvantages
  • No working software until late in cycle
  • Cannot accommodate changing requirements
  • Integration issues found very late
  • Customer sees product only at the end
  • Errors discovered late are extremely costly

Agile / Scrum Model

Agile is a flexible, iterative approach delivering working software in short sprints (1–4 weeks). Built on 4 core values from the Agile Manifesto:

⚡ The Agile Manifesto — 4 Core Values
  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Scrum, the most popular Agile framework, defines the following roles and ceremonies:
Product Owner

Represents stakeholders; owns and prioritizes the product backlog

Scrum Master

Facilitates the process; removes obstacles; coaches the team

Development Team

Cross-functional, self-organizing team that does the actual work

Sprint (1–4 weeks)

Time-boxed iteration producing a potentially shippable increment

Daily Standup

15-min meeting: What did I do? What will I do? Any blockers?

Sprint Retrospective

Team reflects on what went well and what to improve next sprint

Spiral Model

Combines iteration with systematic risk analysis. Each loop of the spiral covers 4 quadrants: Planning → Risk Analysis → Development → Evaluation. Best for large, high-risk projects.

V-Model (Verification & Validation)

| Development Phase (Left V) | Validated By (Right V) |
|---|---|
| Requirements Analysis | Acceptance Testing (UAT) |
| System Design | System Testing |
| Architectural Design | Integration Testing |
| Module Design | Unit Testing |

↕ Implementation sits at the bottom of the V.

Standards and Metrics

| Metric | Definition | What It Tells You |
|---|---|---|
| Lines of Code (LOC) | Count of source code lines | Rough size measure; language-dependent |
| Function Points (FP) | Functionality from user perspective | Language-independent size measure |
| Cyclomatic Complexity | Number of independent code paths | Higher = harder to test & maintain |
| Defect Density | Defects per 1000 LOC (KLOC) | Lower = better quality |
| MTTF | Mean Time to Failure | Higher = more reliable system |
| Code Coverage | % of code executed during tests | Higher = more thorough testing |
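Two of these metrics reduce to simple arithmetic. A minimal Python sketch (the figures are hypothetical) computing defect density and code coverage:

```python
def defect_density(defects: int, loc: int) -> float:
    """Defects per 1000 lines of code (KLOC); lower is better."""
    if loc <= 0:
        raise ValueError("LOC must be positive")
    return defects / (loc / 1000)

def code_coverage(lines_executed: int, lines_total: int) -> float:
    """Percentage of code executed during tests; higher is more thorough."""
    return 100.0 * lines_executed / lines_total

# Hypothetical 25,000-LOC system with 40 known defects:
density = defect_density(40, 25_000)       # 1.6 defects per KLOC
coverage = code_coverage(21_000, 25_000)   # 84.0 % covered
```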

Procedures, Installation & Documentation

1
Pre-Installation Planning

Review hardware requirements, verify compatibility, back up existing data, notify stakeholders of downtime window, prepare rollback plan.

2
Environment Preparation

Configure servers, install dependencies and runtimes, set up databases, configure network access and firewall rules.

3
Software Installation

Run installer or deploy packages, configure application settings, set up user accounts, assign permissions and roles.

4
Testing & Verification

Run smoke tests to verify basic functionality, test critical business workflows, confirm all integrations work correctly.

5
User Training & Handover

Train end users on the new system, distribute user manuals, hand over to operations team with full documentation.

Types of Documentation

| Document Type | Primary Audience | Contents |
|---|---|---|
| Requirements Specification | PM, developers, clients | Use cases, user stories, functional & non-functional specs |
| Architecture Document | Developers, architects | System diagrams, database schemas, API specifications |
| User Manual | End users | Step-by-step instructions, screenshots, FAQs |
| API Documentation | Integrating developers | Endpoints, parameters, example requests and responses |
| Test Documentation | QA team | Test plans, test cases, test results, defect reports |
| Operations Manual | IT/DevOps team | Deployment guide, monitoring setup, backup & recovery |

📚 Chapter 1 — Key Takeaways

02
Chapter Two
User-Oriented Systems & Outsourcing Decisions
UCD · Personas · Build vs Buy · Vendor Assessment · Implementation
📌 LO2 — User-Oriented Approach & Outsourcing Analysis

Tailoring Systems for End Users

A user-oriented approach places the end user at the absolute center of the design process. Systems built without considering real users often fail — not due to technical problems, but because users find them confusing, slow, or irrelevant to their actual work.

  • 70% of software failures are caused by poor requirements & user involvement — NOT technical issues
  • Usability issues are far cheaper to fix in design than after release
  • 84% higher adoption rate when users are involved in the design process throughout

User-Centered Design (UCD) Process

1
Understand Context of Use

Research who users are: their jobs, goals, skill levels, frustrations, and environments. Use interviews, observations (contextual inquiry), and surveys.

2
Specify User Requirements

Turn user research into specific, measurable requirements. Create personas, user stories ("As a [user], I want [goal], so that [reason]"), and use cases.

3
Produce Design Solutions

Create wireframes (structure), mockups (visual), and interactive prototypes. Focus on solving user problems before aesthetics.

4
Evaluate Against Requirements

Test with real users. Observe where they struggle. Fix problems. Repeat until users can accomplish their goals easily and efficiently.

User Personas — Bringing Users to Life

A persona is a fictional but realistic representation of a key user group. Personas help design teams make decisions by asking: "What would Sarah need here?"

👩‍⚕️ Sarah, 34 · Hospital Nurse — Primary User
  • Goals: Quick access to patient records during rounds
  • Frustrations: Complex navigation, too many clicks, slow load times
  • Tech Level: Moderate — uses phone and basic PC daily
  • Quote: "I just need the medicine schedule fast. No time for complex menus."

👨‍💼 James, 52 · IT Administrator — Power User
  • Goals: Maintain uptime, manage user accounts, run compliance reports
  • Frustrations: Systems that don't integrate with existing tools
  • Tech Level: Expert — manages servers and networks daily
  • Quote: "I need admin tools that don't require a PhD to operate."

Outsourcing Analysis & Evaluation

Outsourcing means hiring external organizations to perform work that could be done internally. The fundamental decision is Build vs. Buy vs. Outsource.

| Option | When to Use | Key Benefit | Key Risk |
|---|---|---|---|
| Build (In-house) | Core competitive advantage; full IP control needed | Exact fit to needs; full ownership | High cost; long timeline |
| Buy (COTS) | Standard functionality; faster deployment needed | Lower cost; faster to deploy | May not fit exactly; vendor dependency |
| Outsource | Specialized skills needed; non-core functionality | Access to expertise; cost savings | Communication; quality control |
| Hybrid | Large enterprises with diverse needs | Best of all approaches | Coordination complexity |

Outsourcing Analysis Framework — 5 Key Questions

🔍 Before Outsourcing — Ask These Questions

Vendor Assessment & Selection

| Evaluation Criterion | Weight | What to Assess |
|---|---|---|
| Technical Capability | 30% | Technology stack, architecture skills, certifications, portfolio of similar projects |
| Experience & References | 25% | Years in business, similar projects completed, verifiable client testimonials |
| Cost Structure | 20% | Pricing model (fixed/T&M), hidden costs, payment terms, change request handling |
| Communication & Culture | 15% | Language, time zone, responsiveness, methodology alignment (Agile vs Waterfall) |
| Legal & Compliance | 10% | Data protection, IP ownership clauses, NDAs, SLA terms and penalty clauses |
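The weighted criteria above combine into a single comparable score per vendor. A minimal sketch, with hypothetical vendor scores on a 0–10 scale:

```python
# Criterion weights from the evaluation table above (sum to 1.0).
WEIGHTS = {
    "technical": 0.30,
    "experience": 0.25,
    "cost": 0.20,
    "communication": 0.15,
    "legal": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Hypothetical scores gathered during vendor assessment:
vendor_a = {"technical": 8, "experience": 9, "cost": 6, "communication": 7, "legal": 9}
vendor_b = {"technical": 9, "experience": 6, "cost": 9, "communication": 5, "legal": 7}

# Vendor A: 0.30*8 + 0.25*9 + 0.20*6 + 0.15*7 + 0.10*9 = 7.8
best = max([("A", weighted_score(vendor_a)), ("B", weighted_score(vendor_b))],
           key=lambda pair: pair[1])
```

The weights encode the organization's priorities, so a cheap vendor with weak technical capability cannot win on price alone.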

Vendor Implementation & Management

🤝 Key Success Factors for Outsourced Projects

📚 Chapter 2 — Key Takeaways

03
Chapter Three
Software Development Approaches
RAD · JAD · GSS · CASE Tools · Structured Methodologies · Extreme Programming
📌 LO3 — Software Development Approaches in Practice

Rapid Application Development (RAD)

RAD, created by James Martin in 1991, emphasizes speed and iterative development over rigid planning. Instead of designing everything upfront, RAD uses rapid prototyping and continuous user feedback to build software in 60–90 days.

🚀 Core RAD Philosophy

Get a working prototype in front of users FAST. Let them react. Refine. Repeat. Ship.
Traditional Waterfall timeline: 12–18 months. RAD timeline: 60–90 days.

RAD Phases

1
Requirements Planning

Executives, managers, and users define business functions, data needs, and constraints. Focus on high-level agreement — not detailed specs. Think workshop, not formal document.

2
User Design

Users and analysts collaborate intensively to build interactive models using CASE tools. Prototypes built rapidly and refined with immediate user feedback — multiple iterations happen here.

3
Construction

Developers build the full system using the approved prototype as a guide. Heavy use of code reuse and component libraries. Less emphasis on "building from scratch."

4
Cutover

Testing, user acceptance, data migration, and go-live. Similar to Waterfall's deployment phase but significantly compressed due to earlier validation.

Joint Application Design (JAD)

JAD brings all stakeholders — users, managers, developers, and facilitators — into intensive, structured workshops to define and design systems rapidly. Developed at IBM in the 1970s, JAD cuts requirements time by up to 50% compared to one-on-one interviews.

| JAD Role | Who | Responsibilities |
|---|---|---|
| Facilitator | Neutral leader (often consultant) | Runs sessions, ensures full participation, keeps focus on objectives, resolves conflicts |
| Executive Sponsor | Senior manager | Opens sessions, provides authority, resolves high-level conflicts, communicates importance |
| Users | End users & domain experts | Provide domain knowledge, validate requirements, define workflows and business rules |
| Developers | Technical team members | Assess technical feasibility, raise constraints, estimate complexity, suggest alternatives |
| Scribe | Dedicated recorder | Documents all decisions, agreements, and action items in real-time during sessions |

Group Support Systems (GSS)

GSS are computer-based systems supporting group decision-making. Used in JAD sessions to collect ideas anonymously, vote on priorities, and reach consensus faster.

🖥️ How GSS Works in a JAD Session

CASE Tools

Computer-Aided Software Engineering (CASE) tools provide automated support for software development activities — from diagramming to code generation.

| CASE Type | Examples | What It Does |
|---|---|---|
| Upper CASE (Front-end) | IBM Rational Rose, Enterprise Architect | Analysis & design tools; create UML diagrams, data models, use cases |
| Lower CASE (Back-end) | Visual Studio generators, JUnit | Code generation, testing automation; turn designs into runnable code |
| Integrated CASE (I-CASE) | IBM Engineering Lifecycle Mgmt | Full lifecycle support from planning through deployment |
| Diagramming | Lucidchart, Draw.io, Visio | Create flowcharts, ERDs, DFDs, UML diagrams |
| Version Control | Git, SVN, Mercurial | Track code changes, enable team collaboration, support branching |

Extreme Programming (XP)

XP, created by Kent Beck in 1996, takes good Agile practices "to the extreme." It is designed for projects with rapidly changing requirements and emphasizes technical excellence above all.

👥
Pair Programming
Two devs share one keyboard. Driver writes; Observer reviews. Switch often. Fewer bugs, better design.
🧪
Test-Driven Development
Write the test BEFORE the code. Code only to make tests pass. Results in thorough coverage.
🔄
Continuous Integration
Merge code to main branch multiple times per day. Catch integration bugs immediately.
📦
Small Releases
Release working software frequently in small increments rather than one big bang.
🎯
Simple Design
Design only what's needed now. YAGNI: "You Ain't Gonna Need It." Avoid over-engineering.
♻️
Refactoring
Continuously improve code structure without changing behavior. Reduce technical debt.
🤝
Collective Ownership
Any developer can modify any code. No silos. Everyone owns everything together.
👤
On-Site Customer
A real customer representative is present with the team throughout development for instant feedback.
40-Hour Week
No overtime. Tired developers make more mistakes. Sustainable pace is a productivity principle.
📖
Coding Standards
All code follows the same style and conventions so any developer can read and modify any file.
🎮
Planning Game
At iteration start, customers choose features from a list based on business value within the team's capacity.
🔖
Metaphor
Use a simple shared story/analogy to explain how the entire system works to all stakeholders.
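The test-first rhythm of TDD can be shown in a few lines. A sketch with a hypothetical cart_total function (pytest would collect the test function automatically):

```python
# Step 1 (TDD): write the test FIRST -- it fails until the code exists.
def test_cart_total():
    assert cart_total([("apple", 2, 0.50), ("bread", 1, 1.20)]) == 2.20

# Step 2: write just enough code to make the test pass.
def cart_total(items):
    """Sum (name, quantity, unit_price) line items; round to cents."""
    return round(sum(qty * price for _, qty, price in items), 2)

# Step 3: run the test, see it pass, then refactor while keeping it green.
test_cart_total()
```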

📚 Chapter 3 — Key Takeaways

04
Chapter Four
Development Before the Fact (DBTF) Technology
Integrated Modelling · Primitive Structures · FMaps · TMaps · Universal Primitive Operations
📌 LO3 — Software Development Approaches in Practice

What is DBTF?

Development Before the Fact (DBTF) is a software engineering paradigm created by Margaret Hamilton — the engineer who led NASA's Apollo software development team. The core philosophy is radical:

🎯 The DBTF Core Principle
🚀 The NASA Connection

Hamilton's team at MIT developed the software for NASA's Apollo Guidance Computer — the system that landed humans on the Moon in 1969. The software had to be perfect. There was no way to patch a bug when astronauts were 240,000 miles from Earth. This requirement for absolute correctness led Hamilton to develop DBTF. She was awarded the Presidential Medal of Freedom in 2016 and is credited with coining the term "software engineering."

The Integrated Modelling Environment (001 Tool Suite)

DBTF is implemented through the 001 Tool Suite — a formal software specification environment that generates complete, error-free software from mathematical models.

| Component | Role | What It Does |
|---|---|---|
| FMap (Function Map) | Behavior definition | Defines the control structure — HOW functions sequence and relate to each other |
| TMap (Type Map) | Data definition | Defines the types and structure of ALL data that flows between functions |
| 001AXES | Code generator | Automatically generates complete, compilable code from validated FMap+TMap models |
| RAT (Analyzer) | Verifier | Checks specifications for completeness and consistency before any code is generated |

Primitive Structures — The Building Blocks

DBTF defines a small set of primitive structures — fundamental patterns that ALL systems can be decomposed into. Every function or data type is either a primitive, or a combination of these:

| Structure | Symbol | Meaning | Real Example |
|---|---|---|---|
| Leaf (Primitive) | P | Cannot be decomposed further — directly implemented in code | A function that reads a card number |
| Include (AND) | A | ALL sub-functions must execute; they share the same input/output space | To process payment: validate AND debit AND receipt |
| Or (OR) | OR | EXACTLY ONE sub-function executes based on a condition | Error type: EITHER warning OR critical halt |
| Input Join | IJ | Multiple inputs must all be present before the function executes | Only process order when BOTH payment AND stock confirmed |
| Output Join | OJ | One input produces multiple outputs sent to different consumers | Order confirmed → notify customer AND update inventory |
🔵 Include (AND) Structure

Like a recipe — ALL steps must happen:

Process Payment [Include]
├── 1. Validate card details
├── 2. Check account balance
├── 3. Authorize transaction
└── 4. Update ledger
All 4 must happen. None skipped.
🟢 Or Structure

Like a menu — EXACTLY ONE option chosen:

Handle Error [Or]
├── 1. Show warning popup
├── 2. Show critical error page
├── 3. Log silently
└── 4. Halt system
Only ONE executes based on error type.
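The Include and Or structures map naturally onto ordinary code. A sketch using hypothetical function names drawn from the payment and error examples above:

```python
# Include (AND): ALL sub-functions run, in order, over shared state.
def process_payment(tx: dict) -> dict:
    for step in (validate_card, check_balance, authorize, update_ledger):
        tx = step(tx)          # all four must happen; none skipped
    return tx

def validate_card(tx):  tx["validated"] = True;  return tx
def check_balance(tx):  tx["funds_ok"] = True;   return tx
def authorize(tx):      tx["authorized"] = True; return tx
def update_ledger(tx):  tx["ledgered"] = True;   return tx

# Or: EXACTLY ONE branch executes, selected by a condition.
def handle_error(severity: str) -> str:
    branches = {
        "warning":  lambda: "show warning popup",
        "critical": lambda: "show critical error page",
        "silent":   lambda: "log silently",
        "fatal":    lambda: "halt system",
    }
    return branches[severity]()   # only one branch runs
```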

FMaps (Function Maps)

An FMap is a hierarchical tree structure that defines ALL functions in a system and their control relationships. It shows WHAT the system does and in what order/structure — from system level down to individual primitive operations.

🏧 ATM Withdraw Cash — FMap Example

TMaps (Type Maps)

A TMap defines the data types and their structure that flow between functions. Where FMaps define behavior, TMaps define data — they are two sides of the same specification coin. The 001 Tool verifies type compatibility at EVERY function interface automatically.

📊 BankAccount TMap Example

Universal Primitive Operations (UPOs)

DBTF defines Universal Primitive Operations — the atomic actions that all system behavior ultimately reduces to. By limiting operations to these primitives, DBTF ensures every action is explicitly defined with no hidden side effects.

| UPO | Definition | Example |
|---|---|---|
| Create | Bring a new object into existence | Create a new bank account record |
| Access | Read the value of an existing object | Read the current account balance |
| Modify | Change the value of an existing object | Update balance after a transaction |
| Delete | Remove an object from existence | Delete a closed account record |
| Evaluate | Perform a computation and return a result | Calculate interest on balance |
| Enable/Disable | Activate or deactivate a function | Enable overdraft protection on account |
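The UPOs can be mirrored by the methods of a small object store. A toy in-memory sketch (names are illustrative, not from the 001 Tool Suite):

```python
class AccountStore:
    """Toy store whose methods mirror the universal primitive operations."""
    def __init__(self):
        self._accounts = {}

    def create(self, acc_id: str) -> None:                  # Create
        self._accounts[acc_id] = 0.0

    def access(self, acc_id: str) -> float:                 # Access
        return self._accounts[acc_id]

    def modify(self, acc_id: str, balance: float) -> None:  # Modify
        self._accounts[acc_id] = balance

    def delete(self, acc_id: str) -> None:                  # Delete
        del self._accounts[acc_id]

    def evaluate_interest(self, acc_id: str, rate: float) -> float:  # Evaluate
        return self._accounts[acc_id] * rate

store = AccountStore()
store.create("ACC-1")
store.modify("ACC-1", 1000.0)
interest = store.evaluate_interest("ACC-1", 0.05)   # 50.0
```

Because every state change goes through one of these named operations, there are no hidden side effects, which is exactly the DBTF discipline.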

📚 Chapter 4 — Key Takeaways

05
Chapter Five
Software Engineering Design
Design Specification · OOD · UI Design · Re-engineering · Software Testing
📌 LO3 — Software Development Approaches in Practice

The Design Specification

The Software Design Document (SDD) is the blueprint of the system. It translates requirements into a technical plan developers can implement. A great SDD eliminates ambiguity and enables parallel development.

📄 Key Sections of a Design Specification

Object-Oriented Design (OOD)

OOD organizes software around objects — entities combining data (attributes) and behavior (methods). OOD mirrors the real world: a Car object has color and speed (data) and can accelerate() and brake() (methods).

The 4 Pillars of OOP

Encapsulation Bundle + Hide

Bundle data & methods together; hide internal details. BankAccount hides balance; expose only deposit() and withdraw().

Inheritance IS-A Relationship

Child classes inherit from parent. SavingsAccount IS-A BankAccount. Reuse code; extend behaviour.

Polymorphism Many Forms

Same interface, different behaviour. draw() on Circle, Square, Triangle each draw differently — same call, different result.

Abstraction Essential Features

Show only what matters; hide complexity. You drive a car without knowing how the combustion engine works.
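The four pillars fit in one short sketch, reusing the BankAccount and SavingsAccount names from the text:

```python
class BankAccount:
    """Encapsulation: balance is hidden; only deposit/withdraw touch it."""
    def __init__(self, balance: float = 0.0):
        self._balance = balance              # internal detail, not part of the API

    def deposit(self, amount: float) -> None:
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def statement(self) -> str:              # Abstraction: callers get a summary,
        return f"Balance: {self._balance:.2f}"   # not the internal bookkeeping

class SavingsAccount(BankAccount):           # Inheritance: IS-A BankAccount
    def statement(self) -> str:              # Polymorphism: same call, new behaviour
        return "Savings " + super().statement()

acct = SavingsAccount(100.0)
acct.deposit(50.0)                           # inherited behaviour, reused as-is
```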

SOLID Principles

| Letter | Principle | Meaning in Plain English |
|---|---|---|
| S | Single Responsibility | Each class should have exactly ONE reason to change. Don't mix data access with business logic. |
| O | Open/Closed | Open for extension; closed for modification. Add features by extending, not editing existing code. |
| L | Liskov Substitution | Subclasses must be substitutable for their parent class without breaking behaviour. |
| I | Interface Segregation | Many small, specific interfaces are better than one large, generic interface. |
| D | Dependency Inversion | Depend on abstractions (interfaces), not concrete implementations. Use dependency injection. |

User Interface Design

A well-designed UI is invisible — users accomplish goals without thinking about the interface. Nielsen's 10 Heuristics are the gold standard for evaluating usability.

| # | Heuristic | Meaning & Example |
|---|---|---|
| 1 | Visibility of System Status | Always tell users what's happening — loading spinners, progress bars, confirmation messages |
| 2 | Match the Real World | Use words and icons familiar to users, not technical jargon. Use a trash can icon for delete. |
| 3 | User Control & Freedom | Provide Undo, Redo, and clear "cancel" paths. Users make mistakes constantly. |
| 4 | Consistency & Standards | Follow platform conventions. "Save" should always look and behave the same way throughout the app. |
| 5 | Error Prevention | Design to prevent problems. Disable a "Submit" button until required fields are filled. |
| 6 | Recognition over Recall | Make options visible. Don't force users to remember information from a previous screen. |
| 7 | Flexibility & Efficiency | Allow experts to use keyboard shortcuts that don't slow novices. |
| 8 | Aesthetic & Minimal Design | Remove irrelevant information. Every element on screen must serve a purpose. |
| 9 | Help Users Recover from Errors | Error messages must be in plain language, describe the problem, and suggest a solution. |
| 10 | Help & Documentation | Even great interfaces may need help. Make it easy to search and find task-oriented help. |

Software Re-Engineering

Re-engineering analyzes and rebuilds existing software to improve quality, maintainability, or performance — without necessarily changing its external functionality.

1
Reverse Engineering

Analyze existing code to extract its design. Understand what it does and how, without the benefit of documentation (which often doesn't exist for legacy systems).

2
Restructuring / Refactoring

Reorganize and clean up code to improve internal structure without changing external behavior. Eliminate duplication, simplify logic, improve naming.

3
Forward Engineering

Build the new, improved system using recovered design information plus enhancements. May involve new technology, framework migration, or full rewrite.

Software Testing

Testing verifies software meets requirements and finds defects. Remember: Testing can prove the presence of bugs, but cannot prove their absence.

| Testing Level | Who | What Is Tested | Tool Examples |
|---|---|---|---|
| Unit Testing | Developers | Individual functions/methods in isolation — the smallest testable unit | JUnit, pytest, NUnit |
| Integration Testing | Developers/QA | How units work together — interfaces, data flows, module interactions | Postman, Selenium |
| System Testing | QA team | Complete integrated system against specified requirements | JMeter, LoadRunner |
| Acceptance (UAT) | End users | Real-world scenarios to confirm system meets business needs before go-live | Manual, Cucumber |
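At the unit level, each test exercises one function in isolation, covering the happy path, a boundary, and an invalid input. A pytest-style sketch with a hypothetical apply_discount function:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100), rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest collects any function named test_* and reports each failure.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_boundary():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass                      # expected: invalid input is rejected
    else:
        raise AssertionError("expected ValueError")
```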

📚 Chapter 5 — Key Takeaways

06
Chapter Six
EDP Auditing
Systematic Auditing · Security · Quality · Ergonomics · Customer Service · Legality
📌 LO4 — Auditing in Software Design & Management

What is EDP Auditing?

Electronic Data Processing (EDP) Auditing — now called IT Auditing — is the examination and evaluation of an organization's information systems, infrastructure, and management to ensure they are effective, secure, and legally compliant.

🏢 Building Inspection Analogy

An EDP audit is like a building safety inspection. The inspector checks: Are all systems functioning? Is the building (system) safe for occupants (users)? Does it meet building codes (regulations)? The auditor reports findings to management — who must fix any violations found.

The Audit Process — 6 Phases

1
Audit Planning

Define scope, objectives, and methodology. Identify key systems, risks, and controls to evaluate. Set timeline and team assignments.

2
Preliminary Review

Gather background: system documentation, previous audit reports, organizational charts, risk assessments.

3
Audit Fieldwork

Execute the plan: test controls, review evidence, observe operations, interview staff, sample transactions.

4
Evidence Evaluation

Analyze findings against audit criteria. Assess adequacy of controls. Identify deficiencies and root causes.

5
Reporting

Document findings, recommendations, and management responses in a formal audit report. Present to leadership.

6
Follow-Up

Verify that management has implemented agreed corrective actions from the previous audit before closing findings.

COBIT Framework

COBIT (Control Objectives for Information and Related Technologies) is the most widely used framework for IT governance and auditing, developed by ISACA.

| COBIT Domain | Focus | Key Processes |
|---|---|---|
| Plan & Organise (PO) | Strategic alignment | IT strategy, risk management, quality management, HR planning |
| Acquire & Implement (AI) | Solution delivery | Software acquisition, change management, installation procedures |
| Deliver & Support (DS) | Operations | Service desk, performance mgmt, security, continuity, data management |
| Monitor & Evaluate (ME) | Oversight | Performance monitoring, internal control, regulatory compliance |

Security — The CIA Triad

The CIA Triad is the foundational framework for all information security decisions. Every security control exists to protect at least one of these three properties:

C
Confidentiality
Only authorized users can access sensitive data.

Tools: Encryption (AES-256), access controls (RBAC), authentication (MFA), data masking
I
Integrity
Data is accurate and has not been tampered with.

Tools: Checksums (SHA-256), hashing, digital signatures, version control, audit logs
A
Availability
Systems are accessible when users need them.

Tools: Redundancy, load balancing, backups, DDoS protection, failover systems
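Integrity checks are easy to demonstrate with Python's standard hashlib. A sketch: store a SHA-256 digest of the data, then recompute it later to detect tampering (the transaction strings are hypothetical):

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used to detect tampering in transit or at rest."""
    return hashlib.sha256(data).hexdigest()

original = b"PAY 100.00 TO ACC-42"
stored_digest = checksum(original)          # saved alongside the record

# Later: recompute and compare -- ANY change to the data changes the digest.
tampered = b"PAY 900.00 TO ACC-42"
assert checksum(original) == stored_digest  # intact
assert checksum(tampered) != stored_digest  # tampering detected
```

Note that a bare checksum proves integrity only if the digest itself is protected; in practice a digital signature or HMAC binds it to a key.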

Ergonomics

Ergonomics examines whether IT environments support healthy, efficient, and comfortable use of information systems.

🖥️ Physical Ergonomics
  • Monitor at eye level, 50–70 cm from eyes
  • Wrist in neutral position — no awkward bending
  • Adjustable chair with lumbar support
  • Feet flat on floor or footrest
  • 300–500 lux indirect lighting; no screen glare
🧠 Cognitive Ergonomics (Software)
  • Don't overwhelm users with information overload
  • Response <0.1s: feels instant; <1s: acceptable; >10s: needs progress bar
  • Error messages in plain language — never "Error 0x00000FF"
  • Progressive disclosure: show what's needed now
  • Consistent interface patterns reduce cognitive load

Legality & Compliance

| Law / Standard | Scope | What It Requires |
|---|---|---|
| GDPR (Europe) | Personal data of EU residents | Consent, data minimization, right to erasure, breach notification within 72 hours |
| PCI DSS | Payment card industry | Secure cardholder data, encrypt transmissions, access controls, regular audits |
| HIPAA (USA) | Healthcare data | Protect patient health information, access controls, audit trails, breach notification |
| SOX (USA) | Publicly traded companies | Financial reporting IT controls, audit trails for all financial transactions |
| ISO 27001 | Information security (global) | Information Security Management System (ISMS), risk assessment, controls |

📚 Chapter 6 — Key Takeaways

07
Chapter Seven
Management of Software Maintenance
Maintenance Process · Types · Cost Modelling · Lehman's Laws · Personnel Management
📌 LO4 — Auditing & Software Management

What is Software Maintenance?

Software maintenance is ALL work done on software AFTER delivery. It is NOT just fixing bugs — it encompasses enhancements, adaptations, and preventive work that keeps software useful throughout its operational life.

  • 60–80% of lifecycle cost: maintenance consumes more budget than initial development
  • 3–4× original build cost: a $1M system typically costs $3–4M over its lifetime in maintenance
  • 10+ years typical system life: well-maintained systems often outlast their original design expectations

Types of Software Maintenance

IEEE Standard 1219 defines four categories. Understanding the distribution helps prioritize maintenance budgets:

| Type | Definition | Example |
|---|---|---|
| ⭐ Perfective (60%) | Improve performance or add new features requested by users | Add dark mode; improve search speed by 40%; add export to PDF |
| 🔄 Adaptive (18%) | Modify software to work in a changed technical environment | Update app for iOS 17; migrate from on-premise to cloud AWS |
| 🔧 Corrective (17%) | Fix faults and bugs discovered during operation | Fix crash when user enters invalid date; fix payment calculation error |
| 🛡️ Preventive (5%) | Restructure or update code to prevent future problems | Refactor messy module; update deprecated security libraries; add tests |

Maintenance Process

1
Problem / Change Request Identification

User or system reports an issue. Recorded in a change management system (JIRA, ServiceNow). Assigned a unique ID and priority level.

2
Analysis & Classification

Assess severity, priority, and type. Estimate impact on other modules. Classify as corrective, adaptive, perfective, or preventive.

3
Design the Change

Plan implementation approach. Update design docs. Identify ALL affected modules using impact analysis to avoid unintended breakage.

4
Implementation

Code the change following existing standards. Write unit tests before modifying code (TDD approach). Maintain original code style.

5
Testing

Unit test the change. Run full regression test suite to verify nothing else broke. Conduct integration testing across affected modules.

6
Release & Documentation

Deploy to production via controlled release pipeline. Update user documentation, technical docs, and release notes. Close the change ticket.

Maintenance Cost Modelling

Annual Maintenance Cost = Development Cost × Maintenance Factor
where Maintenance Factor = 0.05 to 0.15 (5–15% per year)

Example — system built for $500,000:
  • Conservative (5%): $500,000 × 0.05 = $25,000 / year
  • Typical (10%): $500,000 × 0.10 = $50,000 / year
  • Complex (15%): $500,000 × 0.15 = $75,000 / year

Lehman's Law II — Software complexity increases over time unless active effort is made to reduce it. This means maintenance costs tend to INCREASE over a system's life.
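The cost model above translates directly into code. A sketch using the worked figures from the example:

```python
def annual_maintenance_cost(dev_cost: float, factor: float) -> float:
    """Annual cost = development cost x maintenance factor (0.05-0.15)."""
    if not 0.05 <= factor <= 0.15:
        raise ValueError("typical maintenance factors are 5-15% per year")
    return dev_cost * factor

# Figures from the worked example above ($500,000 system):
conservative = annual_maintenance_cost(500_000, 0.05)   # ~ $25,000 / year
typical      = annual_maintenance_cost(500_000, 0.10)   # ~ $50,000 / year
complex_sys  = annual_maintenance_cost(500_000, 0.15)   # ~ $75,000 / year
```

Per Lehman's Law II, treat the factor as a floor that creeps upward unless preventive maintenance keeps complexity in check.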

Lehman's Laws of Software Evolution

M.M. Lehman's empirical observations about how large software systems change over decades:

I
Continuing Change

A system must be continually adapted or it becomes progressively less useful over time.

II
Increasing Complexity

As a system evolves, its complexity increases unless work is done to maintain or reduce it.

III
Large Program Evolution

Program evolution is a self-regulating process with statistically determinable trends.

IV
Invariant Work Rate

Development and maintenance teams work at a statistically invariant average rate over time.

V
Conservation of Familiarity

The amount of incremental change in each release is statistically invariant across releases.

VI
Continuing Growth

Functional content must be continually increased to maintain user satisfaction over time.

VII
Declining Quality

Quality declines unless rigorously maintained and adapted to changes in the operational environment.

VIII
Feedback System

Evolution processes are multi-level, multi-loop, multi-agent feedback systems that must be treated as such.

Managing Maintenance Personnel

| Challenge | Strategy |
|---|---|
| Knowledge Retention | Document everything; use pair maintenance; cross-train team members on all major systems |
| Motivation | Recognize maintenance as critical, skilled work. Rotate developers through maintenance to build empathy. |
| Workload Management | Use prioritized backlog; protect maintenance staff from constant interruptions with scheduled maintenance windows |
| Skill Development | Maintenance engineers need deep system knowledge PLUS current technology skills — budget for both |
| Succession Planning | Identify key personnel whose departure would be catastrophic. Maintain documented knowledge transfer plans. |

📚 Chapter 7 — Key Takeaways

08
Chapter Eight
Cost Estimation in Software Engineering
Function Point Analysis · Putnam Model · COCOMO II · Best Practices
📌 LO4 — Auditing & Software Management

Why Cost Estimation Matters

Software cost estimation predicts the realistic effort, time, and money needed to build or maintain software. Poor estimation is consistently cited among the leading causes of project failure.

30% of projects cancelled

Standish CHAOS Report — cancelled before completion

189% avg overrun

52% of projects cost 189% of their original estimates

18% delivered on time

Only 18% of projects delivered on time and on budget

📉 The Cone of Uncertainty

During the requirements phase, estimates can be off by ±400%. This range narrows as more is known — at architecture: ±50%; at detailed design: ±25%; near completion: ±10%. The key insight: never over-commit early. Re-estimate at every major milestone as uncertainty reduces.

Function Point Analysis (FPA)

FPA measures software size from the user's perspective — how much functionality does the software provide, regardless of implementation language or technology.

5 Information Domain Components

Component | Abbrev | What It Counts | Weights (Simple/Avg/Complex)
External Inputs | EI | Unique user inputs that add, change, or delete data (forms, screens) | 3 / 4 / 6
External Outputs | EO | Reports, screens, outputs the system generates for users | 4 / 5 / 7
External Inquiries | EQ | Input-output pairs: user queries that retrieve data immediately | 3 / 4 / 6
Internal Logical Files | ILF | Groups of user data maintained BY the application | 7 / 10 / 15
External Interface Files | EIF | Data referenced but maintained by ANOTHER system | 5 / 7 / 10

Worked Example: Online Library System

Step 1 — Count Components:
EI:  3 Simple + 1 Average → (3×3) + (1×4) = 13
EO:  2 Average + 1 Complex → (2×5) + (1×7) = 17
EQ:  2 Average → (2×4) = 8
ILF: 1 Simple + 1 Average → (1×7) + (1×10) = 17
EIF: 1 Simple → (1×5) = 5
──────
Unadjusted Function Points (UFP) = 60

Step 2 — Apply Value Adjustment Factor (VAF):
AFP = UFP × (0.65 + 0.01 × ΣGSCs)
Where GSCs = 14 General System Characteristics, each rated 0–5

Step 3 — Convert to Effort:
Effort = AFP ÷ Productivity Rate
(Java: 8–12 FP/day | COBOL: 3–5 FP/day | 4GL: 20–30 FP/day)
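The three-step calculation above can be sketched in Python. The weight table is the one from this chapter; the helper names (`unadjusted_fp`, `adjusted_fp`) and the `library` dictionary are illustrative, not part of any standard FPA tool:

```python
# Function Point weights from the chapter's table (Simple / Average / Complex).
WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},
    "EO":  {"simple": 4, "average": 5,  "complex": 7},
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},
    "ILF": {"simple": 7, "average": 10, "complex": 15},
    "EIF": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_fp(counts: dict) -> int:
    """UFP = sum(count x weight) over every component/complexity pair.
    counts maps component -> {complexity: count}, e.g. {"EI": {"simple": 3}}."""
    return sum(WEIGHTS[comp][cx] * n
               for comp, by_cx in counts.items()
               for cx, n in by_cx.items())

def adjusted_fp(ufp: int, gsc_total: int) -> float:
    """AFP = UFP x (0.65 + 0.01 x sum of the 14 GSC ratings, each 0-5)."""
    return ufp * (0.65 + 0.01 * gsc_total)

# The Online Library System from the worked example:
library = {
    "EI":  {"simple": 3, "average": 1},
    "EO":  {"average": 2, "complex": 1},
    "EQ":  {"average": 2},
    "ILF": {"simple": 1, "average": 1},
    "EIF": {"simple": 1},
}
print(unadjusted_fp(library))  # 60, matching Step 1 above
```

Note that a GSC total of 35 gives a VAF of exactly 1.0, leaving the UFP unchanged; totals below 35 shrink the count and totals above inflate it.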

The Putnam Model (SLIM)

Developed by Lawrence Putnam in 1978 from analysis of hundreds of real projects. Uses the Rayleigh curve to model how effort distributes across a project's life.

Putnam's Fundamental Equation:
Size = C_k × Effort^(1/3) × Time^(4/3)

Rearranged to solve for Effort:
Effort = (Size / C_k)³ / Time⁴

C_k = Technology constant (productivity of the environment)

C_k = 2 → Poor environment, weak process

C_k = 8 → Good environment, good tools

C_k = 11 → Excellent environment, experienced team

⚠️ KEY INSIGHT: Because Effort = (Size/C_k)³ / Time⁴, HALVING the schedule (Time/2) at constant size multiplies Effort by 2⁴ = 16! This is why compressing schedules always costs more.
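The 16× penalty follows directly from the Time⁴ term in the denominator. A small Python sketch, with illustrative sample numbers (only the ratio matters, so the units chosen for size and time cancel out):

```python
# Putnam trade-off: Effort = (Size / C_k)^3 / Time^4.
def putnam_effort(size: float, c_k: float, time: float) -> float:
    """Effort implied by Putnam's equation for a given size,
    technology constant C_k, and schedule length."""
    return (size / c_k) ** 3 / time ** 4

# Same system, same technology constant; only the schedule changes:
relaxed = putnam_effort(size=50, c_k=8, time=2.0)
rushed = putnam_effort(size=50, c_k=8, time=1.0)  # schedule halved
print(rushed / relaxed)  # 16.0, i.e. 2**4
```

This is the mathematical core of the schedule-compression warning: a modest-looking cut in Time is raised to the fourth power when it hits Effort.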

📈 Brooks' Law Connection

Putnam's model explains Brooks' Law mathematically: "Adding manpower to a late software project makes it later." Staffing follows a Rayleigh curve — peak effort is around 40% through the project. Adding people after the peak cannot change the curve enough to recover the schedule, but DOES massively increase coordination overhead and cost.

COCOMO II Model

COCOMO II (Constructive Cost Model II) by Barry Boehm at USC is the most researched algorithmic estimation model. It provides three sub-models for different project phases:

Sub-Model | When Used | Size Measure | Accuracy
Application Composition | Early prototyping with 4GL/GUI tools | Object Points | Order of magnitude
Early Design | Architecture phase; limited design detail | Unadjusted Function Points | ±50%
Post-Architecture ⭐ | After detailed design — MOST ACCURATE | KSLOC (thousands of SLOC) | ±20%
COCOMO II Post-Architecture — Core Formula:
Effort (person-months) = 2.94 × Size^E × ΠEM_i
Schedule (months) = 3.67 × Effort^(0.28 + 0.2×(E−0.91))
Staff (avg team size) = Effort ÷ Schedule

2.94 = Empirically calibrated constant

Size = Software size in KSLOC (thousands of source lines of code)

E = Exponent derived from 5 Scale Factors: E = 0.91 + 0.01 × ΣSF

EM_i = 17 Effort Multipliers (product of all; each near 1.0 at Nominal rating)

5 Scale Factors — Affecting Exponent E

Scale Factor | Abbrev | Low Rating → More Effort Because...
Precedentedness | PREC | Team hasn't built this kind of system before — more unknowns and surprises
Development Flexibility | FLEX | Rigid requirements leave no room to find efficient solutions
Risk Resolution | RESL | Architecture not well understood — more exploration and rework needed
Team Cohesion | TEAM | Poor team dynamics create communication overhead and conflicts
Process Maturity | PMAT | Low CMMI level means more rework, inconsistent practices, inefficiency

17 Effort Multipliers (EM) — Key Examples

Category | Cost Driver | Range (VL→VH)
Product | RELY — Required Reliability (safety-critical needs more testing) | 0.82→1.26
Product | CPLX — Product Complexity (algorithms, data structures, AI) | 0.73→1.74
Product | DATA — Database Size (larger DBs need more integration effort) | 0.90→1.28
Platform | TIME — Execution Time Constraint (tight performance = complex optimization) | 1.00→1.63
Platform | STOR — Main Storage Constraint (limited memory forces workarounds) | 1.00→1.46
Personnel | ACAP — Analyst Capability (excellent analysts reduce effort significantly) | 1.42→0.71
Personnel | PCAP — Programmer Capability (skilled coders produce quality work faster) | 1.34→0.76
Personnel | PEXP — Platform Experience (familiar platform = fewer surprises) | 1.19→0.85
Project | TOOL — Use of Software Tools (CI/CD, IDEs, CASE tools reduce effort) | 1.17→0.78
Project | SITE — Multisite Development (geographic split increases communication cost) | 1.22→0.80
Project | SCED — Required Schedule (compression increases effort; stretch-out is rated Nominal in COCOMO II) | 1.43→1.00

Complete COCOMO II Worked Example

Project: Enterprise HR Management System — 100 KSLOC

Scale Factors (all Nominal for simplicity):
E = 0.91 + (0.01 × 5 factors × 3.0 each) = 0.91 + 0.15 = 1.06

Effort Multipliers (selective adjustments):
CPLX = High = 1.17 (complex payroll calculations)
ACAP = High = 0.85 (experienced analysts)
PCAP = High = 0.88 (strong programming team)
TOOL = High = 0.90 (good CI/CD tooling)
All others = 1.0
Combined EM = 1.17 × 0.85 × 0.88 × 0.90 ≈ 0.788

Effort = 2.94 × (100)^1.06 × 0.788
       = 2.94 × 131.8 × 0.788
       ≈ 305 person-months

Schedule = 3.67 × (305)^(0.28 + 0.2×(1.06−0.91))
         = 3.67 × (305)^0.31
         = 3.67 × 5.89
         ≈ 22 months

Staff = 305 ÷ 22 ≈ 14 people (average team size)
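The worked example can be checked with a short Python sketch of the Post-Architecture formulas. `cocomo2` is an illustrative helper name, and treating every Nominal scale factor as 3.0 follows the simplification made in the example above (real COCOMO II nominal values differ per factor):

```python
import math

# COCOMO II Post-Architecture sketch using the chapter's calibration
# constants (2.94, 3.67, 0.91). Ratings are the HR-system example's own.
def cocomo2(ksloc: float, scale_factors: list, effort_multipliers: list):
    e = 0.91 + 0.01 * sum(scale_factors)                   # exponent E
    em = math.prod(effort_multipliers)                     # product of EMs
    effort = 2.94 * ksloc ** e * em                        # person-months
    schedule = 3.67 * effort ** (0.28 + 0.2 * (e - 0.91))  # months
    staff = effort / schedule                              # avg team size
    return effort, schedule, staff

effort, schedule, staff = cocomo2(
    ksloc=100,
    scale_factors=[3.0] * 5,                      # all Nominal, simplified
    effort_multipliers=[1.17, 0.85, 0.88, 0.90],  # CPLX, ACAP, PCAP, TOOL
)
print(f"{effort:.0f} PM, {schedule:.0f} months, {staff:.0f} people")
```

Running this reproduces roughly 305 person-months over 22 months with an average team of about 14, and makes it easy to see how sensitive the estimate is to any single rating: bumping CPLX from 1.17 to Very High (1.34) alone adds dozens of person-months.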

Model Comparison

Criterion | Function Points | Putnam/SLIM | COCOMO II
Basis | User functionality | Historical curve fitting | Parametric equations
Best Phase | Early — any phase | Schedule-driven projects | Post-Architecture phase
Key Strength | Language-independent | Shows schedule trade-offs | Rich calibration options
Key Limit | Counting can be subjective | Needs historical data | Needs size estimate first
Accuracy | ±25% (good data) | ±30–40% | ±20% post-architecture
✅ Best Practices in Cost Estimation

📚 Chapter 8 — Key Takeaways

Quick Reference

CIT316 — Key acronyms, formulas, and model comparisons at a glance

Key Acronyms

SDLC
Software Development Life Cycle — structured phases for building software
RAD
Rapid Application Development — fast prototyping methodology (60–90 days)
JAD
Joint Application Design — intensive group requirements workshops
GSS
Group Support System — anonymous parallel input for group decisions
CASE
Computer-Aided Software Engineering — tools automating development tasks
XP
Extreme Programming — Agile methodology emphasizing technical excellence
TDD
Test-Driven Development — write tests BEFORE writing the code
DBTF
Development Before the Fact — build correctness in from the start (Hamilton)
FMap
Function Map — DBTF hierarchical behavior definition (AND/OR structure)
TMap
Type Map — DBTF data type definition ensuring type safety
OOD
Object-Oriented Design — organizing software around objects (4 pillars)
SOLID
5 OOD principles: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, Dependency Inversion
UML
Unified Modeling Language — standard notation for software design
CIA
Confidentiality, Integrity, Availability — information security triad
COBIT
Control Objectives for IT — ISACA's IT governance & audit framework
FPA
Function Point Analysis — language-independent software size metric
COCOMO
Constructive Cost Model II — Boehm's algorithmic estimation (2.94 × Size^E × ΠEM)
SLIM
Software LIfecycle Model — Putnam's Rayleigh-based estimation tool
SLA
Service Level Agreement — documented performance commitments & penalties
UAT
User Acceptance Testing — end-user validation before go-live
CMMI
Capability Maturity Model Integration — 5-level process maturity scale
UPO
Universal Primitive Operations — Create, Access, Modify, Delete, Evaluate

Key Formulas

📐 COCOMO II
Effort = 2.94 × Size^E × ΠEM
E = 0.91 + 0.01 × ΣSF
Schedule = 3.67 × Effort^(0.28+0.2(E-0.91))
Staff = Effort ÷ Schedule
📐 Putnam Model
Size = C_k × Effort^(1/3) × Time^(4/3)
C_k = 2 (poor) | 8 (good) | 11 (excellent)
Effort = (Size/C_k)³ / Time⁴
📐 Function Points
UFP = Σ(Count × Weight)
AFP = UFP × (0.65 + 0.01 × ΣGSCs)
Effort = AFP ÷ Productivity Rate
📐 Maintenance Cost
Annual Cost = Dev Cost × Factor
Factor = 0.05 (simple) to 0.15 (complex)
Typical: 10% of original dev cost/year

SDLC Model Quick Comparison

Model | Best For | Key Advantage | Key Risk
Waterfall | Fixed requirements, small teams | Simple, clear milestones | No change accommodation
Iterative | Evolving requirements | Early working software | Scope creep risk
Spiral | Large, high-risk projects | Systematic risk reduction | Complex management overhead
Agile/Scrum | Fast-changing business needs | Continuous delivery & feedback | Needs strong team discipline
RAD | Business apps, tight deadlines | Speed — 60–90 days | Quality risk if rushed
V-Model | Safety-critical systems | Every phase has paired test | Cannot handle change
XP | Small teams, changing reqs | Technical excellence culture | Requires expert, committed team