Maintenance type distribution: ⭐ Perfective 60% · 🔄 Adaptive 18% · 🔧 Corrective 17% · 🛡️ Preventive 5%
CIT316 — A comprehensive, visually rich textbook covering every topic from SDLC to COCOMO II, designed for all skill levels.
Software development is the systematic, disciplined process of conceiving, designing, programming, testing, and maintaining applications, frameworks, or other software components. It combines engineering rigor with creative problem-solving to produce reliable, useful systems.
Think of software development like constructing a building. You need a blueprint (requirements & design), workers following building codes (methodology & standards), inspectors at each stage (testing), and a maintenance crew after handover (maintenance). Skip any stage and the building — or the software — may collapse.
The SDLC provides a structured framework that defines exactly what needs to happen, in what order, and why — transforming a vague idea into a reliable, delivered product.
| # | Phase | Key Activities | Key Question |
|---|---|---|---|
| 1 | Planning | Feasibility study, scope definition, risk ID, resource allocation, schedule creation | CAN we build it? |
| 2 | Requirements Analysis | Stakeholder interviews, use cases, user stories, requirements specification document | WHAT should it do? |
| 3 | System Design | Architecture diagrams, database schema, UI wireframes, technology selection | HOW will it work? |
| 4 | Implementation | Source code writing, version control, code reviews, unit test writing | Build it now |
| 5 | Testing & Integration | Unit, integration, system, UAT testing; bug tracking; regression testing | Does it work correctly? |
| 6 | Deployment | Phased rollout, parallel running, user training, go-live | Release to users |
| 7 | Maintenance | Bug fixes, enhancements, performance tuning, security patching | Keep it working |
Different projects need different structures. SDLC models define HOW phases are organized — sequentially, iteratively, or spirally.
Waterfall is the oldest, most linear model. Each phase must be 100% complete before the next begins. Best for projects with fixed, well-understood requirements.
Agile is a flexible, iterative approach delivering working software in short sprints (1–4 weeks). It is built on the 4 core values of the Agile Manifesto: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan.
Product Owner: represents stakeholders; owns and prioritizes the product backlog
Scrum Master: facilitates the process; removes obstacles; coaches the team
Development Team: cross-functional, self-organizing team that does the actual work
Sprint: time-boxed iteration producing a potentially shippable increment
Daily Standup: 15-min meeting — What did I do? What will I do? Any blockers?
Sprint Retrospective: team reflects on what went well and what to improve next sprint
Combines iteration with systematic risk analysis. Each loop of the spiral covers 4 quadrants: Planning → Risk Analysis → Development → Evaluation. Best for large, high-risk projects.
| Development Phase (Left V) | ↔ | Test Phase (Right V) |
|---|---|---|
| Requirements Analysis | is validated by | Acceptance Testing (UAT) |
| System Design | is validated by | System Testing |
| Architectural Design | is validated by | Integration Testing |
| Module Design | is validated by | Unit Testing |
| Implementation (bottom of the V) | | |
| Metric | Definition | What It Tells You |
|---|---|---|
| Lines of Code (LOC) | Count of source code lines | Rough size measure; language-dependent |
| Function Points (FP) | Functionality from user perspective | Language-independent size measure |
| Cyclomatic Complexity | Number of independent code paths | Higher = harder to test & maintain |
| Defect Density | Defects per 1000 LOC (KLOC) | Lower = better quality |
| MTTF | Mean Time to Failure | Higher = more reliable system |
| Code Coverage | % of code executed during tests | Higher = more thorough testing |
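Two of the metrics above can be computed mechanically. The sketch below is illustrative only (not a standard tool): it approximates cyclomatic complexity by counting decision points in a function's syntax tree, and computes defect density per KLOC. The `grade` function is a made-up example.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                 ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

def defect_density(defects: int, loc: int) -> float:
    """Defects per 1000 lines of code (KLOC): lower is better."""
    return defects / (loc / 1000)

src = """
def grade(score):
    if score >= 70:
        return 'A'
    elif score >= 60:
        return 'B'
    return 'F'
"""
print(cyclomatic_complexity(src))   # 1 + 2 if-branches = 3
print(defect_density(12, 8000))     # 12 defects / 8 KLOC = 1.5
```

Note that `elif` parses as a nested `if`, which is why the count matches the three independent paths through `grade`.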
1. Pre-installation: Review hardware requirements, verify compatibility, back up existing data, notify stakeholders of the downtime window, prepare a rollback plan.
2. Environment Setup: Configure servers, install dependencies and runtimes, set up databases, configure network access and firewall rules.
3. Installation & Configuration: Run the installer or deploy packages, configure application settings, set up user accounts, assign permissions and roles.
4. Verification: Run smoke tests to verify basic functionality, test critical business workflows, confirm all integrations work correctly.
5. Training & Handover: Train end users on the new system, distribute user manuals, hand over to the operations team with full documentation.
| Document Type | Primary Audience | Contents |
|---|---|---|
| Requirements Specification | PM, developers, clients | Use cases, user stories, functional & non-functional specs |
| Architecture Document | Developers, architects | System diagrams, database schemas, API specifications |
| User Manual | End users | Step-by-step instructions, screenshots, FAQs |
| API Documentation | Integrating developers | Endpoints, parameters, example requests and responses |
| Test Documentation | QA team | Test plans, test cases, test results, defect reports |
| Operations Manual | IT/DevOps team | Deployment guide, monitoring setup, backup & recovery |
A user-oriented approach places the end user at the absolute center of the design process. Systems built without considering real users often fail — not due to technical problems, but because users find them confusing, slow, or irrelevant to their actual work.
Most project failures are caused by poor requirements & weak user involvement — NOT technical issues.
Finding a usability issue in design costs a fraction of finding it after release.
Success rates rise sharply when users are involved in the design process throughout.
1. Research: Learn who users are — their jobs, goals, skill levels, frustrations, and environments. Use interviews, observations (contextual inquiry), and surveys.
2. Requirements: Turn user research into specific, measurable requirements. Create personas, user stories ("As a [user], I want [goal], so that [reason]"), and use cases.
3. Design & Prototype: Create wireframes (structure), mockups (visual), and interactive prototypes. Focus on solving user problems before aesthetics.
4. Evaluate: Test with real users. Observe where they struggle. Fix problems. Repeat until users can accomplish their goals easily and efficiently.
A persona is a fictional but realistic representation of a key user group. Personas help design teams make decisions by asking: "What would Sarah need here?"
Outsourcing means hiring external organizations to perform work that could be done internally. The fundamental decision is Build vs. Buy vs. Outsource.
| Option | When to Use | Key Benefit | Key Risk |
|---|---|---|---|
| Build (In-house) | Core competitive advantage; full IP control needed | Exact fit to needs; full ownership | High cost; long timeline |
| Buy (COTS) | Standard functionality; faster deployment needed | Lower cost; faster to deploy | May not fit exactly; vendor dependency |
| Outsource | Specialized skills needed; non-core functionality | Access to expertise; cost savings | Communication; quality control |
| Hybrid | Large enterprises with diverse needs | Best of all approaches | Coordination complexity |
| Evaluation Criterion | Weight | What to Assess |
|---|---|---|
| Technical Capability | 30% | Technology stack, architecture skills, certifications, portfolio of similar projects |
| Experience & References | 25% | Years in business, similar projects completed, verifiable client testimonials |
| Cost Structure | 20% | Pricing model (fixed/T&M), hidden costs, payment terms, change request handling |
| Communication & Culture | 15% | Language, time zone, responsiveness, methodology alignment (Agile vs Waterfall) |
| Legal & Compliance | 10% | Data protection, IP ownership clauses, NDAs, SLA terms and penalty clauses |
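A weighted scorecard like the one above is easy to mechanize. The sketch below uses the table's weights; the vendor names and 1–10 ratings are hypothetical.

```python
# Weights from the vendor evaluation table (must sum to 1.0).
WEIGHTS = {
    "technical": 0.30, "experience": 0.25, "cost": 0.20,
    "communication": 0.15, "legal": 0.10,
}

def weighted_score(ratings: dict) -> float:
    """Combine 1-10 criterion ratings into a single weighted score."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

vendor_a = {"technical": 8, "experience": 9, "cost": 6, "communication": 7, "legal": 9}
vendor_b = {"technical": 9, "experience": 6, "cost": 9, "communication": 8, "legal": 7}

print(weighted_score(vendor_a))  # ≈ 7.8
print(weighted_score(vendor_b))  # ≈ 7.9
```

The weights encode policy: here vendor B edges ahead on cost and technical strength even though vendor A has better references.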
RAD, created by James Martin in 1991, emphasizes speed and iterative development over rigid planning. Instead of designing everything upfront, RAD uses rapid prototyping and continuous user feedback to build software in 60–90 days.
Get a working prototype in front of users FAST. Let them react. Refine. Repeat. Ship.
Traditional Waterfall timeline: 12–18 months. RAD timeline: 60–90 days.
1. Requirements Planning: Executives, managers, and users define business functions, data needs, and constraints. Focus on high-level agreement — not detailed specs. Think workshop, not formal document.
2. User Design: Users and analysts collaborate intensively to build interactive models using CASE tools. Prototypes are built rapidly and refined with immediate user feedback — multiple iterations happen here.
3. Construction: Developers build the full system using the approved prototype as a guide. Heavy use of code reuse and component libraries. Less emphasis on "building from scratch."
4. Cutover: Testing, user acceptance, data migration, and go-live. Similar to Waterfall's deployment phase but significantly compressed due to earlier validation.
JAD brings all stakeholders — users, managers, developers, and facilitators — into intensive, structured workshops to define and design systems rapidly. Developed at IBM in the 1970s, JAD cuts requirements time by up to 50% compared to one-on-one interviews.
| JAD Role | Who | Responsibilities |
|---|---|---|
| Facilitator | Neutral leader (often consultant) | Runs sessions, ensures full participation, keeps focus on objectives, resolves conflicts |
| Executive Sponsor | Senior manager | Opens sessions, provides authority, resolves high-level conflicts, communicates importance |
| Users | End users & domain experts | Provide domain knowledge, validate requirements, define workflows and business rules |
| Developers | Technical team members | Assess technical feasibility, raise constraints, estimate complexity, suggest alternatives |
| Scribe | Dedicated recorder | Documents all decisions, agreements, and action items in real-time during sessions |
GSS are computer-based systems supporting group decision-making. Used in JAD sessions to collect ideas anonymously, vote on priorities, and reach consensus faster.
Computer-Aided Software Engineering (CASE) tools provide automated support for software development activities — from diagramming to code generation.
| CASE Type | Examples | What It Does |
|---|---|---|
| Upper CASE (Front-end) | IBM Rational Rose, Enterprise Architect | Analysis & design tools; create UML diagrams, data models, use cases |
| Lower CASE (Back-end) | Visual Studio generators, JUnit | Code generation, testing automation; turn designs into runnable code |
| Integrated CASE (I-CASE) | IBM Engineering Lifecycle Mgmt | Full lifecycle support from planning through deployment |
| Diagramming | Lucidchart, Draw.io, Visio | Create flowcharts, ERDs, DFDs, UML diagrams |
| Version Control | Git, SVN, Mercurial | Track code changes, enable team collaboration, support branching |
XP, created by Kent Beck in 1996, takes good Agile practices "to the extreme." It is designed for projects with rapidly changing requirements and emphasizes technical excellence above all.
Development Before the Fact (DBTF) is a software engineering paradigm created by Margaret Hamilton — the engineer who led NASA's Apollo software development team. The core philosophy is radical: instead of finding and removing errors after a system is built, define the system in a way that prevents those errors from being introduced in the first place.
Hamilton's team at MIT developed the software for NASA's Apollo Guidance Computer — the system that landed humans on the Moon in 1969. The software had to be perfect. There was no way to patch a bug when astronauts were 240,000 miles from Earth. This requirement for absolute correctness led Hamilton to develop DBTF. She was awarded the Presidential Medal of Freedom in 2016 and is credited with coining the term "software engineering."
DBTF is implemented through the 001 Tool Suite — a formal software specification environment that generates complete, error-free software from mathematical models.
| Component | Role | What It Does |
|---|---|---|
| FMap (Function Map) | Behavior definition | Defines the control structure — HOW functions sequence and relate to each other |
| TMap (Type Map) | Data definition | Defines the types and structure of ALL data that flows between functions |
| 001AXES | Code generator | Automatically generates complete, compilable code from validated FMap+TMap models |
| RAT (Analyzer) | Verifier | Checks specifications for completeness and consistency before any code is generated |
DBTF defines a small set of primitive structures — fundamental patterns that ALL systems can be decomposed into. Every function or data type is either a primitive, or a combination of these:
| Structure | Symbol | Meaning | Real Example |
|---|---|---|---|
| Leaf (Primitive) | P | Cannot be decomposed further — directly implemented in code | A function that reads a card number |
| Include (AND) | A | ALL sub-functions must execute; they share the same input/output space | To process payment: validate AND debit AND receipt |
| Or (OR) | OR | EXACTLY ONE sub-function executes based on a condition | Error type: EITHER warning OR critical halt |
| Input Join | IJ | Multiple inputs must all be present before the function executes | Only process order when BOTH payment AND stock confirmed |
| Output Join | OJ | One input produces multiple outputs sent to different consumers | Order confirmed → notify customer AND update inventory |
Include (AND) is like a recipe: ALL steps must happen. Or (OR) is like a menu: EXACTLY ONE option is chosen.
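To make the Include/Or semantics concrete, here is an illustrative model in plain Python (this is NOT 001AXES notation; the payment and error-handling functions are simplified versions of the table's examples):

```python
def include(*steps):
    """Include (AND): ALL sub-functions execute, in order."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

def either(condition, if_true, if_false):
    """Or (OR): EXACTLY ONE sub-function executes, based on a condition."""
    def run(data):
        return if_true(data) if condition(data) else if_false(data)
    return run

# "To process payment: validate AND debit AND receipt"
validate = lambda order: {**order, "valid": order["amount"] > 0}
debit    = lambda order: {**order, "debited": order["valid"]}
receipt  = lambda order: {**order, "receipt_sent": order["debited"]}
process_payment = include(validate, debit, receipt)

# "Error type: EITHER warning OR critical halt"
warn = lambda err: {**err, "action": "warning"}
halt = lambda err: {**err, "action": "critical halt"}
handle_error = either(lambda err: err["severity"] == "critical", halt, warn)

print(process_payment({"amount": 50}))
print(handle_error({"severity": "minor"}))
```

The point mirrors DBTF's claim: because every composite is built from a handful of primitives with fixed semantics, there is no ambiguous control flow to get wrong.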
An FMap is a hierarchical tree structure that defines ALL functions in a system and their control relationships. It shows WHAT the system does and in what order/structure — from system level down to individual primitive operations.
A TMap defines the data types and their structure that flow between functions. Where FMaps define behavior, TMaps define data — they are two sides of the same specification coin. The 001 Tool verifies type compatibility at EVERY function interface automatically.
DBTF defines Universal Primitive Operations — the atomic actions that all system behavior ultimately reduces to. By limiting operations to these primitives, DBTF ensures every action is explicitly defined with no hidden side effects.
| UPO | Definition | Example |
|---|---|---|
| Create | Bring a new object into existence | Create a new bank account record |
| Access | Read the value of an existing object | Read the current account balance |
| Modify | Change the value of an existing object | Update balance after a transaction |
| Delete | Remove an object from existence | Delete a closed account record |
| Evaluate | Perform a computation and return a result | Calculate interest on balance |
| Enable/Disable | Activate or deactivate a function | Enable overdraft protection on account |
The Software Design Document (SDD) is the blueprint of the system. It translates requirements into a technical plan developers can implement. A great SDD eliminates ambiguity and enables parallel development.
OOD organizes software around objects — entities combining data (attributes) and behavior (methods). OOD mirrors the real world: a Car object has color and speed (data) and can accelerate() and brake() (methods).
Encapsulation: Bundle data & methods together; hide internal details. BankAccount hides balance; expose only deposit() and withdraw().
Inheritance: Child classes inherit from a parent. SavingsAccount IS-A BankAccount. Reuse code; extend behaviour.
Polymorphism: Same interface, different behaviour. draw() on Circle, Square, Triangle each draws differently — same call, different result.
Abstraction: Show only what matters; hide complexity. You drive a car without knowing how the combustion engine works.
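The BankAccount/SavingsAccount example used above can be written out directly. A minimal sketch — the method bodies, validation rules, and interest rate are invented for illustration:

```python
class BankAccount:
    """Encapsulation: the balance is hidden; access only via methods."""
    def __init__(self, balance=0):
        self._balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance  # read-only view of hidden state

class SavingsAccount(BankAccount):
    """Inheritance: SavingsAccount IS-A BankAccount; reuses deposit()."""
    def add_interest(self, rate):
        self.deposit(self.balance * rate)

acct = SavingsAccount(1000)
acct.add_interest(0.05)
print(acct.balance)  # 1050.0
```

Note how `add_interest` extends behaviour without touching the parent's code, and invalid operations are rejected inside the object rather than policed by every caller.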
| Letter | Principle | Meaning in Plain English |
|---|---|---|
| S | Single Responsibility | Each class should have exactly ONE reason to change. Don't mix data access with business logic. |
| O | Open/Closed | Open for extension; closed for modification. Add features by extending, not editing existing code. |
| L | Liskov Substitution | Subclasses must be substitutable for parent class without breaking behaviour. |
| I | Interface Segregation | Many small, specific interfaces are better than one large, generic interface. |
| D | Dependency Inversion | Depend on abstractions (interfaces), not concrete implementations. Use dependency injection. |
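Dependency Inversion (the D) is the least intuitive principle, so here is a small sketch. `Notifier` and `OrderService` are hypothetical names invented for this example:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstraction that high-level code depends on (not a concrete class)."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"EMAIL: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"SMS: {message}")

class OrderService:
    # The dependency is injected; swapping channels needs no edits here.
    # This is also Open/Closed: extend with new Notifier subclasses,
    # never modify OrderService itself.
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def place_order(self, item: str) -> None:
        self.notifier.send(f"Order placed: {item}")

OrderService(EmailNotifier()).place_order("CIT316 textbook")
```

Because `OrderService` knows only the abstraction, a test can inject a fake `Notifier` — dependency injection is what makes the class testable in isolation.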
A well-designed UI is invisible — users accomplish goals without thinking about the interface. Nielsen's 10 Heuristics are the gold standard for evaluating usability.
| # | Heuristic | Meaning & Example |
|---|---|---|
| 1 | Visibility of System Status | Always tell users what's happening — loading spinners, progress bars, confirmation messages |
| 2 | Match the Real World | Use words and icons familiar to users, not technical jargon. Use a trash can icon for delete. |
| 3 | User Control & Freedom | Provide Undo, Redo, and clear "cancel" paths. Users make mistakes constantly. |
| 4 | Consistency & Standards | Follow platform conventions. "Save" should always look and behave the same way throughout the app. |
| 5 | Error Prevention | Design to prevent problems. Disable a "Submit" button until required fields are filled. |
| 6 | Recognition over Recall | Make options visible. Don't force users to remember information from a previous screen. |
| 7 | Flexibility & Efficiency | Allow experts to use keyboard shortcuts that don't slow novices. |
| 8 | Aesthetic & Minimal Design | Remove irrelevant information. Every element on screen must serve a purpose. |
| 9 | Help Users Recover from Errors | Error messages must be in plain language, describe the problem, and suggest a solution. |
| 10 | Help & Documentation | Even great interfaces may need help. Make it easy to search and find task-oriented help. |
Re-engineering analyzes and rebuilds existing software to improve quality, maintainability, or performance — without necessarily changing its external functionality.
1. Reverse Engineering: Analyze existing code to extract its design. Understand what it does and how, without the benefit of documentation (which often doesn't exist for legacy systems).
2. Restructuring (Refactoring): Reorganize and clean up code to improve internal structure without changing external behavior. Eliminate duplication, simplify logic, improve naming.
3. Forward Engineering: Build the new, improved system using recovered design information plus enhancements. May involve new technology, framework migration, or a full rewrite.
Testing verifies software meets requirements and finds defects. Remember: Testing can prove the presence of bugs, but cannot prove their absence.
| Testing Level | Who | What Is Tested | Tool Examples |
|---|---|---|---|
| Unit Testing | Developers | Individual functions/methods in isolation — the smallest testable unit | JUnit, pytest, NUnit |
| Integration Testing | Developers/QA | How units work together — interfaces, data flows, module interactions | Postman, Selenium |
| System Testing | QA team | Complete integrated system against specified requirements | JMeter, LoadRunner |
| Acceptance (UAT) | End users | Real-world scenarios to confirm system meets business needs before go-live | Manual, Cucumber |
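At the unit level, a test exercises one function in isolation and checks both normal and error paths. A minimal example in Python's built-in `unittest` style — the `apply_discount` business rule is invented for illustration:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest <this_file>
```

Note the pattern: each test checks one behaviour, including the failure case — finding that a bad input slips through is exactly what unit testing is for.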
Electronic Data Processing (EDP) Auditing — now called IT Auditing — is the examination and evaluation of an organization's information systems, infrastructure, and management to ensure they are effective, secure, and legally compliant.
An EDP audit is like a building safety inspection. The inspector checks: Are all systems functioning? Is the building (system) safe for occupants (users)? Does it meet building codes (regulations)? The auditor reports findings to management — who must fix any violations found.
1. Planning: Define scope, objectives, and methodology. Identify key systems, risks, and controls to evaluate. Set timeline and team assignments.
2. Preliminary Review: Gather background — system documentation, previous audit reports, organizational charts, risk assessments.
3. Fieldwork: Execute the plan — test controls, review evidence, observe operations, interview staff, sample transactions.
4. Evaluation: Analyze findings against audit criteria. Assess adequacy of controls. Identify deficiencies and root causes.
5. Reporting: Document findings, recommendations, and management responses in a formal audit report. Present to leadership.
6. Follow-up: Verify that management has implemented agreed corrective actions from the previous audit before closing findings.
COBIT (Control Objectives for Information and Related Technologies) is the most widely used framework for IT governance and auditing, developed by ISACA.
| COBIT Domain | Focus | Key Processes |
|---|---|---|
| Plan & Organise (PO) | Strategic alignment | IT strategy, risk management, quality management, HR planning |
| Acquire & Implement (AI) | Solution delivery | Software acquisition, change management, installation procedures |
| Deliver & Support (DS) | Operations | Service desk, performance mgmt, security, continuity, data management |
| Monitor & Evaluate (ME) | Oversight | Performance monitoring, internal control, regulatory compliance |
The CIA Triad is the foundational framework for all information security decisions. Every security control exists to protect at least one of these three properties:
Ergonomics examines whether IT environments support healthy, efficient, and comfortable use of information systems.
| Law / Standard | Scope | What It Requires |
|---|---|---|
| GDPR (Europe) | Personal data of EU residents | Consent, data minimization, right to erasure, breach notification within 72 hours |
| PCI DSS | Payment card industry | Secure cardholder data, encrypt transmissions, access controls, regular audits |
| HIPAA (USA) | Healthcare data | Protect patient health information, access controls, audit trails, breach notification |
| SOX (USA) | Publicly traded companies | Financial reporting IT controls, audit trails for all financial transactions |
| ISO 27001 | Information security (global) | Information Security Management System (ISMS), risk assessment, controls |
Software maintenance is ALL work done on software AFTER delivery. It is NOT just fixing bugs — it encompasses enhancements, adaptations, and preventive work that keeps software useful throughout its operational life.
Maintenance consumes more budget than initial development
A $1M system typically costs $3–4M over its lifetime in maintenance
Well-maintained systems often outlast their original design expectations
IEEE Standard 1219 defines four categories. Understanding the distribution helps prioritize maintenance budgets:
| Type | Definition | Example |
|---|---|---|
| ⭐ Perfective (60%) | Improve performance or add new features requested by users | Add dark mode; improve search speed by 40%; add export to PDF |
| 🔄 Adaptive (18%) | Modify software to work in a changed technical environment | Update app for iOS 17; migrate from on-premise to cloud AWS |
| 🔧 Corrective (17%) | Fix faults and bugs discovered during operation | Fix crash when user enters invalid date; fix payment calculation error |
| 🛡️ Preventive (5%) | Restructure or update code to prevent future problems | Refactor messy module; update deprecated security libraries; add tests |
1. Change Request: A user or the system reports an issue. It is recorded in a change management system (JIRA, ServiceNow) and assigned a unique ID and priority level.
2. Analysis & Classification: Assess severity, priority, and type. Estimate impact on other modules. Classify as corrective, adaptive, perfective, or preventive.
3. Design: Plan the implementation approach. Update design docs. Identify ALL affected modules using impact analysis to avoid unintended breakage.
4. Implementation: Code the change following existing standards. Write unit tests before modifying code (TDD approach). Maintain the original code style.
5. Testing: Unit test the change. Run the full regression test suite to verify nothing else broke. Conduct integration testing across affected modules.
6. Release: Deploy to production via a controlled release pipeline. Update user documentation, technical docs, and release notes. Close the change ticket.
Lehman's Law II — Software complexity increases over time unless active effort is made to reduce it. This means maintenance costs tend to INCREASE over a system's life.
M.M. Lehman's empirical observations about how large software systems change over decades:
1. Continuing Change: A system must be continually adapted or it becomes progressively less useful over time.
2. Increasing Complexity: As a system evolves, its complexity increases unless work is done to maintain or reduce it.
3. Self-Regulation: Program evolution is a self-regulating process with statistically determinable trends.
4. Conservation of Organisational Stability: Development and maintenance teams work at a statistically invariant average rate over time.
5. Conservation of Familiarity: The amount of incremental change in each release is statistically invariant across releases.
6. Continuing Growth: Functional content must be continually increased to maintain user satisfaction over time.
7. Declining Quality: Quality declines unless rigorously maintained and adapted to changes in the operational environment.
8. Feedback System: Evolution processes are multi-level, multi-loop, multi-agent feedback systems and must be treated as such.
| Challenge | Strategy |
|---|---|
| Knowledge Retention | Document everything; use pair maintenance; cross-train team members on all major systems |
| Motivation | Recognize maintenance as critical, skilled work. Rotate developers through maintenance to build empathy. |
| Workload Management | Use prioritized backlog; protect maintenance staff from constant interruptions with scheduled maintenance windows |
| Skill Development | Maintenance engineers need deep system knowledge PLUS current technology skills — budget for both |
| Succession Planning | Identify key personnel whose departure would be catastrophic. Maintain documented knowledge transfer plans. |
Software cost estimation predicts the realistic effort, time, and money needed to build or maintain software. Poor estimation is one of the leading causes of project failure.
Standish CHAOS Report findings: a large share of projects are cancelled before completion; 52% of projects cost 189% of their original estimates; only 18% of projects are delivered on time and on budget.
During the requirements phase, estimates can be off by ±400%. This range narrows as more is known — at architecture: ±50%; at detailed design: ±25%; near completion: ±10%. The key insight: never over-commit early. Re-estimate at every major milestone as uncertainty reduces.
FPA measures software size from the user's perspective — how much functionality does the software provide, regardless of implementation language or technology.
| Component | Abbrev | What It Counts | Weights (Simple/Avg/Complex) |
|---|---|---|---|
| External Inputs | EI | Unique user inputs that add, change, or delete data (forms, screens) | 3 / 4 / 6 |
| External Outputs | EO | Reports, screens, outputs the system generates for users | 4 / 5 / 7 |
| External Inquiries | EQ | Input-output pairs: user queries that retrieve data immediately | 3 / 4 / 6 |
| Internal Logical Files | ILF | Groups of user data maintained BY the application | 7 / 10 / 15 |
| External Interface Files | EIF | Data referenced but maintained by ANOTHER system | 5 / 7 / 10 |
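Applying the table's weights, here is a sketch of an FP count for a hypothetical small billing system (the component counts are invented; the weights and the AFP formula are from this section):

```python
# Weights from the FPA table (Simple / Average / Complex).
WEIGHTS = {
    "EI":  {"simple": 3, "avg": 4,  "complex": 6},
    "EO":  {"simple": 4, "avg": 5,  "complex": 7},
    "EQ":  {"simple": 3, "avg": 4,  "complex": 6},
    "ILF": {"simple": 7, "avg": 10, "complex": 15},
    "EIF": {"simple": 5, "avg": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    """UFP = sum of (count x weight) over all components."""
    return sum(WEIGHTS[comp][cx] * n
               for comp, levels in counts.items()
               for cx, n in levels.items())

def adjusted_fp(ufp, gsc_total):
    """AFP = UFP x (0.65 + 0.01 x sum of the 14 General System Characteristics)."""
    return ufp * (0.65 + 0.01 * gsc_total)

# Hypothetical billing system:
counts = {
    "EI":  {"simple": 3, "avg": 2},   # 3*3 + 2*4 = 17
    "EO":  {"avg": 4},                # 4*5       = 20
    "EQ":  {"simple": 2},             # 2*3       = 6
    "ILF": {"avg": 2},                # 2*10      = 20
    "EIF": {"simple": 1},             # 1*5       = 5
}
ufp = unadjusted_fp(counts)
print(ufp, adjusted_fp(ufp, 35))      # 68 68.0 (GSC total of 35 gives factor 1.0)
```

Note that a GSC total of 35 is the neutral midpoint: the adjustment factor ranges from 0.65 (all GSCs zero) to 1.35 (all at 5), so AFP can swing ±35% around UFP.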
Developed by Lawrence Putnam in 1978 from analysis of hundreds of real projects. Uses the Rayleigh curve to model how effort distributes across a project's life.
C_k = Technology constant (productivity of the environment)
C_k = 2 → Poor environment, weak process
C_k = 8 → Good environment, good tools
C_k = 11 → Excellent environment, experienced team
⚠️ KEY INSIGHT: Because Effort ∝ 1/Time⁴, HALVING the schedule (Time/2) increases effort by a factor of 2⁴ = 16! This is why compressing schedules always costs more.
Putnam's model explains Brooks' Law mathematically: "Adding manpower to a late software project makes it later." Staffing follows a Rayleigh curve — peak effort is around 40% through the project. Adding people after the peak cannot change the curve enough to recover the schedule, but DOES massively increase coordination overhead and cost.
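Rearranging the software equation Size = C_k × Effort^(1/3) × Time^(4/3) gives Effort = (Size/C_k)³ / Time⁴, so the schedule/effort trade-off can be computed directly. The project numbers below are hypothetical:

```python
def putnam_effort(size_kloc, ck, time_years):
    """Effort from the software equation, rearranged:
    Size = C_k * Effort^(1/3) * Time^(4/3)  =>  Effort = (Size/C_k)^3 / Time^4
    Units are illustrative; real SLIM use requires calibrated C_k.
    """
    return (size_kloc / ck) ** 3 / time_years ** 4

# Hypothetical 100 KLOC project in a good environment (C_k = 8):
normal = putnam_effort(100, 8, 2.0)   # ~122 effort units over 2 years
rushed = putnam_effort(100, 8, 1.0)   # same size, half the schedule
print(rushed / normal)                # 16.0 — halving time costs 2^4 = 16x effort
```

This is Brooks' Law in equation form: compressing the schedule demands a quartic increase in effort, which no amount of late staffing can supply productively.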
COCOMO II (Constructive Cost Model II) by Barry Boehm at USC is the most researched algorithmic estimation model. It provides three sub-models for different project phases:
| Sub-Model | When Used | Size Measure | Accuracy |
|---|---|---|---|
| Application Composition | Early prototyping with 4GL/GUI tools | Object Points | Order of magnitude |
| Early Design | Architecture phase; limited design detail | Unadjusted Function Points | ±50% |
| Post-Architecture ⭐ | After detailed design — MOST ACCURATE | KSLOC (thousands of SLOC) | ±20% |
2.94 = Empirically calibrated constant
Size = Software size in KSLOC (thousands of source lines of code)
E = Exponent derived from 5 Scale Factors: E = 0.91 + 0.01 × ΣSF
EM_i = 17 Effort Multipliers (product of all; each near 1.0 at Nominal rating)
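The Post-Architecture equations can be evaluated in a few lines. The sketch below uses commonly published nominal scale-factor ratings (assumed here; check the COCOMO II model definition for exact calibration values) and all 17 effort multipliers at their Nominal value of 1.0:

```python
def cocomo2_estimate(ksloc, scale_factors, effort_multipliers):
    """COCOMO II Post-Architecture: nominal calibration constants 2.94 / 3.67."""
    E = 0.91 + 0.01 * sum(scale_factors)        # exponent from the 5 scale factors
    effort = 2.94 * ksloc ** E                  # person-months before multipliers
    for em in effort_multipliers:               # 17 EMs, each ~1.0 at Nominal
        effort *= em
    schedule = 3.67 * effort ** (0.28 + 0.2 * (E - 0.91))  # calendar months
    return effort, schedule, effort / schedule  # avg staff = effort / schedule

# Hypothetical 50 KSLOC project, all factors Nominal
# (assumed nominal SF ratings: PREC, FLEX, RESL, TEAM, PMAT):
nominal_sf = [3.72, 3.04, 4.24, 3.29, 4.68]
effort, schedule, staff = cocomo2_estimate(50, nominal_sf, [1.0] * 17)
print(f"{effort:.0f} PM, {schedule:.1f} months, {staff:.1f} people")
```

Because E > 1 at Nominal, effort grows faster than linearly with size — the diseconomy of scale the scale factors are designed to capture.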
| Scale Factor | Abbrev | Low Rating → More Effort Because... |
|---|---|---|
| Precedentedness | PREC | Team hasn't built this kind of system before — more unknowns and surprises |
| Development Flexibility | FLEX | Rigid requirements leave no room to find efficient solutions |
| Risk Resolution | RESL | Architecture not well understood — more exploration and rework needed |
| Team Cohesion | TEAM | Poor team dynamics create communication overhead and conflicts |
| Process Maturity | PMAT | Low CMMI level means more rework, inconsistent practices, inefficiency |
| Criterion | Function Points | Putnam/SLIM | COCOMO II |
|---|---|---|---|
| Basis | User functionality | Historical curve fitting | Parametric equations |
| Best Phase | Early — any phase | Schedule-driven projects | Post-Architecture phase |
| Key Strength | Language-independent | Shows schedule trade-offs | Rich calibration options |
| Key Limit | Counting can be subjective | Needs historical data | Needs size estimate first |
| Accuracy | ±25% (good data) | ±30–40% | ±20% post-architecture |
CIT316 — Key acronyms, formulas, and model comparisons at a glance
Effort = 2.94 × Size^E × ΠEM
E = 0.91 + 0.01 × ΣSF
Schedule = 3.67 × Effort^(0.28+0.2(E-0.91))
Staff = Effort ÷ Schedule
Size = C_k × Effort^(1/3) × Time^(4/3)
C_k = 2 (poor) | 8 (good) | 11 (excellent)
Effort = (Size/C_k)³ / Time⁴
UFP = Σ(Count × Weight)
AFP = UFP × (0.65 + 0.01 × ΣGSCs)
Effort = AFP ÷ Productivity Rate
Annual Cost = Dev Cost × Factor
Factor = 0.05 (simple) to 0.15 (complex)
Typical: 10% of original dev cost/year
| Model | Best For | Key Advantage | Key Risk |
|---|---|---|---|
| Waterfall | Fixed requirements, small teams | Simple, clear milestones | No change accommodation |
| Iterative | Evolving requirements | Early working software | Scope creep risk |
| Spiral | Large, high-risk projects | Systematic risk reduction | Complex management overhead |
| Agile/Scrum | Fast-changing business needs | Continuous delivery & feedback | Needs strong team discipline |
| RAD | Business apps, tight deadlines | Speed — 60–90 days | Quality risk if rushed |
| V-Model | Safety-critical systems | Every phase has paired test | Cannot handle change |
| XP | Small teams, changing reqs | Technical excellence culture | Requires expert, committed team |