Week 11 ToDos

1. Watch Lecture Videos

Watch the week 11 lecture videos in advance of your team meeting.

Videos
Software Estimation Video Slides
Estimating Size Video Slides
Estimating Effort Video Slides
Lecture by Tales Video

Lecture References:
See Resources→References for instructions on how to access lecture references.

  • Steve McConnell, Software Estimation: Demystifying the Black Art, Microsoft Press, 2006.

Details

Tales (our TA) has recorded a short lecture on a Reqs ’n Specs topic of his choice (in consultation with me). It is part of the official course content and will be on the final exam.

Tales has also prepared a second short lecture, to be presented at the beginning of class. It, too, is part of the official course content.

Both of these lectures are part of the opportunities we provide to graduate-student TAs to improve and document their teaching abilities.

2. Solution-Fit Interviews

Complete your solution-fit interviews. Specifically, interview at least five members of your project’s target customer segment(s) to see how well your proposed product (which you have been refining for the last several weeks) addresses their primary problems. You may re-interview the same people you interviewed previously, or you may recruit new participants. If you recruit new participants, remember that interviewees cannot be minors under the age of 13 and cannot be members of your team or close friends or family of team members. Ideally, try to find target users beyond fellow students. You must follow these ethics procedures:

  • How you recruit interviewees will depend on what channels you use, but try as much as possible to use language from the Office of Research Ethics sample recruitment materials, to ensure that your approach is professional and ethical.
  • Use the provided Participant Information and Verbal Consent Form (without alteration) to collect verbal consent responses. Keep a record of the participants and their verbal-consent responses in a spreadsheet.
  • You CANNOT record the audio or video of your interviews, so keep good notes.
  • Your raw interview data must not contain identifiable information about the participants.
  • Keep all of your raw interview data and your verbal-consent spreadsheet on a password-protected computer (or data server or cloud service) in files that are private and viewable by your team only.

Consider using your Navigation Diagram and associated UI screens as a paper prototype of your project, to show interviewees your current ideas of what your product (and interacting with it) will be like.

Your writeup should include anonymized data about the interviewees. This includes the type of stakeholder, type of customer segment, and demographics of each interviewee (e.g., gender, race, age, education, employment, level of experience). Provide a clear and professional summary of the results of the interviews (e.g., the ratio (#responses/#interviewees) per test answer, insights learned). With respect to insights learned, explain how the interview results and the interviewees’ feedback led to any fine-tuning of your proposed product. Compile the data, analysis, and results of your Solution-Fit Interviews in a PDF file named «TeamName»_D11.pdf and submit it to LEARN.
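One way to compute the per-answer ratios mentioned above is a simple tally. The sketch below uses hypothetical interview data (the question and answers are made up for illustration, not taken from any real project):

```python
from collections import Counter

def answer_ratios(responses):
    """Given one question's list of answers (one per interviewee),
    return the ratio (#responses / #interviewees) per distinct answer."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: count / total for answer, count in counts.items()}

# Hypothetical data: five interviewees' answers to one test question.
q1 = ["yes", "yes", "no", "yes", "unsure"]
print(answer_ratios(q1))  # -> {'yes': 0.6, 'no': 0.2, 'unsure': 0.2}
```

Reporting the ratios alongside the raw counts keeps the summary interpretable even when different questions were answered by different numbers of interviewees.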

Here is an example Solution-Fit Interview Analysis (including hypotheses, interview questions, and interview results) for the Curb’n project.

Grading Scheme: The interview writeup will be marked on the basis of (1) the Completeness of the writeup, (2) the Quality and Clarity of your explanations of insights learned and the resulting fine-tuning of your proposed product, and (3) the Professionalism of the presentation of the writeup. See the Week 11 Rubric for details.


3. Software Estimation

You are to create an estimate of the amount of effort it would take to implement the top-priority use case of your project (identified in Deliverable #6):

  1. For each of the use case’s Scenarios (the main, alternative, and exception scenarios defined in Deliverable #7), list the function-point elements involved in the scenario (i.e., the inputs, outputs, queries, internal files, and external interfaces/APIs). Then each team member is to use their own expert judgment to provide best-case, worst-case, and most-likely-case estimates of the function points in the scenario. For alternative and exception scenarios, your function-point estimates should pertain only to the new behaviour described in the scenario.
  2. As a team, discuss your respective individual estimates and derive a team consensus for best-, worst-, and most-likely-case estimates of function points for each scenario. Explain how you derived your team-based estimates from the members’ estimates.
  3. Derive your team’s best-case, worst-case, and most-likely-case estimates of code size (LOC) for each scenario, based on your team’s function-points estimates and the programming language(s) that will be used to implement the scenario. Here is a conversion table that you can use. Explain how you arrived at your team’s final best-case, most-likely-case, and worst-case estimates of code sizes (e.g., choice of programming language(s) to be used and why, choice of conversion table).
  4. Use the pessimistic PERT equation to compute an expected-case estimate of code size for each scenario.
  5. Use your best-case, expected-case (PERT), and worst-case code-size estimations of the seven scenarios to compute an aggregate expected-case estimate of code size for the use case. Compute an aggregate standard deviation for the scenario size estimates and use this to (1) compute aggregate best-case and worst-case estimates of the size of the use case, and (2) compute use-case size estimates with 25%, 50%, 75%, and 100% confidence.
  6. Using industry-average data for converting estimates of code size to estimates of effort, convert all of the above code-size estimates for the use case into effort estimates (that is, best-case, 25%-confidence case, expected-case, 75%-confidence case, worst-case effort estimates). Provide a reference to the conversion chart that you use. (As a last resort you can use the QSM benchmark for business systems or slide 4 from the Estimating Effort lecture.) Effort should be measured in staff days, weeks, or months - show units of measure.
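Steps 4 and 5 above can be sketched numerically. The code below uses made-up per-scenario LOC estimates and the PERT-style formulas from McConnell’s Software Estimation (the pessimistic PERT variant weights the worst case more heavily than classic PERT); treat it as an illustration of the arithmetic, not as your team’s actual data:

```python
import math

def pert_pessimistic(best, likely, worst):
    """Pessimistic PERT expected case:
    (best + 3 * most-likely + 2 * worst) / 6."""
    return (best + 3 * likely + 2 * worst) / 6

# Hypothetical (best, most-likely, worst) LOC estimates for seven scenarios.
scenarios = [
    (200, 300, 600), (100, 150, 250), (150, 250, 500),
    (80, 120, 200), (120, 180, 300), (60, 100, 180), (90, 140, 260),
]

# Step 4/5: expected-case sizes add directly across scenarios.
expected_total = sum(pert_pessimistic(b, m, w) for b, m, w in scenarios)

# Per-scenario standard deviation ~ (worst - best) / 6; variances add,
# so the aggregate SD is the square root of the sum of squares.
agg_sd = math.sqrt(sum(((w - b) / 6) ** 2 for b, m, w in scenarios))

# Confidence levels via normal-distribution z-values:
# 25% -> expected - 0.67 SD, 50% -> expected, 75% -> expected + 0.67 SD.
for pct, z in [(25, -0.67), (50, 0.0), (75, 0.67)]:
    print(f"{pct}% confidence: {expected_total + z * agg_sd:.0f} LOC")
```

Step 6 then converts each of these LOC figures to effort by dividing by an industry-average productivity figure (LOC per staff-month) taken from whichever conversion chart you cite.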

Compile all of the above into a report on your team’s estimate of the effort to implement your project’s top-priority use case. Use the worksheets in the provided Cost Estimation Template spreadsheet to record your team’s inputs for steps 1-4, and submit the spreadsheet to LEARN. Include your explanations for steps 2 and 3, and your team’s computations for steps 5 and 6, in the PDF file named «TeamName»_D11.pdf and submit it to LEARN.

Here is an example Cost Estimation Spreadsheet and writeup for the Modern Family project.

Grading Scheme: Your software-estimation report will be marked on the basis of (1) the Reasonableness of your expert-judgement estimations, (2) the Correctness with which the estimation techniques are applied, (3) the Completeness of the work shown, and (4) the Clarity and Professionalism of the writeup. See the Week 11 Rubric for details.


4. Continue Working on the SRS

Please refer to the Project→SRS page.


Due Next Monday (March 30, 8:59pm ET)

  • Every student: Complete the online Team Health Check survey
  • Every team: Create a single PDF named «TeamName»_D11.pdf that includes the following, and submit it to LEARN
    • Team Name, and Team Members who attended the Team Meeting (Nov. 25)
    • Report on the Analysis and Results of your Solution-Fit Interviews
    • Report on your team’s Estimate of Effort to implement your project’s top-priority use case
  • Every team: Submit your Excel spreadsheet named «TeamName»_CostEstimation.xlsx that includes your team’s inputs to your effort estimations.