Lab 4

In this lab we will build a simulator for the Monty Hall problem. Our emphasis is on three aspects:

The Monty-Hall problem

The Monty-Hall problem is as follows: You are a contestant on a game show facing three closed doors. Behind one door is a prize; behind the other two there is nothing. You pick a door. The host, who knows where the prize is, opens one of the two remaining doors to reveal that it is empty, and then offers you a choice: stay with your original door, or switch to the other unopened door.

It can be mathematically proven that it is to your advantage to switch. What we will do in this assignment is create a simulator that will help us see this fact. It will work as follows:
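Before diving into the project structure, here is an informal sketch (not the lab's actual code) of how such a simulation could look in JavaScript; the function names `playRound` and `simulate` are just for illustration.

```js
// Sketch of the Monty Hall simulation (illustration only).
function playRound(doSwitch) {
  const prize = Math.floor(Math.random() * 3);     // door hiding the prize: 0, 1 or 2
  const firstPick = Math.floor(Math.random() * 3); // contestant's initial choice
  // The host opens an empty door that is not the contestant's pick.
  // Staying wins exactly when the first pick was right;
  // switching wins exactly when the first pick was wrong.
  return doSwitch ? firstPick !== prize : firstPick === prize;
}

function simulate(rounds, doSwitch) {
  let wins = 0;
  for (let i = 0; i < rounds; i++) {
    if (playRound(doSwitch)) wins++;
  }
  return wins / rounds;
}

console.log("stay:  ", simulate(10000, false)); // roughly 1/3
console.log("switch:", simulate(10000, true));  // roughly 2/3
```

Running this a few times should already suggest the 1/3-versus-2/3 split that the full simulator will let us explore interactively.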

Updating your project

Assignment

In your project you will find a Lab4 folder. It has some startup files, but you will also need to add your own files as you make progress.

Running a local server

In order to test the application, you will need to open the index.html file as if it were a file served from a server. To achieve that, you will need to be running a local server.

If you are on the lab computers, then open up a terminal window and navigate to the Lab4 folder of your project. In that folder, execute `http-server` from the terminal. You should see a response that looks something like this:

Starting up http-server, serving ./
Available on:
  http://127.0.0.1:8080
  http://10.83.0.52:8080
Hit CTRL-C to stop the server

Keeping that terminal window open, open up your web browser and enter the following in the address bar:

http://127.0.0.1:8080/index.html

where the numbers should be whatever your system is reporting.

If you want to do this on your own machines, you need to either know a way to run a local server, or you need to install the http-server package from NPM. The instructions to do that would likely be:

npm install -g http-server

You may need to install Node and NPM if you don’t have those set up yet, and you may also need to run the above command as administrator.

You can stop the server at any time by hitting Ctrl-C on the terminal.

Issues, Milestones, Labels

We will start by setting up GitHub to help us track our progress.

Milestones vs Labels: An issue can have many labels associated with it, but it can only belong to one milestone.

Let’s get started!

Creating a milestone

Creating custom labels

Repeat this process to create a label “doors” that will contain functionality related to having doors, and one called “game” that will contain functionality related to the overall game process (which guides the user to choose a door, then choose to switch or stay etc).

You will notice that the labels we are creating are fundamentally different from the ones provided by GitHub: those focus on the type of issue (bug, enhancement, etc.), while ours focus on the functionality affected. There are no rules for what labels to use; in general, do whatever helps you organize the issues.

Creating our first issue

We’ll create an overall planning issue that contains our plan of action.

Working on the assignment

Now we start our work on the assignment.

Verifying our testing environment

The first order of business is to verify that our testing environment is all set up. You should have already started http-server from the terminal, serving the directory where your files are checked out. With that running, you should be able to open the two files index.html and tests.html and see no worrisome errors in the console for either page.

If it all seems to work and you see one passing test and no errors in the console on either page, go ahead and check that first checkbox in our master issue that says “make sure we can run tests”.

Creating the Score class

We will start by creating the Score class. We will do it using a test-driven approach.

First, we will create an issue that describes in more detail how the class will work. Go ahead and create such an issue, titled “Create Score class”. Add the following to the comment area for the issue:

The Score class maintains the score for the simulation. It keeps track of four instance variables: `switchWins`, `switchLosses`, `stayWins`, `stayLosses`. It also provides two methods:

- `reset()` resets all the counts to 0.
- `addResult(userAction, result)` takes as input the user's action and the result of a play and updates the counts accordingly. The possible values for `userAction` and `result` are given by class constants that we will need to create: `Score.ACTION_STAY`, `Score.ACTION_SWITCH`, `Score.RESULT_WIN`, `Score.RESULT_LOSS`. These simply stand for the corresponding strings "stay", "switch", etc.

Make sure to add the scorecard label to the issue, and to add it to your milestone. After you create the issue, take note of its number (#XXX). You will need it in a few moments.

Now that we have an issue, we can start work on our Score class. We start by creating a test class:
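A minimal sketch of what that might look like, assuming tests.html loads Mocha and Chai (with `expect` available globally) and that the files are named something like score.test.js and score.js (the names here are placeholders; use whatever the startup files expect):

```js
// score.test.js (placeholder name) -- first test for the Score class.
describe("Score", function () {
  it("starts with all four counts at 0", function () {
    const score = new Score();
    expect(score.switchWins).to.equal(0);
    expect(score.switchLosses).to.equal(0);
    expect(score.stayWins).to.equal(0);
    expect(score.stayLosses).to.equal(0);
  });
});
```

And a minimal Score class that makes this test pass:

```js
// score.js (placeholder name) -- just enough to make the first test pass.
class Score {
  constructor() {
    this.switchWins = 0;
    this.switchLosses = 0;
    this.stayWins = 0;
    this.stayLosses = 0;
  }
}
```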

Refactoring

OK, we are not done with the class yet. Let's look back at our issue comment: it looks like we need to add the two methods reset and addResult. It would be nice if we could write reset first, but since reset simply sets the values to 0, and they are already 0, there is no meaningful test we can write for it. We will therefore start with addResult. But before we do so, let's take a closer look at our test code. We are about to add some more `it` blocks, and each of them will need to work with a Score instance. It would be nice if we didn't have to create that instance by hand every single time. Here's how we will do that:
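One common way to do this in Mocha is a `beforeEach` hook; a sketch of the test file after this step (assuming the setup sketched above) might look like this:

```js
describe("Score", function () {
  let score;

  // Runs before every `it`, so each test gets a fresh Score instance.
  beforeEach(function () {
    score = new Score();
  });

  it("starts with all four counts at 0", function () {
    expect(score.switchWins).to.equal(0);
    expect(score.switchLosses).to.equal(0);
    expect(score.stayWins).to.equal(0);
    expect(score.stayLosses).to.equal(0);
  });
});
```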

What we did is called refactoring: We changed the structure of the code to prepare it for future change, but without changing the behavior. We will be doing a number of these refactorings. In fact, we’ll do one more right now. This is a popular refactoring, called extract function:

Look at the body of our test. We test that the four stored values match four specific numbers, namely 0. I imagine we’ll need to do a lot more similar checks in future tests. Instead of repeating these four lines, we’ll turn them into a function:
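A sketch of this step, with a helper named `checkValues` (the name is just a placeholder) defined inside the describe block so it can see the shared `score` variable:

```js
describe("Score", function () {
  let score;

  beforeEach(function () {
    score = new Score();
  });

  // Extracted helper: for now it can only check that everything is 0.
  function checkValues() {
    expect(score.switchWins).to.equal(0);
    expect(score.switchLosses).to.equal(0);
    expect(score.stayWins).to.equal(0);
    expect(score.stayLosses).to.equal(0);
  }

  it("starts with all four counts at 0", function () {
    checkValues();
  });
});
```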

Hm, OK, this is nice, but it's not going to do us much good unless the values are always going to be 0. What we need to do next is turn these values into function parameters, and the refactoring that achieves that is called extract parameter:
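As a sketch, extracting the first zero into a `switchWins` parameter might look like this (the rest of the describe block stays the same):

```js
// Helper after extracting the first parameter; the call site now passes the 0 explicitly.
function checkValues(switchWins) {
  expect(score.switchWins).to.equal(switchWins);
  expect(score.switchLosses).to.equal(0);
  expect(score.stayWins).to.equal(0);
  expect(score.stayLosses).to.equal(0);
}

it("starts with all four counts at 0", function () {
  checkValues(0);
});
```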

Now repeat for the other three zeros, to add parameters switchLosses, stayWins and stayLosses. Run your tests after each parameter to make sure you did not mess up anywhere along the way.

Go ahead and create a commit from all these changes. You can put “test refactoring” as its summary. Make sure to reference your issue number!

More tests

Now we are finally ready to add more tests! We start with the test for addResult.
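A sketch of what that test might look like, using the `checkValues(switchWins, switchLosses, stayWins, stayLosses)` helper from before:

```js
it("updates the correct count when addResult is called", function () {
  score.addResult(Score.ACTION_SWITCH, Score.RESULT_WIN);
  checkValues(1, 0, 0, 0);
  score.addResult(Score.ACTION_STAY, Score.RESULT_LOSS);
  checkValues(1, 0, 0, 1);
});
```

And one possible version of the class that makes it pass (the exact strings behind the constants are an assumption):

```js
class Score {
  constructor() {
    this.switchWins = 0;
    this.switchLosses = 0;
    this.stayWins = 0;
    this.stayLosses = 0;
  }

  addResult(userAction, result) {
    if (userAction === Score.ACTION_SWITCH) {
      if (result === Score.RESULT_WIN) this.switchWins++;
      else this.switchLosses++;
    } else {
      if (result === Score.RESULT_WIN) this.stayWins++;
      else this.stayLosses++;
    }
  }
}

Score.ACTION_STAY = "stay";
Score.ACTION_SWITCH = "switch";
Score.RESULT_WIN = "win";
Score.RESULT_LOSS = "loss";
```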

Great! Time for another commit now, saying that you added the addResult method. Don’t forget to reference your issue number!

Now we just need to add the reset function and its tests.
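A sketch of that test and method; the test reuses the existing helpers, and the method goes inside the Score class (after which the constructor could be simplified to just call `this.reset()`):

```js
// Added to the describe block:
it("resets all counts to 0 after some results have been recorded", function () {
  score.addResult(Score.ACTION_SWITCH, Score.RESULT_WIN);
  score.addResult(Score.ACTION_STAY, Score.RESULT_LOSS);
  score.reset();
  checkValues(0, 0, 0, 0);
});
```

```js
// Added inside the Score class:
reset() {
  this.switchWins = 0;
  this.switchLosses = 0;
  this.stayWins = 0;
  this.stayLosses = 0;
}
```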

Now commit this, and close the issue! You can either do it by including close #... in the commit message, or from the GitHub interface.

Now that we have closed this issue, go back to your master checklist issue and check off the second item.

This concludes part a of the lab. Due to its size, the lab is broken into parts. Continue to part b.