No8am Saved Schedule Analysis

No8am is a course scheduling website available to students at Bucknell University. It has been running since our sophomore year in 2014 and has collected course information from hundreds of students. This document presents how we explored this data using various data mining techniques in R.

Data Description

Data Creation:
A user’s schedule is sent to a database each time the user clicks the save schedule button on the website. The data has been exported from the database as a list of saved course schedules spanning multiple semesters.

Data Schema:
Each row of the data represents a schedule that a student saved. Each column represents an attribute of the schedule (e.g. creation time, a course section in the schedule).

Data Import

Importing Data:
We use the jsonlite library to parse the file containing the saved schedules, which is stored in a format similar to JSON. We had to set the flatten parameter to TRUE in the library’s fromJSON() function so that the nested data would be converted to a table format instead of a list of lists.

When first converting the data, the JSON parsing function failed because the file contained null values. The null values were caused by students picking a course but not a specific section. We resolved the issue by substituting the null values with placeholder data similar to the values for courses that do have a section selected.

Note:

  • The data for each course-section column contains the CRN and section numbers.
  • A course-section can be the main section, a lab, a recitation, or a problem session (these are denoted by the end of the column names, e.g. .main).
  • The reftime is the UNIX timestamp (in milliseconds) of when a schedule was created.
# set filename of data
fileName <- "data/no8am_export_11-9.txt"

# open connection to file and read in lines
conn <- file(fileName,open="r")
linn <-readLines(conn)
close(conn)

# remove null values from JSON string
linn <- gsub("null", "[\"000000\",\"00\"]", linn)

# convert data to JSON formatted string
linnString <- paste(linn, collapse = ',')
linnString <- paste("[", linnString, "]")

# create dataframe from JSON string
rawData <- fromJSON(linnString, flatten = T)

# the raw data
head(rawData[1:5])
##   kind      reftime path.kind path.collection path.key
## 1 item 1.447178e+12      item           links     041e
## 2 item 1.478719e+12      item           links     0DKC
## 3 item 1.478543e+12      item           links     03uX
## 4 item 1.446748e+12      item           links     00qa
## 5 item 1.471483e+12      item           links     09lq
## 6 item 1.447289e+12      item           links     01pe
# section of raw data showing course information
head(rawData[10:15])
##   value.semester value.DANC 340.main value.MGMT 102.main
## 1           <NA>           57959, 01           57675, 02
## 2           <NA>                NULL                NULL
## 3           <NA>                NULL                NULL
## 4           <NA>                NULL                NULL
## 5           <NA>                NULL                NULL
## 6           <NA>                NULL                NULL
##   value.MGMT 201.main value.SOCI 100.main value.DANC 275.main
## 1           55872, 03           57020, 03           57965, 01
## 2                NULL                NULL                NULL
## 3                NULL                NULL                NULL
## 4                NULL                NULL                NULL
## 5                NULL                NULL                NULL
## 6                NULL                NULL                NULL
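As a minimal illustration of what flatten = TRUE does, a sketch on toy JSON (not the actual export):

```r
library(jsonlite)

# flatten = TRUE turns nested JSON objects into dotted column names
raw <- '[{"kind": "item", "path": {"key": "041e"}}]'
df <- fromJSON(raw, flatten = TRUE)

names(df)  # "kind" "path.key"
```

This is why columns like path.key and value.semester appear in the raw data above.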

Data Cleaning

Cleaning our data was an iterative process, and we did not do all of the steps at the beginning. Initially, we cleaned the obvious aspects, such as removing irrelevant data and converting columns to correct data types. However, as we used the data, we discovered more features that we had to clean later, such as the presence of non-Bucknell courses.

Steps:
First, we removed some unnecessary columns that did not provide any new or useful information. We also reformatted the column names. Next, we filtered out courses that are not offered by Bucknell; these were added to the data when Nadeem was experimenting with expanding the website to other schools. Additionally, we converted attributes in the dataframe to their correct types, such as the UNIX timestamp representing when a schedule was created.

# generate vector of columns to drop
drops <- c("kind", "path.reftime" ,"path.kind", "path.ref", "path.collection",
           "value.semester", "value.hello", "value.customName",
           "value.courseData.CHEM 202.main", "value.courseData.CHEM 202.R",
           "value.courseData.CHEM 202.L", "value.courseData.EDUC 201.main",
           "value.courseData.RELI 200.main", "value.courseData.MATH 201.main",
           "value.courseData.ACFM 261.main", "value.courseData.ECON 103.main",
           "value.courseData.CSCI 203.main", "value.courseData.CSCI 203.L"
)

# drop unnecessary columns
data = rawData[ , !(names(rawData) %in% drops)]

# clean column names
colnames(data) = sapply(colnames(data), function(x) {gsub("value.","",x)})

# drop non-bucknell courses
bucknellDepartments = c("ACFM", "AFST", "ANBE", "ANTH", "ARBC", "ARTH", "ARST", "ASTR", "BIOL", "BMEG", "CHEG", "CHEM", "CHIN", "CEEG", "CLAS", "CSCI", "ENCW", "DANC", "EAST", "ECON", "EDUC", "ECEG", "ENGR", "ENGL", "ENST", "ENFS", "FOUN", "FREN", "GEOG", "GEOL", "GRMN", "GLBM", "GREK", "HEBR", "HIST", "HUMN", "IREL", "ITAL", "JAPN", "LATN", "LAMS", "LING", "ENLS", "MGMT", "MSUS", "MIDE", "MATH", "MECH", "MILS", "MUSC", "NEUR", "PHIL", "PHYS", "POLS", "PSYC", "RELI", "RESC", "RUSS", "SIGN", "SOCI", "SPAN", "THEA", "UNIV", "WMST", "ELEC")

coursesToDropI = sapply(colnames(data)[3:ncol(data)],
                          function(x) {
                            !(unlist(strsplit(x, ' '))[1] %in% bucknellDepartments)
                          }
                        )

# remove unnecessary columns
coursesToDrop = colnames(data)[3:ncol(data)][coursesToDropI]
data = data[ , !(names(data) %in% coursesToDrop)]

# convert unix time to date
data$reftime = as.POSIXct(unlist(data$reftime)/1000, origin="1970-01-01", tz="UTC")

# show a snippet of the data
head(data[1:6])
##               reftime path.key DANC 340.main MGMT 102.main MGMT 201.main
## 1 2015-11-10 17:59:33     041e     57959, 01     57675, 02     55872, 03
## 2 2016-11-09 19:22:48     0DKC          NULL          NULL          NULL
## 3 2016-11-07 18:30:13     03uX          NULL          NULL          NULL
## 4 2015-11-05 18:20:02     00qa          NULL          NULL          NULL
## 5 2016-08-18 01:13:08     09lq          NULL          NULL          NULL
## 6 2015-11-12 00:51:04     01pe          NULL          NULL          NULL
##   SOCI 100.main
## 1     57020, 03
## 2          NULL
## 3          NULL
## 4          NULL
## 5          NULL
## 6          NULL

Note:

  • At times you will see us subset data using [3:ncol(data)]; this selects only the columns for the course sections (skipping reftime and path.key).

Creating a Binary Dataset

At this point the data frame is clean and in a good format. However, we chose to create another, simplified dataset by abstracting away the CRN and section numbers of the courses chosen by a student. We do this by replacing every selected section with a 1 and every other value with a 0.

# convert cells to have 1's if a section is selected for a course or 0 if not
binaryData = cbind(data[1:2], sapply(data[3:ncol(data)], function(x) {
  x != "NULL"
}))

# Remove empty schedules from data and binary data
rmRow = list()
for (row in 1:nrow(binaryData)) {
  if (sum(binaryData[row,]==T) == 0) {
    rmRow <- append(rmRow,row)
  }
}

binaryData <- binaryData[!(1:nrow(binaryData) %in% rmRow),]
data <- data[!(1:nrow(data) %in% rmRow),]

# show subsection of cleaned data
head(binaryData[1:4])
##               reftime path.key DANC 340.main MGMT 102.main
## 1 2015-11-10 17:59:33     041e          TRUE          TRUE
## 2 2016-11-09 19:22:48     0DKC         FALSE         FALSE
## 3 2016-11-07 18:30:13     03uX         FALSE         FALSE
## 4 2015-11-05 18:20:02     00qa         FALSE         FALSE
## 6 2015-11-12 00:51:04     01pe         FALSE         FALSE
## 7 2015-02-24 16:01:08     0IUS         FALSE         FALSE
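The row-by-row loop above works fine at this scale; for reference, the same empty-schedule filter can be expressed vectorized with rowSums() (a sketch on toy data, not the authors' code):

```r
# toy stand-in for binaryData: two metadata columns plus logical course columns
toy <- data.frame(
  reftime  = c("t1", "t2", "t3"),
  path.key = c("0a", "0b", "0c"),
  `CSCI 203.main` = c(TRUE, FALSE, FALSE),
  `MATH 201.main` = c(FALSE, FALSE, TRUE),
  check.names = FALSE
)

# keep only schedules with at least one selected section
toyKept <- toy[rowSums(toy[3:ncol(toy)]) > 0, ]

nrow(toyKept)  # the empty schedule ("0b") is dropped
```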

Creating Useful Datasets

Note:

  • We create these datasets early on in the document to avoid clutter while we perform analyses and create plots.
  • In the later sections, there are links to their respective data in this section.

Filter For Main Sections

Here we create a list of all the column names that correspond to a main section of a course (i.e. it ignores labs, recitations, etc.). The list will allow us to get a subset of our data that only contains the main courses.

# get all column names (names of courses)
courseNames <- colnames(data)[3:ncol(data)]

# get the type of each section (main section, recitation, lab, or problem session)
courseType <- sapply(strsplit(courseNames,"[.]"), function(x) x[2])

# get only main sections of each course
mains <- as.list(courseNames[courseType == "main"])

# look at head of list
head(unlist(mains))
## [1] "DANC 340.main" "MGMT 102.main" "MGMT 201.main" "SOCI 100.main"
## [5] "DANC 275.main" "MGMT 101.main"

Grouping Courses by Department

Here we create a list with a nested list for each department. Each nested list contains the courses that are in that department, so we can easily access all courses of a specified department.

# create list of empty lists for each department
coursesByDepartment <- vector("list", length(bucknellDepartments))

# name each nested list by department
names(coursesByDepartment) <- bucknellDepartments

# Fill each list for a department with their respective courses
for (i in mains) {
  deptName = unlist(strsplit(i, ' '))[1]
  coursesByDepartment[[deptName]] <- c(coursesByDepartment[[deptName]], i)
}

# look at head of dataset
head(coursesByDepartment)
## $ACFM
##  [1] "ACFM 261.main" "ACFM 220.main" "ACFM 351.main" "ACFM 370.main"
##  [5] "ACFM 340.main" "ACFM 357.main" "ACFM 377.main" "ACFM 352.main"
##  [9] "ACFM 372.main" "ACFM 476.main" "ACFM 378.main" "ACFM 359.main"
## [13] "ACFM 353.main" "ACFM 355.main" "ACFM 354.main" "ACFM 375.main"
## [17] "ACFM 365.main"
##
## $AFST
## [1] "AFST 229.main" "AFST 199.main" "AFST 280.main" "AFST 274.main"
## [5] "AFST 290.main"
##
## $ANBE
## [1] "ANBE 266.main" "ANBE 620.main" "ANBE 296.main" "ANBE 372.main"
## [5] "ANBE 370.main"
##
## $ANTH
##  [1] "ANTH 109.main" "ANTH 229.main" "ANTH 270.main" "ANTH 260.main"
##  [5] "ANTH 283.main" "ANTH 250.main" "ANTH 202.main" "ANTH 204.main"
##  [9] "ANTH 243.main" "ANTH 290.main" "ANTH 266.main" "ANTH 201.main"
## [13] "ANTH 248.main" "ANTH 232.main" "ANTH 203.main" "ANTH 256.main"
## [17] "ANTH 330.main" "ANTH 288.main" "ANTH 267.main" "ANTH 200.main"
##
## $ARBC
## [1] "ARBC 203.main" "ARBC 103.main" "ARBC 102.main" "ARBC 202.main"
## [5] "ARBC 104.main"
##
## $ARTH
##  [1] "ARTH 240.main" "ARTH 207.main" "ARTH 241.main" "ARTH 204.main"
##  [5] "ARTH 275.main" "ARTH 208.main" "ARTH 381.main" "ARTH 102.main"
##  [9] "ARTH 303.main" "ARTH 101.main"

Create a Dataset for Predictive Models

In this section we create a data frame that will be used to generate predictive models for the CSCI department. For each schedule, it stores the number of courses scheduled in each department.

In order for the predictive models to use the data frame, we had to make each column a factor and give the CSCI department a categorical outcome (i.e. the models cannot predict numeric labels like “1” and “0”, so we use “yes” and “no” to indicate whether or not a schedule contains a CSCI course).

# FUNCTION: Converts columns to factor (except department being predicted on)
convertToFactor <- function(df) {
  for (dept in bucknellDepartments) {
    if (dept != "CSCI") {
      df[, dept] <- as.factor(df[, dept])
    }
  }
  return(df)
}

# Create data frame with columns for if a course is scheduled
binaryCourseData = binaryData[3:ncol(binaryData)]

# Select only the main courses
binaryCourseDataMains = binaryData[, unlist(mains)]

# From the list of main courses, it gets the department names
colDepts <- sapply(mains, function(x) {
    return(unlist(strsplit(x, ' '))[1])
});

# Create empty dataframe with department names for columns
scheduleByDepartment <- data.frame(matrix(ncol = length(bucknellDepartments), nrow = nrow(binaryData)))
colnames(scheduleByDepartment) <- bucknellDepartments

# fill scheduleByDepartment with the number of times a department appears in a schedule
for (row in 1:nrow(scheduleByDepartment)) {                # for each row
  for (dept in bucknellDepartments) {                      # for each department in row
    # select all columns (courses) of a department and sum their occurrences
    scheduleByDepartment[row, dept] = sum( as.numeric(binaryCourseDataMains[row, colDepts == dept]))
  }
}

# initialize dataframe for predictive algorithms on the CSCI department
departmentCount <- scheduleByDepartment

# converts columns in departmentCount to factors
departmentCount <- convertToFactor(departmentCount)

# Convert the department being predicted on into a categorical form (cannot be represented as a number)
departmentCount$CSCI[departmentCount$CSCI > 0] <- 1        # a schedule's CSCI column becomes 1 if it has any CSCI courses
departmentCount$CSCI[departmentCount$CSCI == 1] <- "yes"   # convert 1's to "yes"
departmentCount$CSCI[departmentCount$CSCI == "0"] <- "no"  # convert 0's to "no"
departmentCount$CSCI <- as.factor(departmentCount$CSCI)    # convert CSCI column to factor

# look at structure and head of dataset
str(departmentCount)
## 'data.frame':    2710 obs. of  65 variables:
##  $ ACFM: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ AFST: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ANBE: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ANTH: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 2 1 1 1 ...
##  $ ARBC: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ARTH: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ARST: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ASTR: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ BIOL: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ BMEG: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 3 1 ...
##  $ CHEG: Factor w/ 7 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ CHEM: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ CHIN: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ CEEG: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ CLAS: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ CSCI: Factor w/ 2 levels "no","yes": 1 2 1 1 1 1 1 2 1 1 ...
##  $ ENCW: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ DANC: Factor w/ 3 levels "0","1","2": 3 1 1 1 1 1 1 1 1 1 ...
##  $ EAST: Factor w/ 4 levels "0","1","2","4": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ECON: Factor w/ 6 levels "0","1","2","3",..: 1 2 3 1 2 1 1 1 1 1 ...
##  $ EDUC: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ECEG: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 3 1 1 1 1 1 1 ...
##  $ ENGR: Factor w/ 4 levels "0","1","2","3": 1 1 1 2 1 1 1 1 1 1 ...
##  $ ENGL: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 2 1 2 1 2 1 1 ...
##  $ ENST: Factor w/ 7 levels "0","1","2","3",..: 1 1 1 1 1 1 2 1 1 1 ...
##  $ ENFS: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ FOUN: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ FREN: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ GEOG: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ GEOL: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 2 ...
##  $ GRMN: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ GLBM: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ GREK: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ HEBR: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ HIST: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ HUMN: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ IREL: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ITAL: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ JAPN: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ LATN: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ LAMS: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ LING: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ ENLS: Factor w/ 6 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ MGMT: Factor w/ 5 levels "0","1","2","3",..: 5 1 1 1 2 1 1 1 2 1 ...
##  $ MSUS: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ MIDE: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ MATH: Factor w/ 4 levels "0","1","2","3": 1 2 1 2 2 1 3 2 1 3 ...
##  $ MECH: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ MILS: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ MUSC: Factor w/ 5 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ NEUR: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ PHIL: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 2 1 1 1 ...
##  $ PHYS: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 2 1 2 1 1 ...
##  $ POLS: Factor w/ 6 levels "0","1","2","3",..: 1 1 1 1 2 1 1 1 1 1 ...
##  $ PSYC: Factor w/ 7 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 2 1 ...
##  $ RELI: Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ RESC: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ RUSS: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ SIGN: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
##  $ SOCI: Factor w/ 6 levels "0","1","2","3",..: 2 1 1 1 1 1 1 1 1 1 ...
##  $ SPAN: Factor w/ 4 levels "0","1","2","4": 1 2 3 1 1 1 1 1 1 1 ...
##  $ THEA: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ UNIV: Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ WMST: Factor w/ 3 levels "0","1","2": 1 1 2 1 1 1 1 1 1 1 ...
##  $ ELEC: Factor w/ 2 levels "0","2": 1 1 1 1 1 1 1 1 1 1 ...
head(departmentCount)
##   ACFM AFST ANBE ANTH ARBC ARTH ARST ASTR BIOL BMEG CHEG CHEM CHIN CEEG
## 1    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 2    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 3    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 4    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 5    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 6    0    0    0    0    0    0    0    0    0    0    0    0    0    0
##   CLAS CSCI ENCW DANC EAST ECON EDUC ECEG ENGR ENGL ENST ENFS FOUN FREN
## 1    0   no    0    2    0    0    0    0    0    0    0    0    0    0
## 2    0  yes    0    0    0    1    0    0    0    0    0    0    0    0
## 3    0   no    0    0    0    2    0    0    0    0    0    0    0    0
## 4    0   no    0    0    0    0    0    2    1    1    0    0    0    0
## 5    0   no    0    0    0    1    0    0    0    0    0    0    0    0
## 6    0   no    0    0    0    0    0    0    0    1    0    0    0    0
##   GEOG GEOL GRMN GLBM GREK HEBR HIST HUMN IREL ITAL JAPN LATN LAMS LING
## 1    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 2    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 3    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 4    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 5    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 6    0    0    0    0    0    0    0    0    0    0    0    0    0    0
##   ENLS MGMT MSUS MIDE MATH MECH MILS MUSC NEUR PHIL PHYS POLS PSYC RELI
## 1    0    4    0    0    0    0    0    0    0    0    0    0    0    0
## 2    0    0    0    0    1    0    0    0    0    0    0    0    0    0
## 3    0    0    0    0    0    0    0    0    0    0    0    0    0    0
## 4    0    0    0    0    1    0    0    0    0    0    0    0    0    0
## 5    0    1    0    0    1    0    0    0    0    0    0    1    0    0
## 6    0    0    0    0    0    0    0    0    0    0    1    0    0    0
##   RESC RUSS SIGN SOCI SPAN THEA UNIV WMST ELEC
## 1    0    0    0    1    0    0    0    0    0
## 2    0    0    0    0    1    0    0    0    0
## 3    0    0    0    0    2    0    0    1    0
## 4    0    0    0    0    0    0    0    0    0
## 5    0    0    0    0    0    0    0    0    0
## 6    0    0    0    0    0    0    0    0    0
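For reference, the nested loop that fills scheduleByDepartment can also be written one department at a time with rowSums() (a vectorized sketch on toy data, not the authors' code):

```r
# toy stand-in for binaryCourseDataMains plus the colDepts vector
toyMains <- data.frame(
  `CSCI 203.main` = c(TRUE, FALSE),
  `CSCI 204.main` = c(TRUE, TRUE),
  `MATH 201.main` = c(FALSE, TRUE),
  check.names = FALSE
)
toyDepts <- c("CSCI", "CSCI", "MATH")

# one rowSums() per department instead of a double loop over rows and departments
toyCounts <- sapply(unique(toyDepts), function(d) {
  rowSums(toyMains[, toyDepts == d, drop = FALSE])
})

toyCounts  # rows are schedules, columns are departments
```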

Creating Datasets to be Plotted

We made various datasets for creating plots later in the document. These will help us better understand our data.

# PLOT: popularity of courses within a department
# get frequency of courses within various departments
tmp <- sapply(binaryData[3:ncol(binaryData)],sum)
ENGL.frequencies <- tmp[coursesByDepartment$ENGL]  # English department course popularity
CSCI.frequencies <- tmp[coursesByDepartment$CSCI]  # Computer Science department course popularity
ECON.frequencies <- tmp[coursesByDepartment$ECON]  # Economics department course popularity
MGMT.frequencies <- tmp[coursesByDepartment$MGMT]  # Management department course popularity
MATH.frequencies <- tmp[coursesByDepartment$MATH]  # Mathematics department course popularity

# PLOT: popularity of departments
# get frequency of each department
DEPTs.frequencies <- sapply(scheduleByDepartment, sum)

# PLOT: number of sections people have
# get number of sections in each schedule
sectionCounts <- apply(binaryData[3:ncol(binaryData)], 1, sum)

# PLOT: number of courses people take
# get number of courses in each schedule
mainData <- binaryData[unlist(mains)]
courseCounts <- apply(mainData, 1, sum)

# PLOT: number of schedules created over time
# get number of schedules by day
# get all schedule creation dates
Date <- as.Date(data$reftime)
dates <- data.frame(Date)

Create Datasets for Association Rules

We make two datasets for creating association rules with the Apriori algorithm; in each, a row represents one schedule's set of items for the algorithm.

The first dataset (transactionListMain) has each row contain the names of the courses in that schedule; if a course is not in the schedule, the corresponding cell contains an empty string.

The second dataset (transactionListDepts) is the same as the first except by department instead of by course.

# FUNCTION: replace TRUE values in a dataframe with the name of the course (column name)
# and all other values with the empty string
replaceWithColumnName <- function(df) {
  for (colNumber in 1:ncol(df)) {
    for (row in 1:length(df[,1])) {
      colName = colnames(df)[colNumber]
      if (df[row, colNumber] == T) {      # if the value is TRUE, set the cell to its column name
        df[row, colNumber] = colName
      }
    }
  }
  df[df == "FALSE"] <- ""                 # sets False values to empty string
  return(df)
}

# FUNCTION: takes a set of data and generates transactions from it by first writing the data
# to a file and then reading it in transaction form
createTransactions <- function(transactionList) {
  # create a list of lists where each nested list contains only the courses in a row
  transactions = apply(transactionList, 1,
      function(row) strsplit(paste(row[nzchar(row)], collapse = ", "), ',') # creates list of all courses in row
  )

  # creates a string that can store the transaction data as a CSV
  transactionString <- ""                                                   # string to store final data string
  for (row in 1:length(transactions)) {
    vRow = unlist(transactions[row])
    tranStr <- ""                                                           # string to create each row
    for (col in 1:length(vRow)) {
        tranStr <- paste(tranStr,vRow[col],",")                             # append commas between courses in a row
    }
    transactionString <- paste(transactionString,sub(",$",'',tranStr),"\n") # removes last comma, adds newline, appends to final string
  }
  transactionString <- sub("\n$",'',transactionString)                      # removes last newline

  # write the string containing the data to a CSV file
  write(transactionString, file = "my_basket")
  # read transaction data from the CSV file
  trans = read.transactions("my_basket", format = "basket", sep=",")
  return(trans)
}

# replace values in data to their course name
rules <- replaceWithColumnName(binaryData[3:ncol(binaryData)])

# get only the main courses from the rules
transactionListMain <- rules[unlist(mains)]

# look at subsection of dataset
head(transactionListMain[, 1:10]) 
##   DANC 340.main MGMT 102.main MGMT 201.main SOCI 100.main DANC 275.main
## 1 DANC 340.main MGMT 102.main MGMT 201.main SOCI 100.main DANC 275.main
## 2
## 3
## 4
## 6
## 7
##   MGMT 101.main MGMT 200.main CSCI 204.main ECON 103.main MATH 201.main
## 1 MGMT 101.main MGMT 200.main
## 2                             CSCI 204.main ECON 103.main MATH 201.main
## 3
## 4
## 6 MGMT 101.main                             ECON 103.main
## 7
# Create empty dataframe with department names for columns
transactionListDepts <- data.frame(matrix(ncol = length(bucknellDepartments), nrow = nrow(binaryData)))
colnames(transactionListDepts) <- bucknellDepartments

# fill transactionListDepts with whether each department appears in a schedule
for (row in 1:nrow(transactionListDepts)) {
  for (dept in bucknellDepartments) {
    # select all columns(courses) in a department and check if there are one or more
    transactionListDepts[row, dept] = sum( as.numeric(binaryCourseDataMains[row, colDepts == dept])) > 0
  }
}

# replace values in data to their department name
transactionListDepts <- replaceWithColumnName(transactionListDepts)

# look at subsection of dataset
head(transactionListDepts[,1:20])
##   ACFM AFST ANBE ANTH ARBC ARTH ARST ASTR BIOL BMEG CHEG CHEM CHIN CEEG
## 1
## 2
## 3
## 4
## 5
## 6
##   CLAS CSCI ENCW DANC EAST ECON
## 1                DANC
## 2      CSCI                ECON
## 3                          ECON
## 4
## 5                          ECON
## 6
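With the two transaction lists in place, here is a minimal sketch of how such transactions feed the Apriori algorithm (assuming the arules package; the support and confidence thresholds are illustrative placeholders, not the values used in our later analysis):

```r
library(arules)

# toy transactions: each element is one schedule's set of departments
toySchedules <- list(c("CSCI", "MATH"), c("CSCI", "MATH", "PHYS"), c("ECON"))
toyTrans <- as(toySchedules, "transactions")

# mine association rules; thresholds here are placeholders
toyRules <- apriori(toyTrans, parameter = list(support = 0.5, confidence = 0.8))
inspect(toyRules)  # e.g. {CSCI} => {MATH}
```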

Create a Dataset for Correlation Analysis

We made lists of STEM and non-STEM departments for analyzing the correlation between the number of times students scheduled a department and the number of courses a department has.

# Create list of departments in STEM
Departments.STEM <- c("ASTR", "BIOL", "BMEG", "CHEG", "CHEM", "CEEG", "CSCI", "ECEG", "ENGR", "ENST", "GEOL", "MATH", "MECH", "NEUR", "PHYS", "ELEC", "PSYC", "ECON", "MGMT")

# Create list of departments not in STEM
Departments.nonSTEM <- bucknellDepartments[ !(bucknellDepartments %in% Departments.STEM)]

# Get the number of times a department was placed in a schedule
deptStudentEnrolled <- sapply(scheduleByDepartment, sum)
deptStudentEnrolled.STEM <- sapply(scheduleByDepartment[Departments.STEM], sum)
deptStudentEnrolled.nonSTEM <- sapply(scheduleByDepartment[Departments.nonSTEM], sum)

# Get the number of courses in a department
coursesByDeptCorrelation <- unlist(lapply(coursesByDepartment[names(deptStudentEnrolled)], length))
coursesByDeptCorrelation.STEM <- unlist(lapply(coursesByDepartment[Departments.STEM], length))
coursesByDeptCorrelation.nonSTEM <- unlist(lapply(coursesByDepartment[Departments.nonSTEM], length))
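These vectors line up by department name, so the correlation itself reduces to a single cor() call (a sketch on toy numbers; cor() defaults to Pearson):

```r
# toy stand-in: schedules-per-department vs. courses-per-department
enrolled <- c(ACFM = 120, CSCI = 300, MATH = 280, ARTH = 60)
nCourses <- c(ACFM = 17,  CSCI = 25,  MATH = 22,  ARTH = 10)

cor(enrolled, nCourses)  # Pearson correlation between the two counts
```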

Next, we create dataframes for plotting the correlation between the frequency of scheduled course levels and the time during the registration period.

# Declare times defining registration period
registrationEnd <- as.POSIXct("2016-11-11")
registrationStart <- as.POSIXct("2016-10-15")

# Index binaryData with dates
binaryDataReg <- binaryData[binaryData$reftime > registrationStart & binaryData$reftime < registrationEnd,]
binaryDataReg <- binaryDataReg[, colnames(binaryDataReg) %in% c(unlist(mains),"reftime")]

# convert schedule creation time to days until the end of the registration period
binaryDataReg$reftime <- floor(registrationEnd - binaryDataReg$reftime)
colnames(binaryDataReg) <- c("daysUntil", colnames(binaryDataReg)[2:ncol(binaryDataReg)])

# Group departments by course level
numLevels <- 4
coursesByLevel.names <- as.character(1:numLevels)
coursesByLevel <- as.list(rep(NA, length(coursesByLevel.names)))
names(coursesByLevel) <- coursesByLevel.names

for (i in 1:numLevels) {
  mask <- unlist(lapply(strsplit(colnames(binaryDataReg),' '),function(X){substr(X[2],1,1) == toString(i)}))
  coursesByLevel[[i]] <- colnames(binaryDataReg)[mask]
}

# Dataframe to store time by level - holds count
daysUntil <- 26
levelByTime.colnames <- c("daysUntil", "level", "department", "count")
levelByTime <- data.frame(matrix(ncol=length(levelByTime.colnames),nrow=(length(bucknellDepartments)*numLevels*daysUntil)))
colnames(levelByTime) <- levelByTime.colnames

indexCounter <- 1

# Populate dataframe with values
# loop through depts
for (deptName in bucknellDepartments) {
  # loop through levels
  for (level in 1:numLevels) {
    # get all courses in a level
    currentCoursesLevel <- unlist(coursesByLevel[toString(level)])

    # get only courses for a single dept in current level
    isCurrentDept <- substr(currentCoursesLevel, 1, 4) == deptName
    currentCoursesLevelDept <- currentCoursesLevel[isCurrentDept]

    # break that down by days until end of registration period
    for (dayUntil in 1:daysUntil) {
      if (length(currentCoursesLevelDept) > 1) {
        # get all schedules with current day until
        allDayUntil <- binaryDataReg[binaryDataReg$daysUntil == dayUntil, ]

        # index those by a single dept in current level and take the sum
        allDayUntilByLevelDept <- allDayUntil[, colnames(allDayUntil) %in% currentCoursesLevelDept]

        sumAllDayUntilByLevelDept <- sum(sapply(allDayUntilByLevelDept, sum))

        # store values in df
        currentRow <- levelByTime[indexCounter,]
        currentRow$daysUntil <- dayUntil
        currentRow$level <- level
        currentRow$department <- deptName
        currentRow$count <- sumAllDayUntilByLevelDept

        levelByTime[indexCounter,] <- currentRow
      }

      indexCounter <- indexCounter + 1
    }
  }
}

# remove NA values
levelByTimeClean <- na.omit(levelByTime)
levelByTimeClean <- levelByTimeClean[levelByTimeClean$count != 0, ]

levelByTimeClean$level <- levelByTimeClean$level*100  # convert level index to course level (1 -> 100-level, etc.)

# Interesting Statistics about when students create schedules during registration
summary(levelByTimeClean$daysUntil)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    7.00   13.00   12.21   17.00   26.00
# print head of the data
head(levelByTimeClean)
##    daysUntil level department count
## 28         2   200       ACFM     1
## 30         4   200       ACFM     1
## 35         9   200       ACFM     1
## 36        10   200       ACFM     2
## 39        13   200       ACFM     1
## 40        14   200       ACFM     2

Create Dataset for Clustering

We created a dataset for clustering the number of schedules created on the same day. To do this, we used the K-Means algorithm, which partitions data into k groups, where each group contains the points closest to the group's centroid. When we call the K-Means function, we specify a k-value of 4 and an nstart of 20 (the k-value is the number of clusters to make; nstart is the number of random starting cluster-sets to try, from which the best is kept). We used a k-value of 4 because we knew there were 4 semesters in the time period covered.

# create dataframe with dates and counts of schedules on that date
dateCounts <- data.frame(table(dates$Date))

# update column names
colnames(dateCounts) <- c("Date", "count")

# convert dates to be continuous numeric values (Unix timestamps)
dateCounts$Date <- as.numeric(as.POSIXct(dateCounts$Date))

# run K-means on data
dateCluster <- kmeans(dateCounts, centers = 4, nstart = 20)

# convert the numeric cluster assignments to factors
dateCluster$cluster <- as.factor(dateCluster$cluster)
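
As a minimal sanity check (on synthetic, hypothetical timestamps rather than our real data), K-Means with nstart = 20 reliably recovers four well-separated bursts of schedule-creation activity:

```r
# synthetic "days" drawn around four well-separated registration periods
set.seed(1)
fakeDays <- c(rnorm(25, mean = 0,   sd = 2),
              rnorm(25, mean = 180, sd = 2),
              rnorm(25, mean = 360, sd = 2),
              rnorm(25, mean = 540, sd = 2))

demoCluster <- kmeans(data.frame(day = fakeDays), centers = 4, nstart = 20)

# each burst of 25 points ends up in its own cluster
table(demoCluster$cluster)
```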

Exploring the Data

Data Statistics

# Number of Schedules Saved
nrow(data)
## [1] 2710
# Number of Total Sections
ncol(data[3:ncol(data)])
## [1] 1203
# Number of Main Courses
length(mains)
## [1] 1024
# Number of Departments
length(bucknellDepartments)
## [1] 65
# Time Range of Data Collection
max(data$reftime) - min(data$reftime)
## Time difference of 687.4085 days
# stats for number of times a department was placed in a schedule
summary(deptStudentEnrolled)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##     2.0    40.0   110.0   199.5   260.0  1315.0
# stats for number of courses in a department
summary(coursesByDeptCorrelation)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    6.00   15.00   15.75   23.00   42.00

Plots

See data above

Popularity of Courses in Departments

# Visualize the number of English courses enrolled in
qplot(names(ENGL.frequencies), ENGL.frequencies, geom="blank") +
    geom_segment(aes(xend=names(ENGL.frequencies)), size = 3,yend=0) +
    expand_limits(y=0) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
    ggtitle("Popularity of different English courses")

# Visualize the number of Computer Science courses enrolled in
qplot(names(CSCI.frequencies), CSCI.frequencies, geom="blank") +
    geom_segment(aes(xend=names(CSCI.frequencies)), size = 3,yend=0) +
    expand_limits(y=0) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
    ggtitle("Popularity of different Computer Science courses")

# Visualize the number of Economics courses enrolled in
qplot(names(ECON.frequencies), ECON.frequencies, geom="blank") +
    geom_segment(aes(xend=names(ECON.frequencies)), size = 3,yend=0) +
    expand_limits(y=0) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
    ggtitle("Popularity of different Economics courses")

# Visualize the number of Management courses enrolled in
qplot(names(MGMT.frequencies), MGMT.frequencies, geom="blank") +
    geom_segment(aes(xend=names(MGMT.frequencies)), size = 3,yend=0) +
    expand_limits(y=0) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
    ggtitle("Popularity of different Management courses")

# Visualize the number of Mathematics courses enrolled in
qplot(names(MATH.frequencies), MATH.frequencies, geom="blank") +
    geom_segment(aes(xend=names(MATH.frequencies)), size = 3,yend=0) +
    expand_limits(y=0) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
    ggtitle("Popularity of different Math courses")

# Visualize the number of Departments enrolled in
qplot(names(DEPTs.frequencies), DEPTs.frequencies, geom="blank") +
    geom_segment(aes(xend=names(DEPTs.frequencies)), size = 3,yend=0) +
    expand_limits(y=0) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
    ggtitle("Frequency of departments")

These plots give us insight into how many people enroll in different courses or departments.

From the first four plots of course popularity within departments, we see that the introductory sections of courses are very popular, whereas the plot for MATH has greater popularity in the 200-level courses.

The department frequency plot also generates interesting insights, with the prominent one being the significantly greater popularity of the MATH department relative to the other departments.

Course and Section Counts

# Visualize number of courses people take
qplot(courseCounts, binwidth=1) + ggtitle("Number of courses in a schedule")

# Visualize number of sections people have
qplot(sectionCounts, binwidth=1) + ggtitle("Number of sections in a schedule\n (includes labs and recitations)")

This first plot matches our expectations that the majority of students take 4 courses. The adjacent values, 3 and 5, can be explained by students planning on overloading or underloading. All other values can most likely be explained by students that saved incomplete schedules or saved additional courses as backups during registration period.

The second plot, showing the number of sections in a schedule, has greater variability than the first. This is due to the variability in the number of sections a course has; some of the heavier schedules contain several labs, for instance.

Schedule Creation Frequency Over Time

ggplot(dates, aes(x=Date)) + geom_histogram(binwidth=30, colour="white") +
       scale_x_date(labels = date_format("%Y-%b"),
                    breaks = seq(min(dates$Date)-5, max(dates$Date)+5, 30),
                    limits = c(min(dates$Date), max(dates$Date))) +
       ylab("Frequency") + xlab("Year and Month") +
       ggtitle("Schedule Creation Frequency") +
       theme_bw() + theme(axis.text.x = element_text(angle = 90, hjust = 1))

This plot shows the usage of No8am: how many schedules were created in each month over the past 2 years. The spikes (over 200 schedules) correspond to registration periods; over the past 2 years there would have been 4 registration periods, and there are 4 spikes.

Supervised Machine Learning Models

In this section, we predicted whether a student will take a course in the Computer Science department based on the other departments in their schedule. We did this by training several supervised machine learning models on data labeled with the target class being predicted.

Generate Models

Four Algorithms for Generating Models:
* Neural Nets: This model is based on the human brain and nervous system, interconnecting multiple neural units to generate predictions.
* Naive Bayes: This is a simple model based on Bayes’ Theorem with an assumption of independence among predictors.
* Decision Trees: This model generates a tree-like data structure to determine which attributes of data to split on to best classify the data.
* Random Forests: This model generates multiple decision trees instead of just one to avoid overfitting a model to its training data.

We use 75% of the data for training the models, and evaluate them using the remaining 25%. We train the models using 10-fold cross-validation.

See data above

# split data into training and test data
trainIndex <- createDataPartition(departmentCount$CSCI, p = 0.75, list = F)
trainData <- departmentCount[trainIndex, ]
testData <- departmentCount[-trainIndex, ]

# create train control using 10-fold cross-validation
train_control <- trainControl( method = "cv", number=10, savePredictions =T,
                               summaryFunction = twoClassSummary, classProbs = T)

# generate and store the different models using the training dataset
annModel <- train(CSCI ~ ., data = trainData, trControl = train_control,
                  method = "nnet", metric = "ROC", maxit = 1000)

treeModel <- train(CSCI ~ ., data = trainData, trControl=train_control,
                   method = "rpart", metric = "ROC")

nbayesModel <- train(CSCI ~ ., data = trainData, trControl = train_control,
                     method = "nb", metric = "ROC")

rfModel <- train(CSCI ~ ., data = trainData, trControl = train_control,
                 method = "rf", metric = "ROC")

Evaluate and Compare Models

Now that the models have been created, we can generate metrics for the success of each model using the remaining 25% of the data (testData) we allocated for this task.

The metrics we will be focusing on to quantify success include:

  • Receiver Operating Characteristic (ROC): a curve that illustrates the performance of a binary classifier. The closer the curve bows toward the top-left corner (and the larger the area under it), the better.
  • Specificity: a value representing the model's ability to correctly identify schedules that do contain a CSCI course (the "yes" class)
  • Sensitivity: a value representing the model's ability to correctly identify schedules that do not contain a CSCI course (the "no" class; caret treats the first factor level, "no", as the positive class, which is why these labels are flipped relative to the usual convention)

https://en.wikipedia.org/wiki/Receiver_operating_characteristic
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
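
To make the class convention concrete, here is a toy computation (on made-up labels, not our data) of the two rates as caret reports them, with "no" as the first factor level:

```r
# hypothetical actual and predicted labels; "no" = schedule has no CSCI course
actual    <- factor(c("no", "no", "no", "yes", "yes", "no"), levels = c("no", "yes"))
predicted <- factor(c("no", "no", "yes", "yes", "no", "no"), levels = c("no", "yes"))

# sensitivity: fraction of "no" schedules predicted as "no" (first level = positive class)
sens <- sum(predicted == "no" & actual == "no") / sum(actual == "no")    # 0.75
# specificity: fraction of "yes" schedules predicted as "yes"
spec <- sum(predicted == "yes" & actual == "yes") / sum(actual == "yes") # 0.5
```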

# FUNCTION: predict on model using test data, generate ROC curve, and plot it
generateModelResults <- function(model, testData, predictColumn) {
  predData <- predict(model, testData, type="prob")
  modelROC <- roc(testData[, predictColumn], predData$yes)
  plot(modelROC)
}

# generate model results using test data by predicting on the CSCI department
generateModelResults(annModel, testData, "CSCI")

##
## Call:
## roc.default(response = testData[, predictColumn], predictor = predData$yes)
##
## Data: predData$yes in 528 controls (testData[, predictColumn] no) < 149 cases (testData[, predictColumn] yes).
## Area under the curve: 0.9288
generateModelResults(treeModel, testData, "CSCI")

##
## Call:
## roc.default(response = testData[, predictColumn], predictor = predData$yes)
##
## Data: predData$yes in 528 controls (testData[, predictColumn] no) < 149 cases (testData[, predictColumn] yes).
## Area under the curve: 0.7015
generateModelResults(nbayesModel, testData, "CSCI")

##
## Call:
## roc.default(response = testData[, predictColumn], predictor = predData$yes)
##
## Data: predData$yes in 528 controls (testData[, predictColumn] no) < 149 cases (testData[, predictColumn] yes).
## Area under the curve: 0.875
generateModelResults(rfModel, testData, "CSCI")

##
## Call:
## roc.default(response = testData[, predictColumn], predictor = predData$yes)
##
## Data: predData$yes in 528 controls (testData[, predictColumn] no) < 149 cases (testData[, predictColumn] yes).
## Area under the curve: 0.9196
# resample the data
results <- resamples(list(
  ANN = annModel,
  DT = treeModel,
  NB = nbayesModel,
  RF = rfModel
))

# print and plot the resampled data
summary(results)
##
## Call:
## summary.resamples(object = results)
##
## Models: ANN, DT, NB, RF
## Number of resamples: 10
##
## ROC
##       Min. 1st Qu. Median   Mean 3rd Qu.   Max. NA's
## ANN 0.8938  0.9181 0.9298 0.9260  0.9349 0.9570    0
## DT  0.5561  0.5687 0.6679 0.6379  0.6903 0.7138    0
## NB  0.5000  0.5000 0.5000 0.5000  0.5000 0.5000    0
## RF  0.8815  0.9231 0.9308 0.9274  0.9404 0.9525    0
##
## Sens
##       Min. 1st Qu. Median   Mean 3rd Qu.   Max. NA's
## ANN 0.8987  0.9261 0.9434 0.9382  0.9559 0.9623    0
## DT  0.9560  0.9763 0.9905 0.9842  0.9937 1.0000    0
## NB  1.0000  1.0000 1.0000 1.0000  1.0000 1.0000    0
## RF  0.8805  0.8884 0.9243 0.9156  0.9400 0.9430    0
##
## Spec
##       Min. 1st Qu. Median   Mean 3rd Qu.   Max. NA's
## ANN 0.6667  0.7111 0.7500 0.7429  0.7910 0.8000    0
## DT  0.1136  0.1611 0.2444 0.2393  0.2955 0.3778    0
## NB  0.0000  0.0000 0.0000 0.0000  0.0000 0.0000    0
## RF  0.7333  0.7740 0.7778 0.7898  0.8182 0.8667    0
bwplot(results)

The Random Forest and Neural Net clearly perform best: their ROC, specificity, and sensitivity values are closest to 1. The Random Forest appears slightly less consistent, as seen in its slightly larger IQR in all three metrics, but the two models perform so similarly in these plots that it is hard to say which is better. The Neural Net does, however, take significantly longer to train.

The Decision Tree and Naive Bayes models have the worst performances. Both have the lowest ROC values, and the Decision Tree shows a lot of variability. Interestingly, the Naive Bayes model appears to always predict that a student will not take a CSCI course, regardless of the courses they have selected: its sensitivity is pinned at 1 while its specificity is 0.
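
A quick way to see this: a degenerate classifier that always predicts "no" on a test set shaped like ours (528 "no" schedules, 149 "yes") produces exactly this sensitivity/specificity pattern under the class convention used here:

```r
# 528 schedules without a CSCI course, 149 with one
actual    <- factor(c(rep("no", 528), rep("yes", 149)), levels = c("no", "yes"))
# a classifier that always answers "no"
predicted <- factor(rep("no", 677), levels = c("no", "yes"))

sum(predicted == "no" & actual == "no") / sum(actual == "no")     # sensitivity: 1
sum(predicted == "yes" & actual == "yes") / sum(actual == "yes")  # specificity: 0
```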

Association Rules

Using the Apriori algorithm on transaction-based data, we can quickly find the most frequent itemsets. These will represent the most commonly taken groups of courses and departments at Bucknell.

The concepts we use in this section include:

  • Association rules: A rule-based method that finds relations between variables. It is often used in market basket analysis to find which of the items available for sale in a store are purchased together. In our case, we use it to see which courses and departments are taken together.
  • Apriori: An association rule algorithm that decreases the time taken to generate strong rules by only using the frequent itemsets, determined by a minimum support value, to generate larger itemsets.
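
As a minimal sketch of the support measure Apriori is built on (using hypothetical toy schedules, not our dataset):

```r
# each transaction is the set of departments in one hypothetical schedule
toySchedules <- list(c("MATH", "CSCI", "PHYS"),
                     c("MATH", "CSCI"),
                     c("ECON", "MGMT"),
                     c("MATH", "PHYS"))

# support of an itemset: fraction of transactions containing every item in it
support <- function(itemset, trans) {
  mean(vapply(trans, function(t) all(itemset %in% t), logical(1)))
}

support(c("MATH", "CSCI"), toySchedules)  # 0.5 (2 of 4 schedules)
```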

See data above

Association Rules By Course

We will begin by analyzing association rules between different courses. To do this, we create a transaction object from the main sections of each course, where each transaction is the list of courses selected in one schedule.

# FUNCTION: Run apriori on the transaction data and output results
enhancedInspect <- function(trans, suppFreq, suppRules) {
  print(summary(trans))
  freqItemsets <- apriori(trans, parameter=list(support=suppFreq, target="frequent"))
  inspect(freqItemsets)
  trans_rules <- apriori(trans,parameter=list(supp=suppRules,target="rules"))
  inspect(sort(trans_rules, by="lift"))
}

# create transactions using all transaction list containing courses
transMain <- createTransactions(transactionListMain)

# generate frequent itemsets, run apriori on the data, and print the results
enhancedInspect(transMain, 0.02, 0.015)
## transactions as itemMatrix in sparse format with
##  2710 rows (elements/itemsets/transactions) and
##  1024 columns (items) and a density of 0.004672365
##
## most frequent items:
## PHYS 212.main ECON 103.main UNIV 200.main MATH 216.main MATH 211.main
##           312           295           254           217           185
##       (Other)
##         11703
##
## element (itemset/transaction) length distribution:
## sizes
##    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
##   69   27  114 1220  804  236   87   52   21   34    7   11    8    4    5
##   16   17   18   20
##    4    5    1    1
##
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   4.000   4.000   4.785   5.000  20.000
##
## includes extended item information - examples:
##          labels
## 1 ACFM 220.main
## 2 ACFM 261.main
## 3 ACFM 340.main
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport support minlen maxlen
##          NA    0.1    1 none FALSE            TRUE    0.02      1     10
##             target   ext
##  frequent itemsets FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 54
##
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[1024 item(s), 2710 transaction(s)] done [0.00s].
## sorting and recoding items ... [46 item(s)] done [0.00s].
## creating transaction tree ... done [0.00s].
## checking subsets of size 1 2 3 done [0.00s].
## writing ... [56 set(s)] done [0.00s].
## creating S4 object  ... done [0.00s].
##    items                                       support
## 1  {CHEM 202.main}                             0.02140221
## 2  {MATH 201.main}                             0.02361624
## 3  {ENGR 240.main}                             0.02029520
## 4  {ANTH 109.main}                             0.02029520
## 5  {MATH 241.main}                             0.02398524
## 6  {CSCI 205.main}                             0.02435424
## 7  {POLS 140.main}                             0.02398524
## 8  {GEOL 203.main}                             0.02361624
## 9  {ECON 257.main}                             0.02140221
## 10 {ENGL 101.main}                             0.02361624
## 11 {CHEM 231.main}                             0.02472325
## 12 {GEOG 101.main}                             0.02583026
## 13 {CHEM 211.main}                             0.02767528
## 14 {BIOL 122.main}                             0.02952030
## 15 {MGMT 201.main}                             0.02656827
## 16 {PHIL 103.main}                             0.02361624
## 17 {CSCI 204.main}                             0.03284133
## 18 {CSCI 311.main}                             0.02066421
## 19 {ENST 100.main}                             0.03136531
## 20 {MATH 245.main}                             0.02693727
## 21 {MATH 226.main}                             0.02878229
## 22 {WMST 150.main}                             0.03505535
## 23 {MGMT 102.main}                             0.03653137
## 24 {BIOL 207.main}                             0.03579336
## 25 {POLS 170.main}                             0.03874539
## 26 {PSYC 100.main}                             0.04132841
## 27 {EDUC 101.main}                             0.04280443
## 28 {CSCI 206.main}                             0.04649446
## 29 {PHYS 211.main}                             0.04464945
## 30 {MATH 212.main}                             0.04428044
## 31 {ENGR 229.main}                             0.04206642
## 32 {MATH 222.main}                             0.04391144
## 33 {BIOL 206.main}                             0.05387454
## 34 {SOCI 100.main}                             0.05055351
## 35 {MGMT 101.main}                             0.05645756
## 36 {MATH 202.main}                             0.05571956
## 37 {MGMT 200.main}                             0.05535055
## 38 {CHEM 212.main}                             0.06678967
## 39 {CSCI 203.main}                             0.06568266
## 40 {CHEM 201.main}                             0.05904059
## 41 {MATH 211.main}                             0.06826568
## 42 {PHIL 100.main}                             0.06752768
## 43 {MATH 216.main}                             0.08007380
## 44 {UNIV 200.main}                             0.09372694
## 45 {PHYS 212.main}                             0.11512915
## 46 {ECON 103.main}                             0.10885609
## 47 {CHEM 201.main,MATH 226.main}               0.02140221
## 48 {CSCI 206.main,ENGR 229.main}               0.02287823
## 49 {CSCI 206.main,MATH 222.main}               0.02435424
## 50 {CSCI 206.main,PHYS 212.main}               0.02435424
## 51 {ENGR 229.main,MATH 222.main}               0.02693727
## 52 {MATH 222.main,PHYS 212.main}               0.02287823
## 53 {BIOL 206.main,CHEM 212.main}               0.03800738
## 54 {CSCI 203.main,PHYS 212.main}               0.03357934
## 55 {CHEM 201.main,ECON 103.main}               0.02140221
## 56 {CSCI 206.main,ENGR 229.main,MATH 222.main} 0.02214022
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport support minlen maxlen
##         0.8    0.1    1 none FALSE            TRUE   0.015      1     10
##  target   ext
##   rules FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 40
##
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[1024 item(s), 2710 transaction(s)] done [0.00s].
## sorting and recoding items ... [61 item(s)] done [0.00s].
## creating transaction tree ... done [0.00s].
## checking subsets of size 1 2 3 4 5 done [0.00s].
## writing ... [52 rule(s)] done [0.00s].
## creating S4 object  ... done [0.00s].
##    lhs                rhs                support confidence      lift
## 1  {GEOL 250.main} => {ENGR 101.main} 0.01697417  1.0000000 57.659574
## 2  {ENGR 101.main} => {GEOL 250.main} 0.01697417  0.9787234 57.659574
## 3  {ENGR 229.main,
##     GEOL 250.main} => {ENGR 101.main} 0.01697417  1.0000000 57.659574
## 4  {ENGR 101.main,
##     ENGR 229.main} => {GEOL 250.main} 0.01697417  0.9787234 57.659574
## 5  {CSCI 206.main,
##     ENGR 229.main,
##     PHYS 212.main} => {ENGL 101.main} 0.01808118  1.0000000 42.343750
## 6  {CSCI 206.main,
##     MATH 222.main,
##     PHYS 212.main} => {ENGL 101.main} 0.01808118  1.0000000 42.343750
## 7  {ENGR 229.main,
##     MATH 222.main,
##     PHYS 212.main} => {ENGL 101.main} 0.01808118  1.0000000 42.343750
## 8  {CSCI 206.main,
##     ENGR 229.main,
##     MATH 222.main,
##     PHYS 212.main} => {ENGL 101.main} 0.01808118  1.0000000 42.343750
## 9  {ENGR 229.main,
##     PHYS 212.main} => {ENGL 101.main} 0.01808118  0.9800000 41.496875
## 10 {CSCI 206.main,
##     ENGR 229.main,
##     MATH 222.main} => {ENGL 101.main} 0.01845018  0.8333333 35.286458
## 11 {CSCI 206.main,
##     ENGR 229.main} => {ENGL 101.main} 0.01881919  0.8225806 34.831149
## 12 {GEOL 250.main} => {ENGR 229.main} 0.01697417  1.0000000 23.771930
## 13 {ENGR 101.main} => {ENGR 229.main} 0.01734317  1.0000000 23.771930
## 14 {ENGR 101.main,
##     GEOL 250.main} => {ENGR 229.main} 0.01697417  1.0000000 23.771930
## 15 {CSCI 206.main,
##     ENGL 101.main} => {ENGR 229.main} 0.01881919  1.0000000 23.771930
## 16 {ENGL 101.main,
##     MATH 222.main} => {ENGR 229.main} 0.01845018  1.0000000 23.771930
## 17 {CSCI 206.main,
##     ENGL 101.main,
##     MATH 222.main} => {ENGR 229.main} 0.01845018  1.0000000 23.771930
## 18 {CSCI 206.main,
##     ENGL 101.main,
##     PHYS 212.main} => {ENGR 229.main} 0.01808118  1.0000000 23.771930
## 19 {ENGL 101.main,
##     MATH 222.main,
##     PHYS 212.main} => {ENGR 229.main} 0.01808118  1.0000000 23.771930
## 20 {CSCI 206.main,
##     MATH 222.main,
##     PHYS 212.main} => {ENGR 229.main} 0.01808118  1.0000000 23.771930
## 21 {CSCI 206.main,
##     ENGL 101.main,
##     MATH 222.main,
##     PHYS 212.main} => {ENGR 229.main} 0.01808118  1.0000000 23.771930
## 22 {ENGL 101.main,
##     PHYS 212.main} => {ENGR 229.main} 0.01808118  0.9800000 23.296491
## 23 {CSCI 206.main,
##     ENGL 101.main,
##     PHYS 212.main} => {MATH 222.main} 0.01808118  1.0000000 22.773109
## 24 {ENGL 101.main,
##     ENGR 229.main,
##     PHYS 212.main} => {MATH 222.main} 0.01808118  1.0000000 22.773109
## 25 {CSCI 206.main,
##     ENGR 229.main,
##     PHYS 212.main} => {MATH 222.main} 0.01808118  1.0000000 22.773109
## 26 {CSCI 206.main,
##     ENGL 101.main,
##     ENGR 229.main,
##     PHYS 212.main} => {MATH 222.main} 0.01808118  1.0000000 22.773109
## 27 {CSCI 206.main,
##     ENGL 101.main} => {MATH 222.main} 0.01845018  0.9803922 22.326578
## 28 {CSCI 206.main,
##     ENGL 101.main,
##     ENGR 229.main} => {MATH 222.main} 0.01845018  0.9803922 22.326578
## 29 {ENGL 101.main,
##     PHYS 212.main} => {MATH 222.main} 0.01808118  0.9800000 22.317647
## 30 {ENGR 229.main,
##     PHYS 212.main} => {MATH 222.main} 0.01808118  0.9800000 22.317647
## 31 {CSCI 206.main,
##     ENGR 229.main} => {MATH 222.main} 0.02214022  0.9677419 22.038493
## 32 {ENGL 101.main,
##     ENGR 229.main} => {MATH 222.main} 0.01845018  0.9615385 21.897220
## 33 {CSCI 206.main,
##     MATH 222.main} => {ENGR 229.main} 0.02214022  0.9090909 21.610845
## 34 {ENGL 101.main,
##     MATH 222.main} => {CSCI 206.main} 0.01845018  1.0000000 21.507937
## 35 {ENGL 101.main,
##     ENGR 229.main,
##     MATH 222.main} => {CSCI 206.main} 0.01845018  1.0000000 21.507937
## 36 {ENGL 101.main,
##     ENGR 229.main,
##     PHYS 212.main} => {CSCI 206.main} 0.01808118  1.0000000 21.507937
## 37 {ENGL 101.main,
##     MATH 222.main,
##     PHYS 212.main} => {CSCI 206.main} 0.01808118  1.0000000 21.507937
## 38 {ENGR 229.main,
##     MATH 222.main,
##     PHYS 212.main} => {CSCI 206.main} 0.01808118  1.0000000 21.507937
## 39 {ENGL 101.main,
##     ENGR 229.main,
##     MATH 222.main,
##     PHYS 212.main} => {CSCI 206.main} 0.01808118  1.0000000 21.507937
## 40 {ENGL 101.main,
##     ENGR 229.main} => {CSCI 206.main} 0.01881919  0.9807692 21.094322
## 41 {ENGL 101.main,
##     PHYS 212.main} => {CSCI 206.main} 0.01808118  0.9800000 21.077778
## 42 {ENGR 229.main,
##     PHYS 212.main} => {CSCI 206.main} 0.01808118  0.9800000 21.077778
## 43 {ENGL 101.main} => {ENGR 229.main} 0.01918819  0.8125000 19.314693
## 44 {ENGR 229.main,
##     MATH 222.main} => {CSCI 206.main} 0.02214022  0.8219178 17.677756
## 45 {ENGL 101.main,
##     MATH 222.main} => {PHYS 212.main} 0.01808118  0.9800000  8.512179
## 46 {CSCI 206.main,
##     ENGL 101.main,
##     MATH 222.main} => {PHYS 212.main} 0.01808118  0.9800000  8.512179
## 47 {ENGL 101.main,
##     ENGR 229.main,
##     MATH 222.main} => {PHYS 212.main} 0.01808118  0.9800000  8.512179
## 48 {CSCI 206.main,
##     ENGL 101.main,
##     ENGR 229.main,
##     MATH 222.main} => {PHYS 212.main} 0.01808118  0.9800000  8.512179
## 49 {CSCI 206.main,
##     ENGL 101.main} => {PHYS 212.main} 0.01808118  0.9607843  8.345274
## 50 {CSCI 206.main,
##     ENGL 101.main,
##     ENGR 229.main} => {PHYS 212.main} 0.01808118  0.9607843  8.345274
## 51 {ENGL 101.main,
##     ENGR 229.main} => {PHYS 212.main} 0.01808118  0.9423077  8.184788
## 52 {CSCI 206.main,
##     ENGR 229.main,
##     MATH 222.main} => {PHYS 212.main} 0.01808118  0.8166667  7.093483

The association rules provide insight into which courses are likely to be taken together. For example, we found the following rule in our output:

{ENGR 229.main, MATH 222.main} => {CSCI 206.main}

This matches the courses taken by sophomores in the Spring semester.
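
The quality measures in the rule output can be reproduced from the frequent-itemset supports printed above. For this rule, confidence = support(lhs and rhs together) / support(lhs), and lift = confidence / support(rhs):

```r
# support values copied from the frequent-itemset output above
suppLhs  <- 0.02693727  # {ENGR 229.main, MATH 222.main}
suppRhs  <- 0.04649446  # {CSCI 206.main}
suppRule <- 0.02214022  # {CSCI 206.main, ENGR 229.main, MATH 222.main}

confidence <- suppRule / suppLhs   # ~0.822, as reported
lift       <- confidence / suppRhs # ~17.68: far above 1, so a strong association
```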

Association Rules By Department

Although the apriori output for individual courses provides useful information, it is dense and difficult to interpret. Grouping courses by department is easier to understand, while providing similar insight.

# create transactions using all transaction list containing departments
transDepts <- createTransactions(transactionListDepts)

# generate frequent itemsets, run apriori on the data, and print the results
enhancedInspect(transDepts, 0.04, 0.007)
## transactions as itemMatrix in sparse format with
##  2710 rows (elements/itemsets/transactions) and
##  65 columns (items) and a density of 0.05959693
##
## most frequent items:
##    MATH    CHEM    BIOL    CSCI    ECON (Other)
##    1141     666     620     596     537    6938
##
## element (itemset/transaction) length distribution:
## sizes
##    1    2    3    4    5    6    7    8    9   10   11   12   13
##   97  205  619 1221  379  105   28   21   13    6    9    6    1
##
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   3.000   4.000   3.874   4.000  13.000
##
## includes extended item information - examples:
##   labels
## 1   ACFM
## 2   AFST
## 3   ANBE
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport support minlen maxlen
##          NA    0.1    1 none FALSE            TRUE    0.04      1     10
##             target   ext
##  frequent itemsets FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 108
##
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[65 item(s), 2710 transaction(s)] done [0.00s].
## sorting and recoding items ... [26 item(s)] done [0.00s].
## creating transaction tree ... done [0.00s].
## checking subsets of size 1 2 3 done [0.00s].
## writing ... [44 set(s)] done [0.00s].
## creating S4 object  ... done [0.00s].
##    items            support
## 1  {WMST}           0.04833948
## 2  {MECH}           0.06273063
## 3  {ACFM}           0.05608856
## 4  {ANTH}           0.04870849
## 5  {HIST}           0.05387454
## 6  {RELI}           0.05313653
## 7  {GEOL}           0.06420664
## 8  {GEOG}           0.05977860
## 9  {ECEG}           0.08118081
## 10 {ENGL}           0.07380074
## 11 {SPAN}           0.08265683
## 12 {EDUC}           0.08007380
## 13 {ENST}           0.08007380
## 14 {SOCI}           0.09409594
## 15 {POLS}           0.10516605
## 16 {PHIL}           0.10332103
## 17 {PSYC}           0.12029520
## 18 {UNIV}           0.12546125
## 19 {ENGR}           0.13284133
## 20 {MGMT}           0.15940959
## 21 {PHYS}           0.19114391
## 22 {ECON}           0.19815498
## 23 {CSCI}           0.21992620
## 24 {BIOL}           0.22878229
## 25 {CHEM}           0.24575646
## 26 {MATH}           0.42103321
## 27 {CSCI,ECEG}      0.05350554
## 28 {ECEG,MATH}      0.04870849
## 29 {ENGL,MATH}      0.04132841
## 30 {MATH,PHIL}      0.04059041
## 31 {BIOL,UNIV}      0.04280443
## 32 {ENGR,MATH}      0.09409594
## 33 {ECON,MGMT}      0.04206642
## 34 {MATH,MGMT}      0.04538745
## 35 {CSCI,PHYS}      0.08265683
## 36 {BIOL,PHYS}      0.06125461
## 37 {CHEM,PHYS}      0.05535055
## 38 {MATH,PHYS}      0.10073801
## 39 {ECON,MATH}      0.08708487
## 40 {CSCI,MATH}      0.15055351
## 41 {BIOL,CHEM}      0.13542435
## 42 {BIOL,MATH}      0.04981550
## 43 {CHEM,MATH}      0.09335793
## 44 {CSCI,MATH,PHYS} 0.06974170
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport support minlen maxlen
##         0.8    0.1    1 none FALSE            TRUE   0.007      1     10
##  target   ext
##   rules FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 18
##
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[65 item(s), 2710 transaction(s)] done [0.00s].
## sorting and recoding items ... [58 item(s)] done [0.00s].
## creating transaction tree ... done [0.00s].
## checking subsets of size 1 2 3 4 5 done [0.00s].
## writing ... [53 rule(s)] done [0.00s].
## creating S4 object  ... done [0.00s].
##    lhs                      rhs    support     confidence lift
## 36 {ENGR,ENST,MATH}      => {GEOL} 0.008118081 0.8800000  13.705747
## 41 {CSCI,ENGR,PHYS}      => {ENGL} 0.018450185 0.9803922  13.284314
## 53 {CSCI,ENGR,MATH,PHYS} => {ENGL} 0.018081181 0.9800000  13.279000
## 17 {CHEM,MECH}           => {ECEG} 0.008856089 0.9600000  11.825455
## 5  {CHEG,CHEM}           => {ENGR} 0.010332103 0.8484848   6.387205
## 18 {ACFM,ENGL}           => {MGMT} 0.007011070 0.8636364   5.417719
## 52 {CSCI,ENGL,ENGR,MATH} => {PHYS} 0.018081181 0.9800000   5.127027
## 14 {CEEG,CHEM}           => {ECON} 0.011070111 1.0000000   5.046555
## 34 {CEEG,CHEM,MATH}      => {ECON} 0.011070111 1.0000000   5.046555
## 40 {CSCI,ENGL,ENGR}      => {PHYS} 0.018450185 0.9615385   5.030443
## 46 {CSCI,ENGL,MATH}      => {PHYS} 0.025092251 0.8717949   4.560935
## 51 {ENGL,ENGR,MATH,PHYS} => {CSCI} 0.018081181 1.0000000   4.546980
## 39 {ENGL,ENGR,PHYS}      => {CSCI} 0.018450185 0.9803922   4.457823
## 45 {ENGL,MATH,PHYS}      => {CSCI} 0.025092251 0.9714286   4.417066
## 7  {CHEM,CHIN}           => {BIOL} 0.007011070 1.0000000   4.370968
## 25 {CSCI,ENGL}           => {PHYS} 0.025461255 0.8214286   4.297435
## 24 {ENGL,PHYS}           => {CSCI} 0.025461255 0.9324324   4.239751
## 6  {BIOL,CHIN}           => {CHEM} 0.007011070 1.0000000   4.069069
## 21 {ECEG,PHYS}           => {CSCI} 0.015129151 0.8913043   4.052743
## 38 {ECEG,MATH,PHYS}      => {CSCI} 0.014760148 0.8888889   4.041760
## 3  {CHEM,LAMS}           => {BIOL} 0.007749077 0.9130435   3.990884
## 33 {CEEG,ECON,MATH}      => {CHEM} 0.011070111 0.9677419   3.937809
## 13 {CEEG,ECON}           => {CHEM} 0.011070111 0.9375000   3.814752
## 4  {CHEG,ENGR}           => {CHEM} 0.010332103 0.9032258   3.675288
## 28 {CHEM,SPAN}           => {BIOL} 0.021033210 0.8382353   3.663899
## 9  {BIOL,BMEG}           => {CHEM} 0.008856089 0.8571429   3.487773
## 2  {BIOL,LAMS}           => {CHEM} 0.007749077 0.8400000   3.418018
## 49 {ECON,ENGR,MATH}      => {CHEM} 0.009225092 0.8064516   3.281507
## 11 {CEEG,ENST}           => {MATH} 0.007011070 1.0000000   2.375110
## 16 {CEEG,CHEM}           => {MATH} 0.011070111 1.0000000   2.375110
## 32 {CEEG,CHEM,ECON}      => {MATH} 0.011070111 1.0000000   2.375110
## 35 {ENGR,ENST,GEOL}      => {MATH} 0.008118081 1.0000000   2.375110
## 44 {CSCI,ENGL,PHYS}      => {MATH} 0.025092251 0.9855072   2.340688
## 47 {CSCI,ENGR,PHYS}      => {MATH} 0.018450185 0.9803922   2.328539
## 50 {CSCI,ENGL,ENGR,PHYS} => {MATH} 0.018081181 0.9800000   2.327607
## 22 {ECEG,PHYS}           => {MATH} 0.016605166 0.9782609   2.323477
## 37 {CSCI,ECEG,PHYS}      => {MATH} 0.014760148 0.9756098   2.317180
## 15 {CEEG,ECON}           => {MATH} 0.011439114 0.9687500   2.300887
## 20 {ENGR,GEOL}           => {MATH} 0.022140221 0.9677419   2.298493
## 43 {CSCI,ENGL,ENGR}      => {MATH} 0.018450185 0.9615385   2.283759
## 42 {ENGL,ENGR,PHYS}      => {MATH} 0.018081181 0.9607843   2.281968
## 8  {BMEG,PHYS}           => {MATH} 0.007749077 0.9545455   2.267150
## 26 {ENGL,PHYS}           => {MATH} 0.025830258 0.9459459   2.246725
## 23 {ENGL,ENGR}           => {MATH} 0.024354244 0.9428571   2.239389
## 27 {CSCI,ENGL}           => {MATH} 0.028782288 0.9285714   2.205459
## 10 {CEEG,GEOL}           => {MATH} 0.007749077 0.9130435   2.168578
## 48 {CHEM,ECON,ENGR}      => {MATH} 0.009225092 0.8928571   2.120634
## 29 {ENGR,PHYS}           => {MATH} 0.025830258 0.8750000   2.078221
## 30 {CSCI,ENGR}           => {MATH} 0.026568266 0.8674699   2.060336
## 31 {CSCI,PHYS}           => {MATH} 0.069741697 0.8437500   2.003999
## 19 {ENST,GEOL}           => {MATH} 0.012177122 0.8048780   1.911674
## 12 {CEEG,ENGR}           => {MATH} 0.013653137 0.8043478   1.910414
## 1  {RESC}                => {MATH} 0.007380074 0.8000000   1.900088

In the output, we see rules with groupings such as the following: {CHEM,MECH} => {ECEG}

This matches the standard course load for Mechanical Engineering majors in the fall of sophomore year.

We also see output such as: {ACFM,ENGL} => {MGMT}

From this, we can tell that English courses may be a popular elective for management majors.
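The lift values in the output can be sanity-checked by hand, since lift is the rule's confidence divided by the support of its right-hand side. Applying this to rule 7 above, {CHEM,CHIN} => {BIOL}, recovers the overall support of BIOL:

```r
# lift(A => B) = confidence(A => B) / support(B)
# Rule 7 above: {CHEM,CHIN} => {BIOL}, confidence = 1.0, lift = 4.370968
confidence <- 1.0
lift <- 4.370968

# Implied support of {BIOL}: the fraction of all schedules containing BIOL
supp_biol <- confidence / lift
round(supp_biol, 3)  # ~0.229, i.e. BIOL appears in roughly 23% of saved schedules
```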

Correlation Analysis

In this section, we look at correlations in different aspects of our data. A correlation is a statistical measure of the degree to which two variables fluctuate together.

We do this by developing a hypothesis about a pattern that may appear in our data, building a data frame we can plot to test that hypothesis, plotting the data, and analyzing the results.
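The coefficient reported by R's cor() in the chunks below is the Pearson correlation. As a quick illustration, it matches the textbook formula; the data here is made up purely for demonstration:

```r
# Pearson correlation by hand vs. R's cor(), on made-up data
x <- c(1, 2, 3, 4)
y <- c(2, 4, 5, 9)

# r = sum of co-deviations / product of the deviation norms
manual <- sum((x - mean(x)) * (y - mean(y))) /
  sqrt(sum((x - mean(x))^2) * sum((y - mean(y))^2))

all.equal(manual, cor(x, y))  # TRUE
```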

See data above

Correlating Number of Courses in a Department and Frequency Department was Scheduled

In this section, we look at the correlation between Department Course Count and Department Frequency.

# Generate plot
qplot(coursesByDeptCorrelation, deptStudentEnrolled) + geom_point() +
geom_smooth(method='lm') +
geom_text(aes(label=names(DEPTs.frequencies)), hjust=0, vjust=0) +
ggtitle("Department Course Count vs Department Frequency") +
xlab("Number of Courses in Departments") +
ylab("Frequency Department was Scheduled")

# Find correlation
cor(coursesByDeptCorrelation,deptStudentEnrolled)
## [1] 0.6605243
# Generate plot (STEM)
qplot(coursesByDeptCorrelation.STEM, deptStudentEnrolled.STEM) + geom_point() +
geom_smooth(method='lm') +
geom_text(aes(label=Departments.STEM), hjust=0, vjust=0) +
ggtitle("Department Course Count vs Department Frequency (STEM Departments)") +
xlab("Number of Courses in Departments") +
ylab("Frequency Department was Scheduled")

# Find correlation (STEM)
cor(coursesByDeptCorrelation.STEM,deptStudentEnrolled.STEM)
## [1] 0.5999757
# Generate plot (nonSTEM)
qplot(coursesByDeptCorrelation.nonSTEM, deptStudentEnrolled.nonSTEM) + geom_point() +
geom_smooth(method='lm') +
geom_text(aes(label=Departments.nonSTEM), hjust=0, vjust=0) +
ggtitle("Department Course Count vs Department Frequency (Non-STEM Departments)") +
xlab("Number of Courses in Departments") +
ylab("Frequency Department was Scheduled")

# Find correlation (nonSTEM)
cor(coursesByDeptCorrelation.nonSTEM,deptStudentEnrolled.nonSTEM)
## [1] 0.8184949

From these plots, we can see that STEM departments, on average, offer fewer courses but are scheduled more often; departments like CSCI, MATH, CHEM, PHYS, and ENGR sit above the regression line. This makes sense, since non-STEM courses are often smaller and more discussion-based, so non-STEM departments offer more of them. Additionally, there are fewer STEM departments than non-STEM departments, which partially explains why they appear to be more popular.

Correlating Frequency of Course Levels Scheduled and Registration Time

In this section, we wanted to look at when different levels of courses (100-level, 200-level, etc.) were scheduled over time in a single registration period. We began by plotting all of the course data for the 26 days leading up to the end of the November 2016 course registration period.

# Plot it
ggplot(levelByTimeClean, aes(x=daysUntil, y = level, color=department, size=count)) + geom_jitter() + geom_smooth(aes(group = 1), method='lm') + scale_x_reverse() +
xlab("Days Until End of Course Registration") +
ylab("Level of Course") +
ggtitle("Frequency of Course Levels Scheduled vs Registration Time")

From this plot, we see a general downward trend in the level of course being scheduled as the end of the course registration period approached. From the slope of the correlation line, we can infer that upperclassmen, who register first and take higher-level courses, are the ones planning out their schedules earliest.

Next, we looked at the data for individual departments.

# plot for one dept
levelByTimeClean.CSCI <- levelByTimeClean[levelByTimeClean$department == "CSCI",]
ggplot(levelByTimeClean.CSCI, aes(x=daysUntil, y = level, size=count)) + geom_point()+ geom_smooth(aes(group = 1), method='lm') + scale_x_reverse() +
xlab("Days Until End of Course Registration") +
ylab("Level of Course") +
ggtitle("Frequency of Course Levels Scheduled vs Registration Time (CSCI)")

# plot for one dept
levelByTimeClean.ECON <- levelByTimeClean[levelByTimeClean$department == "ECON",]
ggplot(levelByTimeClean.ECON, aes(x=daysUntil, y = level, size=count)) + geom_point()+ geom_smooth(aes(group = 1), method='lm') + scale_x_reverse() +
xlab("Days Until End of Course Registration") +
ylab("Level of Course") +
ggtitle("Frequency of Course Levels Scheduled vs Registration Time (ECON)")

These plots display a similar trend: lower-level courses are scheduled later in the course registration period.
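The downward trend in these plots can also be quantified with a weighted linear fit of course level on days until registration; the sign of the daysUntil coefficient summarizes the direction. A minimal sketch with hypothetical data (the report itself would fit levelByTimeClean):

```r
# Toy data standing in for levelByTimeClean: earlier in the period,
# higher-level courses dominate
toy <- data.frame(daysUntil = c(26, 20, 14, 8, 2),
                  level     = c(320, 300, 260, 220, 180),
                  count     = c(3, 5, 8, 10, 12))

# Weight each observation by how many schedules it represents
fit <- lm(level ~ daysUntil, data = toy, weights = count)
coef(fit)["daysUntil"]  # positive: higher-level courses are scheduled earlier
```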

Clustering

We cluster the data by schedule creation time to create a frequency plot that visualizes the different semesters in our data.

See data above

ggplot(dateCounts, aes(Date, count, color = as.character(dateCluster$cluster))) + geom_point() + ylab("Frequency") + xlab("Schedule Creation Time (Unix timestamp)")  + guides(color=guide_legend(title="Semester")) + ggtitle("Clustering by Semester")

Plot Analysis:
The plot shows how many schedules were created each day over the past two years, with colors representing different semesters. Since there were four semesters, each with its own registration period, we expect to see a spike in each semester when many schedules were created.

Clustering Algorithm:
The k-means clustering algorithm performed well: it was able to separate the four semesters based on the frequency of schedules being created.
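The clustering step itself can be sketched as follows. The timestamps here are hypothetical stand-ins for the schedule-creation times in dateCounts; the key choice is k = 4, one cluster per semester:

```r
# Four well-separated bursts of schedule-creation timestamps (Unix time),
# one burst per semester's registration period -- made-up data for illustration
set.seed(1)
timestamps <- c(rnorm(50, mean = 1.42e9, sd = 5e5),
                rnorm(50, mean = 1.44e9, sd = 5e5),
                rnorm(50, mean = 1.46e9, sd = 5e5),
                rnorm(50, mean = 1.48e9, sd = 5e5))

# k-means with k = 4; nstart re-runs the random initialization to avoid
# a poor local optimum
dateCluster <- kmeans(timestamps, centers = 4, nstart = 10)
table(dateCluster$cluster)  # cluster sizes, one cluster per semester
```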