CSI 5325: Introduction to Machine Learning, Spring 2018
Objectives
This is a course in machine learning, which is a broad, interesting, and fast-growing field. The central problem we address in this class is how to use the computer to build models that can learn, make inferences, or improve their behavior based on observations about the world. Further, we would like to use the learned models to make predictions about unknowns.
Machine learning is related to artificial intelligence, but it also draws heavily on computer science, statistics, logic, probability, information theory, geometry, linear algebra, calculus, optimization theory, and more. It would be good to brush up on these topics if they're rusty.
Practical information
Lectures are from 09:30 to 10:45 in Cashion C315 on Tuesdays and Thursdays.
My office hours are listed on my home page. I am glad to talk to students during and outside of office hours. If you can't come to my office hour, please email me to make an appointment at another time.
Assignments
- Assignment 0: mathematics refresher (this assignment will be turned in for credit, but not graded in detail)
- Assignment 1: learning, probabilities, estimation
- Assignment 2: generalization bounds, linear models
- Assignment 3: linear models, logistic regression
- Assignment 4: regularization, RBFs
- Assignment 5: k-means, neural networks
Semester project
Here are guidelines on the semester project.
Important updates (2018-04-03):
- We will meet on May 1 at our regular class meeting time for the last set of presentations.
- Each presentation should be about 10 minutes long, plus 2 minutes for transitions. Please be prepared.
- The presentation schedule is:
  - April 24: Baas, Kronser, Pennington, Jordan, Wu, Cong
  - April 26: Lavoie, Maskey, Qian, Ding, Kendzior, Rohn
  - May 1: Wang, Kuritcyn, Griffin, Rapp, Smid, Xing
- The semester project is now due at 11:59 PM on Wednesday May 2.
Schedule
Here is a schedule of the topics we will cover, which is subject to change. Chapters are from the course textbook, Learning from Data. I'll primarily be using lecture slides from Magdon-Ismail.
- Introduction (chapter 1)
  - Introduction and motivation; models of learning
  - Linear separators / perceptrons, the perceptron learning algorithm
  - Is learning feasible?
- Learning theory (chapter 2)
  - Training versus testing
  - Bounding the growth of the error term
  - VC dimension, bias, variance
- Linear models for classification and regression (chapter 3)
  - Linear models for classification and regression
  - Non-separable data
  - Logistic regression and gradient descent
  - Non-linear transformations
- Overfitting (chapter 4)
  - Overfitting
  - Regularization
  - Validation and model selection
- Similarity methods (e-chapter 6)
  - Similarity and nearest-neighbor
  - Efficiency in nearest-neighbor search
  - Radial basis functions
- Neural networks (e-chapter 7)
  - The multilayer perceptron
  - The neural network and backpropagation
  - Preventing overfitting
- Support vector machines (e-chapter 8)
  - Maximizing the margin
  - Optimal hyperplane, overfitting
  - The kernel trick
- Unsupervised learning
  - Clustering, density estimation, dimension reduction
  - k-means, Gaussian mixture models
  - Principal components analysis (PCA)
  - Independent components analysis (ICA)
The midterm exam will be in class Tuesday, February 27th.
The final exam will be on May 7 at 9:00 AM. For the latest university finals information, check here.
Textbooks & resources
This semester we will follow a newer book, Learning From Data by Abu-Mostafa, Magdon-Ismail, and Lin. The textbook has several electronic chapters available for free to those who have the physical textbook. To obtain them, you will need the login information: the username is 'bookreaders', and the password is the first word on page 27 of the book.
We may also read some papers from the research literature.
Other texts (I may draw some material from these):
- Introduction to Machine Learning (3rd ed.) by Ethem Alpaydin.
- Machine Learning by Tom Mitchell.
- Bayesian Reasoning and Machine Learning by David Barber (free version available online).
- Pattern Recognition and Machine Learning by Christopher Bishop.
- The Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani, and Jerome Friedman.
Further online resources:
- Past versions of this course have followed Andrew Ng's CS 229 course at Stanford.
- Kaggle datasets and Kaggle competitions
- Andrew Moore has a number of nice tutorials on topics related to machine learning
- Tommi Jaakkola and MIT OpenCourseWare have provided a machine learning course with course materials available online
- KDD dataset repository, which has many popular machine-learning datasets
- A Matlab tutorial
- LaTeX introduction, another LaTeX introduction, LaTeX reference
- Matrix reference manual
Grading
Grades will be assigned based on this breakdown:
- homeworks: 40%
- semester project: 20%
- midterm exam: 20%
- final exam: 20%
Here is a tentative grading scale:
F < 60 ≤ D- < 62 ≤ D < 67 ≤ D+ < 70 ≤ C- < 72 ≤ C < 78 ≤ C+ < 80 ≤ B- < 82 ≤ B < 88 ≤ B+ < 90 ≤ A- < 92 ≤ A
Some assignments may be worth more than others. Exams are closed-book. The final exam will be comprehensive.
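For concreteness, here is a minimal sketch in Python of how a weighted course percentage could be computed from this breakdown and mapped onto the tentative scale above. The component scores are made-up placeholders, and the script is only an illustration, not part of any course materials.

```python
# Illustration only: compute a weighted course percentage from the breakdown
# above and map it to a letter grade using the tentative scale.
# The component scores below are made-up placeholders.
weights = {"homework": 0.40, "project": 0.20, "midterm": 0.20, "final": 0.20}
scores = {"homework": 85.0, "project": 90.0, "midterm": 78.0, "final": 88.0}

# Weighted sum: each component's percentage times its weight.
overall = sum(weights[k] * scores[k] for k in weights)

# Lower cutoffs taken from the tentative grading scale, highest first.
cutoffs = [(92, "A"), (90, "A-"), (88, "B+"), (82, "B"), (80, "B-"),
           (78, "C+"), (72, "C"), (70, "C-"), (67, "D+"), (62, "D"), (60, "D-")]
letter = next((grade for cutoff, grade in cutoffs if overall >= cutoff), "F")

print(f"overall = {overall:.1f}%, letter grade = {letter}")  # 85.2%, B
```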
Policies
- Check this website and your email often for updates and announcements. We only meet a few times a week, but I may post updates at any time. It is your responsibility to follow these updates.
- All work in this course is strictly individual, unless the instructor explicitly states otherwise. While discussion of course material is encouraged, collaboration on any work for the course is not allowed. Collaboration includes (but is not limited to) discussing with anyone other than the professor any material that is specific to completing an assignment. You are encouraged to discuss the course material with the professor, preferably in office hours, and also by email.
- Late assignments are not accepted without prior arrangement. Exams may be made up with prior arrangement (made at least one class before the exam) or due to illness, with a note from a health care professional.
- If you find any mistake in grading, please let me know. Bring any grading correction requests to my attention within two weeks of receiving the grade or before the end of the semester, whichever comes first; after that, I will not adjust your grade.
Academic honesty
I take academic honesty very seriously.
Many studies, including one by Sheilah Maramark and Mindi Barth Maline, have suggested that "some students cheat because of ignorance, uncertainty, or confusion regarding what behaviors constitute dishonesty" (Maramark and Maline, Issues in Education: Academic Dishonesty Among College Students, U.S. Department of Education, Office of Research, August 1993, page 5). In an effort to reduce misunderstandings in this course, a minimal list of activities that will be considered cheating is given below.
- Copying another student's work. Simply looking over someone else's source code is copying.
- Providing your work for another student to copy.
- Collaboration on any assignment, unless the work is explicitly given as collaborative work.
- Using notes or books during any exam.
- Giving another student answers during an exam.
- Reviewing a stolen copy of an exam.
- Plagiarism.
- Studying tests or using assignments from previous semesters.
- Providing someone with tests or assignments from previous semesters.
- Taking an exam for someone else.
- Turning in someone else's work as your own work.
- Studying a copy of an exam prior to taking a make-up exam.
- Providing a copy of an exam to someone who is going to take a make-up exam.
- Giving test questions to students in another class.
- Reviewing previous copies of the instructor's tests without permission from the instructor.
Copyright © 2018 Greg Hamerly, with some content taken from a syllabus by Jeff Donahoo.
Computer Science Department
Baylor University