I recently finished taking a class called Interactive Programming in Python. Its aim was to introduce students to the process of developing games and other applications that a user can interact with in real time. Much of the course covered standard Python operators and methods, along with some "best practices" for using them and for organizing intuitive, readable code. The class content was created through the shared effort of a team of professors at Rice University: Joe Warren, John Greiner, Stephen Wong, and Scott Rixner.
As far as exercises go, the class consisted of relatively challenging weekly review quizzes and a coding project, usually in the form of a game such as Pong or Memory. I can really appreciate the work that was put into developing the quizzes. They compel the student to explore the characteristics of the various data structures, and they ask questions that aren't answered directly in the lectures but can be worked out through deductive reasoning or plain old trial and error.
The video lectures were very engaging. Professors Warren and Rixner did a great job of keeping the videos interesting and getting the students excited for the weekly project. Each lecture series ended with a short introduction to the latest assignment, usually capped off with a competition between Professors Warren and Rixner playing the game that we would be developing. Their banter offered an amusing glimpse of what seems to be a friendly rivalry off camera. A class can't be expected to be entertaining, but as long as it is also informative, it certainly can't hurt.
One unique characteristic of the course was that it was entirely self-contained. This is sort of a philosophical objective of Coursera courses in general, but this course could be completed entirely within the confines of a web browser. The student does not need to download any documents, install Python, or even submit files. This is made possible by Professor Rixner's browser-based Python environment, CodeSkulptor. This approach to development can be good and bad. While CodeSkulptor makes the coding assignments easier on the student, it limits exposure to some of the contextual learning that is necessary for anyone trying to use Python seriously. CodeSkulptor only supports a tightly packaged set of operations; this narrows the focus of the assignments so that the student isn't overwhelmed by a gigantic API. At the same time, while it is great for the primary focus of the course, the reliance on CodeSkulptor does not prepare the student to jump into developing Python code right out of the gate. They must learn a little more before they can start writing and running their own scripts and modules on their own machine.
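To give a sense of the scale of that API, here is a minimal sketch of the kind of program the course builds, using CodeSkulptor's simplegui module (this only runs in the browser at codeskulptor.org, not under a standard Python install):

import simplegui

count = 0

def tick():
    # Timer handler: fires once per interval and updates the state.
    global count
    count += 1

def draw(canvas):
    # Draw handler: the frame calls this repeatedly to repaint the canvas.
    canvas.draw_text(str(count), [120, 110], 48, "White")

frame = simplegui.create_frame("Counter", 300, 200)
frame.set_draw_handler(draw)
timer = simplegui.create_timer(1000, tick)

timer.start()
frame.start()

A frame, a draw handler, and a timer are essentially the whole toolkit; most of the weekly games are built from just these pieces plus event handlers for the mouse and keyboard.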
The timeline was a little more rigid than in other Coursera courses that I have experienced. Unfortunately, the second and third weeks coincided with a trip that I took to Turkey, which caused me to miss one or two deadlines and really hurt my overall score. I can understand the motivation for the deadlines: the class implemented a unique scoring method that came with some logistical trade-offs. For instance, all coding assignments were peer-reviewed. After submitting an assignment, the student is expected to grade five other students' assignments before doing a self-evaluation. The trade-off is that, although this allows for flexible implementation of the assignment because it is being interpreted by a proficiently heuristic machine (another human being), the grading takes more than a few days and the deadlines are absolute. If a submission is late, no one may grade it and the student does not get any credit. This is a challenging characteristic for a Massive Open Online Course; assignments in online courses of this nature usually have a healthy buffer to accommodate students with demanding schedules.
I think this process worked out fairly well. All of my graders seemed to put a good deal of thought into their feedback (and I did the same for those whom I graded), which indicated to me that they took the role seriously. I believe the logistics could be improved so that students who need to submit past the deadline are paired with others in the same situation for partial credit. Additionally, in anticipation of abuse, I would suggest adding a way for students to give feedback on their graders, so that consistently poor graders (those whose assessments are consistent outliers) can be identified and kept from affecting other students' grades.
In summary, I'm pleased with the class. I decided to try this one out because I wanted to learn a little more about generating user interfaces and graphics objects in Python. This is sort of a weak point for me: most of the code that I have written so far is for personal use, terminal-based, and sort of clunky. I was also interested in learning some better practices for organizing interactive programs and implementing robust GUIs. Unfortunately, the class did not get much deeper than timers, simple drawing, and interactive objects. Still, I learned a good deal in the course and I look forward to additional opportunities to learn about interactive programming in this type of setting.
I am a Mechanical Engineer who is interested in creating intelligent machines. To that end, I am committed to developing a stronger familiarity with robotics, mechatronics, and computer science. This blog is primarily motivational; I will record my progress on various projects and post interesting or useful resources as I come across them.
Coursera: Machine Learning
I started the latest session of Andrew Ng's Machine Learning course on Coursera a month ago. There are about two more weeks left in the class, and my experience so far has been positive. We've covered linear classifiers, logistic regression by gradient descent, multi-layer neural networks, and support vector machines (SVMs). This week's topic is unsupervised learning via K-means clustering.
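K-means is simple enough to sketch in a few lines. Here is my own toy numpy version, just to illustrate the two alternating steps; the course's exercises themselves are done in Octave/MATLAB:

import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct training examples at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with the index of its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two synthetic clusters in the plane.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centroids, labels = kmeans(X, k=2)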
Part of the material has been review for me, as I studied Ng's OpenClassroom materials in the past. I posted on that experience a little bit in My Introduction to Machine Learning. Although my primary goal when working through those materials was to understand how to implement backpropagation for supervised learning of artificial neural networks, this iteration of the class on Coursera has greatly sharpened my intuition for how to improve a learning algorithm and which metrics to judge its performance by.
One large difference between this course and the OpenClassroom materials is the addition of in-video questions and review quizzes. Distributed practice is a great method for learning concepts and keeping the viewer engaged; Coursera courses in general seem to do a great job of keeping the learning experience interactive. Ng's Machine Learning, being sort of the "flagship" Coursera course, is no exception.
Good Lectures
Ng's lectures are very good at explaining the motivations and the nuances of employing machine learning algorithms. Every algorithm is presented with some prototypical application upon which analogies and concepts are based. He also offers many insights, drawn from his own professional experience with machine learning, into the common pitfalls people run into when implementing a classification algorithm. For instance, when an algorithm does not perform well, it is common to conclude that the solution is to find more training examples or more features. In real-world applications, 'finding more training data' can be a significant project on its own. Furthermore, in the case of over-fitting, adding features to the training set would actually be detrimental.

This type of meta-knowledge about the application of learning algorithms is incredibly useful to me as an aspiring data scientist. Some of the techniques, such as cross-validation or generating learning curves, were entirely unknown to me when I was playing around with that Kaggle assignment. Had I been aware of them, I would have achieved much better classification accuracy in that project, and in much less time, by correctly tailoring my algorithm to the data.
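As a rough sketch of that learning-curve diagnostic (my own Python/scikit-learn version on synthetic data, not anything from the course's Octave assignments): train the model on growing subsets of the data and compare training error with cross-validation error.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.3, random_state=0)

for m in (20, 50, 100, 200, len(X_train)):
    clf = LogisticRegression().fit(X_train[:m], y_train[:m])
    train_err = 1 - clf.score(X_train[:m], y_train[:m])
    cv_err = 1 - clf.score(X_cv, y_cv)
    # High training AND cross-validation error suggests high bias (more
    # features might help); low training error but high cross-validation
    # error suggests high variance (more data or regularization might help).
    print(m, round(train_err, 3), round(cv_err, 3))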
Programming Exercises
The class involves mandatory weekly programming exercises that center around a particular algorithm from the week's lectures. The tasks range from classifying spam emails to teaching a neural network to recognize handwritten digits. The exercises are well developed and, in the interest of time, come with far more provided code than the student actually writes. For instance, all of the data is imported and pre-processed in the provided script files and functions. The student is usually only tasked with implementing one or more cogs in the system: some particular cost function or kernel for the task at hand. The submission process is also completely automated by the supplied "submit" script; all the user has to do is update the indicated code and run it.

This is all very impressive, but it can be rather limiting. I understand that the motivation is to make the coding task as clean and standardized as possible, both to allow for reasonable evaluation of the student's work and to let the student focus on the particular concepts that they are trying to practice. However, the whole experience is so convenient and constrained that it feels too easy. I remember the satisfaction that I felt after implementing an ANN from scratch for the tutorial Kaggle Digit Recognition Challenge, and these exercises, although they involve some incredibly exciting algorithms, don't evoke that sensation in me. I could be a complete outlier in that view; I haven't yet had a chance to learn what other students' opinions are on the forums.
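For a sense of what one of those "cogs" looks like, here is a numpy translation of the sort of piece the student fills in (the actual assignments are written in Octave/MATLAB): a regularized logistic regression cost function and its gradient.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function_reg(theta, X, y, lam):
    # Regularized logistic regression cost J(theta) and its gradient.
    # X has shape (m, n) with a leading column of ones; y is 0/1 with shape (m,).
    m = len(y)
    h = sigmoid(X @ theta)  # hypothesis for each example
    # Cross-entropy loss plus an L2 penalty (the bias term theta[0] is not penalized).
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
    J += (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    grad = X.T @ (h - y) / m
    grad[1:] += (lam / m) * theta[1:]
    return J, grad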
Moving Forward
Nevertheless, I'm very excited by the knowledge and experience that I have gained throughout this course and greatly appreciate the efforts of Andrew Ng and his TAs in providing this learning experience. There is a great deal to explore about just the application of machine learning algorithms, not to mention their potential adaptations. My next goal is to read up on the subject of neural networks and build a better context for understanding these machine learning algorithms. I recognize that the ANN implementation that Ng presents is just a special case called a feed-forward neural network. It would be interesting to learn how neural networks behave in real time, so I've found an old book called Introduction to the Theory of Neural Computation that looks promising, and I've recently begun reading Sebastian Seung's Connectome.
Calculus of Variations, 2/2
In the first post of this series, the brachistochrone problem was introduced and we looked at how the performance of a particular curve could be evaluated. Recall that the total transit time may be expressed in this way:
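Writing the curve as y = f(x), with the convention (assumed here) that y is measured downward from the release point, the bead covers each arc-length element ds at speed v, so

\[
T = \int \frac{ds}{v} = \int_{0}^{x_1} \frac{\sqrt{1 + f'(x)^2}}{v(x)}\, dx .
\]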
Also, because the bead is traveling through a constant gravitational field, its velocity may be expressed as a function of the change in height by conservation of energy:
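Assuming the bead starts from rest, equating its kinetic energy with the loss in potential energy gives

\[
\tfrac{1}{2} m v^2 = m g f(x) \quad\Longrightarrow\quad v(x) = \sqrt{2 g f(x)} .
\]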
Therefore, a functional that evaluates the transit time for some curve f(x) in the brachistochrone problem is given by:
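\[
T[f] = \int_{0}^{x_1} \sqrt{\frac{1 + f'(x)^2}{2 g f(x)}}\, dx ,
\]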
which may be generalized with a function L:
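\[
T[f] = \int_{0}^{x_1} L\big(x, f(x), f'(x)\big)\, dx ,
\]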
where
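\[
L(x, f, f') = \sqrt{\frac{1 + f'^2}{2 g f}} .
\]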