"Who moved my cheese?" asked someone in the 90's - and me today. I've spent several months going deep on Machine Learning (especially random forest), dimensionality reduction, and Topological Data Analysis (TDA). Fascinating stuff to learn about - though took me a while to get my head around a few of the concepts. Anyway, just when I was starting to feel pretty good about my progress...
BANG! Deep Learning ....zoink!
From three sources, in three days: "Deep Learning." It really feels like someone might have "moved my cheese." Anyway, I need to get my head around this, so I thought I'd do a little legwork and also share my journey with other folks hoping to get their bearings, or new to the subject (like me).
Goal - Define "Deep Learning" and Key Components
So what is Deep Learning and how does it differ from plain old ML and Neural Networks? Is this old wine in new bottles, hype driven, or has something else changed?
What I've learned so far
1) LAYERED: "The term 'deep learning' gained traction in the mid-2000s after a publication ...showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised machine, then using supervised backpropagation for fine-tuning." "Data is generated by interactions of many different factors on different levels. Deep learning adds the assumption that these factors are organized into multiple levels, corresponding to different levels of abstraction or composition... Deep learning algorithms in particular exploit this idea of hierarchical explanatory factors" (from the Wiki page - link below).
'Deep learning refers to a relatively recently developed set of generative machine learning techniques that autonomously generate high-level representations from raw data sources, and using these representations can perform typical machine learning tasks such as classification, regression and clustering. Many of the most important deep learning techniques are extensions of neural network methods and a simple way to understand them is to think of multiple layers of neural networks linked together.'
'A deep neural network (DNN) is defined to be an artificial neural network with at least one hidden layer of units between the input and output layers - to achieve abstraction'
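To make the "hidden layer between input and output" definition above concrete, here is a minimal sketch (in Python with NumPy, since I haven't found R examples yet) of a forward pass through a tiny network with one hidden layer. The sizes, weights, and ReLU activation are all arbitrary choices for illustration, not from any of the sources quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # a common nonlinearity; without one, stacked layers collapse to one
    return np.maximum(0.0, x)

# A tiny feedforward net: 4 inputs -> 3 hidden units -> 2 outputs.
# The hidden layer is the "abstraction" step the definition refers to.
W1 = rng.normal(size=(4, 3))   # input-to-hidden weights
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2))   # hidden-to-output weights
b2 = np.zeros(2)

x = rng.normal(size=(5, 4))    # a batch of 5 example inputs

hidden = relu(x @ W1 + b1)     # learned intermediate representation
output = hidden @ W2 + b2      # e.g. class scores, before a softmax

print(output.shape)            # (5, 2)
```

"Deep" just means stacking more of these hidden layers, so each one builds on the representation learned by the layer below it.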
2) UNSUPERVISED - "Unlike supervised machine learning, deep learning is mostly unsupervised. Large-scale neural nets that allow the computer to learn and 'think' by itself." No "training" labels, no feedback loop, no 'reward signal'. (Note, though, that the quote above does mention "supervised backpropagation for fine-tuning.")
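A small sketch of what "learning without labels" can look like: a linear autoencoder that learns a compressed representation of raw data purely by minimizing reconstruction error. This is my own toy illustration of the unsupervised-pretraining idea (tied weights, made-up sizes), not code from any of the sources.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 unlabeled examples with 8 features; no targets anywhere.
X = rng.normal(size=(100, 8))
W = rng.normal(size=(8, 3)) * 0.1    # encoder weights; decoder is tied (W.T)

before_err = np.mean((X @ W @ W.T - X) ** 2)

lr = 0.01
for _ in range(500):
    code = X @ W                     # 3-dim representation of each example
    err = code @ W.T - X             # reconstruction minus input
    # gradient of the squared reconstruction error (tied weights)
    grad = (X.T @ err @ W + err.T @ X @ W) / len(X)
    W -= lr * grad

final_err = np.mean((X @ W @ W.T - X) ** 2)
print(before_err, "->", final_err)   # reconstruction error drops
```

In the layer-wise pretraining recipe from point 1, each layer is trained like this on the outputs of the layer below, and only at the end does supervised backpropagation fine-tune the whole stack.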
3) SUBSET OF MACHINE LEARNING - It's a subcategory of machine learning that uses neural networks to improve or better understand things like speech recognition, computer vision, and natural language processing. Also, it's not new, but it seems to be getting a 'second wind'.
4) BLEEDING EDGE, depending on your definition - "Yoshua Bengio, an AI researcher at the University of Montreal, estimates that there are only about 50 experts worldwide in deep learning, many of whom are still graduate students. He estimated that DeepMind employed about a dozen of them on its staff." I suspect this is a point that many folks would take exception to.
5) TRENDING (or starting to) (graph above, Google Trends) - Definitely on the rise, but given the light volume, it's still early to call it a hockey stick; it hasn't hit prime time/mainstream yet. Also interesting to see the "cross" of ML and NN in 2010.
6) HYPE AND MONEY Google acquired DeepMind Technologies in January - which got people's attention. For a sector (Data Science / ML) already bubbling, this turned up the gas even more.
In December 2013, Facebook announced that it had hired Yann LeCun to head its new artificial intelligence (AI) lab, with operations in California, London, and New York.
7) BUT IT WORKS - "In 2009, deep multidimensional LSTM networks demonstrated the power of deep learning with many nonlinear layers, by winning three ICDAR 2009 competitions in connected handwriting recognition, without any prior knowledge about the three different languages to be learned" - see wiki footnotes.
In the Google/Stanford paper from 2012, "Building High-level Features Using Large Scale Unsupervised Learning," they achieved a 70% improvement in cat-detection technology :) (the goal: to build high-level, class-specific feature detectors from unlabeled images - e.g. face detectors).
From Josh Bloom, CTO at Wise.IO "The accuracy wise.io is seeing that DL provides for multi class inference problems on sensor data, eg. imaging, is remarkable. Now, we're trying to make predictions from all the algorithms we use in our machine learning driven applications more interpretable for the end business user."
And to exercise/test it: automatic speech recognition is one application - the popular TIMIT data set is often used for initial evaluations of deep learning architectures. The full set contains 630 speakers from eight major dialects of American English, with each speaker reading 10 different sentences. (For images there is MNIST, composed of handwritten digits, with 60,000 training examples and 10,000 test examples.)
Recent Developments in DL (gets technical in spots)
8) APPLIED IN THE FUTURE - Effective learning from unlabeled data using unsupervised methods is the desired outcome, with lots of practical applications. Humans are visual creatures, cameras are everywhere, and capabilities that allow firms to excel in voice and visual ML will allow them to outcompete their peers.
9) METAPHYSICAL - In looking at some of the research papers, and seeing the "Master Neuron" images of Cats and Faces - that wasn't any one cat or one face - I was struck by the parallels to Plato's Theory of Forms (Platonic realism is a philosophical term usually used to refer to the idea of realism regarding the existence of universals or abstract objects).
Anyway, hope this helped a little. I'm going to poke around and see if there are some examples I can work with for R. If I find any, will post them in this blog.
Links & References
- WIKI - nice background - http://en.wikipedia.org/wiki/Deep_learning
- FastCompany light article http://www.fastcolabs.com/3026423/why-google-is-investing-in-deep-learning
- MIT Technology Review - google focus / talent http://www.technologyreview.com/news/524026/is-google-cornering-the-market-on-deep-learning/
- NYT light article (2012 - Cat focus) http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?_r=0
- Academic Paper for above http://arxiv.org/pdf/1112.6209v5.pdf
- Nice Blog overview - http://theanalyticsstore.com/deep-learning/
- Reddit - some long dialogue on the subject http://www.reddit.com/r/MachineLearning/comments/22u1yt/is_deep_learning_basically_just_neural_networks/
- And.. http://www.reddit.com/r/MachineLearning/comments/xg5e6/ok_what_exactly_is_deep_learning/
For the "R" Crowd
R LIBRARIES & LINKS (Developing)
- darch: Package for deep architectures and Restricted Boltzmann Machines
- rbm - https://github.com/zachmayer/rbm - Restricted Boltzmann Machines in R
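For a sense of what these RBM packages do under the hood, here is a toy sketch of one contrastive-divergence (CD-1) update for a binary Restricted Boltzmann Machine - the unsupervised building block trained layer by layer in deep belief nets. It's in Python/NumPy rather than R, bias terms are omitted for brevity, and all sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 4, 0.1
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))

# a batch of 10 binary "data" vectors
v0 = rng.integers(0, 2, size=(10, n_visible)).astype(float)

# positive phase: hidden-unit probabilities given the data
h0 = sigmoid(v0 @ W)
# negative phase: one Gibbs step (sample hidden, reconstruct visible)
h_sample = (rng.random(h0.shape) < h0).astype(float)
v1 = sigmoid(h_sample @ W.T)
h1 = sigmoid(v1 @ W)

# CD-1 update: data correlations minus reconstruction correlations
W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
print(W.shape)  # prints (6, 4)
```

The hidden activations of one trained RBM become the "data" for the next RBM up - which is exactly the layer-at-a-time pretraining described in point 1 above.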
Caffe Architecture, Install & Examples (have not tried yet)
- Introductory slides: slides about the Caffe architecture, updated 03/14.
- Installation: Instructions on installing Caffe (works on Ubuntu, Red Hat, OS X).
- Pre-trained models: BVLC provides some pre-trained models for academic / non-commercial use.
- Development: Guidelines for development and contributing to Caffe.
- LeNet / MNIST Demo: end-to-end training and testing of LeNet on MNIST.
- CIFAR-10 Demo: training and testing on the CIFAR-10 data.
- Training ImageNet: end-to-end training of an ImageNet classifier.
- Running Pretrained ImageNet [notebook]: run classification with the pretrained ImageNet model using the Python interface.
- Running Detection [notebook]: run a pretrained model as a detector.
- Visualizing Features and Filters [notebook]: trained filters and an example image, viewed layer-by-layer.
Will keep working on this - let me know if you have any 'adds' or edits - Cheers! Ryan
About this blog
Data Analytics & Visualization Blog - Generating insights from Data since 2013
Created: July 25, 2014