The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics)

Category: Computer Science
Author: Trevor Hastie, Robert Tibshirani, Jerome Friedman
Rating: 4.0
This Month on Reddit: 2

Comments

by effernand   2019-08-24

When I started in the field I took the famous Coursera course by Andrew Ng. It helped me grasp the major concepts of (classical) ML, though it really lacked mathematical depth (truth be told, it was not really meant for that).

After that, I took a course on edX which covered things in a little more depth. As I got deeper into the theory, things became clearer. I have also read some books, such as:

  • Neural Networks, by Simon Haykin,
  • Elements of Statistical Learning, by Hastie, Tibshirani and Friedman
  • Pattern Recognition and Machine Learning, by Bishop

All these books have their own approach to Machine Learning, and I personally think it is important to have a good understanding of Machine Learning, and of its impact on various fields (signal processing, for instance), before jumping into Deep Learning. After almost three years of dedicated study of the field, I feel like I can walk a little by myself.

Now, as a beginner in Deep Learning, things are a little bit different. I would like to make a few points:

  • If you have a good base in maths and Machine Learning, the algorithms used in Deep Learning will be more straightforward, as some of them are simply extensions of earlier approaches.
  • The practical side of Machine Learning seems almost trivial compared to Deep Learning. When I programmed Machine Learning models, I usually had small datasets and algorithms that could run on a single CPU.
  • As you begin to work with Deep Learning, you will need to master a framework of your choice, which brings issues of data handling (most datasets do not fit into memory) and GPU/memory management. For instance, if you don't handle your data well, it becomes a bottleneck that slows down your code (see the sketch after this list). So, compared with simple numpy + matplotlib applications, the tensorflow APIs + tensorboard visualizations can be tough.
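
A minimal sketch of what "handling your data well" can look like, assuming TensorFlow 2.x; the file pattern, buffer size, and batch size below are made-up placeholders, not something from the original comment:

    import tensorflow as tf

    # Hypothetical TFRecord shards that do not fit into memory.
    files = tf.data.Dataset.list_files("data/train-*.tfrecord")

    dataset = (
        tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
        .shuffle(10_000)              # shuffle within a buffer instead of loading everything
        .batch(256)
        .prefetch(tf.data.AUTOTUNE)   # overlap input loading with GPU compute
    )
    # Iterating over `dataset` now streams batches from disk, so the input
    # pipeline is less likely to become the bottleneck the comment warns about.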

So, to summarize, you need to start with simple, boring things until you can be an independent user of ML methods. THEN you can think about state-of-the-art problems to solve with cutting-edge frameworks and APIs.

by fusionquant   2019-08-24
  1. Being honest with yourself is great. Not being able to build anything profitable is much better than tricking yourself into believing you've made "profitable" algos in backtests, only to see them all fail in real life.

  2. DO NOT trade forex. FX is super hard to trade: poor data, fragmented markets, and it is the most efficient asset class. Finding alpha in FX is 10x harder than in equities or commodities spot/futures.

  3. Unless you are super comfortable with the basic stuff, DO NOT go to ML, especially deep learning. Start with simple stuff like momentum and mean-reversion, then interpretable models, then ensembles of models, then DL (a sketch of the first step follows below). Otherwise you will never learn, nor make any progress at all.
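
To illustrate what "simple stuff like momentum" means in practice, here is a rough sketch assuming daily close prices in a pandas Series; the lookback window and the long/flat rule are arbitrary illustrative choices, not a recommendation from the comment:

    import pandas as pd

    def momentum_signal(prices: pd.Series, lookback: int = 60) -> pd.Series:
        """Long (1.0) when the trailing return is positive, flat (0.0) otherwise."""
        trailing_ret = prices.pct_change(lookback)
        # Shift by one day so today's position uses only yesterday's information.
        return (trailing_ret > 0).astype(float).shift(1).fillna(0.0)

    def strategy_returns(prices: pd.Series, signal: pd.Series) -> pd.Series:
        """Daily strategy returns: position times the asset's daily return."""
        return (signal * prices.pct_change()).fillna(0.0)

    # Usage sketch (with your own data file):
    # prices = pd.read_csv("closes.csv", index_col=0, parse_dates=True)["close"]
    # equity_curve = (1 + strategy_returns(prices, momentum_signal(prices))).cumprod()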

This book is the bible: https://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/ Read it 100 times and code as many of the examples yourself as possible.

by throwawaystickies   2019-07-21

Thank you!! If you don't mind my asking, if you're working a full-time job, how much time have you been allocating for the program, and in how many months are you projected to finish?

Also, do you have any tips on how I can best prepare before entering the program? I'm considering reading The Elements of Statistical Learning during my commute instead of the usual books I read, and brushing up on my linear algebra to prepare.

by fusionquant   2019-07-21

Since we're not in the '80s, this style of pitching ideas is not good anymore.

  • PnL = Profits - Losses, so showing only the cases where the idea worked and dropping the cases where it did not is not good.

  • Showing a couple of arbitrary indicators, preferably something noisy like RSI, as proof is poor thinking. Usually the idea comes first, then you implement it with certain methods, then you test whether the hypothesis holds.

  • In this article, a lot of things are being implied: a connection between a surge in price and a surge in volume (often true, actually), the idea that those indicators correlate with or "predict" a move (probably not true at all), and so on...

  • None of the hypotheses were clearly stated, and none have been tested.

  • "Backtest" in the end with 10 or 30 signals is laughable. With a calm market like that I can make 100 "strategies" that would show a modest loss on 9 trades out of 10, with 10th one being a 10%+ surge that will cover all the losses and turn a decent profit. Due to overfitting these strategies are sure to lose money.

All that being said, I am not sure if alok310 is the author, but if he is, I would suggest reading "The Elements of Statistical Learning" (https://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/).

Sorry for the rough feedback, but I strongly believe that this level of idea generation should not be promoted or encouraged.

by anonymous   2019-01-13

Why?

Probability theory is very important for modern data-science and machine-learning applications, because in a lot of cases it lets you "open up the black box", shed some light on the model's inner workings, and, with luck, find the ingredients needed to turn a poor model into a great one. Without it, a data scientist is very much restricted in what they are able to do.

A PDF (probability density function) is a fundamental building block of probability theory, absolutely necessary for any sort of probabilistic reasoning, along with expectation, variance, priors and posteriors, and so on.

Some examples here on StackOverflow, from my own experience, where a practical issue boils down to understanding data distribution:

  • Which loss-function is better than MSE in temperature prediction?
  • Binary Image Classification with CNN - best practices for choosing “negative” dataset?
  • How do neural networks account for outliers?

When?

The questions above provide some examples; here are a few more if you're interested, and the list is by no means complete:

  • What is the 'fundamental' idea of machine learning for estimating parameters?
  • Role of Bias in Neural Networks
  • How to find probability distribution and parameters for real data? (Python 3)

I personally try to find a probabilistic interpretation whenever possible (choice of loss function, parameters, regularization, architecture, etc.), because this way I can move from blind guessing to making reasonable decisions.
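
As one concrete example of such an interpretation (a sketch with simulated residuals, not something from the original answer): minimizing mean squared error is the maximum-likelihood choice under an assumed Gaussian noise model, so picking MSE quietly assumes a distribution for your errors.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    residuals = rng.normal(0.0, 2.0, size=1000)       # pretend these are a model's errors

    sigma = residuals.std()
    mse = (residuals ** 2).mean()

    # The average Gaussian negative log-likelihood of the residuals...
    nll = -stats.norm(loc=0.0, scale=sigma).logpdf(residuals).mean()
    # ...equals the MSE up to scaling and an additive constant:
    print(nll, 0.5 * mse / sigma**2 + 0.5 * np.log(2 * np.pi * sigma**2))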

Reading

This is very opinion-based, but at least a few books are really worth mentioning: The Elements of Statistical Learning, An Introduction to Statistical Learning: with Applications in R, or Pattern Recognition and Machine Learning (if your primary interest is machine learning). That's just a start; there are dozens of books on more specific topics, like computer vision, natural language processing, and reinforcement learning.

by olooney   2018-10-04
Some of the best textbooks in statistics and machine learning:

Applied

Hosmer et al., Applied Logistic Regression. An exhaustive guide to the perils and pitfalls of logistic regression. Logistic regression is the power tool of interpretable statistical models, but if you don't understand it, it will take your foot off (concretely, your inferences will be wrong and your peers will laugh at you). This book is essential. Graduate level, or perhaps advanced undergraduate, intended for STEM and social science grad students.

https://www.amazon.com/Applied-Logistic-Regression-Probabili...
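
As a rough illustration of that "interpretable" claim (simulated data, not an example from the book): fitting a logistic regression and exponentiating a coefficient gives an odds ratio, which is exactly the kind of inference this book teaches you to get right.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)                           # a made-up risk factor
    p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))         # true log-odds: 0.5 + 1.2 * x
    y = rng.binomial(1, p)                             # binary outcome

    model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    print(model.params)             # intercept and slope on the log-odds scale
    print(np.exp(model.params[1]))  # odds ratio per one-unit increase in x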

Peter Christen's Data Matching. Record Linkage is a relatively niche concept, so Christen's book has no right to be as good as it is. But it covers every relevant topic in a clear, even-handed way. If you are working on a record linkage system, then there's nothing in this book you can afford not to know. Undergraduate level, but intended for industry practitioners.

https://www.amazon.com/Data-Matching-Techniques-Data-Centric...

Max Kuhn's Applied Predictive Modeling. Even if you don't use R, this is an incredibly good introduction to how predictive modeling is done in practice. Early undergraduate level.

http://appliedpredictivemodeling.com/

Theoretical

The Elements of Statistical Learning. Probably the single most respected book in machine learning. Exhaustive and essential. Advanced undergraduate level.

https://www.amazon.com/Elements-Statistical-Learning-Predict...

Kevin Murphy's Machine Learning: A Probabilistic Perspective. Covers lots of the same ground as Elements but is a little easier. Undergraduate level.

https://www.amazon.com/Machine-Learning-Probabilistic-Perspe...

Taboga's Lectures on Probability Theory and Mathematical Statistics. Has the distinction of being available for free online in a web-friendly format. https://www.amazon.com/Lectures-Probability-Theory-Mathemati...

by olooney   2018-09-13
> In Searle’s time, the dominant AI paradigm was GOFAI (Good Old-Fashioned Artificial Intelligence.)

Russell and Norvig's book is probably the best introduction to "old fashioned" AI:

https://www.amazon.com/Artificial-Intelligence-Modern-Approa...

GOFAI may not have led directly to true AI, but it produced a ton of useful algorithms such as A* and minimax. Although attention has turned to machine learning algorithms (à la https://www.amazon.com/Elements-Statistical-Learning-Predict...), the hybrid of GOFAI and ML has produced some extraordinary results, such as AlphaZero:

https://deepmind.com/blog/alphago-zero-learning-scratch/