Just finished Andrew Ng's ML course^{cheeky} and I can wholeheartedly recommend it as a nice balance of exposition and hands-on work. I was a bit surprised, though, when, around logistic regression, maybe a third of the way through the course, prof. Ng said that in his experience of "walking around Silicon Valley" many startups don't use concepts beyond what was covered thus far. Sounds encouraging for a beginner. Where to next?
I've seen two recommended approaches: either dive deep with the Deep Learning book or go for An Introduction to Statistical Learning. For a terser read, All of Statistics is also often listed as the next step; coincidentally, its author argues strongly for why grounding in math is crucial:
Students who analyze data, or who aspire to develop new methods for analyzing data, should be well grounded in basic probability and mathematical statistics. Using fancy tools like neural nets, boosting, and support vector machines without understanding basic statistics is like doing brain surgery before knowing how to use a band-aid.
— Wasserman
So, lots to learn still. Here are the solutions in case you get stuck figuring out the Octave minutiae of vectorization (or that '~' stands for an ignored return value, '...' for line continuation, etc.) and can thus spend more time grokking the concepts.^{ack}
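For the curious, a minimal Octave sketch of what that looks like in practice. This is my own illustrative snippet with made-up variable names, not a solution to any graded exercise; it just demonstrates vectorization, the '~' placeholder, and '...' continuation together:

```
% Vectorized logistic-regression cost: no explicit loop over examples.
function J = cost(theta, X, y)
  h = 1 ./ (1 + exp(-X * theta));       % sigmoid, applied element-wise
  J = -(y' * log(h) + (1 - y)' * ...    % '...' continues the expression
        log(1 - h)) / length(y);        %   onto the next line
end

% '~' discards an unwanted return value: max returns [value, index],
% but here only the index of the largest element is kept.
[~, idx] = max([0.1 0.7 0.2]);          % idx is 2
```

The vectorized form computes the cost over all training examples in one matrix expression, which is both faster and closer to the math than looping.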

The Coursera syllabus is actually a watered-down version of Stanford CS 229, so the title is a bit cheeky.
This breaches the Coursera Honor Code. I reason that freely available solutions diminish the worth of a (paid) Coursera certification, which is debatable anyway. And really, whoever is inclined to (self-)cheat would have no problem finding these solutions elsewhere.