Other interpretations of machine learning:
* People learning how to use a machine
* People being taught by a machine (it already happens in schools)
But probably that is not what you mean.
Neural networks are one way of implementing machine learning. Another might be evolution-like code inspired by biology (or a combination of the two).
I assume that Facebook's face recognition uses a neural network, but in combination with pre-defined logic rules. The same goes for self-driving cars: the training time would otherwise be uneconomical, and teaching the car what a collision is and how to prevent one is better not done by trial and error. Although it can also learn from "near collisions".
The most advanced neural network code (with back-propagation) that I ever wrote was a program that learned the color 'yellow'. One does not need a neural net for this, but it worked as follows:
* 3 inputs: the red, green and blue values (each 0...1 or -1...+1), maybe some additional inputs like "red squared".
* A few intermediate (hidden) cells, although probably not needed for this simple case.
* 1 output cell (-1...+1), where > 0 meant "yellow" and <= 0 meant "not yellow".
* The 3 layers of cells were connected by 2 layers of weights, with random values for the weights at the start.
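That layout could be sketched like this in Python (a sketch, not my original code; the tanh squashing and the exact number of hidden cells are my assumptions):

```python
import math
import random

# A sketch of the net described above: 3 inputs (R, G, B), a few
# hidden cells, 1 output cell, and 2 layers of random weights.
# The tanh squashing to -1...+1 and the cell counts are assumptions.
N_IN, N_HID = 3, 4

w_in_hid = [[random.uniform(-1, 1) for _ in range(N_IN)]
            for _ in range(N_HID)]
w_hid_out = [random.uniform(-1, 1) for _ in range(N_HID)]

def forward(rgb):
    # Each hidden cell: a weighted sum of the inputs, squashed to -1...+1.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, rgb)))
              for row in w_in_hid]
    # One output cell in -1...+1; > 0 means "yellow".
    out = math.tanh(sum(w * h for w, h in zip(w_hid_out, hidden)))
    return out

# With random starting weights the answer is still random, of course:
out = forward([1.0, 1.0, 0.0])  # pure yellow in RGB
print("yellow" if out > 0 else "not yellow")
```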
A random color was generated and its RGB components fed to the inputs; the result was a "yellow" or "not yellow" output. I, as the teacher, then told the program (by a key press) whether it was correct or incorrect. In both cases the network could be adjusted based on this feedback. Let's say I judged the result incorrect; the program then had to change something in its network. This is where the (complex) back-propagation comes into action.
It determines which of the inputs contributed most to the "error" (the incorrect answer), based on each cell's value and connection weights. The weights were then adjusted accordingly. A little, not too much, like a child that might need multiple corrections before it learns not to throw Lego bricks. Slowly, step by step, the program then became better at recognizing yellow, to the point where the teacher (me, but it could be an Internet search engine as well) no longer knows the difference between somewhat yellow and yellowish-green.
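Put together, the whole teaching loop might look like the sketch below. It is not my original code: I replace the human key press with a hard-coded teacher rule, and the tanh cells, bias inputs, learning rate and cell counts are all my assumptions.

```python
import math
import random

random.seed(1)
N_IN, N_HID, LR = 3, 4, 0.2  # LR: adjust "a little, not too much"

# Bias handled as an extra, constant input of 1.0.
w_in_hid = [[random.uniform(-1, 1) for _ in range(N_IN + 1)]
            for _ in range(N_HID)]
w_hid_out = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def forward(rgb):
    x = rgb + [1.0]
    hidden = [math.tanh(sum(w * v for w, v in zip(row, x)))
              for row in w_in_hid]
    out = math.tanh(sum(w * h for w, h in zip(w_hid_out, hidden + [1.0])))
    return hidden, out

def teacher(rgb):
    # Stand-in for the human key press (my rule, an assumption):
    # call a color "yellow" when red and green are high and blue is low.
    r, g, b = rgb
    return 1.0 if r > 0.6 and g > 0.6 and b < 0.4 else -1.0

for _ in range(20000):
    rgb = [random.random() for _ in range(3)]
    x = rgb + [1.0]
    hidden, out = forward(rgb)
    # Back-propagation: how wrong was the output, scaled by the
    # slope of tanh at the output cell?
    d_out = (teacher(rgb) - out) * (1 - out * out)
    for j in range(N_HID):
        # The error flows back through each connection; the cells that
        # contributed most to the error are corrected most.
        d_hid = d_out * w_hid_out[j] * (1 - hidden[j] * hidden[j])
        w_hid_out[j] += LR * d_out * hidden[j]
        for i in range(N_IN + 1):
            w_in_hid[j][i] += LR * d_hid * x[i]
    w_hid_out[N_HID] += LR * d_out  # the output cell's bias weight

_, out = forward([0.9, 0.9, 0.1])  # a clearly yellow color
print("yellow" if out > 0 else "not yellow")
```

The small learning rate is the "not too much" part: one correction barely moves the weights, but thousands of them slowly carve out the yellow region.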
I am not sure where I have the code. I tried more complex things, but I was disappointed by the results. To get better neural nets, a randomly initialized net is probably not the best starting point. A human is also not born with a randomly initialized brain; there is already "code" present. And the current human or animal brain is most likely the result of millions of years of iteration (evolution) with millions of individuals and failures: like a lot of CPU power and a very long running "program". I am talking more about general artificial intelligence now, something different than a highly specialized machine that has learned to recognize faces or play Go (going to read the linked Go article now...)