What is it, really?

When one digs into the inner workings of any machine learning algorithm, one realizes that it is fundamentally just optimizing a function with coefficients. The tuning of these coefficients is what we call 'learning': they are the unknowns being learnt. All of machine learning can be thought of as an exercise in optimization. At the convergence of this optimization we get an inference engine, a way to go from input to output. When this inference engine is applied to an unseen input to predict a continuous output (real numbers), it is called regression; prediction of a categorical output is called classification.
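To make this concrete, here is a minimal sketch of the whole cycle in plain Python: a toy dataset, a single coefficient `w` tuned by gradient descent on a squared-error loss, and the resulting inference engine. The data, learning rate, and iteration count are illustrative choices, not from the original text.

```python
# "Learning" one coefficient w by gradient descent on the
# squared-error loss L(w) = sum_i (w*x_i - y_i)^2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by y = 2x, so the optimum is w = 2

w = 0.0    # initial guess for the coefficient (the unknown being learnt)
lr = 0.01  # learning rate (step size), a hand-picked value for this toy

for _ in range(1000):
    # dL/dw = sum_i 2*(w*x_i - y_i)*x_i
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad

print(round(w, 3))  # converges to 2.0

# At convergence we have an inference engine: input -> output.
predict = lambda x: w * x
print(predict(5.0))  # regression on an unseen input
```

Because `predict` returns a real number, this is regression; thresholding or discretizing its output would turn the same machinery into classification.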

Machine learning and AI don't invent their own math; they use math that already exists, such as matrix algebra and probability theory. It is how these tools are utilized within the algorithms that generates value.

A neural network, for instance, just uses stochastic gradient descent to optimize a loss function over its coefficients (the weights). #justmath
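The same loop can be sketched in its stochastic form. Below is a hedged toy: a single sigmoid unit (the simplest possible "network") trained by SGD on cross-entropy loss, where "stochastic" means updating from one randomly drawn sample per step. The dataset, seed, and hyperparameters are assumptions made up for illustration.

```python
import math
import random

# Stochastic gradient descent on a one-unit "network":
# p = sigmoid(w*x + b), trained with cross-entropy loss.
random.seed(0)
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]  # toy labelled points

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    x, y = random.choice(data)               # "stochastic": one sample per step
    p = 1 / (1 + math.exp(-(w * x + b)))     # forward pass
    grad = p - y                             # d(loss)/d(logit) for cross-entropy
    w -= lr * grad * x                       # gradient step on the weight
    b -= lr * grad                           # gradient step on the bias

# Inference on an unseen input: a categorical output, i.e. classification.
label = 1 if 1 / (1 + math.exp(-(w * 1.5 + b))) > 0.5 else 0
print(label)
```

A real neural network is this same recipe with many more coefficients and a backpropagation pass to compute the gradients, but the mathematical skeleton is identical: a loss, its gradient, and repeated small updates.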