The single-layer perceptron is the simplest neural network: it has no hidden layers and maps inputs directly to an output. It is useful for illustrating how a machine learns a linear decision boundary. However, a single-layer perceptron can only separate classes with a straight line (or, in higher dimensions, a hyperplane), so it cannot learn patterns that are not linearly separable, such as XOR.
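As a sketch of this idea, here is a minimal single-layer perceptron trained with the classic perceptron learning rule on the AND gate, which is linearly separable (the function and variable names here are illustrative, not from any particular library):

```python
def step(z):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights toward each misclassified target."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            pred = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron converges on it.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in and_gate])  # [0, 0, 0, 1]
```

Running the same training loop on XOR never converges, because no single line can separate XOR's positive and negative examples.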
A multilayer perceptron (MLP) is a more complex architecture with one or more hidden layers of perceptrons. Each hidden layer applies a further nonlinear transformation to the inputs, so the number of layers determines how complex the decision boundaries the network can learn. MLPs are more useful in practice because real-world data, like the information the brain processes, is rarely linearly separable.
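To make the benefit of a hidden layer concrete, the sketch below computes XOR, a function no single-layer perceptron can represent. The weights are set by hand for illustration rather than learned:

```python
def step(z):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def mlp_xor(x1, x2):
    """Two-layer perceptron computing XOR via two hidden units."""
    h1 = step(x1 + x2 - 0.5)   # hidden unit acting as OR
    h2 = step(x1 + x2 - 1.5)   # hidden unit acting as AND
    return step(h1 - h2 - 0.5) # output: OR and not AND, i.e. XOR

print([mlp_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The hidden units carve the input space into two half-planes, and the output unit combines them; this composition of linear boundaries is exactly what the single-layer model lacks.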
Deep artificial neural networks are usually multilayer perceptrons with more than one hidden layer.