Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised, or unsupervised.
This is the modern descendant of the classic neural network architecture: input is passed through a series of fully connected layers into one or more output nodes. This model is great for tabular (CSV) datasets such as the popular Pima Indians diabetes dataset, the iris flower dataset, etc. These models are built around two basic statistical tasks, regression and classification, in which we are trying to predict either a numeric or a categorical output. A numeric output (regression) maps to a real-numbered value such as '82.4', whereas a categorical output (classification) maps to a discrete label drawn from a set of values, such as 'red'. If you are looking for more datasets to practice on, check out Kaggle.
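To make the idea concrete, here is a minimal sketch of a feed-forward classifier on the iris flower dataset mentioned above. This uses scikit-learn's `MLPClassifier` (an assumption on my part, chosen because the dataset ships with the library; the same network could just as easily be built in Keras):

```python
# Minimal sketch: a feed-forward (multilayer perceptron) classifier
# on the iris dataset. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Scale inputs: neural nets train much better on standardized features
scaler = StandardScaler().fit(X_train)

# Two hidden layers feeding into a categorical (classification) output
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

acc = clf.score(scaler.transform(X_test), y_test)
print(f"test accuracy: {acc:.2f}")
```

For a regression task (a real-numbered output like '82.4'), you would swap in `MLPRegressor` with the same layer structure; only the output layer and loss change.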
This is the cutting-edge architecture for computer vision, and there are countless ways to apply it. I recommend starting with the MNIST or CIFAR-10 tutorials and then moving on to binary classification models, since they are easier to understand. You can plug this code here into a Jupyter notebook and get your first binary image-recognition model running. This example is useful because it shows how to import your own custom dataset, one of the biggest challenges when moving from tutorials to your own projects.
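Before diving into a full MNIST model, it helps to see what the core building blocks of a CNN actually compute. The sketch below implements a single convolution, ReLU, and max-pool pass in plain NumPy; the edge-detector kernel and the toy 6x6 "image" are illustrative assumptions, not part of any particular framework:

```python
# Illustrative sketch of one CNN layer pass (convolution -> ReLU -> max-pool)
# written in plain NumPy so each step is visible.
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D cross-correlation, as computed by a CNN conv layer
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    # Non-overlapping max pooling: keep the strongest activation per block
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 6x6 "image" with a vertical edge down the middle
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# A hand-picked vertical-edge-detector kernel (in a real CNN these
# weights are learned from data, not chosen by hand)
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

feat = np.maximum(conv2d(img, kernel), 0.0)  # convolution + ReLU
pooled = max_pool(feat)
print(pooled)  # activations peak where the edge is
```

A trained CNN stacks many of these conv/pool passes, with the kernel weights learned by gradient descent, before a small feed-forward head produces the final classification.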