The field of natural language processing (NLP) is one of the most important and useful application areas of artificial intelligence. NLP is undergoing rapid evolution as new methods and toolsets converge with an ever-expanding availability of data.
In this course, you will explore the fundamental concepts of NLP and its role in current and emerging technologies. You will gain a thorough understanding of modern neural network algorithms for processing linguistic information. By mastering these cutting-edge approaches, you will be able to move from word representation and syntactic processing to designing and implementing complex deep learning models for question answering, machine translation, and other language understanding tasks.
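As a small, self-contained taste of what "word representation" means in practice, here is a minimal sketch (illustrative only, not course code; the words and vector values below are made up) of comparing words by the cosine similarity of their vectors:

```python
import numpy as np

# Toy 4-dimensional word vectors (made-up numbers for illustration only;
# real word vectors are learned from large corpora, e.g. with word2vec or GloVe).
vectors = {
    "king":  np.array([0.8, 0.6, 0.1, 0.2]),
    "queen": np.array([0.7, 0.7, 0.1, 0.3]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: values near 1.0 mean similar direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low: unrelated words
```

The course's early lectures cover how such vectors are learned from data rather than hand-specified as in this toy example.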
What you will learn
- Computational properties of natural languages
- Neural network models for language understanding tasks
- Word vectors, syntactic processing, and semantic processing
- Coreference, question answering, and machine translation
Instructors
Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
Abigail See
Ph.D. Candidate, Computer Science
Head TA, CS224N: Natural Language Processing with Deep Learning
Lecture videos (Stanford CS224N: NLP with Deep Learning, Winter 2019)
- Lecture 1 – Introduction and Word Vectors
- Lecture 2 – Word Vectors and Word Senses
- Lecture 3 – Neural Networks
- Lecture 5 – Dependency Parsing
- Lecture 6 – Language Models and RNNs
- Lecture 7 – Vanishing Gradients, Fancy RNNs
- Lecture 8 – Translation, Seq2Seq, Attention
- Lecture 9 – Practical Tips for Projects
- Lecture 10 – Question Answering
- Lecture 11 – Convolutional Networks for NLP
- Lecture 12 – Subword Models
- Lecture 13 – Contextual Word Embeddings
- Lecture 14 – Transformers and Self-Attention
- Lecture 15 – Natural Language Generation
- Lecture 16 – Coreference Resolution
- Lecture 17 – Multitask Learning
- Lecture 18 – Constituency Parsing, TreeRNNs
- Lecture 19 – Bias in AI
- Lecture 20 – Future of NLP + Deep Learning
Source: https://online.stanford.edu/courses/xcs224n-natural-language-processing-deep-learning