Processing and understanding human language has been a long-standing goal in artificial intelligence, with the ultimate aim of making human-computer interaction more natural. Research in natural language processing spans many sub-tasks, some of which are described below.
Automatic Summarization
The aim of an automatic summarizer is to produce a coherent summary of a complete document. Summarizers fall into two types: extractive, which select existing sentences from the source text, and abstractive, which generate new sentences.
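A minimal extractive summarizer can be sketched by scoring each sentence by the frequency of its words and keeping the highest-scoring sentences. This is a toy illustration only (the example text is made up), not a production approach:

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Toy extractive summarizer: score sentences by word frequency,
    return the n highest-scoring sentences in document order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Rank sentence indices by total word-frequency score (highest first).
    ranked = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w] for w in
                                       re.findall(r'\w+', sentences[i].lower())))
    keep = sorted(ranked[:n])  # restore original document order
    return ' '.join(sentences[i] for i in keep)

text = ("The cat sat on the mat. Dogs bark loudly. "
        "The cat chased the mouse on the mat.")
print(summarize(text, n=1))  # → The cat chased the mouse on the mat.
```

Real extractive systems add sentence-position features, redundancy removal, and stop-word filtering; abstractive systems generate wording not present in the source.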
Co-reference Resolution
Co-reference resolution determines which mentions in a chunk of text refer to the same entity.
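As a deliberately crude sketch of the idea (the heuristic and example sentence are invented for illustration), one can link each pronoun to the nearest preceding capitalized word:

```python
import re

PRONOUNS = {"he", "she", "it", "him", "her", "his", "hers", "its"}

def resolve(text):
    """Toy coreference: link each pronoun to the nearest preceding
    capitalized word (a crude stand-in for a proper-noun mention)."""
    tokens = re.findall(r"\w+", text)
    links, last_name = {}, None
    for i, tok in enumerate(tokens):
        if tok.lower() in PRONOUNS and last_name:
            links[i] = last_name          # pronoun index -> antecedent
        elif tok[0].isupper() and i > 0:  # crudely skip sentence-initial capitals
            last_name = tok
    return links, tokens

links, tokens = resolve("Mary called John because he missed the meeting.")
print(links)  # maps the index of "he" to "John"
```

Actual coreference systems use agreement features (gender, number), syntax, and learned mention rankers rather than simple recency.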
Machine Translation
The goal of machine translation is to automatically translate text from one human language into another.
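The weakest conceivable form of this is word-for-word dictionary lookup, shown below purely to fix ideas (the dictionary entries are made up); real systems model context, word order, and morphology:

```python
# Toy English-to-Spanish dictionary for illustration only.
EN_TO_ES = {"the": "el", "cat": "gato", "sleeps": "duerme"}

def translate(sentence):
    """Word-for-word lookup; unknown words pass through unchanged."""
    return " ".join(EN_TO_ES.get(w, w) for w in sentence.lower().split())

print(translate("The cat sleeps"))  # → el gato duerme
```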
Named Entity Recognition (NER)
Given a stream of text, NER determines which items in the text map to proper names, such as people, places or organizations.
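One simple baseline is gazetteer lookup: matching known names from a fixed list against the text. The names and labels below are invented for illustration; modern NER is done with statistical sequence models rather than lists:

```python
# Toy gazetteer: known proper names and their entity types.
GAZETTEER = {
    "Barack Obama": "PERSON",
    "Paris": "LOCATION",
    "Microsoft": "ORGANIZATION",
}

def tag_entities(text):
    """Toy NER: find gazetteer names in the text, longest names first
    so that multi-word names are preferred over their sub-strings."""
    found = []
    for name in sorted(GAZETTEER, key=len, reverse=True):
        start = text.find(name)
        if start != -1:
            found.append((name, GAZETTEER[name], start))
    return sorted(found, key=lambda t: t[2])  # order of appearance

print(tag_entities("Barack Obama visited Microsoft offices in Paris."))
```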
Natural Language Generation
The aim of NLG is to convert information from computer databases or semantic web to human readable language.
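In its simplest form this is template filling: a database record is rendered into a fixed sentence pattern. The record fields and template below are invented; real NLG systems also plan content selection and wording:

```python
def generate(record):
    """Toy NLG: render a database record as an English sentence
    via a fixed template."""
    return (f"{record['city']} will be {record['condition']} today, "
            f"with a high of {record['high']} degrees.")

record = {"city": "Berlin", "condition": "sunny", "high": 24}
print(generate(record))
# → Berlin will be sunny today, with a high of 24 degrees.
```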
Natural Language Understanding
NLU deals with converting chunks of text into more formal representations such as first-order logic structures that are easier for computer programs to manipulate.
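For a flavor of what such a conversion looks like, the sketch below maps a simple subject-verb-object sentence to a predicate-argument form resembling first-order logic. It handles only this one fixed sentence shape, which is assumed for illustration:

```python
def to_logic(sentence):
    """Toy NLU: map a 'Subject Verb Object' sentence to a
    first-order-logic-style predicate, e.g. loves(john, mary)."""
    subj, verb, obj = sentence.rstrip(".").split()
    return f"{verb.lower()}({subj.lower()}, {obj.lower()})"

print(to_logic("John loves Mary."))  # → loves(john, mary)
```

Genuine NLU requires full syntactic parsing and semantic composition; this sketch only shows the target representation.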
Part-of-Speech (POS) Tagging
Given a sentence, POS tagging determines the part of speech of each word. The resulting tags are useful as input to many other tasks.
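A dictionary-lookup tagger with a default tag is the simplest possible baseline; the tiny lexicon below is made up, and real taggers use statistical models to resolve words that can take several tags:

```python
# Toy lexicon mapping words to their (single) part-of-speech tag.
LEXICON = {"the": "DET", "a": "DET", "dog": "NOUN",
           "barks": "VERB", "loudly": "ADV"}

def pos_tag(sentence):
    """Toy POS tagger: look each word up in the lexicon,
    defaulting to NOUN for unknown words."""
    return [(w, LEXICON.get(w.lower(), "NOUN"))
            for w in sentence.rstrip(".").split()]

print(pos_tag("The dog barks loudly."))
```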
Question Answering
Question answering deals with answering questions posed in natural language. Many systems are available today, such as Google Now from Google, Cortana from Microsoft, and Siri from Apple.
Relationship Extraction
The aim of relationship extraction is to find relationships between the entities mentioned in a given text.
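A classic starting point is pattern matching: a lexical pattern such as "X works for Y" is turned into a relation triple. The pattern, names, and relation label below are invented for illustration; learned extractors generalize far beyond fixed patterns:

```python
import re

def extract_relations(text):
    """Toy relation extraction: match the pattern 'X works for Y'
    and emit (X, works_for, Y) triples."""
    pattern = re.compile(r"([A-Z]\w+) works for ([A-Z]\w+)")
    return [(a, "works_for", b) for a, b in pattern.findall(text)]

print(extract_relations("Alice works for Acme. Bob works for Initech."))
```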
Word Sense Disambiguation (WSD)
Many words have more than one meaning, and WSD aims to identify which meaning is intended in a particular context.
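A well-known baseline is the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the sentence containing the ambiguous word. The sense inventory and glosses below are a made-up miniature:

```python
# Toy sense inventory: each sense of "bank" with a short gloss.
SENSES = {
    "bank": {
        "financial": "an institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water such as a river",
    }
}

def lesk(word, context):
    """Simplified Lesk: pick the sense whose gloss shares the most
    words with the sentence the ambiguous word appears in."""
    context_words = set(context.lower().split())
    def overlap(gloss):
        return len(context_words & set(gloss.split()))
    return max(SENSES[word], key=lambda s: overlap(SENSES[word][s]))

print(lesk("bank", "she sat on the bank of the river watching the water"))
# → river
```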
Information Retrieval (IR)
IR deals with sorting, searching and retrieving information and uses many techniques from NLP.
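The core retrieval step can be sketched as ranking documents by how often they contain the query terms. The document collection below is made up, and real IR systems use inverted indexes and weighting schemes such as TF-IDF rather than a linear scan:

```python
from collections import Counter

# Toy document collection for illustration.
DOCS = {
    "d1": "the quick brown fox jumps over the lazy dog",
    "d2": "machine translation converts one language into another",
    "d3": "information retrieval ranks documents against a query",
}

def search(query):
    """Toy IR: return the id of the document containing the most
    occurrences of the query terms."""
    terms = query.lower().split()
    scores = {doc_id: sum(Counter(text.split())[t] for t in terms)
              for doc_id, text in DOCS.items()}
    return max(scores, key=scores.get)

print(search("ranking documents for a query"))  # → d3
```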
Information Extraction (IE)
IE is concerned, in general, with extracting semantic information from text, and it may serve as a sub-task within several of the tasks mentioned above.