Text mining methods are used to automatically extract structured information from large amounts of text. This workshop provides a first, practical introduction to the topic. Together we will analyze the abstracts of scientific articles.
As our tool we will use the Python library Natural Language Toolkit (NLTK) to tokenize the texts, remove stop words, and finally generate visualizations of the words that are characteristic of these abstracts.
As our development environment we will use Jupyter Notebook, an open-source tool popular in the data science field, to run our code and display its results.
- How do I use a Jupyter notebook to run Python code while documenting my workflow in a meaningful way?
- Where can I find suitable scientific text material that I can analyze automatically?
- How do I extract the contents of a specific column from a csv file for subsequent analysis?
- How do I use NLTK to prepare texts for a text mining analysis?
- How do I determine word frequencies with NLTK and then visualize them as a diagram or word cloud?
You will get answers to these questions in the workshop and can apply your new knowledge directly to practical examples.
After the workshop, you can reuse the Jupyter notebook you created to repeat the analyses on your own text documents.
Anaconda serves as the platform for managing the required Python libraries nltk, numpy, matplotlib, and wordcloud, as well as the Jupyter Notebook software. Instructions on how to install these packages in Anaconda will be provided in advance of the workshop.
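As a rough preview of those instructions, the setup can be done with conda from the command line. The environment name `textmining` is an arbitrary example.

```shell
# Create a fresh environment with all workshop packages (example name: textmining)
conda create -n textmining -c conda-forge python nltk numpy matplotlib wordcloud jupyter
conda activate textmining
jupyter notebook   # starts Jupyter Notebook in your browser
```

Using a dedicated environment keeps the workshop packages separate from other Python projects on your machine.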