The AISOP recipe

The AISOP webapp is a service built as the result of several training and configuration steps.
This recipe explains how to extract the content fragments, annotate them, and train a model on them. This lets us create a pipeline and a seminar with which we can analyse portfolios.

Basic Terms

The context of the AISOP-web-app usage is that of a course at a learning institution, which typically has a fixed set of students and fixed contents. A course can contain multiple sub-courses or modules.

  • AISOP Web-app: The Node.js server that interfaces with the portfolio-composing system.
  • Portfolio: The content written by a student to represent his or her progress, learning and knowledge in textual and graphical form. It is generally expressed in HTML and can be embedded in various web-pages.
  • Course-contents: The set of slides, their annotations, the videos and handouts that are normally read by students and teachers.
  • Analysis: The set of programmes that recognize and measure the contents of a portfolio. Often also the name of the resulting interactive presentation (which can feature summaries or enriched portfolio views).
  • Composition Platform: A space where the portfolio is written, normally a web-space. In AISOP we have focussed on the classical e-portfolio composition platform Mahara (a PHP server).

1) Data Preparation

1.1: Make a Concept Map

Employing tools such as CMapTools, create a graphical concept map that represents the topics of the course. This concept map can be made familiar to the teachers and learners of the course as a way to show the paths through the content.

From the concept map, extract a .cxl file which carries the same information and will be presented on the web-page.
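
To verify what the .cxl file contains, its concept labels can be listed with a few lines of script. This is a minimal sketch, assuming the standard CMapTools CXL namespace; the file name course-map.cxl is a placeholder:

    # list_concepts.py -- print all concept labels found in a CXL file
    # Minimal sketch; assumes the standard CMapTools CXL namespace.
    import xml.etree.ElementTree as ET

    CXL_NS = "http://cmap.ihmc.us/xml/cmap/"
    tree = ET.parse("course-map.cxl")  # placeholder file name
    for concept in tree.getroot().iter("{%s}concept" % CXL_NS):
        print(concept.get("label"))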

From the concept map, also extract a hierarchy of topics, assuming there are more than (approximately) 10 topics in the map. The hierarchy should be a text file with one label per line, a label being indented to the right of its parent to express a child relation, as in the following example:

Algorithmization
    Flow Charts
    Programming
        Programming Paradigm
            Imperative Programming
Data-Structure
    ....
Operating System
    ....

We'll name this file labels-all-depths.txt. From this text file, extract a second text file containing only the top-level labels (in the extract above, only Algorithmization, Data-Structure and Operating System), named labels-depth1.txt.
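
Extracting the top labels can be done by hand or with a few lines of Python. This is a minimal sketch, assuming that the top-level labels are exactly the unindented lines:

    # make_depth1.py -- keep only the unindented (top-level) labels
    with open("labels-all-depths.txt", encoding="utf-8") as src, \
         open("labels-depth1.txt", "w", encoding="utf-8") as dst:
        for line in src:
            # top-level labels start at column 0; children are indented
            if line.strip() and not line[0].isspace():
                dst.write(line)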

1.2: Extract Text of the Course Content

In order for the topic recognition to work, a model needs to be trained that will recognize the words used by the students to denote one part or another of the course. This makes it possible to create relations between the concepts of the course and the paragraphs of the portfolio, and to offer these in the interactive dashboards. The training is the result of annotating fragments of text which first need to be extracted from their media, be they PDF files, PowerPoint slides, scanned texts or student works. These texts will not be shared, so that even protected material or texts carrying personal information can be used.

Practically:

  • Make all documents accessible for you to open and browse (e.g. download them or obtain the authorized accesses).
  • Install and launch the clipboard extractor, which will gather the fragments in a text file (a minimal sketch of such an extractor is shown after this list).
  • Go through all contents and copy each fragment. A fragment is expected to be about the size of a paragraph, so copy one paragraph at a time.
  • The extractor should then have copied all the fragments into one file, which we shall call extraction.json.
  • The minimum amount of content to extract is the complete set of slides and their comments. We recommend using past students' e-portfolios too. We have had rather good experience with about 1000 fragments for a course.
  • If interrupted, the process may create several JSON files. You can combine them using the merge-tool.
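
For illustration only, a minimal sketch of such a clipboard extractor could look as follows. It is not the actual AISOP tool: it polls the clipboard with the pyperclip library (an assumption; any clipboard library would do) and appends each newly copied fragment as one JSON object per line:

    # clip_extract.py -- append every newly copied fragment to extraction.json
    # Minimal sketch only; the actual AISOP extractor may differ.
    import json
    import time
    import pyperclip  # assumption: pip install pyperclip

    last = ""
    with open("extraction.json", "a", encoding="utf-8") as out:
        while True:
            text = pyperclip.paste()
            if text and text != last:  # a new fragment was copied
                out.write(json.dumps({"text": text}) + "\n")
                out.flush()
                last = text
            time.sleep(0.5)  # poll twice per second

Since each line is a self-contained JSON object, several such files can also be combined by simple concatenation, should the merge-tool be unavailable.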

1.3: Annotate Text Fragments

It is time to endow the fragments with topics so that we can later recognize the topics of students' paragraphs. In AISOP, we have used the (commercial) Prodigy tool for this task, in two steps which both iterate through all fragments to give them topics.

The first step assigns the top-level labels. This is the simple "text classifier" recipe of Prodigy, which we can invoke with the following command:

    prodigy textcat.manual the-course-name-l1 ./fragments.jsonl --label labels-depth1.txt

This offers a web-interface on which each fragment is annotated with a (top-level) label. The web-interface can be left running for several days.
Then extract the annotations into a file:

    prodigy db-out the-course-name-l1 > the-course-name-dbout.jsonl
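
For orientation, both the input and the output of these commands are JSONL files with one JSON object per line. The two lines below are invented examples of what a fragment before and after annotation could look like (the exact fields vary with the Prodigy version and recipe):

    {"text": "A flow chart represents the steps of an algorithm graphically."}
    {"text": "A flow chart represents the steps of an algorithm graphically.", "accept": ["Algorithmization"], "answer": "accept"}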

The second step is the hierarchical annotation custom recipe (link to become public soon): the same fragments are now annotated with the top-level annotation and all of its children, e.g. using the command:

    python -m prodigy subcat_annotate_with_top2 the-course-name-l2 the-course-name-dbout.jsonl labels-all-depths.txt -F ./subcat_annotate_with_top2.py

The resulting dataset can be extracted out of Prodigy using the db-out recipe, e.g. prodigy db-out the-course-name-l2 the-course-name-l2-dbout, or it can be converted to a spaCy dataset for training, e.g. using the command xxxxx (see here).
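
Should the conversion need to be done by hand, a minimal sketch with spaCy's DocBin could look like this; it assumes each db-out line carries a "text" and an "accept" list as in the example above, and that the course language is German (an assumption, adjust as needed):

    # to_spacy.py -- convert a Prodigy db-out JSONL file into a .spacy file
    # Minimal sketch; assumes "text" and "accept" fields on each line.
    import json
    import spacy
    from spacy.tokens import DocBin

    nlp = spacy.blank("de")  # assumption: course language
    labels = [l.strip() for l in open("labels-all-depths.txt", encoding="utf-8") if l.strip()]
    db = DocBin()
    for line in open("the-course-name-l2-dbout.jsonl", encoding="utf-8"):
        example = json.loads(line)
        if example.get("answer") != "accept":
            continue  # keep only accepted annotations
        doc = nlp.make_doc(example["text"])
        accepted = set(example.get("accept", []))
        doc.cats = {label: float(label in accepted) for label in labels}
        db.add(doc)
    db.to_disk("train.spacy")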


2) Deployment

2.1 Train a Recognition Model

See here.
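
As an illustration (the linked page remains authoritative), a typical spaCy invocation for training a text categorizer looks as follows; this is an assumption about the linked instructions, not a confirmed AISOP command, and the --lang value and file names are placeholders:

    python -m spacy init config config.cfg --lang de --pipeline textcat_multilabel
    python -m spacy train config.cfg --output ./model --paths.train train.spacy --paths.dev dev.spacy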

2.2 Create a Pipeline

... write down the configuration JSON of the pipeline; get inspired by pipeline-medieninderlehre.json.
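
As a purely hypothetical illustration of what such a configuration could contain (every field name below is invented; pipeline-medieninderlehre.json is the authoritative reference):

    {
      "name": "pipeline-my-course",
      "model": "./model/model-best",
      "labels": "labels-all-depths.txt",
      "conceptMap": "course-map.cxl"
    }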

2.3 Create a Seminar and Import Content

...

Create a seminar with the web-interface and associate the appropriate pipeline with it.

2.4 Interface with the Composition Platform

See the Mahara authorization configuration.


3) Usage

3.1 Invite Users

...

3.2 Verify Imports and Analyses

...

3.3 Observe Usage and Reflect on Quality

...

3.4 Gather Enhancements

... on the web-app, on the creation process, and on the course
