<
From version < 1.5 >
edited by AISOP Admin
on 2025/01/14 20:31
To version < 1.7 >
edited by AISOP Admin
on 2025/01/14 21:15
>
Change comment: There is no comment for this version

Page properties
Content
... ... @@ -22,7 +22,6 @@
22 22  
23 23  ----
24 24  
25 -(% class="wikigeneratedid" %)
26 26  == 1) Data Preparation ==
27 27  
28 28  === 1.1: Make a Concept Map ===
... ... @@ -37,8 +37,14 @@
37 37  > Flow Charts
38 38  > Programming
39 39  > Programming Paradigm
40 -> Imperative Programming....
39 +> Imperative Programming
40 +>Data-Structure
41 +> ....
42 +>Operating System
43 +> ....
41 41  
45 +We'll name this file labels-all-depths.txt. From this text file, extract a second file that contains only the top-level labels (in the extract above: Algorithmization, Data-Structure and Operating System) and name it labels-depth1.txt.
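As an illustration, the filtering could be done with a few lines of Python. This is only a sketch under an assumption not confirmed by the page: that a top-level label is a line whose leading ##>## is immediately followed by a non-space character (as with ##>Data-Structure## above), while deeper labels have a space after the ##>##.

```python
# Hypothetical sketch: derive labels-depth1.txt from labels-all-depths.txt.
# Assumption: top-level labels look like ">Data-Structure" (no space after
# ">"), while deeper labels look like "> Flow Charts".
def top_level_labels(lines):
    labels = []
    for raw in lines:
        line = raw.rstrip("\n")
        if line.startswith(">") and not line[1:].startswith(" "):
            label = line.lstrip("> ").strip()
            if label and label != "....":  # skip elision markers
                labels.append(label)
    return labels

# Usage sketch (file names from the text above):
#   with open("labels-all-depths.txt", encoding="utf-8") as f:
#       tops = top_level_labels(f)
#   with open("labels-depth1.txt", "w", encoding="utf-8") as f:
#       f.write("\n".join(tops) + "\n")
```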
46 +
42 42  === 1.2: Extract Text of the Course Content ===
43 43  
44 44  In order for topic recognition to work, a model needs to be trained to recognize the words students use to denote one part of the course or another. This makes it possible to create relations between the concepts of the course and the paragraphs of the portfolio, and to offer these relations in the interactive dashboards. The training data is produced by annotating text fragments which first need to be extracted from their media, be they PDF files, PowerPoint slides, scanned texts or student works. These texts will not be shared, so even protected material or texts carrying personal information can be used.
... ... @@ -45,12 +45,25 @@
45 45  
46 46  Practically:
47 47  
48 -* Assemble the documents
53 +* Make all documents accessible for you to open and browse (e.g. download them or obtain the required access rights)
54 +* Install and launch the [[clipboard extractor>>https://gitlab.com/aisop/aisop-hacking/-/tree/main/aisop-clipboard-extractor?ref_type=heads]], which will gather the fragments into a text file
55 +* Go through all the content and copy each fragment. A fragment is expected to be about the size of a paragraph, so copy one paragraph at a time.
56 +* The extractor should have collected all the fragments into one file, which we shall call extraction.json.
57 +* At a minimum, extract the complete set of slides and their comments. We also recommend using past students' e-portfolios. We have had rather good experience with about 1000 fragments per course.
58 +* If interrupted, the process may create several JSON files. You can combine them using the [[merge-tool>>https://gitlab.com/aisop/aisop-hacking/-/tree/main/merge-json-files]].
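The linked merge-tool handles the last step; for intuition, a minimal merge could look like the sketch below. It assumes (not confirmed by the page) that each extractor output file holds one JSON array of fragment objects.

```python
import glob
import json

def merge_json_lists(pattern):
    # Assumption: every file matching the pattern holds a JSON array,
    # e.g. the fragment lists produced by interrupted extractor runs.
    merged = []
    for path in sorted(glob.glob(pattern)):
        with open(path, encoding="utf-8") as f:
            merged.extend(json.load(f))
    return merged

def write_merged(pattern, out_path="extraction.json"):
    # Write the combined list back as one JSON file.
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(merge_json_lists(pattern), f, ensure_ascii=False, indent=2)
```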
49 49  
50 50  === 1.3: Annotate Text Fragments ===
51 51  
52 -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
62 +It is time to endow the fragments with topics so that students' paragraphs can later be recognized by topic. In AISOP, we have used the (commercial) [[Prodigy>>https://prodi.gy/]] tool for this task in two steps, both of which iterate through all fragments to assign topics.
53 53  
64 +**The first step: top-level labels.** This uses the simple [["text classifier" recipe>>https://prodi.gy/docs/recipes#textcat]] of Prodigy, invoked as: prodigy textcat.manual the-course-name-l1 ./fragments.jsonl ~-~-label labels-depth1.txt. This opens a web interface on which each fragment is annotated with a (top-level) label; the interface can be left running for several days.
65 +Then export the annotations into a file: prodigy db-out the-course-name-l1 > the-course-name-dbout.jsonl
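Prodigy's classification recipes read JSON Lines input: one object per line with a "text" field. If the extraction ended up as a JSON list (extraction.json above), a small conversion sketch might look as follows; the file names are taken from the commands above, and the shape of the fragment objects is an assumption.

```python
import json

def extraction_to_jsonl(fragments):
    # Assumption: each fragment is either a plain string or a dict
    # with a "text" field, as produced by the clipboard extractor.
    lines = []
    for frag in fragments:
        text = frag if isinstance(frag, str) else frag.get("text", "")
        if text:
            lines.append(json.dumps({"text": text}, ensure_ascii=False))
    return lines

def convert(in_path="extraction.json", out_path="fragments.jsonl"):
    with open(in_path, encoding="utf-8") as f:
        fragments = json.load(f)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(extraction_to_jsonl(fragments)) + "\n")
```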
66 +
67 +**The second step: hierarchical annotation** with a [[custom recipe>>https://gitlab.com/aisop/aisop-nlp/-/tree/main/hierarchical_annotation?ref_type=heads]] (link to become public soon). The same fragments are now annotated with the top-level label and all its children, e.g. using the command python -m prodigy subcat_annotate_with_top2 the-course-name-l2 the-course-name-dbout.jsonl labels-all-depths.txt -F ./subcat_annotate_with_top2.py .
68 +
69 +The resulting dataset can be exported from Prodigy using the db-out recipe, e.g. prodigy db-out the-course-name-l2 the-course-name-l2-dbout, or converted to a spaCy dataset for training, e.g. using the command xxxxx (see [[here>>https://gitlab.com/aisop/aisop-nlp/-/tree/main/it3/fundamental-principles]])
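To sanity-check the exported annotations before training, one can tally how many accepted fragments each label received. This sketch assumes the usual Prodigy db-out shape, where each line carries an "answer" field and, for classification tasks, an "accept" list of chosen labels; it is an illustration, not part of the AISOP tooling.

```python
import json
from collections import Counter

def label_counts(jsonl_lines):
    # Assumption: db-out lines look like
    # {"text": "...", "accept": ["Algorithmization"], "answer": "accept"}.
    counts = Counter()
    for line in jsonl_lines:
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("answer") == "accept":
            counts.update(record.get("accept", []))
    return counts
```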
70 +
71 +
54 54  ----
55 55  
56 56  == 2) Deployment ==
... ... @@ -57,19 +57,21 @@
57 57  
58 58  === 2.1 Train a Recognition Model ===
59 59  
60 -...
78 +See [[here>>https://gitlab.com/aisop/aisop-nlp/-/tree/main/it3/fundamental-principles]].
61 61  
62 62  === 2.2 Create a Pipeline ===
63 63  
64 -...
82 +... write down the pipeline's JSON configuration; for inspiration, see [[pipeline-medieninderlehre.json>>https://gitlab.com/aisop/aisop-webapp/-/blob/main/config/couchdb/pipeline-medieninderlehre.json?ref_type=heads]]
65 65  
66 66  === 2.3 Create a Seminar and Import Content ===
67 67  
68 68  ...
69 69  
88 +Create a seminar in the web interface and associate the appropriate pipeline with it.
89 +
70 70  === 2.4 Interface with the composition platform ===
71 71  
72 -...
92 +See the Mahara authorization configuration.
73 73  
74 74  ----
75 75  
... ... @@ -90,4 +90,3 @@
90 90  === 3.4 Gather Enhancements ===
91 91  
92 92  ... on the web-app, on the creation process, and on the course
93 -
