Hi All and @James
We have a lot of essay questions (one-sentence questions that fill in a blank space) and fill-in-the-blank quiz questions.
Now we need to convert them to a mix-and-match type of quiz question.
Is there a tool that can convert matching questions to QTI format?
Any ideas on how to do it, other than manually? 🙂
@Nva2023, I've not had enough sleep recently to make sure I follow what you're trying to do.
You say mix-and-match, which is not a term I'm familiar with, but then you say matching, which I do recognize. That's where you have a list of definitions and terms and you have to pick the correct answer from a dropdown list. I'm struggling to see how matching would be the appropriate question type for any of this, so perhaps you mean multiple choice and it's just called matching where you are?
You write essay, but you're describing a fill-in-the-blank question. I'll address both.
Essay questions have no correct answer and have to be manually graded. There is nothing in an essay question that could serve as the correct response. I suppose someone could use generative AI to come up with a one-word response, but I wouldn't trust it and would want to double-check manually. Essay questions cannot be automatically converted to any auto-graded question type (matching or multiple choice) without human interaction.
Fill-in-the-blank questions have just one question and potentially more than one acceptable response, but most importantly, no incorrect responses. If you were to convert one to a matching question, there would be only one prompt on the left and multiple values to choose from, only one of which can be marked correct. But all of the responses you would bring over would be acceptable, so a student could pick an answer that is acceptable yet not counted as correct. Even if the blank had only one correct answer, converting it to matching would still require additional work to add distractors; otherwise the only thing the student could pick would be the correct answer, guaranteeing 100% on every question without knowing anything. And there is still the question of why you'd have a matching question with only one pair. Notice that I said pair. You have the list of correct responses, but what do you put in the other half of the pair? You've got nothing but a blank in your question.
I don't see how you hope to automatically convert a fill in the blank question to matching or even multiple choice without manual action. You do not have the information in the question to do so. There are no wrong answers to convert and for matching, there is no other half of the pair.
There are also fill-in-multiple-blanks questions. You didn't mention that type. It would be better suited for matching because there is at least more than one pair, but you still only have half of each pair and no wrong answers.
Since I don't see a way to automatically convert questions completely from one type to another without additional work to get a usable question, let me approach it from another direction. Let's say you want to start the conversion process so that you can manually edit the questions afterward. Then all you're asking is to turn an essay question into some other question type (maybe multiple choice, but probably not matching).
If you are using Classic Quizzes, then you can do that now within Canvas (just not automatically). Edit the question and change the type. This will lose the question's responses, if any, because the types are incompatible. Since you're going to have to edit the question to make it usable anyway, that one extra step of changing the type isn't unreasonable. I make it a point not to use New Quizzes, but I think I read somewhere that you cannot change the question type in New Quizzes.
The Canvas API could be used to go through and create new quiz questions that make that switch of question types. In some cases, you could preserve the answers. For example, take the response from a fill-in-the-blank question and make it one of the choices in a multiple choice question (see the sketch below). You would need to write a program to do it, and you would still need to intervene manually to finish the question. Both Classic and New Quizzes have API support for creating quiz items, although documentation and Community support is better for Classic Quizzes than for New Quizzes.
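A rough sketch of what that program might look like, in Python, assuming Classic Quizzes (where fill in the blank is the short_answer_question type); the hostname and token are placeholders, and the result still needs distractors added by hand:

import requests

BASE = "https://canvas.example.edu/api/v1"        # placeholder instance
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # placeholder token

def fitb_to_multiple_choice(course_id, quiz_id, question):
    # Carry each accepted blank response over as a correct choice.
    # Every choice is correct at this point, so the question is not
    # usable until you manually add incorrect choices (distractors).
    choices = [{"answer_text": a["text"], "answer_weight": 100}
               for a in question["answers"]]
    payload = {"question": {
        "question_name": question["question_name"],
        "question_text": question["question_text"],
        "question_type": "multiple_choice_question",
        "points_possible": question["points_possible"],
        "answers": choices,
    }}
    url = f"{BASE}/courses/{course_id}/quizzes/{quiz_id}/questions"
    return requests.post(url, headers=HEADERS, json=payload).json()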
If your questions aren't in Canvas but you have a QTI file, you could bring them into Canvas and then make the changes. The only tool I've used for working with QTI is Respondus. It's a commercial (not free) application that runs only on Windows machines. It isn't programmable for doing things in bulk, and you said you had a lot of questions, so that may rule it out. You would need to set the Respondus personality to IMS QTI to import them. Respondus does support matching questions, and you can change the format of a question and it will retain the question wording. The problem comes from the responses, if any. For example, if I try to change a multiple choice question to matching, it removes all of the choices I have. I don't blame that behavior on Respondus, as the question types are fundamentally incompatible. Canvas does the same thing. That is going to be true for almost any conversion.
There are some alternatives to Respondus (a quick web search turns up a list). I haven't used any of them, which is why I limited my comments to Respondus, but I imagine they all suffer the same issue. There isn't a big ongoing demand for converting one type of question to another, so most tools focus on question creation. Converting one type to another requires manual intervention and isn't something they would want to do in bulk.
Hi @James
I am trying to find a free solution, or to build my own, to create matching questions from a text file.
For example, given this format, create a Canvas matching question:
Type: MT
Title: Match the correct description to the proper term.
4) Match the correct name to the discovery or theory.
a) Brain of the computer = CPU
b) Permanent storage = Hard Disk
c) Temporary storage = RAM
d) Input device = keyboard
Respondus works with this, but it costs money.
I am trying the API, sending this JSON, but it's not working:
{
  "question_name": "Matching Question",
  "question_text": "Match the correct name to the discovery or theory.",
  "question_type": "matching_question",
  "answers": [
    {
      "id": null,
      "answer_text": "Constantinople",
      "answer_weight": 100,
      "answer_comments": "Remember to check your spelling prior to submitting this answer.",
      "text_after_answers": " is the capital of Utah.",
      "answer_match_left": "Salt Lake City",
      "answer_match_right": "Utah",
      "matching_answer_incorrect_matches": "Nevada\nCalifornia\nWashington",
      "numerical_answer_type": null,
      "exact": 0,
      "margin": 0,
      "approximate": 0.0,
      "precision": 0,
      "start": 0,
      "end": 0,
      "blank_id": 0
    }
  ],
  "points_possible": 1
}
A manually created matching question looks like this:
[
  {
    "id": 2189072,
    "quiz_id": 144619,
    "quiz_group_id": null,
    "assessment_question_id": 4117336,
    "position": null,
    "question_name": "Question",
    "question_type": "matching_question",
    "question_text": "<p>Match key terms below</p>",
    "points_possible": 1.0,
    "correct_comments": "",
    "incorrect_comments": "",
    "neutral_comments": "",
    "correct_comments_html": "",
    "incorrect_comments_html": "",
    "neutral_comments_html": "",
    "answers": [
      {
        "id": 8032,
        "text": "Brain of the computer",
        "left": "Brain of the computer",
        "right": "CPU",
        "comments": "",
        "comments_html": "",
        "match_id": 9670
      },
      {
        "id": 8720,
        "text": "Temp Storage",
        "left": "Temp Storage",
        "right": "RAM",
        "comments": "",
        "comments_html": "",
        "match_id": 5045
      },
      {
        "id": 382,
        "text": "Permanent storage",
        "left": "Permanent storage",
        "right": "Hard disk",
        "comments": "",
        "comments_html": "",
        "match_id": 1671
      },
      {
        "id": 1183,
        "text": "Input device",
        "left": "Input device",
        "right": "Keyboard",
        "comments": "",
        "comments_html": "",
        "match_id": 7906
      }
    ],
    "variables": null,
    "formulas": null,
    "answer_tolerance": null,
    "formula_decimal_places": null,
    "matches": [
      {
        "text": "CPU",
        "match_id": 9670
      },
      {
        "text": "RAM",
        "match_id": 5045
      },
      {
        "text": "Hard disk",
        "match_id": 1671
      },
      {
        "text": "Keyboard",
        "match_id": 7906
      },
      {
        "text": "Output device",
        "match_id": 254
      }
    ],
    "matching_answer_incorrect_matches": "Output device"
  }
]
Thanks for clarifying what you are attempting to do. Like I said, I have only used Respondus, but there were some free alternatives to it listed when I did a search. Some of those looked like they would take your type of file (which is not a QTI file).
I can talk about the API, though.
You can tell there's a huge difference between what you're sending and what Canvas is creating. Canvas sends a lot more than is absolutely necessary.
It also looks like you didn't include all of what Canvas sent, since you're showing it wrapped in an array rather than an object. That makes me wonder whether you've included everything you're sending, too.
The other thing is that you're showing me the results of a GET from Canvas (or the results after a POST) rather than the payload of the POST itself. The fields are different, and you cannot take a GET (or what the POST returns) and blindly insert it into a new question. For example, comments_html in the answer section is called answer_comment_html in the POST, left is called answer_match_left, and so on.
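To make the renaming concrete, here is a minimal sketch in Python covering just the fields mentioned above (there may be others that need the same treatment):

# Rename answer fields from the GET shape to what the POST expects.
GET_TO_POST = {
    "left": "answer_match_left",
    "right": "answer_match_right",
    "comments_html": "answer_comment_html",
}

def answer_for_post(answer_from_get):
    # Drop everything the POST doesn't understand; rename the rest.
    return {GET_TO_POST[k]: v for k, v in answer_from_get.items()
            if k in GET_TO_POST}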
The most common error I see with this is that people forget to wrap the request in a question property.
You can see that when you look at the Create a single quiz question API endpoint documentation.
Try wrapping it like this (comments are not allowed in JSON, of course; I'm just adding one for explanation purposes):
{
  "question": {
    // put your existing object in here
  }
}
In fact, I took your request, wrapped it in the question property, and sent it to Canvas. It added a new question with one item. It did not add the distractors or some of your text, though.
The appendices that explain the format for each type of question are buried in the Quiz Submission Questions page. The documentation for Matching Questions (and perhaps others) isn't very helpful. It makes it sound like you would need to know the match IDs, but there's no way to know those when you are creating a question, only when getting or updating one.
The reason it didn't add the distractors or text is where you put them. You have them on a specific response (inside the answers array) rather than on the question. When I look at what Canvas itself sends (the payload of the POST made by the Canvas UI), matching_answer_incorrect_matches and text_after_answers go with the question, not the answers. I do not see text_after_answers used for the matching question type, though.
The overall question comments go in correct_comments_html, incorrect_comments_html, or neutral_comments_html, which fall under the question property (not the answer).
For item-specific responses, you need to use answer_comment or answer_comment_html within the answer property. Note that you misspelled it as answer_comments (plural). That said, answer_comment did not work for me; only answer_comment_html did.
answer_weight is not needed since you can only put in correct responses. If you don't include points_possible, then it sets the points to 0.
Here is an MWE (minimal working example) based on what you had. I changed the comments a little.
{
  "question": {
    "question_type": "matching_question",
    "question_text": "Match the correct name to the discovery or theory.",
    "question_name": "Matching Question",
    "matching_answer_incorrect_matches": "Nevada\nCalifornia\nWashington",
    "answers": [
      {
        "answer_match_left": "Salt Lake City",
        "answer_match_right": "Utah",
        "answer_comment_html": "<p>Salt Lake City is the capital of Utah.</p>"
      }
    ],
    "points_possible": 1
  }
}
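If you're sending that from a script rather than a REST client, the request looks something like this in Python (the hostname, course ID, quiz ID, and token are placeholders):

import requests

# Create a single quiz question; same payload as the MWE above.
url = "https://canvas.example.edu/api/v1/courses/123/quizzes/456/questions"
question = {"question": {
    "question_type": "matching_question",
    "question_text": "Match the correct name to the discovery or theory.",
    "question_name": "Matching Question",
    "matching_answer_incorrect_matches": "Nevada\nCalifornia\nWashington",
    "answers": [{
        "answer_match_left": "Salt Lake City",
        "answer_match_right": "Utah",
        "answer_comment_html": "<p>Salt Lake City is the capital of Utah.</p>",
    }],
    "points_possible": 1,
}}
resp = requests.post(url, headers={"Authorization": "Bearer YOUR_TOKEN"},
                     json=question)
resp.raise_for_status()  # raises if Canvas rejected the payload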
I'm going to expand the MWE in case other people stumble across this and ask what good a matching question is with just one pair; that's essentially a multiple choice question, except that multiple choice lets you add feedback to incorrect responses.
Here's what it would look like with another capital.
{
  "question": {
    "question_type": "matching_question",
    "question_text": "Match the capital with the state.",
    "question_name": "State Capitals",
    "matching_answer_incorrect_matches": "Nevada\nCalifornia\nWashington",
    "answers": [
      {
        "answer_match_left": "Salt Lake City",
        "answer_match_right": "Utah",
        "answer_comment_html": "<p>Salt Lake City is the capital of Utah.</p>"
      },
      {
        "answer_match_left": "Springfield",
        "answer_match_right": "Illinois",
        "answer_comment_html": "<p>Springfield is the capital of Illinois.</p>"
      }
    ],
    "points_possible": 1
  }
}
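And since your original goal was to generate these from your text file format, here is a rough parsing sketch in Python. It assumes the exact layout from your example (a Title: line, a numbered question line, and "a) left = right" pairs); your format has no line for distractors, so those would still need to be added:

import re

def parse_matching_block(block):
    # Turn one "Type: MT" block from the text file into the payload
    # shape that worked in the MWE above.
    title, question_text, answers = "", "", []
    for line in block.splitlines():
        line = line.strip()
        if (m := re.match(r"Title:\s*(.+)", line)):
            title = m.group(1)
        elif (m := re.match(r"\d+\)\s*(.+)", line)):
            question_text = m.group(1)
        elif (m := re.match(r"[a-z]\)\s*(.+?)\s*=\s*(.+)", line)):
            answers.append({"answer_match_left": m.group(1),
                            "answer_match_right": m.group(2)})
    return {"question": {
        "question_type": "matching_question",
        "question_name": title,
        "question_text": question_text,
        "answers": answers,
        "points_possible": 1,
    }}

The returned dict can go straight into the requests.post call shown after the first MWE.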
Thanks James, your MWE was very helpful. It works now :)
Hi James,
Thank you so much for your instructions on the JSON format. It works well for me now to generate questions with ChatGPT :). By the way, I am building my test bank library and would like to prepare all my question files in TXT format. However, I am running into an issue finding the TXT format for each question type (Matching, Multiple Blanks, Multiple Dropdowns, ...) that Canvas quizzes support. I searched for those TXT formats but could not find a reference. If you happen to have those quiz TXT formats, could you share some? Thank you so much.
There is an appendix describing the quiz answer formats at the bottom of the Quiz Submission Questions documentation.
You will likely find that information insufficient, but I do not have any extra information that I keep on file. What I do each time is go back to the browser's developer tools and create a question in Canvas and then see what it sends as a request.
Thanks, James, for your response. Though it is not a direct answer, it is a great idea to try. Your suggestion also opens up a new way for me to generate my quiz library. Brilliant!
Dear Nva,
I want to use a CSV (table) format for converting matching questions to QTI. Please tell me how to go about it.
Kanagaraj Easwaran