Hello,
We are trying to deter the sharing of exam questions. At first, we tried using a <div> to place a copyright notice behind the question, like this:
<p style="opacity: 0.2; color: black; position: fixed; top: 20%; width: 100%; left: 0; transform: translateY(-50%) rotate(-20deg); font-size: 20px;">© 2020, Rutgers University, All Rights Reserved. © 2020, Rutgers University, All Rights Reserved. © 2020, Rutgers University, All Rights Reserved © 2020, Rutgers University, All Rights Reserved</p>
In the RCE, it looks like this:
Once the question is saved, the RCE strips it out.
So then I went with an image....
Canvas does sanitize the HTML. Here is the Canvas HTML Editor Whitelist that specifies what is acceptable. None of what you're trying to do with rotations, opacity, or transforms is allowed; z-index is.
The work-around is to define the CSS in your global custom CSS file applied through the Theme Editor. Give it a class name such as rotated_watermark or copyright_notice_20 (for the 20°). Then, in the div from your first example, just add class="rotated_watermark" or class="copyright_notice_20".
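As a sketch of that workaround, the styles from the stripped inline example could move into the Theme Editor's custom CSS file under one of the class names suggested above (the class name itself is just a suggestion):

```css
/* Global custom CSS uploaded via the Theme Editor (name is illustrative). */
.copyright_notice_20 {
  opacity: 0.2;
  color: black;
  position: fixed;
  top: 20%;
  width: 100%;
  left: 0;
  transform: translateY(-50%) rotate(-20deg);
  font-size: 20px;
}
```

The paragraph in the RCE would then be just `<p class="copyright_notice_20">© 2020, Rutgers University, All Rights Reserved.</p>`, which has a better chance of surviving the sanitizer since the styling lives outside the question HTML.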
This would not be a user-specific solution, but it has a better chance of working in the mobile apps.
A JavaScript solution would require access to the Theme Editor as well. It can work in the browser, but I'm not so sure about the mobile apps. You will need JavaScript if you want to personalize things.
The ENV variable (technically window.ENV, but ENV will work) has an object called current_user.
ENV.current_user contains their Canvas ID (ENV.current_user.id), display name (ENV.current_user.display_name), and preferred pronouns (ENV.current_user.pronouns) if enabled. It also has some URLs that may not be as helpful.
I cannot tell from your post what you're wanting to customize in order to identify the person taking a screen shot. You mentioned fake question codes, but I don't see any of that. You should consider whether you want a student's Canvas ID attached to a question on the screen.
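As a minimal sketch of reading those fields: in the browser the object would be the global ENV that Canvas provides; here it is passed in as a parameter so the function can be exercised outside Canvas, and the sample ENV shape is made up.

```javascript
// Build a per-user label from a Canvas-style ENV object.
// In custom Theme Editor JS this would be window.ENV; a parameter is used
// here so the function can be tested outside the browser.
function userLabel(env) {
  const u = (env && env.current_user) || {};
  // id and display_name are the fields described above; fall back safely
  // if ENV.current_user is missing (e.g. anonymous pages).
  return [u.id || "unknown", u.display_name || ""].join(" ").trim();
}

// Example with a made-up ENV shape:
const fakeEnv = { current_user: { id: "12345", display_name: "Pat Example" } };
console.log(userLabel(fakeEnv)); // → "12345 Pat Example"
```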
Formula questions will not allow you to enter e^(1/3) as an answer, or even e^0.33333; they require a numeric value. For that particular question, that would allow someone to give a decimal approximation from a table without knowing the exact value.
When you are ready to switch to New Quizzes, you can put the variable into your question text in a formula question. You cannot do that with Classic Quizzes; you would have to create a question for each variation and then put them in a question group to mimic a formula question. I'm not ready to switch to New Quizzes either; I'm just throwing that out there.
I will add that the watermark makes the question hard to read. You may also want to look into hiding the watermark from screen readers. The MDN page on using the aria-hidden attribute says that aria-hidden="true" removes the element from the accessibility API, while role="presentation" and role="none" remove the semantic meaning but still expose the element to assistive technology. It sounds like aria-hidden="true" is what you want.
Hi @James
We can edit the theme editor.
After thinking this through a bit, this is what I am trying to accomplish:
Here is a sample question that is composed of a unique identifier and a class.
<div id="question_652003_question_text" class="question_text user_content enhanced">
<p>What is a cat's primary diet?</p></div>
The class is consistent for Classic Quizzes. I haven't tested in New Quizzes. We would want to add a <p> at the end of the question with some information.
Ideally, I would like to have something displayed as "Question: canvas_user_id - quiz_id - question_id - [random 4-6 digits]"
To the viewer of the question, it would look like one long string of numbers: the first is the canvas_user_id of the user viewing the quiz (i.e., the person taking the screenshot), the second is the quiz_id (from the URL), and the third is parsed from the question's unique id attribute. To top it off, I thought we could add a random set of digits at the end to really confuse things.
For a given user, the first number would always be the same for every quiz/question. The second value would change for every quiz, and the third would change for every question.
With this string of numbers, an instructor could provide the screen capture to an admin, and we could tell them (or the Office of Student Conduct) which user allowed someone to take a picture of the exam.
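A minimal sketch of assembling that string, assuming the IDs have already been obtained (from ENV, the URL, and the question id); the sample values are made up:

```javascript
// Build "canvas_user_id-quiz_id-question_id-random" as described above.
function buildCode(userId, quizId, questionId, rand) {
  // When rand is not supplied, generate 4-6 random digits.
  if (rand === undefined) rand = Math.floor(1000 + Math.random() * 899000);
  return [userId, quizId, questionId, rand].join("-");
}

// Illustrative IDs only:
console.log(buildCode(12345, 6932451, 652003, 718368));
// → "12345-6932451-652003-718368"
```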
A couple of thoughts.
How about encoding the number so it minimally protects their IDs if shared but is easily recoverable?
Going with your example (and making up UserID and QuizID), you might have a string like this:
12345-6932451-652003-718368
Base64 encoding makes it look totally random, but is long and hard to type.
MTIzNDUtNjkzMjQ1MS02NTIwMDMtNzE4MzY4
Converting each ID to a hexadecimal number makes it look like a code you might type into a product key.
3039-69C7E3-9F2E3-AF620
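The two encodings above can be reproduced in a few lines. In the browser you could use btoa() for the Base64 step; Buffer is used here so the snippet also runs in Node.

```javascript
const code = "12345-6932451-652003-718368";

// Base64: random-looking, but long and hard to type.
const b64 = Buffer.from(code, "utf8").toString("base64");
console.log(b64); // → "MTIzNDUtNjkzMjQ1MS02NTIwMDMtNzE4MzY4"

// Hexadecimal: each ID converted separately, product-key style.
const hex = code.split("-")
  .map(n => parseInt(n, 10).toString(16).toUpperCase())
  .join("-");
console.log(hex); // → "3039-69C7E3-9F2E3-AF620"
```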
You could obfuscate those even more by applying a transform to them before encoding. Take the quiz number, for example, and then use it to modify the other numbers. Then none of the numbers would stay the same for each student and it would look more random.
Hexadecimal will save you space over straight decimal, so the code is smaller. You may not even need the random digits at the end; incorporating the course ID might be useful instead, since looking up a quiz by ID requires the course ID.
Since this is all done through JavaScript, nothing is secure. This shouldn't be used to encode confidential information, but there are ways to make it look random without blatantly divulging information.
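One possible transform, as a sketch (this specific scheme is my own assumption, not something from the post): XOR the user and question IDs with the quiz ID before hex-encoding, so no field stays constant for a student across quizzes. XOR is its own inverse, so an admin can recover the IDs trivially; as noted above, this is obfuscation, not security.

```javascript
// Obfuscate: XOR user and question IDs with the quiz ID, then hex-encode.
function encodeIds(userId, quizId, questionId) {
  return [userId ^ quizId, quizId, questionId ^ quizId]
    .map(n => (n >>> 0).toString(16).toUpperCase())
    .join("-");
}

// Recover the original IDs by XORing again.
function decodeIds(codeStr) {
  const [u, q, t] = codeStr.split("-").map(s => parseInt(s, 16));
  return { userId: u ^ q, quizId: q, questionId: t ^ q };
}

// Round trip with made-up IDs:
const encoded = encodeIds(12345, 6932451, 652003);
console.log(encoded, decodeIds(encoded));
```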
I set up MathJax on a subaccount in preparation for my fall calculus courses. How that pertains here is that I made it run on select pages only, and it did run in the mobile apps. That means you should be able to parse the location to grab the quiz ID. That said, the mobile apps may pull in the data for a quiz completely differently. There is a form element that contains the quiz ID if we cannot get it from the location. It might be easier to get it from the form#submit_quiz_form element, as its action attribute contains the course number, quiz number, and user ID all in one spot.
I also reconfirmed my suspicion that JavaScript you specify for a browser is automatically loaded in the mobile apps as well, so if you write the code well, it should only have to be inserted in the one spot.
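As a sketch of pulling the IDs out of such a URL: the pattern below is assumed from typical Canvas quiz-taking URLs (/courses/:id/quizzes/:id/...) and may differ in practice, so treat it as a starting point.

```javascript
// Extract course and quiz IDs from an action-style URL, e.g. the one on
// form#submit_quiz_form. Returns null when the pattern does not match.
function parseQuizAction(action) {
  const m = action.match(/\/courses\/(\d+)\/quizzes\/(\d+)/);
  return m ? { courseId: m[1], quizId: m[2] } : null;
}

// Hypothetical action URL for illustration:
const sample = "/courses/1234/quizzes/6932451/submissions?user_id=12345";
console.log(parseQuizAction(sample));
```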
CSS selectors will make it difficult to match on an ID of question_652003_question_text where the number really needs to be a wildcard. I would look for .quiz_sortable.question_holder because that's the holder for the whole question. I am not sure how it works for delivered quizzes, but in editing them, there is a phantom question on the page that you might need to watch out for.
Alternatively, you could use document.querySelectorAll('.quiz_sortable.question_holder .text > .question_text.user_content') to get all of the question texts. The id includes the question number, and we already have the user ID and quiz ID from elsewhere.
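Putting that together as a sketch: the id format (question_NNN_question_text) comes from the sample question earlier in the thread, and the selector from the discussion above; the DOM loop is shown as a comment since it only makes sense in the browser.

```javascript
// Pull the numeric question ID out of ids like "question_652003_question_text".
function questionIdFrom(el) {
  const m = el.id.match(/^question_(\d+)_question_text$/);
  return m ? m[1] : null;
}

// In the browser (Theme Editor JS), something like:
// document
//   .querySelectorAll('.quiz_sortable.question_holder .text > .question_text.user_content')
//   .forEach(el => console.log(questionIdFrom(el)));

// Outside the browser, any object with an id property works:
console.log(questionIdFrom({ id: "question_652003_question_text" })); // → "652003"
```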
If the aim is to deter copying of the questions so that other students cannot learn them in advance, have you considered a question group with a large number of questions, each of which is different? They can differ in the values used, the icons/images on the page, and even unique code in each question. Then you set Canvas to present the user with a random question from the question group. Canvas will keep track of which question instance was presented to the student.
Using a set of known tokens, you could have an external program take the single "master" question instance, create versions of it, insert them into the question group, and remove the master question. (Perhaps even better: keep a master quiz that is unpublished and visible only to the teachers in the course, and from it create the version of the quiz that is made available to the students.)
Any pictures of the screen will be one of the randomly selected instances of the question.
One could even analyze the answers given to detect systematic problems with students and the specific content (when many students systematically give an incorrect answer), or possible cheating when N students have the same pattern of answers despite having different questions.
Interesting approach, Chip, and good questions that seek to identify what is really important: is it deterring copying, or identifying who did the copying? Priscilla seemed resigned to the fact that the cheating was going to happen.
Using the API to alter the quiz raises the question of when you change the quiz. Every hour? After every student begins an attempt?
You could use Canvas Live Events to detect real-time activity. Using the asset events, there is a quizzes:quiz event that I think is fired when someone views a quiz.
Your suggestion could even work with a small set of questions. Embed an innocuous image chosen from a large pool; emojis come to mind. You would want to log which image went to which student in a database to make it easier to search and find, rather than digging through every student's quiz.
I think there is a time element that may make this approach challenging, though. Say a student takes a copy in Spring 2020 but it isn't found online until Spring 2021. Going through every quiz that people took over the last however-many terms is going to be problematic. Encoding the course, quiz, user, and question IDs into the code will help track that down.
An interesting (or not) approach might be to create a QR code and embed it into each question. QRCode.js has no dependencies and could create the code to embed the information. Encoding everything directly would likely take too much space, but you could have the code point to a server you control; then whenever someone scans it, no matter when, you get the IP address of the person and know that it has happened, as opposed to just happening to discover the question is out there on some website.
You could put some kind of useful information on the website that made it look legit and as a resource for students. For example:
This question was last seen on Exam 3 in the Fall 2015 Biology 101 course at Trump University. The material in this question is covered in section 15.2 of the textbook. This question is rated moderately difficult, with 63% of the students getting it correct.
On the backend, you are collecting the information about who made the copy in the first place and how many times the information has gotten out there. That can then be used to drive decisions about when to replace questions. It won't be foolproof, but it might provide additional information.
It may be that someone even scans the code while they're taking the quiz. Then you could change the message to something that discourages sharing it:
This question does not appear very often and no information about it is available.
Thanks, James, for your insightful (as always) response.
Altering the quiz can be done at whatever frequency you want. For example, during the 1990s we had an ID printed on each written exam for a course, with its questions (and a paper copy was kept in file cabinets). Additionally, we had to publish each exam and its answers, as well as publicly post a physical copy after the exam. Not surprisingly, the student organizations collected all of the exams and solutions and kept files of them for their members. I even got a request one year (from a student organization at another university) demanding that I send them the exam and solutions for a recent exam in a course that I taught. However, I disappointed them: as there had been fewer than 10 students for the makeup exam in August, I held an oral exam instead of a written exam, and therefore under the rules did not have to publish the written exam and its answers, as there was no such exam.
As you point out, it is straightforward to encode a watermark in terms of the content in the exam.
In a former LMS used at KTH, the database contained the random number used as the seed to generate the selection of parameters for a given student's question. In a course on electrical measurements, each question had a half dozen variants with specific initial values and answers, and the random seed selected the specific values used in the version of the question that a given student would see. When I migrated these questions to Canvas, I generated each possible variant of the question, with its corresponding answers, as alternatives in a question group, from which Canvas randomly selected a question for each student. With a number of questions on the exam, each selected from a bank of alternatives, it would be rare for two students to get the same exam. (Yes, the question groups in the Canvas quiz could be huge, as in many cases each variation of the parameters led to a different circuit diagram and a different set of solutions. Even worse, the earlier Canvas quizzes did not have very good facilities for handling answers with a given precision and accuracy, so many alternative answers had to be generated.)

One of the best features of this faculty member's online exams was that he had accumulated so much knowledge about the answers students gave (which generally required numeric values with a stated accuracy and precision, plus the units) that he could generate responses that included the probable mistake the student had made in solving the problem. I wrote code in WIRIS to ask his questions, check the answers, and generate the corresponding grade and comments; this was then invoked through an LTI interface to WIRIS. Sadly, this has now been abandoned, as the university has switched to Möbius (which has no API and requires manually entering the questions and answers, both of which can use Maple code).
Today, when we have (physical) written examinations, all of the students' exam booklets are scanned and entered into a records system before the examiner receives them (lest an examiner change a student's response to an exam question). Similarly, after grading, all of the physical exam booklets are scanned again before they are made available to the students. So even in these physical exams, it is possible to give each student a randomized set of questions. [In the pre-Covid-19 days, I even suggested mixing the students in the physical exam rooms so that no two students taking the exam for a given course were seated next to each other; instead, we could mix students taking exams from multiple courses in a single room.]
One of the features that I find very powerful in Canvas is that during an online exam the status of the student's responses is stored and timestamped, so that in an audit one could go back and see when the student answered a question, what the answer was, and, if it was changed, when the change occurred.
For more than a decade I have changed the way I handle examinations in the 3rd year and later: for each course, I include a project with written and oral reports, and each student or group of students has a different topic. This avoids the problem of students having the same questions/material, and the oral examination enables me to find out whether the individual students actually understand the material they have submitted. Within a few questions, it is very apparent if a student has not done the work. One drawback is that it takes 20 to 30 minutes per group for the oral presentations and questions, and it requires a lot of work from the examiner, who has to keep track of, and be knowledgeable in, a large number of typically quite different projects. In one term, I had oral presentations from 8:00 until 20:00 each day for more than a week, without any breaks other than the changeover between groups. I do not recommend this sort of schedule! While my wife was very good about helping me with this and bringing food at suitable intervals during the day, she made it clear that I should not do this again 🙂 I should also say that having the ability to provide feedback on the project as it proceeds, and to look at how the students respond to this feedback, is a very powerful mechanism to help the students and to avoid cheating.
So when it comes to exams, I think that it is important to examine what it is that you want to achieve.