// Numbas version: finer_feedback_settings {"name": "Test Yourself Week 2", "metadata": {"description": "", "licence": "None specified"}, "duration": 0, "percentPass": 0, "showQuestionGroupNames": false, "shuffleQuestionGroups": false, "showstudentname": true, "question_groups": [{"name": "Group", "pickingStrategy": "all-ordered", "pickQuestions": 1, "questionNames": ["", "", ""], "variable_overrides": [[], [], []], "questions": [{"name": "Reading data", "extensions": [], "custom_part_types": [], "resources": [], "navigation": {"allowregen": true, "showfrontpage": false, "preventleave": false, "typeendtoleave": false}, "contributors": [{"name": "Will Rushworth", "profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/20560/"}], "tags": [], "metadata": {"description": "", "licence": "None specified"}, "statement": "
Suppose that the file hookes_law.csv contains the data:
Force (N),Extension (cm),Uncertainty (cm)
0.50,0.20,0.10
1.00,0.11,0.10
1.50,0.18,0.10
2.00,0.58,0.10
2.50,0.61,0.10
3.00,0.67,0.10
4.00,0.80,0.10
5.00,1.15,0.10
6.00,1.47,0.10
7.00,1.85,0.10
8.00,2.04,0.10
9.00,2.17,0.10
10.00,2.41,0.10
Which of the following will successfully create three arrays, force, extension, and uncertainty, corresponding to the columns of the given data?
force, extension, uncertainty = np.loadtxt(\"hookes_law.csv\", delimiter=\",\", skiprows=1, unpack=True)
", "force = np.loadtxt(\"hookes_law.csv\", delimiter=\",\", skiprows=1)
extension = np.loadtxt(\"hookes_law.csv\", delimiter=\",\", skiprows=2)\n\nuncertainty = np.loadtxt(\"hookes_law.csv\", delimiter=\",\", skiprows=3)
", "import numpy as np\n
force, extension, uncertainty = np.loadtxt(\"hookes_law.csv\", delimiter=\",\", skiprows=1, unpack=True)
", "\nimport numpy as np
np.loadtxt(\"hookes_law.csv\", delimiter=\",\", skiprows=1, unpack=True)\n
"], "matrix": [0, 0, "1", 0], "distractors": ["", "", "", ""]}], "partsMode": "all", "maxMarks": 0, "objectives": [], "penalties": [], "objectiveVisibility": "always", "penaltyVisibility": "always", "type": "question"}, {"name": "Plotting functions", "extensions": ["mas-codeassess", "programming"], "custom_part_types": [{"source": {"pk": 195, "author": {"name": "Christian Lawson-Perfect", "pk": 7}, "edit_page": "/part_type/195/edit"}, "name": "Code", "short_name": "mark-code-3", "description": "Mark code provided by the student by running it and a series of validation and marking tests.
\nThe validation tests are used to reject an answer if the student has misunderstood the task, for example if they haven't defined a required variable or function.
\nMarking tests check properties of the student's code. Each test awards a proportion of the available credit if it is passed.
\nYou can optionally show the student the STDOUT and/or STDERR when running their code.
\nYou can give a preamble and postamble which are run before and after the student's code, and also modify the student's code before running it.
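\nFor example, the plotting parts in this exam define their tests as lists of Python-expression strings; each marking test is [name, weight, code] and each validation test is [name, code] (the names linexy, x_c and y_c below are defined in those parts' postambles):
Marking tests:
[
  [\"Correct plot\", 1.0, \"np.isclose(linexy[:, 0], x_c).all() and np.isclose(linexy[:, 1], y_c).all()\"]
]
Validation tests:
[
  [\"You should add one line to your plot\", \"len(ax.lines) == 1\"],
  [\"Ensure the x-axis contains 100 points\", \"len(linexy[:, 0]) == 100\"]
]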
", "help_url": "", "input_widget": "code-editor", "input_options": {"correctAnswer": "if(settings[\"correct_answer_subvars\"],\n render(settings[\"correct_answer\"])\n,\n settings[\"correct_answer\"]\n)", "hint": {"static": false, "value": "\"Write \"+capitalise(language_synonym(settings[\"code_language\"]))+\" code\""}, "language": {"static": false, "value": "language_synonym(settings[\"code_language\"])"}, "placeholder": {"static": false, "value": "if(settings[\"correct_answer_subvars\"],\n render(settings[\"placeholder\"])\n,\n settings[\"placeholder\"]\n)"}, "theme": {"static": true, "value": "textmate"}}, "can_be_gap": true, "can_be_step": true, "marking_script": "mark:\napply(main_error);\napply(show_images);\napply(matplotlib_feedback);\napply(postamble_feedback);\napply(validation_test_feedback);\napply(marking_test_feedback)\n\ninterpreted_answer:\nstudentAnswer\n\nmain_result:\ncode_result[3]\n\nmarking_results:\ncode_result[6..(len(settings[\"tests\"])+6)]\n\nvalidation_results:\ncode_result[(len(settings[\"tests\"])+6)..len(code_result)]\n\nmain_error:\nassert(main_stdout=\"\" or not settings[\"show_stdout\"],\n feedback(\"Your code produced this output:{escape_html(main_stdout)}\")\n);\nassert(main_result[\"success\"],\n warn(\"\"\"There was an error in your code.\"\"\");\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in your code:
{escape_html(main_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in your code.\")\n )\n)\n\nmarking_test_feedback:\nmap(\n let(\n [name,weight,code], test,\n header, \"Test: {name} \",\n if(r[\"success\"],\n let(\n result, r[\"result\"],\n max_credit, weight/total_weight,\n credit, if(result isa \"number\", result, award(1,result)),\n switch(\n credit=0, negative_feedback(header+\"was not passed.\"),\n credit=1, add_credit(max_credit, header+\"was passed.\"),\n add_credit(credit*max_credit, header+\"was partially passed.\")\n )\n )\n ,\n if(settings[\"show_marking_errors\"],\n negative_feedback(\"\"\"There was an error:
{escape_html(r[\"stderr\"])}\"\"\")\n ,\n negative_feedback(header+\"was not passed.\")\n )\n )\n ),\n [test,r],\n zip(settings[\"tests\"],marking_results)\n)\n\nvalidation_test_feedback:\nmap(\n let([name,code], test,\n if(r[\"success\"],\n if(r[\"result\"],\n true\n ,\n warn(\"\"\"Your code failed the test {name}.\"\"\");\n fail(\"\"\"Your code failed the test {name}.\"\"\");false\n )\n ,\n warn(\"\"\"There was an error running the test {name}.\"\"\");\n fail(\"\"\"There was an error running the test {name}:
{escape_html(r[\"stderr\"])}\"\"\")\n )\n ),\n [test,r],\n zip(settings[\"validation_tests\"],validation_results)\n)\n\ntotal_weight:\nsum(map(weight,[name,weight,code],settings[\"tests\"]))\n\npre_submit:\n[run_code(code_language,\n [\n matplotlib_preamble,\n variables_as_code(language_synonym(code_language), settings[\"variables\"]),\n render(settings[\"preamble\"]),\n if(trim(settings[\"modifier\"])=\"\", studentAnswer, eval(expression(settings[\"modifier\"]))),\n render(settings[\"postamble\"]),\n matplotlib_postamble\n ]\n +map(code,[name,marks,code],settings[\"tests\"])\n +map(code,[name,code],settings[\"validation_tests\"])\n)]\n\ncode_result:\npre_submit[\"code_result\"]\n\nmain_stdout:\nsafe(main_result[\"stdout\"])\n\ncode_language:\nsettings[\"code_language\"]\n\npreamble_result:\ncode_result[2]\n\npreamble_stderr:\npreamble_result[\"stderr\"]\n\npostamble_result:\ncode_result[4]\n\npostamble_stderr:\npostamble_result[\"stderr\"]\n\npostamble_feedback:\nassert(postamble_result[\"stdout\"]=\"\",\n feedback(\n if(settings[\"postamble_feedback_whitespace\"],\n html(\"\"\"
{escape_html(postamble_result[\"stdout\"])}\"\"\")\n ,\n postamble_result[\"stdout\"]\n )\n )\n);\nassert(postamble_result[\"success\"],\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in the marking routine postamble:
{escape_html(postamble_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in the marking routine postamble.\")\n )\n)\n\nmatplotlib_preamble:\nif(code_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n plt.clf() \n\"\"\"),\n \"\"\n)\n\nmatplotlib_postamble:\nswitch(\ncode_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n fig = plt.gcf()\n if fig.get_axes():\n fig.savefig(sys.stdout, format='svg')\n\"\"\"),\n \"\"\n)\n\nmatplotlib_result:\ncode_result[5]\n\nmatplotlib_feedback:\nswitch(\ncode_language=\"pyodide\",\n assert(matplotlib_result[\"stdout\"]=\"\",\n feedback(matplotlib_result[\"stdout\"])\n ),\n \"\"\n)\n\n\n\nimages:\nflatten(map(\n get(r,\"images\",[]),\n r,\n code_result\n))\n\nshow_images:\nassert(len(images)=0 or not settings[\"show_stdout\"],\n feedback(\"Your code produced the following {pluralise(len(images),'image','images')}:\");\n map(\n feedback(html(x)),\n x,\n images\n )\n)", "marking_notes": [{"name": "mark", "description": "This is the main marking note. It should award credit and provide feedback based on the student's answer.", "definition": "apply(main_error);\napply(show_images);\napply(matplotlib_feedback);\napply(postamble_feedback);\napply(validation_test_feedback);\napply(marking_test_feedback)"}, {"name": "interpreted_answer", "description": "A value representing the student's answer to this part.", "definition": "studentAnswer"}, {"name": "main_result", "description": "
The result of running the student's code and the preamble, without any tests.
\nNormally used to detect errors in the student's code.
", "definition": "code_result[3]"}, {"name": "marking_results", "description": "The results of running the marking tests.
", "definition": "code_result[6..(len(settings[\"tests\"])+6)]"}, {"name": "validation_results", "description": "The results of running the validation tests.
", "definition": "code_result[(len(settings[\"tests\"])+6)..len(code_result)]"}, {"name": "main_error", "description": "Show STDOUT if allowed.
\nCheck the student's code runs on its own. Fail if there was an error, and show STDERR if allowed.
", "definition": "assert(main_stdout=\"\" or not settings[\"show_stdout\"],\n feedback(\"Your code produced this output:{escape_html(main_stdout)}\")\n);\nassert(main_result[\"success\"],\n warn(\"\"\"There was an error in your code.\"\"\");\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in your code:
{escape_html(main_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in your code.\")\n )\n)"}, {"name": "marking_test_feedback", "description": "
Feedback on the marking tests. For each test, if the test was passed then add the corresponding amount of credit. If there was an error, show the error.
", "definition": "map(\n let(\n [name,weight,code], test,\n header, \"Test: {name} \",\n if(r[\"success\"],\n let(\n result, r[\"result\"],\n max_credit, weight/total_weight,\n credit, if(result isa \"number\", result, award(1,result)),\n switch(\n credit=0, negative_feedback(header+\"was not passed.\"),\n credit=1, add_credit(max_credit, header+\"was passed.\"),\n add_credit(credit*max_credit, header+\"was partially passed.\")\n )\n )\n ,\n if(settings[\"show_marking_errors\"],\n negative_feedback(\"\"\"There was an error:{escape_html(r[\"stderr\"])}\"\"\")\n ,\n negative_feedback(header+\"was not passed.\")\n )\n )\n ),\n [test,r],\n zip(settings[\"tests\"],marking_results)\n)"}, {"name": "validation_test_feedback", "description": "
Give feedback on the validation tests. If any of them are not passed, the student's answer is invalid.
", "definition": "map(\n let([name,code], test,\n if(r[\"success\"],\n if(r[\"result\"],\n true\n ,\n warn(\"\"\"Your code failed the test {name}.\"\"\");\n fail(\"\"\"Your code failed the test {name}.\"\"\");false\n )\n ,\n warn(\"\"\"There was an error running the test {name}.\"\"\");\n fail(\"\"\"There was an error running the test {name}:{escape_html(r[\"stderr\"])}\"\"\")\n )\n ),\n [test,r],\n zip(settings[\"validation_tests\"],validation_results)\n)"}, {"name": "total_weight", "description": "
The sum of the weights of the marking tests. Each test's weight is divided by this to produce a proportion of the available credit.
", "definition": "sum(map(weight,[name,weight,code],settings[\"tests\"]))"}, {"name": "pre_submit", "description": "The code blocks to run.
\nIn order, they are: the matplotlib preamble, the question variables, the part's preamble, the student's code (possibly modified), the part's postamble, the matplotlib postamble, then each marking test and each validation test.
\nThe results of the code blocks are returned as a list with an entry corresponding to each block of code.
", "definition": "pre_submit[\"code_result\"]"}, {"name": "main_stdout", "description": "The stdout from the student's code.
", "definition": "safe(main_result[\"stdout\"])"}, {"name": "code_language", "description": "The language the code is written in. Either \"pyodide\" (Python) or \"webr\" (R)
", "definition": "settings[\"code_language\"]"}, {"name": "preamble_result", "description": "The result of running the preamble block.
", "definition": "code_result[2]"}, {"name": "preamble_stderr", "description": "The STDERR produced by the preamble block.
", "definition": "preamble_result[\"stderr\"]"}, {"name": "postamble_result", "description": "The result of running the postamble.
", "definition": "code_result[4]"}, {"name": "postamble_stderr", "description": "The STDERR produced by the postamble block.
", "definition": "postamble_result[\"stderr\"]"}, {"name": "postamble_feedback", "description": "Show the STDOUT from the postamble, if there is any.
", "definition": "assert(postamble_result[\"stdout\"]=\"\",\n feedback(\n if(settings[\"postamble_feedback_whitespace\"],\n html(\"\"\"{escape_html(postamble_result[\"stdout\"])}\"\"\")\n ,\n postamble_result[\"stdout\"]\n )\n )\n);\nassert(postamble_result[\"success\"],\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in the marking routine postamble:
{escape_html(postamble_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in the marking routine postamble.\")\n )\n)"}, {"name": "matplotlib_preamble", "description": "
Preamble for a hack to ensure that figures produced by matplotlib in Python are displayed.
\nThis code clears the matplotlib output, if matplotlib has been loaded.
", "definition": "if(code_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n plt.clf() \n\"\"\"),\n \"\"\n)"}, {"name": "matplotlib_postamble", "description": "A hack to show any figures produced with matplotlib in the stdout.
", "definition": "switch(\ncode_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n fig = plt.gcf()\n if fig.get_axes():\n fig.savefig(sys.stdout, format='svg')\n\"\"\"),\n \"\"\n)"}, {"name": "matplotlib_result", "description": "The result of running the matplotlib hack.
", "definition": "code_result[5]"}, {"name": "matplotlib_feedback", "description": "Feedback from the matplotlib hack: if a figure is produced, it's displayed as SVG here.
", "definition": "switch(\ncode_language=\"pyodide\",\n assert(matplotlib_result[\"stdout\"]=\"\",\n feedback(matplotlib_result[\"stdout\"])\n ),\n \"\"\n)\n\n"}, {"name": "images", "description": "Any images produced by the code blocks.
", "definition": "flatten(map(\n get(r,\"images\",[]),\n r,\n code_result\n))"}, {"name": "show_images", "description": "Show the images produced by the code.
", "definition": "assert(len(images)=0 or not settings[\"show_stdout\"],\n feedback(\"Your code produced the following {pluralise(len(images),'image','images')}:\");\n map(\n feedback(html(x)),\n x,\n images\n )\n)"}], "settings": [{"name": "show_input_hint", "label": "Show the input hint?", "help_url": "", "hint": "", "input_type": "checkbox", "default_value": true}, {"name": "code_language", "label": "Code language", "help_url": "", "hint": "The language that the student's code will be written in.", "input_type": "dropdown", "default_value": "pyodide", "choices": [{"value": "pyodide", "label": "Python"}, {"value": "webr", "label": "R"}]}, {"name": "correct_answer", "label": "Correct answer", "help_url": "", "hint": "A correct answer to the part.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "correct_answer_subvars", "label": "Substitute question variables into the correct answer?", "help_url": "", "hint": "If ticked, then JME expressions between curly braces will be evaluated and substituted into the correct answer.studentAnswer
.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "preamble", "label": "Preamble", "help_url": "", "hint": "This code is run before the student's code. Define anything that the student's code or your tests need.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "postamble", "label": "Postamble", "help_url": "", "hint": "This code is run after the student's code but before the validation and unit tests.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "postamble_feedback_whitespace", "label": "Format postamble output as code?", "help_url": "", "hint": "If ticked, any output produced by the postamble will be formatted in monospace font, with whitespace preserved. If not ticked, it'll be presented as prose text or HTML.", "input_type": "checkbox", "default_value": false}, {"name": "tests", "label": "Marking tests", "help_url": "", "hint": "A list of tests used to mark the student's answer.A list of tests used to validate that the student's code is acceptable.
For marking tests, each item is a list of three values: the name of the test, its weight, and the code to run. For validation tests, each item is a list of two strings: the name of the test and the code to run.
Create plots of the following functions, with $x$ equal to 100 linearly spaced values in the interval $[-5,5]$.
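As a reminder of the pattern each part expects, here is a minimal sketch; the function f below is only a placeholder and should be replaced by the function asked for in each part.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 100)  # 100 linearly spaced values in [-5, 5]
f = x                        # placeholder: replace with the required function
plt.plot(x, f)
plt.xlabel('x')
plt.ylabel('f(x)')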
", "advice": "", "rulesets": {}, "builtin_constants": {"e": true, "pi,\u03c0": true, "i": true}, "constants": [], "variables": {}, "variablesTest": {"condition": "", "maxRuns": 100}, "ungrouped_variables": [], "variable_groups": [], "functions": {}, "preamble": {"js": "", "css": ""}, "parts": [{"type": "mark-code-3", "useCustomName": false, "customName": "", "marks": 1, "scripts": {}, "customMarkingAlgorithm": "mark:\n feedback(html(code_result[3][\"stdout\"]));\n apply(validation_test_feedback);\n apply(marking_test_feedback)", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "prompt": "$\\displaystyle f(x)=2\\cos\\left(\\frac{\\pi x}{4}\\right)$
\n", "settings": {"show_input_hint": true, "code_language": "pyodide", "correct_answer": "import numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(-5,5,100)\nf = 2 * np.cos((np.pi * x) / 4)\n\nplt.plot(x,f)\nplt.xlabel('x')\nplt.ylabel('f(x)')", "correct_answer_subvars": true, "show_stdout": true, "show_stderr": false, "show_marking_errors": false, "placeholder": "", "modifier": "", "preamble": "import sys\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(5,3))", "postamble": "fig.savefig(sys.stdout, format='svg')\nax = plt.gca() # get axis handle\n\n# Get data from the plot\nline = ax.lines[0]\nlinexy = line.get_xydata()\nlinexy = linexy[np.argsort(linexy[:,0])]\n\n# Correct axes for comparison\nx_c = np.linspace(-5,5,100)\ny_c = 2 * np.cos((np.pi * x_c) / 4)", "tests": "[\n [\"Correct plot\", 1.0, \"np.isclose(linexy[:, 0],x_c).all() and np.isclose(linexy[:,1],y_c).all()\"]\n]", "validation_tests": "[\n [\"You should add one line to your plot\", \"len(ax.lines) == 1\"],\n [\"Ensure the x-axis contains 100 points\", \"len(linexy[:, 0]) == 100\"]\n]", "variables": "dict()"}}, {"type": "mark-code-3", "useCustomName": false, "customName": "", "marks": 1, "scripts": {}, "customMarkingAlgorithm": "mark:\n feedback(html(code_result[3][\"stdout\"]));\n apply(validation_test_feedback);\n apply(marking_test_feedback)", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "prompt": "$\\displaystyle f(x)=(x+1)^3-2$
\n\n", "settings": {"show_input_hint": true, "code_language": "pyodide", "correct_answer": "import numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(-5,5,100)\nf = (x + 1) ** 3 - 2\n\nplt.plot(x,f)\nplt.xlabel('x')\nplt.ylabel('f(x)')", "correct_answer_subvars": true, "show_stdout": true, "show_stderr": false, "show_marking_errors": false, "placeholder": "", "modifier": "", "preamble": "import sys\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(5,3))\n", "postamble": "fig.savefig(sys.stdout, format='svg')\nax = plt.gca() # get axis handle\n\n# Get data from the plot\nline = ax.lines[0]\nlinexy = line.get_xydata()\nlinexy = linexy[np.argsort(linexy[:,0])]\n\n# Correct axes for comparison\nx_c = np.linspace(-5,5,100)\ny_c = (x + 1) ** 3 - 2", "tests": "[\n [\"Correct plot\", 1.0, \"np.isclose(linexy[:, 0],x_c).all() and np.isclose(linexy[:,1],y_c).all()\"]\n]", "validation_tests": "[\n [\"You should add one line to your plot\", \"len(ax.lines) == 1\"],\n [\"Ensure the x-axis contains 100 points\", \"len(linexy[:, 0]) == 100\"]\n]", "variables": "dict()"}}, {"type": "mark-code-3", "useCustomName": false, "customName": "", "marks": 1, "scripts": {}, "customMarkingAlgorithm": "mark:\n feedback(html(code_result[3][\"stdout\"]));\n apply(validation_test_feedback);\n apply(marking_test_feedback)", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "prompt": "$\\displaystyle f(x)=x^2e^{-x^2}$
\n\nNote that on this final one, my default choice of 100 points has left the curve a bit jagged near $x=0$! You might like to try changing the x array so that it has a larger number of points.
", "settings": {"show_input_hint": true, "code_language": "pyodide", "correct_answer": "import numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(-5,5,100)\nf = x ** 2 * np.exp(-x ** 2)\n\nplt.plot(x,f)\nplt.xlabel('x')\nplt.ylabel('f(x)')", "correct_answer_subvars": true, "show_stdout": true, "show_stderr": false, "show_marking_errors": false, "placeholder": "", "modifier": "", "preamble": "import sys\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(5,3))\n", "postamble": "fig.savefig(sys.stdout, format='svg')\nax = plt.gca() # get axis handle\n\n# Get data from the plot\nline = ax.lines[0]\nlinexy = line.get_xydata()\nlinexy = linexy[np.argsort(linexy[:,0])]\n\n# Correct axes for comparison\nx_c = np.linspace(-5,5,100)\ny_c = x ** 2 * np.exp(-x ** 2)", "tests": "[\n [\"Correct plot\", 1.0, \"np.isclose(linexy[:, 0],x_c).all() and np.isclose(linexy[:,1],y_c).all()\"]\n]", "validation_tests": "[\n [\"You should add one line to your plot\", \"len(ax.lines) == 1\"],\n [\"Ensure the x-axis contains 100 points\", \"len(linexy[:, 0]) == 100\"]\n]", "variables": "dict()"}}], "partsMode": "all", "maxMarks": 0, "objectives": [], "penalties": [], "objectiveVisibility": "always", "penaltyVisibility": "always", "type": "question"}, {"name": "Oscillations", "extensions": ["mas-codeassess", "postmessage-to-parent", "programming"], "custom_part_types": [{"source": {"pk": 195, "author": {"name": "Christian Lawson-Perfect", "pk": 7}, "edit_page": "/part_type/195/edit"}, "name": "Code", "short_name": "mark-code-3", "description": "Mark code provided by the student by running it and a series of validation and marking tests.
\nThe validation tests are used to reject an answer if the student has misunderstood the task, for example if they haven't defined a required variable or function.
\nMarking tests check properties of the student's code. Each test awards a proportion of the available credit if it is passed.
\nYou can optionally show the student the STDOUT and/or STDERR when running their code.
\nYou can give a preamble and postamble which are run before and after the student's code, and also modify the student's code before running it.
", "help_url": "", "input_widget": "code-editor", "input_options": {"correctAnswer": "if(settings[\"correct_answer_subvars\"],\n render(settings[\"correct_answer\"])\n,\n settings[\"correct_answer\"]\n)", "hint": {"static": false, "value": "\"Write \"+capitalise(language_synonym(settings[\"code_language\"]))+\" code\""}, "language": {"static": false, "value": "language_synonym(settings[\"code_language\"])"}, "placeholder": {"static": false, "value": "if(settings[\"correct_answer_subvars\"],\n render(settings[\"placeholder\"])\n,\n settings[\"placeholder\"]\n)"}, "theme": {"static": true, "value": "textmate"}}, "can_be_gap": true, "can_be_step": true, "marking_script": "mark:\napply(main_error);\napply(show_images);\napply(matplotlib_feedback);\napply(postamble_feedback);\napply(validation_test_feedback);\napply(marking_test_feedback)\n\ninterpreted_answer:\nstudentAnswer\n\nmain_result:\ncode_result[3]\n\nmarking_results:\ncode_result[6..(len(settings[\"tests\"])+6)]\n\nvalidation_results:\ncode_result[(len(settings[\"tests\"])+6)..len(code_result)]\n\nmain_error:\nassert(main_stdout=\"\" or not settings[\"show_stdout\"],\n feedback(\"Your code produced this output:{escape_html(main_stdout)}\")\n);\nassert(main_result[\"success\"],\n warn(\"\"\"There was an error in your code.\"\"\");\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in your code:
{escape_html(main_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in your code.\")\n )\n)\n\nmarking_test_feedback:\nmap(\n let(\n [name,weight,code], test,\n header, \"Test: {name} \",\n if(r[\"success\"],\n let(\n result, r[\"result\"],\n max_credit, weight/total_weight,\n credit, if(result isa \"number\", result, award(1,result)),\n switch(\n credit=0, negative_feedback(header+\"was not passed.\"),\n credit=1, add_credit(max_credit, header+\"was passed.\"),\n add_credit(credit*max_credit, header+\"was partially passed.\")\n )\n )\n ,\n if(settings[\"show_marking_errors\"],\n negative_feedback(\"\"\"There was an error:
{escape_html(r[\"stderr\"])}\"\"\")\n ,\n negative_feedback(header+\"was not passed.\")\n )\n )\n ),\n [test,r],\n zip(settings[\"tests\"],marking_results)\n)\n\nvalidation_test_feedback:\nmap(\n let([name,code], test,\n if(r[\"success\"],\n if(r[\"result\"],\n true\n ,\n warn(\"\"\"Your code failed the test {name}.\"\"\");\n fail(\"\"\"Your code failed the test {name}.\"\"\");false\n )\n ,\n warn(\"\"\"There was an error running the test {name}.\"\"\");\n fail(\"\"\"There was an error running the test {name}:
{escape_html(r[\"stderr\"])}\"\"\")\n )\n ),\n [test,r],\n zip(settings[\"validation_tests\"],validation_results)\n)\n\ntotal_weight:\nsum(map(weight,[name,weight,code],settings[\"tests\"]))\n\npre_submit:\n[run_code(code_language,\n [\n matplotlib_preamble,\n variables_as_code(language_synonym(code_language), settings[\"variables\"]),\n render(settings[\"preamble\"]),\n if(trim(settings[\"modifier\"])=\"\", studentAnswer, eval(expression(settings[\"modifier\"]))),\n render(settings[\"postamble\"]),\n matplotlib_postamble\n ]\n +map(code,[name,marks,code],settings[\"tests\"])\n +map(code,[name,code],settings[\"validation_tests\"])\n)]\n\ncode_result:\npre_submit[\"code_result\"]\n\nmain_stdout:\nsafe(main_result[\"stdout\"])\n\ncode_language:\nsettings[\"code_language\"]\n\npreamble_result:\ncode_result[2]\n\npreamble_stderr:\npreamble_result[\"stderr\"]\n\npostamble_result:\ncode_result[4]\n\npostamble_stderr:\npostamble_result[\"stderr\"]\n\npostamble_feedback:\nassert(postamble_result[\"stdout\"]=\"\",\n feedback(\n if(settings[\"postamble_feedback_whitespace\"],\n html(\"\"\"
{escape_html(postamble_result[\"stdout\"])}\"\"\")\n ,\n postamble_result[\"stdout\"]\n )\n )\n);\nassert(postamble_result[\"success\"],\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in the marking routine postamble:
{escape_html(postamble_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in the marking routine postamble.\")\n )\n)\n\nmatplotlib_preamble:\nif(code_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n plt.clf() \n\"\"\"),\n \"\"\n)\n\nmatplotlib_postamble:\nswitch(\ncode_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n fig = plt.gcf()\n if fig.get_axes():\n fig.savefig(sys.stdout, format='svg')\n\"\"\"),\n \"\"\n)\n\nmatplotlib_result:\ncode_result[5]\n\nmatplotlib_feedback:\nswitch(\ncode_language=\"pyodide\",\n assert(matplotlib_result[\"stdout\"]=\"\",\n feedback(matplotlib_result[\"stdout\"])\n ),\n \"\"\n)\n\n\n\nimages:\nflatten(map(\n get(r,\"images\",[]),\n r,\n code_result\n))\n\nshow_images:\nassert(len(images)=0 or not settings[\"show_stdout\"],\n feedback(\"Your code produced the following {pluralise(len(images),'image','images')}:\");\n map(\n feedback(html(x)),\n x,\n images\n )\n)", "marking_notes": [{"name": "mark", "description": "This is the main marking note. It should award credit and provide feedback based on the student's answer.", "definition": "apply(main_error);\napply(show_images);\napply(matplotlib_feedback);\napply(postamble_feedback);\napply(validation_test_feedback);\napply(marking_test_feedback)"}, {"name": "interpreted_answer", "description": "A value representing the student's answer to this part.", "definition": "studentAnswer"}, {"name": "main_result", "description": "
The result of running the student's code and the preamble, without any tests.
\nNormally used to detect errors in the student's code.
", "definition": "code_result[3]"}, {"name": "marking_results", "description": "The results of running the marking tests.
", "definition": "code_result[6..(len(settings[\"tests\"])+6)]"}, {"name": "validation_results", "description": "The results of running the validation tests.
", "definition": "code_result[(len(settings[\"tests\"])+6)..len(code_result)]"}, {"name": "main_error", "description": "Show STDOUT if allowed.
\nCheck the student's code runs on its own. Fail if there was an error, and show STDERR if allowed.
", "definition": "assert(main_stdout=\"\" or not settings[\"show_stdout\"],\n feedback(\"Your code produced this output:{escape_html(main_stdout)}\")\n);\nassert(main_result[\"success\"],\n warn(\"\"\"There was an error in your code.\"\"\");\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in your code:
{escape_html(main_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in your code.\")\n )\n)"}, {"name": "marking_test_feedback", "description": "
Feedback on the marking tests. For each test, if the test was passed then add the corresponding amount of credit. If there was an error, show the error.
", "definition": "map(\n let(\n [name,weight,code], test,\n header, \"Test: {name} \",\n if(r[\"success\"],\n let(\n result, r[\"result\"],\n max_credit, weight/total_weight,\n credit, if(result isa \"number\", result, award(1,result)),\n switch(\n credit=0, negative_feedback(header+\"was not passed.\"),\n credit=1, add_credit(max_credit, header+\"was passed.\"),\n add_credit(credit*max_credit, header+\"was partially passed.\")\n )\n )\n ,\n if(settings[\"show_marking_errors\"],\n negative_feedback(\"\"\"There was an error:{escape_html(r[\"stderr\"])}\"\"\")\n ,\n negative_feedback(header+\"was not passed.\")\n )\n )\n ),\n [test,r],\n zip(settings[\"tests\"],marking_results)\n)"}, {"name": "validation_test_feedback", "description": "
Give feedback on the validation tests. If any of them are not passed, the student's answer is invalid.
", "definition": "map(\n let([name,code], test,\n if(r[\"success\"],\n if(r[\"result\"],\n true\n ,\n warn(\"\"\"Your code failed the test {name}.\"\"\");\n fail(\"\"\"Your code failed the test {name}.\"\"\");false\n )\n ,\n warn(\"\"\"There was an error running the test {name}.\"\"\");\n fail(\"\"\"There was an error running the test {name}:{escape_html(r[\"stderr\"])}\"\"\")\n )\n ),\n [test,r],\n zip(settings[\"validation_tests\"],validation_results)\n)"}, {"name": "total_weight", "description": "
The sum of the weights of the marking tests. Each test's weight is divided by this to produce a proportion of the available credit.
", "definition": "sum(map(weight,[name,weight,code],settings[\"tests\"]))"}, {"name": "pre_submit", "description": "The code blocks to run.
\nIn order, they are: the matplotlib preamble, the question variables, the part's preamble, the student's code (possibly modified), the part's postamble, the matplotlib postamble, then each marking test and each validation test.
\nThe results of the code blocks are returned as a list with an entry corresponding to each block of code.
", "definition": "pre_submit[\"code_result\"]"}, {"name": "main_stdout", "description": "The stdout from the student's code.
", "definition": "safe(main_result[\"stdout\"])"}, {"name": "code_language", "description": "The language the code is written in. Either \"pyodide\" (Python) or \"webr\" (R)
", "definition": "settings[\"code_language\"]"}, {"name": "preamble_result", "description": "The result of running the preamble block.
", "definition": "code_result[2]"}, {"name": "preamble_stderr", "description": "The STDERR produced by the preamble block.
", "definition": "preamble_result[\"stderr\"]"}, {"name": "postamble_result", "description": "The result of running the postamble.
", "definition": "code_result[4]"}, {"name": "postamble_stderr", "description": "The STDERR produced by the postamble block.
", "definition": "postamble_result[\"stderr\"]"}, {"name": "postamble_feedback", "description": "Show the STDOUT from the postamble, if there is any.
", "definition": "assert(postamble_result[\"stdout\"]=\"\",\n feedback(\n if(settings[\"postamble_feedback_whitespace\"],\n html(\"\"\"{escape_html(postamble_result[\"stdout\"])}\"\"\")\n ,\n postamble_result[\"stdout\"]\n )\n )\n);\nassert(postamble_result[\"success\"],\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in the marking routine postamble:
{escape_html(postamble_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in the marking routine postamble.\")\n )\n)"}, {"name": "matplotlib_preamble", "description": "
Preamble for a hack to ensure that figures produced by matplotlib in Python are displayed.
\nThis code clears the matplotlib output, if matplotlib has been loaded.
", "definition": "if(code_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n plt.clf() \n\"\"\"),\n \"\"\n)"}, {"name": "matplotlib_postamble", "description": "A hack to show any figures produced with matplotlib in the stdout.
", "definition": "switch(\ncode_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n fig = plt.gcf()\n if fig.get_axes():\n fig.savefig(sys.stdout, format='svg')\n\"\"\"),\n \"\"\n)"}, {"name": "matplotlib_result", "description": "The result of running the matplotlib hack.
", "definition": "code_result[5]"}, {"name": "matplotlib_feedback", "description": "Feedback from the matplotlib hack: if a figure is produced, it's displayed as SVG here.
", "definition": "switch(\ncode_language=\"pyodide\",\n assert(matplotlib_result[\"stdout\"]=\"\",\n feedback(matplotlib_result[\"stdout\"])\n ),\n \"\"\n)\n\n"}, {"name": "images", "description": "Any images produced by the code blocks.
", "definition": "flatten(map(\n get(r,\"images\",[]),\n r,\n code_result\n))"}, {"name": "show_images", "description": "Show the images produced by the code.
", "definition": "assert(len(images)=0 or not settings[\"show_stdout\"],\n feedback(\"Your code produced the following {pluralise(len(images),'image','images')}:\");\n map(\n feedback(html(x)),\n x,\n images\n )\n)"}], "settings": [{"name": "show_input_hint", "label": "Show the input hint?", "help_url": "", "hint": "", "input_type": "checkbox", "default_value": true}, {"name": "code_language", "label": "Code language", "help_url": "", "hint": "The language that the student's code will be written in.", "input_type": "dropdown", "default_value": "pyodide", "choices": [{"value": "pyodide", "label": "Python"}, {"value": "webr", "label": "R"}]}, {"name": "correct_answer", "label": "Correct answer", "help_url": "", "hint": "A correct answer to the part.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "correct_answer_subvars", "label": "Substitute question variables into the correct answer?", "help_url": "", "hint": "If ticked, then JME expressions between curly braces will be evaluated and substituted into the correct answer.studentAnswer
.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "preamble", "label": "Preamble", "help_url": "", "hint": "This code is run before the student's code. Define anything that the student's code or your tests need.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "postamble", "label": "Postamble", "help_url": "", "hint": "This code is run after the student's code but before the validation and unit tests.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "postamble_feedback_whitespace", "label": "Format postamble output as code?", "help_url": "", "hint": "If ticked, any output produced by the postamble will be formatted in monospace font, with whitespace preserved. If not ticked, it'll be presented as prose text or HTML.", "input_type": "checkbox", "default_value": false}, {"name": "tests", "label": "Marking tests", "help_url": "", "hint": "A list of tests used to mark the student's answer.A list of tests used to validate that the student's code is acceptable.
For marking tests, each item is a list of three values: the name of the test, its weight, and the code to run. For validation tests, each item is a list of two strings: the name of the test and the code to run.
The equation
\n\\[x(t)=Ae^{−kt}\\cos(t)\\]
\nrepresents the displacement x(t) of an oscillator, such as a pendulum or spring, with initial amplitude $A$, dying away due to the damping effect of e.g. air resistance, which is quantified by the parameter $k$.
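\nIn NumPy this displacement is a direct one-line translation (a sketch, assuming an array t and values for A and k have already been defined):
x = A * np.exp(-k * t) * np.cos(t)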
", "advice": "This could look like the following:
\nimport matplotlib.pyplot as plt", "rulesets": {}, "builtin_constants": {"e": true, "pi,\u03c0": true, "i": true}, "constants": [], "variables": {}, "variablesTest": {"condition": "", "maxRuns": 100}, "ungrouped_variables": [], "variable_groups": [], "functions": {}, "preamble": {"js": "", "css": ""}, "parts": [{"type": "1_n_2", "useCustomName": false, "customName": "", "marks": 0, "scripts": {}, "customMarkingAlgorithm": "", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "prompt": "
import numpy as np
# Set parameters
k = 0.1
A = 2
# Create an array for t
t = np.linspace(0,40,200)
# Use t to create arrays for x and xk0
x = A*np.exp(-k*t)*np.cos(t)
xk0 = A*np.cos(t)
# first figure: with damping
plt.figure()
plt.plot(t,x)
plt.ylabel('with damping')
plt.xlabel('t')
# second figure: no damping
plt.figure()
plt.plot(t,xk0)
plt.ylabel('no damping')
plt.xlabel('t')
This question concerns creating two figures. What is the correct code to produce two separate pairs of axes?
", "minMarks": 0, "maxMarks": 0, "shuffleChoices": true, "displayType": "radiogroup", "displayColumns": 0, "showCellAnswerState": true, "choices": ["import matplotlib.pyplot as plt
\nplt.figure()
plt.plot()
plt.figure()
plt.plot()
", "import matplotlib.pyplot as plt
plt.plot()
plt.plot()
", "\nimport matplotlib.pyplot as
plt\nplt.figure()
plt.plot()
plt.plot()\n
"], "matrix": ["1", 0, 0], "distractors": ["", "", ""]}, {"type": "gapfill", "useCustomName": false, "customName": "", "marks": 0, "scripts": {}, "customMarkingAlgorithm": "", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "prompt": "In the absence of any damping we have $k = 0$. Let's call this case $x_{k0}$. Give an expression for $x_{k0}(t)$
\n$x_{k0}(t) = $ [[0]]
", "gaps": [{"type": "jme", "useCustomName": false, "customName": "", "marks": 1, "scripts": {}, "customMarkingAlgorithm": "", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "answer": "A*cos(t)", "showPreview": true, "checkingType": "absdiff", "checkingAccuracy": 0.001, "failureRate": 1, "vsetRangePoints": 5, "vsetRange": [0, 1], "checkVariableNames": false, "singleLetterVariables": false, "allowUnknownFunctions": false, "implicitFunctionComposition": false, "caseSensitive": false, "valuegenerators": [{"name": "a", "value": ""}, {"name": "t", "value": ""}]}], "sortAnswers": false}, {"type": "mark-code-3", "useCustomName": false, "customName": "", "marks": 1, "scripts": {}, "customMarkingAlgorithm": "", "extendBaseMarkingAlgorithm": true, "unitTests": [], "showCorrectAnswer": true, "showFeedbackIcon": true, "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "nextParts": [], "suggestGoingBack": false, "adaptiveMarkingPenalty": 0, "exploreObjective": null, "prompt": "Use the NumPy function linspace
to create an array t that contains $200$ values in the range $[0,40]$.
Use t to create arrays x and xk0 that contain values for $x(t)$ and $x_{k0}(t)$ in the case $A = 2$ and $k = 0.1$.
Create a plot of $x(t)$ and $x_{k0}(t)$ on separate axes, laid out as in part a). You should observe something like this:
\n"}], "partsMode": "all", "maxMarks": 0, "objectives": [], "penalties": [], "objectiveVisibility": "always", "penaltyVisibility": "always", "type": "question"}]}], "allowPrinting": true, "navigation": {"allowregen": true, "reverse": true, "browse": true, "allowsteps": true, "showfrontpage": false, "showresultspage": "oncompletion", "navigatemode": "sequence", "onleave": {"action": "none", "message": ""}, "preventleave": true, "startpassword": "", "allowAttemptDownload": false, "downloadEncryptionKey": ""}, "timing": {"allowPause": true, "timeout": {"action": "none", "message": ""}, "timedwarning": {"action": "none", "message": ""}}, "feedback": {"showactualmark": true, "showtotalmark": true, "showanswerstate": true, "allowrevealanswer": true, "advicethreshold": 0, "intro": "", "end_message": "", "reviewshowscore": true, "reviewshowfeedback": true, "reviewshowexpectedanswer": true, "reviewshowadvice": true, "feedbackmessages": [], "enterreviewmodeimmediately": true, "showexpectedanswerswhen": "inreview", "showpartfeedbackmessageswhen": "always", "showactualmarkwhen": "always", "showtotalmarkwhen": "always", "showanswerstatewhen": "always", "showadvicewhen": "inreview"}, "diagnostic": {"knowledge_graph": {"topics": [], "learning_objectives": []}, "script": "diagnosys", "customScript": ""}, "type": "exam", "contributors": [{"name": "Will Rushworth", "profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/20560/"}], "extensions": ["mas-codeassess", "postmessage-to-parent", "programming"], "custom_part_types": [{"source": {"pk": 195, "author": {"name": "Christian Lawson-Perfect", "pk": 7}, "edit_page": "/part_type/195/edit"}, "name": "Code", "short_name": "mark-code-3", "description": "Mark code provided by the student by running it and a series of validation and marking tests.
\nThe validation tests are used to reject an answer if the student has misunderstood the task, for example if they haven't defined a required variable or function.
\nMarking tests check properties of the student's code. Each test awards a proportion of the available credit if it is passed.
\nYou can optionally show the student the STDOUT and/or STDERR when running their code.
\nYou can give a preamble and postamble which are run before and after the student's code, and also modify the student's code before running it.
", "help_url": "", "input_widget": "code-editor", "input_options": {"correctAnswer": "if(settings[\"correct_answer_subvars\"],\n render(settings[\"correct_answer\"])\n,\n settings[\"correct_answer\"]\n)", "hint": {"static": false, "value": "\"Write \"+capitalise(language_synonym(settings[\"code_language\"]))+\" code\""}, "language": {"static": false, "value": "language_synonym(settings[\"code_language\"])"}, "placeholder": {"static": false, "value": "if(settings[\"correct_answer_subvars\"],\n render(settings[\"placeholder\"])\n,\n settings[\"placeholder\"]\n)"}, "theme": {"static": true, "value": "textmate"}}, "can_be_gap": true, "can_be_step": true, "marking_script": "mark:\napply(main_error);\napply(show_images);\napply(matplotlib_feedback);\napply(postamble_feedback);\napply(validation_test_feedback);\napply(marking_test_feedback)\n\ninterpreted_answer:\nstudentAnswer\n\nmain_result:\ncode_result[3]\n\nmarking_results:\ncode_result[6..(len(settings[\"tests\"])+6)]\n\nvalidation_results:\ncode_result[(len(settings[\"tests\"])+6)..len(code_result)]\n\nmain_error:\nassert(main_stdout=\"\" or not settings[\"show_stdout\"],\n feedback(\"Your code produced this output:{escape_html(main_stdout)}\")\n);\nassert(main_result[\"success\"],\n warn(\"\"\"There was an error in your code.\"\"\");\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in your code:
{escape_html(main_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in your code.\")\n )\n)\n\nmarking_test_feedback:\nmap(\n let(\n [name,weight,code], test,\n header, \"Test: {name} \",\n if(r[\"success\"],\n let(\n result, r[\"result\"],\n max_credit, weight/total_weight,\n credit, if(result isa \"number\", result, award(1,result)),\n switch(\n credit=0, negative_feedback(header+\"was not passed.\"),\n credit=1, add_credit(max_credit, header+\"was passed.\"),\n add_credit(credit*max_credit, header+\"was partially passed.\")\n )\n )\n ,\n if(settings[\"show_marking_errors\"],\n negative_feedback(\"\"\"There was an error:
{escape_html(r[\"stderr\"])}\"\"\")\n ,\n negative_feedback(header+\"was not passed.\")\n )\n )\n ),\n [test,r],\n zip(settings[\"tests\"],marking_results)\n)\n\nvalidation_test_feedback:\nmap(\n let([name,code], test,\n if(r[\"success\"],\n if(r[\"result\"],\n true\n ,\n warn(\"\"\"Your code failed the test {name}.\"\"\");\n fail(\"\"\"Your code failed the test {name}.\"\"\");false\n )\n ,\n warn(\"\"\"There was an error running the test {name}.\"\"\");\n fail(\"\"\"There was an error running the test {name}:
{escape_html(r[\"stderr\"])}\"\"\")\n )\n ),\n [test,r],\n zip(settings[\"validation_tests\"],validation_results)\n)\n\ntotal_weight:\nsum(map(weight,[name,weight,code],settings[\"tests\"]))\n\npre_submit:\n[run_code(code_language,\n [\n matplotlib_preamble,\n variables_as_code(language_synonym(code_language), settings[\"variables\"]),\n render(settings[\"preamble\"]),\n if(trim(settings[\"modifier\"])=\"\", studentAnswer, eval(expression(settings[\"modifier\"]))),\n render(settings[\"postamble\"]),\n matplotlib_postamble\n ]\n +map(code,[name,marks,code],settings[\"tests\"])\n +map(code,[name,code],settings[\"validation_tests\"])\n)]\n\ncode_result:\npre_submit[\"code_result\"]\n\nmain_stdout:\nsafe(main_result[\"stdout\"])\n\ncode_language:\nsettings[\"code_language\"]\n\npreamble_result:\ncode_result[2]\n\npreamble_stderr:\npreamble_result[\"stderr\"]\n\npostamble_result:\ncode_result[4]\n\npostamble_stderr:\npostamble_result[\"stderr\"]\n\npostamble_feedback:\nassert(postamble_result[\"stdout\"]=\"\",\n feedback(\n if(settings[\"postamble_feedback_whitespace\"],\n html(\"\"\"
{escape_html(postamble_result[\"stdout\"])}\"\"\")\n ,\n postamble_result[\"stdout\"]\n )\n )\n);\nassert(postamble_result[\"success\"],\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in the marking routine postamble:
{escape_html(postamble_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in the marking routine postamble.\")\n )\n)\n\nmatplotlib_preamble:\nif(code_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n plt.clf() \n\"\"\"),\n \"\"\n)\n\nmatplotlib_postamble:\nswitch(\ncode_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n fig = plt.gcf()\n if fig.get_axes():\n fig.savefig(sys.stdout, format='svg')\n\"\"\"),\n \"\"\n)\n\nmatplotlib_result:\ncode_result[5]\n\nmatplotlib_feedback:\nswitch(\ncode_language=\"pyodide\",\n assert(matplotlib_result[\"stdout\"]=\"\",\n feedback(matplotlib_result[\"stdout\"])\n ),\n \"\"\n)\n\n\n\nimages:\nflatten(map(\n get(r,\"images\",[]),\n r,\n code_result\n))\n\nshow_images:\nassert(len(images)=0 or not settings[\"show_stdout\"],\n feedback(\"Your code produced the following {pluralise(len(images),'image','images')}:\");\n map(\n feedback(html(x)),\n x,\n images\n )\n)", "marking_notes": [{"name": "mark", "description": "This is the main marking note. It should award credit and provide feedback based on the student's answer.", "definition": "apply(main_error);\napply(show_images);\napply(matplotlib_feedback);\napply(postamble_feedback);\napply(validation_test_feedback);\napply(marking_test_feedback)"}, {"name": "interpreted_answer", "description": "A value representing the student's answer to this part.", "definition": "studentAnswer"}, {"name": "main_result", "description": "
The result of running the student's code and the preamble, without any tests.
\nNormally used to detect errors in the student's code.
", "definition": "code_result[3]"}, {"name": "marking_results", "description": "The results of running the marking tests.
", "definition": "code_result[6..(len(settings[\"tests\"])+6)]"}, {"name": "validation_results", "description": "The results of running the validation tests.
", "definition": "code_result[(len(settings[\"tests\"])+6)..len(code_result)]"}, {"name": "main_error", "description": "Show STDOUT if allowed.
\nCheck the student's code runs on its own. Fail if there was an error, and show STDERR if allowed.
", "definition": "assert(main_stdout=\"\" or not settings[\"show_stdout\"],\n feedback(\"Your code produced this output:{escape_html(main_stdout)}\")\n);\nassert(main_result[\"success\"],\n warn(\"\"\"There was an error in your code.\"\"\");\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in your code:
{escape_html(main_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in your code.\")\n )\n)"}, {"name": "marking_test_feedback", "description": "
Feedback on the marking tests. For each test, if the test was passed then add the corresponding amount of credit. If there was an error, show the error.
", "definition": "map(\n let(\n [name,weight,code], test,\n header, \"Test: {name} \",\n if(r[\"success\"],\n let(\n result, r[\"result\"],\n max_credit, weight/total_weight,\n credit, if(result isa \"number\", result, award(1,result)),\n switch(\n credit=0, negative_feedback(header+\"was not passed.\"),\n credit=1, add_credit(max_credit, header+\"was passed.\"),\n add_credit(credit*max_credit, header+\"was partially passed.\")\n )\n )\n ,\n if(settings[\"show_marking_errors\"],\n negative_feedback(\"\"\"There was an error:{escape_html(r[\"stderr\"])}\"\"\")\n ,\n negative_feedback(header+\"was not passed.\")\n )\n )\n ),\n [test,r],\n zip(settings[\"tests\"],marking_results)\n)"}, {"name": "validation_test_feedback", "description": "
Give feedback on the validation tests. If any of them are not passed, the student's answer is invalid.
", "definition": "map(\n let([name,code], test,\n if(r[\"success\"],\n if(r[\"result\"],\n true\n ,\n warn(\"\"\"Your code failed the test {name}.\"\"\");\n fail(\"\"\"Your code failed the test {name}.\"\"\");false\n )\n ,\n warn(\"\"\"There was an error running the test {name}.\"\"\");\n fail(\"\"\"There was an error running the test {name}:{escape_html(r[\"stderr\"])}\"\"\")\n )\n ),\n [test,r],\n zip(settings[\"validation_tests\"],validation_results)\n)"}, {"name": "total_weight", "description": "
The sum of the weights of the marking tests. Each test's weight is divided by this to produce a proportion of the available credit.
", "definition": "sum(map(weight,[name,weight,code],settings[\"tests\"]))"}, {"name": "pre_submit", "description": "The code blocks to run.
\nIn order, they are: the matplotlib preamble, the question variables, the part's preamble, the student's code (possibly modified), the part's postamble, the matplotlib postamble, then each marking test and each validation test.
\nThe results of the code blocks are returned as a list with an entry corresponding to each block of code.
", "definition": "pre_submit[\"code_result\"]"}, {"name": "main_stdout", "description": "The stdout from the student's code.
", "definition": "safe(main_result[\"stdout\"])"}, {"name": "code_language", "description": "The language the code is written in. Either \"pyodide\" (Python) or \"webr\" (R)
", "definition": "settings[\"code_language\"]"}, {"name": "preamble_result", "description": "The result of running the preamble block.
", "definition": "code_result[2]"}, {"name": "preamble_stderr", "description": "The STDERR produced by the preamble block.
", "definition": "preamble_result[\"stderr\"]"}, {"name": "postamble_result", "description": "The result of running the postamble.
", "definition": "code_result[4]"}, {"name": "postamble_stderr", "description": "The STDERR produced by the postamble block.
", "definition": "postamble_result[\"stderr\"]"}, {"name": "postamble_feedback", "description": "Show the STDOUT from the postamble, if there is any.
", "definition": "assert(postamble_result[\"stdout\"]=\"\",\n feedback(\n if(settings[\"postamble_feedback_whitespace\"],\n html(\"\"\"{escape_html(postamble_result[\"stdout\"])}\"\"\")\n ,\n postamble_result[\"stdout\"]\n )\n )\n);\nassert(postamble_result[\"success\"],\n if(settings[\"show_stderr\"],\n fail(\"\"\"There was an error in the marking routine postamble:
{escape_html(postamble_result[\"stderr\"])}\"\"\")\n ,\n fail(\"There was an error in the marking routine postamble.\")\n )\n)"}, {"name": "matplotlib_preamble", "description": "
Preamble for a hack to ensure that figures produced by matplotlib in Python are displayed.
\nThis code clears the matplotlib output, if matplotlib has been loaded.
", "definition": "if(code_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n plt.clf() \n\"\"\"),\n \"\"\n)"}, {"name": "matplotlib_postamble", "description": "A hack to show any figures produced with matplotlib in the stdout.
", "definition": "switch(\ncode_language=\"pyodide\",\n safe(\"\"\"\nimport sys\nif 'matplotlib' in sys.modules:\n import matplotlib.pyplot as plt\n fig = plt.gcf()\n if fig.get_axes():\n fig.savefig(sys.stdout, format='svg')\n\"\"\"),\n \"\"\n)"}, {"name": "matplotlib_result", "description": "The result of running the matplotlib hack.
", "definition": "code_result[5]"}, {"name": "matplotlib_feedback", "description": "Feedback from the matplotlib hack: if a figure is produced, it's displayed as SVG here.
", "definition": "switch(\ncode_language=\"pyodide\",\n assert(matplotlib_result[\"stdout\"]=\"\",\n feedback(matplotlib_result[\"stdout\"])\n ),\n \"\"\n)\n\n"}, {"name": "images", "description": "Any images produced by the code blocks.
", "definition": "flatten(map(\n get(r,\"images\",[]),\n r,\n code_result\n))"}, {"name": "show_images", "description": "Show the images produced by the code.
", "definition": "assert(len(images)=0 or not settings[\"show_stdout\"],\n feedback(\"Your code produced the following {pluralise(len(images),'image','images')}:\");\n map(\n feedback(html(x)),\n x,\n images\n )\n)"}], "settings": [{"name": "show_input_hint", "label": "Show the input hint?", "help_url": "", "hint": "", "input_type": "checkbox", "default_value": true}, {"name": "code_language", "label": "Code language", "help_url": "", "hint": "The language that the student's code will be written in.", "input_type": "dropdown", "default_value": "pyodide", "choices": [{"value": "pyodide", "label": "Python"}, {"value": "webr", "label": "R"}]}, {"name": "correct_answer", "label": "Correct answer", "help_url": "", "hint": "A correct answer to the part.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "correct_answer_subvars", "label": "Substitute question variables into the correct answer?", "help_url": "", "hint": "If ticked, then JME expressions between curly braces will be evaluated and substituted into the correct answer.studentAnswer
.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "preamble", "label": "Preamble", "help_url": "", "hint": "This code is run before the student's code. Define anything that the student's code or your tests need.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "postamble", "label": "Postamble", "help_url": "", "hint": "This code is run after the student's code but before the validation and unit tests.", "input_type": "code", "default_value": "", "evaluate": false}, {"name": "postamble_feedback_whitespace", "label": "Format postamble output as code?", "help_url": "", "hint": "If ticked, any output produced by the postamble will be formatted in monospace font, with whitespace preserved. If not ticked, it'll be presented as prose text or HTML.", "input_type": "checkbox", "default_value": false}, {"name": "tests", "label": "Marking tests", "help_url": "", "hint": "A list of tests used to mark the student's answer.A list of tests used to validate that the student's code is acceptable.
For marking tests, each item is a list of three values: the name of the test, its weight, and the code to run. For validation tests, each item is a list of two strings: the name of the test and the code to run.