Grading
Overview
To pass the course (E) you need to pass the project assignment (6.0 credits) and the seminar series (1.5 credits). This includes:
- Hand in drafts 1, 2, and 3 (P/F)
- Pass (>=E) the project report, evaluated based on the criteria listed below (A-F)
- Pass the report reflection document (P/F)
- Pass the oral presentation (P/F)
- Attend all guest lectures and hand in reflections, or pass the compensation assignment (P/F)
If you miss the deadlines for the project report or the oral presentation, there will be make-up examination dates in the spring.
Project report grading
The project report will be the main source of evaluation and it will also determine your final grade. The report will be evaluated according to the criteria described in the Table below.
| Evaluation criterion and evaluation scale | Evaluation criterion description | Comments and examples |
| --- | --- | --- |
| Correctness | The risk calculations, conclusions, and other syntheses are made correctly. | The fusion of parameters should follow the described method or otherwise make sense and be motivated (e.g. it does not make sense to sum the attacker capability with a vulnerability score). The calculations must also match the developed models. |
| Consistency | The threat models follow the metamodel (Yacraf, normally). The models are consistent within each sub-model (within a phase) and between one another (across the phases). The same phenomenon is modelled in the same way or transparently transformed between models. The different models should be interconnected to form a single, unambiguous underlying model. Risk calculations are done according to the same framework for all scenarios and models. Terminology is used consistently (and correctly). | The consistency should be demonstrated; in particular, the report should not leave places where the consistency can be doubted or questioned. |
| Coverage | Given the decided scope, the models and analysis should cover it in a reasonably complete way. | E.g. apparent critical attack vectors should not be missing from the attack models, key components should not be missing from the system models, and obvious losses should not be missing from the business goals. The Yacraf metamodel should also be broadly/fully covered in the instance models. |
| Variance | The models cover a variety of phenomena. | Not all models/analyses should follow and repeat the same underlying logic, e.g. a model containing three web servers with very similar vulnerabilities. (The system may well have three similar servers, but with respect to this particular criterion they do not "add value".) You should demonstrate the ability to analyze a wide variety of security topics: different types of systems with different types of attack vectors and vulnerabilities. |
| Technical Detail | The models should be deep enough that they are not trivial or superficial. It should also be clear what the technical challenges, implementations, weaknesses, and attacks are. This criterion specifically addresses the Yacraf phases 2 and 4, with system and attack models. | A (too) high-level description of an attack vector would e.g. be "DoS web server <- deploy botnet <- buy botnet", with the corresponding high-level remedy "buy load balancer". For the system models, more detail means e.g. configurations of the inner logic of a component and its underlying platforms. Note that adding technical detail just for its own sake is not the purpose of this criterion; the details must also be well balanced (see the separate criterion) so that the technical detail of a system is relevant in relation to some addressed attack vector. |
| Balance | Balance the level of detail so that no part or aspect of the models and analysis gets a lot of attention while others get very little. | Balance should be maintained between the different Yacraf phases. It also applies to the different parts of the model: one part of the system or business should not be analysed in great detail while another part is only coarsely described. |
| Realism and motivation | The model input data should be realistic, but not real. Your model and analysis should be such that they could have described something real. Some things are straightforward and not questionable, while others need assumptions, interpretations, motivations, and external references to support the claims of the models. | In principle there are two ways of motivating such claims: either (trustworthy) references are put forward as support, or you put forward your own argumentation that convinces the reader of the realism. This can relate to anything from costs of loss events, to common system architecture solutions and technology stacks, to threat profiles and attack vectors. Strong external references and sources should be used, e.g. analysis according to certain MITRE ATT&CK techniques or a vulnerability class, and the use of threat emulation profiles. For the threat analysis there are many parameters that are qualitatively assessed (e.g. probability of action); for these assessments some form of reasoning/motivation is needed. Motivating why you are not diving deeper into something is a good thing, e.g. "we consider TLS 1.3 secure, because ...". |
| Assurance | The confidence in the final recommendations of the report should be high. | This relates to the number of different mitigation scenarios that are analysed, but also to how they are compared against each other in relation to how viable or costly they are to implement. The criterion also relates to the uncertainty that remains after the realism has been motivated; this should be explicitly described and reflected upon. It also includes that the scenarios are well balanced in comparison to each other with respect to detail, coverage, realism, etc. Finally, it should be clear that it is the analysis that motivates the recommendations. |
| Readability | The report must be well-structured, understandable, and easy to follow. | Regarding formatting and formalities, the report must for instance be free of spelling errors, inconsistent or incomprehensible sentences, and grammatical errors. It must have reasonably sized paragraphs and sentences. It must use numbered headings, and it must be easy to follow all internal references in the report. It must use figure/table captions and reference the figures/tables in the text (e.g. caption: "Figure 1. A business model"; in text: "In Figure 1 the business model is described"). |
Each criterion will be evaluated using an individual point scale, as listed in the Table. Each criterion can be assessed as Fail (-1) or Pass (>=0) and can earn up to a maximum number of points indicating performance with respect to that criterion; the maximum number of points per criterion is given in the Table. In total, a maximum of 14 points can be achieved, and the final grade will be set based on the following schema:
- A = 12-14
- B = 9-11
- C = 6-8
- D = 3-5
- E = 0-2
If any one criterion is assessed as Fail (-1), the assignment as a whole is failed, and you need to revise, resubmit, and pass a new evaluation at the make-up examination in order to pass the course.
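To make the schema concrete, here is a minimal Python sketch of how criterion points map to a final grade, assuming each criterion has already been scored; the example criterion scores are hypothetical and do not reflect the actual per-criterion maxima from the Table:

```python
def final_grade(criterion_points):
    """Map per-criterion points to a final grade for the project report.

    criterion_points: dict mapping criterion name -> points, where -1 means
    the criterion is assessed as Fail and 0 or more means Pass.
    """
    # A single failed criterion fails the assignment as a whole.
    if any(points == -1 for points in criterion_points.values()):
        return "F"

    total = sum(criterion_points.values())  # at most 14 points in total

    # Grade boundaries from the schema above.
    if total >= 12:
        return "A"   # 12-14
    if total >= 9:
        return "B"   # 9-11
    if total >= 6:
        return "C"   # 6-8
    if total >= 3:
        return "D"   # 3-5
    return "E"       # 0-2


# Hypothetical scores (not the real per-criterion maxima):
scores = {
    "Correctness": 2, "Consistency": 1, "Coverage": 2,
    "Variance": 1, "Technical Detail": 2, "Balance": 1,
    "Realism and motivation": 2, "Assurance": 1, "Readability": 1,
}
print(final_grade(scores))  # 13 points in total -> "A"
```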