This documentation has been archived and is based on SonarQube v6.4.
Please refer to the latest documentation.



This is not an exhaustive list of metrics. For the full list, consult the metrics search web service on your SonarQube instance.
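For instance, the full list of metric definitions can be retrieved from the api/metrics/search web service. The sketch below is an illustration only: it assumes a SonarQube instance reachable at http://localhost:9000 with anonymous access; adjust the base URL and add authentication if your instance requires it.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListMetrics {
    public static void main(String[] args) throws Exception {
        // Assumption: a local SonarQube instance that allows anonymous access.
        String url = "http://localhost:9000/api/metrics/search?ps=500";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Prints the JSON list of metric definitions (key, name, type, domain, ...).
        System.out.println(response.body());
    }
}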

Complexity

Complexity (complexity):

The complexity calculated based on the number of paths through the code. Whenever the control flow of a function splits, the complexity counter is incremented by one. Each function has a minimum complexity of 1. The calculation varies slightly by language, because keywords and functionality differ. An illustrative example is shown after the complexity metrics below.


Complexity / class (class_complexity): Average complexity by class.
Complexity / file (file_complexity): Average complexity by file.
Complexity / method (function_complexity): Average complexity by function.
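As an illustration only (the exact set of counted keywords depends on the language analyzer), the following Java method would typically be measured with a complexity of at least 3:

class ComplexityExample {
    // Illustrative example; not taken from any particular SonarQube analyzer.
    int firstSmallPositive(int[] values) {  // +1: the function itself
        for (int v : values) {              // +1: 'for' splits the control flow
            if (v > 0 && v < 100) {         // +1: 'if' (some analyzers also count '&&')
                return v;
            }
        }
        return -1;
    }
}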

Documentation

Comment lines (comment_lines):

Number of lines containing either comment or commented-out code.

Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.

The following piece of code contains 9 comment lines:

/**                                    +0 => empty comment line
 *                                     +0 => empty comment line
 * This is my documentation            +1 => significant comment
 * although I don't                    +1 => significant comment
 * have much                           +1 => significant comment
 * to say                              +1 => significant comment
 *                                     +0 => empty comment line
 ***************************           +0 => non-significant comment
 *                                     +0 => empty comment line
 * blabla...                           +1 => significant comment
 */                                    +0 => empty comment line
 
/**                                    +0 => empty comment line
 * public String foo() {               +1 => commented-out code
 *   System.out.println(message);      +1 => commented-out code
 *   return message;                   +1 => commented-out code
 * }                                   +1 => commented-out code
 */                                    +0 => empty comment line


Comments (%) (comment_lines_density):

Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100

With such a formula:

  • 50% means that the number of lines of code equals the number of comment lines
  • 100% means that the file only contains comment lines
Public documented API (%) (public_documented_api_density): Density of public documented API = (Public API - Public undocumented API) / Public API * 100
Public undocumented API (public_undocumented_api): Public API without a comment header.
Commented-out LOC (commented_out_code_lines): Commented-out lines of code.

Duplications

Duplicated blocks (duplicated_blocks):

Number of duplicated blocks of lines.

For a block of code to be considered duplicated:

  • Non-Java projects:
    • There should be at least 100 successive and duplicated tokens.
    • Those tokens should be spread over at least:
      • 30 lines of code for COBOL
      • 20 lines of code for ABAP
      • 10 lines of code for other languages

  • Java projects:
    • There should be at least 10 successive and duplicated statements, whatever the number of tokens and lines.

Differences in indentation as well as in string literals are ignored while detecting duplications.


Duplicated files (duplicated_files): Number of files involved in duplications.
Duplicated lines (duplicated_lines): Number of lines involved in duplications.
Duplicated lines (%) (duplicated_lines_density): Density of duplication = Duplicated lines / Lines * 100

Issues

New issues (new_violations): Number of new issues.

New xxxxx issues (new_xxxxx_violations): Number of new issues with severity xxxxx, where xxxxx is blocker, critical, major, minor or info.

Issues (violations): Number of issues.

xxxxx issues (xxxxx_violations): Number of issues with severity xxxxx, where xxxxx is blocker, critical, major, minor or info.

False positive issues (false_positive_issues): Number of false positive issues.
Open issues (open_issues): Number of issues whose status is Open.
Confirmed issues (confirmed_issues): Number of issues whose status is Confirmed.
Reopened issues (reopened_issues): Number of issues whose status is Reopened.

Severity

Blocker: Operational/security risk. This issue might make the whole application unstable in production. Ex: calling the garbage collector, not closing a socket, etc.
Critical: Operational/security risk. This issue might lead to unexpected behavior in production without impacting the integrity of the whole application. Ex: NullPointerException, badly caught exceptions, lack of unit tests, etc.
Major: This issue might have a substantial impact on productivity. Ex: too complex methods, package cycles, etc.
Minor: This issue might have a potential and minor impact on productivity. Ex: naming conventions, a Finalizer that does nothing but call the superclass finalizer, etc.
Info: Unknown or not yet well-defined security risk or impact on productivity.

Maintainability 

Code Smells (code_smells): Number of code smells.
New Code Smells (new_code_smells): Number of new code smells.
Maintainability Rating (formerly SQALE Rating) (sqale_rating):

Rating given to your project related to the value of your Technical Debt Ratio. The default Maintainability Rating grid is:

A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1

The Maintainability Rating scale can alternately be stated as follows: if the outstanding remediation cost is

  • <= 5% of the time that has already gone into the application, the rating is A
  • between 6 and 10%, the rating is a B
  • between 11 and 20%, the rating is a C
  • between 21 and 50%, the rating is a D
  • anything over 50% is an E
Technical Debt (sqale_index): Effort to fix all maintainability issues. The measure is stored in minutes in the database.
Technical Debt on new code (new_technical_debt): Technical Debt of new code.
Technical Debt Ratio (sqale_debt_ratio):

Ratio between the cost to fix the software and the cost to develop it. The Technical Debt Ratio formula is:

	Remediation cost / Development cost

Which can be restated as:

	Remediation cost / (Cost to develop 1 line of code * Number of lines of code)

The value of the cost to develop a line of code is 0.06 days. A worked example follows this table.
Technical Debt Ratio on new code (new_sqale_debt_ratio): Ratio between the cost to develop the code changed in the leak period and the cost of the issues linked to it.
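The sketch below is a worked example only (it is not part of SonarQube). It uses the default rating grid and the default cost of 0.06 days per line of code to compute a Technical Debt Ratio and map it to a Maintainability Rating; the lines-of-code and remediation figures are made up.

// Worked example; constants mirror the defaults described above.
public class MaintainabilityRatingExample {
    public static void main(String[] args) {
        double costPerLineInDays = 0.06;      // default cost to develop one line of code
        long linesOfCode = 25_000;            // project size (made-up figure)
        double remediationCostInDays = 120;   // technical debt, converted from minutes to days

        double developmentCost = costPerLineInDays * linesOfCode;    // 1500 days
        double debtRatio = remediationCostInDays / developmentCost;  // 0.08, i.e. 8%

        // Default Maintainability Rating grid: A <= 5%, B <= 10%, C <= 20%, D <= 50%, E > 50%
        char rating = debtRatio <= 0.05 ? 'A'
                    : debtRatio <= 0.10 ? 'B'
                    : debtRatio <= 0.20 ? 'C'
                    : debtRatio <= 0.50 ? 'D' : 'E';

        System.out.printf("Debt ratio = %.0f%%, Maintainability Rating = %c%n",
                debtRatio * 100, rating);   // Debt ratio = 8%, Maintainability Rating = B
    }
}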

Quality Gates


Quality Gate Status (alert_status): State of the Quality Gate associated with your project. Possible values are: ERROR, WARN, OK.
Quality Gate Details (quality_gate_details): For each condition of your Quality Gate, indicates which condition is failing and which is not.
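Like any metric in this list, the Quality Gate status can be read back through the measures web service. The sketch below is illustrative only: it assumes a local instance with anonymous access and a hypothetical project whose key is my:project; adjust both as needed.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QualityGateStatusExample {
    public static void main(String[] args) throws Exception {
        // Assumptions: local instance, anonymous access, hypothetical project key "my:project".
        String url = "http://localhost:9000/api/measures/component"
                + "?componentKey=my:project&metricKeys=alert_status";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The JSON response contains a measure for "alert_status" whose value is OK, WARN or ERROR.
        System.out.println(response.body());
    }
}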

Reliability

Bugs (bugs): Number of bugs.
New Bugs (new_bugs): Number of new bugs.
Reliability Rating (reliability_rating):

A = 0 Bugs
B = at least 1 Minor Bug
C = at least 1 Major Bug
D = at least 1 Critical Bug
E = at least 1 Blocker Bug

Reliability remediation effort (reliability_remediation_effort): Effort to fix all bug issues. The measure is stored in minutes in the database.
Reliability remediation effort on new code (new_reliability_remediation_effort): Same as Reliability remediation effort, but on the code changed in the leak period.

Security

Vulnerabilities (vulnerabilities): Number of vulnerabilities.
New Vulnerabilities (new_vulnerabilities): Number of new vulnerabilities.
Security Rating (security_rating):

A = 0 Vulnerabilities
B = at least 1 Minor Vulnerability
C = at least 1 Major Vulnerability
D = at least 1 Critical Vulnerability
E = at least 1 Blocker Vulnerability

Security remediation effort (security_remediation_effort): Effort to fix all vulnerability issues. The measure is stored in minutes in the database.
Security remediation effort on new code (new_security_remediation_effort): Same as Security remediation effort, but on the code changed in the leak period.

 

 

Size

Classes (classes): Number of classes (including nested classes, interfaces, enums and annotations).
Directories (directories): Number of directories.
Files (files): Number of files.
Lines (lines): Number of physical lines (number of carriage returns).
Lines of code (ncloc): Number of physical lines that contain at least one character which is neither a whitespace, nor a tabulation, nor part of a comment.


Lines of code per language (ncloc_language_distribution): Non-commenting lines of code distributed by language.
Methods (functions): Number of functions. Depending on the language, a function is either a function, a method or a paragraph.


Projects (projects): Number of projects in a view.
Public API (public_api): Number of public classes + number of public functions + number of public properties.


Statements (statements): Number of statements.


Tests

Condition coverage (branch_coverage):

On each line of code containing boolean expressions, condition coverage answers the following question: 'Has each boolean expression been evaluated both to true and to false?'. It is the density of possible conditions in flow control structures that have been followed during unit test execution.

Condition coverage = (CT + CF) / (2 * B)

where

CT = number of conditions that have been evaluated to 'true' at least once
CF = number of conditions that have been evaluated to 'false' at least once
B = total number of conditions
Condition coverage on new code (new_branch_coverage): Identical to Condition coverage but restricted to new / updated source code.
Condition coverage hits (branch_coverage_hits_data): List of covered conditions.
Conditions by line (conditions_by_line): Number of conditions by line.
Covered conditions by line (covered_conditions_by_line): Number of covered conditions by line.
Coverage (coverage):

A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the following question: how much of the source code has been covered by the unit tests? A worked example follows this table.

Coverage = (CT + CF + LC) / (2 * B + EL)

where

CT = number of conditions that have been evaluated to 'true' at least once
CF = number of conditions that have been evaluated to 'false' at least once
LC = number of covered lines (lines_to_cover - uncovered_lines)
B = total number of conditions
EL = total number of executable lines (lines_to_cover)
Coverage on new code (new_coverage): Identical to Coverage but restricted to new / updated source code.

Line coverage (line_coverage):

On a given line of code, line coverage answers the following question: has this line of code been executed during the execution of the unit tests? It is the density of lines covered by unit tests:

Line coverage = LC / EL

where

LC = number of covered lines (lines_to_cover - uncovered_lines)
EL = total number of executable lines (lines_to_cover)

Line coverage on new code (new_line_coverage): Identical to Line coverage but restricted to new / updated source code.
Line coverage hits (coverage_line_hits_data): List of covered lines.
Lines to cover (lines_to_cover): Number of lines of code which could be covered by unit tests (for example, blank lines or full comment lines are not considered lines to cover).
Lines to cover on new code (new_lines_to_cover): Identical to Lines to cover but restricted to new / updated source code.
Skipped unit tests (skipped_tests): Number of skipped unit tests.
Uncovered conditions (uncovered_conditions): Number of conditions which are not covered by unit tests.
Uncovered conditions on new code (new_uncovered_conditions): Identical to Uncovered conditions but restricted to new / updated source code.
Uncovered lines (uncovered_lines): Number of lines of code which are not covered by unit tests.
Uncovered lines on new code (new_uncovered_lines): Identical to Uncovered lines but restricted to new / updated source code.
Unit tests (tests): Number of unit tests.
Unit tests duration (test_execution_time): Time required to execute all the unit tests.
Unit test errors (test_errors): Number of unit tests that have failed.
Unit test failures (test_failures): Number of unit tests that have failed with an unexpected exception.
Unit test success density (%) (test_success_density): Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100
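As a worked example for the coverage formulas above (all figures are made up), consider a file with 100 executable lines, of which 85 are covered, and 20 conditions, of which 17 have been evaluated to true and 15 to false at least once:

// Worked example only; the figures do not come from a real analysis.
public class CoverageExample {
    public static void main(String[] args) {
        int executableLines = 100;  // EL (lines_to_cover)
        int coveredLines = 85;      // LC (lines_to_cover - uncovered_lines)
        int conditions = 20;        // B
        int evaluatedTrue = 17;     // CT
        int evaluatedFalse = 15;    // CF

        double lineCoverage = 100.0 * coveredLines / executableLines;                           // 85.0%
        double conditionCoverage = 100.0 * (evaluatedTrue + evaluatedFalse) / (2 * conditions); // 80.0%
        double coverage = 100.0 * (evaluatedTrue + evaluatedFalse + coveredLines)
                / (2 * conditions + executableLines);                                           // ~83.6%

        System.out.printf("line=%.1f%%, condition=%.1f%%, overall=%.1f%%%n",
                lineCoverage, conditionCoverage, coverage);
    }
}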

The same kinds of metrics exist for Integration test coverage and Overall test coverage (Unit tests + Integration tests).

Metrics on test execution do not exist for Integration tests and Overall tests.
