Not every metric provided by SonarQube is covered here. To see the complete list of metrics available on your SonarQube instance, refer to the metrics search web service.
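For instance, the full metric catalogue can be retrieved from the api/metrics/search web service. Below is a minimal Java sketch; the server URL is a placeholder, and depending on your instance configuration the request may also need authentication (for example a user token):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsSearch {
    public static void main(String[] args) throws Exception {
        // Placeholder: replace with the address of your SonarQube instance.
        String baseUrl = "https://sonarqube.example.com";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                // api/metrics/search returns the metric definitions as JSON.
                .uri(URI.create(baseUrl + "/api/metrics/search"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```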
Complexity
Name | Key | Description |
---|---|---|
Complexity | complexity | The complexity calculated based on the number of paths through the code. Whenever the control flow of a function splits, the complexity counter is incremented by one. Each function has a minimum complexity of 1. Because keywords and functionalities differ from one language to another, this calculation can vary slightly between languages (see the example after this table). |
Complexity /class | class_complexity | Average complexity by class |
Complexity /file | file_complexity | Average complexity by file |
Complexity /method | function_complexity | Average complexity by function/method |
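The following is an illustration only, using a hypothetical Java method (the exact keywords that increment the counter vary by language). Counting 1 for the method itself plus 1 each for the if, the for loop and the ternary condition gives a complexity of 4:

```java
public class Grading {
    public String grade(int score) {            // +1: minimum complexity of the method
        if (score < 0) {                        // +1: branch
            throw new IllegalArgumentException("negative score");
        }
        for (int i = 0; i < 3; i++) {           // +1: loop
            score += i;
        }
        return score >= 60 ? "pass" : "fail";   // +1: conditional expression
    }
}
```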
Documentation
Name | Key | Description |
---|---|---|
Comment lines | comment_lines | Number of lines containing either a comment or commented-out code. Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines. The following piece of code contains 9 comment lines (see the example after this table). |
Comments (%) | comment_lines_density | Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100. With such a formula, 50% means that the number of comment lines equals the number of lines of code, and 100% means that the file contains only comment lines. |
Public documented API (%) | public_documented_api_density | Density of public documented API = (Public API - Public undocumented API) / Public API * 100 |
Public undocumented API | public_undocumented_api | Public API without a comment header. |
Commented-out LOC | commented_out_code_lines | Commented-out lines of code. |
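Below is a minimal illustrative Java snippet (a hypothetical example, not the original one from the SonarQube documentation). Counting only significant comment lines, and ignoring the bare /** and */ lines, it contains 9 comment lines:

```java
public class Greeter {
    /**
     * This is the best implementation
     * of the greeting feature
     * in the whole universe.
     */
    public void greet() {
        // This method does nothing fancy
        /* It simply prints
           a greeting message
           to the standard output. */
        System.out.println("Hello"); // say hi
        // and that's all
    }
}
```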
Duplications
Name | Key | Description |
---|---|---|
Duplicated blocks | duplicated_blocks | Number of duplicated blocks of lines. For a block of code to be considered as duplicated, language-specific thresholds on the number of successive duplicated tokens, lines or statements must be met. Differences in indentation as well as in string literals are ignored while detecting duplications. |
Duplicated files | duplicated_files | Number of files involved in duplications. |
Duplicated lines | duplicated_lines | Number of lines involved in duplications. |
Duplicated lines (%) | duplicated_lines_density | Density of duplication = Duplicated lines / Lines * 100 |
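For example, a project with 10,000 lines of which 500 are involved in duplications has duplicated_lines_density = 500 / 10,000 * 100 = 5%.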
Issues
Name | Key | Description |
---|---|---|
New issues | new_violations | Number of new issues. |
New xxxxx issues | new_xxxxx_violations | Number of new issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info. |
Issues | violations | Number of issues. |
xxxxx issues | xxxxx_violations | Number of issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info. |
False positive issues | false_positive_issues | Number of false positive issues.
Open issues | open_issues | Number of issues whose status is Open.
Confirmed issues | confirmed_issues | Number of issues whose status is Confirmed.
Reopened issues | reopened_issues | Number of issues whose status is Reopened.
Severity
Severity | Description |
---|---|
Blocker | Operational/security risk: This issue might make the whole application unstable in production. Ex: calling garbage collector, not closing a socket, etc. |
Critical | Operational/security risk: This issue might lead to an unexpected behavior in production without impacting the integrity of the whole application. Ex: NullPointerException, badly caught exceptions, lack of unit tests, etc. |
Major | This issue might have a substantial impact on productivity. Ex: too complex methods, package cycles, etc. |
Minor | This issue might have a potential and minor impact on productivity. Ex: naming conventions, Finalizer does nothing but call superclass finalizer, etc. |
Info | Unknown or not yet well defined security risk or impact on productivity. |
Maintainability
Name | Key | Description |
---|---|---|
Code Smells | code_smells | Number of code smells. |
New Code Smells | new_code_smells | Number of new code smells. |
Maintainability Rating (formerly SQALE Rating) | sqale_rating | Rating given to your project related to the value of your Technical Debt Ratio. The default Maintainability Rating grid is: A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1. The Maintainability Rating scale can be alternately stated by saying that if the outstanding remediation cost is at most 5% of the time already invested in the application, the rating is A; between 6 and 10%, a B; between 11 and 20%, a C; between 21 and 50%, a D; and anything over 50% is an E. |
Technical Debt | sqale_index | Effort to fix all maintainability issues. The measure is stored in minutes in the DB. |
Technical Debt on new code | new_technical_debt | Technical Debt of new code |
Technical Debt Ratio | sqale_debt_ratio | Ratio between the cost to develop the software and the cost to fix it. The Technical Debt Ratio formula is: Remediation cost / Development cost, which can be restated as: Remediation cost / (Cost to develop 1 line of code * Number of lines of code). The value of the cost to develop a line of code is 0.06 days (see the worked example after this table). |
Technical Debt Ratio on new code | new_sqale_debt_ratio | Ratio between the cost to develop the code changed in the leak period and the cost of the issues linked to it. |
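Worked example with hypothetical figures: a project with 2,500 lines of code has an estimated development cost of 2,500 * 0.06 = 150 days. If the remediation cost of its maintainability issues is 15 days, the Technical Debt Ratio is 15 / 150 = 10%, which falls in the 0.06-0.1 band and therefore yields a Maintainability Rating of B.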
Quality Gates
Name | Key | Description |
---|---|---|
Quality Gate Status | alert_status | State of the Quality Gate associated with your project. Possible values are: ERROR, WARN, OK |
Quality Gates Details | quality_gate_details | For all the conditions of your Quality Gate, you know which condition is failing and which is not. |
Reliability
Name | Key | Description |
---|---|---|
Bugs | bugs | Number of bugs. |
New Bugs | new_bugs | Number of new bugs. |
Reliability Rating | reliability_rating | A = 0 Bugs, B = at least 1 Minor Bug, C = at least 1 Major Bug, D = at least 1 Critical Bug, E = at least 1 Blocker Bug |
Reliability remediation effort | reliability_remediation_effort | Effort to fix all bug issues. The measure is stored in minutes in the DB. |
Reliability remediation effort on new code | new_reliability_remediation_effort | Same as Reliability remediation effort but on the code changed in the leak period. |
Security
Name | Key | Description |
---|---|---|
Vulnerabilities | vulnerabilities | Number of vulnerabilities. |
New Vulnerabilities | new_vulnerabilities | Number of new vulnerabilities. |
Security Rating | security_rating | A = 0 Vulnerabilities, B = at least 1 Minor Vulnerability, C = at least 1 Major Vulnerability, D = at least 1 Critical Vulnerability, E = at least 1 Blocker Vulnerability |
Security remediation effort | security_remediation_effort | Effort to fix all vulnerability issues. The measure is stored in minutes in the DB. |
Security remediation effort on new code | new_security_remediation_effort | Same as Security remediation effort but on the code changed in the leak period. |
Size
Metric | Key | Description |
---|---|---|
Classes | classes | Number of classes (including nested classes, interfaces, enums and annotations). |
Directories | directories | Number of directories. |
Files | files | Number of files. |
Lines | lines | Number of physical lines (number of carriage returns). |
Lines of code | ncloc | Number of physical lines that contain at least one character which is neither a whitespace, nor a tabulation, nor part of a comment (see the example after this table). |
Lines of code per language | ncloc_language_distribution | Non Commenting Lines of Code Distributed By Language |
Methods | functions | Number of functions. Depending on the language, a function is either a function or a method or a paragraph. |
Projects | projects | Number of projects in a view. |
Public API | public_api | Number of public Classes + number of public Functions + number of public Properties |
Statements | statements | Number of statements. |
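An illustrative Java file, assuming the counting rules stated above (exact figures may vary slightly by language analyzer): it has 12 physical lines, 8 lines of code, 1 comment line and 1 class; the blank lines and the comment-only line are not counted as lines of code.

```java
package demo;

/** Holds a single immutable coordinate. */
public class Point {
    private final int x;

    public Point(int x) { this.x = x; }

    public int x() {
        return x;
    }
}
```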
Tests
Metric | Key | Description |
---|---|---|
Condition coverage | branch_coverage | On each line of code containing some boolean expressions, the condition coverage answers the following question: 'Has each boolean expression been evaluated both to true and false?'. This is the density of possible conditions in flow control structures that have been followed during unit tests execution: Condition coverage = (CT + CF) / (2 * B), where CT = number of conditions that have been evaluated to 'true' at least once, CF = number of conditions that have been evaluated to 'false' at least once, B = total number of conditions. |
Condition coverage on new code | new_branch_coverage | Identical to Condition coverage but restricted to new / updated source code. |
Condition coverage hits | branch_coverage_hits_data | List of covered conditions. |
Conditions by line | conditions_by_line | Number of conditions by line. |
Covered conditions by line | covered_conditions_by_line | Number of covered conditions by line. |
Coverage | coverage | It is a mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the following question: how much of the source code has been covered by the unit tests? Coverage = (CT + CF + LC) / (2 * B + EL), where CT = conditions that have been evaluated to 'true' at least once, CF = conditions that have been evaluated to 'false' at least once, LC = covered lines (lines_to_cover - uncovered_lines), B = total number of conditions, EL = total number of executable lines (lines_to_cover). See the worked example after this table. |
Coverage on new code | new_coverage | Identical to Coverage but restricted to new / updated source code. | |
Line coverage | line_coverage | On a given line of code, Line coverage simply answers the following question: has this line of code been executed during the execution of the unit tests? It is the density of covered lines by unit tests: Line coverage = LC / EL, where LC = number of covered lines (lines_to_cover - uncovered_lines), EL = total number of executable lines (lines_to_cover). |
Line coverage on new code | new_line_coverage | Identical to Line coverage but restricted to new / updated source code. | |
Line coverage hits | coverage_line_hits_data | List of covered lines. | |
Lines to cover | lines_to_cover | Number of lines of code which could be covered by unit tests (for example, blank lines or full comments lines are not considered as lines to cover). | |
Lines to cover on new code | new_lines_to_cover | Identical to Lines to cover but restricted to new / updated source code. | |
Skipped unit tests | skipped_tests | Number of skipped unit tests. | |
Uncovered conditions | uncovered_conditions | Number of conditions which are not covered by unit tests. | |
Uncovered conditions on new code | new_uncovered_conditions | Identical to Uncovered conditions but restricted to new / updated source code. | |
Uncovered lines | uncovered_lines | Number of lines of code which are not covered by unit tests. | |
Uncovered lines on new code | new_uncovered_lines | Identical to Uncovered lines but restricted to new / updated source code. | |
Unit tests | tests | Number of unit tests. | |
Unit tests duration | test_execution_time | Time required to execute all the unit tests. | |
Unit test errors | test_errors | Number of unit tests that have failed. | |
Unit test failures | test_failures | Number of unit tests that have failed with an unexpected exception. | |
Unit test success density (%) | test_success_density | Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100 |
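Worked example with hypothetical figures: a file has EL = 20 executable lines of which LC = 15 are covered, and B = 4 conditions of which CT = 3 were evaluated to true and CF = 2 to false at least once. Line coverage = 15 / 20 = 75%, condition coverage = (3 + 2) / (2 * 4) = 62.5%, and coverage = (3 + 2 + 15) / (2 * 4 + 20) = 20 / 28 ≈ 71.4%. Similarly, a run of 200 unit tests with 4 errors and 6 failures gives a test success density of (200 - (4 + 6)) / 200 * 100 = 95%.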
The same kinds of metrics exist for Integration tests coverage and Overall tests coverage (Unit tests + Integration tests).
Metrics on test execution do not exist for Integration tests and Overall tests.