
Code Quality Analysis

Analyze code quality metrics across your embedded codebase. The `eo report codequality` command calculates cyclomatic complexity, lines of code, coupling, and cohesion for C, C++, Python, Go, and Rust files.

Quick start

# Analyze locally (no upload)
eo report codequality --local ./src

# Analyze and upload to EmbedOps
eo report codequality ./src

# Verbose output with file-by-file metrics
eo report codequality --local -v ./src

Options

| Flag | Description | Default |
|---|---|---|
| `--local`, `-l` | Run locally without uploading | `false` |
| `--language` | Filter to a specific language (`c`, `cpp`, `python`, `go`, `rust`) | all |
| `--output`, `-o` | JSON output path for local reports | `./code-quality-report.json` |
| `--verbose`, `-v` | Show file-by-file metrics | `false` |

Threshold overrides

Each metric has three thresholds: excellent, good, and warning. Override any threshold with flags:

eo report codequality --local ./src \
  --complexity-excellent 8 \
  --complexity-good 15 \
  --loc-warning 800

| Metric | Excellent | Good | Warning |
|---|---|---|---|
| Complexity | `--complexity-excellent` (10) | `--complexity-good` (20) | `--complexity-warning` (30) |
| LOC | `--loc-excellent` (200) | `--loc-good` (500) | `--loc-warning` (1000) |
| LCOM | `--lcom-excellent` (40%) | `--lcom-good` (65%) | `--lcom-warning` (80%) |
| Coupling | `--coupling-excellent` (5) | `--coupling-good` (10) | `--coupling-warning` (15) |

How metrics are calculated

Lines of code (LOC)

LOC counts logical lines: non-empty, non-comment lines of code. This matches the "Code" column from scc.

Logical LOC = Physical lines - Blank lines - Comment lines

Comment detection varies by language:

- C/C++/Go/Rust: `//` single-line and `/* */` multi-line comments
- Python: `#` comments and `"""` docstrings
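As a concrete illustration, a minimal logical-line counter for C-style sources might look like the following. This is a simplified sketch, not the tool's implementation: it ignores block comments that start mid-line, which a real counter must handle.

```python
def logical_loc(source: str) -> int:
    """Count non-empty, non-comment lines in C-style source (sketch)."""
    count = 0
    in_block_comment = False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block_comment:
            if "*/" in stripped:
                in_block_comment = False
                # Code after the closing */ on the same line still counts.
                rest = stripped.split("*/", 1)[1].strip()
                if rest and not rest.startswith("//"):
                    count += 1
            continue
        if not stripped or stripped.startswith("//"):
            continue  # blank or single-line comment
        if stripped.startswith("/*"):
            if "*/" not in stripped:
                in_block_comment = True
            continue
        count += 1
    return count

code = """\
#include <stdio.h>

/* entry point */
int main(void) {
    // say hello
    printf("hi\\n");
    return 0;
}
"""
print(logical_loc(code))  # 5 logical lines: 8 physical - 1 blank - 2 comments
```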

Cyclomatic complexity

Complexity measures the number of independent paths through code. The implementation aligns with scc's complexity patterns.

Counted patterns by language:

| Language | Patterns |
|---|---|
| C/C++ | `for`, `if`, `switch`, `while`, `else`, `\|\|`, `&&`, `!=`, `==` |
| Go | `for`, `if`, `switch`, `select`, `else`, `\|\|`, `&&`, `!=`, `==` |
| Python | `for`, `if`, `elif`, `while`, `else:`, `or`, `and`, `!=`, `==`, `except:` |
| Rust | `for`, `if`, `match`, `while`, `else`, `\|\|`, `&&`, `!=`, `==`, `?` |

Rules:

- Patterns in strings are not counted (`printf("if error")` adds 0)
- Patterns in comments are not counted
- Word boundaries are enforced (`copy_if(` does not match `if`)
- Go files use AST parsing for accuracy; other languages use regex
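The rules above can be sketched for C with a regex, assuming only single-line comments and simple string literals for brevity (the real implementation also strips `/* */` blocks and, for Go, walks the AST):

```python
import re

# Simplified scc-style complexity counting for C code (illustrative only).
PATTERNS = re.compile(r"\b(for|if|switch|while|else)\b|\|\||&&|!=|==")

def strip_strings_and_comments(line: str) -> str:
    line = re.sub(r'"(?:\\.|[^"\\])*"', '""', line)  # blank out string literals
    line = re.sub(r"//.*", "", line)                 # drop line comments
    return line

def complexity(source: str) -> int:
    return sum(len(PATTERNS.findall(strip_strings_and_comments(l)))
               for l in source.splitlines())

snippet = '''
if (x == 0 && y != 0) {
    printf("if error");   // 'if' inside a string is not counted
} else {
    copy_if(a, b);        // word boundary: no match on 'if'
}
'''
print(complexity(snippet))  # 5: if, ==, &&, != on line 1, plus else
```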

To verify against scc:

scc --by-file --no-cocomo ./src

Coupling (CBO - Coupling Between Objects)

The coupling metric implements CBO from the Chidamber-Kemerer metrics suite. CBO counts unique types that a file depends on through:

  1. Import/include statements
  2. Type references in function signatures (parameters, return types)
  3. Field/member types
  4. Base classes and inheritance
  5. Template/generic instantiations
  6. Type casts and assertions
  7. Object instantiation

Each unique type counts as 1, regardless of how many times it appears.

Detection by language:

| Language | What's counted |
|---|---|
| C | `#include`, struct usage, function parameters, casts, `sizeof` |
| C++ | All of the above, plus templates, inheritance, namespace types, `new` expressions |
| Python | `import`, type hints, decorators, inheritance, `isinstance()` |
| Go | `import`, package-qualified types, struct fields, type assertions |
| Rust | `use`, trait bounds, `impl` blocks, generic parameters, path types |

Thresholds:

Research suggests CBO > 9 indicates high coupling. The defaults are:

| Rating | Threshold |
|---|---|
| Excellent | ≤ 5 |
| Good | ≤ 10 |
| Warning | ≤ 15 |

Categories:

Dependencies are categorized as:

- stdlib: Standard library types (e.g., `stdio.h`, `fmt`, `std::`)
- external: Third-party packages
- local: Project-internal modules
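For C, the include-based part of this detection can be sketched as follows. This is a hypothetical simplification: angle-bracket includes are treated as stdlib and quoted includes as local, and the real metric also counts struct usage, casts, `sizeof`, and the other sources listed above.

```python
import re

STDLIB_RE = re.compile(r'#include\s*<([^>]+)>')   # <...> -> stdlib
LOCAL_RE = re.compile(r'#include\s*"([^"]+)"')    # "..." -> local

def include_coupling(source: str) -> dict:
    """Count unique include dependencies; duplicates count once."""
    stdlib = set(STDLIB_RE.findall(source))
    local = set(LOCAL_RE.findall(source))
    return {
        "cbo": len(stdlib | local),
        "stdlib": sorted(stdlib),
        "local": sorted(local),
    }

src = '''
#include <stdio.h>
#include <stdlib.h>
#include "sensor.h"
#include <stdio.h>   /* duplicate: still counts once */
'''
print(include_coupling(src))  # cbo = 3 unique types
```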

LCOM (lack of cohesion of methods)

LCOM estimates how well a class/struct's methods work together. Lower is better.

The implementation uses a heuristic based on the methods-to-fields ratio:

ratio = methods / fields

| Condition | LCOM |
|---|---|
| No fields detected | 80% |
| ratio > 3.0 | 70% |
| ratio > 1.5 | 45% |
| Otherwise | 20% |

Limitations:

- Field detection may count local variables and function parameters
- C struct-to-function association uses proximity (not accurate for all codebases)
- Rust falls back to function-density heuristics

For files without classes (pure functions), a generic heuristic applies:

function_density = functions / total_lines * 100

density > 5%  → LCOM = 75%
density > 2%  → LCOM = 50%
Otherwise     → LCOM = 25%
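Both heuristics are simple enough to restate in a few lines. This is a sketch of the tables above; `lcom_for_class` and `lcom_for_file` are illustrative names, not the tool's API.

```python
def lcom_for_class(methods: int, fields: int) -> float:
    """Methods-to-fields ratio heuristic for classes/structs."""
    if fields == 0:
        return 80.0          # no fields detected
    ratio = methods / fields
    if ratio > 3.0:
        return 70.0
    if ratio > 1.5:
        return 45.0
    return 20.0

def lcom_for_file(functions: int, total_lines: int) -> float:
    """Function-density fallback for files without classes."""
    density = functions / total_lines * 100
    if density > 5:
        return 75.0
    if density > 2:
        return 50.0
    return 25.0

print(lcom_for_class(methods=8, fields=2))          # ratio 4.0 -> 70.0
print(lcom_for_file(functions=3, total_lines=200))  # density 1.5% -> 25.0
```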

Health score calculation

Each file receives a 0-100 health score based on weighted penalties from all metrics.

Weights

| Metric | Weight |
|---|---|
| Complexity | 35% |
| Coupling | 25% |
| LCOM | 25% |
| LOC | 15% |

Penalty calculation

For each metric, the penalty is calculated using linear interpolation between thresholds:

| Range | Penalty |
|---|---|
| ≤ excellent | 0% |
| excellent → good | 0-25% (linear) |
| good → warning | 25-75% (linear) |
| > warning | 75-100% (capped) |

Final health:

health = 100 - (complexity_penalty × 0.35 + coupling_penalty × 0.25
              + lcom_penalty × 0.25 + loc_penalty × 0.15)
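The penalty-and-health pipeline can be sketched as follows, using the default thresholds from the tables above. One assumption is labeled in the code: how the penalty ramps from 75% to the 100% cap past the warning threshold is not specified here, so this sketch lets it continue at the good-to-warning slope.

```python
def penalty(value: float, excellent: float, good: float, warning: float) -> float:
    """Linear interpolation between thresholds (sketch)."""
    if value <= excellent:
        return 0.0
    if value <= good:
        return 25.0 * (value - excellent) / (good - excellent)
    if value <= warning:
        return 25.0 + 50.0 * (value - good) / (warning - good)
    # ASSUMPTION: past warning, keep the good->warning slope, capped at 100.
    return min(100.0, 75.0 + 50.0 * (value - warning) / (warning - good))

def health(complexity: float, coupling: float, lcom: float, loc: float) -> float:
    """Weighted health score using the default thresholds."""
    p_cx = penalty(complexity, 10, 20, 30)
    p_cp = penalty(coupling, 5, 10, 15)
    p_lc = penalty(lcom, 40, 65, 80)
    p_loc = penalty(loc, 200, 500, 1000)
    return 100 - (p_cx * 0.35 + p_cp * 0.25 + p_lc * 0.25 + p_loc * 0.15)

# A file in the good->warning band on every metric:
print(round(health(complexity=25, coupling=12, lcom=70, loc=600), 1))  # 55.6
```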

Health categories

| Score | Category |
|---|---|
| ≥ 80 | Excellent |
| ≥ 60 | Good |
| ≥ 40 | Warning |
| < 40 | Critical |

Output

Terminal output

The command displays:

- Overall health percentage and category
- Total files, LOC, and average complexity
- Quality distribution (files per category)
- Top 5 files needing attention (sorted by lowest health)

JSON output (local mode)

With --local, results are saved to the output file:

{
  "summary": {
    "OverallHealth": 80.5,
    "TotalFiles": 42,
    "TotalLOC": 8500,
    "AvgComplexity": 12.3,
    "FunctionsAnalyzed": 156,
    "Distribution": {
      "Excellent": 28,
      "Good": 10,
      "Warning": 3,
      "Critical": 1
    },
    "TopIssues": [
      {
        "Path": "src/complex_module.c",
        "Health": 35.5,
        "Complexity": 45,
        "LCOM": 75
      }
    ]
  },
  "file_metrics": [
    {
      "FilePath": "src/main.c",
      "Language": "c",
      "Complexity": 15,
      "LOC": 230,
      "LCOM": 45.0,
      "Coupling": 8,
      "Functions": 5,
      "Health": 72.5
    }
  ]
}

The TopIssues array contains up to 5 files with health scores below 80%, sorted by lowest health first.
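A report consumer can read the JSON directly, for example to list the worst offenders in a CI step. This is a sketch: the inline JSON below is a trimmed stand-in for a real report, which you would load from the `--output` path instead.

```python
import json

# Trimmed stand-in for a report produced by --local; in practice:
# report = json.load(open("code-quality-report.json"))
report = json.loads('''{
  "summary": {
    "OverallHealth": 80.5,
    "TopIssues": [
      {"Path": "src/complex_module.c", "Health": 35.5, "Complexity": 45, "LCOM": 75}
    ]
  },
  "file_metrics": [
    {"FilePath": "src/main.c", "Health": 72.5}
  ]
}''')

# Print the lowest-health files, one per line.
for issue in report["summary"]["TopIssues"]:
    print(f'{issue["Health"]:5.1f}  {issue["Path"]}')
```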


Supported file types

| Language | Extensions |
|---|---|
| C | `.c`, `.h` |
| C++ | `.cpp`, `.cc`, `.cxx`, `.hpp` |
| Python | `.py` |
| Go | `.go` |
| Rust | `.rs` |

Validating results

Compare complexity and LOC against scc:

# Install scc
brew install scc  # macOS
# or: go install github.com/boyter/scc/v3@latest

# Compare results
scc --by-file --no-cocomo ./src
eo report codequality --local -v ./src

The Code and Complexity columns should match.