Scoring service

API Scoring is the microservice responsible for assigning a grade to each API.


Table of contents

  1. Summary
  2. Installation
  3. Performance and configuration
    1. Scoring
      1. Criteria
      2. Design and Security modules
      3. Documentation module
      4. Overall
    2. Customization
    3. New rules creation


Summary

API Scoring is the microservice responsible for getting a grade for each API. It has several modules to certify different aspects of each API and calculate a final score for them using a weighted average.

This open-source API-First-based scoring service evaluates your APIs according to a set of rules that the user can customize.

We want to keep it simple. In the end, each certified API is boiled down to a single grade that reflects how well designed your API is.

If you want to know if your API complies with your design rules, if it addresses some of the OWASP vulnerabilities, and if it complies with documentation guidelines… this is your service.

Due to an issue in protolint, scoring gRPC APIs on Windows is not possible.

Installation

We recommend using Node v18.13.0 to work on this project.

Deploy the service following these steps:

  1. Clone this repository:

     git clone git@github.com:InditexTech/api-scoring-engine.git
    
  2. Change into the service package directory:

     cd packages/certification-service/code/
    
  3. Install the dependencies:

     npm i
    
  4. Optionally, add your GitHub credentials in one of the following ways to be able to validate private repositories:

    • as environment variables:

       CERWS_GH_USERNAME # GitHub username.
       CERWS_GH_PASSWORD # GitHub personal access token.
      
    • or in the configmap.yml file:

       cerws:
         common:
           rest:
             client:
               github-rest-client:
                 username: <GITHUB_USERNAME>
                 password: <GITHUB_PERSONAL_ACCESS_TOKEN>
      
  5. Run the service:

     npm start
    


You can use this service by simply making a request to its API or, even better, through the Scoring CLI tool that we have developed for this purpose.

You can also use the IDE Extensions to help you design your API at the same time you validate it with the service. The API hub provides the rating of the modules evaluated on the scoring service, giving you real-time insights into what score to expect.

You can check the available endpoints in the API section.

Performance and configuration

As mentioned above, the microservice works with a set of rules that comes by default from the Rulesets repository.

Scoring

To give users meaningful feedback while evaluating their APIs, the API Scoring system returns a grade from A+ to D, giving developers a rating scale that helps them identify the aspects that compromise their APIs and how to improve them.

Since Spectral, markdownlint, and protolint use a similar engine, we have defined a mechanism that works in the same fashion for the Design, Security, and Documentation modules.

Criteria

The scoring system for each component will follow the previously mentioned letter-based grade system and:

  • Each module is individually scored.
  • The score for each module starts at the maximum value, namely 100 (A+).
  • The final score is a weighted average of the modules that apply.
  • For each type of broken rule, a specific number of points is deducted from the score.
  • There are three kinds of rules: errors, warnings, and information. This last kind does not decrease the grade at all.
  • Designers can consult which rules were broken using the API 360 hub extension, so they know how to improve their APIs.

Score     Letter
100       A+
90 - 99   A
75 - 89   B
50 - 74   C
0 - 49    D
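The mapping above can be sketched as a small helper. The function name is illustrative, not the service's actual API:

```javascript
// Map a numeric score (0-100) to the letter grades in the table above.
// Hypothetical helper for illustration; the service may implement this
// differently internally.
function scoreToLetter(score) {
  if (score >= 100) return "A+";
  if (score >= 90) return "A";
  if (score >= 75) return "B";
  if (score >= 50) return "C";
  return "D";
}
```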

To obtain the final score of an API, the microservice considers every individually certified module and operates according to a weighted average.

Notice that not every module is supported by every protocol: Security is only supported for REST, while Design and Documentation are supported for REST, Event, and gRPC. GraphQL is not supported for the moment.

Three modules approach

Module            Weight
Design            0.40
Security          0.45
Documentation     0.15
  Convention rules   0.30
  Custom rules       0.70

Two modules approach

Module            Weight
Design            0.85
Security          N/A
Documentation     0.15
  Convention rules   0.30
  Custom rules       0.70

Design and Security modules

The Design and Security modules’ grade is calculated the same way, according to the following formula:

Figure 1. Module grade calculation.

The score of these modules depends, as shown in Figure 1, on the following values:

  • Warnings: number of warning-type rules broken.
  • Errors: number of error-type rules broken.
  • TotalRules: number of rules within the module.
  • N: factor that indicates the severity of the errors. By default, it is 5.
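These values can be combined as in the following sketch, which assumes the score starts at 100 and is reduced in proportion to the broken rules, with errors weighted by the severity factor N. The exact expression in Figure 1 may differ in detail; treat this as an illustration only:

```javascript
// Sketch of a Design/Security module grade, assuming points are
// deducted proportionally to broken rules, with errors weighted by
// the severity factor N (default 5). Illustrative, not the service's
// exact formula.
function moduleScore(warnings, errors, totalRules, n = 5) {
  const penalty = ((warnings + n * errors) / totalRules) * 100;
  return Math.max(0, 100 - penalty); // grade never drops below 0
}
```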

Documentation module

As Tables 1 and 2 show, the Documentation module has a 15% weight in the overall score and, at the same time, includes an inner weighted average.

  • The Markdown convention rules weigh 30% of the documentation grade.
  • The Markdown custom rules weigh 70% of the documentation grade.

Therefore, each rule’s weight is affected by an extra factor.
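The inner weighted average described above can be sketched as follows. The function name is hypothetical, and both inputs are assumed to be on the same 0-100 scale as the other modules:

```javascript
// Documentation module score as the inner weighted average:
// convention (base) rules weigh 0.30 and custom rules 0.70.
// Illustrative helper, not the service's actual API.
function documentationScore(conventionScore, customScore) {
  return 0.3 * conventionScore + 0.7 * customScore;
}
```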

Overall

Once each module’s grade is calculated, it is time to do the weighted average to get the final score. For this, you sum all the modules’ scores, each multiplied by its weight, i.e.:

Figure 2. Overall score calculation (3 modules approach).

Figure 3. Overall score calculation (2 modules approach).

For the Documentation module, different weights are assigned to base and custom rules. To calculate the Documentation module score, the service operates with a weighted average of the two kinds of rules.
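The two weighted averages in Figures 2 and 3 can be sketched with the weights from Tables 1 and 2. The function name is illustrative:

```javascript
// Overall score as a weighted average of module grades.
// With Security (three-module approach): 0.40 / 0.45 / 0.15.
// Without it (two-module approach): Design takes 0.85 and
// Documentation keeps 0.15. Illustrative helper only.
function overallScore(design, documentation, security = null) {
  if (security === null) {
    return 0.85 * design + 0.15 * documentation;
  }
  return 0.4 * design + 0.45 * security + 0.15 * documentation;
}
```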

Each module’s rules can be warnings, errors, or information. Breaking each of them means:

  • Warning: The API is not compliant with the guidelines, or the contract is not fully documented, but it is still valid. Warnings have a lower impact on the grade.
  • Error: Failures of this severity might severely impact the usage of the contract as the source of truth, or the contract is not valid at all. Errors have a higher impact on the grade.
  • Information: Suggestions on how to improve your API documentation; they do not mean it is poorly documented. They have no impact on the grade.

A correction factor is applied to error-type rule breaches, giving them a higher weight.

Customization

You can modify the score calculation by adjusting some parameters in the configmap.yml file:

  • If you want to increase the weight of errors, you can increase the value of the error-coefficient-weight property, which by default is 5.

  • If you want to modify the weight of the modules, you can change them in the modules-weights and modules-weights-without-security properties to adjust them to your needs.

  • You can also modify in the Documentation module the weights of the base and custom rules in rules-weights.
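A customized configuration might look like the fragment below. The exact nesting of these properties is an assumption; check the configmap.yml shipped with the service for the authoritative structure:

```yaml
# Hypothetical layout; property names come from the text above,
# but their placement in the file is an assumption.
cerws:
  lint:
    error-coefficient-weight: 5        # severity factor N for errors
    modules-weights:                   # three-module weighted average
      design: 0.4
      security: 0.45
      documentation: 0.15
    modules-weights-without-security:  # two-module weighted average
      design: 0.85
      documentation: 0.15
    rules-weights:                     # inner Documentation weights
      convention: 0.3
      custom: 0.7
```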

You can modify the whole set of rules through the lint-repository-folder property. The default path points to the set of rules in this repository, but you can change it to the Rulesets repository.

Once you have installed and deployed the service as explained in the Installation section, you can make a POST request to the /rulesets/refresh endpoint (with this request, the application will download the new ruleset to the /src/rules folder, overwriting the current rules):

   curl --location --request POST 'http://localhost:8080/apifirst/v1/rulesets/refresh'
  • You can change the location where the rules are downloaded by modifying the configuration file:

    cerws:
      common:
        rest:
          client:
            github-rest-client:
              lint-repository-folder: "/src/rules"
    
    • If you modify this path, you will need to update the following properties too:

      cerws:
        lint:
          rest:
            general-default-ruleset: # Path of the file with the rules that will be applied in the linting of rest APIs
            security-default-ruleset: # Path of the file with the rules that will be applied in the security linting of rest APIs
          event:
            general-default-ruleset: # Path of the file with the rules that will be applied in the linting of async APIs
          avro:
            general-default-ruleset: # Path of the file with the rules that will be applied in the linting of async APIs with Avro
          grpc:
            configuration-directory: # Path of the protolint configuration
            severities-file:  # Path of the configuration file with the severity of the violations for grpc rules (by default the severity is warn)
        markdown:
          markdown-lint-config: # Path of the markdownlint configuration file
          markdown-lint-api-custom-rules: # File path with custom markdownlint rules
      
  • You can also use the apicli tool to update the rulesets from the Rulesets repository with a CLI command.

New rules creation

The scoring rules are based on different linters, depending on the files being linted. If you want to create your own rules, follow each linter’s documentation: