Front end

Angular presentation

Angular is a TypeScript-based framework for building web applications.

The Dashboard interface

As shown in the 'What is Gadolinium?' section, the interface is divided into four parts:

  1. The upper-left part: the API list.
  2. The upper-right part: the server list.
  3. The bottom-left part: the Uptime visualization.
  4. The bottom-right part: the Latency visualization.

API list

This list shows all APIs that have been tested or are being tested. Clicking on an API displays the server list and the data visualizations for that API, along with two buttons:

  1. The export button, available only once the API testing process is finished.
  2. The delete button, which removes the API from the tool, deletes all the servers running tests for this API and clears the data visualization parts.

Server list

The server list does not offer any controls; it simply displays the state of each server in the testing process.

Uptime visualization

If an uptime test has been requested by the User and the testing process has begun, the Uptime visualization part displays two charts:

  1. A donut chart, showing the overall availability of the API for the past tests.

[Insert 13 Donut Chart Screenshot]

  2. A list of progress bars showing the overall availability of the API and the progress of these tests.

[Insert 14 Progress bars Screenshot]

Latency visualization

If a latency test has been requested by the User and the testing process has begun, the Latency visualization part displays three charts:

  1. A bar chart showing the average latency for each operation, for all zones combined.

  2. A line chart showing the average latency of each operation over time, for all zones combined.

[Insert 17 Time by Operation Over Time Screenshot]

  3. A line chart showing the average latency of each zone over time, for all operations combined.

[Insert 18 Time by Zone Over Time Screenshot]

OpenAPI Test Configuration Modal

The OpenAPI Test Configuration Modal is a form that appears when clicking the "Add a new API" button. It is used to fill in the information needed to test an API in accordance with the OpenAPI Extension Proposal.
By default, all the fields are deactivated; they only become active once the uploaded file has been approved by the Master (the bar turns green). For this first version, the fields are, for both the Latency and Uptime tests:

  • Test repetitions
  • Interval of time between repetitions

The User can then select the zones from which to test the API, separately for Latency and Uptime.

Angular Services for Gadolinium

In Angular, a Service allows Components to interact with each other and exchange data. With the help of Subjects, Observables and Subscriptions, if a Service modifies an Observable variable, all subscribed Components are notified and can run a specific function to handle the change.
For instance, the most representative Observable in the tool is the selectedApi. Because the whole interface depends on which API the User clicked on, it is important that every Component knows which API has been selected and which data to display. Multiple services have therefore been created to address these needs.
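As a minimal sketch (the class and member names here are illustrative, not the tool's actual code), such a shared service built on a BehaviorSubject could look like this:

import { Injectable } from '@angular/core';
import { BehaviorSubject, Observable } from 'rxjs';

// Illustrative shared-state service: any Component can change or observe the selected API.
@Injectable({ providedIn: 'root' })
export class SelectedApiService {
  // Holds the currently selected API name, or null when nothing is selected.
  private selectedApiSubject = new BehaviorSubject<string | null>(null);

  // Components subscribe to this Observable to react to selection changes.
  readonly selectedApi$: Observable<string | null> = this.selectedApiSubject.asObservable();

  select(apiName: string): void {
    this.selectedApiSubject.next(apiName);
  }

  clear(): void {
    this.selectedApiSubject.next(null);
  }
}

A Component then injects the service and subscribes to selectedApi$ (typically in ngOnInit) to reload or clear its data whenever the selection changes.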

APIStatusService

The APIStatusService, whose name mirrors the APIStatus file, is the service in charge of handling all the APIStatus file changes that the Master communicates. At page load, or whenever the file changes, the Master directly sends the content of the APIStatus file via the APIStatus event.
This allows the APIList Component to display or hide the list of contained APIs. It also allows the App Component to know whether it has to display the ServerList, UptimeChart and LatencyChart Components.
Likewise, it allows all Chart Components to know which data to load; if no API is selected anymore, they have to unload all the data from the charts.

The APIStatusService also handles the LatencyTestUpdate and UptimeTestUpdate events, which contain the new content added to the APIStatus file by a Slave.
The main goal of these events, explained in more depth in the Charts section, is to update the existing list of data instead of recalculating all of it.
Given a list of 1000 latency records to format for a chart, adding one new record to the formatted data is more efficient than reformatting all 1001 records.
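A hedged sketch of how these events could be received, assuming socket.io-client is used for the Socket connection (the URL, import style and event payload types are assumptions):

import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';
import io from 'socket.io-client'; // assumed client library; the import style depends on its version

@Injectable({ providedIn: 'root' })
export class ApiStatusSocketService {
  // Placeholder URL for the Master.
  private socket = io('http://localhost:3000');

  // Full APIStatus file content, sent at page load and on every change.
  readonly apiStatus$ = new Subject<unknown>();
  // Incremental updates containing only the new records added by a Slave.
  readonly latencyTestUpdate$ = new Subject<unknown>();
  readonly uptimeTestUpdate$ = new Subject<unknown>();

  constructor() {
    this.socket.on('APIStatus', (status: unknown) => this.apiStatus$.next(status));
    this.socket.on('LatencyTestUpdate', (update: unknown) => this.latencyTestUpdate$.next(update));
    this.socket.on('UptimeTestUpdate', (update: unknown) => this.uptimeTestUpdate$.next(update));
  }
}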

OpenAPITestService

The OpenAPITestService is the service used by the OpenAPI Test Configuration modal to send, first, the OpenAPI Specification file, then the OpenAPI Test Configuration.
Assisted by the FileUploadService, which sends the information by POST request, the OpenAPITestService ensures that the fields are properly filled (the OK button will not turn green until all the Latency and Uptime fields are filled).
For this first version (the process still needs to be improved), the Service first sends the file to the POST /OpenAPI endpoint, then sends the OpenAPITestConfiguration information via the openApiTestConfig Socket event.
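A sketch of this two-step flow using Angular's HttpClient; the Master's base URL, the service name and the payload types are assumptions:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class OpenApiTestUploadService {
  constructor(private http: HttpClient) {}

  // Step 1: upload the OpenAPI Specification file to the Master (placeholder base URL).
  uploadSpecification(file: File): Observable<unknown> {
    const formData = new FormData();
    formData.append('file', file);
    return this.http.post('http://localhost:3000/OpenAPI', formData);
  }

  // Step 2: once the file is approved, send the test configuration over the
  // Socket connection (the socket object is assumed to be provided elsewhere).
  sendTestConfiguration(socket: { emit(event: string, data: unknown): void }, config: unknown): void {
    socket.emit('openApiTestConfig', config);
  }
}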

TestResultsService

The TestResultsService is the Service in direct communication with the LatencyResultsService and the UptimeResultsService.
Using the APIStatusService, it ensures that, when an API is selected and differs from the previously selected one, the LatencyResultsService and UptimeResultsService reinitialize and display the proper data, or, if no API is selected, that they erase this data.
It also relays the updates of each of the two tests to the proper service.

The LatencyResultsService and UptimeResultsService each have two functions:

  1. Initialize and format the test results to provide data to the charts.
  2. Update this data with each new test result in order to update the charts.
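As a rough sketch, with hypothetical type names, the contract shared by these two services could look like this:

// Hypothetical shared contract of LatencyResultsService and UptimeResultsService.
interface TestResultsHandler<TRecord> {
  // Called when an API is (re)selected: format all existing records for the charts.
  initialize(records: TRecord[]): void;
  // Called on a LatencyTestUpdate / UptimeTestUpdate: fold one new record into the formatted data.
  update(record: TRecord): void;
}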

In order to be able to update the data when a new record arrives, intermediate variables have been created to manage this incremental updating.
There are 3:

  • meanOfZoneLatencyByOperations : combines and calculates the mean latency of each operation across all zones.
  • meanOfZoneLatencyByOperations : combines and calculates the mean latency of each zone across all operations.
  • meanOfAllRecords : combines and calculates the mean latency at a given time, across all operations and zones.

LatencyResultsService

The LatencyResultsService is in charge of 3 different charts, as presented earlier:

  1. The OperationTimesByZones chart
  2. The TimeByOperationOverTime chart
  3. The TimeByZonesOverTime chart

This means that the service has to format the data in 3 different ways.

Since the values are likely to change, another variable is needed to store all this data, because the chart library needs the data in a format that is not convenient to edit.
From its generation at the Slave level to its display in the chart, the data is therefore formatted in 3 different ways.

This intermediate variable also stores metadata, such as how many records have been taken into account to calculate each mean; this way a mean can be recalculated from an existing one with the following formula:

newValue = (oldValue * nbOfTimeTested + newRecordValue) / (nbOfTimeTested + 1)
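A minimal TypeScript sketch of this incremental mean, with illustrative names (the actual variables in the tool may differ):

// Illustrative helper: the stored mean plus the number of records it covers is
// enough to fold in a new record without reprocessing the whole history.
interface RunningMean {
  value: number;          // current mean latency in ms
  nbOfTimeTested: number; // how many records the mean already covers
}

function addRecord(mean: RunningMean, newRecordValue: number): RunningMean {
  const nbOfTimeTested = mean.nbOfTimeTested + 1;
  return {
    value: (mean.value * mean.nbOfTimeTested + newRecordValue) / nbOfTimeTested,
    nbOfTimeTested,
  };
}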

OperationTimesByZones Chart

Initializing

This chart, as said, is a bar chart. The Y axis represents the latency in ms and the X axis contains a category for each operation.
Given how the C3 library works, the data has to be sent 'in columns' and the 'categories' have to be specified.
For this chart, the columns contain, for each zone, an array whose head is the name of the zone and whose tail contains the mean latency values of each operation.
The categories contain an array of all the operation names, where the first category corresponds to the first element of the tail of each column array.

An example for Swagger's PetStore:

columns : [
  ["asia-northeast2", 793.3, 2328.9, 927.5, 644.5, 1070, 705, 859.5,...],
  ["eu-west2", 543.3, 276.1, 320.4, 203.8, 493.3, ...],
  ...
]
categories : ["addPet", "updatePet", "findPetsByStatus", "getPetById",...]
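As an illustration only (the element id is a placeholder and the values are truncated; this is not the tool's actual code), this data could be passed to C3 like so:

import * as c3 from 'c3';

// Bar chart with one series per zone and one category per operation.
const operationTimesByZones = c3.generate({
  bindto: '#operation-times-by-zones',
  data: {
    type: 'bar',
    columns: [
      ['asia-northeast2', 793.3, 2328.9, 927.5, 644.5],
      ['eu-west2', 543.3, 276.1, 320.4, 203.8],
    ],
  },
  axis: {
    x: {
      type: 'category',
      categories: ['addPet', 'updatePet', 'findPetsByStatus', 'getPetById'],
    },
    y: { label: 'Latency (ms)' },
  },
});
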
Updating

Because a new record relates to only one operation, we look up the existing mean value for that operation, fold the new value into it and recalculate the mean.

TimeByOperationOverTime

Initializing

This chart is a line chart. The Y axis represents the latency in ms and the X axis the time at which the tests occurred. The data represents the evolution of the latency of each operation over time, calculated as the mean value across all zones.
The X axis differs from the OperationTimesByZones chart: it is not categories but a timeseries.
The columns contain, for each operation, a pair of arrays:

  • The first, representing the timeseries for the operation, has as head a string concatenating 'date' with the id of the operation, and as tail all the timestamps of the records, in the format YYYY MM DD HH mm SS.
  • The second represents the set of test values, all zones combined, for the operation.

In the tails of both arrays, each element of the first array has a corresponding value in the second: for each timestamp there is a linked value in the set of values.

Taking the PetStore example:

"columns" : [
  ["dateaddPet", "2019 07 23 15 31 00", "2019 07 23 15 32 00", "..."],
  ["addPet", "1226.50", "1241.05", "..."],
  ["dateupdatePet", "2019 07 23 15 31 00", "2019 07 23 15 32 00","..."],
  ["updatePet", "2328.9", "668.35", "..."],
  "..."
]

In order for everything to work in the chart, we need to link the pairs:

"xs" : {
  "addPet" : "dateaddPet",
  "updatePet" : "dateupdatePet"
  "..."
}
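For illustration, and with the same caveats as above (placeholder element id, xFormat assumed to match the 'YYYY MM DD HH mm SS' timestamps), the C3 configuration could look like:

import * as c3 from 'c3';

// Line chart where each operation has its own time column, linked through 'xs'.
const timeByOperationOverTime = c3.generate({
  bindto: '#time-by-operation-over-time',
  data: {
    xs: { addPet: 'dateaddPet', updatePet: 'dateupdatePet' },
    xFormat: '%Y %m %d %H %M %S',
    columns: [
      ['dateaddPet', '2019 07 23 15 31 00', '2019 07 23 15 32 00'],
      ['addPet', 1226.5, 1241.05],
      ['dateupdatePet', '2019 07 23 15 31 00', '2019 07 23 15 32 00'],
      ['updatePet', 2328.9, 668.35],
    ],
  },
  axis: {
    x: { type: 'timeseries', tick: { format: '%H:%M' } },
    y: { label: 'Latency (ms)' },
  },
});
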
Updating

Given a new record, we ask: 'has this operation already been tested?'

  • No: we create the two corresponding arrays; in the first, the head is built by concatenating 'date' with the name of the operation and the record date is pushed in the proper format; in the second, the head is the name of the operation and the record value is pushed.
  • Yes: next question, 'has this operation already been tested at the same time?'
    • No: we simply push the new date onto the first array and the record value onto the second.
    • Yes: we get the index of that time in the first array, combine the record value with the existing value at the same index in the second array, and recalculate the mean.
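A sketch of this decision tree in TypeScript, using illustrative types and a simplified structure for the per-point record counts (the real service may store them differently):

// Each C3 column is an array whose head is a name and whose tail holds the points.
type Column = [string, ...Array<string | number>];

interface LatencyRecord {
  operation: string;
  date: string;  // 'YYYY MM DD HH mm SS'
  value: number; // latency in ms
}

function updateOperationColumns(
  columns: Column[],
  counts: Map<string, number[]>, // per operation: number of records behind each plotted point
  record: LatencyRecord,
): void {
  const dateKey = 'date' + record.operation;
  const dateColumn = columns.find(c => c[0] === dateKey);
  const valueColumn = columns.find(c => c[0] === record.operation);

  if (!dateColumn || !valueColumn) {
    // The operation has never been tested: create both columns.
    columns.push([dateKey, record.date], [record.operation, record.value]);
    counts.set(record.operation, [1]);
    return;
  }

  const operationCounts = counts.get(record.operation)!;
  const index = dateColumn.indexOf(record.date); // heads sit at index 0, points start at 1
  if (index === -1) {
    // Already tested, but not at this time: append a new point.
    dateColumn.push(record.date);
    valueColumn.push(record.value);
    operationCounts.push(1);
  } else {
    // Same operation and same time (another zone): fold the value into the existing mean.
    const oldMean = valueColumn[index] as number;
    const n = operationCounts[index - 1];
    valueColumn[index] = (oldMean * n + record.value) / (n + 1);
    operationCounts[index - 1] = n + 1;
  }
}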

TimeByZonesOverTime

Initializing

This chart is identical to TimeByOperationOverTime, except that the data represents the evolution of the latency of each zone over time, calculated as the mean value across all operations.

Taking the PetStore example:

"columns" : [
  ["dateasia-northeast2", "2019 07 23 15 31 00", "2019 07 23 15 32 00", ...],
  ["asia-northeast2", "1226.50", "1241.05", ...],
  ["dateueu-west2", "2019 07 23 15 31 00", "2019 07 23 15 32 00",...],
  ["eu-west2", "2328.9", "668.35", ...],
  ...
],
"xs" : {
  "asia-northeast2" : "dateasia-northeast2",
  "eu-west2" : "dateeu-west2"
}
Updating

Because a new record relates to only one zone at a given time, we look up the existing mean value for that zone, fold the new value into it and recalculate the mean, following the same logic as for TimeByOperationOverTime.

UptimeResultsService

The UptimeResultsService is in charge of 2 different charts, as presented earlier:

  1. The Donut chart
  2. The MultipartProgressBars

For this service, the computation is not as difficult and complex, because it only describes whether the API is up or not, and when.

The Donut chart

This chart is simply a set of two arrays, one representing the number of times the API was up, the other the number of times it was down.

The data used by C3 looks like this:

[
  ["Down"],
  ["Up", true, true, true, true, true, true]
]

In this case, the API was available 100% of the time.
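One possible way to feed this to a C3 donut is to use the number of recorded states in each array as the donut values (an assumption on how the tool derives the counts; the element id is a placeholder):

import * as c3 from 'c3';

// Counts would come from the lengths of the 'Up' and 'Down' arrays above.
const upCount = 6;
const downCount = 0;

const uptimeDonut = c3.generate({
  bindto: '#uptime-donut',
  data: {
    type: 'donut',
    columns: [
      ['Up', upCount],
      ['Down', downCount],
    ],
  },
  donut: { title: 'Availability' },
});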

The MultipartProgressBars

This component is not a chart; it is just a list of progress bars representing the uptime over time using green or red colors.

The data is a list of objects, each representing a server. An object contains the server's information, such as its location, name and overall availability, as well as the list of its records.

[Insert 19 MultiPart Progress Bar Explanations Screenshot]

Updating

To update the MultipartProgressBars, we compare the new record with the last state recorded by the same server:

  • If the state is unchanged (it was down and is still down, or it was up and is still up), we extend the length of this last state by 1 and set its end date to the date of the new record.
  • If the state is different (it was down and is now up, or it was up and is now down), we leave the last recorded state as it is and push a new state onto the array with the following parameters:
    • the starting and ending dates are the date of the record,
    • the state is the state of the record,
    • the state length is 1.
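A minimal sketch of this rule, with illustrative type names:

// Illustrative types for a server's runs of identical states.
interface UptimeState {
  up: boolean;    // true while the API answered, false while it did not
  start: string;  // date of the first record in this run
  end: string;    // date of the latest record in this run
  length: number; // number of consecutive records sharing this state
}

interface ServerUptime {
  serverName: string;
  states: UptimeState[];
}

function addUptimeRecord(server: ServerUptime, record: { up: boolean; date: string }): void {
  const last = server.states[server.states.length - 1];
  if (last && last.up === record.up) {
    // Same state as before: extend the current run.
    last.length += 1;
    last.end = record.date;
  } else {
    // State changed (or first record for this server): start a new run of length 1.
    server.states.push({ up: record.up, start: record.date, end: record.date, length: 1 });
  }
}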