Details

The time taken for a distance or trace calculation consists of several parts.

  • REST overhead

    • The overhead of calling the api server plus serialization of the request and response.

    • For our other cloud services calling the gDcc, the network overhead will be slightly lower, but not enough to make a significant difference in these numbers.

  • Cache lookup

    • A call to the database to see if the result is already there.

    • The production version will have a faster database, leading to better cache performance than these numbers show.

  • Azure Service Bus overhead

    • The overhead of sending the request to the Azure Service Bus, and receiving the response.

    • ASB appears to be optimized for throughput rather than latency, so there is significant overhead just in establishing the connection.

  • The Calculation

    • The calculation of the distance or trace itself.

    • This is only relevant if the result is not in the cache, and the duration of the operation depends on the distance between the coordinates.

    • Should the calculation find that there is no valid path between the coordinates, the size of the roadnet will also affect the duration of the operation (a larger roadnet means a slower calculation).

The REST and ASB overhead is mostly latency, so it varies little with message size in these measurements.
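The flow above can be sketched in code. This is a minimal, illustrative sketch only; the class and method names are assumptions and not the actual gDcc API.

```python
# Minimal sketch of the request flow described above (illustrative only;
# class and method names are assumptions, not the actual gDcc API).
class DistanceService:
    def __init__(self):
        self._cache = {}  # stands in for the database-backed result cache

    def _calculate(self, origin, destination):
        # Placeholder for the real routing calculation on the backend.
        return 42.0

    def distance(self, origin, destination):
        key = (origin, destination)
        cached = self._cache.get(key)           # cache lookup (database call)
        if cached is not None:
            return cached                       # cached path: no ASB, no calculation
        # Cache miss: the request would go over Azure Service Bus to the backend.
        result = self._calculate(origin, destination)  # the calculation itself
        self._cache[key] = result               # cache storage for next time
        return result

svc = DistanceService()
a, b = (55.676, 12.568), (56.157, 10.210)  # illustrative Danish coordinates
first = svc.distance(a, b)    # full path: lookup, calculation, storage
second = svc.distance(a, b)   # cached path: lookup only
```

The key point for the measurements below is that the ASB round trip, the calculation, and the cache storage are only on the path for a cache miss.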

Measurements

The numbers below were taken from a local developer machine in Europe calling the developer instance of the gDcc at https://gdcc-dev.amcsplatform.com/. The numbers carry considerable uncertainty, probably around 30 ms, and more for the cache storage numbers.

The calculations are based on randomly chosen valid coordinates all over Denmark.

  • REST overhead: 0.08 seconds (this will vary with the client's distance to the datacenter)
  • ASB overhead: 0.15 seconds

All values are in seconds. For each operation the following is shown:

  • Operation : The measured operation
  • Total : The total time in seconds on the client side for the operation where:
    • The api checks the cache and sees that it's not there
    • The api sends the request to backend
    • The backend calculates the request
    • The backend stores the result in the cache
    • The backend notifies the api of the result
    • The api retrieves the result and returns it to the client
  • Total when Cached : The total time in seconds on the client side for the operation where:
    • The api checks the cache, finds the result, and returns it
  • Cache lookup : The estimated time spent on just the cache lookup
  • Calculation : The estimated time spent on the actual calculation when required
  • Cache Storage : The estimated time spent on storing the result in the cache
Operation            Total   Total when Cached   Cache lookup   Calculation   Cache Storage
Distance 1 to 1      0.34    0.09                0.02           0.12          < 0.01
Distance 1 to 10     0.92    0.23                0.16           0.41          0.13
Distance 1 to 100    4.00    0.99                0.92           0.55          2.31
Distance 1 to 500    12.25   4.90                4.83           0.69          6.51
Operation        Total   Total when Cached   Cache lookup   Calculation   Cache Storage
Trace 1 leg      1.35    0.56                0.49           0.60          < 0.01
Trace 10 legs    6.33    1.73                1.66           4.54          < 0.01
Trace 100 legs   47.56   2.08                2.01           43.79         1.54
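As a rough consistency check, the components of an uncached call should approximately sum to the total (Total ≈ REST + Cache lookup + ASB + Calculation + Cache Storage), and a cached call should be roughly REST + Cache lookup. A small sketch over the "Distance 1 to 1" row, within the stated ~30 ms uncertainty (the "< 0.01" storage value is taken as 0.01 here):

```python
REST_OVERHEAD = 0.08
ASB_OVERHEAD = 0.15

# "Distance 1 to 1" row from the table above (values in seconds).
total, total_cached, cache_lookup, calculation, cache_storage = (
    0.34, 0.09, 0.02, 0.12, 0.01)  # "< 0.01" taken as 0.01

# Uncached call: every component is on the path.
estimated_total = (REST_OVERHEAD + cache_lookup + ASB_OVERHEAD
                   + calculation + cache_storage)
# Cached call: only the REST overhead and the cache lookup.
estimated_cached = REST_OVERHEAD + cache_lookup

assert abs(estimated_total - total) <= 0.05    # within the measurement uncertainty
assert abs(estimated_cached - total_cached) <= 0.03
```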

Some notes on this

  • The REST overhead will be less (close to zero) when the client is in the cloud, as the client should be in the same datacenter as the api.
  • The Azure Service Bus overhead might be reduced by switching to another pub/sub technology (Redis Streams or RabbitMQ).
  • The cache performance will be better in production, as the database will be faster