SIP Metrics Comparison – ACM   DRAFT 5/6/2009

 

Based on draft-ietf-bmwg-sip-bench-term-00 (benchmarking)

and draft-ietf-pmol-sip-perf-metrics-03 (E2E)

 

When comparing setup-related terms, “Established” in the Benchmarking draft is sometimes equivalent to “Successful” in the E2E metrics, and differs slightly from the E2E metrics’ own use of “Established”.

Benchmarking Established Sessions (3.1.7) require a 200 OK response.

E2E Successful Sessions require a provisional response other than 100 Trying; a 200 OK is also acceptable.

E2E Session Establishment Ratio requires a 200 OK, but excludes attempts that result in 3XX responses.
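
The three definitions above can be contrasted in a short sketch. This is illustrative only: neither draft specifies code, and the function names and response-code checks are simply one reading of the text above.

```python
def bmwg_established(responses):
    """Benchmarking 'Established Session' (3.1.7): needs a 200 OK."""
    return 200 in responses

def e2e_successful(responses):
    """E2E 'Successful Session': any provisional response other than
    100 Trying counts; a 200 OK is also acceptable."""
    return any(100 < r < 200 for r in responses) or 200 in responses

def session_establishment_ratio(attempts):
    """E2E Session Establishment Ratio (4.7): attempts answered with
    200 OK over all attempts, excluding attempts that drew a 3XX.
    Each attempt is the list of response codes it received."""
    non_redirected = [a for a in attempts
                      if not any(300 <= r < 400 for r in a)]
    established = [a for a in non_redirected if 200 in a]
    return len(established) / len(non_redirected) if non_redirected else 0.0
```

For example, an attempt that received only a 180 Ringing is Successful in the E2E sense but not Established in the benchmarking sense.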

 

The table below lists metrics in the benchmarking draft and similar metrics in the E2E draft.

 

bmwg-sip-bench-term-00                         pmol-sip-perf-metrics-03
----------------------                         ------------------------

3.4.6 Session Attempt Delay                    4.3 Session Request Delay (SRD)
  Time from INVITE to 200 OK                     4.3.1 Successful Session Setup SRD
  AVERAGE over many attempts                     Time from INVITE to 180 (or 200 OK)
                                                 Single measurement result

3.4.5 Session Establishment Performance        4.7 Session Establishment Ratio (SER)
  RATIO of Established Sessions to Attempts      RATIO of Attempts with 200 OK to
                                                 Attempts (excluding those with 3XX)
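
The delay-metric row can be summarized in a small sketch contrasting the per-attempt E2E result with the benchmarking average. The timestamps and helper names are illustrative, not taken from either draft.

```python
def srd(t_invite, t_response):
    """E2E SRD: one result per attempt, measured from the INVITE to
    the first 180 (or 200 OK). Times are in seconds."""
    return t_response - t_invite

def session_attempt_delay(samples):
    """Benchmarking Session Attempt Delay: the AVERAGE of
    INVITE-to-200-OK times over many attempts.
    samples is a list of (t_invite, t_200ok) pairs."""
    return sum(t_ok - t_inv for t_inv, t_ok in samples) / len(samples)
```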

 

ALSO similar:

4.8.  Session Establishment Effectiveness Ratio (SEER)

Another RATIO with many different qualifications.

 

There are many other dissimilar metrics in each draft:

 

Benchmarking has a large number of “Rate” metrics, which are intended to capture the maximum events per second that a Device Under Test can handle without failure. These are time rates, not ratios.
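
The search for such a maximum rate might be sketched as below. The probe function is hypothetical (neither draft prescribes a search procedure), standing in for a trial that offers sessions at a given rate and reports whether the DUT handled them all without failure.

```python
def max_session_rate(probe, lo=1.0, hi=10000.0, tol=1.0):
    """Binary search for the highest offered rate (events/second) at
    which probe(rate) still reports no failures. probe is a
    hypothetical trial runner; lo/hi bracket the search."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if probe(mid):
            lo = mid   # DUT kept up: try a higher rate
        else:
            hi = mid   # failures observed: back off
    return lo
```

Any real methodology would also fix trial duration and failure criteria; this sketch only shows why the result is a rate (events per second), not a ratio.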

 

The E2E draft defines many metrics that track both setup and take-down, and both success and failure. Many of these conditions have no parallel metric in the benchmarking work, and they are not proposed as benchmarks.