A central issue in any system that relies on multiple protocols is finding break-even points, that is, knowing when one method is preferable to another. In numerical computing such approaches are called polyalgorithms, and the break-even points can often be specified in terms of a few parameters giving problem characteristics independent of the computing environment. In communication systems, however, the issue is significantly more complex because of dependence on hardware, networks, and software implementations. Particularly for SOAP, which is undergoing rapid development (for example, there is currently no publicly available SOAP parser for C++), an extensive testing framework is required.
This section describes the framework used for performance tests, the tests performed and observations on the results. The framework, written in Java, executes test programs that are instrumented to automatically accumulate performance data over several runs in a common format suitable for visualization and other processing. Plots showing data for the tests performed are presented in the appendices. For each test, the mean value of accumulated runs is plotted together with error bars corresponding to one standard deviation in each direction.
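The per-test statistics described above can be sketched as follows. This is a minimal illustration, not code from the framework itself; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Accumulates timings from repeated runs of one test and reports the
// mean and the sample standard deviation used for the error bars
// (one standard deviation in each direction).
public class RunStats {
    private final List<Double> samples = new ArrayList<>();

    public void add(double millis) {
        samples.add(millis);
    }

    public double mean() {
        double sum = 0.0;
        for (double s : samples) sum += s;
        return sum / samples.size();
    }

    // Sample standard deviation; each error bar extends this far
    // above and below the plotted mean.
    public double stddev() {
        double m = mean();
        double ss = 0.0;
        for (double s : samples) ss += (s - m) * (s - m);
        return Math.sqrt(ss / (samples.size() - 1));
    }

    public static void main(String[] args) {
        RunStats stats = new RunStats();
        for (double t : new double[] {10.0, 12.0, 11.0, 13.0}) stats.add(t);
        System.out.println("mean = " + stats.mean());     // 11.5
        System.out.println("stddev = " + stats.stddev());
    }
}
```

Accumulating raw samples rather than running sums keeps the sketch simple and makes it easy to dump all runs in a common format for later visualization.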
Figure 3 summarizes the machines used in the tests. These machines range from typical workstations to high-end servers. All the UltraSPARC machines were running SunOS v 5.7; the Linux machines were running RedHat 6.2 with kernel version 2.2.16.
On the UltraSPARC machines, JDK 1.2 Solaris VM (build Solaris_JDK_1.2.2_05a, native threads, sunwjit) and JDK 1.3 Standard Edition (build 1.3.0-beta_refresh) with Java HotSpot(TM) Client VM (build 1.3.0-beta_refresh, mixed mode) were used. On the Linux machines, JDK 1.2 Classic VM (build 1.2.2_006, green threads, nojit) and JDK 1.3 Standard Edition (build 1.3.0beta_refresh-b09) with Java HotSpot(TM) Client VM (build 1.3.0beta-b07, mixed mode) were used.
Java's System.currentTimeMillis() call was used for timing measurements. A high-resolution clock was used for some tests on UltraSPARC systems. The tests were divided into sets A, B, and C. Each set was executed on various combinations of machine configurations, hardware environments, and protocols.
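Because System.currentTimeMillis() offers only millisecond resolution, a measurement of a fast operation must wrap many iterations and report the per-iteration average. The sketch below illustrates this pattern; it is an assumption about the measurement style, not the framework's actual harness.

```java
// Times a test body with System.currentTimeMillis(). The clock's
// millisecond resolution is too coarse for a single fast call, so each
// measurement runs the body many times and averages.
public class Timer {
    public static double timePerCall(Runnable body, int iterations) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) body.run();
        long elapsed = System.currentTimeMillis() - start;
        return (double) elapsed / iterations;  // mean ms per call
    }

    public static void main(String[] args) {
        // Illustrative workload: building a short string in a loop.
        double ms = timePerCall(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000; i++) sb.append(i);
        }, 100);
        System.out.println("avg ms/call = " + ms);
    }
}
```

Repeating this whole measurement several times yields the accumulated runs from which the mean and standard deviation are plotted.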