 
Download | Install | Usage (1) (2) | How to | Problems | Limitations | Performances 
Performances
     

WARNING: this page is old. I mean, really. Like 2000 or so. So take it with a pinch of salt. Or just skip it actually :)

Although dicom2 should handle most combinations of sizes, bit structures and photometric interpretations, you can check the medical image samples page to see whether a particular type of file has already been successfully tested...
 
All tests have been conducted on the following platforms: 
 

| O.S. | Processor | MHz | Byte Order | RAM (MB) | HD | Compiler |
|---|---|---|---|---|---|---|
| Windows NT | Pentium II | 300 | Little Endian | 96 | SCSI 2 | Borland C++ 5.02 |
| Linux 2.0.31 | Pentium II | 300 | Little Endian | 96 | EIDE | egcs 1.0.2 |
| Solaris 2.5.1 | UltraSparc | 170 | Big Endian | 128 | SCSI 2 | Sun CC 4.2 |
 
I recently removed the Windows 95 and Linux tests performed on a Pentium 120: that configuration is no longer available :) Moreover, those results would not be accurate anymore given the new I/O optimizations implemented in version 1.8.
 
Each test was run on a set of 100 files of different sizes, bit structures (where "[x, y | z]" means "x bits stored, y bits allocated and high bit z"), photometric interpretations and syntaxes. The following tables report the results, in seconds (the smaller, the faster), of each call to dicom2 with a different set of options and tasks. Numbers in brackets at the bottom of some tables report the results previously obtained with version 1.7, when the difference with version 1.8 is noteworthy (the comparison is made between the totals of each call, except those using -p or --get, which were not available in version 1.7).
100 files, Explicit VR Little Endian syntax

256x256, [12, 16 | 11], MONOCHROME2

| Options | Windows NT | Linux | Solaris |
|---|---|---|---|
| -w --win | 5.5 | 2.5 | 4.8 |
| -w | 5.3 | 2.9 | 5.1 |
| -a | 5.2 | 2.8 | 5.4 |
| -p --compression=no | 8.5 | 6.4 | 9.4 |
| -d | 3.6 | 0.8 | 5.2 |
| -r | 3.8 | 1 | 5.3 |
| -t | 1 | 0.6 | 2 |
| -w -a -d -r -t | 19 | 4.5 | 14.2 |
| -d --halve | 2.5 | 0.8 | 3.9 |
| -d --crop=10:10:100:100 | 2.3 | 0.6 | 3 |
| -d --fliph --flipv | 3.5 | 0.9 | 5.5 |
| --get=min:max | 2.5 | 0.9 | 3.1 |
| TOTAL (without -p --get) | 51.7 | 17.4 | 54.4 |
| version 1.7 | | | [108.8] |
 
256x256, [12, 12 | 11], MONOCHROME2

| Options | Windows NT | Linux | Solaris |
|---|---|---|---|
| -w --win | 5.6 | 3.2 | 5.5 |
| -w | 5.1 | 4.2 | 6.9 |
| -a | 5.3 | 4.1 | 7.2 |
| -p --compression=no | 8.6 | 7.4 | 11.1 |
| -d | 3.1 | 0.5 | 4 |
| -r | 3.5 | 1.2 | 5.8 |
| -t | 0.7 | 0.4 | 1.1 |
| -w -a -d -r -t | 19 | 5.9 | 15.9 |
| -d --halve | 2.4 | 1.2 | 4.8 |
| -d --crop=10:10:100:100 | 1.8 | 0.6 | 2.7 |
| -d --fliph --flipv | 4 | 2.5 | 10.5 |
| --get=min:max | 1.8 | 1.8 | 5 |
| TOTAL (without -p --get) | 50.5 | 23.8 | 64.4 |
| version 1.7 | | | [105.3] |
 
320x240, [8, 8 | 7], RGB

| Options | Windows NT | Linux | Solaris |
|---|---|---|---|
| -w --win | - | - | - |
| -w | 6.8 | 1.8 | 6.8 |
| -a | 7 | 1.7 | 7.7 |
| -p --compression=no | 8.9 | 6.3 | 11.7 |
| -d | 6.6 | 1.1 | 6.9 |
| -r | 7 | 1 | 6.7 |
| -t | 0.7 | 0.4 | 1 |
| -w -a -d -r -t | 44 | 4 | 19.6 |
| -d --halve | 6.5 | 0.9 | 4.7 |
| -d --crop=10:10:100:100 | 3.2 | 0.7 | 3.7 |
| -d --fliph --flipv | 7 | 2 | 8 |
| --get=min:max | - | - | - |
| TOTAL (without -p --get) | 88.8 | 13.6 | 65.1 |
 
100 files, Explicit VR Big Endian syntax

256x256, [12, 16 | 11], MONOCHROME2

| Options | Windows NT | Linux | Solaris |
|---|---|---|---|
| -w --win | 5.5 | 2.7 | 4.6 |
| -w | 5.3 | 3 | 4.8 |
| -a | 5.7 | 2.9 | 5.1 |
| -p --compression=no | 8.5 | 6.7 | 9.3 |
| -d | 3.7 | 0.8 | 5.1 |
| -r | 3.9 | 1.1 | 5.3 |
| -t | 1 | 0.6 | 2 |
| -w -a -d -r -t | 20 | 4.6 | 14 |
| -d --halve | 2.6 | 0.8 | 3.8 |
| -d --crop=10:10:100:100 | 2.3 | 0.6 | 2.8 |
| -d --fliph --flipv | 3.6 | 1 | 5.3 |
| --get=min:max | 2.5 | 1 | 2.9 |
| TOTAL (without -p --get) | 53.6 | 18.2 | 52.8 |
| version 1.7 | [73.1] | [27.5] | |
 
256x256, [12, 12 | 11], MONOCHROME2

| Options | Windows NT | Linux | Solaris |
|---|---|---|---|
| -w --win | 5.5 | 3.3 | 5.4 |
| -w | 5.2 | 4.3 | 6.7 |
| -a | 5.4 | 4.1 | 7 |
| -p --compression=no | 8.6 | 7.4 | 11 |
| -d | 2.8 | 0.6 | 3.8 |
| -r | 3.5 | 1.3 | 5.7 |
| -t | 0.7 | 0.4 | 1.3 |
| -w -a -d -r -t | 18 | 6 | 15.4 |
| -d --halve | 2.6 | 1.3 | 4.7 |
| -d --crop=10:10:100:100 | 1.9 | 0.6 | 2.7 |
| -d --fliph --flipv | 4 | 2.7 | 10.3 |
| --get=min:max | 2.9 | 1.9 | 4.8 |
| TOTAL (without -p --get) | 49.6 | 24.6 | 63 |
| version 1.7 | [63.7] | [32.1] | |
 
320x240, [8, 8 | 7], RGB

| Options | Windows NT | Linux | Solaris |
|---|---|---|---|
| -w --win | - | - | - |
| -w | 6.9 | 1.8 | 6.8 |
| -a | 6.5 | 1.7 | 7.7 |
| -p --compression=no | 8.9 | 6.3 | 11.7 |
| -d | 6.6 | 1.1 | 6.9 |
| -r | 7 | 1 | 6.7 |
| -t | 0.7 | 0.4 | 1 |
| -w -a -d -r -t | 44 | 4 | 19.6 |
| -d --halve | 6.5 | 0.9 | 4.7 |
| -d --crop=10:10:100:100 | 3.2 | 0.7 | 3.7 |
| -d --fliph --flipv | 7 | 2 | 8 |
| --get=min:max | - | - | - |
| TOTAL (without -p --get) | 88.8 | 13.6 | 65.1 |
 
 
A few observations
    The Linux system is a killer: it is definitely a good choice in this context. 
 
The Windows NT results are very disappointing: 2 to 5 times slower than the Linux measurements on the same hardware, a Pentium II @ 300 MHz. I suspect the poor performance of the I/O functions or drivers is the main cause; the quality of the code generated by Borland C++ might be involved too. It is not a multitasking issue: Windows 95, which is not a true multitasking system, performed poorly as well. It is so bad that even the UltraSparc @ 170 MHz runs faster :( And do not ask me why RGB files are processed 80% slower than MONOCHROME files :) Note also that the NT results were rather unstable, with 10% to 30% variation between benchmark sessions.
 
The I/O functions have been enhanced in version 1.8. The algorithms used to read or write little-endian files on a big-endian machine (and vice versa) were not optimized, resulting in unnecessarily slow transfers. This is no longer the case, and you should hardly notice any performance penalty from byte-swapping. Have a look at the two sets of tables (one uses the Little Endian syntax, the other the Big Endian one): the difference, which ran from 40% to 100% with version 1.7, should not exceed 5% with version 1.8!
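dicom2's actual implementation is not shown here, but as a minimal C++ sketch (hypothetical names), the byte-swapping a converter must perform when the transfer-syntax byte order differs from the host byte order boils down to:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Swap the two bytes of a 16-bit sample (little-endian <-> big-endian).
static inline std::uint16_t swap16(std::uint16_t v) {
    return static_cast<std::uint16_t>((v << 8) | (v >> 8));
}

// Swap a whole pixel buffer in place, e.g. after reading a big-endian
// file on a little-endian machine (or vice versa).
void swap_buffer16(std::uint16_t* buf, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        buf[i] = swap16(buf[i]);
}
```

The cost is one pass over the pixel data, which is why a well-optimized implementation only adds a few percent to the total time.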
 
Although the first two sets have the same size (256x256), the same photometric interpretation (MONOCHROME2) and almost the same bit structure ([12, 16 | 11] versus [12, 12 | 11]), dicom2 performs faster on the first set: do not worry about this! It has been optimized to work faster (10% to 90%) on samples allocated on a multiple of 8 bits (i.e. when each sample starts at the beginning of a byte or a word). Oddly, this optimization seems to have no effect on Windows systems!
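To see where the difference comes from, here is a minimal C++ sketch (a hypothetical helper, not dicom2's code, assuming the common packing where two 12-bit samples share three bytes, low bits first): a [12, 16 | 11] buffer can be read word by word, while a [12, 12 | 11] buffer needs shifting and masking at every sample.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Unpack n 12-bit samples packed back to back (two samples per 3 bytes).
// Compare with 16-bit-allocated data, where each sample is simply
// src[2*i] | (src[2*i+1] << 8): no per-sample bit juggling is needed.
std::vector<std::uint16_t> unpack12(const std::uint8_t* src, std::size_t n) {
    std::vector<std::uint16_t> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t bit  = i * 12;   // absolute bit offset of sample i
        std::size_t byte = bit / 8;
        if (bit % 8 == 0)            // even sample: 8 low bits + 4 high bits
            out[i] = static_cast<std::uint16_t>(src[byte] | ((src[byte + 1] & 0x0F) << 8));
        else                         // odd sample: 4 high bits of the shared byte
            out[i] = static_cast<std::uint16_t>((src[byte] >> 4) | (src[byte + 1] << 4));
    }
    return out;
}
```

Every odd sample straddles a byte boundary, which explains why byte-aligned (multiple of 8 bits) data can be processed noticeably faster.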
 
It is obvious that converting the same file to several destination formats in a single call to dicom2 is more efficient than calling dicom2 once per destination format. Look at the tables, and compare the -w -a -d -r -t test to the sum of the separate -w, -a, -d, -r and -t tests.
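The reason is simply that the expensive part, parsing and decoding the DICOM file, happens once per call. A sketch with hypothetical stand-in counters (not dicom2's real internals) makes the amortization explicit:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-ins for the two stages of a conversion: decode the DICOM file,
// then encode one output per requested format.
struct Counters { int decodes = 0; int encodes = 0; };

// One dicom2 call per destination format: the file is decoded every time.
void convert_separately(const std::vector<std::string>& formats, Counters& c) {
    for (const auto& f : formats) { ++c.decodes; ++c.encodes; (void)f; }
}

// A single call with all formats: the file is decoded only once.
void convert_in_one_call(const std::vector<std::string>& formats, Counters& c) {
    ++c.decodes;
    for (const auto& f : formats) { ++c.encodes; (void)f; }
}
```

With the five formats of the -w -a -d -r -t test, the single call performs one decode instead of five, while the encoding work stays the same.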
 
 
Medical Imaging / Sébastien Barré / Jan. 1998