Friday, July 27, 2007


Percentage of Used Methods Covered: 0.2572512

Of the system methods used in the wild, about 75% are uncovered by any unit test.

Top 10 Most Frequent Uncovered Methods:

System.Runtime.InteropServices.UCOMIConnectionPoint.Unadvise (int)[5629],
System.Runtime.InteropServices.UCOMIConnectionPoint.Advise (object,int&)[5509],
System.String.get_Length ()[4866],
System.Object.GetType ()[2976],
System.Runtime.InteropServices.GCHandle.Alloc (object,gchandletype)[2744],
System.Runtime.InteropServices.GCHandle.AddrOfPinnedObject ()[2708],
System.String.get_Chars (int)[1751],
System.Type.get_FullName ()[1706],
System.Data.DataRow.get_Item (string)[1590]

Top 10 Types Used:

System.Windows.Forms.Control: 39031,
System.String: 37039,
System.Collections.ArrayList: 33015,
System.Threading.Monitor: 29379,
System.Type: 15596,
System.Collections.IEnumerator: 12835,
System.Runtime.InteropServices.UCOMIConnectionPoint: 11138,
System.Collections.Hashtable: 10776,
System.Delegate: 10191,
System.Runtime.InteropServices.GCHandle: 10056


My last round of work focused on feasibility and demonstrating the concept; this time I focused on correctness and completeness. For example, the analysis didn't distinguish between method overloads, and I suspected that only some of the coverage data was being joined with the usage data. I also worked on some minor enhancements, like retrieving the length of class library methods.

It turns out the monocov tool output the method name without any parameters when it had no coverage data, and I was only matching against that parameterless form. When a method did have coverage data, monocov would generate a name like:
MethodName (string,byte[],System.Data.DataTable)

I changed my tool to emit this same format so that it can distinguish the overloads. Unfortunately, this required some custom code, because the output shortens certain types to their C# aliases:
System.String -> string
System.Int32[] -> int[]
System.Object -> object
so I had to do the same.

Currently there is still a problem if the type is passed by reference (e.g. System.String&).
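The matching described above amounts to a small normalization pass. A Python sketch of the idea follows; the alias table and function names are my own illustration, not monocov's actual code:

```python
# C# aliases that the monocov-style output substitutes for full CLR names.
# (Only a few entries shown; the real table covers all the primitives.)
CSHARP_ALIASES = {
    "System.String": "string",
    "System.Int32": "int",
    "System.Byte": "byte",
    "System.Object": "object",
}

def shorten_type(clr_name: str) -> str:
    """Map a full CLR type name to its monocov-style short form."""
    # Peel off array suffixes so "System.Int32[]" becomes "int[]".
    suffix = ""
    while clr_name.endswith("[]"):
        clr_name = clr_name[:-2]
        suffix += "[]"
    # NOTE: by-reference types ("System.String&") are deliberately not
    # handled here, mirroring the open problem mentioned above.
    return CSHARP_ALIASES.get(clr_name, clr_name) + suffix

def normalize_signature(name: str, params: list[str]) -> str:
    """Render a name like "MethodName (string,byte[],System.Data.DataTable)"."""
    return f"{name} ({','.join(shorten_type(p) for p in params)})"
```

With this, both the usage tool and the coverage export produce the same key for each overload, so the join becomes a plain string match.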

I also fixed monocov so that it outputs method names consistently when there is no coverage data; previously it would just repeat the same method name in the output.
I fixed a bug in my simplified exporter that reused the same class name when more than one class was in the source file. While I was in the monocov code, I also added a feature to export the length of each method.

Now that I can generate much sounder results, I will calculate some statistics and post them shortly.

Friday, July 13, 2007


Unit Testing Effective?

Apparently there is an inverse correlation between the methods used in the field and unit test coverage. That is, the methods the unit tests cover were not the ones used in the field, and the methods that were used had no coverage.

A larger sample and verification of the results are still necessary; however, it is an interesting result so far.
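One way to quantify the claim once the larger sample is in: compute a correlation coefficient between usage frequency and coverage. A sketch with made-up numbers (plain Pearson correlation here, which may or may not be the statistic the final analysis uses):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (field-usage frequency, coverage) pairs. Methods with
# coverage -1 (no test data) would be excluded before this step.
usage = [5629, 4866, 2976, 12, 3, 1]
coverage = [0.0, 0.0, 0.1, 0.9, 0.8, 1.0]
r = pearson(usage, coverage)  # strongly negative for data shaped like this
```

A value of r near -1 would support the inverse-correlation observation; near 0 would suggest the pattern was an artifact of the small sample.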

Data Collection Methods

Over 500 projects were downloaded from the project site. The projects were selected based on the labels Mono or CSharp. In addition, 2 projects from a company were included. Each project was built manually, or a binary distribution was acquired. Some projects had to be excluded due to immaturity (not building), misclassification, or lacking the appropriate resources to build them.


For this run, only 35 of the projects were included in these results.

The information shown includes each type.method and its corresponding:
  • Frequency across all applications.
  • Number of applications the type.method appears in.
  • Coverage for that type.method (1 = 100%, 0.50 = 50%, 0 = 0%, -1 = no test data).
Note: Coverage data was not collected for Managed.Windows.Forms at this time.
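The per-method records described above can be pictured roughly like this (names and the example row are my own illustration, not the tool's actual schema):

```python
from dataclasses import dataclass

@dataclass
class MethodResult:
    type_method: str  # e.g. "System.String.get_Length ()"
    frequency: int    # total occurrences across all sampled applications
    app_count: int    # number of applications the method appears in
    coverage: float   # 1.0 = fully covered, 0.0 = uncovered, -1 = no test data

    @property
    def has_test_data(self) -> bool:
        # -1 is a sentinel, not a real coverage value, so filter on it
        # before computing any statistics over coverage.
        return self.coverage >= 0

# Hypothetical row: heavily used in the field, but no coverage data collected.
row = MethodResult("System.String.get_Length ()", 4866, 30, -1)
```

Keeping the "no test data" sentinel distinct from 0% coverage matters: lumping the two together would overstate how much used code is untested.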

One thing I think would help would be to also display the lines of code and complexity of each method, in order to estimate how hard the method is to test, or whether it is worth testing at all.

Viewing the Results Yourself

I uploaded the results here:

You can get the mono student projects from here
svn checkout mono-soc-2007

Compile from source, or run the executable (relies on Windows Forms) under the directory:

You can easily load the results under the Results tab using the (Import Results) button. Load the XML file from the web, or from christopher/FieldStat/Data/Results/FieldStatResults.xml

The default sort is on AppFrequency; you can re-sort on the other fields by clicking the column headers.

Lessons and Design Issues

My focus was initially just on accurately getting usage results from one executable and comparing that to the coverage data. However, once I was working with real data, I realized that the real challenge was dealing with many applications and how the information varies across them. There are some interesting emergent problems. For instance, suppose an application uses the log4net library. Is that library treated as part of the application? What if several applications link against the same library?

Collecting the applications for sampling field data should be a project in itself. There should be an easy, automated way to scan SourceForge and similar sites and gather executables as samples.

Tuesday, July 3, 2007

Prototype Complete

I was recently squirreled away in the Rocky Mountains of Canada, so I couldn't make updates over the wire. However, I have successfully completed the prototype mono coverage tool.

So far, I have only run the program on a small set of usage and coverage data, so I cannot make any bold statements about glaring mismatches in unit testing. However, I do have some observations.
  • Statements that are uncovered tend to be paths for exceptional behavior.
  • I think more levels of coverage should be supported: Branch, conditionals, etc.
  • The monocov tool could use a good round of refactoring.
Some stumbling blocks included:
  • Extra hoops in building mono as a non-root user (prefixes, no ldconfig, etc.).
  • The monocov tool's export was intended for pretty presentation rather than for feeding other tools.
Outstanding tasks include:
  • Frequency information doesn't account for overloads.
  • Need to collect all coverage information and run the tool over several field binaries.
Phase 2 development steps include implementing CodeRank to weight the frequency, and introducing an alternative user interface for viewing the data.
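CodeRank is not implemented yet; the rough idea is PageRank over the call graph, so that a method called from many important places outranks one with the same raw frequency but unimportant callers. A minimal power-iteration sketch, assuming a toy call graph of my own invention:

```python
def code_rank(calls, damping=0.85, iters=50):
    """calls: dict mapping each method to the list of methods it calls."""
    nodes = list(calls)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Every node keeps a small baseline rank (the "teleport" term).
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in calls.items():
            if targets:
                # Split this node's rank evenly among its callees.
                share = damping * rank[src] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:
                # Leaf method (calls nothing): spread its rank evenly.
                for n in nodes:
                    nxt[n] += damping * rank[src] / len(nodes)
        rank = nxt
    return rank

graph = {"Main": ["Parse", "Render"], "Parse": ["get_Length"],
         "Render": ["get_Length"], "get_Length": []}
ranks = code_rank(graph)
# get_Length, reached from two distinct callers, ends up ranked highest.
```

The resulting score could then multiply (or replace) the raw frequency when sorting the results table.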