Publications

The following are my publications in various fields. A BibTeX file with the corresponding bibliographic entries is available.

Computer Vision and Artificial Intelligence

An Algorithm for Projective Point Matching in the Presence of Spurious Points

Point matching is the task of finding correspondences between two point sets that bring the sets into alignment. Pure point matching uses only the locations of the points to constrain the problem. The problem has broad practical applications, but it has been well studied only when the geometric transformation relating the two point sets is of relatively low order. Here we present a heuristic local search algorithm that can find correspondences between two-dimensional point sets related by a projective transform. Point matching is harder when spurious points appear in the sets to be matched; we present a heuristic algorithm that minimizes their effects.
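A minimal sketch (not the paper's algorithm) of the core subproblem: given a candidate correspondence between 2-D point sets, fit a projective transform (homography) and score how well it aligns the matched points. The function names are illustrative; only NumPy is used.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography mapping src -> dst via the standard
    DLT method. src, dst: (n, 2) arrays of matched points, n >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    h = vt[-1].reshape(3, 3)           # null vector holds the 9 entries
    return h / h[2, 2]

def match_error(h, src, dst):
    """Sum of squared distances between projected src points and dst."""
    pts = np.column_stack([src, np.ones(len(src))]) @ h.T
    proj = pts[:, :2] / pts[:, 2:3]    # perspective divide
    return float(((proj - dst) ** 2).sum())
```

A local search over correspondences would repeatedly perturb the match assignment, refit the homography, and keep the change whenever the error drops.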

Jason Denton and J. Ross Beveridge. An algorithm for projective point matching in the presence of spurious points. Pattern Recognition, 40:586-595, 2007.

Two Dimensional Projective Point Matching

Point matching is the task of finding a set of correspondences between two sets of points under some geometric transformation. A local search algorithm for point matching is presented and shown to be effective on problems where the point sets are related by a projective transformation. Random-starts local search is shown to be capable of solving very difficult point matching problems, and a heuristic key feature algorithm is presented that substantially improves the effectiveness of local search in most cases.
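For reference, a generic random-restarts local search skeleton of the kind the paper builds on. `random_state`, `cost`, and `neighbors` are problem-specific callables supplied by the caller; the names are illustrative, not taken from the paper.

```python
def local_search(state, cost, neighbors):
    """Greedy descent: move to an improving neighbor until none exists."""
    best, best_cost = state, cost(state)
    improved = True
    while improved:
        improved = False
        for cand in neighbors(best):
            c = cost(cand)
            if c < best_cost:
                best, best_cost, improved = cand, c, True
                break  # rescan the neighborhood from the new state
    return best, best_cost

def random_starts(random_state, cost, neighbors, starts=100):
    """Run local search from many random initial states; keep the best."""
    runs = (local_search(random_state(), cost, neighbors)
            for _ in range(starts))
    return min(runs, key=lambda run: run[1])
```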

Jason Denton and J. Ross Beveridge. Two dimensional projective point matching. In Southwest Symposium on Image Analysis and Interpretation, pages 77-81, April 2002.

The Traveling Salesrep Problem, Edge Assembly Crossover, and 2-opt

Optimal results for the Traveling Salesrep Problem have been reported on problems with up to 3038 cities using a genetic algorithm (GA) with Edge Assembly Crossover (EAX). This paper first attempts to independently replicate these results on Padberg's 532 city problem. We then evaluate the performance contribution of the various algorithm components. The incorporation of 2-opt into the EAX GA is also explored. Finally, comparative results are presented for a population-based form of 2-opt that uses partial restarts.
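For context, a plain 2-opt local search for the TSP; the paper's EAX genetic algorithm and its population-based 2-opt variant are considerably more involved, so this is only the textbook building block.

```python
def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Reverse the segment between edges (i, i+1) and (j, j+1) whenever
    doing so shortens the tour; stop at a local optimum."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):  # skip adjacent edges
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Gain from replacing edges (a,b),(c,d) with (a,c),(b,d).
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```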

Jean-Paul Watson, Charlie Ross, Victoria Eisele, Jason Denton, Jose Bins, Cesar Guerra, L. Darrell Whitley, and Adele E. Howe. The traveling salesrep problem, edge assembly crossover, and 2-opt. In Proceedings of the Fifth International Conference on Parallel Problem Solving from Nature, April 1998.

Software Engineering

A Software Implementation Progress Model

Software project managers use a variety of informal methods to track the progress of development and refine project schedules. Previous formal techniques have generally assumed a constant implementation pace. This is at odds with the experience and intuition of many project managers. We present a simple model for charting the pace of software development and helping managers understand the changing implementation pace of a project. The model was validated against data collected from the implementation of several large projects.

Dwayne Towell and Jason Denton. A software implementation progress model. In Luciano Baresi and Reiko Heckel, editors, Fundamental Approaches to Software Engineering, pages 93-106, Vienna, Austria, March 2006. Springer.

Software Engineering as Technology Transfer

We propose, as a challenge to the software engineering research community, a new view of software engineering as technology transition. In this view, the role of software engineering is to take the initial results of theorists and harden these results to the point where they are ready to be used in fielded, commercial systems. Current techniques for software verification, while adequate for serial, non-critical systems, are not sufficient to deal with verification and validation of parallel, concurrent, and embedded systems. We examine how the software development process might be modified by this view of software engineering as a process to migrate new technology to systems ready for use in high reliability environments.

Daniel E. Cooke and Jason Denton. Software engineering as technology transfer. In Proceedings of the Fifteenth International Conference on Software Engineering and Knowledge Engineering, pages 340-345, San Francisco, CA, July 2003. Knowledge System Institute.

Module Size Distribution and Defect Density

Data from several projects show a significant relationship between the size of a module and its defect density. Here we address the implications of this observation. Does the overall defect density of a software project vary with its module size distribution? More interesting still, can we exploit this dependence to reduce the total number of defects? We examine the available data sets and propose a model relating module size and defect density. It takes into account defects that arise from the interconnections among modules as well as defects that occur due to the complexity of individual modules. Model parameters are estimated using actual data. We then present a key observation that allows this model to be used not just for estimating defect density but also, potentially, for optimizing a design to minimize defects: in the data sets examined, module sizes often follow an exponential distribution. We show how the two models used together provide a way of projecting defect density variation, and we consider the possibility of minimizing defect density by controlling the module size distribution.
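An illustrative sketch only: the composite defect-density form and all parameter values below are assumptions for demonstration, not the model or estimates fitted in the paper. It shows the paper's key observation in action: if module sizes follow an exponential distribution, a per-module defect model can be averaged over that distribution to project overall defect density.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_size = 300.0                            # mean module size in lines (assumed)
sizes = rng.exponential(mean_size, 10_000)   # exponential size distribution

def defect_density(s, a=2.0, b=0.01, c=1e-5):
    """Hypothetical defects per line for a module of s lines:
    a/s  - per-module (interface) defects amortized over the module,
    b    - baseline rate,
    c*s  - rate growing with module complexity."""
    return a / s + b + c * s

# Project total defects and overall density across the whole system.
defects = defect_density(sizes) * sizes
print("projected overall defect density:",
      defects.sum() / sizes.sum(), "defects/line")
```

Shifting the size distribution (here, `mean_size`) changes the projected overall density, which is the lever the paper considers for design optimization.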

Yashwant K. Malaiya and Jason A. Denton. Module size distribution and defect density. In Proceedings of the International Symposium on Software Reliability Engineering, pages 62-71, October 2000.

Requirements Volatility and Defect Density

In an ideal situation, the requirements for a software system would be completely and unambiguously determined before design, coding, and testing take place. In actual practice there are often changes in the requirements, causing some of the software components to be redesigned, deleted, or added. Higher requirements volatility causes the resulting software to have a higher defect density.

In this paper we analytically examine the influence of requirement changes taking place at different times by examining the consequences of software additions, removals, and modifications. We take into account interface defects, which arise due to errors at the interfaces among software sections. We compare the defect density that results in the presence of requirements volatility with the defect density that would have resulted in an ideal situation where the initial requirements were perfect. The results show that requirement changes taking place closer to the release date have a greater impact on defect density. In each case we compute a defect equivalence factor representing the overall impact of requirements volatility. Further work required to obtain an overall model, suitable for use in an empirical model of defect density, is outlined.

Yashwant K. Malaiya and Jason A. Denton. Requirements volatility and defect density. In Proceedings of the International Symposium on Software Reliability Engineering. IEEE, November 1999.

Estimating The Number of Residual Defects

The number of residual defects is one of the most important factors in deciding whether a piece of software is ready to be released. In theory one could find and count all the defects, but in practice it is impossible to find them all within a reasonable amount of time. Estimating defect density can become difficult for high-reliability software, since the remaining defects can be extremely hard to test for. One possible approach is to apply the exponential SRGM and thus estimate the total number of defects present at the beginning of testing. Here we show the problems with this approach and present a new approach based on software test coverage. Software test coverage directly measures the thoroughness of testing, avoiding the problem of variation in test effectiveness. We apply this model to actual test data to project the residual number of defects. The results show that this method yields estimates that are more stable than those of existing methods. The method is also easier to understand, and the convergence of the estimate can be observed visually.
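A sketch of the baseline approach the paper critiques: fit the exponential SRGM mu(t) = b0 * (1 - exp(-b1 * t)) to cumulative defect counts and read off b0 as the projected total. The data below is invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_srgm(t, b0, b1):
    """Exponential SRGM mean-value function."""
    return b0 * (1.0 - np.exp(-b1 * t))

t = np.array([10, 20, 30, 40, 50, 60, 70, 80.0])        # test hours (fake)
defects = np.array([21, 38, 52, 63, 71, 77, 82, 86.0])  # cumulative (fake)

(b0, b1), _ = curve_fit(exp_srgm, t, defects, p0=(100.0, 0.02))
print(f"estimated total defects b0 = {b0:.1f}; "
      f"projected residual = {b0 - defects[-1]:.1f}")
```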

Yashwant K. Malaiya and Jason A. Denton. Estimating the number of residual defects. In Third IEEE International High-Assurance Systems Engineering Symposium, pages 98-105, Washington, D.C., November 1998. IEEE.

Estimating the Number of Defects: A Simple and Intuitive Approach

The number of defects is an important measure of software quality that is widely used in industry. Unfortunately, accurate estimation of defect density can be a difficult task. Sampling techniques generally assume that the faults found are a representative sample of all existing faults, which results in inaccurate estimates. Other existing techniques provide little information beyond the number of faults already found. Software test coverage tools can easily and accurately measure the extent to which the software has been exercised. Both testing time and test coverage can be used as measures to model the defect-finding process; however, test coverage is a more direct measure of test effectiveness and can be expected to correlate better with the number of defects found. Here we describe a simple and intuitive procedure that can be used to estimate the total number of residual defects once a suitable coverage level has been achieved. The technique is consistent with common testing approaches. The method is illustrated using actual data and compared with existing approaches. Our results show that it yields consistent estimates. An enhanced version of this approach is being implemented in a GUI tool.
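A hedged sketch of a coverage-based extrapolation in the spirit of the paper (the exact procedure is given there): past a "knee" in the coverage curve, defects found tend to grow roughly linearly with coverage, so a linear fit over the high-coverage tail can be extrapolated to 100% coverage. The data and the 0.70 cutoff are invented for illustration.

```python
import numpy as np

coverage = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.91])  # branch coverage
found = np.array([40, 47, 55, 64, 72, 78.0])               # cumulative defects

# Fit only the high-coverage tail (here: coverage >= 0.70, an assumption).
tail = coverage >= 0.70
slope, intercept = np.polyfit(coverage[tail], found[tail], 1)

total_at_full_coverage = slope * 1.0 + intercept
print(f"projected defects at 100% coverage: {total_at_full_coverage:.0f}; "
      f"projected residual: {total_at_full_coverage - found[-1]:.0f}")
```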

Naixin Li, Yashwant K. Malaiya, and Jason A. Denton. Estimating the number of defects: A simple and intuitive approach. In Proceedings of the International Symposium on Software Reliability Engineering, Paderborn, Germany, November 1998.

What do Software Reliability Parameters Represent?

Here we investigate the underlying basis connecting software reliability growth models to the software testing and debugging process. This is important for several reasons. First, if the parameters have an interpretation, then they constitute a metric for the software test process and the software under test. Second, it may be possible to estimate the parameters even before testing begins. These a priori values can serve as a check on the values computed at the beginning of testing, when the test data is dominated by short-term noise. They can also serve as initial estimates when iterative computations are used.

Among the two-parameter models, the exponential model is characterized by its simplicity, and both its parameters have a simple interpretation. However, some studies have found that the logarithmic Poisson model has superior predictive capability. Here we present a new interpretation of the logarithmic model parameters. The problem of a priori parameter estimation is considered using available data, and the use of the results obtained is illustrated with examples. Variability of the parameters with the testing process is examined.
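For reference, the standard mean-value functions of the two two-parameter models discussed; the interpretation of their parameters is the subject of the paper, so the docstring glosses are only the conventional readings.

```python
import numpy as np

def mu_exponential(t, b0, b1):
    """Exponential model: conventionally, b0 is the total expected
    number of defects and b1 the per-defect detection rate."""
    return b0 * (1.0 - np.exp(-b1 * t))

def mu_log_poisson(t, b0, b1):
    """Logarithmic Poisson (Musa-Okumoto) model: failure intensity
    decays as b0*b1 / (1 + b1*t), so mu grows without bound."""
    return b0 * np.log(1.0 + b1 * t)
```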

Yashwant K. Malaiya and Jason A. Denton. What do software reliability parameters represent? In Proceedings of the International Symposium on Software Reliability Engineering, pages 124-135, Albuquerque, NM, November 1997.

Dissertation and Thesis

Two Dimensional Projective Point Matching

Point matching is a problem that occurs in several forms in computer vision and other areas. Solving practical point matching problems requires that a point matching algorithm allow for an appropriate class of geometric transformations between the points in the model and their instance in the data. Many real-world point matching problems require a two-dimensional projective transformation to relate the model to the data. Point matching under this class of transformations has received little attention, and existing algorithms are inadequate. Existing general polynomial-time point matching algorithms by Baird, Cass, and Breuel are formulated for lower-order transformation classes and have difficulty scaling. The RANSAC algorithm, which represents the current best solution to the problem under the projective transformation, cannot solve the problem when there are significant amounts of noise and clutter in the data sets, a condition likely to occur in many real problem instances.

Presented here is a new algorithm for point matching based on local search. The algorithm is a general solution to the two-dimensional point matching problem under all transformation classes, although the focus is on the projective case. It deals gracefully with more clutter and noise than existing algorithms while still providing an efficient solution to easier problem instances. A randomized version of the algorithm is presented, along with a superior version that uses a key feature algorithm to identify partial matches that may be part of the optimal solution. The effectiveness of these algorithms is validated on image registration and model recognition problems using data obtained from real imagery: point sets of various sizes containing varying amounts of noise and clutter.

Jason Denton. Two Dimensional Projective Point Matching. PhD thesis, Colorado State University, Ft. Collins, Colorado, 2002.

Accurate Software Reliability Estimation

A large number of software reliability growth models are now available. It is widely known that none of these models performs well in all situations, and that choosing the appropriate model a priori is difficult. For this reason, recent work has focused on making these models more accurate rather than trying to find a model that works in all cases, including various efforts at data filtering and recalibration and an examination of the physical interpretation of model parameters. Here we examine the impact of the parameter estimation technique on model accuracy and show that the maximum likelihood method provides more reliable estimates than the least squares method. We present an interpretation of the parameters of the popular logarithmic model, and show that it may be possible to use this interpretation to overcome some of the difficulties found in working with early failure test data. We present a new software reliability model, based on the objective measure of program coverage, and show how it can be used to predict the number of defects in a program. We discuss the meaning of this model's parameters and suggest what needs to be done to gain a greater understanding of it. Finally, we present a tool we have developed that supports and integrates many of the techniques presented here, making them easily accessible to practitioners.
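A sketch contrasting the two estimation techniques on the logarithmic model mu(t) = b0*ln(1 + b1*t): least squares fits the cumulative curve directly, while maximum likelihood treats the per-interval counts as Poisson. The data is invented for illustration, and this is only the generic formulation, not the thesis's full procedure.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize

t = np.array([10, 20, 30, 40, 50, 60.0])    # interval endpoints (fake)
cum = np.array([25, 41, 52, 61, 68, 74.0])  # cumulative defects (fake)

def mu(t, b0, b1):
    """Logarithmic model mean-value function."""
    return b0 * np.log(1.0 + b1 * t)

# Least squares fit to the cumulative counts.
(ls_b0, ls_b1), _ = curve_fit(mu, t, cum, p0=(50.0, 0.1))

# Maximum likelihood: the count in each interval is Poisson with
# mean mu(t_i) - mu(t_{i-1}); minimize the negative log-likelihood.
counts = np.diff(cum, prepend=0.0)

def nll(p):
    b0, b1 = p
    if b0 <= 0 or b1 <= 0:
        return np.inf
    m = np.diff(mu(t, b0, b1), prepend=0.0)
    return float(np.sum(m - counts * np.log(m)))

ml = minimize(nll, x0=(50.0, 0.1), method="Nelder-Mead")
print("least squares:", ls_b0, ls_b1, " maximum likelihood:", *ml.x)
```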

Jason Denton. Accurate Software Reliability Estimation. Master's thesis, Colorado State University, Ft. Collins, Colorado, 1999.