Graduate Theses and Dissertations
Permanent URI for this collection: https://hdl.handle.net/10877/135
Browsing Graduate Theses and Dissertations by Department "Computer Science"
Now showing 1 - 20 of 204
Item: 3D Sketch Recognition Using The Microsoft Kinect (2014-05)
Bulgerin, Travis; Lu, Yijuan; Ngu, Anne; Zong, Ziliang

The concept of sketch-based recognition has recently been used to enhance object categorization and speed up image retrieval. However, in each of the previous studies, the user was required to sketch on a two-dimensional plane; no studies have yet examined the effect of incorporating depth information into a sketch. The motivation behind this project is to develop software that allows a user to draw in three-dimensional space and to determine whether this depth information yields higher object-categorization accuracy. First, utilizing the Microsoft Kinect, software was developed to establish a virtual drawing board for three-dimensional sketching. Second, a new learning-based approach is proposed for model feature extraction and recognition. The experimental results demonstrate the validity of the study as well as the effectiveness of the proposed solution.

Item: A Finite State Machine CASE Tool (1997-05)
Cureington, James A.; Davis, Wilbon P.; Amon, Tod T.; Martin, Cecil E.

Software systems specified in terms of states and events, such as communication protocols, window applications, and various data processing algorithms, are easily implemented by Finite State Machines (FSMs). State transition graphs and tables are two common methods of graphically representing FSMs, but implementing an FSM from either representation is a time-consuming and tedious task. The goal of this thesis was to develop a Computer Aided Software Engineering (CASE) tool that allows an FSM to be graphically specified and C source code implementing the FSM to be generated from that specification. The CASE tool was developed using the waterfall software life-cycle model, and the artifacts of each phase of the life cycle are provided in the appendices.
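The state-transition-table representation mentioned above lends itself to the table-driven code such a CASE tool generates. A minimal sketch of that style (illustrative Python, not the tool's actual C output; all state and event names here are invented):

```python
# Table-driven finite state machine: (state, event) pairs index into a
# transition table, mirroring the state transition tables described above.
TRANSITIONS = {
    ("idle", "connect"): "connected",
    ("connected", "data"): "connected",
    ("connected", "disconnect"): "idle",
}

def run_fsm(start, events):
    """Feed a sequence of events through the machine; unknown
    (state, event) pairs leave the state unchanged."""
    state = start
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state
```

The generated C code would follow the same pattern with a two-dimensional array indexed by state and event constants.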
The thesis research involved determining the pros and cons of using FSMs and CASE tools to develop software systems, determining the different types of commonly used FSMs, and investigating different FSM implementation methods.

Item: A Framework of the Software Process Based on Object-Oriented Methods (1992-05)
Boyer, Carol; Zalewski, Janusz; Hazlewood, Carol; Hwang, C.J.

The purpose of this study is to investigate the application of object-oriented techniques as a systematic method for describing the software engineering process. The proposed research questions and criteria for evaluating the object-oriented model are: (1) Does the model facilitate a clear understanding of the process activities and their relationships? (2) Does the model provide a well-defined approach to specifying the dynamic and concurrent nature of the software engineering process? (3) Does the model provide various levels of detail? (4) Is the model reusable in various environments? This thesis proposes an object-oriented model of the software development process that unites standardization and flexibility by providing a basic model framework that can be expanded to meet the needs of the project. Simulations are conducted on the object-oriented model, compared to existing models, and evaluated against the criteria above.
This thesis research is based on two primary sources of data: (1) an overview of the current practice described in the literature; and (2) the results of simulations of the object-oriented software process model prototype.

Item: A Java based cost modelling tool: Structure and interfacing (2000-08)
Rahman, Mohammed

No abstract prepared.

Item: A Massively Parallel Exact TSP Solver for Small Problem Sizes (2022-08)
Jerald Xavier, Benila Virgin; Burtscher, Martin; Metsis, Vangelis; Yang, Kecheng

The Traveling Salesman Problem (TSP) is a combinatorial optimization problem tasked with finding the shortest tour that visits each city in a given set exactly once and ends in the starting city. The problem has gained attention among researchers because it is easy to describe yet difficult to solve. TSP has numerous important real-life applications, but its NP-hardness makes it difficult to find an optimal solution even for relatively small problem sizes. The literature describes many heuristic algorithms that solve the problem approximately but only a few exact algorithms. The TSP solver implemented in this study is a GPU-accelerated exact solver for small problem sizes. The goal is to exploit the computing capabilities of modern GPUs to find an optimal solution using simple algorithms: exhaustive search for up to 7 cities and branch and bound for problems of up to 30 cities. The branch and bound algorithm performs an irregular traversal of the search tree, making it challenging to parallelize efficiently, especially on massively parallel GPUs. I implemented the algorithm in CUDA and tested it on GPUs with different compute capabilities. My solver is exact and very fast. It outperforms the CONCORDE and LKH solvers for problems with up to 15 cities.
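The branch-and-bound pruning described above can be sketched as follows (a serial Python illustration of the general technique, not the thesis's CUDA implementation):

```python
import math

def tsp_branch_and_bound(dist):
    """Exact TSP by depth-first branch and bound on a distance matrix.
    Any partial tour already at least as long as the best complete tour
    found so far is pruned, which is what makes the traversal irregular."""
    n = len(dist)
    best = math.inf

    def search(city, visited, length):
        nonlocal best
        if length >= best:
            return  # bound: this subtree cannot improve on the best tour
        if len(visited) == n:
            best = min(best, length + dist[city][0])  # close the tour
            return
        for nxt in range(n):
            if nxt not in visited:
                search(nxt, visited | {nxt}, length + dist[city][nxt])

    search(0, {0}, 0)
    return best
```

A GPU version must distribute subtrees across threads and share the current bound, which is far harder than this serial recursion suggests.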
On a 13-city instance, my solver is 36 and 15 times faster than CONCORDE and LKH, respectively.

Item: A Methodology for Mapping Programming Languages to Programming Problems (2006-08)
Michlowitz, Jason Lawrence; Hazlewood, Carol; Podorozhny, Rodion; Chen, Xiao

Several algorithms that solve different types of problems are implemented, tested, and compared by applying a set of metrics. The results are analyzed using Principal Components Analysis to calculate a Relative Complexity Metric. The results of the study reveal that the programming language does have an effect on the simplicity, speed, and other attributes of an implementation. They also reveal which languages are best suited for a particular type of programming technique, such as recursion.

Item: A Multi-objective Autotuning Framework For The Java Virtual Machine (2016-05)
Saha, Shuvabrata; Qasem, Apan; Ekstrand, Michael; Chen, Xiao

Due to inherent limitations in performance, Java was long not considered a suitable platform for scalable high-performance computing (HPC). The scenario is changing because of the development of frameworks such as Hadoop, Spark, and Fast-MPJ. In spite of the increase in usage, achieving high performance with Java is not trivial: it relies on libraries that provide explicit threads or on runnable-like interfaces for distributed programming. In this thesis, we develop an autotuning framework for the JVM that manages multiple objective functions, including execution time, power consumption, energy, and performance-per-watt. The framework searches the combined space of JIT optimization sequences and different classes of JVM runtime parameters. To discover good configurations more quickly, the framework implements novel heuristic search algorithms, and machine-learning-based pruning techniques are used to reduce the size of the search space.
Evaluation on recommender-system workloads shows that significant improvements in both performance and power can be gained by fine-tuning JVM runtime parameters.

Item: A New Hybrid Learning Algorithm for Drifting Environments (2005-08)
Enumulapally, Anil Kumar; Kaikhah, Khosrow; Hazlewood, Carol; Ali, Moonis

An adaptive algorithm for drifting environments is proposed and tested in simulated environments. Two simple but powerful problem-solving technologies, neural networks and genetic algorithms with online learning, help artificially intelligent agents adapt to a changing environment. Neural networks and genetic algorithms are combined to evolve weights, architecture, and learning rules for the generation of efficient networks, and online learning helps these networks capture the dynamics of a changing environment efficiently. Supervised learning is achieved using a variation of regular backpropagation that works on dynamic random networks. Our algorithm proposes two types of online learning: local online learning, which requires a pre-defined training set, and global online learning, which does not. It is shown that both types of online learning improve the ability of networks to capture subtleties of the varying environments. The algorithm's efficiency is demonstrated using a mine-sweeper application, and different learning technologies are compared. The results establish that online learning within the evolutionary process is the most significant factor for adaptation and is far superior to evolutionary algorithms alone; evolution and learning cooperate to produce excellent results in a short time. Offline learning is observed to increase the average fitness of the whole population. It is also demonstrated that online learning is self-sufficient and can achieve results without any pre-training stage.
When mine sweepers are able to learn online, their performance in the drifting environment is significantly improved.

Item: A Parallel Implementation of A Greedy TSP Algorithm (2020-11)
Gonzalez, Andres; Burtscher, Martin; Peng, Wuxu; Yang, Kecheng

The Traveling Salesman Problem has often been used as an exploration ground for building heuristics that calculate the shortest path of a complete graph traversing every vertex exactly once, which has a number of important practical applications. Since it is an NP-hard problem, many heuristics have been proposed to obtain near-optimal solutions in polynomial time. The heuristic explored in this study is based on Kruskal's algorithm: the edges of the graph are sorted in non-decreasing order, and the smallest edges that meet the eligibility criteria are included in the path until a tour covering all vertices has been constructed. Whether an edge meets the criteria depends on which smaller edges have already been inserted, which makes the algorithm difficult to parallelize. I combined previously published parallelization techniques with new ones and implemented the algorithm in OpenMP and CUDA. The resulting GPU code is very fast, taking just 0.06 seconds to process a graph with 11,849 vertices and 70,193,476 edges. This is approximately 8 times faster than the serial CPU implementation, with a solution quality within 18% of the optimal. Compared to the optimal solver CONCORDE, my code is 1,206,302 times faster.

Item: A Rule-Based Real-Time Scheduler (1992-12)
Elliott, Dale Bardill; Zalewski, Janusz; Hazlewood, Carol; Hwang, C.J.

In this thesis I propose enhancing a real-time scheduling algorithm to better meet the future needs of real-time systems by increasing its decision-making abilities. This enhanced scheduler can apply its additional intelligence to real-time dynamic scheduling, precedence constraints, and transient overloads.
Additionally, this algorithm allows the scheduler to alter the current schedule at the request of a running task, a feature not available in other real-time scheduling algorithms. The scheduler has two components: the Base Scheduler and the Inference Engine. The Base Scheduler produces the first-run task schedule, and the Inference Engine changes it if certain conditions apply. Those conditions, as well as the corresponding changes, are stored in a Rule Base that dictates what the scheduler does in specific situations. This thesis also explores ways of reducing the increased overhead brought about by these additional features.

Item: A Sentence Generator for Natural Language Evaluators (1994-12)
Gandy, Howard Craig; Kaikhah, Khosrow; Hwang, Caneo Jinshong; Ogden, Robert D.

By the turn of the century, it is expected that most computer applications will include a natural language processing component. Both developers and consumers of NLP systems have expressed a need for standard natural language system evaluators, and automated evaluators appear to be the only practical answer to the overwhelming number of NLP systems that have been, are being, and will be produced. The system described within is based on the Benchmark Evaluation Tool [7] and is the first step toward fully automating the evaluation process. This effort was accomplished in two phases. In phase one, we identified a subset of the Benchmark Evaluation Tool for each class of NLP systems. In phase two, we designed and implemented a natural language generation system to generate non-causal, semantically meaningful test sentences. We followed an Object-Oriented Design (OOD) strategy [1] in which all concepts, including semantic and syntactic rules, are defined as objects. Each test sentence is generated as a chain of words satisfying a number of semantic, syntactic, and contextual constraints.
The constraints imposed on the generation process increase dynamically while the sentence is being generated. This strategy guarantees semantic cohesiveness while maintaining syntactic integrity.

Item: A Simulation Framework for Performance Evaluation and Security Research in Multi-Interface Multi-Channel Networks (2010-12)
Kim, Heywoong; Gu, Qijun; Chen, Xiao; Guirguis, Mina S.

In wireless networks, devices can be equipped with multiple interfaces to utilize multiple channels and increase the overall throughput of a network. Various channel assignment protocols have been developed to better utilize multiple channels and interfaces. However, research on channel assignment protocols still lacks a good simulation tool that can contend with their variety of requirements and specifications. This thesis proposes MIMC-SIM, a generic simulation framework for studying channel assignment protocols in multi-interface, multi-channel networks. The MIMC-SIM framework is built in OMNeT++ with INET and implements a new layer between the network layer and the MAC layer. It has a novel structure that supports both generic features and protocol-specific behaviors of channel assignment protocols, and it provides a generic, flexible code structure for implementing them.

Item: A Study of Interprocess Communication and Synchronization Techniques and Their Use in Multiprocessor Operating Systems (1984-07)
Ellis, Marsha H.; Wade, James F.; Borm, Alfred E.; Stimmel, David T.

No abstract prepared.

Item: A Study of Methods to Cluster Network Nodes for a Multi-processor Simulation Environment (1999-05)
Stevens, Ruth Ann; Hall, Gregory; Amon, Tod; Peng, Wuxu

No abstract prepared.

Item: A Study of Microcomputers in Elementary and Middle Schools of Beaumont, Texas: Its Uses, Effects, and Benefits (1991-05)
Cartwright, Tammy Fontenette; Slomka, Jeffrey; Early, Grady; Byrum, David

No abstract prepared.

Item: A Tool for Automatic Suggestions for Irregular GPU Kernel Optimization (2014-12)
Taheri, Saeed; Burtscher, Martin; Qasem, Apan; Zong, Ziliang

Future computing systems, from handhelds all the way to supercomputers, will be more parallel and more heterogeneous than today's systems in order to provide more performance without an increase in power consumption. GPUs are therefore increasingly being used to accelerate general-purpose applications, including applications with data-dependent, irregular memory access patterns and control flow. The growing complexity, non-uniformity, heterogeneity, and parallelism will make these GPGPU-accelerated systems progressively more difficult to program. In the foreseeable future, the vast majority of programmers will no longer be able to extract additional performance or energy savings from next-generation systems because they will no longer possess the expertise necessary to understand and exploit the systems effectively. In this project, the characteristics of GPU codes are quantified and, based on these metrics, different optimization suggestions are made.

Item: A View on Components of Software and Application Programming in Distributed Environments (1992-08)
Wijono, Jeffrey; Hazlewood, Carol

We describe a view of application design in the modern programming environment, in which reusability of software, network-transparent display, and processing, vertically and horizontally, play an important role. We provide a toolkit for creating applications, address reusability, and probe ways of adding to or customizing the functionality of existing applications. The toolkit consists of a transformation library, a communication library, and a control and display library. The transformation library converts the output of applications to a standard form. The user is free to manipulate the data, which is then normally sent to a display process for display.
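A rough sketch of such a transformation step, converting whitespace-separated command output into a standard row/column table (illustrative Python; the thesis toolkit itself is implemented with C and Motif, and this function is an invented stand-in):

```python
def to_table(text):
    """Convert command output whose first line holds column headers
    into a list of row dictionaries, one per subsequent line."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    headers = lines[0].split()
    # Split each data line into at most len(headers) fields so that a
    # trailing command string containing spaces stays in one column.
    return [dict(zip(headers, ln.split(None, len(headers) - 1)))
            for ln in lines[1:]]
```

A display process could then render the resulting rows and columns without knowing which command produced them.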
For example, consider a UNIX command like 'ps -ef' as an application. Its output lists the user ID, process ID, parent process ID, STIME, TTY, time, and command on standard output. The control library is used to invoke the command repeatedly, at a given frequency, for the purpose of monitoring processes. Using the transformation library, the output of this application is converted to a standard form: a table of rows and columns, where a row represents a process and a column represents one of the attributes reported by ps -ef. Using this table, the user program enhances this UNIX command in concert with the display process, which is created using functions from the display library; that library currently consists, as an example, of a few process-independent widgets built with the Motif toolkit. All communication is accomplished using functions from the communication library.

Item: A Work Definition Language (1996-08)
McCann, Robert T.; Martin, Cecil E.; Davis, Wilbon P.; Kaikhah, Khosrow; Jackson, William R., Jr.

All organized human endeavor can be described and modeled graphically through the use of appropriately defined symbols and notations on directed graphs. Such a system of symbols and notations with semantics and syntax is called a work definition language (WDL) [1]. In software development, there are several methods used for describing various views of the system to be automated [2]. In this thesis, it is established that several of these methods produce complementary views of the system and that these views may be easily unified into a consistent whole when a standard presentation system, such as a WDL, is used. Using a WDL, the results of different analysis methods, such as State Transition Diagrams, Control Flow Diagrams, Data Flow Diagrams, and Process Flow Diagrams, can be easily combined into one unified model of a system to be automated.
Entity Relationship Diagrams (E-R Diagrams) are included in this WDL because semantic data models are just another view of the system to be automated [3]. E-R Diagrams depict the logical form of the data identified in the data dictionary, data flows, object models, process models, and control flow diagrams. Because different analysis methods depict different models (views) of the same system and use different presentation schemes, it is very difficult to develop one unified view of the system to be automated. Using a WDL offers a convenient way to present these different views [4].

Notes:
[1] Martin, Cecil, CS5393 course notes.
[2] Pressman, Roger S., Software Engineering: A Practitioner's Approach, 3rd ed., McGraw-Hill, 1992, ch. 11-13.
[3] Booch, Grady, Object-Oriented Design with Applications, Benjamin/Cummings, 1991, ch. 5.
[4] Booch, ch. 5.

Item: Adaptive Single-Pass Compression of Unbounded Integers (2017-12)
Hyatt, Christopher; Tamir, Dan; Yan, Yan; Qasem, Apan

Due to webpage creation, data gathering and storage, and the increasing data needed for machine learning, the amount of data in the world is rapidly growing. Organizations such as Google, Wikipedia, and the National Security Agency (NSA) use techniques to convert all of this data into searchable indexes of references. These indexes are growing as quickly as the data used to create them and are equally a subject for compression. This research explores the ability to dynamically compress data in one pass, which effectively improves latency and throughput. To achieve dynamic single-pass compression, Tunstall coding is combined with two other compression algorithms, the Elias delta code and a variation of Group VarInt. The two resulting pairs are variable-length nibbles with Tunstall (VLNT) and Delta-Tunstall (δ-T). This is the first known attempt at compressing data using these algorithm pairs.
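One half of the δ-T pairing above, the Elias delta code, can be sketched as follows (the standard textbook algorithm, not the thesis's implementation):

```python
def elias_delta(n):
    """Elias delta code for a positive integer, as a bit string.
    The bit-length of n is encoded with an Elias gamma code, then
    n's binary digits follow with their leading 1 omitted."""
    assert n >= 1
    length = n.bit_length()       # L = number of bits in n
    ll = length.bit_length()      # number of bits needed to write L
    gamma_of_length = "0" * (ll - 1) + format(length, "b")
    return gamma_of_length + format(n, "b")[1:]
```

Because shorter integers get shorter codes and no upper bound on n is assumed, the code suits the unbounded-integer setting the thesis targets.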
These compression algorithms are applied to a number of different datasets. A synthetic dataset and a Wikipedia dataset are used to test the algorithms' ability to compress integers. The synthetic dataset is created using several probability distribution functions (PDFs), such as the geometric and Poisson distributions, while the Wikipedia dataset is acquired from actual inverted indexes for a number of different Wikipedia search terms. VLNT and δ-T are also used to compress the members of the Silesia benchmark dataset. VLNT and δ-T show promise as good platforms for data compression and are recommended for future research focusing on single-pass compression of unbounded datasets.

Item: An Abstraction Layer for Controlling Heterogeneous Mobile Cyber-Physical Systems (2013-05)
Hanz, Trevor R.; Guirguis, Mina S.; Ngu, Anne H.H.; Lu, Yijuan

Mobile Cyber-Physical Systems (CPSs) vary widely in the underlying hardware technologies and capabilities of their mobile devices, which makes developing portable high-level mobile CPS applications difficult. To this end, this thesis presents a framework that abstracts the representation of the mobile CPS in a manner that allows the system to fully utilize the capabilities of the hardware while providing a simplified interface to the end user. In addition, the framework incorporates a physics engine that handles different physics models for different robots to ensure better control. We assess the behavior of the proposed framework through real experiments conducted in our mobile CPS lab with various iRobot Creates.
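The abstraction idea in the last entry, a uniform interface over heterogeneous robot hardware, might be sketched as follows (hypothetical class and method names, not the thesis's actual API):

```python
from abc import ABC, abstractmethod

class Robot(ABC):
    """Uniform interface: applications call move(); each hardware
    backend translates the abstract command for its own device."""
    @abstractmethod
    def move(self, velocity, heading):
        ...

class CreateRobot(Robot):
    # A real iRobot Create backend would emit serial drive opcodes;
    # here the command is merely recorded for illustration.
    def __init__(self):
        self.log = []

    def move(self, velocity, heading):
        self.log.append(("drive", velocity, heading))
        return "drive"

def dispatch(robots, velocity, heading):
    """Application-level call that works for any Robot subclass,
    regardless of the underlying hardware."""
    return [r.move(velocity, heading) for r in robots]
```

Adding a new robot type then means writing one subclass, leaving application code untouched.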