Dr Ben Tagger
COMP1007 Principles of Programming:
- Problem Classes (programming techniques, object-oriented paradigms)
- Lab Classes (Groovy, OO methods, Prolog)

COMP1008 Object-Oriented Programming:
- Problem Classes (programming techniques, object-oriented paradigms)
- Lab Classes (Java, OO methods)
Winner of Best Paper in Conference
Alejandra González Beltrán, Ben Tagger, Anthony Finkelstein. "Ontology-based Queries over Cancer Data". In proceedings of Semantic Web Applications and Tools for Life Sciences (SWAT4LS 2010), Berlin, Germany, December 10, 2010. [Show pdf]
Posters
Alejandra González Beltrán, Ben Tagger, Anthony Finkelstein. "Querying distributed cancer databases using domain concepts". In the UCL Computational Biology Symposium, Tuesday 15th February 2011. [Show pdf]
Alejandra González Beltrán, Ben Tagger, Anthony Finkelstein. "Ontology-based queries for the caGrid infrastructure". In "Building a Collaborative Biomedical Network", caBIG Annual Meeting, September 13-15, 2010, Washington, D.C., U.S.A.
Thesis Abstract: "A Framework for Managing Changing Biological Experimentation"
Sponsoring company: NIMR – National Institute for Medical Research
There is no point expending time and effort developing a model if it is based on data that is out of date. Many models require large amounts of data from a variety of heterogeneous sources, and this data is subject to frequent and unannounced changes. It may only be possible to know that data has fallen out of date by reconstructing the model with the new data, but this leads to further problems: how and when does the data change, and when does the model need to be rebuilt? At best, the model will need to be continually rebuilt in a desperate attempt to remain current. At worst, the model will be producing erroneous results.
The recent advent of automated and semi-automated data processing and analysis tools in the biological sciences has brought about a rapid expansion of publicly available data. Many problems arise in the attempt to deal with this magnitude of data; some have received more attention than others. One significant problem is that data within these publicly available databases is subject to change in an unannounced and unpredictable manner. Large amounts of complex data from multiple, heterogeneous sources are obtained and integrated using a variety of tools. These data and tools are also subject to frequent change, much like the biological data. Reconciling these changes, coupled with the interdisciplinary nature of in silico biological experimentation, presents a significant problem.
We present the ExperimentBuilder, an application that records both the current and previous states of an experimental environment. Both the data and the metadata about an experiment are recorded, and the current and previous versions of each experimental component are maintained within the ExperimentBuilder. When any one of these components changes, the ExperimentBuilder estimates not only the impact within that specific experiment, but also traces the impact throughout the entire experimental environment. This is achieved with the use of keyword profiles, a heuristic tool for estimating the content of an experimental component. We can compare one experimental component to another regardless of their type and content, and build a network of inter-component relationships for the entire environment. Ultimately, we can present the impact of an update as a complete cost to the entire environment, in order to make an informed decision about whether to recalculate our results.
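The keyword-profile idea above can be sketched in outline. This is an illustrative sketch only, not the ExperimentBuilder's actual implementation: the function names, the bag-of-words profiles, the cosine-similarity comparison, and the overlap threshold are all assumptions made for the example.

```python
from collections import Counter
from math import sqrt

def keyword_profile(text):
    """A toy keyword profile: a bag of lowercased terms with frequencies."""
    return Counter(text.lower().split())

def similarity(p, q):
    """Cosine similarity between two keyword profiles (0.0 to 1.0)."""
    shared = set(p) & set(q)
    dot = sum(p[t] * q[t] for t in shared)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def total_impact(updated_text, components, threshold=0.2):
    """Estimate the cost of an update across the whole environment:
    sum the similarity of every component whose profile overlaps the
    updated component's profile above the (assumed) threshold."""
    up = keyword_profile(updated_text)
    cost = 0.0
    for name, text in components.items():
        s = similarity(up, keyword_profile(text))
        if s >= threshold:
            cost += s
    return cost
```

Because profiles are compared on keyword content alone, any two components can be related regardless of their type, which is what allows an inter-component network and an aggregate cost to be built for the entire environment.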
The Second Year Report - [download] The structure of the following report differs from the criteria set out for the second year report. We have included the whole literature review (rather than a list of the reviewed topics) in order to provide a more comprehensive overview of the research to date. Section one contains the problem statement. The entire literature review is contained in section two; those wishing to see a list of the reviewed material can refer to the table of contents on the preceding pages. Section three provides conclusions from the literature review, and sections four and five describe the proposed contribution and the scope of the thesis. Section six describes the research that has been carried out so far and the work that remains to be completed in order to fulfil the contribution. Finally, we provide a brief discussion of the validation of the research and a timetable of the remaining activities.
First Year Report - [download] This document provides a report of the major research to be undertaken during the course of the EngD programme. It also aims to include the requirements within the first year report form and, therefore, will include the form (as an appendix) with the relevant parts indicated on that form. Section two aims to provide an introduction and overview of the chosen research problem. A preliminary literature review is contained in section three with a brief conclusion of the review at the end of the section. Section four will detail the contribution and scope of the proposed research during the EngD programme and section five will contain the proposed research activities needed to complete the proposed research. Section six will describe how the research is to be validated and section seven provides a timetable of the planned research activities.
Addition to the EngD First Year Report - [download] The purpose of this document is to provide an additional report to the one submitted on the 16th of November 2005, addressing matters arising from that report and from the meeting held on the 5th of December 2005. Upon reading this report, three things should be clear: the problem to be addressed, the contribution to address the problem, and the validation to show that the problem has been successfully addressed.
Literature Review - [download] This document aims to give an idea of the current state of research in biological data management and the versioning of biological data. It is not intended to provide an exhaustive list of relevant publications; rather, my hope is to represent the main achievements in the areas appropriate to my intended research. The writing of this document will naturally be an ongoing process alongside the continuing research, so it should be updated as needed to keep in touch with the current state of research (including, hopefully, my own work). Several relevant areas of research have been identified. These include (but may not be limited to):
- Biological Data Sources

Software Requirements Specification for Grid3D Application - [download] The need for a method of visualising biological data has been identified. Currently, many groups offer many different tools for visualising data, ranging from tabular to three-dimensional graphical views. However, a need has been identified for a tool that can display data in a three-dimensional environment, allowing the user to investigate the data and easily see its various attributes at a glance.

An Enquiry into the Extraction of Tacit Knowledge - [download] The extraction of knowledge from a person or group of people is becoming a growing factor in how business is conducted in the 21st century. It is no longer satisfactory simply to employ people who can do the job; we must know how they do it and understand the processes behind their work. Transparency is the key both to maximising our productivity and to safeguarding our sustainability. Arguably the most useful knowledge that one can capture is tacit knowledge. This is, broadly speaking, knowledge that resides at the subconscious level of the human mind; simply put, it is knowledge that we know, though we may not be aware that we know it. Predictably, it is also the most difficult to capture accurately. This essay provides a brief enquiry into the nature of tacit knowledge and some aspects of its extraction. I begin with a brief description of what is meant by tacit knowledge. I then endeavour to illustrate some of the difficulties in extracting tacit knowledge and, indeed, why we would want to extract it in the first place. I then provide a brief, non-exhaustive survey of some methods currently used in the extraction of tacit knowledge, and conclude with some thoughts for the future.
Reengineering the TCL Compiler - [download], [presentation] With the completion of the mapping of the human genome, the incentive to provide methods for understanding the function of genomes has never been greater. It was predicted that the mapping of the human genome would herald a new era for the life sciences; unfortunately, comparatively little has come to fruition, owing to a lack of understanding of the genome. The aim of functional genomics is to establish methods of deriving gene functionality, given the information from structural genomics. The robot scientist aims to automate part of the laboratory work of establishing metabolic pathways. Its purpose is to provide a learning system that can discriminate between competing experiments, select the 'best' ones, perform the experiments, analyse the results and then repeat the process. The role of the TCL Job Compiler is to convert the chosen experiments into machine operations that can be used by the Biomek Workstation (the robot that physically performs the experiments). This project aims to completely reengineer and refactor the current TCL Job Compiler, providing greater readability, portability, versatility and maintainability.
An Introduction and Guide to Successfully Implementing a LIMS (Laboratory Information Management System) - [download] This paper introduces the technology of LIMS (Laboratory Information Management Systems). LIMS have been around for over twenty years, but still remain difficult to implement successfully. The paper provides a brief introduction to LIMS, followed by a description of some of the existing technologies available to users of today's LIMS. A LIMS project will rarely fail through technical restrictions, but rather through human inadequacies. The paper describes some of the pitfalls of LIMS implementation and some of the most likely causes of a failed LIMS project. It then gives a generalised approach to developing a successful LIMS implementation and, finally, a look to the future, addressing some of the needs of the LIMS industry.
This page last modified: 19 August, 2005 by Graham Knight
Computer Science Department - University College London - Gower Street - London - WC1E 6BT - +44 (0)20 7679 7214 - Copyright © 1999-2005 UCL