Puppy Linux is a distribution that provides users with a simple environment, and can be used to recover files (as long as they aren't on NTFS partitions). This project suggests that you extend the functionality of Puppy Linux to recover NTFS files.
This project will look at MIT research at the intersection of vision and graphics, where a team created an algorithm that offers its users a new way of looking at the world. The technique, which can amplify both movement and colour, can be used to monitor everything from the breathing of a sleeping infant to the pulse of a hospital patient. Its creators, led by computer scientist William Freeman, call it "Eulerian Video Magnification".
They've posted the source code online. It's free to use for non-commercial purposes.
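As a toy illustration of the core idea only (not the MIT algorithm itself, which decomposes real video spatially and temporally), amplifying a tiny periodic variation around a mean can be sketched in one dimension; all the numbers here are invented:

```python
import math

# Toy 1-D sketch: a "pixel" brightness carries a tiny periodic variation
# (e.g. a pulse); isolating the variation from the mean and scaling it
# makes the invisible change visible.
signal = [0.5 + 0.001 * math.sin(2 * math.pi * t / 30) for t in range(300)]
mean = sum(signal) / len(signal)
alpha = 50  # amplification factor
amplified = [mean + alpha * (s - mean) for s in signal]
print(max(signal) - min(signal))        # tiny original variation
print(max(amplified) - min(amplified))  # roughly 50x larger
```

The real technique applies this kind of amplification per spatial frequency band, per frame, which is what makes it robust on video.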
Psychogeography is defined as "the study of the precise laws and specific effects of the geographical environment, consciously organized or not, on the emotions and behavior of individuals." One typical approach is to draw a large circle at random on a map and travel along that circle, commenting on what you see, what you hear, and how it feels (e.g. look at the metal drains: are there dates on them? Look at the shapes of the buildings, and how the telephone and electricity wires snake around them, etc.). This project seeks to develop a random (circular) route generator, and to create a means by which the experiences can be recorded, photographed, etc. and automatically formed into a webpage.
The Shakespeare Apocrypha is the name given to a group of plays (e.g. Sir Thomas More, Cardenio, and The Birth of Merlin) that have sometimes been attributed to William Shakespeare, but whose attribution is questionable for various reasons. Using statistical stylometric metrics, e.g. Zipf analysis, sentence length, sentence structure, vocabulary, tense, infrequent n-gram occurrences, active vs. passive voice, etc., and developing other suitable metrics as part of the project, similarities will be measured between the canonical plays and the apocryphal ones.
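As a toy illustration of the kind of metric involved (the two sample texts below are just placeholders; a real study would run over the full play texts), average sentence length and most frequent words can be computed like this:

```python
import re
from collections import Counter

# A sketch of two starter stylometric metrics: average sentence length
# and the most frequent words of a text.
def metrics(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "top_words": [w for w, _ in Counter(words).most_common(3)],
    }

canonical = "To be, or not to be. That is the question."
apocryphal = "What is a man, if his chief good be but to sleep and feed?"
print(metrics(canonical))
print(metrics(apocryphal))
```

Comparing distributions of such metrics between canonical and questioned texts, rather than single values, is where the statistical work of the project lies.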
This project will focus particularly on kinematics (measurement of the movement of the body in space), using simple computer vision techniques to identify polynomial splines and create a "stickman" figure which can be overlaid beside the original actor using augmented reality techniques.
This project focuses on developing a computer simulation of a panopticon in the style of the Grangegorman simulation. A panopticon is a prison architecture designed by English philosopher Jeremy Bentham (and later cited by French philosopher Michel Foucault to explore the nature of power) that allows jailers to observe all prisoners without the prisoners being able to tell whether they are being watched.
In 1947 Eric Arthur Blair wrote a novel called "The Last Man in Europe"; the title was later changed to 1984 and the author's name to George Orwell. It became one of the most acclaimed novels of the 20th century.
One of the themes of 1984 is that the government is simplifying the English language (both vocabulary and grammar) to remove any words or possible constructs which could describe the ideas of freedom and rebellion. This new language is called Newspeak and is described as being "the only language in the world whose vocabulary gets smaller every year". In an appendix to the novel, Orwell included an essay explaining the basic principles of the language.
The objective of this project is to develop a text filter that will take in normal text and convert it into Newspeak. An initial system will simply change the words in the text to their Newspeak equivalents, e.g.
"bad", "poor", "lame" all become "ungood"
"child", "children", "boy", "girl" become "young citizens"
"quite", "rather", "kind of", "kinda" become "plus"
Like these programs;
From there, the next stage is to investigate the more fundamental translation process, whereby the grammar and structure of the text are changed to the style outlined by Orwell.
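The initial word-substitution stage might be sketched like this; the dictionary below is tiny and hypothetical (a real system would need a far larger mapping built from Orwell's appendix, plus phrase handling for multi-word forms like "kind of"):

```python
import re

# Minimal first-pass Newspeak filter: word-for-word substitution.
NEWSPEAK = {
    "bad": "ungood", "poor": "ungood", "lame": "ungood",
    "child": "young citizen", "children": "young citizens",
    "boy": "young citizen", "girl": "young citizen",
    "quite": "plus", "rather": "plus", "kinda": "plus",
}

def to_newspeak(text):
    # Replace each alphabetic word with its Newspeak equivalent, if any
    def swap(match):
        word = match.group(0)
        return NEWSPEAK.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", swap, text)

print(to_newspeak("The poor children were quite bad."))
# -> The ungood young citizens were plus ungood.
```

The grammatical stage is much harder than this lookup, which is exactly what makes it the interesting part of the project.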
The Bobby Accessibility checker was the de facto standard tool for increasing the accessibility of a website (using the guidelines established by the World Wide Web Consortium's (W3C) Web Accessibility Initiative (WAI), as well as the Section 508 guidelines from the Architectural and Transportation Barriers Compliance Board (Access Board) of the U.S. Federal Government). Since it was officially closed in February 2008, no new standard accessibility checker has emerged. This project will compare accessibility tools on a variety of types of webpages, e.g. the nine types of webpages (five good and four bad).
In 1934, Paul Otlet sketched out plans for a global network of computers (or “electric telescopes”) that would allow people to search and browse through millions of interlinked documents, images, audio and video files. He described how people would use the devices to send messages to one another, share files and even congregate in online social networks. He called the whole thing a “réseau,” which might be translated as “network” — or arguably, “web.”
Otlet’s vision hinged on the idea of a networked machine that joined documents using symbolic links. While that notion may seem obvious today, in 1934 it marked a conceptual breakthrough. “The hyperlink is one of the most underappreciated inventions of the last century,” Mr. Kelly said. “It will go down with radio in the pantheon of great inventions.”
These links were non-static and had many interesting features; this project centres on developing a simulation of how Otlet's Web would have worked.
A student may wish to focus solely on a review of relevant literature related to any of the following topics. The review will need to be very comprehensive and demonstrate deep critical reflection on the topic, and will need to produce an artefact (e.g. this is an artefact on Web Accessibility), indicating how it could be used, who would use it, etc. Note: since there is no experimental element to these topics, this is a risky approach to your Masters; you will have to be willing to read at least 75 papers (and some books) in three months if you choose to do this.
- Forensic Linguistics
- Acousto-optic computing
- Genetic computing
- Quantum computing
- Defensive Programming
- Literate Programming
- Cloud and Grid Computing
- Swarm Intelligence
- Simulated Annealing
- Inference Attacks
- Genetic Algorithms
- Utility Computing
- Google Hacking
- Positive and Negative Databases
- Dimensional Databases
- Semantic Web
- Edge-Detection in Computer Vision
- Comprehensive review of Software Engineering Methodologies
- A Large-Scale Study of Web Password Habits
- Web 3.0
This is an image processing project that sees if you can write a program to automatically detect Wally in a picture like the one above. This might seem fairly simple at first: just a simple template match. But it's not as easy as all that; sometimes the pictures have characters that look a lot like Wally but with a slightly different jumper or hat, so it's going to have to be something like a Laplacian of Gaussian or a Hessian affine detector, or whatever. Once you can consistently find Wally using one technique, perhaps you could try a few different techniques and see which one performs best under which criteria.
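The naive baseline mentioned above, exhaustive template matching, can be sketched in a few lines; the "scene" and the embedded patch here are random synthetic data purely for illustration:

```python
import numpy as np

# Exhaustive sum-of-squared-differences (SSD) template matching:
# slide the template over every position and keep the best score.
rng = np.random.default_rng(0)
scene = rng.random((100, 100))
template = rng.random((12, 12))
scene[40:52, 60:72] = template  # paste the patch at a known location

th, tw = template.shape
best, best_pos = np.inf, None
for y in range(scene.shape[0] - th + 1):
    for x in range(scene.shape[1] - tw + 1):
        ssd = np.sum((scene[y:y+th, x:x+tw] - template) ** 2)
        if ssd < best:
            best, best_pos = ssd, (y, x)
print(best_pos)  # (40, 60): the paste location is recovered exactly
```

This exact-match baseline breaks as soon as the target is scaled, rotated, or redrawn with a slightly different jumper, which is why the project points toward scale- and affine-invariant detectors.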
The Open Source Shakespeare site provides the complete texts of Shakespeare's plays, as well as a concordance, a keyword search, statistics, a character search, and an ordering of plays by genre, by number of lines, and chronologically.
Could we pick an Irish playwright (out of copyright, obviously) and do the same, e.g. Dion Boucicault, Edward Plunkett (Lord Dunsany), Lady Gregory, Richard Brinsley Sheridan, or W.B. Yeats? Or another Tudor playwright, e.g. Christopher Marlowe, William Rowley, or Ben Jonson? Or even some classical playwrights, e.g. Aeschylus, Sophocles, or Euripides?
von Clausewitz (the great military strategist) suggested that it is vital to grasp the fundamentals of any situation in the "blink of an eye" (coup d'œil). In a military context, the astute tactician must immediately grasp a range of implications and begin to anticipate plausible and appropriate courses of action; they do this by understanding the class of problem they are facing. There is an analogous requirement in computer security, whereby the people protecting a system need to be able to understand the type of attack and defend against it. To help achieve this, it would be helpful to have a website (wiki) that developed an easy-to-understand classification for cyberweapons, with a visualisation, that was searchable in a variety of ways and easy to add to.
The ELIZA program, developed by Joseph Weizenbaum and published in 1966, was capable of engaging in conversations with users, framed in the style of a psychotherapist. The program applied very simple natural language processing techniques, yet it was very successful and was taken seriously by many users, who would open their hearts to it. It's about time to bring it into the 21st century: let's use more sophisticated AI techniques, see how much better we can make ELIZA, and evaluate it in a Turing Test style assessment (e.g. the Loebner Prize).
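To see what the starting point looks like, here is a minimal pattern-and-response sketch in the ELIZA style; the rules below are illustrative only (Weizenbaum's original DOCTOR script had far more patterns and reassembly rules):

```python
import random
import re

# A handful of ELIZA-style rules: a regex pattern and candidate
# responses, with {0} filled from the matched group.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
]
DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

rng = random.Random(0)

def respond(utterance):
    cleaned = utterance.lower().rstrip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return rng.choice(responses).format(*match.groups())
    return rng.choice(DEFAULTS)

print(respond("I am feeling sad"))
print(respond("I need a holiday"))
```

The project's challenge is to replace this shallow pattern matching with modern NLP while keeping the conversational quality that made the original so engaging.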
This project is designed to create an interactive version of the James Bond and the OSI Reference Model analogy. The analogy explains the OSI Reference Model by imagining James Bond in a seven-storey building: he gets a message on the 7th floor, has it translated, encrypted and miniaturized on the 6th floor, has security checks applied on the 5th floor, etc. The objective would be to create an interactive version where you can dictate the message to be sent, decide on the encryption methods, etc. It would be somewhat like the Serious Gordon tool.
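The floors-as-layers idea can be sketched as a toy encapsulation model; the header labels below are hypothetical stand-ins for each floor's processing:

```python
# The message is wrapped on the way down the building and unwrapped
# on the way back up, mirroring OSI encapsulation/decapsulation.
LAYERS = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def send(message):
    for layer in LAYERS:                 # 7th floor down to the 1st
        message = f"[{layer}]{message}[/{layer}]"
    return message

def receive(frame):
    for layer in reversed(LAYERS):       # 1st floor back up to the 7th
        frame = frame.removeprefix(f"[{layer}]").removesuffix(f"[/{layer}]")
    return frame

wire = send("For Your Eyes Only")
print(wire[:40], "...")
print(receive(wire))  # -> For Your Eyes Only
```

In the interactive version, each wrapping step would become a visible action on its floor (translation, encryption, miniaturisation, and so on).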
There is a famous (probably apocryphal) story that in the 1980s the US Pentagon funded the development of an artificial neural network that would recognise photographs of tanks. To do this they took 100 photographs of tanks, and then took 100 photographs of fields with no tanks, and trained the artificial neural network on these photographs. When testing the system, it was discovered that the system did not appear to be recognising tanks at all. There was puzzlement until someone figured out that all of the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. Thus the network actually learned to recognise clouds ;-)
Even if this story isn't true, it is a perfect illustration of the biggest problem with neural networks: it is virtually impossible to analyse and understand what they are learning. One can't tell if a net has memorised its inputs, or is 'cheating' in some other way. This project proposes that we test the story by creating various sets of 100 images of a tank (well, maybe a toy tank) on a cloudy day and 100 images of no tank on a sunny day. The first set will be 10% sky and 90% ground, the next 20% sky and 80% ground, the next 30% sky and 70% ground, the next 40% sky and 60% ground, and the final set 50% sky and 50% ground.
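The confound can be demonstrated with synthetic data before any real photographs are taken; the brightness values and noise level below are invented for illustration:

```python
import numpy as np

# Every "tank" image is cloudy and every "no-tank" image is sunny, so
# a rule that only looks at mean brightness separates the two classes
# perfectly -- without ever "seeing" a tank.
rng = np.random.default_rng(1)

def make_image(cloudy, sky_frac, size=32):
    img = np.full((size, size), 0.4)                      # ground
    img[:int(size * sky_frac)] = 0.5 if cloudy else 0.9   # sky
    return img + rng.normal(0, 0.05, (size, size))

tanks = [make_image(cloudy=True, sky_frac=0.3) for _ in range(100)]
empty = [make_image(cloudy=False, sky_frac=0.3) for _ in range(100)]

threshold = 0.5 * (np.mean(tanks) + np.mean(empty))
acc = (sum(im.mean() < threshold for im in tanks) +
       sum(im.mean() >= threshold for im in empty)) / 200
print(acc)  # 1.0 -- perfect separation from the lighting confound alone
```

Varying `sky_frac` across the five proposed sets would show how the strength of this shortcut changes with the amount of sky in frame.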
It is much easier to solve a specific programming problem than a general one, since a specific problem has so many more constraints. For example, if a robot arm has to pick up a wallet off the floor, that's an achievable system; developing a robot arm system that picks up any object off the floor is much more difficult. Good program design suggests that we should focus on the general problem as well as the specific one, and in fact solutions that lean too heavily on the specific constraints are considered "hacks". This project suggests we take a sample problem, e.g. finding an object in an image, use it to investigate how easy it is to solve specifically when the object is known, then develop a general object detector, and compare and contrast the two.
The objective of this project is to make the teaching of statistics more exciting: how do we make the following slides more interactive, more enjoyable, etc.? One idea is an automatic exercise generator: the confidence interval exercises all have a very similar format, so allow the teacher to input a new description and values, and the tool will work out a solution. Another tool would show the normal curve, display standard distributions, and calculate areas under the curve; another would show the transformation of the Z curve into the T curve. There could also be a tool for selecting the correct statistical test for a given dataset.
My notes that need jazzing up are here;
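The exercise-generator idea might be sketched like this; the question template, parameter ranges, and rounding below are all assumptions:

```python
import math
import random

# Generate a confidence-interval exercise and its worked solution.
def ci_exercise(seed=None):
    rng = random.Random(seed)
    n = rng.choice([25, 36, 49, 64, 100])
    mean = round(rng.uniform(20, 200), 1)
    sd = round(rng.uniform(2, 20), 1)
    question = (f"A sample of {n} measurements has mean {mean} and "
                f"standard deviation {sd}. Find the 95% confidence "
                f"interval for the population mean.")
    margin = 1.96 * sd / math.sqrt(n)   # z = 1.96 for a 95% interval
    solution = (round(mean - margin, 2), round(mean + margin, 2))
    return question, solution

question, (low, high) = ci_exercise(seed=7)
print(question)
print(f"Answer: ({low}, {high})")
```

A teacher-facing version would let the lecturer supply the scenario text and value ranges, generating endless fresh variants of the same exercise format.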
Locative media combines Global Positioning Systems (GPS) with mobile technology or augmented reality; for example, you could be walking along wearing AR goggles, and when you reach a specific location an augmented image would appear. This project looks at combining AR and GPS technologies.
The Turing machine was described by Alan Turing in 1937 as a theoretical device that manipulates symbols contained on a strip of tape. It is a very interesting theoretical and teaching tool, and a number of simulators have been created in the past, but most are very ugly and limited in functionality. This project would set out to create a "pretty" looking Universal Turing Machine inspired by 1940s computers, Steampunk, and any other nice-looking machines. For example, I would see the input as a white paper tape of infinite length with a width of 15cm; the machine stamps (and removes) white cards (10cm × 10cm) with black symbols onto the paper tape. The stamping mechanism would look something like this video (without the need for someone to put their hand in ;-)
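The simulation engine underneath all the visual polish can be very small; here is a minimal, deliberately plain sketch of the kind of core the pretty machine would wrap:

```python
# A minimal Turing machine simulator with a sparse tape (blank = "_").
def run_tm(tape, rules, state="start", pos=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, cells[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example program: flip every bit, halting at the first blank cell
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm("1011", flip))  # -> 0100
```

Each `rules` lookup corresponds to one physical action of the imagined machine: read the card under the head, stamp a new one, and step the tape left or right.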
Isaac Asimov's Three Laws of Robotics were introduced in his 1942 short story "Runaround", the Laws state the following:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This project proposes you build a robot (with as many different sensors as possible) using a Lego MindStorms kit and explore how you would implement these laws. The robot would be fairly humanoid in shape, with arms and maybe legs, perhaps like Mek-Quake, Ro-Jaws or Pneuman. Which sensors work best with which laws? How do you implement "harm"? How do you implement "existence"? etc. You would have to read "The Emperor's New Mind", and may have to look at Murphy and Woods' "The Three Laws of Responsible Robotics".
Bristol Robotics Laboratory have looked at the first law in detail, and io9 spoke to a range of experts on all three laws.
This is an image processing project that looks at making it easier to judge whether the ball has crossed the line in a game of soccer. Using image processing techniques, you are required to remove the goal net from a collection of images to expose the goal line and the ball. Following this, you will look at doing the same for videos.
This project requires that you obtain a dataset of soccer games and do some data mining and data analytics on it, looking at some of the following questions;
- How does playing home/away affect results?
- If a player is sent off, how likely is it for the other team to score in the next 10 mins? 20 mins? 30 mins?
- If a new player is brought on, how likely is it for the team to score in the next 10 mins? 20 mins? 30 mins?
- Does number of corners per side predict the winner? the final score?
- Does number of off-sides per side predict the winner? the final score?
- Does the number of fouls committed by your team and the other team predict the winner? the final score?
- How does the system affect the outcome? e.g. 4-4-2, 4-3-3, 3-4-3, 4-5-1, 3-5-2.
- What about playing a team with a different system?
The key objective of this research is to see how these factors work both individually and in combination.
This project is primarily a graphics project, with a requirement for an understanding of basic computer architecture (and maybe you could throw in a few mini-games if you wanted). The project is to develop a system whereby, when someone puts on VR goggles, they will be inside a computer and will be able to navigate around the motherboard, jumping over resistors and dodging around capacitors. If they want, they can dive into the integrated circuits and run along the silicon substrates like a massive maze-complex, or fly over to the hard disk and see the disk-head reader interacting with the magnetic fields that create the 1s and 0s. The sounds of the environment are also very important for this project; if you were shrunk to miniature size, the various sounds from different parts of the computer could be harmonic and remarkable. Also, once you get near the fan it may blow you around a bit, so be aware of that. Finally, you might have a narration explaining each part of the computer as you travel around it.
Creating visualisations in the knowledge domain is a very interesting and fun challenge; with the advent of tools like Adobe Flex, visualisations are a bit easier to create, but the hard part is conveying a clear meaning with them.
One visualisation that I would be very interested in seeing created is a tool that allows students to project a course pathway through a modularised course. So let's say you want to do a degree part-time over six years; give the student a visual representation of which modules they will be doing in each of the six years.
Another project would concern the use of word clouds to visualise tagging, and to measure conceptual drift.
Another project concerns the visualisation of how Wikipedia articles change over time; this would be based on a previous student's work, available here (please note the tool will take a minute to load);
This project is supposed to explore what cyberspace is about. What is it for? Is it a real thing? Where does it exist? We know it existed before the Web, before the internet, before the telephone, and before the telegraph. How can we map it? How can we understand it? And what applications can we create to make it more real and understandable? The project needs to create an application of some kind to help bring together all the conclusions.
There are a number of topics that I would like to build eLearning tools for; the kind of thing I am thinking of is as follows: http://www.comp.dit.ie/dgordon/Toolkit/Tools/SeeSort.exe You would be required to build a tool like this and create a website to accompany it, and you would have to be able to justify your design choices based on some teaching or learning theory. The topics I am thinking of are;
- Edge Detection Algorithms
- Fourier Transformations
- Array Searching
- Linked Lists
- Concurrency Control
- Backup and Recovery logs
- File Management and types
- Process Management
- Memory Management
There would be a web site to go along with each tool, and each tool could be added to the NDLR.
The great pre-Socratic philosopher Heraclitus is said to have written one of the most important philosophical books that mankind ever produced. Unfortunately, no copies of this book survive; what does survive is about 100 fragments from the book, quoted in other sources. Today's philosophers each have their own views on how these fragments fit together and what the themes of Heraclitus' book were. This project seeks to create a database of these quotes that is queryable on the basis of a word or theme and presents the relevant quotes. Additionally, this tool should allow a user to group quotes together based on their own views of how the fragments link together into chapters, and to explore the quotes (a la http://tcup.currentform.com/explore.php). It would require a student willing to study 2,500-year-old philosophy, with good database and user interface design skills, and an ability to discuss issues such as information representation, data storage, etc.
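One possible shape for the database is sketched below; the fragment texts are loose paraphrases used purely for illustration, and the theme labels are invented:

```python
import sqlite3

# Fragments table plus a many-to-many themes table, so one fragment
# can carry several themes and a theme can cover several fragments.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fragments (id INTEGER PRIMARY KEY, text TEXT)")
conn.execute("CREATE TABLE themes (fragment_id INTEGER, theme TEXT)")
conn.executemany("INSERT INTO fragments VALUES (?, ?)", [
    (1, "Everything flows and nothing stays."),
    (2, "You cannot step twice into the same river."),
    (3, "War is the father of all things."),
])
conn.executemany("INSERT INTO themes VALUES (?, ?)", [
    (1, "flux"), (2, "flux"), (2, "river"), (3, "strife"),
])

# Query: all fragments tagged with a given theme
rows = conn.execute(
    "SELECT f.text FROM fragments f "
    "JOIN themes t ON f.id = t.fragment_id WHERE t.theme = ?",
    ("flux",)).fetchall()
for (text,) in rows:
    print(text)
```

The user-defined chapter groupings would be a third table of the same shape, keyed by user, so each reader can maintain their own reconstruction of the book.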
This project is to investigate and develop a series of Learning Objects (units of educational content delivered via the internet), using either the IMS Content Packaging or the SCORM (Sharable Content Object Reference Model) standard. As well as having the standard learning object parameters, the learning objects for this project will be aware of how learning style can affect the means of presentation.
This project is to investigate and develop a model of swarm intelligence. The basic architecture of a swarm is the simulation of a collection of concurrently interacting agents; with this architecture, you can implement a large variety of agent-based models.
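The flavour of such a model can be shown in a deliberately tiny sketch: agents on a line, each nudged toward the swarm centroid with some random jitter (the cohesion and noise values are arbitrary). Even this one rule produces a collective behaviour, namely the swarm contracting:

```python
import random

random.seed(0)
positions = [random.uniform(-10, 10) for _ in range(20)]

def step(positions, cohesion=0.1, noise=0.5):
    # Each agent moves a fraction of the way toward the swarm centre,
    # plus a random perturbation.
    centre = sum(positions) / len(positions)
    return [p + cohesion * (centre - p) + random.uniform(-noise, noise)
            for p in positions]

spread_before = max(positions) - min(positions)
for _ in range(50):
    positions = step(positions)
spread_after = max(positions) - min(positions)
print(spread_before, "->", spread_after)  # the swarm pulls together
```

Fuller models (boids, ant colony optimisation, particle swarms) add separation and alignment rules, 2-D or 3-D space, and task-specific fitness, but they follow this same agents-plus-local-rules architecture.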
Ubuntu is a free and open source Linux-based operating system, meaning users are free to run, copy, distribute, study, change and improve the software under the terms of the GNU GPL licence. Bugs are constantly being discovered in Ubuntu, as they are in any operating system. The objective of this project would be for each group to identify 10 really good bugs and develop solutions for them.
Developing graphical simulations of ancient computers using OpenGL. Suggested simulations would include;
- An abacus, an astrolabe and a slide rule (1 project)
- Antikythera Mechanism (1 project)
- Jacquard Loom (1 project)
- Schickard Clock (1 project)
- Pascalina (1 project)
- Curta Calculator (1 project)
This project would be to implement a parser for the Z Specification language, using the Haskell programming language. The parser should be compliant with the official Z standard as far as possible.