Topic: Artificial Intelligence Sight Loss Assistant
Supervisor: Dr Kevin Swingler
The Artificial Intelligence Sight Loss Assistant (AISLA) project aims to use state-of-the-art computer vision and artificial intelligence to develop personal assistant technology for people with sight loss. Topics within the project include computer vision, natural language processing and human-AI interfaces. A PhD in AI and computer vision can lead to an academic career, or to industry roles such as building self-driving cars, designing digital assistants or working in security. Companies like Google, Amazon and Facebook are at the forefront of commercial AI.
Topic: Efficient search techniques for large-scale global optimisation problems in the real world
Supervisor: Dr Sandy Brownlee
Optimisation problems become really difficult once they become "large-scale": allocating thousands of skilled engineers to jobs, for example, or prioritising where to spend public money on improving the energy efficiency of thousands of homes. This project will look at how to learn the structure of these problems, allowing us to intelligently divide them up so they can be solved efficiently.
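One way to learn problem structure is to test whether pairs of variables interact, then split the problem along the resulting groups. The sketch below is illustrative only (the toy objective `f` and all helper names are invented for this example); it uses a perturbation test in the spirit of differential grouping: variables interact if perturbing one changes the effect of perturbing the other.

```python
def f(x):
    # Toy objective: variables 0 and 1 interact; variable 2 is separable.
    return (x[0] + x[1]) ** 2 + (x[2] - 3) ** 2

def _bump(x, i, eps):
    y = list(x)
    y[i] += eps
    return y

def interacts(f, i, j, n, eps=1.0):
    """True if the effect of perturbing x_i depends on x_j."""
    base = [0.0] * n
    d1 = f(_bump(base, i, eps)) - f(base)
    shifted = _bump(base, j, eps)
    d2 = f(_bump(shifted, i, eps)) - f(shifted)
    return abs(d1 - d2) > 1e-9

def group_variables(f, n):
    """Greedily merge interacting variables into groups."""
    groups = []
    for i in range(n):
        for g in groups:
            if any(interacts(f, i, j, n) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Each group can then be optimised independently, turning one large problem into several much smaller ones.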
Topic: Search-based software improvement for green computing
Supervisor: Dr Sandy Brownlee
Reducing computational energy consumption is important at both extremes (mobile devices and datacentres), and in many cases there is a trade-off between functionality and energy consumption. Yet improving existing code is difficult: it is easy to break functionality, and energy measurements are noisy. This project will explore how search-based approaches such as genetic algorithms can improve the efficiency of code while accounting for these difficulties: saving the planet and putting off your next recharge.
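To make the search idea concrete, here is a minimal, purely illustrative genetic algorithm: candidate "programs" are bitstrings, the `noisy_energy` function stands in for a real energy measurement (its noise model is an assumption of this sketch), and repeated measurements are averaged to cope with that noise.

```python
import random

random.seed(0)

def noisy_energy(bits):
    """Stand-in for an energy measurement: the true cost is the
    number of 1-bits, observed with Gaussian measurement noise."""
    return sum(bits) + random.gauss(0, 0.5)

def fitness(bits, repeats=5):
    """Average repeated noisy measurements for a stable estimate."""
    return sum(noisy_energy(bits) for _ in range(repeats)) / repeats

def mutate(bits, rate=0.1):
    # Flip each bit with a small probability.
    return [b ^ (random.random() < rate) for b in bits]

def genetic_search(n_bits=20, pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)      # lower energy is better
        parents = scored[:pop_size // 2]       # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(pop, key=fitness)

best = genetic_search()
```

Genetic improvement of real software replaces the bitstrings with edits to source code and the noise-averaging with careful repeated energy measurements, but the selection-and-mutation loop is the same.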
Topic: Understanding and Visualising the Landscape of Multi-objective Optimisation Problems
Supervisor: Prof. Gabriela Ochoa
In commerce, industry and science, optimisation is a crosscutting, ubiquitous activity. Optimisation problems arise in real-world situations where resources are constrained and multiple criteria must be balanced, such as in logistics, manufacturing, transportation, energy, healthcare, food production and biotechnology. Most real-world optimisation problems are inherently multi-objective. For example, when evaluating potential solutions, cost or price is one of the main criteria, and some measure of quality is another, often in conflict with cost. The analysis of multi-objective optimisation landscapes is thus of paramount importance, yet it is not well developed. This project will develop and apply network-based models of fitness landscapes and search trajectories to multi-objective optimisation problems. The ultimate goal is to provide a better understanding of algorithms and problems, and to demonstrate that better knowledge leads to better optimisation across a number of domains.
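The cost-versus-quality conflict mentioned above can be made concrete with Pareto dominance, the basic building block of multi-objective analysis. The snippet below is a minimal sketch (the solution values are invented): a solution is kept only if no other solution is at least as good in every objective and strictly better in one.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and
    strictly better in at least one (minimising both)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical solutions as (cost, defect_rate) pairs: cheap but
# poor quality trades off against expensive but good quality.
solutions = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 5), (7, 2)]
front = pareto_front(solutions)
```

Network-based landscape models then study how search moves between such non-dominated sets, rather than towards a single optimum.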
Topic: Machine Learning approaches to tackle Cyber Attacks
Supervisor: Dr Mario Kolberg
The range of internet services has increased dramatically in recent years; at the same time, cyber-attacks have grown in both number and sophistication, endangering user trust in and uptake of such services. Researchers therefore need to develop defences that keep pace with these attacks, which evolve as attackers change their approaches.
Security measures such as firewalls are put in place as the first line of network defence, but attackers are still able to exploit vulnerabilities in these networks. Intrusion Detection Systems (IDS) have shown potential as a successful countermeasure against such attacks. However, many open issues remain, such as their efficiency and effectiveness in the presence of large volumes of network traffic. Several IDS have been proposed that can differentiate between attacks and benign network traffic and raise an alarm when a potential threat is detected. To be applicable in modern networks, however, these systems must analyse large quantities of data in real time, and the larger the data volume, the more irrelevant information it contains. One solution may be to extract key features and apply Machine Learning (ML) techniques to detect attacks. This project will investigate using ML approaches to detect intrusion attacks at runtime.
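As a toy illustration of the feature-extraction idea, here is a minimal nearest-centroid classifier over hand-picked flow features. The feature names and all numbers are invented for this sketch and are not drawn from any real dataset; real IDS work would use far richer features and stronger models.

```python
# Toy labelled flows: [packets/sec, mean packet size (bytes),
# distinct ports contacted]. Values are illustrative only.
benign = [[20, 800, 3], [35, 650, 4], [25, 700, 2]]
attack = [[900, 60, 120], [1200, 40, 200], [800, 80, 90]]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

centres = {"benign": centroid(benign), "attack": centroid(attack)}

def classify(flow):
    """Label a flow by its nearest class centroid."""
    return min(centres, key=lambda label: distance_sq(flow, centres[label]))
```

The point of the sketch is the pipeline shape: raw traffic is reduced to a few informative features, and a learned model, rather than a hand-written rule, draws the decision boundary.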
Topic: Bio-inspired Peer-to-Peer Overlay algorithms
Supervisor: Dr Mario Kolberg
Peer-to-Peer (P2P) overlay networks are self-organising, self-managing and hugely scalable networks that need no centralised server component. Drawing inspiration from biological processes to construct and maintain P2P overlays has attracted some research interest to date. The majority of related solutions focus on providing efficient resource discovery mechanisms using swarm intelligence techniques. Such techniques have proven performance benefits for routing and scheduling in dynamic networks, and they also have inherent support for adaptability and robustness in the face of node failures. By contrast, apart from a very few examples, such techniques have barely been exploited for topology management. This project will investigate bio-inspired solutions for topology management, addressing some of the techniques' challenges, such as their relatively high computational and messaging complexity.
Topic: The application of cognitive computational methods to enhance vocational rehabilitation
Supervisor: Dr Sæmundur Haraldsson
Vocational Rehabilitation (VR) is a field within healthcare which aims to assist long-term sick-listed and unemployed individuals to enter the workforce or education. VR has yet to fully embrace cognitive computer systems, including Artificial Intelligence (AI) approaches. As such it offers numerous avenues of research for inquisitive minds, e.g., predicting future regional demand for VR, optimising VR pathways for maximum probability of success, and many more. Potential PhD candidates would collaborate with international partners of the ADAPT consortium to exploit state-of-the-art AI and Data Science methods to improve decision making and planning in VR. The projects would form the foundation for the field of VR informatics, with international real-world impact on people's health and wellbeing as well as on current societal issues.
Topic: Predicting the Performance of Backtracking While Backtracking
Supervisor: Patrick Maier
Backtracking is a generic algorithm for computing optimal solutions of many combinatorial optimisation problems such as travelling salesman or vehicle routing. Unfortunately, the time a backtracking solver requires to find an optimal solution, to prove optimality, or to prove infeasibility is very hard to predict, which limits the practicality of such solvers for real-world problems.
Research in algorithms has mainly focused on specific problem classes and on identifying characteristic features of hard problem instances. Instead, this project aims to mine a generic backtracking solver for performance data at runtime (that is, while solving a particular problem instance) and to build statistical models that can be used to estimate the future performance of the solver on the current problem. Interesting estimates include: How likely is it that the current solution is optimal? Assuming the current solution is optimal, how long will it take to prove optimality? Can the search be parallelised, and if so, how many CPUs would be required to get the answer in one hour?
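The kind of runtime data the project proposes to mine can be illustrated with a tiny branch-and-bound knapsack solver that records search statistics as it runs. This is a minimal sketch (the instance and the simple sum-of-remaining-values bound are chosen for brevity, not realism).

```python
def solve(values, weights, capacity):
    """0/1 knapsack by backtracking with pruning, recording when
    the incumbent (best solution so far) improves."""
    n = len(values)
    stats = {"nodes": 0, "incumbent_updates": []}
    best = [0]

    def search(i, value, room):
        stats["nodes"] += 1
        if value > best[0]:
            best[0] = value
            stats["incumbent_updates"].append(stats["nodes"])
        if i == n:
            return
        # Prune: even taking every remaining item cannot beat the incumbent.
        if value + sum(values[i:]) <= best[0]:
            return
        if weights[i] <= room:                      # branch: take item i
            search(i + 1, value + values[i], room - weights[i])
        search(i + 1, value, room)                  # branch: skip item i

    search(0, 0, capacity)
    return best[0], stats

optimum, stats = solve([60, 100, 120], [10, 20, 30], 50)
```

Statistics such as the node count and the gaps between incumbent updates are exactly the signals a statistical model could consume to estimate, mid-search, how much work remains.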
Topic: Computational Modelling of Biological Systems
Supervisor: Carron Shankland
We can understand the world through modelling it, manipulating the model to incorporate new features, and analysing that model. For example, model disease spread: what happens when we add a vaccine, or quarantine, or mutation of the disease? The model can help inform policy decisions such as we’ve seen with the recent Covid outbreak. Or we might model a tumour cell, and the effects of different kinds of radiotherapy on that cell to develop better and safer treatment schedules. I have data for these systems, and am keen to supervise students using a range of modelling techniques, perhaps incorporating the use of evolutionary computation to refine the model.
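As a flavour of the disease-spread example, here is a minimal SIR (susceptible-infected-recovered) simulation; the parameter values are illustrative assumptions, and "adding a vaccine" is modelled simply by immunising part of the initial population.

```python
def simulate_sir(s0, i0, beta=0.3, gamma=0.1, days=300, dt=1.0):
    """Forward-Euler SIR model; returns the fraction ever infected.
    beta = transmission rate, gamma = recovery rate."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i

no_vaccine = simulate_sir(s0=0.99, i0=0.01)
vaccinated = simulate_sir(s0=0.49, i0=0.01)  # half the population immunised
```

Manipulating the model in this way, by changing initial conditions or adding compartments for quarantine or mutation, is what lets it inform policy questions before any intervention is tried in the real world.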
Topic: Deepfake and Fake news detection
Due to the growing presence of social media and social networking sites, people are digitally connected more than ever. This also empowers citizens to express their views on a multitude of topics, from government policies and everyday events to simply sharing their emotions. However, the growing influence exerted by fake news propaganda is now a cause for concern for all walks of life. Election results are argued on some occasions to have been manipulated through the circulation of unfounded and sometimes doctored stories on social media. In addition to fake text, there has been huge growth in AI-based image/media manipulation algorithms, commonly known as 'deepfakes'. Near-realistic fake videos are being generated that contribute significantly to spreading misinformation. This project will research new algorithms that combine deep learning based Natural Language Processing (NLP) and Computer Vision (CV) techniques to detect fake news and prevent misinformation from spreading.
Topic: The multimedia blockchain
This project proposes to develop a blockchain-based media distribution framework to a) enable trust, privacy and security in the media consumption chain; and b) provide a transparent and trusted media distribution ecosystem for the creative sector, empowering content creators, publishers, consumers and digital archives. One key aspect of the proposed framework is a transparent, decentralised blockchain architecture for media transactions, together with provisions for media integrity through novel signal processing techniques that can address challenges posed by recent advances in media manipulation such as deepfakes.
Topic: Image/video auto-captioning
Image auto-captioning is an emerging area with many applications. It is easy to capture and share images, extract the geo-location or even identify the objects in them; however, it is very challenging to train a computer to see an image and to understand and describe its content. This project will research robust algorithms that can generate natural language descriptions of images/videos and of their regions/segments, exploring the inter-modal correspondences between language and visual data. The work will contribute to the development of multimodal approaches that combine deep learning based Natural Language Processing (NLP) and Computer Vision (CV) techniques.
Topic: Non-Linear Deep Learning
Supervisor: Keiller Nogueira
Over the past decade, Convolutional Networks (ConvNets) have renewed the perspectives of the research and industrial communities. Although this deep learning technique may be composed of multiple layers, its core operation is the convolution, a linear filtering process. Easy and fast to implement, convolutions play a major role not only in ConvNets but in digital image processing and analysis as a whole, and are effective for many tasks. Aside from convolutions, however, researchers have also proposed and developed non-linear filters, such as the operators provided by mathematical morphology. Even though these are generally not as computationally efficient as linear filters, they are able to capture different patterns and tackle problems distinct from those addressed by convolutions. This project will research the combination of deep learning and non-linear filters, mainly morphological operations, in order to create a new network that can be used for different applications and tasks.
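The linear/non-linear distinction can be shown in a few lines. This sketch (pure Python on a 1-D signal, for clarity) contrasts a box convolution, which is a weighted sum, with erosion and dilation, which replace each sample by a local minimum or maximum: on an isolated spike, the mean filter spreads the energy while erosion removes it entirely.

```python
def sliding(signal, size, combine):
    """Apply `combine` over a sliding window (edges are truncated)."""
    half = size // 2
    return [combine(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

def box_convolve(signal, size=3):   # linear: windowed mean
    return sliding(signal, size, lambda w: sum(w) / len(w))

def erode(signal, size=3):          # non-linear: local minimum
    return sliding(signal, size, min)

def dilate(signal, size=3):         # non-linear: local maximum
    return sliding(signal, size, max)

signal = [0, 0, 5, 0, 0]            # a single bright spike
```

In morphological networks, the min/max in these operators would replace (or complement) the multiply-accumulate at the heart of a convolutional layer.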
Topic: Small Data Learning
Supervisor: Keiller Nogueira
The recent impressive results of deep learning methods for computer vision applications brought fresh air to the research and industrial communities. Although extremely important, deep learning has a relevant drawback: it needs a lot of labelled data in order to learn patterns. Some domains, however, do not usually have large amounts of labelled data available, which in turn makes such techniques infeasible. This project will research strategies to exploit deep learning better and more efficiently using few annotated samples.
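One common small-data strategy (among several, such as transfer learning and few-shot methods) is augmentation: generating extra labelled samples via label-preserving transforms. This is a toy sketch on 2x2 "images" using flips; real augmentation pipelines use far richer transforms.

```python
def hflip(img):
    """Mirror an image (list of rows) left-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top-bottom."""
    return img[::-1]

def augment(samples):
    """Expand a labelled dataset with flipped copies of each image,
    keeping the original label (the transforms preserve it)."""
    out = []
    for img, label in samples:
        out.extend([(img, label), (hflip(img), label), (vflip(img), label)])
    return out

tiny = [([[1, 0], [0, 0]], "corner")]
bigger = augment(tiny)   # three labelled samples from one
```

The transforms must genuinely preserve the label for the task at hand; a flip is safe for "is there a corner?" but would not be for, say, reading text.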