My research interests lie within the areas of data science, machine learning and predictive analytics. I am also interested in reinforcement learning and multiagent systems (distributed decision making, decentralised coordination, scalability), and their application to complex real-world problems.
Below is a list of projects that I have worked on or am currently working on, along with a short description and representative publications.
Real-world congestion problems (e.g. traffic congestion) are typically very complex and large-scale. Multiagent reinforcement learning (MARL) is a promising candidate for dealing with this emerging complexity by providing an autonomous and distributed solution to these problems. However, three limiting factors affect the deployability of MARL approaches to congestion problems: learning time, scalability and decentralised coordination, i.e. no communication between the learning agents. In this work we introduce Resource Abstraction, an approach that addresses these challenges by partitioning the available resources into abstract groups. This abstraction creates new reward functions that provide a more informative signal to the learning agents and aid the coordination amongst them. We show that the system using Resource Abstraction significantly improves the learning speed and scalability, and achieves the highest possible or near-highest joint performance for large-scale congestion problems in scenarios involving up to 1000 reinforcement learning agents.
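To illustrate the idea, the following is a minimal sketch of a group-based reward signal for a congestion problem. It is not the paper's exact reward definition: the function name, the contiguous grouping of resources and the "spare capacity" reward are all illustrative assumptions.

```python
from collections import Counter

def resource_rewards(choices, capacities, num_groups):
    """Illustrative group-level reward for a congestion problem.

    choices    -- choices[i] is the resource picked by agent i
    capacities -- capacities[r] is the capacity of resource r
    num_groups -- number of abstract groups (resources are grouped
                  contiguously here purely for illustration)

    Instead of rewarding each agent on its single resource, agents are
    rewarded on the congestion of the abstract group containing their
    chosen resource, giving a more informative shared signal.
    """
    counts = Counter(choices)
    group_size = len(capacities) // num_groups
    rewards = []
    for r in choices:
        group = r // group_size
        members = range(group * group_size, (group + 1) * group_size)
        load = sum(counts[m] for m in members)       # agents in this group
        cap = sum(capacities[m] for m in members)    # total group capacity
        rewards.append(cap - load)  # positive while the group has spare capacity
    return rewards
```

For example, with four unit-capacity resources split into two groups, three agents crowding the first group all receive a negative signal, while a lone agent in the second group is rewarded, nudging agents to spread out.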
K. Malialis, S. Devlin and D. Kudenko. Resource Abstraction for Reinforcement Learning in Multiagent Congestion Problems. In Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2016. [pdf] (Acceptance rate 24.9%)
The increasing adoption of technologies and the exponential growth of networks have made the area of information technology an integral part of our lives, where network security plays a vital role. One of the most serious threats in the current Internet is posed by distributed denial of service (DDoS) attacks, which target the availability of the victim system. Such an attack is designed to exhaust a server's resources or congest a network's infrastructure, and therefore renders the victim incapable of providing services to its legitimate users or customers. To tackle the distributed nature of these attacks, a distributed and coordinated defence mechanism is necessary, where many defensive nodes, across different locations, cooperate in order to stop or reduce the flood. This work investigates the applicability of distributed reinforcement learning to intrusion response, specifically, DDoS response. We propose Multiagent Router Throttling, a novel agent-based distributed response to the DDoS problem, where multiple reinforcement learning agents are installed on a set of routers and learn to rate-limit or throttle traffic towards a victim server. One of the novel characteristics of the proposed approach is its decentralised architecture: it provides a decentralised coordinated response to the DDoS problem, and is thus resilient to the attacks themselves. We apply task decomposition, coordinated team rewards and reward shaping to address the scalability challenge. The scalability of the proposed system is successfully demonstrated in experiments involving up to 1000 reinforcement learning agents. The significant improvements in scalability and learning speed lay the foundations for a potential real-world deployment.
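A minimal sketch of the setup follows. It is illustrative only, not the paper's formulation: the action set, the stateless bandit-style learner and the team-reward shape are all assumptions chosen to keep the example short.

```python
import random

class ThrottleAgent:
    """One learning agent per upstream router. It learns what fraction of
    traffic towards the victim to drop. A stateless epsilon-greedy
    value learner stands in here for the full RL agent."""
    ACTIONS = [0.0, 0.25, 0.5, 0.75, 0.9]  # fraction of traffic dropped

    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = {a: 0.0 for a in self.ACTIONS}  # value estimate per action
        self.epsilon, self.alpha = epsilon, alpha

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)       # explore
        return max(self.q, key=self.q.get)           # exploit

    def update(self, action, reward):
        # incremental update of the action-value estimate
        self.q[action] += self.alpha * (reward - self.q[action])

def team_reward(loads, throttles, capacity):
    """Illustrative coordinated team reward shared by all agents:
    penalise overloading the victim, but reward letting traffic through
    so agents do not simply throttle everything."""
    arrived = sum(load * (1 - t) for load, t in zip(loads, throttles))
    if arrived > capacity:
        return -1.0              # victim server overloaded
    return arrived / capacity    # more surviving traffic is better
```

In a training loop, each agent would pick a throttle rate for its router, the aggregate traffic reaching the victim would be computed, and all agents would update on the shared team reward, so coordination emerges without any communication between agents.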
K. Malialis, S. Devlin and D. Kudenko. Distributed Reinforcement Learning for Adaptive and Robust Network Intrusion Response. In Connection Science, Volume 27, Issue 3, July 2015, Pages 234-252. [link]
K. Malialis and D. Kudenko. Distributed Response to Network Intrusions Using Multiagent Reinforcement Learning. In Engineering Applications of Artificial Intelligence, Volume 41, May 2015, Pages 270-284. [link] (Best Student Paper Award)