The Distributed Intelligence Augmentation Lab (DIAL) is in the Department of Mechanical and Manufacturing Engineering at the University of Calgary. Our vision is to create a world where humans and intelligent systems fluidly coexist to solve engineering challenges. We focus on decentralized systems, where no central authority directly controls all system components. Our approach builds on algorithmic game theory, combined with machine learning, network science, and experiments. Our research applies to human-robot systems (smart factories), cloud-based manufacturing systems, decentralized design systems (intelligent design assistants), and complex networked systems (air transportation, smart grid).

Research Overview

Complex systems today require the participation of thousands of stakeholders. As a result, decision-making autonomy and control are decentralized. Conventional approaches fail in such systems because individual components have different and often conflicting objectives, so a centralized algorithm cannot assume that components will behave as prescribed.

Our work leverages algorithmic game theory to steer systems toward desirable outcomes even when agents act in their own self-interest. Our lab realizes this by developing the following capabilities:

  1. Model: We model system dynamics using theory, simulation, and experiments. Consider the evolutionary dynamics of complex networked systems: by modeling the edge-linking probability with a discrete-games framework, we predicted the evolution of network topology with high accuracy (theoretical/simulation). Another example is a decentralized design system, where we conducted human-subject studies using a web-based aircraft design studio to understand information dynamics in decentralized design (experimental).
  2. Reverse engineer: Given the system dynamics modeled in step 1, we reverse engineer the rules of the game that drive the system toward a targeted state. For example, in a complex networked system, policymakers can design policies that steer network evolution toward a targeted structure by evaluating the effect of candidate policies (a minimal sketch of both steps follows this list).
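
To make these two steps concrete, here is a minimal sketch in Python. Everything in it, including the degree-based payoff, the logit acceptance rule, and the subsidy parameter standing in for a policy, is an illustrative assumption rather than our actual models; it only shows how an edge-linking probability derived from a discrete game can be simulated (step 1) and how changing a policy parameter changes the topology that emerges (step 2).

```python
import math
import random

def link_probability(payoff, temperature=1.0):
    """Logit choice: the probability that a self-interested node accepts
    a proposed link, given its payoff from forming it."""
    return 1.0 / (1.0 + math.exp(-payoff / temperature))

def simulate_network(n=30, steps=500, benefit=0.3, cost=1.0, subsidy=0.0, seed=0):
    """Step 1 (model): at each step a random node pair plays a discrete
    edge-formation game. Each node's payoff is a degree-based benefit
    minus a policy-adjusted link cost, and a link needs mutual consent."""
    rng = random.Random(seed)
    edges, degree = set(), [0] * n
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        pair = (min(i, j), max(i, j))
        if pair in edges:
            continue
        p_i = link_probability(benefit * (1 + degree[j]) - (cost - subsidy))
        p_j = link_probability(benefit * (1 + degree[i]) - (cost - subsidy))
        if rng.random() < p_i and rng.random() < p_j:
            edges.add(pair)
            degree[i] += 1
            degree[j] += 1
    return edges

# Step 2 (reverse engineer): evaluate how a candidate policy (here, a link
# subsidy that lowers the effective link cost) changes the resulting topology.
print(len(simulate_network(subsidy=0.0)), len(simulate_network(subsidy=0.5)))
```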

It is not always possible to reverse engineer a system theoretically using algorithmic game theory. For example, consider a smart factory where humans and collaborative robots (cobots) work side by side. Here, only one set of agents is strategic (the self-interested humans); the robots are not self-interested by default. The existing algorithmic game theory literature deals with cases where all agents are self-interested and does not address such hybrid strategic engineered systems. Our lab integrates reinforcement learning with mechanism design to enable computational modeling and reverse engineering of such hybrid strategic systems.
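
As a minimal illustration of why this coupling matters, the sketch below pairs a reinforcement-learning cobot with a human who best-responds strategically to its behavior; the bonus paid per unit of output plays the role of the mechanism a designer could tune. The production function, cost terms, and parameter values are assumptions chosen purely for exposition, not our actual models.

```python
import random

ASSIST_LEVELS = [0.0, 0.5, 1.0]   # cobot action: level of assistance
EFFORT_LEVELS = [0.0, 0.5, 1.0]   # human's strategic choice of effort
BONUS = 1.2                       # mechanism parameter: per-unit output bonus paid to the human

def output(effort, assist):
    """Assumed production function: assistance adds output but also
    partially substitutes for human effort."""
    return effort * (1 - 0.5 * assist) + assist

def human_best_response(assist):
    """A self-interested human picks the effort level that maximizes
    bonus * output minus a quadratic effort cost."""
    return max(EFFORT_LEVELS, key=lambda e: BONUS * output(e, assist) - 1.5 * e ** 2)

def train_cobot(episodes=2000, epsilon=0.1, alpha=0.1, seed=0):
    """Epsilon-greedy bandit: the cobot learns which assistance level
    maximizes system throughput (minus its own operating cost), given
    that the human responds strategically to whatever it does."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ASSIST_LEVELS}
    for _ in range(episodes):
        a = rng.choice(ASSIST_LEVELS) if rng.random() < epsilon else max(q, key=q.get)
        e = human_best_response(a)                      # strategic human
        reward = output(e, a) - 0.4 * a + rng.gauss(0, 0.05)
        q[a] += alpha * (reward - q[a])                 # incremental value update
    return q

# In this toy setting, the learned values show that full assistance is not
# optimal: the human rationally withdraws effort when the cobot does everything.
print(train_cobot())
```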

Our research offers the potential to answer several open questions, including:

  • How much autonomy should robots have in human-robot systems? Are competitive or cooperative robots more effective?
  • What are the features of intelligent assistants that augment the performance of human designers?
  • How can resources be allocated effectively in smart factories and cloud-based design and manufacturing, given the conflicting objectives of machine owners and designers?
  • Is generative design effective in design assistants?

Our lab envisions creating intelligent systems that not only augment the potential of humans but also operate fluidly under decentralized control.