Distributed Autonomous Vehicle Routing

The vehicle routing problem (VRP) is one of the most studied combinatorial optimization problems; it is concerned with the optimal design of routes to be used by a fleet of vehicles to service a set of customers. With the growing popularity of autonomous vehicles (e.g., driverless cars and unmanned aerial systems), there is a pressing need to revisit this problem in the context of decentralized multi-agent systems. We are developing distributed algorithms that attempt to optimize the global cost of routes for a set of autonomous vehicles.
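
As an informal illustration only, the sketch below shows the kind of global objective such a fleet would jointly minimize: the total travel cost over all vehicle routes. The names (route_cost, global_cost) and the Euclidean metric are made up for exposition; this is the cost function being targeted, not one of our algorithms.

```python
# Minimal sketch (illustrative, not our algorithm): the global objective a
# fleet of autonomous vehicles would jointly minimize, i.e., the total travel
# cost of all routes. Names and the Euclidean metric are assumptions.
from math import dist  # Euclidean distance (Python 3.8+)

def route_cost(depot, route):
    """Cost of one vehicle's route: depot -> customers in order -> depot."""
    stops = [depot] + list(route) + [depot]
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

def global_cost(depot, routes):
    """Global cost that a distributed routing algorithm attempts to minimize."""
    return sum(route_cost(depot, r) for r in routes)

# Example: two vehicles serving four customers from a common depot.
depot = (0.0, 0.0)
routes = [[(1.0, 1.0), (2.0, 0.0)], [(-1.0, 2.0), (-2.0, -1.0)]]
print(global_cost(depot, routes))
```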


Resource-aware Runtime Monitoring

Runtime monitoring and enforcement of desirable properties is an effective technique for ensuring correctness at execution time. Runtime monitoring is particularly useful in large systems, where exhaustive analysis is not practical and conventional testing may fail to identify complex corner cases. The challenging problem in this area is to implement resource-efficient runtime monitors that do not interfere with the normal execution of the program under inspection. We are conducting research on the automated design and development of runtime monitors for distributed real-time systems that (1) must enforce certain security and privacy policies and (2) take into consideration resource constraints (e.g., power, time budget, CPU cycles, and memory), while effectively exploiting the heterogeneous nature of today's computing platforms, such as many-core GPUs and multi-core processors.
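
To make the setting concrete, here is a minimal sketch of a time-triggered monitor: a separate thread samples the program's state at a fixed polling period and checks a safety predicate. The class name, period, and callbacks are illustrative assumptions, not our actual monitoring framework.

```python
# Minimal sketch of a time-triggered monitor: a separate thread samples the
# monitored program's state at a fixed polling period (the resource budget)
# and checks a safety predicate. All names here are illustrative assumptions.
import threading
import time

class TimeTriggeredMonitor:
    def __init__(self, read_state, safety_check, period_s=0.05):
        self.read_state = read_state      # callable returning a state snapshot
        self.safety_check = safety_check  # callable: state -> bool
        self.period_s = period_s          # fixed polling period
        self.violations = []
        self._stop = threading.Event()

    def _run(self):
        while not self._stop.is_set():
            state = self.read_state()
            if not self.safety_check(state):
                self.violations.append(state)
            time.sleep(self.period_s)     # yield the CPU until the next sample

    def start(self):
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```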


Model-based Implementation of Distributed Real-time Applications

Model-based software development advocates a system design flow structured as a chain of steps that starts from a high-level specification and leads to an implementation on a given execution platform. It involves methods and tools for progressively deriving the implementation by making adequate design choices and step-by-step transformations that terminate when a correct-by-construction implementation is obtained. We are developing a novel framework for automated, correct-by-construction implementation of parallel/distributed real-time applications from an abstract model specified in terms of a set of heterogeneous components, synchronization primitives, and priority rules for scheduling purposes. Our approach is based on algorithmic transformation techniques: these algorithms take as input a model of system processes and generate as output distributed and/or multi-threaded code, while ensuring that the execution of the code is observationally equivalent to the input abstract model and, hence, that all functional properties of the high-level model are preserved.
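
The toy sketch below conveys the flavor of such a transformation under simplifying assumptions: an abstract component is a set of guarded actions over shared state, and the "generated" multi-threaded code runs each component in its own thread, firing an enabled action atomically under a lock so that every concrete execution is also an execution of the abstract model. The Component/Action encoding is invented for exposition and is not our actual code generator.

```python
# Illustrative sketch: guarded-action components mapped to threads that fire
# enabled actions atomically. Not our actual framework; names are assumptions.
import threading

class Action:
    def __init__(self, guard, effect):
        self.guard, self.effect = guard, effect

class Component:
    def __init__(self, name, actions):
        self.name, self.actions = name, actions

def run_component(comp, state, lock, steps=100):
    """'Generated' thread body: repeatedly fire one enabled action atomically."""
    for _ in range(steps):
        with lock:
            for act in comp.actions:
                if act.guard(state):
                    act.effect(state)
                    break

# Example model: a producer and a consumer sharing a bounded counter.
state = {"items": 0}
producer = Component("producer", [Action(lambda s: s["items"] < 5,
                                         lambda s: s.__setitem__("items", s["items"] + 1))])
consumer = Component("consumer", [Action(lambda s: s["items"] > 0,
                                         lambda s: s.__setitem__("items", s["items"] - 1))])
lock = threading.Lock()
threads = [threading.Thread(target=run_component, args=(c, state, lock))
           for c in (producer, consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(state)
```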

Rigorous Performance Evaluation

The state of the art in evaluating and improving the performance of parallel and distributed systems ranges over a wide spectrum, from analytical performance analysis methods to empirical approaches. However, these approaches are either infeasible to automate or provide insights that cannot necessarily be generalized. We have developed a novel technique based on probabilistic state exploration that can rigorously compute the expected mean value of important metrics, such as the speedup factor of parallel algorithms and the fault recovery time of distributed fault-tolerant protocols.
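
As a toy illustration of computing an expected metric from a probabilistic state space, the sketch below solves the standard first-passage-time equations of a small, made-up Markov chain to obtain the expected number of steps until recovery. It shows the kind of quantity our technique computes, not the technique itself.

```python
# Illustrative sketch: expected number of steps to reach the "recovered" state
# of a small discrete-time Markov chain, via fixed-point iteration on the
# first-passage-time equations. The three-state chain is a made-up example.

def expected_steps(transitions, target, states, iters=10_000):
    """Iteratively solve E[s] = 1 + sum_t P(s,t) * E[t], with E[target] = 0."""
    exp = {s: 0.0 for s in states}
    for _ in range(iters):
        exp = {s: 0.0 if s == target
               else 1.0 + sum(p * exp[t] for t, p in transitions[s].items())
               for s in states}
    return exp

states = ["ok", "faulty", "recovered"]
transitions = {
    "ok":        {"ok": 0.9, "faulty": 0.1},
    "faulty":    {"faulty": 0.5, "recovered": 0.5},
    "recovered": {"recovered": 1.0},
}
print(expected_steps(transitions, "recovered", states))  # e.g., 12 steps from "ok"
```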

Automated Program Repair

The repair problem is as follows. Let P be a program, Σ be a logical specification that P satisfies, and Π be a logical property that P does not satisfy. The repair problem is to devise an algorithm that generates a program P' such that P' satisfies both Σ and Π. Program repair is interesting because it addresses several major and frequently occurring problems in computing systems: (1) incomplete specification, (2) change of environment, (3) change of platform, and (4) change of fault or threat model. My research group is actively developing repair techniques in the context of adding fault-tolerance, security, timing constraints, and certain performance measures.
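
Stated compactly in notation (a restatement of the paragraph above, nothing more):

```latex
% Compact restatement of the repair problem defined in the text.
\[
\text{Given } P \models \Sigma \;\text{ and }\; P \not\models \Pi,
\quad \text{find } P' \;\text{ such that }\; P' \models \Sigma \,\wedge\, P' \models \Pi .
\]
```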

Tools

RiTHM

The tool RiTHM (Runtime Time-triggered Heterogeneous Monitoring) takes as input a C program under inspection and a set of LTL properties, and generates an instrumented C program that is verified at run time by a time-triggered monitor. The output C program can be instrumented in two ways: (1) it is monitored in a time-triggered fashion with a fixed polling period, or (2) the monitor is augmented with PID or fuzzy controllers that ensure minimum variation in the monitor's polling period while maximizing memory utilization. In both cases, the monitor is invoked in a time-triggered fashion, and efficient instrumentation ensures that the states of the program can be reconstructed at each invocation. The monitor's verification decision procedure is sound and complete, and it takes advantage of many-core GPU technology to speed up monitoring and reduce runtime overhead.
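
The sketch below illustrates the controller idea behind the second mode under assumed details: a PID controller nudges the polling period so that an instrumentation buffer stays near a target utilization. The gains, setpoint, and buffer abstraction are hypothetical and are not RiTHM's actual implementation.

```python
# Illustrative PID-controlled polling period: if the monitoring buffer is
# fuller than a target utilization, the period shrinks (poll sooner); if it is
# emptier, the period grows. Gains and setpoint are made-up assumptions.
class PIDPollingController:
    def __init__(self, target_util=0.8, kp=0.5, ki=0.05, kd=0.1,
                 min_period=0.001, max_period=0.5):
        self.target_util = target_util
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_period, self.max_period = min_period, max_period
        self.integral = 0.0
        self.prev_error = 0.0

    def next_period(self, current_period, buffer_utilization):
        """Return the adjusted polling period given buffer fill level in [0, 1]."""
        error = self.target_util - buffer_utilization
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(self.max_period, max(self.min_period, current_period + adjustment))

# Example: a buffer filling up drives the polling period down.
ctrl = PIDPollingController()
period = 0.1
for util in (0.6, 0.85, 0.95):
    period = ctrl.next_period(period, util)
    print(round(period, 4))
```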

BIP

The aim is to develop theory, methods, and tools for building real-time and distributed systems consisting of heterogeneous components. The focus is on the following challenging problems: (1) developing a framework for the incremental composition of heterogeneous components, where three sources of heterogeneity are considered, related to interaction, execution, and abstraction; (2) developing results that ensure correctness-by-construction for essential system properties such as mutual exclusion, deadlock-freedom, and progress, in order to minimize a posteriori validation; and (3) providing automated support for component integration and for the generation of glue code meeting given requirements.
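
As a rough, illustrative encoding of the BIP idea (Behavior, Interaction, Priority), and emphatically not the BIP language or toolset itself, the sketch below models components as small automata with ports, synchronizes sets of ports through interactions, and resolves conflicts with a priority function.

```python
# Toy BIP-flavored encoding: behavior = port-labeled automata, interaction =
# a set of ports fired together, priority = conflict resolution among enabled
# interactions. All names and semantics here are simplified assumptions.
class Component:
    def __init__(self, name, transitions, initial):
        self.name = name
        self.transitions = transitions   # {(state, port): next_state}
        self.state = initial

    def enabled(self, port):
        return (self.state, port) in self.transitions

    def fire(self, port):
        self.state = self.transitions[(self.state, port)]

def step(interactions, priority):
    """Fire one highest-priority enabled interaction (the 'glue' semantics).
    An interaction is a list of (component, port) pairs that synchronize."""
    enabled = [i for i in interactions if all(c.enabled(p) for c, p in i)]
    if not enabled:
        return False
    for c, p in max(enabled, key=priority):
        c.fire(p)
    return True

# Example: a sender and a receiver synchronizing on a single rendezvous.
sender = Component("sender", {("idle", "send"): "done"}, "idle")
receiver = Component("receiver", {("wait", "recv"): "got"}, "wait")
interactions = [[(sender, "send"), (receiver, "recv")]]
step(interactions, priority=lambda i: 0)
print(sender.state, receiver.state)   # done got
```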

SYCRAFT

SYCRAFT (SYmboliC synthesizeR and Adder of Fault-Tolerance) is a tool for transforming distributed fault-intolerant programs into distributed fault-tolerant programs. In SYCRAFT, a distributed fault-intolerant program is specified in terms of a set of processes and an invariant. Each process is specified as a set of actions in a guarded-command language, a set of variables that the process can read, and a set of variables that the process can write. Distribution is addressed in a shared-memory model, where processes are constrained in their ability to read and write program variables. Given a set of fault actions and a safety specification, the tool synthesizes a fault-tolerant program via a symbolic implementation of the respective synthesis algorithms. SYCRAFT has successfully been used to synthesize some of the classic protocols in the literature on fault-tolerant distributed computing (e.g., Byzantine agreement and token ring).
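
The toy encoding below illustrates the kind of input SYCRAFT operates on: guarded-command program actions, an invariant, and fault actions that perturb the state. It is written in Python for exposition with made-up variables; SYCRAFT's actual input language and synthesis algorithms are not shown here.

```python
# Illustrative encoding of a fault-intolerant program: guarded-command actions
# over program variables, an invariant predicate, and fault actions. This is a
# made-up example for exposition, not SYCRAFT syntax or output.
program_actions = [
    # (guard, update): if x disagrees with the token, realign it.
    (lambda s: s["x"] != s["token"], lambda s: s.update(x=s["token"])),
]

fault_actions = [
    # A transient fault corrupts x.
    (lambda s: True, lambda s: s.update(x=1 - s["x"])),
]

def invariant(s):
    # Legitimate states: the process agrees with the token.
    return s["x"] == s["token"]

def step(state, actions):
    """Execute the first enabled action, if any, and return the new state."""
    for guard, update in actions:
        if guard(state):
            new = dict(state)
            update(new)
            return new
    return state

state = {"x": 0, "token": 0}
state = step(state, fault_actions)     # a fault perturbs the state
print(invariant(state))                # False: outside the invariant
state = step(state, program_actions)   # a program action restores it
print(invariant(state))                # True
```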

Past Event Involvement

  1. ICDCS'15, PC member.
  2. DAC'15, PC member.
  3. FORTE'15, PC member.
  4. NFM'15, PC member.
  5. RV'15, PC member.
  6. SSS'15, PC member.
  7. FSEN'15, PC member.
  8. DATE'15, PC member.
  9. AusPDC'15, PC member.
  10. RV'14, PC co-chair.
  11. CSRV'14, Co-chair.
  12. SSS'14, PC vice chair.
  13. RTSS'14, PC member.
  14. RTAS'14, PC member.
  15. AusPDC'14, PC member.
  16. SSS'13, PC member.
  17. SEKE'13, PC member.
  18. RV'13, PC member.
  19. COORDINATION'13, PC member.
  20. FSEN'13, PC member.
  21. AusPDC'13, PC member.
  22. SSS'12, PC vice chair.
  23. SEKE'12, PC member.
  24. COORDINATION'12, PC member.
  25. ICDCN'12, PC member.
  26. SSS'12, Local arrangements chair.
  27. FHIES'12, PC member.
  28. LAFT'11, PC co-chair.
  29. SSS'11, PC member.
  30. EUC'11, PC member.
  31. EMC'11, PC member.
  32. SEKE'11, PC member.
  33. SSS'10, PC member.
  34. ICESS'10, PC member.
  35. SSS'09, PC member.
  36. LAFT'09, PC member.
  37. SSS'09, Publicity co-chair.