My Projects

This is a list of my projects over the last couple of years. If you are a student or just interested in my work, feel free to contact me for more information! Most of them have led to publications and are related to my research work with the Australian Centre for Robotic Vision, the SpaceMaster Robotics Team, the Advanced Concepts Team or the IDSIA Robotics Lab.

Student Projects

We are looking for motivated students to join us here in Brisbane, performing world-class research with real-world implications! If you are interested in robotics, computer vision or machine learning, check out our current research and get in contact with me! We have opportunities for PhD research (you need to have finished at least a bachelor degree), as well as undergraduate final-year projects. Topics include (but are not limited to):

Robust Robotic Grasping: How can we teach a robot to grasp objects everywhere, all the time? No more pesky success rates!

Deep Learning for Robotics: What role does robotics play in grounding deep learning? How can embodiment help to create better AI? Learning to reach and servo to objects, creating robotic eye-hand coordination.

Learning from Visual Demonstrations: Learning by predicting our own models and actions (such as intrinsic rewards) or prediction errors from human demonstrations.

Reinforcement Learning for Robotic Vision: How can we learn from visual demonstration, efficiently, in only a few trials, to perform complicated actions?

Here's some more information about RAS topics:

Current Research Topics and Projects

Vision & Actions (Australian Centre for Robotic Vision)

The ability to see is the remaining technological roadblock to the ubiquitous deployment of robots. The ARC Centre of Excellence for Robotic Vision plays a key role in overcoming this roadblock. We develop the underlying science and technologies that will enable robots to see, to understand their environment using the sense of vision, and to perform useful tasks in the complex, unstructured and dynamically changing environments in which we live and work.

In the Vision and Action theme we will create new theory and methods for using image data to control robotic systems that navigate through space, grasp objects, interact with humans and use motion to assist in seeing. State-of-the-art vision-based control of robotic systems treats visual sensing and image processing as a separate black box from which more classical sensor outputs are derived. These techniques will be applied to inferring and responding to human intent in cooperative work situations. A prototypical problem is pick-and-place in unstructured environments; as a demonstrator we are taking part in the Amazon Picking Challenge 2016.
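To give a feel for what closing the loop from pixels to motion looks like, here is a small sketch of the classic image-based visual servoing law, v = -λ L⁺ e, for point features. This is my own illustration, not the Centre's implementation: the function name, gain, depths and feature values are all made up.

```python
import numpy as np

def ibvs_velocity(features, targets, depths, gain=0.5):
    """Image-based visual servoing: compute a 6-DOF camera velocity twist
    v = -gain * pinv(L) @ e, where L is the interaction matrix of the
    point features and e the stacked feature error."""
    e = (features - targets).reshape(-1)          # stacked (x, y) errors
    rows = []
    for (x, y), Z in zip(features, depths):
        # interaction matrix of a point feature in normalised image coordinates
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    L = np.array(rows)
    return -gain * np.linalg.pinv(L) @ e

# One feature slightly to the right of its goal position, 1 m away:
v = ibvs_velocity(np.array([[0.1, 0.0]]), np.array([[0.0, 0.0]]), depths=[1.0])
```

Integrating this velocity drives the observed feature towards its target location in the image, without ever reconstructing the full scene.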

since 2014, ACRV


since 2014 ACRV, 2010-2014 IDSIA

The overarching goal of my research is to create robots with more autonomous and more adaptive behaviours, leading to more 'intelligent' robots. My current research is at the intersection of Machine Learning, Computer Vision and robotic control. Our main platform right now is Baxter. For the last few years I have been working on making the iCub see, that is, developing computer vision algorithms for object detection, identification and localisation.

Check out this video!

The iCub humanoid robot is an open platform (both open-source and open-hardware) developed in EU-funded FP7 research projects. In our setup it provides a 41 degree-of-freedom (DOF) upper body, comprising two arms, a head and a torso. The iCub is generally considered an interesting experimental platform for research on cognitive and sensorimotor development and embodied Artificial Intelligence, with a focus on object manipulation.

Neuroscience, Robotics and Bio-signal Processing

Transferring knowledge from neuroscience into robotics will become increasingly interesting in the coming years! Conversely, robots are very useful tools for testing theories about behavioural, mental and cognitive development!

It is also interesting to investigate how signals from humans, be they visual cues, muscle activations or even neurons firing, can be used to tele-operate a robot. [Video] Currently I am looking at how the sensor data of recently available consumer devices, such as the LEAP Motion sensor, the MS Kinect and Thalmic Labs' MYO, can be fused...

since 2014 ACRV, 2011-2014 IDSIA

EMG sensing

since 2014 ACRV, 2010-2014 IDSIA, 2009-2010 ESA/ACT, 2007-2009 SpaceMaster

Autonomy is important for robotic spacecraft operations, as it expands the science return of space missions. My interests range from multi-robot cooperation (e.g. to jointly build a lunar base before sending astronauts), to autonomous science operations (e.g. visual detection of `points-of-interest' for Mars rovers), all the way to human-robot-agent teams (HART) for joint exploration scenarios in which shared autonomy is required.

Right now we are trying to build a lunar payload, named LunaRoo, here at QUT. Check out the subpage or find some of my previous projects further down this page!

Previous Research Projects

Wearable Interfaces For Hand Function Recovery (WAY)

WAY addresses the challenging problems of functional substitution and recovery of sensorimotor capacities of the human hand. Apart from developing ready-to-use Brain-Neural Computer Interfaces (BNCI) for the disabled, the interfaces and systems developed within the project will be functionally evaluated by amputees and patients with hand dysfunctions, exploiting two complementary demonstrators: a multifunction hand prosthesis and a hand exoskeleton (called Hand Assistive Devices, HAD). The two demonstrators will be developed by the biorobotics group of the Scuola Superiore Sant'Anna. IDSIA develops algorithms facilitating control of hand prostheses and exoskeletons through brain-computer interfaces.

WAY Project

the iCub and Juxi

Intrinsically Motivated Agents

One of the main projects at IDSIA recently was the Intrinsically Motivated Cumulative Learning Versatile Robots -- or IM-CLeVeR for short -- project. With partners including the CNR (in Rome) and the Universities of Ulster and Aberystwyth (UK), the project focusses on the iCub robot and how it can learn to interact with its environment. It was an FP7-funded project.


It is quite a big project here at IDSIA, and a lot of work goes into making the iCub perform object manipulation. The main parts I am currently working on are setting up the software and starting on the interface between the Machine Learning (Reinforcement Learning) group and robotics. Right now I am helping to make the iCub see things.

Enhancing Biomorphic Agility Through Variable Stiffness (STIFF)

The project is nicknamed STIFF and is led by the German Aerospace Center (DLR) (Biorobotics Section). STIFF is a research project on enhancing the biomorphic agility of robot arms and hands through variable stiffness and elasticity. It is funded by the 7th Framework Programme of the European Union (grant agreement No. 231576).

IDSIA is responsible for learning high-level, task-specific controllers based on reinforcement signals for the flexible variable-impedance robot arm developed by DLR (HASy), and for extracting cost functions from human behaviours (sometimes referred to as inverse reinforcement learning) in collaboration with UEDIN.


Evolving Docking

This project aims to evolve neurocontrollers (NCs) that can be used to control spacecraft, with a focus on automatic rendezvous and docking (AR&D). The main goal is to answer the following questions: 'Can we design an NC that can adapt to varying conditions? What is the trade-off between the optimality of such NCs and their adaptability and robustness?'

2009-2010, Advanced Concepts Team/ESA

The motivation for this research is to find a state-feedback controller, that is u(x), which gives the controller more adaptiveness and robustness. As a first step towards a fully reactive NC, an artificial neural network (ANN) is trained offline by a fitness-based genetic algorithm to fulfil a docking task. For comparison, an optimal control strategy as a function of time, u(t), is found using numerical algorithms; such an open-loop control strategy is bound to fail in an uncertain and disturbed environment.
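As a toy illustration of this offline training step (not the actual AR&D setup: the network size, dynamics and GA parameters here are invented), the sketch below evolves a tiny ANN state-feedback controller u(x) for a one-dimensional double-integrator 'docking' task using a fitness-based genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def control(weights, state):
    """Tiny one-hidden-layer ANN: state (position, velocity) -> thrust u(x)."""
    w1 = weights[:8].reshape(4, 2)
    w2 = weights[8:12]
    return float(np.tanh(w2 @ np.tanh(w1 @ state)))

def fitness(weights):
    """Simulate the 1-D 'docking' (double integrator) and penalise the
    final distance and residual velocity -- higher fitness is better."""
    pos, vel = 1.0, 0.0
    for _ in range(100):
        u = control(weights, np.array([pos, vel]))
        vel += 0.1 * u
        pos += 0.1 * vel
    return -(abs(pos) + abs(vel))

def evolve(generations=40, pop_size=30, sigma=0.3):
    """Fitness-based GA: keep the best quarter, mutate to refill the population."""
    pop = rng.normal(size=(pop_size, 12))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]
        children = elite[rng.integers(len(elite), size=pop_size)]
        pop = children + rng.normal(scale=sigma, size=children.shape)
        pop[0] = elite[-1]          # elitism: keep the best individual unchanged
    return max(pop, key=fitness)

best = evolve()
```

Because the evolved network maps the current state directly to a command, it keeps correcting when the state is perturbed, which is exactly what the open-loop u(t) strategy cannot do.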

Fractionated Spacecraft Simulation

The ACT in collaboration with DG-PS started the investigation of possible future and advanced scenarios and concepts in the broad area of space and security (e.g. disaster monitoring, space debris, NEOs, etc.). A special focus was on DARPA's F6 project because of its interesting framework, which includes possible dual use, its aim for asset security and its re-configurability.

Apart from that, the project also provides a new methodology for space-systems design (see the proposed value-centric design method), with the ability to extend and/or reconfigure space architectures over time by adding and removing assets when needed. We aim to simulate dynamic resource sharing between multiple small spacecraft in orbit.

2009-2010, Advanced Concepts Team/ESA

We take a first step from theoretically described resource sharing to its application, ensuring that each spacecraft module has the resources it needs for operation. For this we use a Multi-Agent System (MAS) to model the spacecraft as agents with given resource requirements and the ability to transfer resources point-to-point between agents.
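A minimal sketch of the idea (my own simplification with invented module names and a single generic resource, not the actual ACT simulator): modules with a surplus transfer it point-to-point to modules with a deficit until every module meets its requirement, if the total supply allows.

```python
class Module:
    """A spacecraft module (agent) holding a stock of one generic resource."""
    def __init__(self, name, stock, required):
        self.name, self.stock, self.required = name, stock, required

    @property
    def surplus(self):
        return self.stock - self.required

def balance(modules):
    """Greedy point-to-point transfers from surplus to deficit modules.
    Returns the transfer log as (donor, receiver, amount) tuples."""
    transfers = []
    donors = [m for m in modules if m.surplus > 0]
    for needy in [m for m in modules if m.surplus < 0]:
        for donor in donors:
            amount = min(donor.surplus, -needy.surplus)
            if amount > 0:
                donor.stock -= amount
                needy.stock += amount
                transfers.append((donor.name, needy.name, amount))
            if needy.surplus >= 0:
                break
    return transfers

fleet = [Module("power-sat", 10, 4), Module("comms-sat", 1, 3), Module("cam-sat", 2, 4)]
log = balance(fleet)   # two transfers from "power-sat" cover both deficits
```

A real fractionated-spacecraft simulation would add orbital geometry, transfer costs and time-varying requirements on top of this agent abstraction.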

Multi Robot Cooperation

Investigating Multi-Robot Formations for Area Coverage in Space Applications was part of my MSc studies. Two algorithmic implementations of multi-robot area coverage were investigated in a space exploration scenario, in which a marsupial robot society is tasked to, e.g. create maps, create habitats,... An overview of multi-robot systems in space, both currently in use and planned, was given (cited by NASA's working group on Mars Exploration). A vector force technique and a machine learning approach, the organizational-learning classifier system (OCS) introduced by Takadama, were implemented in C++. The two were compared in a purpose-built simulator, SMRTCTRL; the first approach was also tested using a multi-robot society of LEGO Mindstorms.

2008-2009, TKK, LTU, 東京大学
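The vector force technique can be sketched as a potential-field dispersion step, where mutual repulsion spreads the robots over the area. The parameters and positions below are illustrative only, not those used in SMRTCTRL.

```python
import numpy as np

def dispersion_step(positions, repulsion=0.05, dt=1.0):
    """Each robot feels an inverse-square repulsive force from every other
    robot; integrating these forces spreads the team apart for coverage."""
    forces = np.zeros_like(positions)
    for i, p in enumerate(positions):
        for j, q in enumerate(positions):
            if i != j:
                d = p - q
                dist = np.linalg.norm(d)
                forces[i] += repulsion * d / dist**3   # inverse-square repulsion
    return positions + dt * forces

# Three robots starting close together spread out after one step:
pts = np.array([[0.0, 0.0], [0.4, 0.0], [0.2, 0.3]])
spread = dispersion_step(pts)
```

In practice such controllers add goal-attraction and obstacle terms to the force sum; the appeal is that each robot only needs the relative positions of its neighbours, with no central coordinator.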




CanSat (EISBAR)

A CanSat is a 'scale satellite' integrated within a tin can and launched onboard a weather balloon (or amateur rocket). It performs autonomous measurements of temperature, pressure, etc. In the EISBAR project (Educational Investigation into Satellite Building and Atmospheric Recording), during my Master's, we designed, implemented and tested such a small module.

2007-2008, JMUW, LTU

To measure the various quantities of interest, the project was split into sub-components. The module was able to record pressure, temperature, position and radiation levels. The data collected was transmitted to a ground station via radio and also stored on board. Afterwards it was analysed to learn more about atmospheric radiation levels.

Visual Programming

Focusing on the programming language and IDE Processing, which evolved out of MIT in 2001, we evaluated what it can be used for. Processing is used in various areas including visualization, hardware, video, animation,... We also looked into alternatives and offshoots of Processing, such as the Arduino IDE and similar projects for other languages.

2006-2007, TU Wien

Tangible Interface


Apart from the above projects I was also involved in the following...

Sozialpsychologie e-Learning (Social Psychology e-Learning)

SpaceMaster Robotics Team

The SpaceMaster Robotics Team is a student team consisting of former students of the Joint European Master in Space Science & Technology (SpaceMaster) programme, currently spread all over Europe. And we do cool things :) More about the following projects can be found on the SMRT homepage.

SMRT - SpaceMaster Robotics Team