I work with human-driven data, usually captured by wearable or device-embedded sensors, and design algorithms that extract meaningful information from it: endpoint prediction, gesture recognition, activity recognition, interaction prediction, and HCI modeling. For this I draw on signal processing, machine learning, sensor fusion, and human movement modeling techniques. My programming languages include Python, C, Julia, and Matlab.
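As a minimal sketch of what such a pipeline can look like (not any specific project of mine), the snippet below windows a raw sensor stream into simple statistical features and classifies each window by its nearest class centroid. The feature set, class names, and toy data are all illustrative assumptions.

```python
import numpy as np

def extract_features(window):
    """Per-window features from a 1-D sensor signal: mean, std, energy."""
    return np.array([window.mean(), window.std(), np.sum(window ** 2)])

def nearest_centroid_predict(features, centroids):
    """Assign the window to the class whose feature centroid is closest."""
    labels = list(centroids.keys())
    dists = [np.linalg.norm(features - centroids[l]) for l in labels]
    return labels[int(np.argmin(dists))]

# Toy data: a 'still' window (low variance) vs a 'shake' window (high variance)
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.05, 100)
shake = rng.normal(0.0, 1.0, 100)

# Centroids built from separate toy recordings of each class
centroids = {
    "still": extract_features(rng.normal(0.0, 0.05, 100)),
    "shake": extract_features(rng.normal(0.0, 1.0, 100)),
}

print(nearest_centroid_predict(extract_features(shake), centroids))
```

In practice the features, windowing, and classifier would be chosen per sensor and task; the structure (segment, featurize, classify) is the reusable part.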
In recent years, new sensing technologies have emerged for assessing human motion, interaction and behaviour. Human-driven signals are challenging: they are unpredictable, complex and diverse, and therefore hard to model.
How do you validate a sensing technology, and how do you determine its accuracy and value? Every sensor has its limitations and artefacts, and characterizing them is key to mitigating their effects. I join forces with hardware engineers and silicon manufacturers to track and guide improvements.
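A basic example of such characterization is estimating a sensor's bias and noise level from a static recording, where the true signal is known and constant. The gyroscope numbers below are simulated assumptions, not measurements from any real device.

```python
import numpy as np

def characterize_static(recording, true_value=0.0):
    """Estimate bias and noise std from a recording where the true
    signal is known and constant (e.g. a gyroscope at rest)."""
    bias = recording.mean() - true_value
    noise_std = recording.std(ddof=1)
    return bias, noise_std

# Simulated gyroscope at rest: true rate 0, bias 0.02 rad/s, noise std 0.01
rng = np.random.default_rng(1)
samples = 0.02 + rng.normal(0.0, 0.01, 5000)

bias, noise = characterize_static(samples)
print(f"bias ~ {bias:.3f} rad/s, noise std ~ {noise:.3f} rad/s")
```

Richer characterizations (drift over time, temperature dependence, Allan variance) follow the same idea: record under controlled conditions, then fit an error model.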
Prediction filters and navigation, localization, detection, tracking, classification and mapping algorithms tailored to human movement and interaction form the core of my work.
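To make "prediction filter" concrete, here is a minimal 1-D constant-velocity Kalman filter tracking a noisy position stream, the kind of building block such trackers start from. All parameters and the toy fingertip trajectory are illustrative assumptions.

```python
import numpy as np

def kalman_cv(measurements, dt=0.01, q=1.0, r=0.05):
    """1-D constant-velocity Kalman filter over noisy position readings.
    State is [position, velocity]; returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # process noise for the CV model
                      [dt**2 / 2, dt]])
    R = np.array([[r**2]])                     # measurement noise
    x, P = np.zeros((2, 1)), np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

# Toy example: fingertip moving at constant velocity, noisy sensor readings
rng = np.random.default_rng(2)
t = np.arange(0, 1, 0.01)
truth = 0.5 * t
noisy = truth + rng.normal(0.0, 0.05, t.size)
smooth = kalman_cv(noisy)
```

Real human motion is rarely constant-velocity, which is exactly why tailored dynamic models matter; this sketch only shows the predict/update skeleton they share.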
Human-robot interaction, self-driving cars and gesture-based controllers require technology capable of capturing human intent and handling the unpredictable nature of human behaviour.
How do you reconcile human comfort with measurement accuracy? Users are increasingly critical and selective about what they use and wear, and about which devices they are willing to have at home and at work. I pair with product designers and UX researchers to find out where, when, how and what to measure.
I run experiments to onboard users to new experiences, evaluating their effort and satisfaction while using new controllers. I also design and execute experiments to gather data for machine learning. I deliver end-to-end human experiment solutions, from experiment protocol design to statistical analysis and data visualization.
Which vocabulary of gestures or interactions works? How learnable is an interaction? What is the feasible region once you account for interaction accuracy, computation cost, power, user effort (mental and physical) and satisfaction? Which dynamic model best describes a given human motion or interaction, and how robust is it?