ECCV Workshop on Benchmarking Multi-target Tracking 2016

My paper titled "Online multi-target tracking with strong and weak detections (link)" (R. Sanchez Matilla, F. Poiesi and A. Cavallaro) has been accepted in the European Conference on Computer Vision Workshop (ECCV - Benchmarking Multi-target Tracking: MOTChallenge 2016). We participated to the challenge with a new multi-object tracker based on Probability Hypothesis Density Particle Filter (we named it EAMTT). At submission date EAMTT had the best online tracking results overall in MOT15 and MOT16 among the public trackers. Today (29/08) EAMTT is still ranked first in MOT15 and is ranked second in MOT16. - Abstract - We propose an online multi-target tracker that exploits both high and low confidence target detections in a Probability Hypothesis Density Particle Filter framework. High-confidence (strong) detections are used for label propagation and target initialization. Low-confidence (weak) detections only support the propagation of labels, i.e. tracking existing targets. Moreover, we perform data association just after the prediction stage thus avoiding the need for computationally expensive labelling procedures such as clustering. Finally, we perform sampling by considering the perspective distortion in the target observations. The proposed tracker runs on average at 12 frames per second. Results show that our method outperforms alternative online trackers on the Multiple Object Tracking 2016 and 2015 benchmark datasets in terms tracking accuracy, false negatives and speed.

Oral presentation at BMVC

My paper titled "Detection of fast incoming objects with a moving camera (link)" (F. Poiesi and A. Cavallaro) has been accepted as oral presentation in British Machine Vision Conference (BMVC). Have a look to the project page for videos (and soon code!). - Abstract - Using a monocular camera for early collision detection in cluttered scenes to elude fast incoming objects is a desirable but challenging functionality for mobile robots, such as small drones. We present a novel moving object detection and avoidance algorithm for an uncalibrated camera that uses only the optical flow to predict collisions. First, we estimate the optical flow and compensate the global camera motion. Then we detect incoming objects while removing the noise caused by dynamic textures, nearby terrain and lens distortion by means of an adaptively learnt background-motion model. Next, we estimate the time to contact, namely the expected time for an incoming object to cross the infinite plane defined by the extension of the image plane. Finally, we combine the time to contact and the compensated motion in a Bayesian framework to identify an object-free region the robot can move towards to avoid the collision. We demonstrate and evaluate the proposed algorithm using footage of flying robots that observe fast incoming objects such as birds, balls and other drones.

Paper accepted in IEEE Trans. on Circuits and Systems for Video Technology

My paper titled "Support Vector Motion Clustering (link)" (I.A. Lawal, F. Poiesi, D. Anguita and A. Cavallaro) has been accepted in IEEE Trans. on Circuits and Systems for Video Technology (TCSVT). - Abstract - We present a closed-loop unsupervised clustering method for motion vectors extracted from highly dynamic video scenes. Motion vectors are assigned to non-convex homogeneous clusters characterizing direction, size and shape of regions with multiple independent activities. The proposed method is based on Support Vector Clustering (SVC). Cluster labels are propagated over time via incremental learning. The proposed method uses a kernel function that maps the input motion vectors into a high-dimensional space to produce non-convex clusters. We improve the mapping effectiveness by quantifying feature similarities via a blend of position and orientation affinities. We use the Quasiconformal Kernel Transformation to boost the discrimination of outliers. The temporal propagation of the clusters' identities is achieved via incremental learning based on the concept of feature obsolescence to deal with appearing and disappearing features. Moreover, we design an on-line clustering performance prediction algorithm used as a feedback (closed-loop) that refines the cluster model at each frame in an unsupervised manner. We evaluate the proposed method on synthetic datasets and real-world crowded videos, and show that our solution outperforms state-of-the-art approaches.

Best online multi-target tracker on MOT challenge

We recently submitted our results to the Multiple Object Tracking challenge and ranked first with our new Probability Hypothesis Density Particle Filter (PHD_PF) in the category of online trackers (second overall). Check the results out at these links: 1, 2. Merry Christmas!

Presentation at London Big-O Meetup

I have been invited to present my work on formations of flying cameras for filming a moving target at the London Big-O Meetup on Mon 9th (link1, link2). Just RSVP if you want to attend. Before my presentation, Amrith will present his work on the conceptual design of Unmanned Aircraft Systems, which considers communication systems, payload systems, propulsion systems, avionics architecture and more. Presentations will start at 18.30.

Paper accepted at IROS 2015

My paper titled "Distributed vision-based flying cameras to film a moving target (link)" (F. Poiesi and A. Cavallaro) has been accepted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015 held in Hamburg, Germany - Abstract - Formations of camera-equipped quadrotors (flying cameras) have the actuation agility to track moving targets from multiple viewing angles. In this paper we propose a solution for the infrastructure-free distributed control of multiple flying cameras tracking an object. The proposed approach is a vision-based servoing that can deal with noisy and missing target observations, accounts for the quadrotor oscillations and does not require an external positioning system. The flight direction of each camera is inferred via geometric derivation, and the formation is maintained by employing a distributed algorithm that uses the information of the target position on the camera plane and the position of neighboring flying cameras. Simulations show that the proposed solution allows the tracking of a moving target by the cameras flying in formation also with noisy target detections, and when the target is outside some of fields of view or lost for a few frames.

Multi-target trajectory 3D visualiser is ONLINE

The alpha web version of the 3D visualiser for multi-target tracking results is online! If you like, there is a 2-minute video on YouTube that briefly shows what I am talking about. You can upload your data and the visualiser will help you analyse, debug and present your results for scientific reports or papers. The visualiser was presented in this paper. Just drop me an email if you are interested; I will give you further details and create an account for you. Believe me, the visualiser will change your life :)

Paper accepted at ISSNIP 2015

My paper titled "Self-positioning of a team of flying smart cameras (link)" (F. Poiesi and A. Cavallaro) has been accepted to the IEEE 10th International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) 2015 held in Singapore - Abstract - Quadcopters are highly maneuverable and can provide an effective means for an agile dynamic positioning of sensors such as cameras. In this paper we propose a method for the self-positioning of a team of camera-equipped quadcopters (flying cameras) around a moving target. The self-positioning task is driven by the maximization of the monitored surface of the moving target based on a dynamic flight model combined with a collision avoidance algorithm. Each flying camera only knows the relative distance of neighboring flying cameras and its desired position with respect to the target. Given a team of up to 12 flying cameras, we show they can achieve a stable time-varying formation around a moving target without collisions.

ARTEMIS Co-summit and VISIGRAPP

I am going to be in Berlin, Germany from 9 to 14 March attending the ARTEMIS Co-summit 2015 (10/03-11/03) and VISIGRAPP 2015 (11/03-14/03). I will be at the COPCAMS booth during the Co-summit and at VISIGRAPP presenting the newly published 3D visualiser for tracking results (link).

Paper accepted at IVAPP 2015

My paper titled "MTTV: an interactive trajectory visualization and analysis tool (link)" (F. Poiesi and A. Cavallaro) has been accepted to the 6th International Conference on Information Visualization Theory and Applications (IVAPP) 2015 held in Berlin, Germany - Abstract - We present an interactive visualizer that enables the exploration, measurement, analysis and manipulation of trajectories. Trajectories can be generated either automatically by multi-target tracking algorithms or manually by human annotators. The visualizer helps understanding the behavior of targets, correcting tracking results and quantifying the performance of tracking algorithms. The input video can be overlaid to compare ideal and estimated target locations. The code of the visualizer (C++ with openFrameworks) is open source.

New website

Welcome to my new website! I will do my best to keep it updated. In Home you will find all my recent activities and news; in Publications, the list of my published works; in Data, I will organise links to datasets that could be useful for different applications; in About me, a brief description of who I am. I hope you will find this website useful :) I have tested the website on Chrome, Firefox and Safari, so please use one of these browsers for the best viewing experience.