Results of Metro Station Applications
In this part, we describe the results obtained on real visual surveillance applications for metro stations. The videos come from the CCTV networks of the metro operators that are partners in the AVS-PV European project.
We have formalized the expertise of three security engineers in a knowledge base. The following scenario models (figures 7 and 8) are extracted from the AVS-PV knowledge base.
In the following, we detail how the scenarios ``forbidden access to area'' and ``graffiti on wall'' described in figures 7 and 8 are recognized. These scenarios belong to the knowledge base built for the AVS-PV European project, and the videos were recorded in a STIB Metro Station in Brussels. In the next two examples, the camera observes the platform of a metro station. The aims of these two scenarios are to prevent vandalism against equipment and to ensure the safety of passengers.
Figure 9: At t = 111, person 1 is inside the tracks area. The change triggers the event ``person 1 enters tracks area''. This area is labelled as a forbidden area, so the first event of the ``forbidden access to area'' scenario is recognized. At t = 141, person 1 is far from the wall and still inside the ``tracks'' area, so the event ``person 1 exits the tracks area'' has not been triggered. The non-occurrence of this event matches the second event (negative event) of the scenario ``forbidden access to area''. An alarm is sent to the human operator.
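The recognition mechanism used by this scenario, a triggering event followed by the non-occurrence of a negative event over an observation window, can be sketched as follows. This is a minimal illustration only; the class and attribute names (`Scenario`, `observe`, `window`) are hypothetical and are not taken from the AVS-PV system.

```python
# Hypothetical sketch of recognizing a scenario made of a positive
# (triggering) event followed by a negative event that must NOT occur,
# as in "forbidden access to area". All names are illustrative.

class Scenario:
    def __init__(self, name, trigger, negative, window):
        self.name = name          # scenario label
        self.trigger = trigger    # event that starts recognition
        self.negative = negative  # event that must not occur
        self.window = window      # time to wait before raising an alarm
        self.start = None         # time at which the trigger was matched

    def observe(self, t, events):
        """Feed the set of events detected at time t; return True on alarm."""
        if self.start is None:
            if self.trigger in events:
                self.start = t    # first event of the scenario recognized
            return False
        if self.negative in events:
            self.start = None     # negative event occurred: reset
            return False
        # Negative event has not occurred for the whole window: alarm.
        return t - self.start >= self.window

scenario = Scenario("forbidden access to area",
                    trigger="person 1 enters tracks area",
                    negative="person 1 exits tracks area",
                    window=30)

alarm = False
for t, events in [(111, {"person 1 enters tracks area"}),
                  (141, set())]:
    alarm = scenario.observe(t, events) or alarm
print(alarm)  # True: the exit event never occurred within the window
```

The same pattern covers the ``graffiti on wall'' scenario of figures 10 and 11 by swapping the trigger and negative events.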
Figure 10: At t = 166, person 1 is near the equipment ``wall'', so the event ``person 1 moves close to equipment wall'' is triggered. This event instantiates the first event of the scenario ``graffiti on wall''.
Figure 11: At t = 196, person 1 is still near the equipment ``wall'', so the event ``person 1 moves off equipment wall'' has not been triggered. The non-occurrence of this event matches the second event (negative event) of the scenario ``graffiti on wall''. An alarm is sent to the human operator.
In the following, we detail how the scenarios ``Presence period near fragile equipment'' and ``Repeated presence period near fragile equipment'' described in figures 12 and 13 are recognized. These scenarios belong to the knowledge base built for the AVS-PV European project, and the videos were recorded in a VAG Metro Station in Nuremberg (Germany). In this example, the camera observes the entrance of a metro station. The aim of these two scenarios is to prevent vandalism against ticket vending machines. These machines have been defined in the context (see section 4) as fragile equipment.
Figure 14: At t = 33, person 1 is far from the equipment labelled as ``fragile''. At t = 43, person 1 is close to the equipment labelled as ``fragile'', so the event ``person 1 moves close to equipment machine'' is triggered. The first event of the scenario ``Period near fragile equipment'' is instantiated.
Figure 15: At t = 47, person 1 has stopped and is still close to the machine, so the negative event of the scenario ``Period near fragile equipment'' is instantiated. Secondly, the fact that person 1 stopped triggers the event ``person 1 stops''. The three events of the scenario are recognized and an alarm is sent to the human operator. The complete recognition of this scenario triggers a specific loopback event, which matches the first event of the scenario ``Repeated presence period near fragile equipment''.
Figure 16: At t = 178, the event ``person 1 moves close to equipment'' is triggered. This equipment is the same as the one at t = 43. The scenario ``Repeated presence period near fragile equipment'' is now fully recognized. An alarm is sent to the human operator.
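The loopback mechanism, whereby the complete recognition of one scenario emits an event that can instantiate the first event of another scenario, could be sketched as below. This is a simplified, hypothetical illustration (it ignores negative events and timing, and the function name `recognize` is our own), not the AVS-PV implementation.

```python
# Hypothetical sketch of the loopback mechanism: each time a scenario's
# expected event sequence is fully matched, a loopback event is emitted
# into the event stream, where a second scenario ("Repeated presence
# period near fragile equipment") can match it as its first event.

def recognize(stream, sequence, loopback_event):
    """Yield the events of `stream`, appending `loopback_event`
    each time `sequence` is fully matched in order."""
    pending = list(sequence)
    for event in stream:
        yield event
        if pending and event == pending[0]:
            pending.pop(0)
            if not pending:               # scenario fully recognized
                yield loopback_event      # emit the loopback event
                pending = list(sequence)  # re-arm for a repetition

stream = ["person 1 moves close to equipment",   # t = 43
          "person 1 stops",                      # t = 47
          "person 1 moves close to equipment"]   # t = 178
out = list(recognize(stream,
                     sequence=["person 1 moves close to equipment",
                               "person 1 stops"],
                     loopback_event="presence period recognized"))
print("presence period recognized" in out)  # True
```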
The results of these applications were considered very satisfactory by the metro operators. The formalism we have proposed for scenario description has enabled us to represent the expertise needed for these applications, although knowledge modeling remains difficult: the main reason is that we need to manage the passage from vague security concepts (such as ``abnormal behavior'') to rigorous scenario models. These results have been processed off-line on a Sun Ultra10 workstation. The computing time per image is between 220 ms and 530 ms for the complete chain (including perception and interpretation). Among the 25 images digitized per second, 5 images (one per 200 ms) are processed.
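The quoted sampling figures are internally consistent, as this trivial check shows (the numbers are those reported above; the variable names are our own):

```python
# Sanity check on the reported frame sampling: 25 images are digitized
# per second and one image is processed every 200 ms.
digitized_per_second = 25
processing_interval_ms = 200
processed_per_second = 1000 // processing_interval_ms
print(processed_per_second)  # 5 images processed per second
# Every 5th digitized image is processed:
print(digitized_per_second // processed_per_second)  # 5
```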