The main expected results of beAWARE are:
- New, enhanced decision support and early warning services based on aggregated analysis of multimodal data and previous crisis management records
- Shorter reaction times
- More efficient emergency responses
- Improved coordination of the emergency response in the field
- Contribution to European policy on disaster risk and crisis management
These results build on beAWARE's main technologies:
Multilingual speech and written communication analysis in emergency calls
Speech recognition: beAWARE aims to develop and adapt speech technologies for speech produced in noisy conditions and/or stressful situations. Available technologies will be adapted to the specific domain of emergency calls.
Transcribed and written communication analysis: beAWARE aims to develop robust multilayer parsing techniques that deliver semantic and deep-syntactic structures. These will serve, on the one hand, as transfer structures for machine translation and, on the other hand, as input for projection onto conceptual (ontological) structures that will be fed into the overall ontological repository, in which the content obtained via the different modalities will be represented.
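By way of illustration, the following minimal sketch approximates such a pipeline with off-the-shelf components: the speech_recognition and spaCy libraries stand in for the project's domain-adapted speech and parsing technologies, and the subject-verb-object triples are an illustrative assumption about the shape of the ontological input, not beAWARE's actual schema.

```python
# Sketch: transcribe an emergency call and project the dependency parse
# onto simple (subject, predicate, object) triples for an ontological
# repository. Libraries and schema are illustrative stand-ins.
import speech_recognition as sr
import spacy

def transcribe(audio_path: str) -> str:
    """Transcribe a (possibly noisy) recording; a domain-adapted acoustic
    model would replace the generic recognizer used here."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        recognizer.adjust_for_ambient_noise(source)  # dampen background noise
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)  # placeholder ASR backend

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Rough projection of a dependency parse onto subject-verb-object
    triples, as input to conceptual (ontological) structures."""
    nlp = spacy.load("en_core_web_sm")
    triples = []
    for token in nlp(text):
        if token.dep_ == "nsubj":  # clause subject found
            verb = token.head
            for obj in verb.children:
                if obj.dep_ in ("dobj", "attr"):
                    triples.append((token.text, verb.lemma_, obj.text))
    return triples

print(extract_triples("The river floods the main road."))
# [('river', 'flood', 'road')]
```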
Aggregation of multimodal information from sensor networks, meteorological stations, etc., and from social media for decision support, validation and the issuing of early warnings
Machine-to-Machine (M2M) and Internet-of-Things (IoT) platforms are resources for collecting real-time participatory and opportunistic sensing information that can be utilized to detect an emergency or to enhance the contextual information of a specific physical vicinity. Information distributed over several Web resources, including forums, blogs and social networks, will be gathered, and content pertinent to specific topics related to emergency events will be crawled and extracted.
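As a minimal sketch of this collection step, the snippet below subscribes to sensor readings and a social stream over MQTT, a common M2M/IoT protocol, and applies a crude keyword filter of the kind a crawler might use for topical relevance; the broker address, topic names and keyword list are hypothetical placeholders rather than beAWARE's actual configuration.

```python
# Sketch: collect real-time sensor readings over MQTT and keyword-filter
# incoming social-media-style messages for emergency-related content.
# Broker address, topics and keywords are hypothetical placeholders.
import json
import paho.mqtt.client as mqtt

EMERGENCY_KEYWORDS = {"flood", "fire", "smoke", "evacuation"}  # illustrative

def is_emergency_related(text: str) -> bool:
    """Crude topical filter; real content analysis would be far richer."""
    return any(kw in text.lower() for kw in EMERGENCY_KEYWORDS)

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    if msg.topic.startswith("sensors/"):
        # Participatory/opportunistic sensing enriches situational context.
        print(f"sensor {payload['id']}: {payload['value']} @ {payload['location']}")
    elif is_emergency_related(payload.get("text", "")):
        print(f"relevant post: {payload['text']}")

client = mqtt.Client()  # for paho-mqtt >= 2.0 pass mqtt.CallbackAPIVersion.VERSION1
client.on_message = on_message
client.connect("broker.example.org", 1883)  # hypothetical broker
client.subscribe([("sensors/#", 0), ("social/stream", 0)])
client.loop_forever()
```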
Visual context analysis during emergency calls
beAWARE will develop technologies to extract high-level information, such as indoor/outdoor or city landscape/deserted area, by extracting low-level features from visual data and translating them into high-level concepts using supervised machine learning techniques.
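A minimal sketch of that low-level-to-high-level step, assuming color-histogram features and a scikit-learn support vector classifier (the feature choice, file names and indoor/outdoor labels are illustrative; the project would rely on richer features and properly curated training data):

```python
# Sketch: map low-level visual features (color histograms) to high-level
# concepts such as indoor/outdoor with a supervised classifier.
# Feature choice, labels and file names are illustrative only.
import numpy as np
from PIL import Image
from sklearn.svm import SVC

def color_histogram(image_path: str, bins: int = 8) -> np.ndarray:
    """Low-level feature: a normalized per-channel color histogram."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()

# Hypothetical labelled training frames: 0 = indoor, 1 = outdoor.
train_paths = ["call_frame_001.jpg", "call_frame_002.jpg"]
train_labels = [0, 1]

clf = SVC(kernel="rbf")
clf.fit([color_histogram(p) for p in train_paths], train_labels)

concept = clf.predict([color_histogram("incoming_frame.jpg")])[0]
print("outdoor" if concept == 1 else "indoor")
```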
Semantic integration of multimodal information from emergency calls, M2M/IoT platforms and social media for decision support and the generation of early warnings
beAWARE will develop technologies for the semantic integration of the diverse multimodal content from emergency calls, M2M/IoT platforms, social networks and blogs, enabling reasoning for decision support in the Public Safety Answering Point (PSAP) as well as the generation of early warnings. An ontological framework will be developed to semantically represent all the information extracted from user input (the processing results of image, video, text and speech analysis), location information, historical data from other emergency calls, and social media analytics. Appropriate ontological structures will be created to provide the backbone of the reasoning mechanisms.
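To make the idea concrete, the sketch below represents multimodal analysis results as RDF triples with rdflib and runs a SPARQL query as a stand-in for the reasoning mechanisms; the namespace, classes and the threshold-based early-warning rule are invented for illustration and do not reflect the project's actual ontology.

```python
# Sketch: represent multimodal analysis results as RDF triples and query
# them as a stand-in for ontology-based reasoning and early warnings.
# Namespace, classes and rule thresholds are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

BA = Namespace("http://example.org/beaware#")  # hypothetical ontology
g = Graph()
g.bind("ba", BA)

# Results from different modalities attached to a single incident.
g.add((BA.incident42, RDF.type, BA.FloodIncident))
g.add((BA.incident42, BA.reportedVia, Literal("emergency_call")))
g.add((BA.incident42, BA.waterLevelCm, Literal(120, datatype=XSD.integer)))
g.add((BA.incident42, BA.mentionedInPosts, Literal(37, datatype=XSD.integer)))

# Rule-like query: flag incidents that may warrant an early warning.
query = """
SELECT ?incident WHERE {
    ?incident a ba:FloodIncident ;
              ba:waterLevelCm ?level ;
              ba:mentionedInPosts ?posts .
    FILTER (?level > 100 && ?posts > 20)
}
"""
for row in g.query(query):
    print(f"early warning candidate: {row.incident}")
```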
Multilingual report generation from aggregated emergency data
beAWARE will develop techniques for generating multilingual written information of different kinds from ontological representations. Among the generated outputs, beAWARE aims to provide reports of the aggregated content related to a single emergency across different modalities (audio, video, social networks), supplementary contextual information for the PSAPs, emergency case statistics, etc.
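As an illustration of this final step, a minimal template-based sketch that renders one aggregated emergency record in several languages is given below; the record fields and templates are invented placeholders, whereas beAWARE targets full natural language generation from ontological representations (note, for instance, that the event term itself is not lexicalized per language here).

```python
# Sketch: template-based multilingual report generation from an aggregated
# emergency record. Fields and templates are invented placeholders; real
# generation from ontological representations is considerably richer.
REPORT_TEMPLATES = {
    "en": "{n_calls} emergency calls and {n_posts} social media posts "
          "report a {event} near {location}.",
    "es": "{n_calls} llamadas de emergencia y {n_posts} publicaciones en "
          "redes sociales informan de un(a) {event} cerca de {location}.",
    "el": "{n_calls} κλήσεις έκτακτης ανάγκης και {n_posts} αναρτήσεις στα "
          "μέσα κοινωνικής δικτύωσης αναφέρουν {event} κοντά σε {location}.",
}

def generate_report(record: dict, lang: str = "en") -> str:
    """Render one aggregated emergency record in the requested language."""
    return REPORT_TEMPLATES[lang].format(**record)

incident = {  # aggregated from audio, video and social network analysis
    "n_calls": 12, "n_posts": 37, "event": "flood", "location": "Vicenza",
}
for lang in REPORT_TEMPLATES:
    print(generate_report(incident, lang))
```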