In existing approaches, event detection over social media channels has been performed via text-based input (for example, text-based tweets), using a support vector machine (SVM) classifier to detect a single event from a single input. However, such approaches do not include a mechanism for understanding relevant images in order to visually quantify event characteristics. For instance, in the case of a destructive event, quantifying event characteristics may include assessing damage to physical structures.
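The limitation described above can be illustrated with a minimal sketch of a text-only SVM event classifier. This is an assumed, simplified pipeline (TF-IDF features feeding a linear SVM via scikit-learn, with hypothetical toy tweets and labels), not the actual system of any existing approach; it shows how a single text input maps to a single predicted event, with no mechanism for image understanding.

```python
# Minimal sketch (assumed pipeline, hypothetical data): single-event
# detection from text-based tweets using an SVM classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy, hypothetical training tweets labeled by event type.
tweets = [
    "earthquake shook the city, buildings collapsed",
    "major flooding downtown, streets underwater",
    "tremors felt again, walls cracked near the epicenter",
    "river overflowed, homes flooded overnight",
]
labels = ["earthquake", "flood", "earthquake", "flood"]

# TF-IDF features feed a linear SVM; each input yields exactly one
# event label, mirroring the single-event-per-input limitation, and
# the learned vocabulary makes the model language-specific.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(tweets, labels)

print(model.predict(["tremors felt, walls cracked downtown"])[0])
```

Because the classifier operates only on word features, an attached image of a collapsed structure contributes nothing to the prediction, and no damage assessment is possible from such a pipeline.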
Accordingly, existing approaches carry only limited information and are language-specific. As such, a need exists to convert unstructured images into structured semantics, as trends in structured semantics over time can be used for trainable and extendable event detection.