Patented Signals analysis technology reads, assembles, and processes wordgrams of varying length within every textual element to "learn" statistically relevant topics within the text. Built on a mixture of recurrent neural networks (RNNs), LSTMs, semantic parsing, multi-model data connection, and graph models, Signals' analytics engine dynamically creates semantic associations across all textual elements, grouping related elements into categories, top trends, and geo clusters.
Autonomously learning the content of any textual data provides a fast, unbiased method for processing customer comments, product reviews, chat sessions, voice-to-transcript interactions, survey data, and other unstructured content.
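As an illustration only (not the actual Signals implementation), the core idea of extracting variable-length wordgrams and surfacing statistically frequent topics can be sketched in a few lines; the reviews, thresholds, and function names here are hypothetical:

```python
from collections import Counter

def extract_wordgrams(text, min_n=1, max_n=3):
    """Extract word n-grams ("wordgrams") of varying length from a text."""
    tokens = text.lower().split()
    grams = []
    for n in range(min_n, max_n + 1):
        for i in range(len(tokens) - n + 1):
            grams.append(" ".join(tokens[i:i + n]))
    return grams

def top_topics(texts, k=3):
    """Count wordgrams across all texts and surface the most frequent ones."""
    counts = Counter()
    for t in texts:
        counts.update(extract_wordgrams(t))
    return counts.most_common(k)

reviews = [
    "battery life is great",
    "battery life could be better",
    "great screen but poor battery life",
]
print(top_topics(reviews))  # "battery life" emerges as a recurring topic
```

A production engine would add statistical significance testing and semantic grouping on top of raw frequency, but the wordgram assembly step follows this shape.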
Signals solves the challenge of textual analytics and enables in-line structured data exploration, automated predictive results, and effortless collaboration within your analytics context. Specifically, the Signals intelligence core engine will automatically categorize your structured data based on known numerical, categorical, ordinal, temporal, user and geographical data types.
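The automatic categorization of structured data by type can be approximated with simple heuristics; the following sketch is hypothetical and covers only three of the data types named above (numerical, temporal, categorical):

```python
from datetime import datetime

def infer_column_type(values):
    """Heuristically classify a column of string values as numerical,
    temporal, or categorical. A toy stand-in for automatic data typing."""
    def is_number(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    def is_date(v):
        for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
            try:
                datetime.strptime(v, fmt)
                return True
            except ValueError:
                pass
        return False

    if all(is_number(v) for v in values):
        return "numerical"
    if all(is_date(v) for v in values):
        return "temporal"
    return "categorical"

print(infer_column_type(["3.2", "17", "-4"]))           # numerical
print(infer_column_type(["2021-05-01", "2021-06-12"]))  # temporal
print(infer_column_type(["red", "blue"]))               # categorical
```

A real engine would also distinguish ordinal, user, and geographical types, typically using value distributions rather than per-value parsing alone.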
Using automatic spatial-temporal algorithms such as kernel density estimation (KDE), graph models, convolutional neural networks (CNNs), and PageRank weighting, Signals performs multiple complex processing steps to identify key elements within any data set.
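Of the algorithms listed, PageRank weighting is the most compact to demonstrate. The sketch below is a generic iterative PageRank over a toy co-occurrence graph of textual elements, not Signals' proprietary variant; the graph and parameters are assumptions for illustration:

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Iterative PageRank over an adjacency dict {node: [linked nodes]}.
    Each node's weight reflects how often other weighted nodes point to it."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, out_links in graph.items():
            if not out_links:
                continue  # dangling nodes distribute no mass in this sketch
            share = damping * rank[node] / len(out_links)
            for target in out_links:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: elements "a" and "b" both reference "c", so "c" gains weight.
g = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(g)
print(max(ranks, key=ranks.get))  # "c" receives the highest weight
```

In a key-element detector, nodes would be phrases or documents and edges would encode co-occurrence or semantic similarity.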
From ad hoc jobs to enterprise data sets, it is fast and easy to select and dimension data, including the sentiment, geography, categories, and topics associated with unstructured text, across these structured dimensions for discrete analysis.
Using deep-learning algorithms, Signals enables predictive analytics on user behaviors, user modeling, event modeling, trend spotting, and topic summarization. These foundational predictive elements deliver the insights necessary to tune business practices and drive business value.
Natural Language Processing is a fundamental processing core within Signals. Supporting and performing NLP natively in 24 distinct languages, Signals eliminates the need for language translation and human tagging, automatically detecting the source language without user interaction.
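Automatic language detection is commonly built on per-language profiles (character n-grams or stopwords). The toy sketch below scores a text against tiny hand-picked stopword sets; the profiles and the three-language scope are assumptions, far simpler than a trained 24-language detector:

```python
# Tiny stopword profiles; a real detector would use trained n-gram models
# covering all supported languages.
PROFILES = {
    "en": {"the", "and", "is", "of", "to"},
    "es": {"el", "la", "de", "que", "y"},
    "fr": {"le", "la", "de", "et", "les"},
}

def detect_language(text):
    """Score each language by how many of its stopwords appear in the text."""
    words = set(text.lower().split())
    return max(PROFILES, key=lambda lang: len(PROFILES[lang] & words))

print(detect_language("the product is easy to use"))  # en
```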
Signals automatically performs complex parts of speech parsing, tagging, named entity extraction, lemmatization (cleaning), tokenization, knowledge-graph association, emoji identification, and other language nuances to navigate through the ambiguity and imprecision of written and spoken language. The end result is the ability to discern relevant content, emotion and even motion within unstructured textual elements.
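To make the pipeline stages concrete, here is a deliberately minimal sketch of tokenization, lemmatization, emoji identification, and a crude named-entity heuristic; the lemma table, regex ranges, and capitalization rule are illustrative assumptions, not how a production NLP stack works:

```python
import re

LEMMAS = {"running": "run", "ran": "run", "better": "good"}  # toy lemma table
EMOJI = re.compile("[\U0001F600-\U0001F64F]")  # emoticons Unicode block only

def analyze(text):
    """Minimal pipeline: tokenize, lemmatize, flag emojis and likely entities."""
    tokens = re.findall(r"\w+|[\U0001F600-\U0001F64F]", text)
    result = []
    for tok in tokens:
        result.append({
            "token": tok,
            "lemma": LEMMAS.get(tok.lower(), tok.lower()),
            "is_emoji": bool(EMOJI.fullmatch(tok)),
            "maybe_entity": tok[:1].isupper(),  # crude capitalization heuristic
        })
    return result

for t in analyze("Alice ran to Paris 😀"):
    print(t)
```

Real lemmatizers use morphological rules and dictionaries per language, and real NER uses statistical models rather than capitalization, but the per-token annotation structure is the same.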
Signals uses proprietary technology, including Hidden Markov Models, deep learning (RNNs), opinion analysis, and statistical analysis, to assess sentiment across all 24 supported languages. Sentiment is calculated on multiple levels, leveraging part-of-speech identification to build dynamic sentiment scores.
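The idea of part-of-speech-weighted sentiment can be sketched with a toy lexicon: each word carries a polarity, and its contribution is scaled by the importance of its part of speech. The lexicon, tags, and weights below are invented for illustration, not Signals' learned values:

```python
# Toy lexicon and POS weights; real engines learn these statistically.
LEXICON = {"great": 1.0, "love": 1.0, "poor": -1.0, "terrible": -1.0}
POS_WEIGHT = {"ADJ": 1.5, "VERB": 1.0, "NOUN": 0.5}
POS_TAGS = {"great": "ADJ", "poor": "ADJ", "terrible": "ADJ", "love": "VERB"}

def sentiment_score(text):
    """Sum lexicon polarities, each weighted by (toy) POS importance."""
    score = 0.0
    for word in text.lower().split():
        polarity = LEXICON.get(word, 0.0)
        weight = POS_WEIGHT.get(POS_TAGS.get(word, "NOUN"), 0.5)
        score += polarity * weight
    return score

print(sentiment_score("great battery but poor screen"))  # 0.0 (balanced)
print(sentiment_score("I love this great phone"))        # 2.5 (positive)
```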
Signals provides an intuitive visual feedback loop within the UI to set or tune sentiments specific to product, industry or speciality.
Signals identifies clusters of semantically similar textual elements and groups them into unique topic groups for each processed data set. Centered around these opinion-critical, unique categorical n-grams, the engine further identifies specific words, phrases, and language constructs that are associated with positive and negative meanings in the topical categories. In addition to words and phrases, a contextual analysis around the data-driven category is utilized to improve the accuracy of the sentiment and properly reflect its semantic differences within each category.
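Grouping semantically similar textual elements into topic groups can be approximated with bag-of-words cosine similarity and greedy clustering; the threshold, seed-based grouping, and example texts are assumptions for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_topics(texts, threshold=0.4):
    """Greedy clustering: attach each text to the first group whose seed
    member is similar enough, otherwise start a new topic group."""
    groups = []
    for text in texts:
        vec = Counter(text.lower().split())
        for group in groups:
            if cosine(vec, group["seed"]) >= threshold:
                group["members"].append(text)
                break
        else:
            groups.append({"seed": vec, "members": [text]})
    return [g["members"] for g in groups]

texts = [
    "battery life is short",
    "short battery life",
    "screen is bright",
]
print(group_topics(texts))  # two groups: battery-life vs. screen
```

A production system would cluster on learned semantic embeddings rather than raw word overlap, and would then attach per-category sentiment cues as the surrounding text describes.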
Signals provides street-level geo encoding with a 95%+ global accuracy rate.
Signals' robust geo-analytics processing engine automatically performs space-time kernel density estimation to generate a relative heat index for different sentiment categories across geo dimensions.
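Space-time kernel density estimation extends spatial KDE with a temporal kernel, so events that are close in both space and time contribute more to the local density. The minimal Gaussian-kernel sketch below uses invented coordinates and bandwidths, not Signals' actual estimator:

```python
import math

def st_kde(points, query, h_space=1.0, h_time=1.0):
    """Space-time kernel density at a query point: a Gaussian kernel over
    spatial distance and time difference, summed across observed events."""
    qx, qy, qt = query
    density = 0.0
    for x, y, t in points:
        d2 = ((x - qx) ** 2 + (y - qy) ** 2) / (2 * h_space ** 2)
        dt2 = (t - qt) ** 2 / (2 * h_time ** 2)
        density += math.exp(-(d2 + dt2))
    return density

# Hypothetical negative-sentiment events: two clustered near (0, 0) around
# time 0, one isolated outlier.
events = [(0.1, 0.0, 0.0), (-0.1, 0.1, 0.5), (5.0, 5.0, 3.0)]
near = st_kde(events, (0.0, 0.0, 0.0))
far = st_kde(events, (5.0, 5.0, 3.0))
print(near > far)  # True: the cluster produces the hotter index
```

Evaluating the density over a grid of geo cells and normalizing per sentiment category yields exactly the kind of relative heat index described above.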