Contexta360 Secures €1.0 Million Seed Round Financing Led by VentureBuilders Capital

Speech analytics and AI company to accelerate sales, marketing, geographic coverage and execution of its product roadmap

AMSTERDAM / LONDON, April 8, 2019 — Contexta360, a leading speech analytics and conversational computing company, today announced it has completed a €1.0 million seed round financing. VentureBuilders Capital led the investment round with participation from existing and new investors.

The investment will help the company expand its sales, marketing and geographic coverage, as well as broaden and accelerate product development. Arie Kuiper, Partner at VentureBuilders Capital, will act as strategic advisor to the Contexta360 Board of Directors.

VentureBuilders Capital’s latest fund invests in B2B software companies that redefine or create large new markets.

The seed round builds on an exceptional year which saw the appointment of key senior staff, a rapidly growing roster of clients in the financial services, government, telco, contact center and logistics marketplaces, as well as solid execution of continued product development.

Contexta360 appointed customer interaction software veteran and business leader Andrew White as CEO, who brings global experience in product management, marketing, operations, sales and customer success. White has previously lived and worked in the USA and Europe, and has managed large business units in APAC.

Arie Kuiper – Lead Partner at VentureBuilders Capital said: “Contexta360 finally allows contact centers to have the same level of insight and control of voice interactions as they do of digital interactions. We are incredibly impressed with the quality of their technology, the speed with which they keep developing this technology and their professional behavior towards customers. We believe Contexta360’s products have the potential to lead a shift in their industry, and we are excited to collaborate with and support them as they work to build a great company.”

The Contexta360 Platform

The Contexta360 Platform is available as a cloud service or on-premise, which proves very popular with its government, security and financial services clients. The platform delivers three layers of capability, namely:

  1. Speech-to-Text – Delivered across multiple languages, with very high quality and accuracy, and with the unique capability to learn industry lexicons and terms.
  2. Text to Understanding – Contexta360 has leading NLP capabilities and programming that enable the platform to turn text into understanding.
  3. Understanding to Insight and Action – Once there is understanding, we can leverage our AI, additional data sources and process rules to create actions that improve revenue, customer experience, scale and learning, and reduce risk and cost. (A sketch of this pipeline follows below.)
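To make the flow concrete, here is a minimal sketch of how the three layers might hang together as a pipeline. The function names and the toy keyword rules are purely illustrative assumptions, not the Contexta360 API.

```python
# A minimal, illustrative sketch of the three layers as a pipeline.
# All function names and rules are hypothetical, not a Contexta360 API.

def transcribe(audio: bytes, language: str = "en") -> str:
    """Layer 1: speech-to-text. A real system would call a tuned ASR engine
    that has learned the industry lexicon; here we return a canned transcript."""
    return "hello, I would like to close my account, I am very unhappy"

def extract_meaning(transcript: str) -> dict:
    """Layer 2: text to understanding. NLP turns raw text into structured signals."""
    negative_terms = ("unhappy", "complaint", "cancel", "close my account")
    found = [t for t in negative_terms if t in transcript]
    return {"topics": found, "sentiment": -0.6 if found else 0.2}

def decide_actions(understanding: dict, escalation_threshold: float = -0.5) -> list:
    """Layer 3: understanding to insight and action. Combine NLP output with
    simple process rules to produce concrete actions."""
    actions = []
    if understanding["sentiment"] <= escalation_threshold:
        actions.append("flag_retention_risk")
    if "close my account" in understanding["topics"]:
        actions.append("route_to_retention_team")
    return actions

transcript = transcribe(b"<audio bytes>")
print(decide_actions(extract_meaning(transcript)))
# -> ['flag_retention_risk', 'route_to_retention_team']
```

In production, each layer would of course be backed by trained models, richer data sources and real business rules rather than keyword lists.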

Our platform is delivered in the following modules:

Contexta360: Identity – The ability to highlight whether a caller’s identity is in question.

Contexta360: Assist – The ability to assist an agent in real time or summarize actions.

Contexta360: Analyzer – Gain insights from the content and context of calls; track talk time, conversation topic, category, sentiment, C-Sat and keywords, and spot questions.

Contexta360: Connect – Connect to CRM platforms such as Salesforce or Microsoft CRM; inject transcripts and add sentiment, detected actions, talk ratio and topics discussed to opportunity and account records.

Contexta360: Compliance – The ability to add one or many compliance questions that must be asked and answered, so that high-risk calls can be identified and highlighted to compliance teams (see the sketch after this list).

Contexta360: Quality Monitor – The ability to mine and report on the data by date, team, enterprise, function, context or term, gain insights and take action.

Contexta360: Omni Channel – The ability to integrate other communication data sources (chat, web, SMS, social channels) to gain a 360-degree context.
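By way of illustration, the core idea behind the Compliance module, checking that mandatory questions were actually asked, can be sketched very simply. The phrase list and literal string matching below are hypothetical simplifications, not the actual Contexta360 implementation.

```python
# Illustrative sketch of the compliance idea: flag calls where mandatory
# questions were never asked. Phrases and matching logic are assumptions.

REQUIRED_QUESTIONS = {
    "risk_warning": ["are you aware that your capital is at risk"],
    "identity_check": ["can you confirm your date of birth"],
}

def missing_compliance_questions(transcript: str) -> list:
    """Return the IDs of required questions that never appear in the transcript."""
    text = transcript.lower()
    return [
        question_id
        for question_id, phrasings in REQUIRED_QUESTIONS.items()
        if not any(phrase in text for phrase in phrasings)
    ]

# A call missing any required question can be routed to the compliance team.
if missing_compliance_questions("Hello, can you confirm your date of birth?"):
    print("High-risk call: flag for compliance review")
```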

“As an industry we’re just scratching the surface of how powerful speech analytics can be for all organisations. The voice channel remains the ‘prestigious’ channel, ideal for ‘moments that matter’ and ‘customers that matter’. However, organisations have very little intelligence on what is being said and how it is being said,” said Andrew White, CEO of Contexta360. “I am fortunate to be working with some of the best speech scientists, AI developers and NLP PhDs on the planet. This is what makes us unique. There are lots of companies that do speech-to-text, but that is just 5% of the journey. Real insight comes from understanding, a new level in speech quality and a flexible approach to customer engagement,” White concludes.

About Contexta360

Contexta360 is a high-growth company based in Amsterdam and London. We are a team of highly skilled software developers and computer scientists with a passion for artificial intelligence, speech-to-text and natural language understanding. We help enterprise organisations capture voice and video conversations across multiple languages, transcribing and analyzing them for compliance, sentiment, topic, context, effectiveness and customer experience. We build a 360-degree view of customer interaction by analyzing your conversations or transactional history from chat, email, social and CRM / ERP data files.

Omnichannel Reality

In this blog we will cover the real needs of modern omnichannel contact centers.

We discuss going beyond multi-channel routing and call options into omnichannel conversational analysis, real-time data strategies, post-interaction action summaries, and quality and performance capabilities.

This is part three of a five-part blog series looking at how AI, speech analytics and conversational computing are changing the processes, efficiency and CX / C-Sat metrics within many organizations around the world.

  • Part one covered “do you think you are digital?” This highlights the gaps in more than 50 per cent of customer interactions.
  • Part two covered the issues relating to whether humans have the technologies to support them in an increasingly challenging customer interaction function.
  • Part four covers how these speech technologies are helping to transform sales and service functions and leadership.
  • Part five concludes on the future vision for speech analytics and conversational computing.

In this blog there are two areas I would like to expand on: the human-to-human omnichannel dynamic, and the human-to-automated conversation service.

Let us start with H2H.

“Your next call will connect in three seconds”

Wow. The pressure on agents to listen, understand and digest in real time is becoming significant. We have given our customers options to get information, request services or make transactions across multiple channels. We all now openly use apps, IVRs, websites, forms and chat sessions, and send emails, and in most cases this is well received by the consumer, as it means we can do what we want without needing to talk in real time or depend on someone else’s availability. We are truly in the world of NOW.

Recent surveys have identified that, on average, the human customer interaction point now manages around 60 per cent of total channel interactions (the other 40 per cent having been automated / self-service enabled). This naturally differs from industry to industry, but in just about every industry, when the call matters, the need for excellence is amplified. Interactions that matter are critical moments that directly influence CX / C-Sat, customer retention, acquisition and revenue. They include situations relating to complaints, key client requests, big sales movements, complex transactions, compliance requirements and so on. When the call matters, real time is best, but the agent has a lot to process, and this impacts quality of service.

  • What did the customer say in the voice channel yesterday / last week?
  • What did the customer write in the chatbot earlier today?
  • What product updates have been released internally recently?
  • What is going on externally (live feed news / Bloomberg etc)?

And the list goes on.

Our key resources have to absorb not only the content of previous or recent interactions, but also the context, sentiment, importance and relevance across all channels – and all in the nanoseconds before connection. This is practically impossible for a human being.

Our teams need help. They need a mechanism to pre-brief them in the three seconds before the call: a pre-briefing that considers all historical communications and transactions across all channels. Not just that there was a call, email or chat session, but the context of the communication, the sentiment and perhaps the sales stage or service / support stage, based on language used in previous calls or written communications.
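To illustrate what such a pre-briefing might look like, here is a minimal sketch that condenses recent cross-channel interactions into a short agent-facing brief. The data shape, field names and thresholds are assumptions made purely for illustration.

```python
# Sketch of a pre-brief: collapse the customer's recent cross-channel history
# into a few lines the agent can absorb in the seconds before connection.

from dataclasses import dataclass

@dataclass
class Interaction:
    channel: str      # "voice", "chat", "email", ...
    summary: str      # one-line summary produced by earlier analysis
    sentiment: float  # -1.0 (negative) .. 1.0 (positive)

def build_pre_brief(history: list, max_items: int = 3) -> str:
    """Return a short agent-facing brief from the most recent interactions."""
    recent = history[-max_items:]
    avg = sum(i.sentiment for i in recent) / len(recent) if recent else 0.0
    mood = "negative" if avg < -0.2 else "positive" if avg > 0.2 else "neutral"
    lines = [f"Overall mood across recent contacts: {mood}"]
    lines += [f"- [{i.channel}] {i.summary}" for i in recent]
    return "\n".join(lines)

history = [
    Interaction("chat", "Asked about early mortgage repayment fees", -0.1),
    Interaction("voice", "Complained about an unexpected charge", -0.7),
]
print(build_pre_brief(history))
```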

And the H2D conundrum is probably even more critical as in many cases the interaction content is invisible. We may be getting transactional data, but not the context, content or sentiment. At best we record these transactions (voice response, chatbot or email) but few digitally digest the content.

Therefore, in parallel, there is also a need to monitor the automated channels for “moments” – these may be moments of frustration, potential complaint or, more positively, perhaps a moment of imminent sale. These are moments where AI technology can analyze the communication and conversation, detecting the trajectory of the call and relating this to the customer profile, product complexity, CX impact and revenue / cost implications. These are digital conversation moments that, ideally, would be escalated to a real-time channel, whether voice, video or chat, to be handled optimally. Let’s face it, we humans are pretty amazing. The computers are not taking over just yet.
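A simple sketch of that idea: watch the sentiment trajectory of an automated session and hand over to a human when it turns sharply negative. The thresholds and scoring below are illustrative assumptions, not product behaviour.

```python
# Sketch of "moment" detection on an automated channel: escalate a session
# when sentiment drops sharply or sits below a floor. Values are assumptions.

def should_escalate(sentiment_scores: list,
                    drop_threshold: float = 0.4,
                    floor: float = -0.3) -> bool:
    """Escalate when sentiment has fallen sharply or is below the floor."""
    if not sentiment_scores:
        return False
    latest = sentiment_scores[-1]
    peak = max(sentiment_scores)
    return latest <= floor or (peak - latest) >= drop_threshold

# A chatbot session that started neutral and turned frustrated:
session = [0.2, 0.1, -0.2, -0.5]
if should_escalate(session):
    print("Hand this conversation to a human agent (voice, video or chat)")
```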

See you at the next blog: “Transform your sales hit rate, quality of execution and highlight the forecast ‘gamers’”.

Can humans keep up?

This blog looks at the thorny topic of whether the humble human can keep up with the demands of customers or the citizens we serve.

This is part two of a five-part blog post looking at how AI, speech analytics and conversational computing are changing the processes, efficiency and CX / C-Sat metrics within many organizations around the world.

  • Part one covered “do you think you are digital?” This highlights the gaps in more than 50 per cent of customer interactions.
  • Part three covers the “real needs” within omnichannel and real-time data strategies.
  • Part four covers how these speech technologies are helping to transform sales and service functions and leadership.
  • Part five concludes on the future vision for speech analytics and conversational computing.

 

I want to make clear that this chapter is not about the Borg taking over and replacing the human race. Naturally, there is a place for complete machine automation. We use these services every day. Love them or hate them, they play a critical part in many organizations’ strategy to reduce or automate “the mundane”, so our super intelligent applications (the human) can deal with the more valuable, important, or complex interaction. However, two issues have emerged in the past few years in this regard:

  1. Have our (no-human required) automation services gone too far? Many customers would say yes. So, we have to consider new demarcation points of what gets 100 per cent automated.
  2. If you are lucky enough to get to a human, the next question arises. Is that human able to do a good job and cope with the sheer complexity of the products, historical communications and external market information at the precise time of the call? Whether the function is to serve or answer a question, or perhaps to sell a product or service, the data related to that particular call, customer and moment is now a lot to handle.

Let us consider why humans get involved (or perhaps should get involved) in an interaction with a customer, prospect or citizen. Typically there are four dynamics. Is this a premier client / prospect? Is there an issue or escalation? Is there an imminent sales opportunity? Or, is this a complex product or service? There are many more permutations that involve issues such as compliance, data privacy, location and so on, but this is the high-level framework we typically live within.

Much is being done to profile our customers, prospects and citizens to better understand their respective socio-economic status, historical purchase patterns, click stream data and more, but this is typically used to route the customer to the correct channel of service:  IVR, web self-service, form, app or perhaps a human.

 

Low-importance customer x low-value inquiry x low complexity = automate

High-value customer x high-value inquiry x high complexity = human

…and all the permutations in between these two extremes.
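As a toy rendering of these permutations, the heuristic can be expressed as a simple score. The numbers and cut-off below are invented purely for illustration; real routing logic is far richer.

```python
# Toy version of the heuristic above: score importance, inquiry value and
# complexity (1 = low, 3 = high) and multiply them to pick a channel.

def route(importance: int, value: int, complexity: int) -> str:
    score = importance * value * complexity
    return "human" if score >= 12 else "automate"

print(route(importance=1, value=1, complexity=1))  # low x low x low   -> automate
print(route(importance=3, value=3, complexity=3))  # high x high x high -> human
```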

 

Organizations also regularly consider all the scenarios, irrespective of class and value, where there is an escalation. This is a “moment that matters”, as escalations typically directly impact commercial success, emotion, security or potential brand damage. This is where any interaction should potentially move to a real-time voice or video channel.

These moments that matter are critical. However, the agent taking the call typically has three to five seconds’ notice before the caller leaves the queue and the call is taken. Yes, in many advanced contact centers, the agent may have some rudimentary data to hand: who they are connecting with, their profile, all recent interactions and so on. But let’s face it, keeping up with everything internally and everything that is going on externally (for example, the FX markets made a big shift 30 minutes ago, product marketing sent an update yesterday about an offer on a particular product line, or support sent a notice that there is a known defect and how to resolve it) is a daunting task.

Yes, knowledge management tools have helped agents with process and structure that can help with solving complex interactions, but a knowledge base still requires the agent to capture the situation, understand it and then enter it via the keyboard before any meaningful intelligence or action can be given. Also, many of the calls that have happened in the past do not capture the detail, the sentiment or the context – agents are just too busy or lazy, there may be no field to capture the detail, or they are new members of staff or simply ineffective.

The emergence of “tuned” speech processing that aligns to your specific industry terms, products and phrases with incredibly high accuracy, together with the addition of AI and deep-learning capabilities, is now giving agents the edge – not replacing them but making them superhuman. These are technologies that “listen in” to the call conversation, detect questions, sentiment and keywords, draw on internal data or notices such as product notifications or sales offers, or external data from sources such as Bloomberg or other news-wire services, and summarise all the previous voice interactions.

The businesses or organizations that deploy this capability are capturing massive insight. The digital picture is significantly richer. Agent performance is dramatically improved, both in the quality and richness of what is captured within the system of record, and through real-time or near real-time services that help the agent detect a question and reference it against information they may not have digested or received, or simply assist with a summary of the call actions once the call has ended and the wrap-up timer starts.

The contact center in many organizations has lived through its phase as a low-value, transactional service point. The services it used to deliver are now widely automated. The next-phase contact center is a strategically important, high-value knowledge and service system. This is a premium, non-scripted role that requires knowledgeable professionals who can react to customer needs and orchestrate a range of AI and information systems to deliver excellent service or sales strategy. The contact center is no longer a humble job; it is the center of the business, as all calls that reach this location seriously matter.

See you at the next blog.

Do you think your organization is digital now?

This is part one of a five-part blog post looking at how AI, speech analytics and conversational computing are changing the processes, efficiency and CX / C-Sat metrics within many organizations around the world.

 

  • Part two covers the topic of whether humans can keep up in a big data, real-time digital world.
  • Part three covers the “real needs” within omnichannel and real-time data strategies.
  • Part four covers how these speech technologies are helping to transform sales and service functions and leadership.
  • Part five concludes on the future vision for speech analytics and conversational computing.

 

So, it is clear that many organizations think they have finally made the big leap and now consider themselves to be digital companies.

Of course, we have made massive strides in just the past few years. I love my mobile banking app, the various collaboration and communication services I use are truly incredible, and the data that is now available to me at work via so many sources is mind-boggling.

That is me as a consumer, an employee, a leader and a citizen. I like it. Much of the time I don’t want to talk to anyone; I just want to get information, make a transaction or change a setting. If I can do this simply and easily myself, I’m all in!

The upside is that I am empowered; the service, change or transaction is controlled and secure; and I can get reports on what I have done, changed, spent and so on, as it is a digital record. It is black and white – there is little or no scope for grey – it is digital.

The companies or organizations that provide these services – whether through a mobile app, an AI chatbot, website, web form or an IVR service – glean masses of information and intelligence. This can be about location, product, $$$, click-stream, time on site, purchase trends and so on. Fundamentally, I am digitally connected to the company.

Now, let us consider that not all interactions are via apps, websites or IVR. On average, 60 per cent of consumer interaction is via voice and video. Naturally, this varies from industry to industry or service to service and there are also different mixes for inbound calls versus outbound calls. However, any way that you cut it, it is still a lot of calls.

This is amplified “when the call matters”. This could be a service emergency, an escalation, a complaint, or a need to buy coupled with a need for specialist advice. In these scenarios, call rates jump to more than 83 per cent.

Now this is where the “digitalness” of an organization starts to waver. If I feel the need to call my bank, it is not because I want to transfer some money, pay a bill or buy some stock. I can do all of that. No, it is because I need an advisory service: I need some advice; something is not working; or I am angry. All of these situations are “moments that matter”. Whether my interactions are positive, neutral or negative moments, I believe I am a tier-one customer to my bank. If I am calling, I expect them to react and, on the whole, they do.

In many cases banks use speech analytics, AI and conversational computing to complete the “digital signature” of customer interactions. But many sales, service and support organizations are not doing this yet. This is where the competitive battleground is emerging and research organizations such as Gartner, Forrester and 451 Research are forecasting a significant growth in AI technologies that assist the human, rather than replace the human. Banks have had a strong track record of leading the field in technology advancement. They lead (with their big budgets) and the rest follow once pricing normalizes.

 

So, what have they been doing that other industries should learn from?

Well, many financial services organizations (for compliance, efficiency, sales or CX reasons) recognized that call center agents are only human. The first break in the digital signature of a customer is that the agent may not have understood the request or question. Or worse, they did not want to understand it because it was too complex to deal with. The second break point is that the agent then has to enter the request via the keyboard. This is prone to errors, and susceptible to fatigue and “gaming the system”. Ultimately, the top 20 per cent of agents do a great job. They are skilled, diligent and professional. The middle 60 per cent are OK, and the bottom 20 per cent are rogue.

The overall digital signature from this data input point is very poor, and this is what many organizations, especially in financial services, have been using AI and speech technology to resolve. This gives them a significantly better digital picture, as calls can be transcribed into the system. All questions or actions can be captured via speech-to-text and analyzed for understanding, sentiment, technique, keywords and topics. In short, it offers digital insight into the conversation, not via the agent’s keyboard, but via the conversation itself – no gaps, no gaming, just great data.

The next chapters investigate what we can do with this data, now that we have it.

See you at the next blog.