Security holes: 3 fraud scenarios via chatbots

01 June 2017

Chatbots have rapidly turned into an efficient instrument for customer support, data collection, expert evaluation, and more. Nearly 34,000 chatbots were created within six months of the launch of Facebook's chatbot development platform. Today they are deployed by virtually every company seeking a competitive advantage.

There are two main types of chatbots:

  • partially automated – can answer only the most common questions and hand the conversation over to a human operator for complex requests;
  • AI-powered – fully automated bots capable of recognizing human speech and imitating a human manner of communication.


By using bots correctly, companies increase customer loyalty to the brand and gather detailed information about customers' preferences. However, the more people trust bots, the greater the risk becomes, since virtual assistants have turned into attractive targets for cyberattacks.

3 fraud scenarios via chatbots


1. Personal data accumulation

A chatbot is a treasure trove of personal information that can be used for any purpose. Moreover, data sent via messengers is linked to a real account, which makes it especially valuable to hackers.

2. Bot hacking

A hacked bot can request personal data (a password, financial information) while claiming to be a tech support representative of a particular company. Once the necessary data is obtained, all the money can be instantly withdrawn from the victim's account.


3. Launching fraudulent bots

Fraud does not always require hacking an existing bot. Attackers can create new bots and present them as the customer support channel of a particular service. While a sender's email address can be checked fairly easily, this is much harder with chatbots: not everyone knows how to verify that they are communicating with a real brand rather than a fraudster.
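One basic defense against impostor bots is comparing the bot's handle against an allowlist of official handles that the brand publishes on its website. The sketch below illustrates the idea in Python; the handles are hypothetical examples, not real accounts, and a real check would also rely on platform features such as verified-account badges.

```python
# Hypothetical allowlist of a brand's official bot handles,
# as would be published on the company's own website.
OFFICIAL_BOT_HANDLES = {"acmebank_support_bot", "acmebank_help_bot"}

def is_official_bot(handle: str) -> bool:
    """Return True only for an exact, case-insensitive match
    of a known official handle (leading '@' is ignored)."""
    return handle.strip().lstrip("@").lower() in OFFICIAL_BOT_HANDLES

# A look-alike handle with a substituted character fails the check.
print(is_official_bot("@AcmeBank_Support_Bot"))   # official handle
print(is_official_bot("@acmebank_supp0rt_bot"))   # impostor with a zero
```

Exact matching matters here: fraudsters typically register near-identical handles (swapped letters, added digits), so fuzzy or partial matching would defeat the purpose of the check.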


Companies using chatbots have to implement reliable data security protocols and inform users about the specifics of chatting with virtual assistants – in particular, that legitimate support staff will never ask for account passwords.

In general, developers will have to address these security issues in the near future. Otherwise, chatbots will no longer be an efficient communication channel that clients can trust.

