AI, ML Reliability and Security: BlenderBot & Other Cases

By Yinka Okeowo | August 24, 2022 | Security

Since its launch in early August 2022, BlenderBot, an AI-driven research project by Meta, has been hitting the headlines.

BlenderBot is a conversational bot, and its statements about people, companies, and politics can be unexpected and sometimes radical.

Data bias is one of the key challenges in machine learning, and any organisation using machine learning in its business needs to address and resolve it quickly and appropriately.

Other, similar projects have previously faced the same problem Meta now faces with BlenderBot: Microsoft's Twitter chatbot Tay, for example, ended up making racially defamatory statements.

This behaviour reflects the nature of generative machine learning models trained on texts and images from the Internet.

To make their outputs convincing, they use huge sets of raw data, but it is hard to stop such models from picking up biases if they are trained on the web.

While these and other similar projects are largely underpinned by research and science-based goals, some organisations already use language models in practical areas such as customer support, translation, marketing copy, and text proofreading.

To make these models less biased, developers can curate the datasets used for training. However, this is very difficult in the case of web-scale datasets.

To prevent embarrassing errors, one approach is to filter the training data for bias, for example by using particular words or phrases to identify and remove the offending documents so the model never learns from them.
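
As a rough illustration, the sketch below drops any training document that matches a blocklist of flagged terms, assuming Python. The blocklist, the sample corpus and the filter_corpus helper are all hypothetical; a real pipeline would rely on far richer criteria than simple keyword matching.

# A minimal sketch of keyword-based dataset filtering, as described above.
# The blocklist and sample documents are illustrative assumptions only.
import re

BLOCKLIST = ["slur_example", "offensive phrase"]  # hypothetical flagged terms

# One case-insensitive pattern; \b keeps matches on word boundaries.
pattern = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in BLOCKLIST) + r")\b",
    re.IGNORECASE,
)

def filter_corpus(documents):
    """Drop every document that contains a blocklisted term."""
    return [doc for doc in documents if not pattern.search(doc)]

corpus = [
    "A harmless sentence about machine learning.",
    "A sentence containing an offensive phrase to be removed.",
]
print(filter_corpus(corpus))  # only the first document survives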

Another approach is to filter the outputs: if the model generates questionable text, it is caught before it reaches users.
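
The sketch below shows what such an output filter might look like, again assuming Python. The generate_reply callable stands in for the real language model, and the term-based check is a deliberately crude placeholder for what would in practice be a trained moderation classifier.

# A minimal sketch of output-side filtering: a reply is checked before it
# reaches the user, and a safe fallback is returned if the check fails.
FALLBACK = "Sorry, I can't answer that."
UNSAFE_TERMS = {"questionable_term_a", "questionable_term_b"}  # hypothetical

def is_safe(text):
    """Crude check: reject replies containing any unsafe term."""
    lowered = text.lower()
    return not any(term in lowered for term in UNSAFE_TERMS)

def respond(user_message, generate_reply):
    reply = generate_reply(user_message)  # the model produces a candidate
    return reply if is_safe(reply) else FALLBACK  # filter before delivery

# Usage with a dummy generator standing in for the real model:
print(respond("hello", lambda msg: "A polite, harmless answer."))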

Looking more broadly, protection mechanisms are necessary for any ML model, and not only against bias. If developers use open data to train the model, attackers can exploit this with a technique called "data poisoning", adding specially crafted malformed data to the dataset. As a result, the model will fail to identify some events, or mistake them for others, and make the wrong decisions.
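
The toy example below illustrates the mechanism, assuming scikit-learn is available. A spam filter is trained twice, once on clean data and once on the same data flooded with mislabelled copies of a spam phrase; the poisoned model stops flagging that phrase. All of the data is synthetic.

# A toy illustration of data poisoning on a text classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_texts = ["win a free prize now", "claim your free prize",
               "meeting moved to monday", "please review the report"]
clean_labels = ["spam", "spam", "ham", "ham"]

# The attacker floods the open dataset with the spam phrase labelled as ham.
poison_texts = ["win a free prize now"] * 20
poison_labels = ["ham"] * 20

def train(texts, labels):
    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)
    return vectorizer, model

for name, (texts, labels) in {
    "clean": (clean_texts, clean_labels),
    "poisoned": (clean_texts + poison_texts, clean_labels + poison_labels),
}.items():
    vectorizer, model = train(texts, labels)
    verdict = model.predict(vectorizer.transform(["win a free prize now"]))[0]
    print(f"{name} model: 'win a free prize now' -> {verdict}")
# The clean model flags the phrase as spam; the poisoned model calls it ham.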

“Although in reality such threats remain rare at this stage, as they require a lot of effort and expertise from attackers, organisations still need to follow protective practices. This will also help minimise errors in the process of training models,” comments Vladislav Tushkanov, Lead Data Scientist at Kaspersky. “Firstly, organisations need to know what data is being used for training and where it comes from. Secondly, the use of diverse data makes poisoning more difficult. Finally, it is important to thoroughly test the model before rolling it out into combat mode and constantly monitor its performance.”
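
The final point about constant monitoring could be implemented in many ways; one minimal, hypothetical sketch is a rolling check that compares the model's live positive-verdict rate against a historical baseline and raises an alert on a large deviation. The window size, threshold and alerting hook below are assumptions, not anything Kaspersky describes.

# A minimal sketch of production monitoring via rate-drift alerts.
from collections import deque

class PredictionMonitor:
    def __init__(self, baseline_rate, window=1000, tolerance=0.10):
        self.baseline_rate = baseline_rate  # e.g. the historical spam rate
        self.recent = deque(maxlen=window)  # rolling window of 0/1 verdicts
        self.tolerance = tolerance          # maximum acceptable deviation

    def record(self, is_positive):
        self.recent.append(1 if is_positive else 0)
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if abs(rate - self.baseline_rate) > self.tolerance:
                self.alert(rate)

    def alert(self, rate):
        # In production this would page an operator or open an incident.
        print(f"ALERT: positive rate {rate:.2%} vs baseline "
              f"{self.baseline_rate:.2%}")

monitor = PredictionMonitor(baseline_rate=0.05, window=100)
for verdict in [True] * 30 + [False] * 70:  # simulated stream of verdicts
    monitor.record(verdict)  # fires an alert once the window fills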

Organisations can also refer to MITRE ATLAS, a dedicated knowledge base that guides businesses and experts through threats to machine learning systems. ATLAS also provides a matrix of tactics and techniques used in attacks on ML.

At Kaspersky, we conducted specific tests on our anti-spam and malware detection systems, imitating cyberattacks to reveal potential vulnerabilities, understand the possible damage, and learn how to mitigate the risk of such attacks.

Machine learning is widely used in Kaspersky products and services, from threat detection and alert analysis in the Kaspersky SOC to anomaly detection in production-process protection.
