AI Security: A Huge Concern

Category: Blog, Security  |  Friday, December 22nd, 2017

Worried about AI? You aren’t alone.

A new survey by Webroot shows that 86% of security professionals are concerned that AI and machine learning (ML) technology could be used against them. Their concern is well founded: it is already happening, most visibly in the form of fake celebrity videos of an inappropriate nature.

Seventy-five percent of cybersecurity professionals in the US believe that, within the next three years, their company will not be able to safeguard digital assets without AI, and 99 percent believe AI could improve their organization's cybersecurity. At the same time, the technology is just as available to attackers. AI is a double-edged sword.

Those surveyed noted key uses for AI, including time-critical threat detection tasks such as identifying threats that would otherwise have been missed and reducing false-positive rates.
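To make that trade-off concrete, here is a minimal sketch of the kind of balance the respondents describe: flagging genuinely unusual activity while suppressing false positives. The data and threshold below are invented for illustration; real systems learn these baselines from far richer telemetry.

```python
import statistics

# Hypothetical hourly login-failure counts for one account (invented data).
baseline = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3]

def is_anomalous(count, history, z_threshold=3.0):
    """Flag a count only when it sits far outside the historical mean.

    A high z-score threshold trades a little recall for far fewer
    false positives -- the balance the survey respondents care about.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero spread
    return (count - mean) / stdev > z_threshold

print(is_anomalous(4, baseline))   # ordinary fluctuation -> False
print(is_anomalous(40, baseline))  # sudden burst worth escalating -> True
```

In practice the "history" would be a learned model of normal behavior per user, host, or network segment, which is exactly where ML earns its keep.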

“There is no doubt about AI being the future of security as the sheer volume of threats is becoming very difficult to track by humans alone,” says Hal Lonas, chief technology officer at Webroot.

AI changes the technology landscape

This is the first time in history that AI has approached the level science fiction predicted for decades, and some of the smartest people in the world are working on ways to tap its immense power.

And some bad guys are using that same power to make fake celebrity videos designed to lure (or phish) a user into opening an infected attachment.

Using a face-swap algorithm of his own creation, built on widely available libraries like TensorFlow and Keras, Reddit user "Deepfakes" combined easily accessible source material with open-source code that anyone with a working knowledge of machine learning could use to create serviceable fakes.

“Deepfakes” has produced videos or GIFs of Gal Gadot (now deleted), Maisie Williams, Taylor Swift, Aubrey Plaza, Emma Watson, and Scarlett Johansson, each with varying levels of success. None will fool a discerning viewer, but all are close enough to hint at a terrifying future.

After training the algorithm — mostly with YouTube clips and results from Google Images — the AI goes to work arranging the pieces on the fly to create a convincing video with the preferred likeness. It could be a celebrity, a co-worker, or an ex-girlfriend. AI researcher Alex Champandard shared with Motherboard that any decent consumer-grade graphics card could produce these effects in hours. That’s terrifying!
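The face-swap idea is commonly described as a shared encoder (which learns facial structure common to both people) paired with one decoder per identity; swapping means encoding person A's frame and decoding it with person B's decoder. The toy below is a linear stand-in for that architecture, with invented 16-number "faces" in place of images and an SVD in place of a learned deep encoder; real systems train convolutional autoencoders on thousands of frames.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented stand-ins for aligned face crops: each "face" is a 16-number
# vector drawn from a low-dimensional shared "facial structure" space,
# with a small per-identity styling transform.
n, dim, structure = 200, 16, 4
shared_basis = rng.normal(size=(structure, dim))
style_a = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))
style_b = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))
faces_a = rng.normal(size=(n, structure)) @ shared_basis @ style_a
faces_b = rng.normal(size=(n, structure)) @ shared_basis @ style_b

# Shared "encoder": a basis for the combined face space (SVD stands in
# for the learned convolutional encoder).
_, _, Vt = np.linalg.svd(np.vstack([faces_a, faces_b]), full_matrices=False)
E = Vt[: 2 * structure].T  # encode with: face @ E

# One "decoder" per identity: least-squares map from the shared latent
# code back to that person's pixels.
Da, *_ = np.linalg.lstsq(faces_a @ E, faces_a, rcond=None)
Db, *_ = np.linalg.lstsq(faces_b @ E, faces_b, rcond=None)

# The swap: encode person A's expression, decode with person B's decoder.
swapped = (faces_a @ E) @ Db
recon_error = float(np.mean(((faces_a @ E) @ Da - faces_a) ** 2))
print("reconstruction error:", recon_error)
```

The point of the sketch is the division of labor: the encoder captures what the faces have in common (pose, expression), while each decoder renders one specific identity, which is what makes the swap possible.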

Here’s how it plays out…

Your user gets a spear-phishing email based on their social media “likes and shares,” inviting them to see a “private” celebrity video featuring their favorite movie star in a compromising scenario. Play this forward, and users will soon be able to order fake celebrity videos featuring any two (or more) celebrities of their choosing and have them delivered within 24 hours for 20 bucks.

A high volume of these video downloads will come with some extra spice: additional malware, such as Trojans and keyloggers, that gives the bad guys full access. Never has it been more important to educate your staff with security awareness training that sends them frequent simulated tests via phishing emails, phone calls, and text messages to their smartphones.
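A simulated phishing test of the kind described above can be as simple as an email carrying a per-recipient tracking token, so the training platform knows exactly who clicked. The sender, domain, and token scheme below are all invented for illustration; this is a sketch of the mechanism, not any particular vendor's implementation.

```python
import uuid
from email.message import EmailMessage

def build_simulated_phish(recipient: str) -> EmailMessage:
    """Build a harmless training email with a unique tracking token,
    so a click can be attributed to a specific recipient."""
    token = uuid.uuid4().hex  # one token per recipient, per campaign
    msg = EmailMessage()
    msg["From"] = "Exclusive Videos <promo@example-training.test>"  # invented
    msg["To"] = recipient
    msg["Subject"] = "Your private celebrity video is ready"
    # The link points at the training platform, which logs the token and
    # shows a "you've been phished -- here's what to look for" lesson page.
    msg.set_content(
        "Watch now: https://awareness.example-training.test/lure?t=" + token
    )
    return msg

msg = build_simulated_phish("employee@example.com")
print(msg["Subject"])
```

The lure mirrors the real attack described above, which is the whole point of simulated testing: staff practice spotting the exact bait the bad guys are using.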

If you need help with security training give EnhancedTECH a call at 714-970-9330 or contact us at [email protected] for a complimentary cybersecurity consultation.


Source image: https://www.pexels.com/photo/woman-writing-in-white-board-1350615/
