Artificial Intelligence: Beneficial or Dangerous?

Category: Blog, Security  |  Wednesday, March 28th, 2018

The ethical implications of the rise of artificial intelligence are sparking some great conversations.

Kevin Townsend recently penned an intriguing article about AI in SecurityWeek, analyzing the current AI climate and the expected near future, based on an important recent scientific paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

The report was written by 26 authors from 14 institutions spanning academia, civil society, and industry. It stemmed from a two-day workshop held in Oxford, UK, in February 2017.

Artificial intelligence (AI) is the use of computers to perform the analytical functions normally available only to humans – but at machine speed. ‘Machine speed’ is described by Corvil’s David Murray as, “millions of instructions and calculations across multiple software programs, in 20 microseconds or even faster.” AI simply makes the unrealistic real.

The problem discussed in the paper is that this capability has no ethical bias: it can be used as easily for malicious purposes as for beneficial ones. AI is largely dual-purpose, and the basic threat is that zero-day malware will appear more frequently and be targeted more precisely, while existing defenses are neutralized – all because of AI systems in the hands of malicious actors.

In short, AI could be used to make fake news more common and more realistic; or make targeted spear-phishing more compelling at the scale of current mass phishing through the misuse or abuse of identity. This will affect both business cybersecurity (business email compromise, BEC, could become even more effective than it already is), and national security.

It is possible, however, that the whole threat of unbridled artificial intelligence in the cyber world is being over-hyped.

F-Secure’s Patel comments, “Social engineering and disinformation campaigns will become easier with the ability to generate ‘fake’ content (text, voice, and video). There are plenty of people on the Internet who can very quickly figure out whether an image has been photoshopped, and I’d expect that, for now, it might be fairly easy to determine whether something was automatically generated or altered by a machine learning algorithm.

“In the future,” he added, “if it becomes impossible to determine if a piece of content was generated by ML, researchers will need to look at metadata surrounding the content to determine its validity (for instance, timestamps, IP addresses, etc.).”
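Patel’s suggestion – falling back on metadata such as timestamps when the content itself can no longer be trusted – can be illustrated with a toy consistency check. This is only a hedged sketch; the function name and the timestamps are hypothetical, and real provenance analysis would draw on many more signals (IP addresses, signing keys, distribution patterns):

```python
from datetime import datetime, timezone

def flag_suspicious_timestamps(claimed_published: datetime,
                               metadata_created: datetime) -> bool:
    """Flag content whose embedded creation timestamp postdates the time
    it was supposedly published -- one simple metadata consistency check
    of the kind Patel describes. Returns True if it merits a closer look."""
    return metadata_created > claimed_published

# Hypothetical example: an article "published" March 1st whose embedded
# metadata says it was generated four days later.
published = datetime(2018, 3, 1, 12, 0, tzinfo=timezone.utc)
created = datetime(2018, 3, 5, 9, 30, tzinfo=timezone.utc)
print(flag_suspicious_timestamps(published, created))  # True -> worth investigating
```

A single inconsistent timestamp proves nothing on its own, but aggregating several cheap checks like this is one plausible way researchers could triage machine-generated content at scale.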

In short, Patel’s suggestion is that AI will simply scale, in quality and quantity, the same threats that are faced today. But AI can also scale and improve the current defenses against those threats.

If you need assistance with a managed security solution for your business, give EnhancedTECH a call at 714-970-9330 or contact us at sales@enhancedtech.com.

Samantha Keller

Director of Marketing and PR at EnhancedTECH
Samantha Keller (AKA Sam) is a published author, tech blogger, event planner, and mother of three fabulous humans. Samantha has worked in the IT field for the last fifteen years, intertwining a freelance writing career with technology sales, events, and marketing. She began working for EnhancedTECH ten years ago after earning her Bachelor’s degree from UCLA and attending Fuller Seminary. She is a lover of kickboxing, extra-strong coffee, and Wolfpack football. Her regular blog columns feature upcoming tech trends, cybersecurity tips, and practical solutions geared toward enhancing your business through technology.
