Slides: Click Here
Artificial Intelligence (AI) technology as we know it is neither inherently good nor bad. Yet it seems you can’t go anywhere these days without hearing about how every company is harnessing the power of AI, which in practice is often machine learning (ML).
As ML becomes a more ubiquitous problem-solving tool, its abuse in the form of adversarial ML is inevitable, whether as algorithms created for malicious purposes or as neutral algorithms turned to bad ends.
This presentation discusses and provides examples of those differences. It also shows how someone could, in theory, take existing neutral ML tools and hack them together with other tools to build their own adversarial ML solution, using neural networks to break CAPTCHAs and steal Bitcoin, all for less than \$100 and with no data science background.
I will also show and explain how attackers are breaking CAPTCHAs today.
What I’m looking to do here is raise awareness, not fear, of this likely outcome.
The malicious application of this technology is a lot closer than we think.