We’ve reached an inflection point with AI. It’s in the news we consume and the general cultural conversation, but there are also more tangible markers we can point to.
In just two years, Amazon’s Echo – and with it Alexa – has reached more than six per cent of American homes. By 2020, people are projected to be making 200 billion voice searches every month. Or, if you prefer the market as a bellwether, $5.4 billion (£4.2bn) was invested in AI startups in 2016 – a sum that doesn’t include what internal R&D groups poured into AI over the same period, spending that would likely dwarf that figure.
While the fact that, in general, people no longer associate AI with Skynet bodes well for the technology’s role in our day-to-day lives, that fear of AI insurrection has been replaced with new worries. A 2016 Cornell study found that while conversations about AI since 2009 have been “consistently optimistic”, worries over loss of control, ethics and automation have grown over the same period.
What’s more, there are negative AI conversations happening that most people don’t associate with AI. Post-election, conversations about filter bubbles have been everywhere – the filter, in this case, being an algorithm learning and evolving based on advances in deep learning. Any conversation we have about big data is quietly a conversation about AI. A self-driving car is powered by AI. Google has recently applied deep learning AI to Google Translate in search of “one shot” translations – essentially making translations between language pairs it has never explicitly been trained on. Netflix’s algorithm tries to give you what you want, and Spotify uses AI to create your Discover Weekly playlist. It’s even part of the way we have sex – dating apps like Tinder have algorithms that learn what users want, and change the way they present your profile. There is no escaping it.
So as a human, a person caught in this paradigm shift, how are you supposed to navigate this system? How do you ensure that AI is working for you, and not the other way around?
Track the AI already in your life
Take a look under your own digital hood. What companies are you giving data to, and for what reasons? Ultimately, most companies are using AI to improve the product or service they’re delivering you.
However, if your fear is AI taking over without you realising it, you need to start paying attention now. We all know we’re giving Google and Facebook our data – those are the obvious ones – but (almost) every digital service is collecting usage data. If “data is the new oil”, you need to be aware of all the places your crude is being shipped. Conduct an audit of the digital services you’re using on a daily basis. If you don’t want to share with them, simple steps like clearing your browsing history and internet cache can help a little. But under most terms of service, at this point, continued usage amounts to tacit acceptance.
Break it!
When you know that an AI is being used, you can break it. This is the big solve for our fears about filter bubbles. Filter bubbles are created when algorithms think they know us. The issue, though, is that they’re self-perpetuating – an AI feeds you what it thinks you want, you like it, and it homes in closer and closer on a single thing, be that music, articles and so on.
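That self-perpetuating loop can be sketched in a few lines. This is a toy model, not any real service’s recommender: the genre names, weights and update rule are all invented for illustration. The feed serves whatever genre currently ranks highest, each click boosts that genre’s weight, and the ranking locks in.

```python
# Toy filter-bubble loop: serve the top-weighted genre, let the click
# reinforce it, repeat. A tiny 10% head start ends up dominating the feed.
weights = {"pop": 1.0, "trap": 1.1, "jazz": 1.0, "rock": 1.0}

for _ in range(50):
    served = max(weights, key=weights.get)  # the algorithm serves its best guess
    weights[served] += 0.5                  # your click confirms that guess

share = weights["trap"] / sum(weights.values())
print(f"'trap' now fills {share:.0%} of the feed")  # ~90%, from a 10% head start
```

The point of the sketch is that nothing about the user changed – the loop amplified a small initial signal until the feed was effectively one genre.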
If you don’t want to be in a bubble, you can pop it by feeding the AI a bunch of information that isn’t accurate. If you want to learn more about trap music, play Gucci Mane albums so your Discover Weekly reflects this. If you want Tinder to stop feeding you clones, start swiping left on the feature all your dates seem to have. If you want Facebook serving you alternative viewpoints, find a group of respectable people on the other side and like their pages.
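Continuing the toy model from above (again, invented numbers and update rule, not any real service’s algorithm), popping the bubble looks like this: start from a feed locked onto one genre, then deliberately engage with the genres the algorithm has stopped serving.

```python
# Toy model of popping a filter bubble: the feed starts locked onto "trap",
# and deliberate clicks on the neglected genres rebalance the weights.
weights = {"pop": 1.0, "trap": 26.0, "jazz": 1.0, "rock": 1.0}

def share(genre):
    return weights[genre] / sum(weights.values())

print(f"before: trap fills {share('trap'):.0%}")  # ~90%

for genre in ("pop", "jazz", "rock"):  # seek out what you're not being shown
    weights[genre] += 12.0             # ...and engage with it repeatedly

print(f"after: trap fills {share('trap'):.0%}")   # back down to a mixed feed
```

The asymmetry is worth noting: the bubble formed passively, one click at a time, but escaping it takes deliberate, repeated effort against the algorithm’s best guess.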