Alex Fink is a Tech Executive and the Founder and CEO of the Otherweb, a Public Benefit Corporation that uses AI to help people read news and commentary, listen to podcasts and search the web without paywalls, clickbait, ads, autoplaying videos, affiliate links, or any other ‘junk’ content. Otherweb is available as an app (iOS and Android), a website, a newsletter, or a standalone browser extension. Prior to Otherweb, Alex was Founder and CEO of Panopteo and Co-founder and Chairman of Swarmer.
Can you provide an overview of Otherweb and its mission to create a junk-free news space?
Otherweb is a public benefit corporation, created to help improve the quality of information people consume.
Our main product is a news app that uses AI to filter out junk and to give users unlimited customization: control over every quality threshold and every sorting mechanism the app uses.
In other words, while the rest of the world creates black-box algorithms to maximize user engagement, we want to give users as much value in as little time as possible, and we make everything customizable. We even made our AI models and datasets source-available so people can see exactly what we’re doing and how we evaluate content.
What inspired you to focus on combating misinformation and fake news using AI?
I was born in the Soviet Union and saw what happens to a society when everyone consumes propaganda, and no one has any idea what’s going on in the world. I have vivid memories of my parents waking up at 4am, locking themselves in the closet, and turning on the radio to listen to Voice of America. It was illegal of course, which is why they did it at night and made sure the neighbors couldn’t hear – but it gave us access to real information. As a result, we left 3 months before it all came tumbling down and war broke out in my hometown.
I actually remember seeing photos of tanks on the street I grew up on and thinking “so this is what real information is worth”.
I want more people to have access to real, high-quality information.
How significant is the threat of deepfakes, particularly in the context of influencing elections? Can you share specific examples of how deepfakes have been used to spread misinformation and the impact they had?
In the short term, it’s a very serious threat.
Voters don’t realize that video and audio recordings can no longer be trusted. They still treat video as evidence that something happened; two years ago that was a reasonable assumption, but it’s obviously no longer the case.
This year in Pakistan, supporters of Imran Khan received calls, seemingly from Khan himself, asking them to boycott the election. The calls were fake, of course, but many people believed them.
Voters in Italy saw one of their female politicians appear in a pornographic video. It was fake, of course, but by the time the fakery was uncovered, the damage was done.
Even here in Arizona, we saw a newsletter advertise itself by showing an endorsement video starring Kari Lake. She never endorsed it, of course, but the newsletter still got thousands of subscribers.
So come November, I think it’s almost inevitable that we’ll see at least one fake bombshell. It’s very likely to drop right before the election and be exposed as fake only after it, when the damage has already been done.
How effective are current AI tools in identifying deepfakes, and what improvements do you foresee in the future?
In the past, the best way to identify fake images was to zoom in and look for the characteristic mistakes (known as “artifacts”) that forgers tended to leave behind: incorrect lighting, missing shadows, uneven edges on certain objects, over-compression around them, and so on.
The problem with GAN-based editing (the technique behind “deepfakes”) is that none of these common artifacts survive. One AI model (the generator) edits the image while a second model (the discriminator) hunts for artifacts and points them out; the cycle repeats until the discriminator can no longer find any.
As a result, there is generally no way to identify a well-made deepfake video by looking at the content itself.
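To make that adversarial cycle concrete, here is a minimal sketch of a generator-versus-discriminator training loop in PyTorch. The architectures, sizes, and data below are invented toy placeholders, not any real deepfake pipeline:

```python
# Minimal generator-vs-discriminator loop (toy setup, not a real deepfake
# system): the discriminator learns to spot fakes, and the generator learns
# to remove whatever cues give it away.
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 64, 784, 32  # e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # one logit: "is this image real?"
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real_images = torch.randn(batch, image_dim)  # stand-in for a real batch

for step in range(1000):
    # The "artifact hunter": learn to separate real images from fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (bce(discriminator(real_images), torch.ones(batch, 1)) +
              bce(discriminator(fakes), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # The "editor": learn to produce images the hunter calls real,
    # i.e. erase whatever remaining artifacts it keys on.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))),
                 torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The discriminator’s eventual failure is exactly the point: the fakes that survive this loop are the ones carrying no artifacts left to detect.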
We have to change our mindset and start assuming that content is only real if we can trace its chain of custody back to the source. Think of it like fingerprints: seeing fingerprints on the murder weapon is not enough. You need to know who found the weapon, who brought it back to the storage room, and so on; you have to be able to trace every time it changed hands and make sure it wasn’t tampered with.
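As an illustration of that chain-of-custody idea, here is a toy ledger in Python: every handoff appends a record whose hash covers both the content and the previous record, so tampering anywhere in the chain breaks every later link. The field names and handlers are invented for the sketch:

```python
# Toy chain-of-custody ledger (illustrative only; field names are invented).
import hashlib
import json

def link_hash(record: dict) -> str:
    """Stable hash of one custody record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, content: bytes, handler: str) -> None:
    """Record a handoff: who held the content, and what came before."""
    chain.append({
        "handler": handler,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev": link_hash(chain[-1]) if chain else None,
    })

def verify_chain(chain: list, content: bytes) -> bool:
    """Check every link: content unchanged, and each record chained to the last."""
    expected = hashlib.sha256(content).hexdigest()
    for i, record in enumerate(chain):
        prev_ok = record["prev"] == (link_hash(chain[i - 1]) if i else None)
        if not prev_ok or record["content_sha256"] != expected:
            return False
    return True

chain: list = []
video = b"raw camera bytes"
append_record(chain, video, "camera")
append_record(chain, video, "newsroom-ingest")
print(verify_chain(chain, video))                   # True
print(verify_chain(chain, b"edited camera bytes"))  # False: custody broken
```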
What measures can governments and tech companies take to prevent the spread of misinformation during critical times such as elections?
The best antidote to misinformation is time. If you see a story that changes the picture, don’t rush to publish it; take a day or two to verify that it’s actually true.
Unfortunately, this approach collides with the media’s business model, which rewards clicks even if the material turns out to be false.
How does Otherweb leverage AI to ensure the authenticity and accuracy of the news it aggregates?
We’ve found a strong correlation between correctness and form. People who want to tell the truth tend to use language that emphasizes restraint and humility, whereas people who disregard the truth try to grab as much attention as possible.
Otherweb’s biggest focus is not fact-checking. It’s form-checking. We select articles that avoid attention-grabbing language, provide external references for every claim, state things as they are, and don’t use persuasion techniques.
This method is not perfect, of course, and in theory a bad actor could write a falsehood in the exact style our models reward. But in practice, it just doesn’t happen: people who want to tell lies also want a lot of attention, and that attention-seeking is precisely what we’ve taught our models to detect and filter out.
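Otherweb’s actual models and datasets are source-available; the toy heuristic below is not them, just a sketch of the general idea that style can be scored independently of the claims being made. The cue lists are invented:

```python
# Toy "form-checking" heuristic (illustrative only, not Otherweb's models):
# score text on stylistic cues rather than on the facts it asserts.
import re

ATTENTION_CUES = [r"!{2,}", r"\bshocking\b", r"\byou won'?t believe\b",
                  r"\bbreaking\b", r"\bdestroys?\b", r"\bslams?\b"]
RESTRAINT_CUES = [r"\baccording to\b", r"\bsuggests?\b", r"\bestimates?\b",
                  r"\bapproximately\b", r"\bcited\b"]

def count(patterns: list, text: str) -> int:
    return sum(len(re.findall(p, text, re.IGNORECASE)) for p in patterns)

def form_score(text: str) -> int:
    """Positive = restrained language, negative = attention-seeking language."""
    return count(RESTRAINT_CUES, text) - count(ATTENTION_CUES, text)

print(form_score("SHOCKING!! You won't believe what he said next"))        # -3
print(form_score("According to the report, losses are approximately 3%"))  # +2
```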
With the increasing difficulty in discerning real from fake images, how can platforms like Otherweb help restore user trust in digital content?
The best way to help people consume better content is to sample from all sides, pick the best of each, and exercise a lot of restraint. Most media are rushing to publish unverified information these days. Our ability to cross-reference information from hundreds of sources and focus on the best items allows us to protect our users from most forms of misinformation.
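A toy sketch of that cross-referencing step follows; the similarity measure, thresholds, and outlet names are invented, and a production system would use far stronger story matching:

```python
# Toy cross-referencing: keep a story only when several independent outlets
# report it. Similarity here is naive token overlap (illustrative only).
def tokens(headline: str) -> set:
    return {w.lower().strip(".,!?\"'") for w in headline.split() if len(w) > 3}

def similar(a: str, b: str, threshold: float = 0.3) -> bool:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / max(len(ta | tb), 1) >= threshold

def corroborated(headlines: list, min_sources: int = 3):
    """headlines: (outlet, headline) pairs. Yield items enough outlets confirm."""
    for outlet, head in headlines:
        sources = {o for o, h in headlines if similar(head, h)}
        if len(sources) >= min_sources:
            yield outlet, head

feed = [
    ("outlet-a", "Central bank raises interest rates by a quarter point"),
    ("outlet-b", "Interest rates raised a quarter point by central bank"),
    ("outlet-c", "Central bank raises rates"),
    ("outlet-d", "You won't believe this one weird rate trick"),
]
# Prints the three corroborated rate stories; the uncorroborated
# clickbait item is dropped.
print(list(corroborated(feed)))
```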
What role does metadata, like C2PA standards, play in verifying the authenticity of images and videos?
It’s the only viable solution. C2PA may or may not be the right standard, but it’s clear that the only way to validate that the video you’re watching reflects something that actually happened is to (a) ensure the camera that captured it was only capturing, not editing, and (b) ensure that no one edited the video after it left the camera. The best way to do both is to focus on metadata.
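C2PA itself specifies manifests, certificate chains, and signature formats; the sketch below is a generic stand-in using Ed25519 signatures from Python’s `cryptography` package, just to show the two checks described above, capture-time signing and downstream verification:

```python
# Generic capture-signing sketch (not the C2PA format itself): the camera
# signs a hash of the raw bytes at capture, and anyone downstream can check
# the file is bit-for-bit what the camera produced.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

camera_key = ed25519.Ed25519PrivateKey.generate()  # provisioned in the camera
public_key = camera_key.public_key()               # published by the maker

def capture(raw_bytes: bytes):
    """Camera firmware: emit the video plus a signature over its hash."""
    digest = hashlib.sha256(raw_bytes).digest()
    return raw_bytes, camera_key.sign(digest)

def verify(video: bytes, signature: bytes) -> bool:
    """Downstream viewer: does this file match what the camera signed?"""
    try:
        public_key.verify(signature, hashlib.sha256(video).digest())
        return True
    except InvalidSignature:
        return False

video, sig = capture(b"raw sensor data")
print(verify(video, sig))            # True: untouched since capture
print(verify(video + b"edit", sig))  # False: edited after leaving the camera
```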
What future developments do you anticipate in the fight against misinformation and deepfakes?
I think that, within 2-3 years, people will adapt to the new reality and change their mindset. Before the 19th century, the best form of proof was testimony from eyewitnesses. Deepfakes are likely to cause us to return to these tried-and-true standards.
With misinformation more broadly, I believe it’s necessary to take a more nuanced view and separate disinformation (i.e. false information that is intentionally created to mislead) from junk (i.e. information that is created to be monetized, regardless of its truthfulness).
The antidote to junk is a filtering mechanism that makes junk less likely to proliferate. It would change the incentive structure that makes junk spread like wildfire. Disinformation will still exist, just as it has always existed. We’ve been able to cope with it throughout the 20th century, and we’ll be able to cope with it in the 21st.
It’s the deluge of junk we have to worry about, because that’s the part we are ill-equipped to handle right now. That’s the main problem humanity needs to address.
Once we change the incentives, the signal-to-noise ratio of the internet will improve for everyone.
Thank you for the great interview. Readers who wish to learn more should visit the Otherweb website, or follow the company on X or LinkedIn.