The European Union's Digital Services Act took effect on Aug. 25. This raises questions about digital rights in the U.S. and about who is most at risk of privacy threats.
“Its laws are focused on individual user protection, and on preventing harmful content and misinformation from spreading across the web,” according to an Aug. 25 Quartz article.
Among the generations most immersed in this technology are the “digital natives,” or those who have grown up with the internet as an always-present companion. While this familiarity might suggest an understanding of technology, there is a concerning trend among these digital natives: the unquestioning acceptance of new technologies and the free sharing of personal information.
As digital natives, USF students must be cautious with their use of technology. They must be prepared to ask questions and defend their digital liberties.
Students should take precautions such as avoiding posting full names or addresses on social media, adjusting privacy settings on apps and online platforms, keeping devices updated and using anti-malware software.
AI has the power to establish real and positive change. However, digital natives have blindly accepted and engaged with AI technologies, from using ChatGPT for school assignments to having conversations with Snapchat’s My AI. AI is slowly permeating the internet, and it has consequently entered the private lives of many.
Since the beginning of the internet, privacy has been a concern.
“Privacy is necessary for an open society in the electronic age. … We must defend our own privacy if we expect to have any. … For privacy to be widespread it must be part of a social contract,” Eric Hughes wrote in A Cypherpunk’s Manifesto.
The cypherpunks were a group that emerged in the internet’s early days. They were among the first advocates in debates over online privacy, promoting the safeguarding of user information, according to an April 2020 Medium article.
Their concerns about user privacy were entirely valid. In recent months, Snapchat’s My AI has posted on its own without user consent, according to an Aug. 16 CNN article. TikTok has also been found to access personal data even after the app is deleted, according to a March 16 ABC article.
Headlines like these should only reinforce those concerns for modern consumers.
Given these privacy concerns, terms and conditions pop-ups have become a common sight on the internet. Yet each younger generation reads the terms and conditions less than the last, according to McKinsey & Company’s September 2022 newsletter on data trust.
“If it’s a reputable source in my mind, I would likely skip it,” sophomore chemical engineering student Nick Deg said when asked if he reads terms and conditions in an Aug. 26 interview with The Oracle.
Deg is not alone. Only 1% of technology users were found to read the terms and conditions, according to a January 2020 study conducted by ProPrivacy.
“But part of having the personalized experience we like means giving up access to data — which might be used in ways we don’t want, and those ways might not be as safe as we think,” the McKinsey newsletter read.
Technology will only become more complex and invasive in the lives of its consumers. In fact, these kinds of invasive technologies already exist.
“Now, consumer neurotech devices are being sold worldwide to enable us to track our own brain activity,” said Nita Farahany in her April TED talk on mental privacy and cognitive liberty.
“As companies from Meta to Microsoft, Snap and even Apple begin to embed brain sensors in our everyday devices like earbuds, headphones, headbands, watches and even wearable tattoos, we’re reaching an inflection point in brain transparency.”
Farahany went on to describe the “extraordinary risks” that come with brain-sensing neurotechnology, often known as brain-computer interfaces (BCIs). She also said that making our brains transparent to BCIs allows the collected data to be accessed not by researchers or scientists directly, but by the creators of the technology.
This data is extremely private, and the companies that receive user data often sell it for profit, according to a March 6 article by Complete Background Screening.
“Brain data in many instances will be more sensitive than the personal data of the past because it reflects our feelings, our mental states, our emotions, our preferences, our desires, even our very thoughts,” said Farahany.
This information in the hands of power-hungry or ill-intentioned companies is dangerous. The era of BCIs is quickly approaching, and digital natives must be prepared.
In a rapidly evolving technological landscape, university students hold a responsibility to embrace digital safety. By practicing vigilant online habits, protecting personal information and simply asking questions about the platforms they engage with, USF students can navigate new technology with awareness and security.
The New York Times Privacy Project lists several ways to protect digital privacy, such as using password managers, enabling two-factor authentication, installing browser extensions and running antivirus software.
USF students must establish habits of safe and secure technology use immediately. Otherwise, they put their minds at risk in a future of intrusive technology.