Augmented reality and privacy, the challenges of the metaverse

Latin America. Augmented reality and the metaverse offer great opportunities, but they also pose serious challenges in terms of privacy and security.

Ruth Almaraz Palmero, a professor at OBS Business School, an institution belonging to Planeta Formación y Universidades, says that the use of technologies such as artificial intelligence (AI) in these environments exposes users to constant risks, such as mass surveillance, fraud and data manipulation.

Augmented reality has become a new door for interaction, commerce and entertainment that combines physical, tangible elements with virtual ones. "It is the technology that allows additional information to be added to an image of the real world when it is viewed through a device. The device layers extra information on top of what the real image already offers, producing a transformed reality," says Almaraz.

Almaraz explains that "in the metaverse, the user experiences events in the virtual world as if they were real and faces all kinds of risks to their privacy: for example, mass surveillance, discrimination, loss of autonomy, fraud or identity theft. Personal data can be put at risk through the vulnerabilities of wearable devices." She adds: "The virtual environment itself could also pose real physical risks to users' health. From a privacy standpoint, the use of the metaverse can be very intrusive, as the set of data being processed increases exponentially."

AI and the challenge of safeguarding privacy
One of the biggest perceived dangers in augmented reality relates to privacy. According to the OBS expert, "artificial intelligence systems need a lot of data. In fact, the best online services and products couldn't function without the personal data used to train their AI algorithms. However, there are many ways to improve the acquisition and use of information, including the algorithms themselves and the overall management of data."

Augmented reality still generates doubt and mistrust because it is a relatively new domain and the mechanisms for transmitting authenticated content continue to evolve. Today, although 85% of people believe in the benefits of AI, 61% express distrust toward these systems, according to the study Privacy in a New World of AI.

Another key consequence of this lack of privacy and regulation is that sophisticated hackers could replace a user's augmented reality content with their own in order to deceive people or spread false information. Even when the source is authentic, spoofing, espionage, or data manipulation can render the content untrustworthy.

Landscape of AI-related cybercrimes in Colombia
In Colombia, cybercriminals have increasingly turned to advanced techniques, using AI to create more personalized and harder-to-detect attacks. They currently rely mostly on two types of online scams:

● Advanced Phishing: Using AI to personalize emails and messages that appear legitimate, increasing the chances of victims falling for the trap.

● Deepfakes: The creation of fake videos or audios that can be used to deceive people or companies.

Regulations and strategies in Colombia to govern AI
The Ministry of Information and Communications Technologies of Colombia (MinTIC) is implementing several strategies to minimize the impact of AI on citizens' privacy, focused on the following points:

● Adoption of UNESCO Recommendations: Colombia has been a pioneer in implementing an ethical framework for AI, adopting UNESCO recommendations that promote principles such as transparency, privacy, and accountability in the use of AI technologies. According to MinTIC, this framework seeks to ensure that automated decisions respect human rights and do not discriminate against citizens.
● Personal Data Protection Law: Colombian legislation establishes clear obligations for the processing of personal data, which includes the use of AI. This law requires data controllers to implement measures to prevent algorithmic bias and ensure respect for the rights of individuals.

Prevention Strategies
● Monitoring and Audits: Regular audits and security assessments are being conducted to ensure compliance with data protection regulations and prevent the misuse of AI-based technologies.

● Education and Public Awareness: The government is promoting educational campaigns to inform citizens about the risks associated with AI and digital scams, helping to create a culture of caution and knowledge on how to identify potential frauds.

● Development of Secure Platforms: Initiatives such as the Data Sandbox allow government and private entities to develop pilot projects that integrate AI and Big Data, ensuring that safe practices are implemented from the beginning.

Colombia is also seeking to align with international standards on data protection and privacy, including the application of regulations similar to the European GDPR. The OBS expert concludes: "Data protection in the metaverse and wearables must follow principles of minimization and transparency. In addition, in the future these rules must be aligned with new laws such as the Digital Services Act and the Data Act in the EU."
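To make the minimization principle concrete, the sketch below is a minimal, purely illustrative example in Python; the field names, the ALLOWED_FIELDS whitelist and the salt handling are assumptions for illustration and are not taken from any regulation, wearable SDK or vendor API. It shows one way a wearable data pipeline could keep only the fields needed for a stated purpose and replace direct identifiers with salted hashes before storage.

```python
import hashlib
from datetime import datetime, timezone

# Illustrative assumption: only these fields are needed for the stated purpose.
ALLOWED_FIELDS = {"heart_rate", "step_count", "timestamp"}


def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash so records cannot be
    linked back to a person without access to the salt."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()


def minimize(record: dict, salt: str) -> dict:
    """Keep only whitelisted fields and pseudonymize the user identifier
    (data-minimization principle)."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    minimized["subject"] = pseudonymize(record["user_id"], salt)
    return minimized


if __name__ == "__main__":
    raw = {
        "user_id": "maria.lopez@example.com",
        "heart_rate": 72,
        "step_count": 4312,
        # Sensitive location data not needed for the purpose -> dropped.
        "gps_trace": [(4.71, -74.07), (4.72, -74.08)],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    print(minimize(raw, salt="rotate-this-salt-regularly"))
```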

Richard Santa, RAVT
Email: [email protected]
Editor - Latin Press, Inc.
Journalist from the Universidad de Antioquia (2009), with experience covering technology and economics. Editor of the magazines TVyVideo+Radio and AVI Latinoamérica. Academic coordinator of IntegraTec and LiveTec.
