With the recent headlines about the "Godfather of AI" quitting Google, discussion of AI and privacy has become more prominent, though the topic has actually been gaining attention for years. With the increasing use of AI-powered technologies and the amount of personal data being collected, there are growing concerns about the potential privacy risks involved. Several regulatory and technical developments address these concerns:
- The European Union’s General Data Protection Regulation (GDPR): This regulation, which took effect in 2018, requires companies to have a lawful basis, such as explicit consent, before collecting individuals’ personal data and to be transparent about how that data is used. It also gives individuals the right to access their data and the right to erasure (the “right to be forgotten”).
- The California Consumer Privacy Act (CCPA): This law, which went into effect in 2020, gives California residents the right to know what personal information companies are collecting about them and to request that it be deleted. It also gives them the right to opt out of the sale of their personal information to third parties.
- The use of differential privacy: Differential privacy is a technique that can be used to protect individuals’ privacy when analyzing large datasets. It involves adding random noise to the data to obscure individual identities while still allowing for accurate aggregate analysis (a minimal sketch of the idea appears after this list).
- The development of privacy-preserving AI techniques: Researchers are also developing AI techniques that can perform analysis on sensitive data without compromising individuals’ privacy. For example, federated learning trains machine learning models on data that is distributed across multiple devices, so that the raw data never leaves the devices and remains private (see the federated-averaging sketch after this list).
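As a rough illustration of how differential privacy adds noise, the sketch below applies the Laplace mechanism to a simple count query. The dataset, the epsilon value, and the query are made up for illustration; this is a minimal sketch of the general technique, not any particular library’s implementation.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private answer by adding Laplace noise.

    sensitivity: how much one person's data can change the query result.
    epsilon: the privacy budget; smaller values mean more noise and more privacy.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical example: publish the number of users over 40 in a dataset.
ages = np.array([23, 45, 31, 67, 52, 29, 41, 38])   # made-up data
true_count = int((ages > 40).sum())                  # the sensitivity of a count is 1
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(true_count, round(private_count, 2))
```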
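The following is a minimal sketch of federated averaging (FedAvg), the basic algorithm behind federated learning: each device trains on its own data, and only the model weights are sent back and averaged. The model (a toy linear regression), the devices, and the data are all hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps for linear regression on one device's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_weights, device_datasets):
    """One FedAvg round: each device trains locally; only weights are shared and averaged."""
    local_weights = [local_update(global_weights, X, y) for X, y in device_datasets]
    sizes = np.array([len(y) for _, y in device_datasets], dtype=float)
    # Weight each device's update by the size of its local dataset.
    return np.average(local_weights, axis=0, weights=sizes)

# Made-up data for three devices; the raw data never leaves each device.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_average(w, devices)
print(w)  # should approach [2.0, -1.0]
```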
The issue of AI and privacy is complex and multifaceted, and there are ongoing discussions and debates about the best ways to address the privacy risks involved with AI-powered technologies.
Regulators and major technology companies have responded with several initiatives:
- Google’s Federated Learning of Cohorts (FLoC): Google has been testing a new approach to online advertising that groups users into interest cohorts based on their browsing behavior, with cohort assignment computed locally in the browser rather than by tracking individuals across the web. The approach aims to preserve users’ privacy while still allowing advertisers to reach their intended audience (a toy illustration of cohort assignment appears after this list).
- The White House’s Executive Order on AI: In February 2021, the White House issued an executive order that included provisions on AI and privacy. The order called for the development of a national AI strategy that includes considerations of privacy, civil liberties, and civil rights.
- The European Union’s proposed AI regulation: In April 2021, the European Commission proposed new regulations for AI that include provisions for protecting privacy and personal data. The regulations would require high-risk AI systems to undergo a conformity assessment before they can be placed on the market, and would also require transparency and human oversight for certain types of AI systems.
- Apple’s Private Relay: Apple has announced a feature called Private Relay, designed to keep users’ internet browsing activity from being tracked by advertisers and other third parties. It routes traffic through two separate internet relays so that no single party, including Apple, can see both a user’s IP address and the sites they visit.
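For the FLoC item above, Google’s experiments reportedly computed cohort IDs with a SimHash-style locality-sensitive hash over the domains a user had visited, so that users with similar browsing histories land in the same or nearby cohorts. The sketch below illustrates that general idea with a toy 8-bit hash and made-up browsing histories; it is not Google’s actual implementation.

```python
import hashlib

def simhash(domains, bits=8):
    """Toy SimHash: similar sets of visited domains produce similar cohort IDs."""
    counts = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    cohort = 0
    for i in range(bits):
        if counts[i] > 0:
            cohort |= 1 << i
    return cohort

# Made-up browsing histories; only the coarse cohort ID would be exposed to advertisers.
alice = ["news.example", "sports.example", "recipes.example"]
bob   = ["news.example", "sports.example", "weather.example"]
carol = ["cars.example", "finance.example", "crypto.example"]
print(simhash(alice), simhash(bob), simhash(carol))
```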
AI and privacy practices continue to evolve as both regulators and industry leaders work to balance the benefits of AI-powered technologies with the need to protect individuals’ privacy and personal data.