As long as people understand the potential risks, the answer to the second question is almost always, “Yes.” And with the emergence of artificial intelligence, the answer to that question will become increasingly more clear. The vast improvements in user experience far, far outweigh the potential security risks to private information.
I obviously disagree with this notion. But I defer to someone with far better knowledge than I, Edward Snowden:
Technologists have worked tirelessly to re-engineer the security of the devices that surround us, along with the language of the Internet itself. Secret flaws in critical infrastructure that had been exploited by governments to facilitate mass surveillance have been detected and corrected. Basic technical safeguards such as encryption — once considered esoteric and unnecessary — are now enabled by default in the products of pioneering companies like Apple, ensuring that even if your phone is stolen, your private life remains private. Such structural technological changes can ensure access to basic privacies beyond borders, insulating ordinary citizens from the arbitrary passage of anti-privacy laws, such as those now descending upon Russia.
Once the information is out there, it is out there. You can’t reel it back in. Google has it all and knows how to find it all, which means it can be exploited.
If Apple isn’t storing my data, to the point that it sometimes has a hard time making sure I get all my iMessages across devices¹, then that occasional inconvenience seems like a small price to pay to protect myself, in a small way, from the exposure I get with Google.
1. Which I have never had a problem with, personally.