Why Are You Getting This?
You signed up to receive The Privacy Professor Tips, or initiated contact to stay in touch with Rebecca and/or Privacy & Security Brainiacs (PSB) and consented to receive the Tips. Please read our Privacy Notice & Communication Info at the bottom of this message for more information. You may unsubscribe from there as well.
NOTE! For those of you who requested or agreed to receive the Tips over the past 12 months: We recently learned that you did NOT receive any of our Tips during that time due to an omission in our distribution list settings! If you are just now receiving the Tips after months without them, this is why. We apologize for any confusion this caused.
Who Can You Trust? Digital Trust and Digital Identities
“The trust of the innocent is the liar's most useful tool.” - Stephen King
In the U.S. we celebrate Thanksgiving on the 4th Thursday of every November with a cornucopia of food, fun, and often football. As you are feasting with your family and friends this year, take a look around. Do you really see everyone who is at the table with you? Could you be surreptitiously joined by one or more digital identities, hitchhiking on your visitors through fitness trackers, watches, jewelry, or even medical devices? New privacy and security risks are quickly emerging as expanding types of digital identity data are gathered into voluminous personal profiles that are constantly being added to. As our digital identities expand, they require continuous awareness of the associated new privacy risks. Everyone needs to be diligent and hyper-aware of such risks, and to not be so quick to trust digital communications.
Our featured question this month covers how to protect against these new and continuously increasing digital identity risks, and how to be appropriately cautious about digital trust. We’ve also included some great reader questions covering a healthcare and HIPAA topic, chatbots used in healthcare and other industries, non-computer-based social engineering, privacy notice updates, spotting AI deepfakes, and data used to build digital identities.
Please read to the end where we provide some information about our recent activities and online courses.
Since 2005, we have been freely distributing the Privacy Professor Tips monthly publication to help businesses and individuals of all ages identify risks in their daily lives and within their own businesses, and to help them prevent security incidents and privacy breaches and avoid becoming victims of scams.
By sharing this Tips issue with others in your organization, you are also supporting a wide range of regulatory and other legal compliance requirements for sending such awareness communications. Thank you for reading and sharing!
We would love to hear from you!
Did you find the tips we provided useful? Did you like this issue? Do you have questions for us to answer? Please let us know at info@privacysecuritybrainiacs.com.
November Tips of the Month
- News You May Have Missed
- Privacy & Security Questions and Tips
- Where to Find the Privacy Professor
We are finding more unique news stories to share with you than ever before. We also share news items that we believe are important for most folks to know, but that often do not get much mention in traditional news, or even in security and privacy news outlets.
Here are just a few of the 100+ news stories we discovered over the past month, providing a wide range of interesting security- and privacy-related news. These news items demonstrate that such risks exist basically anywhere in the world, and that everyone needs awareness.
This month we list 40 news items. We are grouping them into four broad categories: The first is for this month’s topic of “Digital Identities and Digital Trust,” followed by “Of broad interest,” “Privacy in businesses, governments, and other organizations,” and “Laws, legal issues, and lawsuits about or significantly involving privacy and/or security issues.” Many readers will find all the items of interest, but for those of you who prefer one or two specific categories, this will help you find your news items of interest more quickly. Within each category the items are in no particular order.
Do you have interesting, unusual, bizarre, or odd stories involving security and privacy? Some of the most interesting, bizarre, or odd stories are in local news! Do you have questions about any of the notes we included with the stories we listed this month? Let us know!
Specific to Digital Identities and Digital Trust
A lot has been in the news lately about digital identities, digital trust, and related issues. Here are 10 representative news reports from this year.
1. ‘mDL don’t phone home’: digital ID experts sound alarm over privacy capability. “Influential digital identity professionals and privacy groups are warning that mobile driver’s licenses can comply with the international standard and still represent a major surveillance risk, due to “phone home” capabilities that can be hidden from users.”
2. New Security Breach Threatens Crypto And Everyday Apps. “The npm breach is a stark reminder of how fragile digital trust can be. One phishing email led to billions of downloads of compromised code, which in turn opened the door to stolen funds and damaged businesses. For business leaders, the lesson is clear. The open source code that powers your apps and services is both a strength and a vulnerability.”
3. Italian digital identity provider suffers data breach, 5.5M customers affected. “The stolen data was published and advertised on a dark web forum, with the entire database on sale for a price. InfoCert claimed the leak came about via the systems of a third-party supplier, to which customers were registered, and that “illicit activity” had been committed against this supplier.”
4. Popular dating app exposes 72,000 identity documents in security breach. Verification system stored users' selfies and government IDs without proper security measures, highlighting privacy risks in mandatory age checks.
5. Identity-first security: The heartbeat of modern cybersecurity. By focusing on identity, we shift from protecting perimeters to securing everything beyond the office walls, no matter where people log in from. Identity-first is “a strategy that recognizes identity as the primary security perimeter. Instead of focusing solely on protecting network boundaries, this approach emphasizes verifying and securing individual identities—whether they belong to users, devices, or applications.”
6. Lessons from National Digital ID Systems for Privacy, Security, and Trust in the AI Age. “Since their emergence in the early 2000s, national digital identity systems have promised substantial value for both governments and the public. At their best, they enable secure access to government services (such as healthcare, voting, social welfare programs, and/or taxation), reduce administrative friction, and can be used to verify identities for financial transactions or commercial interactions (including banking or employment verification). In practice, however, not all national digital identity systems have delivered on that ideal. Some implementations have raised serious concerns about privacy, surveillance, exclusion, and data security. Others have faced backlash over centralized data collection, opaque governance, or mission creep into areas beyond their original scope.”
7. Identity is under siege as AI and cyber exploits evolve and outpace defenses. “Cybercriminals are no longer simply targeting systems; they are targeting trust. They understand that identity is the key to access, lateral movement, and long-term exploitation. Once stolen, credentials unlock sensitive data, allow persistence within networks, and enable follow-on attacks with minimal risk.”
8. Protecting Patient Data – Are Healthcare Providers Doing Enough in 2025? Digital identity management is key to blocking unauthorized access, simplifying user authentication, and ensuring systems are resilient even under attack.
[Image: “Woman answering hologram call on arm.” From RawPixel.com at Freepik.]
Of Broad Interest…
12. Des Moines police look for victims after secret camera found in porta-potty at festival. “Dozens of people were unknowingly recorded while entering and using the restroom at the Harvest & Handmade Fair on Oct. 4.” “The camera was installed for approximately six hours, starting just before 10 a.m. It was placed under the toilet seat, facing forward. Adults and children were recorded while the camera was in place.”
13. Kids at Waterloo region summer camps were unknowingly livestreamed, privacy investigation launched. The unauthorized livestream was available to subscribers.
14. The Surveillance Empire That Tracked World Leaders, a Vatican Enemy, and Maybe You. Inside the hidden world of First Wap, whose untraceable tech has targeted politicians, journalists, celebrities, and activists around the globe.
15. ChatGPT’s Horny Era Could Be Its Stickiest Yet. OpenAI will soon let adults create erotic content in ChatGPT. Experts say that could lead to “emotional commodification,” or horniness as a revenue stream.
16. DuckDuckGo now lets you hide AI-generated images in search results. “Privacy-focused browser DuckDuckGo is rolling out a new setting that lets users filter out AI images in search results. The company says it’s launching the feature in response to feedback from users who said AI images can get in the way of finding what they’re looking for.”
17. This ‘Privacy Browser’ Has Dangerous Hidden Features. The Universe Browser is believed to have been downloaded millions of times. But researchers say it behaves like malware and has links to Asia’s booming cybercrime and illegal gambling networks.
18. Birth certificate dispute keeps Arizona teen from boys' basketball team. A fixed clerical error on the 13-year-old’s birth certificate could force him to try out for the girls' team, despite proof he was born male. The district added that it "has informed the parents that documentation such as a chromosome analysis could be considered to help support or verify eligibility." The cost of genetic testing would be approximately $1,500, according to the family.
19. Phony AI-generated videos of Hurricane Melissa flood social media sites. “Although it’s common for hoax photos, videos and misinformation to surface during natural disasters, they’re usually debunked quickly. But videos generated by new artificial intelligence tools have taken the problem to a new level by making it easy to create and spread realistic clips.”
Privacy In Businesses, Governments, And Other Organizations…
21. UK Digital ID – Pros and Cons of BritCard. “Supporters argue it will help combat illegal immigration, reduce fraud, and modernize access to services. Opponents, however, see it as an unnecessary intrusion into private life, a dangerous expansion of government power, and a step towards a surveillance state.” “Consumers in the UK expressed particular concerns over the use of facial recognition in digital ID, coinciding with the Home Office’s rollout of new facial recognition vans for law enforcement.”
22. Majority of UK consumers don’t trust digital ID, research finds. A survey from Checkout.com found Brits are among the least trusting in the world of the technology. “Only 32% of Britons said they trusted the practice.”
23. The glaring security risks with AI browser agents. New AI-powered web browsers such as OpenAI’s ChatGPT Atlas and Perplexity’s Comet are trying to unseat Google Chrome as the front door to the internet for billions of users. But consumers may not be aware of the major risks to user privacy that come along with agentic browsing, a problem that the entire tech industry is trying to grapple with. AI browsers like Comet and ChatGPT Atlas ask for a significant level of access, including the ability to view and take action in a user’s email, calendar, and contact list.
24. Former General Manager for U.S. Defense Contractor Pleads Guilty to Selling Stolen Trade Secrets to Russian Broker. Peter Williams, a 39-year-old former L3Harris executive, pleaded guilty to selling eight zero-day exploits to a broker tied to the Russian government. Williams, head of Trenchant, which develops spyware, exploits, and zero-days for governments, sold the exploits to Russia for $1.3 million.
25. Age verification tools on adult websites bypassed in seconds. Using widely available technology, well-known ethical hackers Chris Kubecka and Paula Popovici quickly accessed numerous pornography sites without ever verifying their ages.
26. Rebuilding digital trust: How blockchain is making privacy a default (Opinion Piece). “Blockchain is not a magical solution. Like any technology, it can be poorly built, used carelessly or simply misunderstood. New threat actors and attack vectors will find emerging technologies like blockchain. So, privacy must be thoughtfully designed from the ground up. Especially in blockchain, where recorded data is permanent and can't be erased, tough questions must be asked, like how to protect people's privacy while still keeping systems accountable and how to ensure transparency doesn't come at the cost of personal freedom.”
27. Western executives who visit China are coming back terrified. Robotics has catapulted Beijing into a dominant position in many industries.
28. Toys "R" Us Canada Confirms Data Breach After Customer Records Surface on Dark Web. The attackers managed to copy certain records from the retailer's database containing personal information, which may include names, physical addresses, email addresses, and phone numbers.
29. Consumers embrace AI for research, but few have allowed it to make purchases. "Customers are worrying over payment security, privacy and potential mistakes with autonomous AI purchases."
30. Security News This Week: Apple and Google Pull ICE-Tracking Apps, Bowing to DOJ Pressure. Plus: China sentences scam bosses to death, Europe is ramping up its plans to build a “drone wall” to protect against Russian airspace violations, and more.
Laws, Legal Issues, And Lawsuits About Or Significantly Involving Privacy And/Or Security Issues…
31. Trump administration moves to overrule state laws protecting credit reports from medical debt. “More than a dozen states, like New York and Delaware, prohibit the reporting of medical debt on a consumer’s credit report. Medical debt is often the most disputed part of a consumer’s credit report, because insurance payments can take time, and oftentimes patients do not have the means to fully pay a medical bill if insurance is not covering a procedure that has already taken place.”
[Image from studiogstock on Freepik.]
32. What the US Supreme Court's decision upholding Texas law means for data privacy. The “ruling upheld a Texas law that requires websites with significant adult content to verify users' ages before granting access. While the decision aims to protect minors from explicit material, it raises profound questions about the privacy risks tied to collecting sensitive personal information on some of the internet's most intimate and adult-related sites.” The “ruling isn't just about adult websites and requesting identification to grant visitors' access. It's a wake-up call about the delicate balance between regulatory compliance and safeguarding user data in an era where breaches spread like wildfire and the dark web can have not only your ID but the explicit websites you visited. Will this also include search history, IP addresses, liked or disliked videos? If proper security controls are not put in place, this could get scary very fast.”
33. Google hit with $314 million US verdict in cellular data class action. “The plaintiffs filed the class action in state court in 2019 on behalf of an estimated 14 million Californians. They argued that Google collected information from idle phones running its Android operating system for company uses like targeted advertising, consuming Android users' cellular data at their expense.”
34. Judge advances data privacy class action against LinkedIn. The judge rejected LinkedIn's argument that no one would have been able to sift through the amount of data shared and so privacy wasn't violated. “Cole asserts that the combination of URL and Facebook ID data that LinkedIn Learning transmits via the Pixel permits the recipient of such data ‘to see who watched what video,’” Pitts wrote, adding later: “In sum, Cole has plausibly alleged that LinkedIn disclosed her personally identifiable information by transmitting the URLs of videos Cole watched on LinkedIn Learning ‘along with’ her Facebook ID, which together would ‘readily permit an ordinary person to identify [Cole]’s video-watching behavior.'"
35. Apple and Google challenged by parents’ rights coalition on youth privacy protections. The Digital Childhood Institute, which filed a complaint with the FTC, is part of a newer crop of online safety groups focused on shaping tech policy around conservative political belief.
36. New Zealand digital trust framework, associated rules went into effect July 24. The document covers accredited services for authentication and digital credentials. “The rules prioritize privacy and the minimization of risk to personal data such as biometrics.”
37. The Gambia strengthens digital trust with personal data protection & privacy bill. With the data protection and privacy bill now in place, the digital ID advisor suggests that the next logical steps for the country should be to ensure “implementation, awareness raising, capacity-building for oversight, enforcement, and ensuring that the new legal regime supports vibrant digital services while protecting citizens’ rights.”
38. Medicaid insurers could be cleared to text enrollees under Trump’s work requirements law. Under a law from the early 1990s, managed care plans were effectively banned from texting enrollees because Medicaid members are traditionally assigned a health plan by a state or county. Therefore, enrollees cannot opt in and implied consent is not present, unlike in other lines of business in the insurance market. Plans are also not allowed to text members, according to rules from the Federal Communications Commission.
39. Why China's police plan to build a male DNA bank has raised privacy fears. Police in Xilinhot, Inner Mongolia, sparked controversy after announcing plans to collect men's blood samples to update an identification database for ID cards and passports. “Chen Xuequan, a law professor at the University of International Business and Economics in Beijing, warned of possible misuse. “After the samples are collected, who is in charge of analyzing? After analysis, what will they do with it?” he asked, as quoted by the news report. “If they keep the samples, they can analyze anytime and even get more private information out of it.””
40. Disney and Universal Sue AI Company Midjourney for Copyright Infringement. In the complaint, the Hollywood giants allege Midjourney generates “endless” ripoffs of everything from Minions to Darth Vader.
Check out our Privacy & Security Brainiacs blog page for more unique security and privacy news items. Have you run across any surprising, odd, offbeat or bizarre security and/or privacy news? Please let us know! We may include it in an upcoming issue.
Privacy & Security Questions and Tips
Rebecca answers hot-topic questions from Tips readers
November 2025
We continue to receive a wide variety of questions about security and privacy: questions about current hot topics in society, and increasingly more about healthcare privacy and security. Thank you for sending them in! As noted above, our featured question this month covers how to protect against new and continuously increasing digital identity risks, and being cautious about digital trust. The other reader questions cover a healthcare and HIPAA topic, chatbots used in healthcare and other industries, non-computer-based social engineering, privacy notice updates, spotting AI deepfakes, and data used to build digital identities.
Are the answers interesting and/or useful to you? Please let us know! Keep your questions coming!
Q1: What proactive steps can companies take to ensure their marketing strategies respect privacy laws and avoid costly compliance missteps like the one you highlighted for the HIPAA non-compliance penalty?
A1:
Great question, Ilya! NOTE: This is a question I received in response to a recent post I made on LinkedIn about the recent HIPAA settlement with Cadia Healthcare Facilities for posting a patient’s PHI to its public-facing website without first obtaining a valid, written HIPAA authorization.
The key to preventing such non-compliance situations, the subsequent penalties, and long-term oversight under required corrective action plans (CAPs) is the following security and privacy protection and compliance trifecta:
1) Implemented, maintained, and consistently enforced policies and supporting procedures. Policies should explicitly cover marketing and sales activities. Procedures should exist specifically for those working in areas where marketing, sales, public relations, and other types of external communications are made.
2) Regular training and ongoing awareness communications. Workforce members in marketing, sales, and similar roles should not only receive the general security and privacy training; they should also receive targeted training that covers the security and privacy requirements for their specific types of job activities. They should also be sent frequent reminders and related real-life news, and be included in in-person awareness activities.
3) Regular risk management activities. These need to include speaking regularly with marketing and sales folks, along with business unit leaders: at least once a year, though I recommend quarterly or even monthly, depending upon your type of organization. Discussions should cover the types of actions allowed, and not allowed, under HIPAA and all other applicable legal requirements. For example, most of the thousands of marketing and sales folks I’ve spoken with over the past four decades didn’t realize that they must know, and comply with, the privacy notices posted on their own websites. This is a common area where privacy violations run rampant. Marketing and sales activities involving protected health information (PHI), and all other types of data that can be associated with an individual, also need to be included in formal risk assessments and risk management activities (e.g., online scans for PHI on the organization’s social media sites, business sites, etc.; a simple illustration of such a scan follows).
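To show what one of those risk management activities might look like in practice, here is a minimal, illustrative Python sketch of scanning a public web page for PHI-like patterns. The URL, the single regex pattern, and the function name are hypothetical assumptions used for illustration only; a real scanning program would cover many more data patterns and pages, and would feed findings into a formal review process.

import re
import urllib.request

# Illustrative pattern only: US Social Security numbers (XXX-XX-XXXX).
# A real scan would use a vetted, much broader set of PHI/PII patterns.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_page_for_phi(url: str) -> list[str]:
    """Download a public page and return any SSN-like strings found on it."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    return SSN_PATTERN.findall(html)

if __name__ == "__main__":
    # Hypothetical page; substitute your organization's own public pages.
    matches = scan_page_for_phi("https://example.com/news")
    if matches:
        print(f"Possible PHI found ({len(matches)} matches) - human review needed")
    else:
        print("No SSN-like patterns found on this page")

The design point is simply that such checks can be automated and run regularly, so marketing and web teams are alerted to possible exposures before regulators or attackers find them.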
(Somewhat) Quick Hits:
Here are five more questions, most of which we are answering at a comparatively high level. We provide more in-depth information and associated details about these topics in separate blog posts, videos on our YouTube channel, in infographics and e-books, LinkedIn posts to our business page, and within our online training and awareness courses.
Q2: What are the privacy and security risks with using AI chatbots for our hospital patient portal? Our Board of Directors is pressuring us to “jump on board the AI chatbot train!”
A2:
Before they do that, lower the crossing gates before a privacy train rolls over your patients’ privacy rights and crashes into your business with costly fines and long-term penalties!
Using AI chatbots within hospital patient portals poses significant privacy and security risks related to the handling of sensitive patient data, as well as significant potential for regulatory non-compliance, particularly with HIPAA in the US. Implementing chatbots must be done with careful planning and thoughtful consideration of many factors. Keep in mind that once a patient has authenticated and entered the patient portal, which is provided to support treatment, payment, and operations (TPO), generally all activities and communications are covered by HIPAA requirements.
Regarding privacy risks, here are a few:
- AI chatbots may collect, store, or process protected health information (PHI) in ways that violate HIPAA requirements. This is especially likely if the chatbot is connected to third-party contracted services that lack business associate agreements (BAAs) or that are not in compliance with HIPAA.
- Patients and clinicians may disclose more information than intended due to the conversational and “trust-building” nature of chatbots, increasing the risk of unauthorized data exposure.
- Chatbots may use conversational data to further train algorithms, potentially mixing health information with other user data and sharing or reusing sensitive details in future responses, thereby risking secondary or unauthorized use. Also, when a patient includes information about family members, friends, or others within the conversation, incorporating such data into the patient’s records could impact the integrity of those records.
- Something else rarely considered: hackers can use prompt injection attacks to trick chatbots into revealing confidential data, such as patient lists or medical records. This is a comparatively new form of social engineering that exploits the AI’s natural language interface.
Here are a few security risks:
- Chatbots are vulnerable to prompt injection, malware, and crafted-input attacks that bypass protections or exfiltrate patient information, threatening both privacy and clinical decision-making, which could literally harm patients.
- Lack of proper authentication or access controls may allow unauthorized users or attackers to impersonate patients or clinicians and access sensitive health data through the portal.
- AI-generated responses may introduce medical misinformation or errors (often referenced as hallucinations), leading to patient harm if not adequately reviewed by qualified medical professionals.
- Cleartext data transmission or insufficient endpoint security when integrating chatbots into hospital systems can expose PHI to interception, leaks, or ransomware attacks.
Generally, unless all data from the AI chatbot will remain within the healthcare provider’s control, within the organization’s own business ecosystem, and be used only for TPO, using AI chatbots is highly risky, and often violates HIPAA (and GDPR, and other international privacy laws) because of the vast amounts of data sharing and the use of the data for training AI. Hospitals face reputational risk, legal liabilities, and regulatory fines for failing to safeguard patient data, and for failing to support all privacy requirements, including many different patient access and correction rights, when adopting AI technologies in patient-facing settings such as a patient portal.
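To make a couple of these risks concrete, here is a minimal, illustrative Python sketch of one mitigation pattern: screening portal messages for prompt-injection phrasing and redacting PHI-like strings before anything is forwarded to an externally hosted chatbot. The regex patterns, phrase list, and function names are hypothetical assumptions, not a vetted filter; a production system would need far more robust PHI detection, plus a BAA with any third party involved.

import re

# Illustrative patterns only; a real deployment would use a vetted PHI
# detection service and a much broader ruleset.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security numbers
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # US phone numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email addresses
]

# Phrases commonly seen in prompt injection attempts (hypothetical list).
INJECTION_HINTS = ["ignore previous instructions", "reveal", "system prompt"]

def redact_phi(text: str) -> str:
    """Replace pattern matches with a placeholder before the text leaves
    the organization's control."""
    for pattern in PHI_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def looks_like_injection(text: str) -> bool:
    """Flag messages that resemble prompt injection for human review."""
    lowered = text.lower()
    return any(hint in lowered for hint in INJECTION_HINTS)

def prepare_for_external_chatbot(message: str) -> str | None:  # Python 3.10+
    """Return a sanitized message, or None if it should be held for review."""
    if looks_like_injection(message):
        return None  # route to a human instead of the chatbot
    return redact_phi(message)

if __name__ == "__main__":
    print(prepare_for_external_chatbot(
        "My SSN is 123-45-6789 and my email is pat@example.com."
    ))

The key design choice here is that the sanitizing step runs inside the provider’s own environment, so the raw patient message never leaves the organization’s control before screening.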
Q3: What is a real-life everyday type of non-computer-based social engineering?
A3:
Great topic! Human-based social engineering tactics have been used for as long as humans have been on earth.
From the time I was a really young girl I’ve been intrigued with “snake oil” sales persons. Here’s a fun fact: The term “snake oil” originally referred to real traditional medicine from China made from the oil of Chinese water snakes, which is rich in omega-3 fatty acids. Railroad workers from China in the U.S. during the mid-1800s used those genuine snake oil liniments for joint pain. Early US frontier entrepreneurs saw an opportunity! However, most couldn’t import the real thing. So, they sold something else that they called snake oil, deceiving those who purchased it. Thus, the often-used name for any type of deceit involving such trickery: snake oil sales person. All types of social engineering are, at their core, built upon deceit.
Social engineering attacks made using non-computer-based methods are more widely used now than ever before. In business, they are often carried out by malicious insiders who exploit their authorized access to personal data, intellectual property, and other types of valuable business assets. If they took the job intending to commit such crimes, their success often involved social engineering just to get hired.
Most classic tactics are still being used today. Here are just a few examples:
1) Postal / mail-redirection / card-replacement impersonation. A social engineering crook impersonated Microsoft cofounder Paul Allen and tricked Citibank into sending a replacement debit card to the crook’s address. The crook then accessed Paul Allen’s funds and soon made $15,936 in charges. Ultimately, all but $658 were returned to Allen.
2) Dumpster-diving for medical records and other sensitive documents in trash. Multiple news investigations have uncovered un-shredded patient files, insurance forms, and other records in company dumpsters, including many cases where business associates had improperly discarded personal data, leading to HIPAA and other regulatory inquiries, fines, and large-scale exposure of PHI and other types of personal data.
3) Shoulder-surfing at ATMs to see and document card numbers and PINs. Criminals loiter near ATMs and covertly observe PIN entry; such crooks have been convicted after using observed PINs to withdraw cash. This low-tech social engineering technique has directly led to many thefts, privacy breaches, and large financial losses.
4) Third-party telephone-record sellers and private investigators: U.S. authorities have prosecuted many networks of private investigators who used pretexting to obtain phone, tax, and financial records for sale. These operations created widespread privacy breaches for huge numbers of individuals.
5) Pretending to be someone who belongs within an organization. The impersonators then enter areas that are off-limits to non-employees and obtain data, intellectual property, or other types of assets. This is still a tactic commonly and successfully used by social engineering crooks, and by people who simply use such tactics to get hired. Possibly the most famous example of the latter, inserting himself into an environment where he wanted to work, is that of the actor Steve Guttenberg. As he has told the tale multiple times, when he decided to become an actor, he moved to Los Angeles, snuck onto the Paramount Studios lot, set up his own office, and started making (business landline) phone calls to agents and producers. Soon, he began landing auditions, which led to breakthrough roles in films such as The Boys from Brazil, Diner, and Police Academy.
Q4: Have you seen any privacy policy/notice updates lately for widely used websites?
A4:
Yes. Many have updated their privacy notices, including a number of healthcare industry organizations as well as organizations in other industries. Organizations subject to new U.S. privacy laws requiring privacy notice updates, such as the New Hampshire data privacy law (effective January 1, 2025) and new HIPAA requirements (effective February 16, 2026), recently have made updates, or should be in the process of making them.
It is good to see many different organizations keeping their website privacy notices updated. Doing so on an ongoing basis is necessary not only to comply with new legal requirements, but also to reflect the organization’s actual practices for how it collects, derives, uses, shares, processes, stores, and otherwise accesses any type of data that can be attributed to specific individuals in some way.
Q5: We recently experienced a social engineering attempt using a malicious AI deepfake. It sure looked like a dead ringer for our CEO! We are happy that we had recently provided training, which included one of your articles about deepfake tactics, that raised the awareness that prevented it. Do you have any more recommendations for protecting against AI deepfake social engineering attacks?
A5:
The fact is that AI algorithms and models can now generate hyper-realistic voices, faces, and whole-body views of real people. They can also generate text styles that mimic real individuals. All of these hyper-realistic impersonations can often be accomplished using minimal data.
This creates risks of unauthorized use of personal likenesses (deepfakes, voice clones, AI “avatars”), and of identity blending, where real and synthetic features are mixed together, making verification nearly impossible. Add to these challenges persistent “digital ghosts”: AI versions of people that persist online even after consent is withdrawn or the person dies.
Here are just a few of many possible ways to protect against AI deepfake social engineering attacks by increasing awareness and changing personal habits. This is increasingly referenced as strengthening “cognitive firewalls.” Even if your organization has the best and/or most expensive security tools, they cannot protect the organization, workforce members, or customers/patients/etc., if the humans involved cannot recognize the signs of such impersonations. Teach your workforce members, and customers/etc. as applicable, the following actions:
o Be skeptical by default. Treat every unexpected message, video, audio clip, or other communication as a potential impersonation until it is verified not to be (e.g., call back the purported individual on a known number or confirm in person).
o Don’t rush to fulfill high-urgency requests. AI scammers rely on emotion and time pressure. Even taking just 30 seconds of pause and consideration can prevent an AI cybercrook from being successful.
o Develop a “digital signature of trust.” Agree on shared code words or verification questions with family, close friends, co-workers, and customers as applicable, for emergencies or quick verification needs. (A sketch following this list shows one way to strengthen this practice.)
o Audit the organization’s public footprint. Limit or obfuscate high-fidelity personal data (voice clips, audio such as on podcasts, long videos, handwriting samples, webinars, news interviews, etc.) that can be used for cloning.
o Educate the organization’s workforce members and customers/patients/etc., continually. Share information (such as the monthly Tips messages) with them, and follow credible security and privacy resources to recognize new AI impersonation patterns.
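For readers who want to go a step beyond spoken code words, here is a minimal, illustrative Python sketch of the same idea in cryptographic form: a challenge-response check using an HMAC over a secret agreed upon in person. The secret value and function names here are hypothetical, and real deployments would manage and rotate secrets carefully; the point is that a voice clone can mimic a voice, but it cannot produce a proof based on a secret it never had.

import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Verifier generates a fresh random nonce for each verification request."""
    return secrets.token_hex(16)

def respond(shared_secret: bytes, challenge: str) -> str:
    """Responder proves knowledge of the secret without revealing it."""
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    """Verifier recomputes the HMAC and compares in constant time."""
    expected = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    secret = b"agreed-in-person-secret"   # hypothetical; never sent over the channel
    challenge = make_challenge()          # e.g., texted to the caller
    answer = respond(secret, challenge)   # caller's device computes the response
    print(verify(secret, challenge, answer))  # True only with the real secret

In practice, the verifier sends the challenge over the suspect channel, and the real person’s device, holding the shared secret, computes the matching response; an impersonator with only a cloned voice or face cannot.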
Everyone in the organization, and those who have relationships with the organization, must always keep in mind that increasingly more posts on social media sites are not being moderated for accuracy and, in fact, increasingly more social media posts are false. Consult reputable news outlets to confirm whether what you see on social media sites is true.
Q6: What types of data are included within our digital identities? From what sources is our data being collected to create our digital identities?
A6:
I love these questions! It is so important that not only the general public, but also business decision makers, lawmakers, healthcare leaders, educators, and others who actually use and depend upon digital identities, understand the answers. An important point to make is that digital identities are much more than just a user ID and password, a PIN, or a social security number. Digital identities are profiles that reveal the characteristics, likes, dislikes, locations, and everything else about the life of an individual that can be collected and placed into the continuously growing profile of each individual.
Here is an overview listing, with general information types provided (a simple schema sketch follows this list):
- Personal data: Also called, depending on the industry or use, protected health information (PHI), personally identifiable information (PII), and a vast array of other names. This is data such as name, birth date, government-issued ID numbers (passport, driver’s license, social security number), email addresses, physical addresses, phone numbers, biometric data (fingerprints, facial recognition, iris scans, voice patterns, DNA, tattoos, other unique biometric markers), and more.
- Authentication credentials (login Information): Usernames and identifiers, passwords, PIN codes, security tokens, cryptographic keys, and digital certificates, etc.
- Account and device identifiers: IP addresses, MAC addresses, device fingerprints, unique mobile device IDs, medical device identifiers, wearable “smart” devices (e.g., fitness trackers, smart watches), etc.
- Behavioral and activity data: Online and other types of search histories, purchase records (online, with credit cards, other digital payments, etc.), browsing activity, photos and videos viewed, geolocation data, use of specific apps and associated data (date, time, location, purchases, images and videos viewed, etc.), engagement on social platforms (likes, comments, shares, chat content, comments, etc.), communication metadata, etc.
- Financial Information: Bank account numbers, transaction histories, payment card details, payment app info, etc.
- Other accumulated or assigned data: Health data and records, academic credentials, digital badges, reputation scores, relationship data among people, companies, or devices, public records (career info, lawsuits, residences, credit ratings, arrest records, etc.), etc.
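To reinforce the point that a digital identity is a growing profile rather than a single credential, here is a minimal, hypothetical Python sketch of such a profile as a data structure. The class and field names are illustrative assumptions, not any real data broker’s schema; the takeaway is that each category above becomes yet another set of fields attached to the same individual.

from dataclasses import dataclass, field

# Hypothetical, simplified schema: each field family corresponds to one
# category from the list above. Real profiles are far larger and are
# assembled by many parties, often without the individual's knowledge.
@dataclass
class DigitalIdentityProfile:
    personal_data: dict[str, str] = field(default_factory=dict)    # name, birth date, IDs
    credentials: dict[str, str] = field(default_factory=dict)      # usernames, tokens
    device_identifiers: list[str] = field(default_factory=list)    # IP/MAC, device IDs
    behavioral_data: list[str] = field(default_factory=list)       # searches, purchases
    financial_data: dict[str, str] = field(default_factory=dict)   # accounts, cards
    other_data: dict[str, str] = field(default_factory=dict)       # health, reputation

if __name__ == "__main__":
    profile = DigitalIdentityProfile()
    profile.personal_data["name"] = "Pat Example"
    profile.device_identifiers.append("device-fingerprint-abc123")
    profile.behavioral_data.append("searched: knee pain treatment")
    print(profile)  # every field added is one more facet of the same individual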
Sources of all this data collection for digital identities include, but are not limited to:
- Direct user input. E.g., when signing up for services or filling forms (name, birthdate, email, etc.), manual entry by others overhearing or seeing information about specific individuals
- Device interactions: E.g., through login devices, used applications, and internet-connected services (IP addresses, device IDs)
- Online activities: E.g., browsing, app use, searching, making purchases, engaging on social networks, access through “smart” IoT devices, subscribing to content
- Biometric capture devices: E.g., scanners and sensors capturing physical and behavioral biometrics during authentication processes, medical devices, consumer health monitoring devices, smart IoT clothing and other types of devices
- Government and official records: E.g., for legal, regulatory, financial, or healthcare systems, including digital driver's licenses and government credentials
- Cloud and IoT environments: E.g., application interactions, machine identities, behavioral context from cloud and device ecosystems
- AI generated: E.g., photos, videos, audio
There is an unlimited number of sources, and types of data, that are increasingly being collected to establish digital identities for most individuals. It is important to understand that basically anywhere any form of information is found can be a source of data that is collected and subsequently included in an individual’s digital identity. This includes information overheard at airport boarding gates and in restaurants, on elevators, in shopping malls, on buses and trains…literally anywhere! This is why keeping the public, and workforce members, constantly aware of security and privacy risks, and of established policies and associated procedures and protections, is so important.
Send us any questions you have. And, keep reading the monthly Privacy Professor Tips!
We are also excited to provide ways for MSPs, law firms, and other professional services organizations to offer our monthly tips to their clients! It is already working well for some such organizations. Get in touch with us for the details!
Here are some security and privacy gifts for you to consider in our 11-page “Privacy and Security Gifts” guide.
What topics would you like to see us create videos, and more formal online courses, for? Let us know!
Have questions about our education offerings? Contact us!
Where to Find The Privacy Professor
Rebecca is happy to be teaching Cybersecurity & Privacy Basics for Engineers and Technical Professionals. Online / Jan 30, 2026 / Course Code: 0105-WEB26. Time: 12:00 PM - 2:00 PM Eastern Time. Check it out!
Permission to Share
If you would like to share, please forward the Tips message in its entirety. You can share excerpts as well, with the following attribution:
Source: Rebecca Herold. November 2025 Privacy Professor Tips
www.privacysecuritybrainiacs.com.
NOTE: Permission for excerpts does not extend to images.
Privacy Notice & Communication Information
You are receiving this Privacy Professor Tips message as a result of:
1) subscribing through PrivacyGuidance.com or PrivacySecurityBrainiacs.com or
2) making a request directly to Rebecca Herold or
3) connecting with Rebecca Herold on LinkedIn.
When LinkedIn users invite Rebecca Herold to connect with them, she sends a direct message when accepting their invitation. That message states that in the spirit of networking and in support of the communications that are encouraged by LinkedIn, she will send those asking her to link with them her monthly Tips messages. If they do not want to receive the Tips messages, the new LinkedIn connections are invited to let Rebecca know by responding to that LinkedIn message or contacting her at rebeccaherold@rebeccaherold.com.
If you wish to unsubscribe, just click the SafeUnsubscribe link below.