Unwise Use of Smart Technology


In today's society, speed and convenience rule. It's why we love our so-called 'smart' devices. Whether it's a doorbell that decides whether or not to unlock the door based on facial recognition... or a refrigerator that orders groceries based on your health and diet... we love the promise of everyday tasks becoming faster, easier or going away altogether. But at what cost?

Smart devices that are always on, always connected and always sharing may be putting us at some very sizable privacy and security risks. It's up to us to ensure we understand how our private data is being captured, analyzed and distributed. 

Automation is great, but how much control are we willing to give up? How do we evaluate which devices are as smart as they claim to be? Which ones are making decisions based on biased or flawed logic, or inaccurate data?

We must be as smart as our devices should be if we're going to maintain our privacy and security. Read on to learn about the everyday threats posed by unwise use of smart technology.

  
Data Security & Privacy Beacons
People and places making a difference**

Have you seen an organization or individual taking actions to improve privacy? Send me a note to nominate a privacy beacon of your own!

The Social Security Administration is distributing email reminders to consumers to check their Social Security Statement online. The statement can be helpful not only for future financial planning but also to ensure no one has fraudulently commandeered your identity. The other piece of this I liked was the SSA's use of two-factor authentication for users of its online system.  Drop me a note if you'd like me to forward my copy of the reminder to you. It's a nice template you may be able to replicate in your business or agency. 

Bruce Sussman and the SecureWorld team do a great job of providing real-world examples of common cyberattacks. Check out Bruce's recent story on smishing attacks everyone should see. If you're not familiar, smishing is the term applied to phishing attacks that proliferate via text messaging (or SMS). See below for a smishing attack I received just days after reading Bruce's article.

Nick Robins-Early at the HuffPost does similar great work to educate the public on how to spot a cyberthreat. His recent story on clues to fake news offers excellent reporting on the issue. Beyond simply sharing the trend, Nick offers up tips on how to avoid becoming part of the problem by spreading fake news. 

The HackerOne bug bounty program has made it easy and extremely worthwhile for white hat hackers and researchers to report problems they spot within sometimes very popular technology. For example, a white hat recently found a significant security vulnerability with PayPal's login form. Via the HackerOne bug bounty program, he submitted the issue and ultimately received a $15,300 reward for doing so. 

Mozilla has developed a data privacy and security rating system for smart devices called Privacy Not Included. Interestingly, the devices Mozilla rates as "super creepy" are some of the most widely used items! The ratings are easy to understand and very comprehensive. If the company's researchers have not already analyzed a device, users can submit it for review. 

**Privacy beacon shout-outs do not necessarily indicate an organization or person is addressing every privacy protection perfectly throughout their organization (no one is). They simply highlight noteworthy examples that are, in most cases, worth emulating.
Too Smart for Privacy Protection?
Smart devices methodically collect personal data
  

By the end of 2020, there are expected to be more than 20 billion internet-connected devices. That's billion with a "B"!

With so many connected devices in our homes, offices, retail centers, parks, cars, you name it, there's a massive amount of personal information floating around. It raises a lot of questions, like:
  • How much of it is protected from prying eyes (like the four Ring employees fired for inappropriately accessing customer video footage)? 
  • With whom are device vendors sharing information (like the third-party trackers embedded within Ring)? 
  • How well is data protected from the criminal element? 
  • If you are a smart device engineer, business or vendor, what are you doing to control and protect all that highly impactful data?
Many of us in the data privacy and security community are concerned smart devices, or rather the developers of smart devices, may consider their technology too smart for privacy protection. Collecting troves of sensitive information brings with it the responsibility of protecting it. It's one thing if a technology provider has the data; it's quite another if the technology provider has built something that allows an everyday user to become the keeper of our data. Just consider smart doorbell video, or should I say, surveillance  footage. 

No laughing matter
 
Imagine you slip on ice while out walking the dog, and it's all caught on a neighbor's doorbell camera. Depending on your relationship with the camera's owner, it may make for a good laugh, and that's the extent of it. What if, however, your slip gets uploaded to YouTube and goes viral, turning what otherwise may have been a mildly embarrassing moment into a worldwide sensation? Things get even scarier if you imagine your video being used for other purposes, like a denied insurance claim or job offer... or to track your location.

Now imagine you're walking down the street at the precise time a crime occurs. You have zero to do with it, but you were caught on camera approaching the scene at exactly the wrong moment. Video taken out of context can create big-time problems for people.

Then there are the run-of-the-mill technology mix-ups, like the one reported by a Google Home Hub user. When the user loaded a smart home camera, a still image from a different user's home popped up on the screen. Why wasn't the application code engineered to prevent this, or tested to ensure something like this could not happen?

It's really impossible to say how video, information and all kinds of other data we willingly give up today might be used against us in the future.

My 2020 Experiment

Those concerns and others haven't stopped us from purchasing these devices. I imagine this is because most people don't understand the extent, nor the ramifications, of the potential data exposure. It may take a major data breach to catch users' attention. Or, some really good research and communication. 

The need for more education around smart devices has inspired one of my 2020 projects. 

This month, I am beginning a year-long experiment with the Echo Dot I purchased for Christmas. I'll be putting Alexa through a variety of tests, speaking certain "out of character" phrases and seeing the impact. Will it lead to targeted phone calls or ads? There's no telling how this information will be used. I anticipate it will be an enlightening experience to say the least, and I look forward to sharing the results with you. 

Do you have suggestions for words or phrases I should try? Let me know! (NOTE: I appreciate all suggestions, but I will not say things that could possibly get me arrested or that could otherwise lead to harmful actions for me and my family.) 

Ask Yourself: Why do they need this?

Before you purchase or use a smart device, take a moment to understand what data is being collected from you. Typically, you can find this information in a provider's privacy policy or privacy notice. You can also reach out to the company directly.

Once you have that information, ask yourself why the device needs that data. If you can't come up with a viable reason, consider whether the speed, convenience or fun is worth the privacy trade-off.

For example, Wyze recently reported it had leaked the personal data of 2.4 million security camera customers. The data included health information, such as bone density. Wait a minute! Why would a home security system need that kind of health data?

Facebook Attempts to Thwart Deep Fake Videos
Do new rules go far enough?

Facebook has said it will ban many types of misleading videos  from its site. The new rules are a push against deep fake content and online disinformation campaigns. 

How effective the social media giant's rules will be remains to be seen. 

Not all manipulated video will be banned

It's important to recognize these rules are designed to remove deep fake videos only. That leaves shallow fakes free to roam the social network. These are videos manipulated to a lesser degree, but they still blur the line between truth and lies. An example of a shallow fake is a video slowed down to make the subject appear intoxicated.

What's more, Facebook said it will allow video manipulation in parodies and satire. It will also allow clips edited to cut out or change the order of words. There was also some confusion about how Facebook intends to review political ads. 

All in all, Facebook's rules appear to be highly subjective.
 
Although deep fake videos are rare today, they're becoming more prevalent. It's likely we'll also see them become increasingly sophisticated, and therefore, harder to detect (both for the average user and Facebook). 

Pros and cons of Facebook's ban

A big positive of Facebook's announcement is the heightened awareness it likely generated around the existence of fake videos. However, the announcement may have one very serious consequence: Facebook users may now have a false sense of security around the validity of videos on the social network. They may believe, to a greater degree, in what they view on Facebook, assuming it's been appropriately vetted for manipulation.

It's so important to remember (and to educate our children on the fact that) Facebook and its social media cohorts are not news outlets. Although they're often treated as legitimate sources for news, they are platforms for crowd-sourced content that do not follow journalistic practices. Just look at the craziness spreading around the coronavirus.

Take your own precautions when using social media sites for news. Verify the validity of everything you see, video or otherwise, before believing (or sharing!) the content. 


Migrate & Patch
Microsoft Windows vulnerabilities making headlines

Starting in January 2020, Microsoft is putting all its support and security eggs in the Windows 10 basket. That means anyone running Windows 7 or Windows Server 2008/R2 will be out of luck should they need service or support. Worse, they will be running operating systems that will NOT be patched when new exploits are discovered. Most of these entities will be migrating to Windows 10 if they haven't already.

But that's not to say Windows 10 is without its issues.

In fact, the U.S. National Security Agency (NSA) recently revealed it had found a serious vulnerability in Windows 10 and Windows Server 2016. A built-in security feature that verifies a system is downloading software legitimately from Microsoft was flawed.

As one cryptography expert put it, "This is bad."

The reason such a vulnerability is so serious is that it can allow cybercriminals and attack bots to develop exploits that appear to come from Microsoft but are actually malware that can take control of the system.

It's exactly this kind of issue that makes 'smart' devices so scary. Cyberattackers are fast innovators. They develop new ways to break in and take over systems that give them access to all kinds of valuable data. If a company the size and sophistication of Microsoft can be found vulnerable, certainly the makers of smartwatches, smart home security systems and smart appliances can be, too!
The Dark Side of Targeted Advertising
YouTube is  limiting the data it collects from videos targeting young viewers. It's part of a larger effort to comply with rules established by the  1998 Children's Online Privacy Protection Act (COPPA) and others enforced by the FTC. 

Compliance with the rules may be a higher priority following a recent $170 million fine levied against YouTube and Google for  allegedly violating COPPA , which requires companies to get parental consent before collecting and sharing data on children under age 13.

The internet's second-most visited website, with more than 2 billion viewers, YouTube now requires content creators to designate videos by audience. Creators who do not properly label children's videos will be subject to FTC sanctions. 

Reportedly, YouTube has warned video creators to expect less revenue now that data will be limited, and as a result, advertisements will be less targeted.  In addition,  some features like comments, live chat and notifications will not be available on videos marked specifically for children.

The harm in gathering kids' data

If you've ever found yourself "forced" into buying a toy or a snack, you've likely experienced firsthand how susceptible children can be to advertising. However, in today's online world, this can have a much larger impact. Data collected for the purposes of advertising can be used for a whole lot more. Not the least of these uses is to create consumer profiles starting at a very young age. 

Targeted advertising for adults has its place. Some people enjoy ads in line with their actual interests and behaviors. However, children are in no position to say yes or no to data collection, nor to understand the potential consequences. YouTube's recent changes highlight the need for increased data privacy protection across generations... but especially for children who are highly susceptible to, and also largely unaware of, the risks of collection and sharing of data.

The future of COPPA

COPPA is important legislation and has certainly made a difference in the digital lives of kids in the U.S. However, aspects of it are significantly out of date (it was last amended in 2013). With so many changes taking place and so few consumers reading privacy policies thoroughly, there's been talk of revising COPPA. This is one data security and privacy expert who would support that action wholeheartedly.

Five Privacy Problems with 'Smart' Devices
Tech developers can reshape each one of these issues
From my research, I've found that the privacy breaches and data security incidents that occur within the Internet of Things (IoT) 'smart' device sector are generally caused by one of five issues. Those of you leading technology development projects have a massive opportunity to effect change in these areas:

For the full article, visit ISACA.

Lack of built-in security controls: Four security and privacy features should be built into every smart device, and importantly, enabled by default:
  1. Strong encryption for data in storage and in transit
  2. Multi-factor authentication
  3. Activity logging
  4. Device management user interfaces
Oversharing of user data:  Data is widely shared across the provider's business units and also with third parties. Users would be shocked to learn about these parties, which include entities like government agencies, insurance companies, law enforcement, data aggregators and many others.

Listening is turned on by default: Some devices, such as smart speakers, are listening all the time and keeping recordings of what they capture. Many providers also have large teams of people who listen to user conversations. 

Devices are open and accessible: A large number of popular smart devices have no authentication or encryption and can easily be found through tools such as Shodan. This allows attackers to establish direct connections to these devices while bypassing any firewall restrictions.

Horrible or non-existent privacy policies:  As long as smart device providers and app developers go unpenalized for substandard privacy notices and failing to do what their notices promise, they will continue their privacy-poor practices.

My hope for 2020 is to find at least 10 smart devices from 10 unique providers that address each of these five privacy problems. The time is long overdue for the vulnerabilities in the billions of IoT devices to be fixed.

UPDATE: I am thrilled to be part of the NIST IoT research and recommendations development team, which is working hard to eradicate problems like those described above.

Fresh Phish
FedEx look-alike text offers plenty of red flags  
 

As more people text to communicate, phishing attempts are migrating to the SMS channel, earning themselves the moniker 'smishing.'

I received this one below the other day. (And, I'm not alone. Quite a few news outlets began reporting this scam shortly after I received my smish.)

See if you can spot the red flags that indicate this is not the communication from FedEx it claims to be...


Did you spot the red flags?
  • FEDEX in all caps is not how the company represents its brand. It's typically FedEx. (FedEx says scammers fraudulently use its name and logo all the time.)
  • "Hello mate," is a seldom-used greeting in the U.S. (not to mention, the punctuation is incorrect).
  • One flag you would not have known is that I wasn't expecting any packages. However, that's often how these scammers get you to click. Confusion is a common tool used by cybercriminals. 
What to do... 

It's easy to fall for a widespread smish like this. Just be sure never to click on a suspicious link. Find another way to reach the provider and ask questions through a legitimate, verified channel, such as an https website, a telephone number found on that site, or a chat/text option also found on the site. 
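For the more technically inclined, the advice above can even be sketched in code. This short Python snippet (a minimal illustration; the helper name and example URLs are hypothetical, and a matching domain alone never proves a link is safe) checks whether a link's hostname actually belongs to a brand's known domain, rather than merely containing the brand name somewhere in the address:

```python
from urllib.parse import urlparse

def belongs_to_domain(url: str, trusted_domain: str) -> bool:
    """Illustrative check: does this link's host match, or sit under,
    the brand's known domain? (A match alone does NOT guarantee safety.)"""
    host = (urlparse(url).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)

# A genuine tracking page sits under the brand's real domain...
print(belongs_to_domain("https://www.fedex.com/track", "fedex.com"))            # True
# ...while a typical smishing link only *contains* the brand name.
print(belongs_to_domain("https://fedex.com.delivery-update.info/x", "fedex.com"))  # False
```

Notice the second link starts with "fedex.com" yet actually points at "delivery-update.info"; that mismatch between what a URL appears to say and where it really goes is exactly what smishers count on.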
Where to Find the Privacy Professor  
  
 

On the air... 

HAVE YOU LISTENED YET? 

Do you have an information security, privacy or other IT expert or luminary you'd like to hear interviewed on the show? Or, a specific topic you'd like to learn more about? Please let me know!

I'd also love for your organization to be a sponsor! Shoot me an email and I'll send you more details.

All episodes are available for on-demand listening on the VoiceAmerica site, as well as iTunes, Mobile Play, Stitcher, TuneIn, CastBox, Player.fm, iHeart Radio and similar apps and sites. 

Some of the many topics we've addressed... 
  • student privacy
  • identity theft
  • medical cannabis patient privacy
  • children's online privacy and safety  
  • applications and systems security
  • cybercrime prosecutions and evidence
  • government surveillance
  • swatting 
  • GDPR
  • career advice for cybersecurity, privacy and IT professions
  • voting / elections security (a series)
Please check out some of my recorded episodes. You can view a complete listing of shows to date, grouped by topic. After you listen, let me know what you think! I truly do use what I hear from listeners.

SPONSORSHIP OPPORTUNITIES: Are you interested in being a sponsor or advertiser for my show? It's quickly growing with a large number of listeners worldwide. Please get in touch! There are many visual, audio and video possibilities.

We have current sponsorship openings in three of the four weeks' shows each month. If your organization wants to sponsor one show each month, I will cover topics  related to your organization's business services and/or products.


In the news... 


Advertising Now Available!

Tips of the Month is now open to sponsors. If you're interested in reaching our readers (maybe you have an exciting new privacy product or service or an annual event just around the corner), the Tips email may be just the thing to help you communicate to more people! 

We have a variety of advertising packages to meet every budget. 


3 Ways to Show Some Love

The Privacy Professor Monthly Tips is a passion of mine and something I've offered readers all over the world since 2007 (time really flies!). If you love receiving your copy each month, consider taking a few moments to...

1) Tell a friend! The more readers who subscribe, the more awareness we cultivate.

2) Offer a free-will subscription! There are time and hard-dollar costs to producing the Tips each month, and every little bit helps. 

3) Share the content. All of the info in this email is sharable (I'd just ask that you follow the attribution guidelines in the Permission to Share section below).

 
 
We live in a time when machines have advanced far enough to be smarter than humans. But we have all the tools we need to remain on top of the technological food chain. Ingenuity, free will, empathy and a sense of responsibility to one another cannot be replicated.

Although one could argue (and plenty have!) that these qualities can be programmed into a machine, I believe the human will always be the superior being. We just have to remember that, and let our minds, not our machines, guide our behavior.

Here's to a super cyber safe and 'smart' February!

Rebecca
Need Help?


Permission to Share

If you would like to share, please forward the Tips message in its entirety. You can share  excerpts, as well, with the following attribution:

Source: Rebecca Herold. February 2020 Privacy Professor Tips. www.privacyprofessor.com.

NOTE: Permission for excerpts does not extend to images.

Privacy Notice & Communication Info

You are receiving this Privacy Professor Tips message as a result of:

1) subscribing through PrivacyGuidance.com;
2) making a request directly to Rebecca Herold; or 
3) connecting with Rebecca Herold on LinkedIn

When LinkedIn users initiate a connection with Rebecca Herold, she sends a direct message when accepting their invitation. That message states that in the spirit of networking and in support of the encouraged communications by LinkedIn, she will send those asking for LinkedIn connections her Tips message monthly. If they do not want to receive the Tips message, LinkedIn connections are invited to let Rebecca know by responding to that LinkedIn message or contacting her at [email protected]

If you wish to unsubscribe, just click the SafeUnsubscribe link below.
 
 
The Privacy Professor
Rebecca Herold & Associates, LLC
Mobile: 515.491.1564
View our profile on LinkedIn     Follow us on Twitter